

In this post, AYLIEN NLP Research Intern, Mahdi, talks us through a quick experiment he performed on the back of reading an interesting paper on evolution strategies, by Tim Salimans, Jonathan Ho, Xi Chen and Ilya Sutskever.

Having recently read Evolution Strategies as a Scalable Alternative to Reinforcement Learning, Mahdi wanted to run an experiment of his own using Evolution Strategies. Flappy Bird has always been among Mahdi’s favorites when it comes to game experiments, and as a simple yet challenging game it was the obvious place to put theory into practice.

Training Process

The model is trained using Evolution Strategies, which in simple terms works like this:

  1. Create a random, initial brain for the bird (this is the neural network, with 300 neurons in our case)
  2. At every epoch, create a batch of modifications to the bird’s brain (also called “mutations”)
  3. Play the game using each modified brain and calculate the final reward
  4. Update the brain by pushing it towards the mutated brains, proportionate to their relative success in the batch (the more reward a brain has been able to collect during a game, the more it contributes to the update)
  5. Repeat steps 2-4 until a local maximum for rewards is reached.
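The loop above can be sketched in a few lines of NumPy. The toy reward function below stands in for an actual game rollout, and the network size, population size, noise scale and learning rate are illustrative choices rather than the values from Mahdi’s repository:

```python
import numpy as np

N_PARAMS = 300   # size of the bird's "brain" (step 1)
POP_SIZE = 50    # mutations per epoch (step 2)
SIGMA = 0.1      # std-dev of the Gaussian mutations
ALPHA = 0.03     # learning rate for the update (step 4)

def play_game(params):
    """Stand-in for a Flappy Bird rollout: returns an episode reward.
    Here we simply reward closeness to an arbitrary target vector."""
    target = np.ones(N_PARAMS)
    return -np.sum((params - target) ** 2)

rng = np.random.default_rng(0)
theta = rng.standard_normal(N_PARAMS)   # step 1: random initial brain

for epoch in range(200):
    # step 2: a batch of mutated brains
    noise = rng.standard_normal((POP_SIZE, N_PARAMS))
    # step 3: play the game with each mutation and record the reward
    rewards = np.array([play_game(theta + SIGMA * eps) for eps in noise])
    # step 4: normalize rewards and push theta toward the successful
    # mutations, weighted by their relative success in the batch
    advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta += ALPHA / (POP_SIZE * SIGMA) * noise.T @ advantage
```

Note that no gradients are computed anywhere: the update only needs the final reward of each mutated brain, which is what makes the method so cheap per epoch.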

At the beginning of training, the bird usually either drops too low or jumps too high and hits one of the boundary walls, losing immediately with a score of zero. Scores of zero in training would mean there is no measure of success to compare brains by, so Mahdi awarded a small score of 0.1 for every frame the bird stays alive. This way the bird first learns to avoid dying. He then added a score of 10 for passing each wall, so the bird tries not only to stay alive, but to pass as many walls as possible.
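The reward shaping just described fits in one line; the frame and wall counts would come from the game loop:

```python
# Reward shaping from the post: 0.1 per frame survived, 10 per wall passed.
def episode_reward(frames_survived, walls_passed):
    return 0.1 * frames_survived + 10 * walls_passed
```

A bird that survives even a few frames now scores higher than one that dies instantly, so every brain in a batch gets a comparable, non-zero measure of success.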

The training process is quite fast as there is no need for backpropagation, and it is not very costly in terms of memory either, since there is no need to record actions as there is in policy gradient methods.

The model learns to play pretty well after 3000 epochs; however, it is not completely flawless and occasionally loses in difficult cases, such as when there is a large height difference between two consecutive wall openings.

Here is a demonstration of the model after 3000 epochs (~5 minutes of training on an Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz):


Web version

For ease of access, Mahdi has created a web version of the experiment which can be accessed here.

Try it yourself

Note: You need python3 and pip for installing and running the code.

First, download or clone the repository:

git clone

Next, install dependencies (you may want to create a virtualenv):

pip install -r requirements.txt

The pretrained parameters are in a file named load.npy and will be loaded when you run the demo; running the training script will train the model and save the parameters to saves/<TIMESTAMP>/save-<ITERATION>. The demo shows the game in a GTK window so you can see how the AI actually plays. If you feel like playing the game yourself, press space to jump; once you lose, press enter to play again.


It seems that training past a maximum point leads to a reduction in performance. Learning rate decay might help with this. Mahdi’s interpretation is that after finding a local maximum for accumulated reward and being able to receive high rewards, the updates become quite large and pull the model strongly in different directions, so the model enters a state of oscillation.
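A minimal sketch of the suggested mitigation; the inverse-time schedule and the constants here are illustrative choices, not values from the repository:

```python
# Inverse-time learning-rate decay: later updates shrink, which damps the
# oscillation around a reward maximum described above.
def decayed_lr(base_lr, epoch, decay=0.001):
    return base_lr / (1.0 + decay * epoch)
```

Scaling each parameter update by decayed_lr(base_lr, epoch) instead of a fixed rate makes the steps progressively smaller as training approaches convergence.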

To try this yourself, rename the long.npy file to load.npy (back up load.npy before doing so) and run the demo; you will see the bird failing more often than not. long.npy was trained for only 100 more epochs than load.npy.



With our News API, our goal is to make the world’s news content easier to collect, monitor and query, just like a database. We leverage Machine Learning and Natural Language Processing to process, normalize and analyze this content, making it easier for our users to access rich, high-quality metadata and use powerful filtering capabilities that ultimately help them find precise and targeted stories with ease.

To this end, we have just launched a cool new feature, Real-time monitoring. Real-time monitoring allows you to further automate your collection and analysis of the world’s news content by creating tailored searches that source and automatically retrieve highly-relevant news stories, as soon as they are published.


You can read more about our latest feature – which is now also available in our News API SDKs – below.

Real-time monitoring

With Real-time monitoring enabled, you can automatically pull stories as they are published, based on your specific search query. Users who rely on having access to the latest stories as soon as they are published, such as news aggregators and news app developers, should find this new feature particularly interesting.

The addition of this powerful new feature will help ensure that your app, webpage or news feed is bang up to date with the latest and most relevant news content, without the need for manual searching and updating.

Newly published stories can be pulled every minute (configurable), and duplicate stories in subsequent searches will be ignored. This ensures you are only getting the most recent publications, rather than a repeat of what has come before.


We have created example code in seven different programming languages to help get you started with Real-time monitoring, each of which can be found in our documentation.

NB: Real-time monitoring will only work when you set the sort_by parameter to published_at and sort_direction to desc.
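As an illustration, a polling loop in Python might look like the sketch below. The endpoint and header names reflect the public News API documentation at the time of writing, but treat them as assumptions and check them against your SDK; the credentials are placeholders.

```python
import json
import urllib.parse
import urllib.request

API_URL = "https://api.aylien.com/news/stories"
HEADERS = {
    # Placeholder credentials -- substitute your own app ID and key.
    "X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
    "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY",
}

def build_params(query):
    # Real-time monitoring requires sort_by=published_at and
    # sort_direction=desc, as noted above.
    return {
        "title": query,
        "sort_by": "published_at",
        "sort_direction": "desc",
        "per_page": 10,
    }

def new_stories(stories, seen_ids):
    """Drop stories already returned by earlier polls (duplicate suppression)."""
    fresh = [s for s in stories if s["id"] not in seen_ids]
    seen_ids.update(s["id"] for s in fresh)
    return fresh

def poll_once(query, seen_ids):
    """One pull of the most recently published stories matching the query."""
    url = API_URL + "?" + urllib.parse.urlencode(build_params(query))
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        stories = json.load(resp).get("stories", [])
    return new_stories(stories, seen_ids)
```

Calling poll_once inside a loop that sleeps for 60 seconds between iterations gives the minute-by-minute behaviour described above, with new_stories suppressing duplicates across subsequent polls.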


The main benefit of this cool new feature is that you can be confident you are receiving the very latest stories and insights, without delay, by creating an automated process that will continue to retrieve relevant content as soon as it is published online. By automating the retrieval of content in real-time, you can cut down on manual input and generate feeds, charts and graphs that will automatically update in real-time.

We hope that you find this new update useful, and we would love to hear any feedback you may have.

To start using our News API for free and query the world’s news content easily, click the image below.


News API - Sign up



Interest in Artificial Intelligence and Machine Learning has seen a significant boom in recent times as the techniques and technologies behind them have quickly emerged from the research labs to the mainstream and into our everyday lives.

AI is helping organizations to automate routine operational tasks that would otherwise need to be performed by employees, often at a steep time and financial cost. By automating high-volume tasks, the need for human input in many areas is being reduced, creating more efficient and cost-effective processes.

Today we’re going to take a look at why we are seeing this rapid increase in interest in the areas of AI and Machine Learning, the key trends emerging, how various industries are leveraging them, and the challenges that lie ahead in a fascinating area with seemingly unlimited potential.

What are the main reasons behind the boom?

The mathematical approaches underlying Machine Learning are not new. In fact, many date back as far as the early 1800s, which raises the question: why are we only now seeing this boom in Machine Learning and AI? The techniques behind these advancements generally require a considerable amount of both data and computational power, both of which continue to become more accessible and affordable to even the smallest of organizations. Significant recent improvements in computational capacity and an ever-expanding glut of accessible data are helping to bring AI and Machine Learning from futuristic fiction to the everyday norm. So much of what we do and touch on a daily basis, whether in work, at home, or at play, contains some form of ML or AI element, even if we are not always aware of it.

We’re seeing this boom now because technological advancements have made it possible. Not only that, organizations are seeing clear and quantifiable evidence that these advancements can help them overcome a variety of operational problems, streamline their processes and enable better decision-making.


Key trends in Machine Learning & AI

Increased volume of data requires more powerful methods of analysis

Analyzing the sheer volume of data that is being generated on a daily basis creates a unique challenge that requires sophisticated, cutting-edge research to solve. As the volume and variety of data sources continue to expand, so too does the need for new methods of analysis, with research focusing on the development of new algorithms and ‘tricks’ to improve performance and enable deeper levels of analysis.

Affordability and accessibility in the cloud

As the level of accessible data continues to grow, and the cost of storing and maintaining it continues to drop, more and more Machine Learning solutions hosting pre-trained models-as-a-service are making it easier and more affordable for organizations to take advantage. Without necessarily needing to hire Machine Learning experts, even the smallest of companies are now just an API call away from retrieving powerful and actionable insights from their data. From a development point of view, this is enabling the quick movement of application prototypes into production, which is spurring the growth of new apps and startups that are now entering and disrupting most markets and industries out there.

Every company is becoming a data company

Regardless of what an organization does or what industry they belong to, data is helping to drive value. Some will be using it to spot trends in performance or markets to help predict and prepare for future outcomes, while others will be using it to personalize their inventory, creating a better user experience and promoting an increased level of engagement with their customers.

Traditionally, organizational decisions have been made based on numerical and/or structured data, as access to relevant unstructured data was either unavailable or simply unattainable. With the explosion of big data in recent times and the improvement in Machine Learning capabilities, huge amounts of unstructured data can now be aggregated and analyzed, enabling a deeper level of insight and analysis which leads to more informed decision-making.

How these trends are being leveraged

Machine Learning techniques are being applied in a wide range of applications to help solve a number of fascinating problems.

Contextualized data for a personalized UX

Today’s ever-connected consumer offers a myriad of opportunities to companies and providers who are willing to go that extra step in providing a personalized user experience. Contextualized experience goes beyond simple personalization, such as knowing where your user is or what they are doing at a certain point in time. Such experience has become a basic expectation – my phone knows my location, my smartwatch knows that I’m running, etc.

There is now a greater expectation among users for a deeper, almost predictive experience with their applications, and Machine Learning is certainly assisting in the quest to meet these expectations. An abundance of available data enables improved features and better machine learning models, generating higher levels of performance and predictability, which ultimately leads to an improved user experience.

“Via Machine Learning, a person’s future actions can be predicted at the individual level with a high degree of confidence. No longer are you viewed as a member of a cohort. Now you are known individually by a computer so that you may be targeted surgically” – John Foreman

Internet of Things

As the rapid increase in devices and applications connected to the Internet of Things continues, the sheer volume of data being generated will continue to grow at an incredible rate. It’s simply not possible for us mere mortals to analyze and understand such quantities of data manually. Machine Learning is helping to aggregate all of this data from countless sources and touchpoints to deliver powerful insights, spot actionable trends and uncover user behavior patterns.

Software and hardware innovations

We are seeing the implementation of AI and Machine Learning capabilities in both software and hardware across pretty much every industry. For example:

Retail buyers are being fed live inventory updates, and in many cases stock is automatically replenished as historical data predicts future stock-level requirements and sales patterns.

Healthcare providers are receiving live updates from patients connected to a variety of devices and again, through Machine Learning of historical data, are predicting potential issues and making key decisions that are helping save lives.

Financial service providers are pinpointing potential instances of fraud, evaluating credit worthiness of applicants, generating sales and marketing campaigns and performing risk analysis, all with the help of Machine Learning and AI-powered software and interfaces.

Every application will soon be an intelligent application

AI and machine learning capabilities are being included in more and more platforms and software, enabling business and IT professionals to take advantage of them, even if they don’t quite know how they work. Similar to the way many of us drive a car without fully understanding what’s going on under the hood, professionals from all walks of life, regardless of their level of education or technical prowess, are more and more beginning to use applications on a daily basis that appear simple and user-friendly on the surface, but are powered in many ways by ML and AI.

Challenges and opportunities going forward

Machine Learning has very quickly progressed from research to mainstream and is helping drive a new era of innovation that still has a long and perhaps uncapped future ahead of it. Companies in today’s digital landscape need to consider how Machine Learning can serve them in creating a competitive advantage within their respective industries.

Despite the significant advancements made in recent years, we are still looking at an industry in its infancy. Here are some of the main challenges and opportunities for AI and Machine Learning going forward:

Security concerns

With such an increase in collected data and connectivity among devices and applications comes the risk of data leaks and security breaches, which may lead to personal information finding its way into the wrong hands and applications facing increased security threats.

Access to resources and data

Previously, only a few big companies had access to the quality and size of datasets required to train production-level AI. However, we’re now seeing startups and even individual researchers coming up with clever ways of collecting training data in cost effective ways. For example, researchers are now using GTA as an environment for training self-driving cars.

The same applies to research. Previously, it was much more difficult for a startup or an individual researcher to get access to ‘Google-level’ tools and resources for conducting AI research. Now, with the proliferation of open-source frameworks and libraries such as Torch, Theano and TensorFlow, and with the openness around publications and sharing research results, we are seeing a more level playing field in AI research across both industry and academia.

Hype vs. reality

There is still somewhat of a disconnect between the potential impact of advancements in AI on our world and how AI is actually being utilized in everyday life. In some cases, technology providers, the media and PR teams overstate what is currently possible within AI and Machine Learning and speculate about what’s next. This can lead to frustration for users of these technologies (consumer or enterprise) when those promises go unfulfilled, and that may cause a backlash at the expense of the entire AI industry.

