
Artificial Intelligence and Machine Learning play a bigger part in our lives today than most people can imagine. We use intelligent services and applications every day that rely heavily on Machine Learning advances. Voice activation services like Siri or Alexa, image recognition services like Snapchat or Google Image Search, and even self-driving cars all rely on the ability of machines to learn and adapt.

If you’re new to Machine Learning, it can be very easy to get bogged down in buzzwords and complex concepts of this dark art. With this in mind, we thought we’d put together a quick introduction to the basics of Machine Learning and how it works.

Note: This post is aimed at newbies – if you know a Bayesian model from a CNN, head on over to the research section of our blog, where you’ll find posts on more advanced subjects.

So what exactly is Machine Learning?

Machine Learning refers to a process that is used to train machines to imitate human intuition – to make decisions without having been told what exactly to do.

Machine Learning is a subfield of computer science, and you’ll find it defined in many ways, but the simplest is probably still Arthur Samuel’s definition from 1959: “Machine Learning gives computers the ability to learn without being explicitly programmed”. Machine Learning explores how programs, or more specifically algorithms, learn from data and make predictions based on it. These algorithms differ from traditional programs by not relying on rigid, hard-coded instructions, but by making data-driven, informed predictions or decisions based on sample training inputs. Its applications in the real world are highly varied, but the one common element is that every Machine Learning program learns from past experience in order to make predictions in the future.

Machine Learning can be used to process massive amounts of data efficiently, as part of a particular task or problem. It relies on specific representations of data, or “features”, in order to recognise something. Just as a person who sees a cat can recognize it from visual features like its shape, its tail length, and its markings, Machine Learning algorithms learn from patterns and features in data they have previously analyzed.

Different types of Machine Learning

There are many types of Machine Learning programs or algorithms. The most common ones can be split into three categories or types:

    1. Supervised Machine Learning
    2. Unsupervised Machine Learning
    3. Reinforcement Learning

1. Supervised Machine Learning

Supervised learning refers to how a Machine Learning application has been trained to recognize patterns and features in data. It is “supervised”, meaning it has been trained or taught using correctly labeled (usually by a human) training data.

The way supervised learning works isn’t too different to how we learn as humans. Think of how you teach a child: when a child sees a dog, you point at it and say “Look! A dog!”. What you’re doing here, essentially, is labelling that animal as a “dog”. Now, it might take a few hundred repetitions, but after a while the child will see another dog somewhere and say “dog” of their own accord. They do this by recognising the features of a dog and associating those features with the label “dog”, and a supervised Machine Learning model works in much the same way.

It’s easily explained using an everyday example that you have certainly come across. Let’s consider how your email provider catches spam. First, the algorithm used is trained on a dataset or list of thousands of examples of emails that are labelled as “Spam” or “Not spam”. This dataset can be referred to as “training data”. The “training data” allows the algorithm to build up a detailed picture of what a Spam email looks like. After this training process, the algorithm should be able to decide what label (Spam or Not spam) should be assigned to future emails based on what it has learned from the training set. This is a common example of a Classification algorithm – a supervised algorithm trained on pre-labeled data.
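To make this concrete, here is a minimal sketch of a spam classifier in Python using scikit-learn. It is not the system your email provider actually runs, and the four training emails are invented, but it shows the supervised pattern: labelled examples go in, and out comes a model that can label new, unseen examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: emails labelled "spam" or "not spam".
emails = [
    "Win a FREE holiday now, click here",
    "Lowest prices on meds, limited offer",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report draft?",
]
labels = ["spam", "not spam"] and ["spam", "spam", "not spam", "not spam"]

# Turn raw text into word-frequency features, then fit a Naive Bayes
# classifier on the labelled examples (the "training data").
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(emails, labels)

# Assign a label to future emails based on what was learned in training.
print(classifier.predict(["Click here for a FREE limited offer"]))   # likely "spam"
print(classifier.predict(["Agenda for tomorrow's meeting attached"]))  # likely "not spam"
```

In practice a real spam filter is trained on thousands of labelled emails rather than four, but the workflow is the same.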

Training a spam classifier

2. Unsupervised Machine Learning

Unsupervised learning takes a different approach. As you can probably gather from the name, unsupervised learning algorithms don’t rely on pre-labeled training data to learn. Instead, they attempt to recognize patterns and structure in the data on their own. These patterns can then be used to make decisions or predictions when new data is introduced to the problem.

Think back to how supervised learning teaches a child to recognise a dog, by showing it what a dog looks like and assigning the label “dog”. Unsupervised learning is the equivalent of leaving the child to their own devices and not telling them the correct word or label to describe the animal. After a while, they would start to recognize that a lot of animals, while similar to each other, have their own characteristics and features, meaning they can be grouped together, cats with cats and dogs with dogs. The child has not been told what the correct label is for a cat or a dog, but based on the features they have identified they can decide to group similar animals together. An unsupervised model works in the same way, identifying features, structure and patterns in data which it uses to group or cluster similar data together.

Amazon’s “customers also bought” feature is a good example of unsupervised learning in action. Millions of people buy different combinations of books on Amazon every day, and these transactions provide a huge amount of data on people’s tastes. An unsupervised learning algorithm analyzes this data to find patterns in these transactions, and returns relevant books as suggestions. As trends change or new books are published, people will buy different combinations of books, and the algorithm will adjust its recommendations accordingly, all without needing help from a human. This is an example of a clustering algorithm – an unsupervised algorithm that learns by identifying common groupings of data.
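As a rough illustration, here is a toy clustering sketch in Python with scikit-learn. The purchase data is invented and Amazon's real recommendation system is far more sophisticated, but it shows the unsupervised pattern: no labels are provided, yet customers with similar tastes end up grouped together.

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up purchase data: each row is a customer, each column a book genre,
# and each value is how many books of that genre the customer bought.
purchases = np.array([
    [5, 0, 1],   # mostly sci-fi
    [4, 1, 0],
    [0, 6, 1],   # mostly cookbooks
    [1, 5, 0],
    [0, 1, 7],   # mostly history
    [1, 0, 6],
])

# KMeans receives no labels; it groups customers purely by similarity.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(purchases)
print(kmeans.labels_)  # e.g. three clusters of two customers each

# A new customer can be assigned to the nearest cluster, and books that
# are popular within that cluster can then be suggested to them.
print(kmeans.predict([[4, 0, 2]]))
```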

Clustering visualization

Supervised Versus Unsupervised Algorithms

Each of these two methods has its own strengths and weaknesses, and which one should be used is dependent on a number of different factors:

    The availability of labelled data to use for training
    Whether the desired outcome is already known
    Whether we have a specific task in mind or we want to make a program for very general use
    Whether the task at hand is resource or time sensitive

Put simply, supervised learning is excellent at tasks where there is a degree of certainty about the potential outcomes, whereas unsupervised learning thrives in situations where the context is more unknown.

In the case of supervised learning algorithms, the range of problems they can solve can be constrained by their reliance on training data, which is often difficult or expensive to obtain. In addition, a supervised algorithm can usually only be used in the context you trained it for. Imagine a food classifier that has only been trained on pictures of hot dogs – sure it might do an excellent job at recognising hotdogs in images, but when it’s shown an image of a pizza all it knows is that that image doesn’t contain a hotdog.


The limits of supervised learning – HBO’s Silicon Valley


Unsupervised learning approaches also have many drawbacks: they are more complex, they need much more computational power, and they are not yet nearly as well understood theoretically as supervised learning. However, they have recently been at the center of ML research and are often referred to as the next frontier in AI. Unsupervised learning gives machines the ability to learn by themselves and to extract information about the context you put them in, which, essentially, is the core challenge of Artificial Intelligence. Compared with supervised learning, unsupervised learning offers a way to teach machines something resembling common sense.

3. Reinforcement Learning

Reinforcement learning is the third approach that you’ll most commonly come across. A reinforcement learning program tries to teach itself accuracy in a task by continually receiving feedback from its surroundings, and continually updating its behaviour based on this feedback. Reinforcement learning allows machines to automatically decide how to behave in a particular environment in order to maximize performance, based on “reward” feedback or a reinforcement signal. This approach can only be used in an environment where the program can take signals from its surroundings as positive or negative feedback.

Reinforcement Learning in action


Imagine you’re programming a self-driving car to teach itself to become better at driving. You would program it to understand that certain actions – going off the road, for example – are bad, by providing negative feedback as a reinforcement signal. The car will then look at data from situations where it went off the road before, and try to avoid similar outcomes. For instance, if the car learns that not slowing down at a corner makes it more likely to end up off the road, while slowing down makes that outcome less likely, it will slow down at corners more.
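The sketch below is a drastically simplified, single-decision version of that idea in Python, closer to a toy bandit problem than a real self-driving system. The state, actions, and reward numbers are all invented; the point is only to show the loop of acting, receiving a reward signal from the environment, and updating behaviour.

```python
import random

# Toy reward-feedback loop: one situation ("approaching a corner") and two actions.
actions = ["slow_down", "keep_speed"]
values = {a: 0.0 for a in actions}   # the agent's running estimate of each action's value
learning_rate = 0.1

def reward(action):
    # Hypothetical environment: keeping speed at a corner usually ends with the
    # car off the road (large negative reward); slowing down earns a small reward.
    if action == "keep_speed":
        return -10 if random.random() < 0.8 else 1
    return 1

for episode in range(1000):
    action = random.choice(actions)        # explore: try an action
    r = reward(action)                      # observe the reinforcement signal
    values[action] += learning_rate * (r - values[action])  # update the estimate

print(values)  # "slow_down" ends up with the higher value,
               # so the learned policy is to slow down at corners
```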

Conclusion

So this concludes our introduction to the basics of Machine Learning. We hope it provides you with some grounding as you try to get familiar with some of the more advanced concepts of Machine Learning. If you’re interested in Natural Language Processing and how Machine Learning is used in NLP specifically, keep an eye on our blog, as we’re going to cover how Machine Learning has been applied to the field. If you want to read some in-depth posts on Machine Learning, Deep Learning, and NLP, check out the research section of our blog.





Text Analysis API - Sign up





Last Friday we witnessed the start of what has been one of the biggest worldwide cyber attacks in history, the WannaCry malware attack. While information security and hacking threats in general receive regular coverage in the news and media, we haven’t seen anything like the coverage around the WannaCry malware attack recently. Not since the Sony Playstation hack in 2011 have we seen as much media interest in a hacking event.

News outlets cover hacking stories quite frequently because of the threat attacks like this pose to the public. However, when we look at the news coverage over the course of the past 12 months in the graph below, we can see that triple the average monthly story volume on malware was produced in the first three days of the attack alone.

In this blog, we’ll use our News API to look at the media coverage of WannaCry before the news of the attack broke and afterwards, as details of the attack began to surface.

Monthly count of articles mentioning “malware” or “ransomware” over the last 12 months

By analyzing the news articles published about WannaCry and malware in general, with the help of some visualizations we’re going to look at three aspects:  

  • warning signs in the content published before the attack;  
  • how the story developed in the first days of the attack;
  • how the story spread across social media channels.

WannaCry

At 8am CET on Friday May 12th, the WannaCry attack began, and by that evening it had infected over 50,000 machines in 70 countries. By the following Monday, that had risen to 213,000 infections, paralyzing computer systems in hospitals, factories, and transport networks as well as personal devices. WannaCry is a ransomware virus – it encrypts all of the data on the computers it infects, and users only get their data decrypted after paying a $300 or $600 ransom to the hackers. Users whose devices are infected see only the screen below until they pay the ransom.

WannaCry Screen

Source: CNN Money

In the first six days after the attack, the hackers received over $90,000 through more than 290 payments (you can track the payments made to the known Bitcoin wallets here via a useful Twitter bot created by @collinskeith), which isn’t a fantastic conversion rate considering they managed to infect over 200,000 computers. Perhaps if the hackers had done their market research they would have realized that their target audience – those still using Windows XP – are more likely to still write cheques than pay for things with Bitcoin.

The attack was enabled by tools called DoublePulsar and EternalBlue, which exploit security vulnerabilities in Windows. These tools essentially allow someone to access every file on your computer by bypassing the security built into your operating system. The vulnerabilities were originally discovered by the National Security Agency (NSA) in the US, but were leaked by a hacker group called The Shadow Brokers in early April 2017.

The graph below, generated using the time series feature in our News API, shows how the coverage of ransomware and malware in articles developed over time. The Shadow Brokers’ dump in early April was reported on and certainly created a bit of noise, however it seems this was forgotten or overlooked by almost everyone until the attack itself was launched. The graph then shows the huge spike in news coverage once the WannaCry attack was launched.
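For anyone curious what a query like this looks like, below is a rough Python sketch of the kind of time series request behind the chart. The endpoint path, parameter names, and response fields are assumptions based on the News API documentation at the time and may differ from the current API; you would also substitute your own credentials.

```python
import requests

# Hedged sketch only: endpoint, auth headers and parameter names are assumptions.
API_URL = "https://api.aylien.com/news/time_series"
HEADERS = {
    "X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
    "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY",
}
params = {
    "text": "malware OR ransomware",                 # articles mentioning either term
    "published_at.start": "2017-04-01T00:00:00Z",    # April and May window
    "published_at.end": "2017-05-17T00:00:00Z",
    "period": "+1DAY",                               # one data point per day
}

response = requests.get(API_URL, headers=HEADERS, params=params)

# Print the daily article counts that would be plotted in the chart below.
for point in response.json().get("time_series", []):
    print(point["published_at"], point["count"])
```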


Volume of articles mentioning “malware” or “ransomware” in April and May

Monitoring the Media for Warning Signs

Since WannaCry took the world by such surprise, we thought we’d dig into the news content in the weeks prior to the attack and see if we could find any signal in the noise that would have alerted us to a threat. Hindsight is 20/20, but an effective media monitoring strategy can give an in-depth insight into threats and crises as they emerge.

By simply creating a list of the hacking tools dumped online in early April and tracking mentions of these tools, we see definite warning signs. Of these 30 or so exploits, DoublePulsar and EternalBlue were the only ones mentioned again before the attack, and these ended up being the ones used to enable the WannaCry attack.

Mentions of each of the exploit tools dumped in April and May

 

We can then use the stories endpoint to collect the articles that contributed to the second spike in story volumes, around April 25th. Digging into these articles provides another clear warning: the articles collected cover reports by security analysts estimating that DoublePulsar had been installed on 183,000 machines since the dump ten days earlier (not far off the more than 200,000 machines WannaCry went on to infect). Although these reports were published in cybersecurity publications, news of the threat didn’t make it to mainstream media until the NHS was hacked and hospitals had to send patients home.


Story on the spread of DoublePulsar and EternalBlue in SC Magazine

Trends in the Coverage

As it emerged early on Friday morning that malware was spreading through personal computers, private companies and government organizations, media outlets broke the story to the world as they gained information. Using the trends endpoint of our News API, we decided it would be interesting to try and understand what organizations and companies were mentioned in the news alongside the WannaCry attack. Below you can see the most mentioned organisations that were extracted from news articles about the attack.

Organisations mentioned in WannaCry stories
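A trends query along these lines might look like the sketch below. The `field` value and other parameter names are assumptions rather than a definitive reference, so check the News API documentation before relying on them.

```python
import requests

# Hedged sketch of a trends query for the most-mentioned organisations
# in WannaCry coverage; parameter and field names are assumptions.
API_URL = "https://api.aylien.com/news/trends"
HEADERS = {
    "X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
    "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY",
}
params = {
    "text": "WannaCry",
    "published_at.start": "2017-05-12T00:00:00Z",
    "published_at.end": "2017-05-17T00:00:00Z",
    "field": "entities.body.text",   # aggregate over entities found in article bodies
}

response = requests.get(API_URL, headers=HEADERS, params=params)

# Print the ten most frequently mentioned entities and their counts.
for trend in response.json().get("trends", [])[:10]:
    print(trend["value"], trend["count"])
```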

The next thing we wanted to do was to try and understand how the story developed over time and to illustrate how the media focus shifted from “what,” to “how,” to “who” over a period of a few days.

The focus on Friday was on the immediate impact on the first targets, like the NHS and Telefonica, but as the weekend progressed the stories began to focus on the method of attack, with many mentions of Windows and Windows XP (the operating system that was particularly vulnerable). On Monday and Tuesday the media then turned their focus to who exactly was responsible, and as you can see from the visualization below, mentions of North Korea, Europol, and the NSA began to surface in the news stories collected.
Take a look at the chart below to see how the coverage of the entities changed over time.

 

Mentions of organisations on WannaCry stories published from Friday to Tuesday

 

Most Shared Stories about WannaCry

The final aspect of the story we focused on was how news of the threat spread across different social channels. Using the stories endpoint, we can rank WannaCry stories by their share counts across social media to see what people were sharing about WannaCry. We can see below that people were very interested in the young man who unintentionally found a way to stop the malware from attacking the machines it had installed itself on. This contrasts quite a bit with the type of sources and subject matter of the articles from before the attack began.
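A sketch of that kind of ranking query is shown below; the sort parameter value and response fields are assumptions based on the documentation at the time, not a definitive reference.

```python
import requests

# Hedged sketch: rank WannaCry stories by Facebook shares.
API_URL = "https://api.aylien.com/news/stories"
HEADERS = {
    "X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
    "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY",
}
params = {
    "text": "WannaCry",
    "published_at.start": "2017-05-12T00:00:00Z",
    "published_at.end": "2017-05-17T00:00:00Z",
    "sort_by": "social_shares_count.facebook",   # swap in .linkedin or .reddit
    "per_page": 3,
}

response = requests.get(API_URL, headers=HEADERS, params=params)

# Print the three most-shared stories and their sources.
for story in response.json().get("stories", []):
    print(story["title"], "-", story["source"]["name"])
```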

 

Facebook

  1. “The 22-year-old who saved the world from a malware virus has been named,” Business Insider. 33,800 shares.
  2. “‘Accidental hero’ finds kill switch to stop spread of ransomware cyber-attack,” MSN.com. 28,420 shares.
  3. “Massive ransomware attack hits 99 countries,” CNN. 13,651 shares.

 

LinkedIn

  1. “A Massive Ransomware ‘Explosion’ Is Hitting Targets All Over the World,” VICE Motherboard. 3,612 shares.
  2. “Massive ransomware attack hits 99 countries,” CNN. 2,963 shares.
  3. “Massive ransomware attack hits 74 countries,” CNN. 2,656 shares.

 

Reddit

  1. “‘Accidental hero’ finds kill switch to stop spread of ransomware cyber-attack,” MSN.com. 24,497 upvotes.
  2. “WannaCrypt ransomware: Microsoft issues emergency patch for Windows XP,” ZDNet. 4,454 upvotes.
  3. “Microsoft criticizes governments for stockpiling cyber weapons, says attack is ‘wake-up call’,” CNBC. 3,403 upvotes.

This was a quick analysis of the media reaction to the WannaCry attack using our News API. If you’d like to try it for yourself you can create your free account and start collecting and analyzing stories. Our News API is the most powerful way of searching, sourcing, and indexing news content from across the globe. We crawl and index thousands of news sources every day and analyze their content using our NLP-powered Text Analysis Engine to give you an enriched and flexible news data source.




News API - Sign up





Last month was full of unexpected high-profile publicity disasters, from passengers being dragged off planes to Kendall Jenner failing to solve political unrest.  For this month’s Monthly Media Roundup we decided to collect and analyze news stories related to three major events and try to understand the media reaction to each story, while also uncovering the impact this coverage had on the brands involved.

In the roundup of the month’s news, we’ll cover three major events:

  1. United Airlines’ mishandling of negative public sentiment cut their market value by $255 million.
  2. Pepsi’s ad capitalizing on social movements shows the limits of appealing to people’s social consciousness in advertising.
  3. The firing of Bill O’Reilly shows how brands have become aware of the importance of online sentiment.

1: United Airlines

On Monday, April 10th, a video went viral showing a passenger being violently dragged off a United Airlines flight. On the same day, United CEO Oscar Munoz attempted to play down the controversy by defending staff and calling the passenger “disruptive and belligerent”. With investors balking at a tsunami of negative publicity that Munoz’s statement only compounded, United’s share price fell by over 1% the following day, shaving $255 million off their market capitalization by the end of trading.

We collected relevant news articles published in April using a detailed search query with our News API. By analyzing the volume of articles we collected and the sentiment of each article, we were able to get a clear picture of how the media responded to the video and subsequent events:

Media Reaction to United Airlines Controversy

The volume of stories published shows how quickly the media jumped on the story (and also that Munoz’s statement compounded the issue), while the sentiment graph shows just how negative all that coverage was. The key point here is that the action United took in dealing with the wave of negative online sentiment – not listening to the customer – led to their stock tumbling. Investors predicted that ongoing negative sentiment on such a scale would lose potential customers, and began offloading shares in response.
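For reference, a hedged sketch of the kind of query behind this analysis is shown below; the endpoint, parameter names, and the sentiment response structure are assumptions based on the News API documentation at the time and may differ from the live API.

```python
import requests
from collections import Counter

# Rough sketch only: credentials, endpoint path and response fields are assumptions.
API_URL = "https://api.aylien.com/news/stories"
HEADERS = {
    "X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
    "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY",
}
params = {
    "text": '"United Airlines"',
    "published_at.start": "2017-04-09T00:00:00Z",
    "published_at.end": "2017-04-30T00:00:00Z",
    "per_page": 100,
}

response = requests.get(API_URL, headers=HEADERS, params=params)
stories = response.json().get("stories", [])

# Count how many of the returned articles were positive, neutral or negative.
sentiment_counts = Counter(
    story["sentiment"]["body"]["polarity"] for story in stories
)
print(len(stories), "stories:", sentiment_counts)  # expect a heavy negative skew
```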

Most shared stories about United in April

Facebook

  1. “United Airlines Stock Drops $1.4 Billion After Passenger-Removal Controversy” – Fortune, 57,075 shares
  2. “United Airlines says controversial flight was not overbooked; CEO apologizes again” – USA Today, 43,044 shares

LinkedIn

  1. “United Airlines Passenger Is Dragged From an Overbooked Flight” – The New York Times, 1,443 shares
  2. “When a ticket is not enough: United Airlines forcibly removes a man from an overbooked flight” – The Economist, 1,430 shares

Reddit

  1. “Simon the giant rabbit, destined to be world’s biggest, dies on United Airlines flight” – Fox News, 62,830 upvotes
  2. “Passengers film moment police drag man off a United Airlines plane” – Daily Mail, 25,142 upvotes

2: Pepsi

In contrast with United’s response, Pepsi’s quick reaction to online opinion paid off this month as they faced their own PR crisis. On April 3rd, Pepsi released an ad that was immediately panned for trying to incorporate social movements like Black Lives Matter into a soft drink commercial, prompting widespread ridicule online.

After using our News API to collect every available article on this story and analyzing the sentiment of each article, we can get a picture of how the media reacted to the ad. This lets us see that on the day after the ad was launched, there were over three times more negative articles mentioning Pepsi than positive ones.

Media Reaction to Pepsi’s Kendall Jenner Ad

As a company that spends $2.5 billion annually on advertising, Pepsi were predictably swift in their response to bad publicity, pulling the ad from broadcast just over 24 hours after it was first aired.

Even though this controversy involved a major celebrity, the Pepsi ad debacle was actually shared significantly less than the other PR disasters. By using our News API to rank the most shared articles across major social media platforms, we can see that the story gained a lot less traction than those covering the United scandal.

Most shared articles about Pepsi in April

Facebook

  1. “Twitter takes Pepsi to task over tone-deaf Kendall Jenner ad” – USA Today, 19,028 shares
  2. “Hey Pepsi, Here’s How It’s Done. Heineken Takes On Our Differences, and Nails It” – AdWeek, 16,465 shares

LinkedIn

  1. “Heineken Just Put Out The Antidote to That Pepsi Kendall Jenner Ad” – Fast Company, 1,833 shares
  2. “Pepsi Just Released An Ad That May Be One Of The Worst Ads Ever Made (And That’s Saying Something)” – Inc.com, 1,192 shares

Reddit

  1. “Pepsi ad review: A scene-by-scene dissection of possibly the worst commercial of all time” – Independent UK, 58 upvotes
  2. “Pepsi pulls Kendall Jenner advert amid outcry” – BBC, 58 upvotes

3: Fox Firing Bill O’Reilly

On April 1st, the New York Times published an article detailing numerous previously unknown sexual harassment cases brought against Bill O’Reilly. O’Reilly, who was Fox’s most popular host, drew an average of 3 million viewers to his prime-time slot. Though his ratings were unscathed (they actually rose), advertisers began pulling their ads from O’Reilly’s slot in response to the negative PR the host was receiving.

We sourced every available story about Bill O’Reilly published in April and analyzed the sentiment of each article. Below we can see just how negative this coverage was over the course of the story.

Media Reaction to Bill O’Reilly Controversy

This was not the first time that O’Reilly had been accused of sexual harassment, having been placed on leave in 2004 for the same reason. In both 2004 and April 2017, O’Reilly’s viewer ratings remained unhurt by the scandals. What is different in 2017 is that brands are far more aware of the “Voice of the Customer” – social media and online content representing the intentions of potential customers. This means negative coverage and trends like #DropOReilly have a considerable effect on brands’ marketing behaviour.



Most-mentioned Keywords in Articles about Bill O’Reilly in April

By analyzing the content from every article about Bill O’Reilly in April, we can rank the most frequently used Entities and Keywords across the collection of articles. Not surprisingly, our results show us that the coverage was dominated by the topic of sexual harassment and Fox News. But our analysis also uncovered other individuals and brands that were mentioned in news articles as being tied to the scandal. Brands like BMW and Mercedes took swift action to distance themselves from the backlash by announcing they were pulling advertising from O’Reilly’s show in an attempt to preempt any negative press.

Most shared articles about Bill O’Reilly in April

Facebook

  1. “Bill O’Reilly is officially out at Fox News” – The Washington Post, 63,341 shares
  2. “Bill O’Reilly Is Out At Fox News” – NPR, 50,895 shares

LinkedIn

  1. “Bill O’Reilly Out At Fox News” – Forbes, 861 shares
  2. “Fox Is Preparing to Cut Ties with Bill O’Reilly” – The Wall Street Journal, 608 shares

Reddit

  1. “Sources: Fox News Has Decided Bill O’Reilly Has to Go” – New York Magazine, 80,436 upvotes
  2. “Fox News drops Bill O’Reilly in wake of harassment allegations” – Fox News, 12,387 upvotes

We hope this post has given you an idea of how important media monitoring and content analysis are from a PR and branding point of view. Being able to collect and understand thousands of articles in a matter of minutes means you can quickly assess media reaction to PR crises as they unfold.

Ready to try the News API for yourself? Simply click the image below to sign up for a 14-day free trial.





News API - Sign up






2017 looks set to be a big year for us here on Ormond Quay – with AYLIEN in hyper-growth mode, we’ve added six new team members in the first four months of the year, and that shows no sign of slowing down. After such a busy few months, we thought we’d take stock and introduce you to the newest recruits.

Say hello to our newest recruits!

Mahdi

Mahdi: NLP Research Engineer

From Qom, Iran, Mahdi became an open-source contributor at age 16, working on Firefox Developer Tools and other projects you can find on his GitHub. At 18, he was hired as a full-stack developer to work on browser extensions and mobile apps. Mahdi just started at AYLIEN as a Natural Language Processing Research Engineer focusing on Deep Learning, while also working as a full-stack developer on our web apps. He blogs about programming (and life in general) on theread.me.

Mahdi is a serious outdoorsman who can be found hiking in the hills and practicing Primitive Living. He also loves learning languages and reading, which provides him with the raw material to fill our Slack loading messages with some supremely inspirational quotes!

Demian

Demian: NLP Research Intern

Demian comes from Braunschweig in central Germany and completed a degree in Computational Linguistics at the University of Heidelberg. As part of his degree he studied NLP and Artificial Intelligence in information extraction, and he is already familiar with Dublin from an Erasmus year spent at Trinity College. Demian previously worked in the Forensic Department of PwC in Germany, and here at AYLIEN he is going to research document summarization and event extraction for our News API.

Besides being a proficient coder, Demian is an avid painter and reader, and can be found running in Dublin’s parks.

Sylver

Sylver: Data Management Intern

Growing up between Dublin and Seattle, Sylver swapped one rainy town with a thriving tech scene for another. She is currently studying Legal Practice and Procedures, and before starting with us here at AYLIEN she was an editor of everything from novels to academic papers. Here at AYLIEN Sylver works on maintaining and managing our datasets and models.

A previous owner of 10 snakes (at the same time), Sylver spends her spare time caring for exotic pets, and is interested in reading, alternative modelling, and fitness.

Hosein

Hosein: Web Designer

Hosein is a native of Tehran who has three years’ experience in UI design and front-end development, having worked with startups and IT companies in Iran. A newcomer to NLP, Hosein is designing the AYLIEN website and web apps, and also developing our front-ends.

While he’s away from his laptop, Hosein is usually out taking photographs and finding out more about cameras.

Erfan

Erfan: NLP Research Engineer

From Urmia in Northwestern Iran, Erfan holds a Bachelor’s Degree in Software Engineering from Sharif University in Tehran. He has been researching computer vision for three years and you can read about his research on his blog. For his thesis, he used Deep Neural Nets to study the joint embedding of image and text, and at AYLIEN he is going to research and work on using memory-augmented neural nets, focusing on question-answering.

Will

Will: Content Marketing Intern

From the comparatively less exotic background of Dublin, Will is a Classics graduate who completed a Master’s in Digital Humanities at Trinity College, where he was introduced to NLP when he tried to write some code to index where authors use Latin words across English Literature. At AYLIEN, he is joining the Sales, Marketing, and Customer Success team to bolster our content creation and distribution efforts, and is even writing this exact sentence at this very moment in time.

Outside of AYLIEN, Will is an avid reader and learner of languages, and when he’s outside, he can be found running or hiking.

Come work with us!

So that sums up our new recruits – a pretty diverse group who all gravitated towards languages and programming. If you think you’d like to join us, take a look at aylien.com/jobs, email us at jobs@aylien.com, or call in for a fresh cup of coffee. We’re always interested in talking to anyone working on or studying NLP, Computational Linguistics or Machine Learning.




News API - Sign up



