
Our researchers at AYLIEN keep abreast of and contribute to the latest developments in the field of Machine Learning. Recently, two of our research scientists, John Glover and Sebastian Ruder, attended NIPS 2016 in Barcelona, Spain. In this post, Sebastian highlights some of the stand-out papers and trends from the conference.

# NIPS

The Conference on Neural Information Processing Systems (NIPS) is one of the two top conferences in machine learning. It took place for the first time in 1987 and is held every December, historically in close proximity to a ski resort. This year, it took place in sunny Barcelona. The conference (including tutorials and workshops) went on from Monday, December 5 to Saturday, December 10. The full conference program is available here.

Machine Learning seems to grow more pervasive every month. However, it is still sometimes hard to gauge the actual extent of this development. One of the most accurate barometers for this evolution is the growth of NIPS itself. The number of attendees skyrocketed at this year’s conference, growing by over 50% year-over-year.

Image 1: The growth of the number of attendees at NIPS follows (the newly coined) Terry’s Law (named after Terrence Sejnowski, the president of the NIPS foundation; faster growth than Moore’s Law)

Unsurprisingly, Deep Learning (DL) was by far the most popular research topic, with roughly one in four of the more than 2,500 submitted papers (and 568 accepted papers) dealing with deep neural networks.

Image 2: Distribution of topics across all submitted papers (Source: The review process for NIPS 2016)

On the other hand, the distribution of research paper topics has quite a long tail, reflecting the diversity of the conference, which spans everything from theory to applications, from robotics to neuroscience, and from healthcare to self-driving cars.

One of the hottest developments within Deep Learning was Generative Adversarial Networks (GANs). These minimax game-playing networks have by now won the favor of many luminaries in the field. Yann LeCun hails them as the most exciting development in ML in recent years. The organizers and attendees of NIPS seemed to side with him: NIPS featured a tutorial by Ian Goodfellow on his brainchild, which drew a packed main conference hall.
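For reference, here is the minimax objective from the original GAN formulation that the tutorial covered: the discriminator D learns to tell real data from samples drawn from the generator G, while G learns to fool D:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```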

Image 3: A full conference hall at the GAN tutorial

Though GANs are a fairly recent development, the conference papers already feature many cool extensions:

• Reed et al. propose a model that allows you to specify not only what you want to draw (e.g. a bird) but also where to put it in an image.
• Chen et al. disentangle factors of variation in GANs by representing them with latent codes. The resulting models allow you to adjust e.g. the type of a digit, its breadth and width, etc.

In spite of their popularity, we know alarmingly little about what makes GANs so good at generating realistic-looking images. In addition, making them work in practice is an arduous endeavour, and a lot of (undocumented) hacks are necessary to achieve the best performance. Soumith Chintala presented a collection of these hacks in his “How to train your GAN” talk at the Adversarial Training workshop.

Image 4: How to train your GAN (Source: Soumith Chintala)

Yann LeCun mused in his keynote that the development of GANs parallels the history of neural networks themselves: they were poorly understood and hard to get to work in the beginning, and only took off once researchers figured out the right tricks. At this point, it seems unlikely that GANs will experience a winter anytime soon; the research community is still learning how to make the best use of them, and it will be exciting to see what progress we can make in the coming years.

On the other hand, the success of GANs so far has been limited mostly to Computer Vision due to their difficulty in modelling discrete rather than continuous data. The Adversarial Training workshop showcased some promising work in this direction (see e.g. our own John Glover’s paper on modeling documents, this paper and this paper on generating text, and this paper on adversarial evaluation of dialogue models). It remains to be seen if 2017 will be the year in which GANs break through in NLP.

# The Nuts and Bolts of Machine Learning

Andrew Ng gave one of the best tutorials of the conference with his take on building AI applications using Deep Learning. Drawing on his experience of managing the 1,300-person AI team at Baidu and hundreds of applied AI projects, and equipped solely with two whiteboards, he shared many insights about how to build and deploy AI applications in production.

Besides better hardware, Ng attributes the success of Deep Learning to two factors: first, in contrast to traditional methods, deep NNs are able to learn more effectively from large amounts of data; secondly, end-to-end (supervised) Deep Learning allows us to learn to map from inputs directly to outputs.

While this approach to training chatbots or self-driving cars is sufficient for innovative research papers, Ng emphasized that end-to-end DL is often not production-ready: a chatbot that maps from text directly to a response cannot hold a coherent conversation or fulfill a request, while mapping from an image directly to a steering command can have literally fatal consequences if the model encounters a part of the input space it has never seen before. For a production model, we still want intermediate steps: for a chatbot, we prefer an inference engine that generates the response, while in a self-driving car, DL is used to identify obstacles and the steering is performed by a traditional planning algorithm.

Image 5: Andrew Ng on end-to-end DL (right: end-to-end DL chatbot and chatbot with inference engine; left bottom: end-to-end DL self-driving car and self-driving car with intermediate steps)

Ng also shared that the most common mistake he sees in project teams is tracking the wrong metrics: in an applied machine learning project, the only relevant metrics are the training error, the development error, and the test error. These metrics alone enable the project team to know what steps to take, as he demonstrated in the diagram below:

Image 6: Andrew Ng’s flowchart for applied ML projects

A key facilitator of the recent success of ML has been the advances in hardware that enabled faster computation and storage. Given that Moore’s Law will reach its limits sooner or later, one might reason that the rise of ML could plateau as well. Ng, however, argued that the commitment of leading hardware manufacturers such as NVIDIA and Intel, and the ensuing performance improvements to ML hardware, will fuel further growth.

Among ML research areas, supervised learning is the undisputed driver of the recent success of ML and will likely continue to drive it for the foreseeable future. In second place, Ng saw neither unsupervised learning nor reinforcement learning, but transfer learning. We at AYLIEN are bullish on transfer learning for NLP and think that it has massive potential.

# Recurrent Neural Networks

The conference also featured a symposium dedicated to Recurrent Neural Networks (RNNs). The symposium coincided with the 20-year anniversary of LSTM…

Image 7: Jürgen Schmidhuber kicking off the RNN symposium

… being rejected from NIPS 1996. The fact that papers that do not use LSTMs have been rare in the most recent NLP conferences (see our EMNLP blog post) is a testament to the perseverance of the authors of the original paper, Sepp Hochreiter and Jürgen Schmidhuber.

At NIPS, several papers sought to improve RNNs in different ways, while other improvements apply to Deep Learning in general:

• Salimans and Kingma propose Weight Normalisation, a reparameterisation that accelerates training and can be applied in two lines of Python code (see the sketch after this list).
• Li et al. propose a multinomial variant of dropout that sets neurons to zero depending on the data distribution.
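To make the “two lines” claim concrete, here is what the reparameterisation looks like: a weight vector w is expressed as a scalar magnitude g times a direction v/||v||, so the optimizer can adjust length and direction independently. A minimal NumPy sketch, not the authors’ implementation:

```python
import numpy as np

def weight_norm(v, g):
    # Weight Normalisation: reparameterise w = g * v / ||v||,
    # decoupling the weight vector's direction (v) from its norm (g).
    return g * v / np.linalg.norm(v)
```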

The Neural Abstract Machines & Program Induction (NAMPI) workshop also featured several speakers talking about RNNs:

• Alex Graves focused on his recent work on Adaptive Computation Time (ACT) for RNNs, which makes it possible to decouple processing time from sequence length. He showed that a word-level language model with ACT could reach state-of-the-art results with fewer computations.
• Edward Grefenstette outlined several limitations and potential future research directions in the context of RNNs in his talk.

# Improving classic algorithms

While Deep Learning is a fairly recent development, the conference also featured several improvements to algorithms that have been around for decades:

• Ge et al. show in their best paper that the non-convex objective for matrix completion has no spurious local minima, i.e. every local minimum is a global minimum.
• Bachem et al. present a method that guarantees accurate and fast seedings for large-scale k-means++ clustering (a sketch of classic seeding follows this list). The presentation was one of the most polished of the conference, and the code is open source and can be installed via pip.
• Ashtiani et al. show that we can make NP-hard k-means clustering problems solvable by allowing the model to pose queries for a few examples to a domain expert.
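For context, Bachem et al.’s contribution speeds up the seeding step below, which classic k-means++ performs with a full pass over the data for every new center. A minimal NumPy sketch of the classic procedure, not the authors’ accelerated method:

```python
import numpy as np

def kmeans_pp_seeding(X, k, rng=np.random.default_rng(0)):
    # k-means++ seeding: pick the first center uniformly at random,
    # then sample each further center with probability proportional
    # to its squared distance from the nearest chosen center.
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```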

# Reinforcement Learning

Reinforcement Learning (RL) was another much-discussed topic at NIPS with an excellent tutorial by Pieter Abbeel and John Schulman dedicated to RL. John Schulman also gave some practical advice for getting started with RL.

One of the best papers of the conference introduced Value Iteration Networks, which learn to plan by embedding a differentiable approximation of a classic planning algorithm (value iteration) in a CNN. This paper was another cool example of one of the major benefits of deep neural networks: they allow us to learn increasingly complex behaviour, as long as we can represent it in a differentiable way.
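The classic algorithm in question is value iteration, whose update is shown below. In the VIN architecture, the expectation over next states corresponds to a convolution and the maximum over actions to channel-wise max-pooling, which is what makes the planner differentiable end to end:

```latex
V_{k+1}(s) = \max_{a} \Big( R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V_k(s') \Big)
```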

During the week of the conference, several research environments for RL were released almost simultaneously, among them OpenAI’s Universe, DeepMind Lab, and FAIR’s TorchCraft. These will likely be a key driver of future RL research and should open up new research opportunities.

# Learning-to-learn / Meta-learning

Another topic that came up in several discussions over the course of the conference was Learning-to-learn or Meta-learning:

• Andrychowicz et al. learn an optimizer in a paper with the ingenious title “Learning to learn by gradient descent by gradient descent”.
• Vinyals et al. learn how to one-shot learn in a paper that frames one-shot learning within the sequence-to-sequence framework and has inspired new approaches to one-shot learning.

Most of the existing papers on meta-learning demonstrate that wherever you have a procedure that gives you gradients, you can use another algorithm, itself trained via gradient descent, to optimize it. Prepare for a surge of “Meta-learning for X” and “(Meta-)+learning” papers in 2017. It’s LSTMs all the way down!

Meta-learning was also one of the key talking points at the RNN symposium. Jürgen Schmidhuber argued that a true meta-learner would be able to learn in the space of all programs and would have the ability to modify itself, and elaborated on these ideas in his talk at the NAMPI workshop. Ilya Sutskever remarked that we currently have no good meta-learning models. However, there is hope, as the plethora of new research environments should also bring progress in this area.

# General Artificial Intelligence

Learning how to learn also plays a role in the pursuit of the elusive goal of General Artificial Intelligence, which was a topic in several keynotes. Yann LeCun argued that in order to achieve General AI, machines need to learn common sense. While common sense is often vaguely invoked in research papers, Yann LeCun gave a succinct definition: “Predicting any part of the past, present or future percepts from whatever information is available.” He called this predictive learning, but noted that it is really unsupervised learning.

His talk also marked the appearance of a controversial, often tongue-in-cheek-copied image of a cake, which he used to argue that unsupervised learning is the most challenging task and the one on which we should concentrate our efforts, while RL is merely the cherry on top of the cake.

Image 8: The Cake slide of Yann LeCun’s keynote

Drew Purves focused on the bilateral relationship between the environment and AI in what was probably the most aesthetically pleasing keynote of the conference (just look at those graphics!)

Image 9: Graphics by Max Cant of Drew Purves’ keynote (Source: Drew Purves)

He emphasized that while simulations of ecological tasks in naturalistic environments could be an important test bed for General AI, General AI is needed to maintain the biosphere in a state that will allow the continued existence of our civilization.

Image 10: Nature needs AI and AI needs Nature from Drew Purves’ keynote

While it is frequently — and incorrectly — claimed that neural networks work so well because they emulate the brain’s behaviour, Saket Navlakha argued in his keynote that we can still learn a great deal from the brain’s engineering principles. For instance, rather than pre-allocating a large number of neurons, the brain generates thousands of synapses per minute until the second year of life. From then until adolescence, the number of synapses is pruned back by roughly 50%.

Image 11: Saket Navlakha’s keynote

It will be interesting to see how neuroscience can help us to advance our field further.

In the context of the Machine Intelligence workshop, another environment was introduced in the form of FAIR’s CommAI-env, which makes it possible to train agents through interaction with a teacher. During the panel discussion, the ability to learn hierarchical representations and to identify patterns was emphasized. However, although the field is making rapid progress on standard tasks such as object recognition, it is unclear whether the focus on such specific tasks actually brings us closer to General AI.

# Natural Language Processing

While NLP is more of a niche topic at NIPS, there were a few papers with improvements relevant to NLP:

• He et al. propose a dual learning framework for MT that has two agents translating in opposite directions teaching each other via reinforcement learning.
• Sokolov et al. explore how to use structured prediction under bandit feedback.
• Huang et al. extend Word Mover’s Distance, an unsupervised document similarity metric, to the supervised setting.
• Lee et al. model the helpfulness of reviews by taking into account position and presentation biases.

Finally, a workshop on learning methods for dialogue explored how end-to-end systems, linguistics and ML methods can be used to create dialogue agents.

# Miscellaneous

## Schmidhuber

Jürgen Schmidhuber, the father of the LSTM, was not only present on several panels, but did his best to remind everyone that whatever your idea, he had a similar idea two decades ago and you had better cite him lest he interrupt your tutorial.

## Robotics

Boston Dynamics’ Spot proved that — even though everyone is excited by learning and learning-to-learn — traditional planning algorithms are enough to win the admiration of a hall full of learning enthusiasts.

Image 12: Boston Dynamics’ Spot amid a crowd of fascinated onlookers

## Apple

Apple, one of the most secretive companies in the world, has decided to be more open, to publish, and to engage with academia. This can only be good for the community. We’re looking forward to more Apple research papers.

Image 13: Ruslan Salakhutdinov at the Apple lunch event

## Uber

Uber announced their acquisition of Cambridge-based AI startup Geometric Intelligence and threw one of the most popular parties of NIPS.

Image 14: The Geometric Intelligence logo

## Rocket AI

Speaking of startups, the “launch” of Rocket AI and their patented Temporally Recurrent Optimal Learning had some people fooled (note the acronyms in the tweets below). Riva-Melissa Tez finally cleared up the confusion.

These were our impressions from NIPS 2016. We had a blast and hope to be back in 2017!

## Intro

For PR professionals, entrepreneurs, marketers, or just about anyone looking to connect with relevant journalists, reporters and influencers to cover their press release, the biggest challenge often lies in identifying exactly who the most suitable people to approach are.

This can be a time-consuming and often fruitless endeavour, as many take a spray-and-pray approach, sending out high volumes of emails in the hope that someone out there picks one up. The main drawback of this approach is that mass emails aren’t targeted: they are inevitably written in an impersonal manner and generally fail to grab the attention of the intended recipient.

To help streamline and vastly improve this entire process, we’re going to show you how you can use Machine Learning and NLP to significantly improve your PR targeting: a technique we’ve used at AYLIEN to land coverage in the likes of TechCrunch, The Next Web and Forbes.

Using the AYLIEN News API, we’ll show you how easy it can be to quickly build your own highly-targeted list of journalists, reporters and influencers to reach out and pitch to.

As an example, let’s say you’ve recently closed a funding round and you’re hoping to get some press coverage and exposure. We’ll start by identifying the publishers who have generated the most articles mentioning startups and funding in the past 60 days. We will then narrow our search and get more targeted by finding the specific people who write about startup funding, and finish with some tips and instructions on how to create a highly-targeted search to match your own needs.

## Which publishers are writing about startup funding?

To find the publishers that write the most about startups and funding, we’ll use the /trends endpoint in the News API. Using the /trends endpoint enables you to identify the most frequently mentioned keywords, entities and topical or sentiment-related categories in news content. Put simply, it allows you to measure the amount of times that specific elements of interest are mentioned in the content you source through the News API.

By performing the following search using /trends, we can source these metrics for all stories that mention our keywords–startup and funding–and by specifying field=source.name, our results will be returned with a count for each source (publisher, news outlet or blog).

Here’s the query we used;
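Below is a minimal Python sketch of such a /trends call. The base URL, header names and exact parameter names are assumptions based on the News API documentation, so treat this as illustrative rather than a verbatim copy of our query;

```python
import requests

# Hypothetical /trends call: count stories mentioning our keywords,
# aggregated by publisher name, over the past 60 days.
params = {
    "text": "startup AND funding",       # keywords to match in stories
    "field": "source.name",              # aggregate counts per source
    "published_at.start": "NOW-60DAYS",  # rolling 60-day window
    "published_at.end": "NOW",
}
headers = {
    "X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
    "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY",
}
resp = requests.get("https://api.newsapi.aylien.com/api/v1/trends",
                    params=params, headers=headers)
print(resp.json())
```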

Our News API returns results in JSON format, and here’s what they look like for this query;


```json
{
"trends": [
{
"value": "TechCrunch",
"count": 206
},
{
"value": "Fortune",
"count": 108
},
{
"count": 91
},
{
"value": "PR Newswire",
"count": 70
},
{
"value": "Inc.com",
"count": 64
},
{
"value": "Seeking Alpha",
"count": 62
},
{
"value": "Forbes",
"count": 52
},
{
"value": "CNBC TV18",
"count": 46
},
{
"value": "Entrepreneur.com",
"count": 44
},
{
"value": "Bloomberg",
"count": 43
},
{
"value": "BetaKit",
"count": 33
},
{
"value": "Market Wired",
"count": 31
},
{
"value": "Huffington Post",
"count": 26
},
{
"value": "Quartz",
"count": 26
},
{
"value": "Fast Company",
"count": 24
},
{
"count": 23
},
{
"count": 19
},
{
"value": "ZDNet",
"count": 18
},
{
"value": "Daily Mail UK",
"count": 18
},
{
"value": "Mashable",
"count": 17
},
{
"value": "The Guardian",
"count": 16
},
{
"value": "Deccan Herald",
"count": 15
},
{
"value": "Globe and Mail",
"count": 14
},
{
"count": 13
},
{
"value": "Reuters",
"count": 12
},
{
"count": 12
},
{
"value": "The Next Web",
"count": 10
},
{
"value": "The Wall Street Journal",
"count": 10
},
{
"value": "Economic Times",
"count": 10
},
{
"value": "Variety",
"count": 10
},
{
"count": 10
},
{
"value": "Times of Israel",
"count": 10
},
{
"value": "CNN",
"count": 9
},
{
"value": "CNET",
"count": 9
},
{
"value": "Globes",
"count": 9
},
{
"value": "The Verge",
"count": 8
},
{
"value": "Autonews",
"count": 8
},
{
"value": "Yahoo",
"count": 8
},
{
"value": "Irish Independent",
"count": 8
},
{
"value": "Modern Ghana",
"count": 8
},
{
"value": "Drudge Report",
"count": 8
},
{
"value": "Berlin Startup Jobs",
"count": 8
},
{
"value": "Digital Trend",
"count": 7
},
{
"value": "Times of India",
"count": 7
},
{
"value": "Albuquerque Journal",
"count": 7
},
{
"value": "USA Today",
"count": 6
},
{
"value": "Nikkei Asian Review",
"count": 6
},
{
"value": "Times Picayune",
"count": 6
},
{
"value": "New Zealand Herald",
"count": 6
},
{
"value": "Sify",
"count": 6
},
{
"value": "Star",
"count": 6
},
{
"value": "Malay Mail",
"count": 6
},
{
"value": "WCPO",
"count": 6
},
{
"value": "The Guardian Nigeria",
"count": 6
},
{
"value": "The Economist",
"count": 5
},
{
"value": "Japan Times",
"count": 5
},
{
"value": "Republican",
"count": 5
},
{
"value": "Daily Courier",
"count": 5
},
{
"value": "Sydney Morning Herald",
"count": 5
},
{
"value": "Gulf News",
"count": 5
},
{
"value": "Bangkok Post",
"count": 5
},
{
"value": "Buzz Feed",
"count": 5
},
{
"value": "DNA",
"count": 5
},
{
"value": "Kyiv Post",
"count": 5
},
{
"value": "Portland Press Herald",
"count": 5
},
{
"value": "Roanoke Times",
"count": 5
},
{
"value": "ALL TOP STARTUPS",
"count": 5
},
{
"value": "Irish Central",
"count": 5
},
{
"value": "CRN",
"count": 5
},
{
"value": "Haaretz",
"count": 5
},
{
"value": "Nigeria Communications Week",
"count": 5
},
{
"value": "Wired",
"count": 4
},
{
"value": "Kiplinger",
"count": 4
},
{
"value": "Vietnam Net",
"count": 4
},
{
"value": "M Live - 786",
"count": 4
},
{
"value": "Scoop",
"count": 4
},
{
"value": "Arkansas Democrat Gazette",
"count": 4
},
{
"value": "Newsweek",
"count": 4
},
{
"value": "Stuff",
"count": 4
},
{
"value": "Yale Daily News",
"count": 4
},
{
"value": "Anthill Online",
"count": 4
},
{
"value": "Medium",
"count": 4
},
{
"value": "Vice Motherboard",
"count": 4
},
{
"value": "IT news Africa",
"count": 4
},
{
"value": "Zero Hedge",
"count": 3
},
{
"value": "Oregonian",
"count": 3
},
{
"value": "Philippine Daily Inquirer",
"count": 3
},
{
"value": "Daily Caller",
"count": 3
},
{
"value": "Benzinga",
"count": 3
},
{
"value": "Billboard",
"count": 3
},
{
"value": "International Business Times - UK",
"count": 3
},
{
"value": "Age",
"count": 3
},
{
"value": "D Magazine",
"count": 3
},
{
"value": "Montreal Gazette",
"count": 3
},
{
"value": "Hill",
"count": 3
},
{
"value": "ARL Now",
"count": 3
},
{
"count": 3
},
{
"value": "Channel News Asia",
"count": 3
},
{
"value": "China Post",
"count": 3
}
],
"field": "source.name"
}
```
By importing our results into a visualization tool such as Tableau, we can quickly get an idea of which publishers are writing most about our selected keywords.


Straight away we can see that TechCrunch dominates our results, generating almost twice as many matches as the next-highest publisher. What does this tell us? It tells us that TechCrunch is more than likely a leading publisher when it comes to writing about startup funding.

## Which reporters are writing about startup funding?

Now that we’ve established the top publishers writing about startups and funding, we’ll look to find out which specific reporters/influencers are writing the most content around this subject area.

Similar to our previous query, we’re once again going to use the /trends endpoint. This time, however, we’ll look at field=author.name. Here’s the search query we used;
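In sketch form, it is the same call as before with the aggregation field switched to authors (parameter names assumed, as above);

```python
# Continuing from the /trends sketch above: aggregate by author.
params["field"] = "author.name"
resp = requests.get("https://api.newsapi.aylien.com/api/v1/trends",
                    params=params, headers=headers)
```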

Here are our visualized results for the query above;

If further proof were needed that TechCrunch is a leader in reporting on startup funding, check out the top ten authors from our results, and who they write for. TechCrunch reporters make up half of the top 10, but top of the list is Erin Griffiths of Fortune.

1. Erin Griffiths – Fortune
2. Steve O’Hear – TechCrunch
3. Lora Kolodny – TechCrunch
4. Kia Kokalitcheva – Fortune
5. Sarah Buhr – TechCrunch
6. Ingrid Lunden – TechCrunch
8. Connie Loizos – TechCrunch
9. Jessica Galang – BetaKit
10. Tas Bindi – ZDNet

### What now?

Now that you have a list of reporters who you know are writing plenty of content around your area of interest, you can focus your efforts on contacting them individually, rather than sending out blind and impersonal mass emails.

Reporters generally have a profile or portfolio of their work on their publisher’s website, and so by citing this relevant work as a reason for contacting them specifically, you are showing that you have done your homework and have intentionally reached out to them.

Depending on your own precise search criteria, there are a number of options available to narrow down your search and pinpoint exactly what, and who, you are looking for.

### Search by article title

While searching for mentions of startup and funding gave us some excellent results, perhaps you have a niche product or app and you would like to find a reporter who has previously written about your exact field of expertise. Searching by article title is often the most accurate method of sourcing content that is specifically about your keyword, rather than just mentioning it somewhere in the body of text.

Previously, we found that 5 out of our top 10 search results for startup and funding write for TechCrunch. But what if we want to be even more targeted and find a reporter who specifically writes about fintech startups and funding?

To do so, we will use a previous search query for startup and funding from above, but we will now add a parameter to search article titles for the word fintech. Here’s our updated query;
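As a sketch, this amounts to adding a title filter on top of the author query (the title parameter name is an assumption);

```python
# Keep the keyword search, but also require "fintech" in the title.
params["field"] = "author.name"
params["title"] = "fintech"
resp = requests.get("https://api.newsapi.aylien.com/api/v1/trends",
                    params=params, headers=headers)
```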

JSON results;


```json
{
"trends": [
{
"value": "Oscar Williams-grut",
"count": 9
},
{
"value": "Erweiterte Suche",
"count": 5
},
{
"value": "Andrew Meola",
"count": 3
},
{
"value": "Natasha Lomas",
"count": 2
},
{
"value": "Steve O'hear",
"count": 2
},
{
"value": "Tas Bindi",
"count": 2
},
{
"value": "John Rampton",
"count": 1
},
{
"value": "Roger Aitken",
"count": 1
},
{
"count": 1
},
{
"value": "Tx Zhuo",
"count": 1
},
{
"value": "Lisa Rabasca Roepe",
"count": 1
},
{
"value": "Richie Hecker",
"count": 1
},
{
"value": "Mileika Lasso",
"count": 1
},
{
"value": "Peter Nowak",
"count": 1
},
{
"value": "Par Sophie",
"count": 1
},
{
"value": "Spencer Israel",
"count": 1
},
{
"value": "John Detrixhe",
"count": 1
},
{
"value": "Julie Verhage",
"count": 1
},
{
"value": "Jessica Galang",
"count": 1
},
{
"value": "Douglas Soltys",
"count": 1
},
{
"value": "Ara Rodríguez",
"count": 1
},
{
"value": "Jessica Vomiero",
"count": 1
},
{
"value": "Valeria Ríos",
"count": 1
},
{
"value": "Amy Feldman",
"count": 1
},
{
"value": "Ameinfo Staff",
"count": 1
},
{
"value": "Kevin Sandhu",
"count": 1
},
{
"value": "George Beall",
"count": 1
},
{
"value": "Par Delphine",
"count": 1
},
{
"value": "Caitlin Hotchkiss",
"count": 1
},
{
"value": "Robert Hackett",
"count": 1
},
{
"value": "Nathan Sinnott",
"count": 1
},
{
"value": "Eliran Rubin",
"count": 1
},
{
"value": "Lee Roden",
"count": 1
},
{
"value": "Piruze Sabuncu",
"count": 1
},
{
"value": "Danon Gabriel",
"count": 1
},
{
"value": "Rachel Witkowski",
"count": 1
}
],
"field": "author.name"
}
```
As you can see from the JSON results above, Oscar Williams-grut has recently written 9 articles matching our search query. A quick look at Oscar’s profile on Business Insider confirms that he writes about finance, specializing in fintech, business, markets, and politics. He would certainly top our list of contacts if we wanted to reach out about a fintech startup funding press release!

### Location and language

Our News API scans content from thousands of sources and RSS feeds worldwide, in multiple languages, meaning you can narrow your search to locate content in specific languages and from specific countries. As an example, you can add the following parameters to your search query to locate only sources from Portugal that also publish in Portuguese;

• source.locations.country[]=pt
• language[]=pt

### Social shares count

One of the main reasons for finding relevant reporters and bloggers in the first place is to gain as much public exposure as possible. One way to help ensure this is to select reporters based on the number of shares their content receives on social media.

You can be quite specific here by choosing the social network(s) that interest you most. For example, perhaps your content is best suited for distribution on Facebook. You can therefore find out which reporters tend to generate the most shares on Facebook by adding a minimum share count for that network. Here’s an example query that will do just that, by only sourcing authors who have generated over 10,000 shares on Facebook in the past 60 days;
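In sketch form, this is a single extra filter on the author query (the parameter name is an assumption modeled on the News API’s social-shares filters);

```python
# Only count stories that earned at least 10,000 Facebook shares.
params["social_shares_count.facebook.min"] = 10000
resp = requests.get("https://api.newsapi.aylien.com/api/v1/trends",
                    params=params, headers=headers)
```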

At the time of writing, this query returns the names of four reporters, each of whom has generated over 10,000 Facebook shares with content containing our keywords startup and funding published in the past 60 days.

Of course, the more you lower the minimum number of shares, the more results you will obtain. When we changed the above search query to a minimum of 5,000 shares, our results almost trebled.

### Alexa rank

Similar to how we defined a minimum number of Facebook social shares in the example above, you also have the option to define the minimum and maximum Alexa rank of websites that you source.

Why is this useful? The Alexa ranking system analyzes the frequency of visits to websites and ranks them against each other according to the volume of visits they receive. Alexa’s algorithm is pretty simple: rank is calculated from the website traffic generated over the past 3 months.

If you’re looking to maximize your exposure, you will naturally want your content to be featured on sites with the highest visitor traffic, and you will therefore be looking at sites with the best Alexa ranks.

Try the search query below. It is the same as our earlier search for publishers, but we are now narrowing the search to only include sites with an Alexa rank of 1-1000.
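A sketch of that query, with the rank bounds as extra filters (parameter names assumed);

```python
# Publisher trends again, restricted to sites ranked 1-1000 by Alexa.
params = {
    "text": "startup AND funding",
    "field": "source.name",
    "published_at.start": "NOW-60DAYS",
    "published_at.end": "NOW",
    "source.rankings.alexa.rank.min": 1,
    "source.rankings.alexa.rank.max": 1000,
}
resp = requests.get("https://api.newsapi.aylien.com/api/v1/trends",
                    params=params, headers=headers)
```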

## Conclusion

It took us less than 5 minutes to source and visualize the top publishers and reporters writing about startup funding, which could potentially save hours of time scanning the web and social media in the search for suitable influencers to reach out to about your press release.

Ready to try the News API for yourself? Click the image below and sign up for a free 14-day trial.

## Intro

Here at AYLIEN we spend our days creating cutting-edge NLP and Text Analysis solutions such as our Text Analysis API and News API to help developers build powerful applications and processes.

We understand, however, that not everyone has the programming knowledge required to use APIs, and this is why we created our Text Analysis Add-on for Google Sheets – to bring the power of NLP and Text Analysis to anyone who knows how to use a simple spreadsheet.

Today we want to show you how you can build an intelligent sentiment analysis tool with zero coding using our Google Sheets Add-on and a free service called IFTTT.

Here’s what you’ll need to get started;

• A Google account, for Google Drive and Google Sheets
• Our Text Analysis Add-on for Google Sheets
• A free IFTTT account
• A Twitter account

## What is IFTTT?

IFTTT stands for If This, Then That. It is a free service that enables you to automate specific tasks by triggering actions in apps when certain criteria are met. For example: “if the weather forecast predicts rain tomorrow, notify me by SMS”.

### Step 1 – Connect Google Drive to IFTTT

• Search for, and select, Google Drive

### Step 2 – Create Applets in IFTTT

Applets are the processes you create to trigger actions based on certain criteria. It’s really straightforward: you define the trigger (the ‘If this’) and then the action (the ‘Then that’). In our previous weather-SMS example, the trigger is a rain forecast in a weather app, and the action is a text message sent to a specified cell phone number.

To create an applet, go to My Applets and click New Applet.

Here’s what you’ll see. Click the blue +this

You will then be shown a list of available apps. In this case, we want to source specific tweets, so select the Twitter app.

You will then be asked to choose a trigger. Select New tweet from search.

You can now define exactly what tweets you would like to source, based on their content. You can be quite specific with your search using Twitter’s search operators, which we’ve listed below;

To search for specific words, hashtags or languages

• Tweets containing all words in any position (“Twitter” and “search”)
• Tweets containing exact phrases (“Twitter search”)
• Tweets containing any of the words (“Twitter” or “search”)
• Tweets excluding specific words (“Twitter” but not “search”)
• Tweets with a specific hashtag (#twitter)
• Tweets in a specific language (written in English)

To search for specific people or accounts

• Tweets from a specific account (Tweeted by “@TwitterComms”)
• Tweets sent as replies to a specific account (in reply to “@TwitterComms”)
• Tweets that mention a specific account (Tweet includes “@TwitterComms”)

• To exclude Retweets (“-rt”)
• To exclude links/URLs (“-http”) and (“-https”)

#### Our first trigger

We’re going to search for tweets that mention “bad santa 2 is” or “bad santa 2 was”. Why these phrases? Well, we find that original, opinionated tweets generally use one of them. It also helps to cut out tweets that contain no opinion (neutral sentiment), such as the one below;

Our goal with this tool is to analyze viewer reaction to Bad Santa 2, which means tweets like this one aren’t particularly interesting to us in this case. However, if we wanted to assess the overall buzz on Twitter about Bad Santa 2, we might instead look for any mention at all and concentrate on the volume of tweets.

And so, here’s our first trigger.

Click Create Trigger when you’re happy with your search. You will then see the following;

Notice how the Twitter icon has been added. Now let’s choose our action. Click the blue +that

Next, search for or select Google Drive. You will then be given 4 options – select Add row to spreadsheet. This action will add each matching tweet to an individual row in Google Sheets.

Next, give the spreadsheet a name. We simply went for ‘Bad Santa 2’. Click Create Action. You will then be able to review your applet. Click Finish when you are happy with it.

Done! Tweets that match your search criteria will start appearing in an auto-generated Google Sheet within minutes. Now you can go through this process again to create a second applet. We chose another movie, Allied. (“Allied was” or “Allied is”).

Here is an example of what you can expect to see accumulate in your Google Sheet;

Note: When you install our Google Sheets Add-on, we’ll give you 1,000 credits to use for free. You then have the option to purchase additional credits should you wish. For this example, we will stay within the free allowance and analyze 500 tweets for each movie. You may choose to use more or fewer, depending on your preference.

### Step 3 – Clean your data

Because of the nature of Twitter, you’re probably going to find a lot of junk and spammy tweets in your spreadsheet. To minimize the number of these tweets that end up in your final data set, there are a few things we recommend you do;

#### Remove duplicate tweets

By sorting your tweets alphabetically, you can quickly scroll through your spreadsheet and easily spot multiples of the same tweet. It’s a good idea to delete these duplicates: not only will they skew your overall results, but multiple instances of the same tweet often point to bot or spam activity on Twitter. To sort your tweets alphabetically, select the entire column, click Data, then Sort sheet by column B, A-Z.

#### Remove retweets (if you haven’t already done so)

Alphabetically sorting your tweets will also group all retweets together (beginning with RT). You may or may not want to include retweets; it’s entirely up to you. We decided to remove them all, because there are so many bots out there auto-retweeting, and analyzing that duplicate content isn’t exactly opinion mining.

#### Search and filter certain words

Think about the movie(s) you are searching for and how their titles may be used in different contexts. For example, we searched for tweets mentioning ‘Allied’, and while we used Twitter’s search operators to exclude words like forces, battle and treaty, we noticed a number of tweets about a company named ‘Allied’. By searching for their company Twitter handle, we could highlight and delete the tweets in which they were mentioned.

#### NB: Remove movie title from tweets

Before you move on to Step 4 and analyze your tweets, it is important to remove the movie title from each tweet, as it may affect the quality of your results. For example, our tweet-level sentiment analysis feature will read ‘Bad Santa 2…” in a tweet and may assign negative sentiment because of the inclusion of the word bad.

To remove all mentions of your chosen movie title, simply use Edit > Find and replace in Google Sheets.

### Step 4 – Analyze your tweets

Now comes the fun part! It’s time to analyze your tweets using the AYLIEN Text Analysis Add-on. If you have not yet installed the Add-on, you can do so here.

Using our Add-on couldn’t be easier. Simply select the column containing all of your tweets, then click Add-ons > Text Analysis.

To find out whether our tweets have been written in a positive, neutral or negative way, we use Sentiment Analysis.

Note: While Sentiment Analysis is a complex and fascinating field in NLP and Machine Learning research, we won’t get into it in too much detail here. Put simply, it enables you to establish the sentiment polarity (whether a piece of text is positive, negative or neutral) of large volumes of text, with ease.

Next, click the drop-down menu and select Sentiment Analysis > Analyze.

Each tweet will then be analyzed for subjectivity (whether it is written subjectively or objectively) and sentiment polarity (whether it is written in a positive, negative or neutral manner). You will also see a confidence score for both subjectivity and sentiment. This tells you how confident we are that the assigned label (positive, negative, objective, etc) is correct.

By repeating this process for our Allied tweets, we can then compare our results and find out which movie has been best received by Twitter users.

### Step 5 – Compare & visualize

In total we analyzed 1,000 tweets, 500 for each movie. Through a simple count of positive, negative and neutral tweets, we received the following results;

| Movie | Positive | Negative | Neutral |
| --- | --- | --- | --- |
| Bad Santa 2 | 170 | 132 | 198 |
| Allied | 215 | 91 | 194 |

Now to generate a percentage score for each movie. Let’s start by excluding all neutral tweets. We can then easily work out what percentage of the remaining tweets are positive. So, for Allied, 215 of the remaining 306 tweets were positive, giving us a positive score of 70%.

By doing the same with Bad Santa 2, we get 56%.

Allied wins!

To visualize your results, use your tweet volume data to generate some charts and graphs in Google Sheets;

### Comparing our results with Rotten Tomatoes & IMDb

It’s always interesting to compare the results of your analysis with those of others. To compare ours, we went to the two major movie review sites, Rotten Tomatoes and IMDb, and we were pleasantly surprised by the similarity in our results!

#### Allied

The image below from Rotten Tomatoes shows both the critic (left) and audience (right) scores for Allied. Seeing as we analyzed tweets from a Twitter audience, we are more interested in the latter. Our score of 70% comes remarkably close to that of almost 15,000 reviewers on Rotten Tomatoes – just 1% off!

IMDb provides an audience-based review score of 7.2/10. Again, very close to our own result.

Our result for Bad Santa 2, while not as close as that of Allied, was still pretty close to Rotten Tomatoes with 56%.

With IMDb, however, we once again come within 1% with a score of 5.7/10.

## Conclusion

We hope that this simple and fun use-case using our Google Sheets Add-on will give you an idea of just how useful, flexible and simple Text Analysis can be, without the need for any complicated code.

While we decided to focus on movie reviews in this example, there are countless other uses for you to try. Here are a few ideas;

• Track mentions of brands or products
• Track event hashtags
• Track opinions towards election candidates

## Intro

Dubbed Europe’s largest technology marketplace and Davos for geeks, Web Summit has been going from strength to strength in recent years, as more and more companies, employees, tech junkies and media personnel flock to the annual event to check out the latest innovations, startups and a star-studded lineup of speakers and exhibitors.

Having grown from a small gathering of around 500 like-minded people in Dublin, this year’s event, which was held in Lisbon for the first time, topped 50,000 attendees representing 15,000 companies from 166 countries.

With such a large gathering of techies, there was bound to be a whole lot of chatter relating to the event on Twitter. So being the data geeks that we are, and before we jetted off to Lisbon ourselves, we turned our digital ears to Twitter and listened for the duration of the event to see what we could uncover.

## Our process

We collected a total of just over 80,000 tweets throughout the event by focusing our search on keywords, Twitter handles and hashtags such as ‘Web Summit’, #websummit, @websummit, etc.

We used the following tools to collect, analyze and visualize the data;

And here’s what we found;

## What languages were the tweets written in?

In total, we collected tweets written in 42 different languages.

Out of our 80,000 tweets, 60,000 were written in English, representing 75% of the total volume.

The pie chart below shows all languages excluding English. As you can see, Portuguese was the next most-used language, with just under 11% of tweets written in the host country’s native tongue. Spanish and French tweets each accounted for around 2.5% of total volume.

## How did tweet volumes fluctuate throughout the week?

The graph below represents hourly tweet volume fluctuations throughout the week. As you can see, there are four distinct peaks.

While we can’t list all the reasons for these spikes in volume, we did find a few recurring trends during these times, which we have added to the graph;

Let’s now take a more in-depth look at each peak.

## What were the causes of these fluctuations?

By adding the average hourly sentiment polarity to this graph we can start to gather a better understanding of how people felt while writing their tweets.

Not familiar with sentiment analysis? This is a feature of text analysis and natural language processing (NLP) that is used to detect positive or negative polarity in text. In short, it tells us whether a piece of text, or a tweet in this instance, has been written in a positive, negative or neutral way. Learn more.

Interestingly, each tweet volume peak correlates with a sharp drop in sentiment. What does this tell us? People were taking to Twitter to complain!

### Positivity overall

Overall, average sentiment remained in the positive (green) for the entire week. That dip into negative (red) that you can see came during the early hours of Day 2 as news of the US election result broke. Can’t blame the Web Summit for that one!

We can also see distinct rises in positive sentiment around the 5pm mark each day as attendees took to Twitter to reflect on an enjoyable day.

Sentiment also remained comparatively high during the later hours of each day as the Web Summit turned to Night Summit – we’ll look at this in more detail later in the post.

Mike, Afshin, Noel & Hamed after a hectic but enjoyable day at the Web Summit

## What was the overall sentiment of the tweets?

The pie chart below shows the breakdown of all 80,000 tweets, split by positive, negative and neutral sentiment.

The majority of tweets (80%) were written in a neutral manner. 14% were written with positive sentiment, with the remaining 6% written negatively.

To uncover the reasons behind both the positive and negative tweets, we extracted and analyzed mentioned keywords to see if we could spot any trends.

## What were the most common keywords found in positive tweets?

We used our Entity and Concept Extraction features to uncover keywords, phrases, people and companies that were mentioned most in both positive and negative tweets.

As you can imagine, there were quite a few keywords extracted from 80,000 tweets, so we trimmed the list down by taking the following steps (sketched in code after this list);

• Sort by mention count
• Take the top 100 most mentioned keywords
• Remove obvious or unhelpful keywords (Web Summit, Lisbon, Tech, etc)
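Here is a minimal pandas sketch of those trimming steps; the DataFrame contents, column names and blacklist are illustrative stand-ins for the real extraction output:

```python
import pandas as pd

# One row per extracted keyword with its mention count (toy data).
keywords = pd.DataFrame({
    "value": ["great", "WiFi", "Web Summit", "Lisbon", "amazing"],
    "count": [120, 95, 400, 310, 80],
})
blacklist = {"Web Summit", "Lisbon", "Tech"}   # obvious/unhelpful terms

top = (keywords.sort_values("count", ascending=False)  # 1. sort by count
               .head(100)                              # 2. top 100
               .loc[lambda df: ~df["value"].isin(blacklist)])  # 3. filter
print(top)
```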

And here are our results.

We can see some very positive phrases here, with great, amazing, awesome, good, love and nice featuring prominently.

The most mentioned speaker in positive tweets was Gary Vaynerchuk (@garyvee), which makes sense given the sharp rise in positive sentiment from his fans that we noted earlier on our sentiment-over-time graph.

## What were the most common keywords found in negative tweets?

We took the exact same approach to generate a list of the most mentioned keywords from tweets with negative sentiment;

For those of you who attended Web Summit, it will probably come as no surprise to see WiFi at the forefront of the negativity. While it did function throughout the event, many attendees found it unreliable and too slow, leading many to use their own data by hotspotting from their cell phones.

Mentions of queue, long, full, lines and stage are key indicators of just how upset people became while queueing for the opening ceremony at the main stage, only for many to be turned away because the venue became full.

The most mentioned speaker from negative tweets was Dave McClure (@davemcclure). The 500 Startups Founder found himself in the news after sharing his views on the US election result with an explosive on-stage outburst. It should be noted that just because Dave was the most mentioned speaker from all negative tweets, it doesn’t necessarily mean people were being negative towards him. In fact, many took to Twitter to support him;

Much of the negativity came from people simply quoting what Dave had said on stage, which naturally contained high levels of negative sentiment;

## Which speakers were mentioned most?

Web Summit 2016 delivered a star-studded lineup of 663 speakers. What we wanted to know was: who was mentioned most on Twitter?

By combining mentions of names and Twitter handles, we generated and sorted a list of the top 25 most mentioned speakers.

Messrs Vaynerchuk and McClure once again appear prominently, with the former being the most mentioned speaker overall throughout the week. Joseph Gordon-Levitt, actor and founder of HitRECord, came in second place, followed by Web Summit founder Paddy Cosgrave.

## Which airline flew to Lisbon with the happiest customers?

With attendees visiting Lisbon from 166 countries, we thought it would be cool to see which airline brought in the happiest customers. By extracting mentions of the airlines that fly in to Lisbon, we could then analyze the sentiment of the tweets in which they were mentioned.

For most airlines, there simply wasn’t enough data available to analyze. However, we did find enough mentions of Ryanair and British Airways to be able to analyze and compare.

Here’s what we found;

### Ryanair vs. British Airways

The graph below is split into three levels of sentiment – positive, neutral and negative. Ryanair is represented in blue and British Airways in red.

It’s really not hard to pick a winner here. British Airways were not only mentioned in more positive tweets, they were also mentioned in considerably fewer negative tweets.

## Night Summit: which night saw the highest tweet volumes?

In total we found 593 mentions of Night Summit. The graph below shows tweet volumes for each day, and as you can see, November 7 was the clear winner in terms of volume.

### …and which morning saw the most hangovers?!

Interestingly, we found a correlation between low tweet volumes (mentioning Night Summit, #nightsummit, etc.) and higher mentions of hangovers the following day!

59% of tweets mentioning hangover, hungover, resaca, etc, came on November 10 – the day after the lowest tweet volume day.

35% came on November 9 while just 6% came on November 8 – the day after the highest tweet volume day.

What do these stats tell us? Well, while we can’t be certain, we’re guessing that the more people partied, the less they tweeted. Probably a good idea 🙂

## Conclusion

In today’s world, if someone wants to express an opinion on an event, brand, product, service, or anything really, they will more than likely do so on social media. There is a wealth of information published through user-generated content that can be accessed in near real-time using Text Analysis and Text Mining solutions and techniques.

Wanna try it for yourself? Click the image below to sign up to our Text Analysis API with 1,000 free calls per day.

## Intro

In recent months, we have been bolstering our sentiment analysis capabilities, thanks to some fantastic research and work from our team of scientists and engineers.

Today we’re delighted to introduce you to our latest feature, Sentence-Level Sentiment Analysis.

New to Sentiment Analysis? No problem. Let’s quickly get you up to speed;

## What is Sentiment Analysis?

Sentiment Analysis is used to detect positive or negative polarity in text. Also known as opinion mining, it is an area of text analysis and natural language processing (NLP) that keeps growing in popularity as a multitude of use-cases emerge. Here are a few examples of questions that sentiment analysis can help answer in various industries;

• Brands – are people speaking positively or negatively when they mention my brand on social media?
• Hospitality – what percentage of online reviews for my hotel/restaurant are positive/negative?
• Finance – are there negative trends developing around my investments, partners or clients?
• Politics – which candidate is receiving more positive media coverage in the past week?

We could go on and on with an endless list of examples but we’re sure you get the gist of it. Sentiment Analysis can help you understand the split in opinion from almost any body of text, website or document – an ideal way to uncover the true voice of the customer.

## Types of Sentiment Analysis

Depending on your specific use-case and needs, we offer a range of sentiment analysis options;

### Document Level Sentiment Analysis

Document level sentiment analysis looks at and analyzes a piece of text as a whole, providing an overall sentiment polarity for a body of text.

For example, this camera review;

Want to test your own text or URLs? Check out our live demo.

### Aspect-Based Sentiment Analysis (ABSA)

ABSA starts by locating sentences that relate to industry-specific aspects and then analyzes sentiment towards each individual aspect. For example, a hotel review may touch on comfort, staff, food, location, etc. ABSA can be used to uncover sentiment polarity for each aspect separately.

Here’s an example of results obtained from a hotel review we found online;

Note how each aspect is automatically extracted and then given a sentiment polarity score.

### Sentence-Level Sentiment Analysis (SLSA)

Our latest feature breaks down a body of text into sentences and analyzes each sentence individually, providing sentiment polarity for each.

### SLSA in action

Sentence-Level Sentiment Analysis is available in our Google Sheets Add-on and also through the ABSA endpoint in our Text Analysis API. Here’s a sample query to try with the Text Analysis API;
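Below is a minimal Python sketch of such a call. The endpoint path (here the hotels ABSA domain), header names and payload are assumptions based on the Text Analysis API documentation, so treat it as illustrative;

```python
import requests

# Hypothetical ABSA request for the "hotels" domain.
headers = {
    "X-AYLIEN-TextAPI-Application-ID": "YOUR_APP_ID",
    "X-AYLIEN-TextAPI-Application-Key": "YOUR_APP_KEY",
}
data = {"text": "The room was spotless, but the staff were quite rude."}
resp = requests.post("https://api.aylien.com/api/v1/absa/hotels",
                     headers=headers, data=data)
print(resp.json())  # per-aspect and per-sentence sentiment
```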

Now let’s take a look at it in action in the Sheets Add-on.

#### Analyze text

We imported some hotel reviews into Google Sheets and then ran an analysis using our Text Analysis Add-on. Below you will see the full review in column A, and then each sentence in a column of its own with a corresponding sentiment polarity (positive, negative or neutral), as well as a confidence score. This score reflects how confident we are that the sentiment is correct, with 1.0 representing complete confidence.

#### Analyze URLs

This new feature also enables you to analyze volumes of URLs: it first scrapes the main text content from each web page and then runs SLSA on each sentence individually.

In the GIF below, you can see how the content from a URL on Business Insider is first broken down into individual sentences and then assigned a positive, negative or neutral sentiment at sentence level, thus providing a granular insight into the sentiment of an article.

## What’s the benefit of SLSA?

As we touched on earlier, sentiment analysis in general has a wide range of potential use-cases and benefits. However, Document-Level Sentiment Analysis can often miss granular details in text, since it provides only an overall sentiment score.

Sentence-Level Sentiment Analysis allows you to perform a more in-depth analysis of text by uncovering the positive, neutral and negatively written sentences to find the root causes of the overall document-level polarity. It can assist you in locating instances of strong opinion in a body of text, providing greater insight into the true thoughts and feelings of the author.

SLSA can also be used to analyze and summarize a collection of online reviews by extracting all the individual sentences within them that are written with either positive or negative sentiment.

Our Text Analysis Add-on for Google Sheets has been developed to help people with little or no programming knowledge take advantage of our Text Analysis capabilities. If you are in any way familiar with Google Sheets or MS Excel, you will be up and running in no time. We’ll even give you 1,000 free credits to play around with, or you can click the image below to get started for free with our Text Analysis API.

## Intro

The 2016 US Presidential election was one of (if not the) most controversial in the nation’s history. With the end prize being arguably the most powerful job in the world, the two candidates were always going to find themselves coming under intense media scrutiny. With more media outlets covering this election than any that have come before it, an increase in media attention and influence was a given.

But how much of an influence does the media really have on an election? Does journalistic bias sway voter opinion, or does voter opinion (such as poll results) generate journalistic bias? Does the old adage “all publicity is good publicity” ring true at election time?

“My sense is that what we have here is a feedback loop. Does media attention increase a candidate’s standing in the polls? Yes. Does a candidate’s standing in the polls increase media attention? Also yes.” -Jonathan Stray @jonathanstray

Thanks to an ever-increasing volume of media content flooding the web, paired with advances in natural language processing and text analysis capabilities, we are in a position to delve deeper into these questions than ever before, and by analyzing the final sixty days of the 2016 US Presidential election, that’s exactly what we set out to do.

## So, where did we start?

We started by building a very simple search using our News API to scan thousands of monitored news sources for articles related to the election. These articles, 170,000 in total, were then indexed automatically using our text analysis capabilities in the News API.

This meant that key data points in those articles were identified and indexed to be used for further analysis:

• Keywords
• Entities
• Concepts
• Topics

With each of the articles or stories sourced comes granular metadata such as publication time, publication source, source location, journalist name and sentiment polarity of each article. Combined, these data points provided us with an opportunity to uncover and analyze trends in news stories relating to the two presidential candidates.

We started with a simple count of how many times each candidate was mentioned from our news sources in the sixty days leading up to election day, as well as the keywords that were mentioned most.

## Keywords

By extracting keywords from the news stories we sourced, we get a picture of the key players, topics, organizations and locations that were mentioned most. We generated the interactive chart below using the following steps;

1. We called the News API using the query below.
2. We called it again, but searched for “Trump NOT Clinton”.
3. Mentions of the two candidates naturally dominated both sets of results, so we removed them in order to get a better understanding of the keywords used in articles written about them. We also removed some very obvious and/or repetitive words such as USA, America, White House, candidate, day, etc.

Here’s the query:
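For illustration, here is a minimal sketch of what the Clinton-side call could look like in Python. The endpoint URL, header names and the `keywords` field name are assumptions on our part, so check the News API documentation for exact details; swapping the title filter to “Trump NOT Clinton” gives the second query.

```python
import requests
from collections import Counter

# Assumed endpoint and header names; see the News API docs for specifics.
API = "https://api.newsapi.aylien.com/api/v1/stories"
HEADERS = {"X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
           "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY"}

params = {
    "title": "Clinton NOT Trump",                  # boolean operators in the title search
    "published_at.start": "2016-09-09T00:00:00Z",  # sixty days before election day
    "published_at.end": "2016-11-08T00:00:00Z",
    "language[]": "en",
    "per_page": 100,
}
stories = requests.get(API, params=params, headers=HEADERS).json()["stories"]

# Step 3 from the list above: drop the candidates and obvious noise words.
STOPLIST = {"clinton", "hillary", "trump", "donald", "usa", "america",
            "white house", "candidate", "day"}
keywords = Counter(k.lower()
                   for story in stories
                   for k in story.get("keywords", [])   # assumed field name
                   if k.lower() not in STOPLIST)
print(keywords.most_common(20))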

#### Most mentioned keywords in articles about Hillary Clinton

Straight away, bang in the middle of these keywords, we can see FBI and right beside it, emails.

#### Most mentioned keywords in articles about Donald Trump

As with Hillary, Trump’s main controversies feature prominently in his keywords, with terms like women, video, sexual and assault all appearing near the top.

## Most media mentions

If this election was decided by the number of times a candidate was mentioned in the media, who would win? We used the following search queries to total the number of mentions from all sources over the sixty days immediately prior to election day:

Note: We could also have performed this search with a single query, but we wanted to separate the candidates for further analysis, and in doing this, we removed overlapping stories with titles that mentioned both candidates.

Here’s what we found, visualized:

#### Who was mentioned more in the media? Total mentions volume:

It may come as no surprise that Trump was mentioned considerably more than Clinton during this period, but was he consistently more prominent in the news over these sixty days, or was there perhaps a major story that has skewed the overall results? By using the Time Series endpoint, we can graph the volume of stories over time.
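As a sketch, a Time Series call per candidate might look like the following; the endpoint URL, header names and the `period` value are assumptions based on how the endpoint is described here:

```python
import requests

# Assumed endpoint, headers and period value; check the News API docs.
API = "https://api.newsapi.aylien.com/api/v1/time_series"
HEADERS = {"X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
           "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY"}

def daily_mentions(title_query):
    """Return {date: story count}, one bucket per day."""
    params = {"title": title_query,
              "published_at.start": "2016-09-09T00:00:00Z",
              "published_at.end": "2016-11-08T00:00:00Z",
              "period": "+1DAY"}
    resp = requests.get(API, params=params, headers=HEADERS).json()
    return {point["published_at"]: point["count"]
            for point in resp["time_series"]}

trump = daily_mentions("Trump NOT Clinton")
clinton = daily_mentions("Clinton NOT Trump")

# On how many of the sixty days did Trump out-mention Clinton?
days = sum(trump[d] > clinton.get(d, 0) for d in trump)
print(f"Trump ahead on {days} of {len(trump)} days")
```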

We generated the following chart using results from the two previous queries:

#### How media mentions for both candidates fluctuated in the final 60 days

As you would expect, the volume of mentions for each candidate fluctuates throughout the sixty day period, and to answer our previous question – yes, Donald Trump was consistently more prominent in terms of media mentions throughout this period. In fact, he was mentioned more than Hillary Clinton in 55 of the 60 days.

Let’s now take a look at some of the peak mention periods for each candidate to see if we can uncover the reasons for the spikes in media attention:

### Donald Trump

Trump’s peak period of media attention was October 10-13, as indicated by the highest red peak in the graph above. This period represented the four highest individual days of mention volume and can be attributed to the scandal that arose from sexual assault accusations and a leaked tape showing Trump making controversial comments about groping women.

The second highest peak, October 17-20, coincides with a more positive period for Trump, as a combination of a strong final presidential debate and a growing email scandal surrounding Hillary Clinton increased his media spotlight.

### Hillary Clinton

Excluding the sharp rise in mentions just before election day, Hillary’s highest volume days in terms of media mentions occurred from October 27-30 as news of the re-emergence of an FBI investigation surfaced.

So we’ve established the dates over the sixty days when each candidate was at their peak of media attention. Now we want to try to establish the sentiment polarity of the stories that were being written about each candidate throughout this period. In other words, we want to know whether stories were being written in a positive, negative or neutral way. To achieve this, we performed Sentiment Analysis.

## Sentiment analysis

Sentiment Analysis is used to detect positive or negative polarity in text. Also known as opinion mining, sentiment analysis is a feature of text analysis and natural language processing (NLP) research that is growing in popularity as a multitude of use-cases emerge. Put simply, we perform Sentiment Analysis to uncover whether a piece of text is written in a positive, negative or neutral manner.

Note: The vast majority of news articles about the election will undoubtedly contain mentions of both Trump and Clinton. We therefore decided to only count stories with titles that mentioned just one candidate. We believe this significantly increases the likelihood that the article was written about that candidate. To achieve this, we generated search queries that included one candidate while excluding the other. The News API supports boolean operators, making such search queries possible.

First of all, we wanted to compare the overall sentiment of all stories with titles that mentioned just one candidate. Here are the two queries we used:
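As a sketch, the comparison can be approximated by running one Time Series call per polarity class and summing the daily buckets. The `sentiment.title.polarity` parameter name is an assumption on our part, as are the endpoint URL and headers:

```python
import requests

# Assumed endpoint, headers and sentiment filter name; check the docs.
API = "https://api.newsapi.aylien.com/api/v1/time_series"
HEADERS = {"X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
           "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY"}

def polarity_breakdown(title_query):
    """Percentage of stories per polarity class over the sixty days."""
    totals = {}
    for polarity in ("positive", "neutral", "negative"):
        params = {"title": title_query,
                  "sentiment.title.polarity": polarity,
                  "published_at.start": "2016-09-09T00:00:00Z",
                  "published_at.end": "2016-11-08T00:00:00Z",
                  "period": "+1DAY"}
        resp = requests.get(API, params=params, headers=HEADERS).json()
        totals[polarity] = sum(p["count"] for p in resp["time_series"])
    all_stories = sum(totals.values())
    return {k: round(100 * v / all_stories, 1) for k, v in totals.items()}

for query in ("Clinton NOT Trump", "Trump NOT Clinton"):
    print(query, polarity_breakdown(query))
```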

And here are the visualized results:

What am I seeing here? Blue represents articles written in a neutral manner, red in a negative manner and green in a positive manner. Again, you can hover over the graph to view more information.

#### What was the overall media sentiment towards Donald Trump?

Those of you that followed the election, to any degree, will probably not be surprised by these results. We don’t really need data to back up the claim that Trump ran the more controversial campaign and therefore generated more negative press.

Similar to how we previously graphed mention volumes over time, we also wanted to see how media sentiment fluctuated throughout this sixty-day period. First we’ll look at Clinton’s mention volume and see if there is any correlation between mention volume and sentiment levels.
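For readers who want to reproduce something like this, here is one way to approximate a daily average polarity curve from polarity-filtered time series. The scoring rule, (positive − negative) / total, is our own simplification and not necessarily how the charts below were computed; parameter names remain assumptions as in the earlier sketches:

```python
import requests

# Assumed endpoint, headers and sentiment filter name, as before.
API = "https://api.newsapi.aylien.com/api/v1/time_series"
HEADERS = {"X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
           "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY"}

def daily_counts(title_query, polarity):
    params = {"title": title_query,
              "sentiment.title.polarity": polarity,
              "published_at.start": "2016-09-09T00:00:00Z",
              "published_at.end": "2016-11-08T00:00:00Z",
              "period": "+1DAY"}
    resp = requests.get(API, params=params, headers=HEADERS).json()
    return {p["published_at"]: p["count"] for p in resp["time_series"]}

pos = daily_counts("Clinton NOT Trump", "positive")
neu = daily_counts("Clinton NOT Trump", "neutral")
neg = daily_counts("Clinton NOT Trump", "negative")

# Score each day in [-1, 1]: above zero reads as net-positive coverage.
for day in sorted(pos):
    total = pos[day] + neu.get(day, 0) + neg.get(day, 0)
    avg = (pos[day] - neg.get(day, 0)) / total if total else 0.0
    print(day, round(avg, 3))
```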

## Hillary Clinton

How to read this graph: The top half (blue) represents fluctuations in the number of daily media mentions (‘000’s) for Hillary Clinton. The bottom half represents fluctuations in the average sentiment polarity of the stories in which she was mentioned. Green = positive and red = negative.

You can hover your cursor over the data points to view more in-depth information.

#### Mentions Volume (top) vs. Sentiment (bottom) for Hillary Clinton

From looking at this graph, one thing becomes immediately clear; as volume increases, polarity decreases, and vice versa. What does this tell us? It tells us that perhaps Hillary was in the news for the wrong reasons too often – there were very few occasions when both volume and polarity increased simultaneously.

Hillary’s average sentiment remained positive for the majority of this period. However, that sharp dip into the red circa October 30 came just a week before election day. We must also point out the black line that cuts through the bottom half of the graph. This is a trend line representing average sentiment polarity and as you can see, it gets consistently closer to negative as election day approaches.

#### Mentions Volume (top) vs. Sentiment (bottom) for Donald Trump

Trump’s graph paints a different picture altogether. There was not a single day when his average polarity entered into the positive (green). What’s interesting to note here, however, is how little his mention volumes affected his average polarity. While there are peaks and troughs, there were no major swings in either direction, particularly in comparison to those seen on Hillary’s graph.

These results are of course open to interpretation, but what is becoming evident is that perhaps negative stories in the media did more damage to Clinton’s campaign than they did to Trump’s. While Clinton’s average sentiment polarity remained consistently more positive, Trump’s didn’t appear to be as badly affected when controversial stories emerged. He was consistently controversial!

Trump’s lowest point, in terms of negative press, came just after the first presidential debate at the end of September. What came after this point is the crucial detail, however. Trump’s average polarity recovered and mostly improved for the remainder of the campaign. Perhaps critically, we see his highest and most positive averages of this period in the final three weeks leading up to election day.

## Sentiment from sources

At the beginning of this post we mentioned the term media bias and questioned its effect on voter opinion. While we may not be able to prove this effect, we can certainly uncover any traces of bias from media content.

What we would like to uncover is whether certain sources (i.e. publications) write more or less favorably about either candidate.

To test this, we’ve analyzed the sentiment of articles written about both candidates from two publications: USA Today and Fox News.

### USA Today

Query:
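As a sketch, restricting the earlier sentiment queries to a single publication might look like the following; the `source.name` parameter is an assumption about how source filtering is exposed, as are the endpoint URL and headers:

```python
import requests

# Assumed endpoint, headers and source filter name; check the docs.
API = "https://api.newsapi.aylien.com/api/v1/stories"
HEADERS = {"X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
           "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY"}

params = {
    "title": "Clinton NOT Trump",
    "source.name": "USA Today",              # restrict to one publication
    "sentiment.title.polarity": "negative",  # repeat per polarity class
    "published_at.start": "2016-09-09T00:00:00Z",
    "published_at.end": "2016-11-08T00:00:00Z",
    "per_page": 10,
}
for story in requests.get(API, params=params, headers=HEADERS).json()["stories"]:
    print(story["title"])
```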

Similar to the overall sentiment (from all sources) displayed previously, the sentiment polarity of articles from USA Today shows consistently higher levels of negative sentiment towards Donald Trump. The larger than average percentage of neutral results indicates that USA Today took a more objective approach in their coverage of the election.

### Fox News

Again, Trump dominates in relation to negative sentiment from Fox News. However, what’s interesting to note here is that, percentage-wise, Fox produced more than double the negative story titles about Hillary Clinton that USA Today did. We also found that they produced half as many positive stories about her. Also, 3.9% of Fox’s Trump coverage was positive, versus USA Today’s 2.5%.

### Media bias?

These figures raise the question: how are two major news publications writing about the exact same news with such varied levels of sentiment? It certainly highlights the potential influence that the media can have on voter opinion, especially when you consider how many people see each article or headline. The figures below represent social shares for a single news article:

Bear in mind, these figures don’t represent the number of people who saw the article; they represent the number of people who shared it. The actual number of people who saw it in their social feeds will be a high multiple of these figures. In fact, we grabbed the average daily social shares, per story, and graphed them to compare:
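As a rough sketch, such per-story averages could be computed from the share counts attached to each story. The shape of the `social_shares_count` field below is an assumption modelled on the share-count filters the News API exposes; endpoint and headers remain assumptions too:

```python
import requests

# Assumed endpoint, headers and per-story share-count field layout.
API = "https://api.newsapi.aylien.com/api/v1/stories"
HEADERS = {"X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
           "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY"}

def avg_facebook_shares(title_query):
    params = {"title": title_query,
              "published_at.start": "2016-09-09T00:00:00Z",
              "published_at.end": "2016-11-08T00:00:00Z",
              "per_page": 100}
    stories = requests.get(API, params=params, headers=HEADERS).json()["stories"]
    counts = []
    for story in stories:
        # Assumed shape: a list of {"count": ..., "fetched_at": ...} snapshots.
        snapshots = story.get("social_shares_count", {}).get("facebook", [])
        counts.append(snapshots[-1]["count"] if snapshots else 0)
    return sum(counts) / len(counts) if counts else 0.0

for query in ("Clinton NOT Trump", "Trump NOT Clinton"):
    print(query, avg_facebook_shares(query))
```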

#### Average social shares per story

Pretty even, and despite Trump being mentioned over twice as many times as Clinton during this sixty-day period, he certainly didn’t outperform her when it came to social shares.

## Conclusion

Since the 2016 US election was decided there has been a sharp focus on the role played by news and media outlets in influencing public opinion. While we’re not here to join the debate, we are here to show you how you can deep-dive into news content at scale to uncover some fascinating and useful insights that can help you source highly targeted and precise content, uncover trends and assist in decision making.

Here at AYLIEN we have a team of researchers who like to keep abreast of, and regularly contribute to, the latest developments in the field of Natural Language Processing. Recently, one of our research scientists, Sebastian Ruder, attended EMNLP 2016 in Austin, Texas. In this post, Sebastian has highlighted some of the stand-out papers and trends from the conference.

Image: Jackie Cheung

I spent the past week in Austin, Texas at EMNLP 2016, the Conference on Empirical Methods in Natural Language Processing.

There were a lot of papers at the conference (179 long papers, 87 short papers, and 9 TACL papers in all), too many to read every single one. The entire program can be found here. In the following, I will highlight some trends and papers that caught my eye:

#### Reinforcement learning

One thing that stood out was that RL seems to be slowly finding its footing in NLP, with more and more people using it to solve complex problems.

#### Dialogue

Dialogue was a focus of the conference, with all three keynote speakers dealing with different aspects of it: Christopher Potts talked about pragmatics and how to reason about the intentions of the conversation partner; Stefanie Tellex concentrated on how to use dialogue for human-robot collaboration; finally, Andreas Stolcke focused on the problem of addressee detection in his talk.

Among the papers, a few that dealt with dialogue stood out:

• Andreas and Klein model pragmatics in dialogue with neural speakers and listeners;
• Liu et al. show how not to evaluate your dialogue system;
• Ouchi and Tsuboi select addressees and responses in multi-party conversations;
• Wen et al. study diverse architectures for dialogue modelling.

#### Sequence-to-sequence

Seq2seq models were again front and center. It is not common for a method to have its own session two years after its introduction (Sutskever et al., 2014). While many papers in past years employed seq2seq, e.g. for Neural Machine Translation, some papers this year focused on improving the framework itself.

#### Semantic parsing

While seq2seq’s use for dialogue modelling was popularised by Vinyals and Le, it is harder to get seq2seq to work on goal-oriented tasks that require an intermediate representation on which to act. Semantic parsing is used to convert a message into a more meaningful representation that can be used by another component of the system. As this technique is useful for sophisticated dialogue systems, it is great to see progress in this area.

#### X-to-text (or natural language generation)

While mapping from text to text with the seq2seq paradigm is still prevalent, EMNLP featured some cool papers on natural language generation from other inputs.

#### Parsing

Parsing and syntax are a mainstay of every NLP conference, and the community seems to particularly appreciate innovative models that push the state-of-the-art in parsing: the ACL ’16 outstanding paper by Andor et al. introduced a globally normalized model for parsing, while the best EMNLP ’16 paper by Lee et al. combines a global parsing model with a local search over subtrees.

#### Word embeddings

There were still papers on word embeddings, but it felt less overwhelming than at the past EMNLP or ACL, with most methods trying to fix a particular flaw rather than training embeddings for embeddings’ sake. Pilehvar and Collier de-conflate senses in word embeddings, while Wieting et al. achieve state-of-the-art results for character-based embeddings.

#### Sentiment analysis

Sentiment analysis has been popular in recent years (as attested by the introductions of many recent papers on sentiment analysis). Sadly, many of the conference papers on sentiment analysis reduce to leveraging the latest deep neural network for the task to beat the previous state-of-the-art without providing additional insights. There are, however, some that break the mold: Teng et al. find an effective way to incorporate sentiment lexicons into a neural network, while Hu et al. incorporate structured knowledge into their sentiment analysis model.

#### Deep Learning

By now, it is clear to everyone: Deep Learning is here to stay. In fact, deep learning and neural networks claimed the two top spots of keywords that were used to describe the submitted papers. The majority of papers used at least an LSTM; using no neural network seems almost contrarian now and is something that needs to be justified. However, there are still many things that need to be improved — which leads us to…

#### Uphill Battles

While making incremental progress is important to secure grants and publish papers, we should not lose track of the long-term goals. In this spirit, one of the best workshops that I’ve attended was the Uphill Battles in Language Processing workshop, which featured 12 talks and not one, but four all-star panels on text understanding, natural language generation, dialogue and speech, and grounded language. Summaries of the panel discussions should be available soon at the workshop website.

This was my brief review of some of the trends of EMNLP 2016. I hope it was helpful.

With our News API, our goal is to make the world’s news content easier to query, just like a database. Additionally, we leverage Machine Learning to process, normalize and analyze this content, giving our users access to rich, high-quality metadata and powerful filtering capabilities that ultimately help you find the needle in the haystack.

To this end, we have just launched two new handy features for filtering stories based on their image metadata and setting range queries for social media share counts. You can read more about these two features – which are now also available in our News API SDKs – below.

The news content published online is increasingly multimodal, to the point that it is rare to find an article or a blog post that doesn’t include an image or a video. Our News API stats show that 83% of all the articles in our index contain at least one image.

Therefore, it is important to be able to search and filter stories not just based on their textual content, but also based on their images.

To facilitate this, we now analyze each image extracted from each news article to capture its size (width and height), format and content length. Additionally, we have introduced 7 new parameters for filtering stories based on these attributes:

• media.images.width.min: minimum image width (in pixels)
• media.images.width.max: maximum image width (in pixels)
• media.images.height.min: minimum image height (in pixels)
• media.images.height.max: maximum image height (in pixels)
• media.images.content_length.min: minimum image content size (in bytes)
• media.images.content_length.max: maximum image content size (in bytes)
• media.images.format[]: image format (possible values are: JPEG, PNG, GIF, SVG, ICO, TIFF, CUR, WEBP and BMP).

As an example, let’s use these parameters to retrieve stories about Golf that have an image in JPEG or PNG format that is bigger than 80kb in size:
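A minimal sketch of that request in Python might look like this. The `media.images.*` parameters are exactly those introduced above; the endpoint URL and header names remain assumptions:

```python
import requests

# Assumed endpoint and headers; the image parameters are from this post.
API = "https://api.newsapi.aylien.com/api/v1/stories"
HEADERS = {"X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
           "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY"}

params = {
    "text": "Golf",
    "media.images.format[]": ["JPEG", "PNG"],      # sent as a repeated parameter
    "media.images.content_length.min": 80 * 1024,  # 80kb, in bytes
    "per_page": 5,
}
for story in requests.get(API, params=params, headers=HEADERS).json()["stories"]:
    print(story["title"])
```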

#### Result

Here’s an image returned from the search query above:

## Social range filters

One of the most popular features of our News API is its ability to sort stories based on how many times they have been shared on social media. However, if you use this to retrieve popular stories over a long period of time, you will sometimes notice that a few highly popular stories (those that have been shared hundreds of thousands of times) come out on top, preventing you from easily accessing the long tail of interesting and popular stories.

To address this, we have introduced the following 8 new parameters, which allow you to set range (i.e. minimum and maximum) filters on social media share counts:

• social_shares_count.facebook.min: minimum number of Facebook shares
• social_shares_count.facebook.max: maximum number of Facebook shares
• social_shares_count.google_plus.min: minimum number of Google+ shares
• social_shares_count.google_plus.max: maximum number of Google+ shares
• social_shares_count.linkedin.min: minimum number of LinkedIn shares
• social_shares_count.linkedin.max: maximum number of LinkedIn shares
• social_shares_count.reddit.min: minimum number of Reddit shares
• social_shares_count.reddit.max: maximum number of Reddit shares

To retrieve all stories that mention Donald Trump, and have been shared between 50 and 500 times on Facebook, we can use the following query:
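A minimal sketch, assuming the Facebook parameters mirror the Reddit ones listed above and using the same assumed endpoint and headers as in the earlier examples:

```python
import requests

# Assumed endpoint and headers; the range parameters follow this post's naming.
API = "https://api.newsapi.aylien.com/api/v1/stories"
HEADERS = {"X-AYLIEN-NewsAPI-Application-ID": "YOUR_APP_ID",
           "X-AYLIEN-NewsAPI-Application-Key": "YOUR_APP_KEY"}

params = {
    "text": "Donald Trump",
    "social_shares_count.facebook.min": 50,
    "social_shares_count.facebook.max": 500,
    "per_page": 10,
}
for story in requests.get(API, params=params, headers=HEADERS).json()["stories"]:
    print(story["title"])
```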

These filters are now available across all our News API SDKs. We hope that you find these new updates useful, and we would love to hear any feedback you may have.

### PhD & MASTERS APPLICATIONS ARE NOW CLOSED

However, we are always keen to speak with potential candidates for various roles here at AYLIEN. If you’re interested in joining the team, we would love to hear from you. Please email your CV to jobs@aylien.com.

At AYLIEN we are using recent advances in Artificial Intelligence to try to understand natural language. Part of what we do is building products such as our Text Analysis API and News API to help people extract meaning and insight from text. We are also a research lab, conducting research that we believe will make valuable contributions to the field of Artificial Intelligence, as well as driving further product development (see this post about a recent publication on aspect-based sentiment analysis by one of our research scientists for example).

We are excited to announce that we are currently accepting applications from students and researchers for funded PhD and Masters opportunities, as part of the Irish Research Council Employment Based Programme.

The Employment Based Programme (EBP) enables students to complete their PhD or Masters degree while working with us here at AYLIEN.

For students and researchers, we feel that this is a great opportunity to work in industry with a team of talented scientists and engineers, and with the resources and infrastructure to support your work.

We’re an award-winning, VC-backed text analysis company specialising in cutting-edge AI, deep learning and natural language processing research. We offer developers and solution builders a package of APIs that bring intelligent analysis to a wide range of apps and processes, helping them make sense of large volumes of unstructured data and content.

With thousands of users worldwide and a growing customer base that includes great companies such as Sony, Complex Media, Getty Images, and McKinsey, we’re growing fast and enjoy working as part of a diverse and super smart team here at our office in Dublin, Ireland.

You can learn more about AYLIEN, who we are and what we do, by checking out our blog and two of our core offerings – our Text Analysis API and News API.

## About the IRC Employment Based Programme

The Irish Research Council’s Employment Based Programme (EBP) is a unique national initiative, providing students with an opportunity to work in a co-educational environment involving a higher education institution and an employment partner.

The EBP provides a co-educational opportunity for researchers, as they will be employed directly by AYLIEN while also being full-time students working on their research degree. One of the key benefits of such an arrangement is that you will be given a chance to see your academic outputs being transferred into a practical setting. This immersive aspect of the programme will enable you to work with some really bright minds who can help you generate research ideas and bring benefits to your work that may otherwise not have come to light under a traditional academic Masters or PhD route.

### Funding

The Scholarship funding consists of €24,000pa towards salary and a maximum of €8,000pa for tuition, travel and equipment expenses. Depending on candidates’ level of seniority and expertise, the salary amount may be increased.

## Our experience with the EBP

AYLIEN is proud to host and work with two successful programme awardees under the EBP, Sebastian Ruder and Peiman Barnaghi. Both Sebastian and Peiman have been working under the supervision of Dr. John Breslin, who is an AYLIEN advisor and a lecturer at NUI Galway and the Insight Centre. We also have academic ties with University College Dublin (UCD) through Barry Smyth. Barry is a Full Professor and Digital Chair of Computer Science at UCD, and recently joined the team at AYLIEN as an advisor.

Back row, left to right: Peiman and Sebastian with Parsa Ghaffari, AYLIEN Founder & CEO

#### Sebastian Ruder

Throughout his research, Sebastian has developed language and domain-agnostic Deep Learning-based models for sentiment analysis and aspect-based sentiment analysis that have been published at conferences and are used in production. His main research focus is to develop efficient methods to enable models to learn from each other and to equip them with the capability to adapt to new domains and languages.

“The Employment Based Programme for me brings academia and industry together in the best possible way: It enables me to immerse myself and get to the bottom of hard problems; at the same time, I am able to collaborate with driven and inspiring individuals at AYLIEN. I find this immersion of research-oriented people like myself sitting next to people that are hands-on with diverse technical backgrounds very compelling. This stimulating and fast-paced working environment provides me with direction and focus for my research, while the ‘get stuff done’ mentality allows me to concentrate and accomplish meaningful things” – Sebastian Ruder, Research Scientist at AYLIEN

Here are some of Sebastian’s recent publications:

• INSIGHT-1 at SemEval-2016 Task 4: Convolutional Neural Networks for Sentiment Classification and Quantification (arXiv)
• INSIGHT-1 at SemEval-2016 Task 5: Deep Learning for Multilingual Aspect-based Sentiment Analysis (arXiv)
• A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis (arXiv)
• Towards a continuous modeling of natural language domains (arXiv)

#### Peiman Barnaghi

Peiman’s research, in collaboration with the Insight Centre for Data Analytics, NUI Galway, focuses on Scalable Topic-level Sentiment Analysis on Streaming Feeds. His main focus is applying Machine Learning and Deep Learning methods to Twitter data in order to detect polarity trends toward a topic across large sets of tweets and to determine the degree of polarity.

Here are some of Peiman’s recent publications:

• Opinion Mining and Sentiment Polarity on Twitter and Correlation between Events and Sentiment (link)
• Text Analysis and Sentiment Polarity on FIFA World Cup 2014 Tweets (PDF)

You can read more about our experience with the EBP in the Irish Research Council’s Annual Report (pages 29 & 31).

## Details & requirements

First and foremost, your thesis topic must be something you are passionate about. While prior experience with the topic is important, it is not crucial. We can work with you to establish a suitable topic that overlaps with both the supervisor’s general area of interest/research and our own research and product directions.

Suggested read: Survival Guide to a PhD by Andrej Karpathy

We are particularly interested in applicants with interests in the following areas (but are open to other suggestions):

• Representation Learning
• Domain Adaptation and Transfer Learning
• Sentiment Analysis
• Dialogue Systems
• Entity and Relation Extraction
• Topic Modeling
• Document Classification
• Taxonomy Inference
• Document Summarization
• Machine Translation

You have the option to complete a Masters (1 year, or 2 years if structured) or a PhD (3 years, or 4 years if structured) degree.

AYLIEN will co-fund your scholarship and provide you with professional guidance and mentoring throughout the programme. It is a prerequisite that you spend 50-70% of your time on site with us and the remainder at your higher education institution (HEI).

The programme is open to students with a bachelor’s degree or higher (worldwide), and you will ideally be based within a commutable distance of our office in Dublin City Centre.

### Supervision

It would be ideal if you have already identified or engaged with a potential supervisor at a university in Ireland. However, if not, we will help you with finding a suitable supervisor.

Please note: all times stated are Ireland time and are estimates based on last year’s programme. Full details will be released in December.

Call open: 6 December 2016

FAQ Deadline: 8 February 2017 (16:00)

Applicant Deadline: 15 February 2017 (16:00)

Supervisor, Employment Mentor and Referee Deadline: 22 February 2017 (16:00)

Research Office Endorsement Deadline: 1 March 2017 (16:00)

Outcome of Scheme: 26 May 2017

Scholarship Start Date: 1 October 2017