
Introduction

We are thrilled to announce the launch of the AYLIEN News API, our groundbreaking new service that makes it easier than ever for developers and solution builders to collect, index and understand content at scale in order to uncover trends, identify hot topics and find influencers.

 

More content was uploaded yesterday than any one human could ever consume in their entire life

– Condé Nast

 

Given the content explosion we’re experiencing on the web today, the need to aggregate and understand news and web content at scale, and as close to real-time as possible, is more important now than ever before. The News API will help you source and analyze specific, relevant and actionable content from blogs and news sources across the web. We’re monitoring the Internet 24/7 to provide a constant stream of content so you can keep your finger on the pulse with the most up-to-date news content and data within your applications and solutions.

 

The World’s news data, at your fingertips

The News API enables users to search, source and understand news content from across the web in real-time. By harnessing the power of Machine Learning and NLP-driven technology, users can stay ahead of the curve by collecting news content and extracting what is relevant and important to them.


You can use our News API to build intelligent content-driven apps and solutions by searching and filtering thousands of news sources, extracting the key data points and delivering valuable and actionable insights.

 

1. Search & Filter

We crawl and index thousands of news sources every day and analyze their content using our NLP-powered Text Analysis Engine to bring you an enriched and flexible news data source.

Our powerful search and filtering capabilities allow users to source and collect the news that matters most to them. Users can build their queries on a variety of data points, including:

– Entities (people, places, products, organizations, etc.)
– Writer Sentiment (positive, negative or neutral opinion)
– Topics
– Categories (industry-specific taxonomies)
– Time (down to minute level, and up to 60 days of historical data)
– Location
– Outlets (news sources and blogs)
– Authors (journalists and influencers)
– Language (English, Spanish, Portuguese, Italian, French and German – more to come)
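As a rough sketch of how several of these filters might be combined into a single request (the endpoint URL and parameter names below are illustrative assumptions, not the API’s documented schema):

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, for illustration only.
BASE_URL = "https://api.example.com/news/stories"

def build_query(entities=None, sentiment=None, language=None, days_back=None):
    """Compose several search filters into one query string."""
    params = {}
    if entities:
        params["entities"] = ",".join(entities)
    if sentiment:
        params["sentiment"] = sentiment          # positive | negative | neutral
    if language:
        params["language"] = language
    if days_back:
        params["published_at.start"] = f"NOW-{days_back}DAYS"
    return f"{BASE_URL}?{urlencode(params)}"

print(build_query(entities=["Tesla"], sentiment="positive",
                  language="en", days_back=30))
```

Each filter simply narrows the stream of stories returned, so they compose naturally into one query string.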

 

We currently monitor an ever-growing list of thousands of sources from across the world. Given the amount of noise out there, we are focusing on quality over quantity by providing access to high-quality, trusted sources.

 

2. Extract key data points

The AYLIEN News API goes beyond just sourcing news. We extract key data points from news content, generating an enriched, valuable and actionable data source that can be used to power intelligent news aggregators, content-driven apps and news dashboards. These data points include:

– Keywords
– Entities mentioned (people, places, products, organizations, etc.)
– Categories (according to industry-specific taxonomies)
– Sentiment Analysis of the writer’s opinion
– Language Detection (English, Spanish, Portuguese, Italian, French and German – with more to come)
– Automated article summaries
– Hashtags (automatically generated for each story)

This data is extracted in a matter of seconds from the time the article is published, giving you speedy access to the key data points in the world’s news content.
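To illustrate what working with these enrichments might look like, here is a sketch that pulls the key fields out of a hypothetical story object (the field names are assumptions for illustration; consult the API documentation for the real schema):

```python
# A trimmed, hypothetical enriched-story object; the field names are
# illustrative, not the API's exact schema.
story = {
    "title": "Electric car sales hit a new record",
    "summary": {"sentences": ["Sales rose sharply.", "Analysts expect growth."]},
    "keywords": ["electric cars", "sales", "record"],
    "entities": [{"text": "Tesla", "type": "Organization"}],
    "sentiment": {"body": {"polarity": "positive"}},
    "language": "en",
    "hashtags": ["#ElectricCars"],
}

def extract_data_points(story):
    """Pull the key enrichment fields out of one story object."""
    return {
        "keywords": story.get("keywords", []),
        "entities": [e["text"] for e in story.get("entities", [])],
        "polarity": story.get("sentiment", {}).get("body", {}).get("polarity"),
        "language": story.get("language"),
        "summary": " ".join(story.get("summary", {}).get("sentences", [])),
    }

points = extract_data_points(story)
print(points["entities"], points["polarity"])  # ['Tesla'] positive
```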

News API users can build complex search queries to query the news like they would a database, giving them tailored streams of news and content. Try it yourself by building some test queries here.

 

3. Deliver insights

Social media performance

We continuously monitor social media to measure the mention performance of each story, and profile this performance over time to give users an understanding of the increasing or decreasing popularity of the story.


Sentiment and category breakdown

We leverage the data points we extract from each and every story to help users answer questions like: what percentage of articles are talking about category X? Or what percentage of articles are positive, negative or neutral?


Volume over time

We provide historical data for the previous 60 days, enabling users to clearly see how many stories match a query in a given time window.


Word clouds

Our word cloud capabilities provide users with a snapshot of the most-used keywords or entities within a given time period.
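At its core, a word cloud is frequency counting. A minimal sketch, using made-up keyword lists rather than real API output:

```python
from collections import Counter

# Keyword lists as they might come back for a batch of stories.
stories_keywords = [
    ["election", "polls", "debate"],
    ["election", "turnout"],
    ["debate", "election", "candidates"],
]

# Flatten and count: the most common terms are what a word cloud
# visualizes, drawn largest.
counts = Counter(kw for story in stories_keywords for kw in story)
print(counts.most_common(2))  # [('election', 3), ('debate', 2)]
```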


Histograms

Informative histograms can easily be created to provide snapshots of a query, an author, an outlet or even a vertical.


Integration: As Developer-friendly as it gets

Integrating with our News API is simple. In addition to our extensive and interactive documentation, we’re providing code snippets for the most popular programming languages to help developers get up and running in no time. Results are provided in a well-structured JSON format.
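As a sketch of what consuming such a JSON response might look like (the payload below is invented for illustration; the real response schema may differ):

```python
import json

# A trimmed, invented JSON response body; real field names may differ.
response_body = """
{
  "stories": [
    {"title": "Markets rally",
     "source": {"name": "Example News"},
     "sentiment": {"body": {"polarity": "positive"}}}
  ]
}
"""

data = json.loads(response_body)
for story in data["stories"]:
    print(story["title"], "-", story["sentiment"]["body"]["polarity"])
```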


Pricing & Free Trial 

News API paid plans start from $49 per month, and because we charge on a pay-per-story basis, you only pay for what you use. So you are in complete control of your usage – no bill shock!

We are currently offering a 14-day trial. During your trial we will help you make the most of the API by providing access to our extensive interactive documentation and sending you helpful tips, sample code snippets and query inspiration. Check out our Pricing Calculator to get an estimate.

 









About AYLIEN

We are a Dublin, Ireland-based AI and Machine Learning company. We provide a range of content analysis solutions to developers, data scientists, marketers and academics. Our core offerings include packages of Information Retrieval, Machine Learning, Natural Language Processing and Image Recognition APIs that allow our users to make sense of human-generated content at scale.


General

 

Introduction

Deep Learning is a new area of Machine Learning research that has been gaining significant media interest owing to the role it is playing in artificial intelligence applications like image recognition, self-driving cars and most recently the AlphaGo vs. Lee Sedol matches. Recently, Deep Learning techniques have become popular in solving traditional Natural Language Processing problems like Sentiment Analysis.

For those of you who are new to the topic of Deep Learning, we have put together a list of ten common terms and concepts explained in simple English, which will hopefully make them a bit easier to understand. We’ve done the same in the past for Machine Learning and NLP terms, which you might also find interesting.

Perceptron

In the human brain, a neuron is a cell that processes and transmits information. A perceptron can be considered as a super-simplified version of a biological neuron.

A perceptron will take several inputs and weigh them up to produce a single output. Each input is weighted according to its importance in the output decision.
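The description above can be sketched in a few lines: a weighted sum of the inputs plus a bias, passed through a step function:

```python
def perceptron(inputs, weights, bias):
    """Weigh up several inputs and produce a single binary output."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# The second input is weighted more heavily, so it matters more
# to the output decision.
print(perceptron([1, 0], [0.3, 0.8], bias=-0.5))  # 0  (0.3 - 0.5 <= 0)
print(perceptron([1, 1], [0.3, 0.8], bias=-0.5))  # 1  (1.1 - 0.5 > 0)
```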

Artificial Neural Networks

Artificial Neural Networks (ANNs) are models influenced by biological neural networks such as the central nervous systems of living creatures and, most notably, the brain.

ANNs are processing devices, such as algorithms or physical hardware, and are loosely modeled on the cerebral cortex of mammals, albeit on a considerably smaller scale.

Let’s call them a simplified computational model of the human brain.

Backpropagation

A neural network learns by training, using an algorithm called backpropagation. To train a neural network, it is first given an input, which produces an output. The next step is to teach the network what the correct, or ideal, output should have been for that input. The ANN can then compare its actual output with this ideal output and adapt its weights (based on how much each contributed to the overall prediction) to yield a more precise output the next time it receives a similar input.

This process is repeated many, many times until the margin of error between the actual output and the ideal output is considered acceptable.
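A minimal sketch of this error-driven weight adjustment, using a single linear neuron rather than a full network (real backpropagation chains the same update rule through every layer):

```python
# One linear neuron learning the mapping y = 2x by repeatedly comparing
# its output with the ideal output and adjusting its weight.
w = 0.0
lr = 0.1                          # learning rate
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for _ in range(50):               # repeat many, many times
    for x, target in data:
        output = w * x            # forward pass
        error = output - target   # how far from the ideal output?
        w -= lr * error * x       # adjust the weight by its contribution

print(round(w, 3))                # converges to roughly 2.0
```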

Convolutional Neural Networks

A convolutional neural network (CNN) can be considered as a neural network that utilizes numerous identical replicas of the same neuron. The benefit of this is that it enables a network to learn a neuron once and use it in numerous places, simplifying the model learning process and thus reducing error. This has made CNNs particularly useful in the area of object recognition and image tagging.

CNNs learn more and more abstract representations of the input with each convolution. In the case of object recognition, a CNN might start with raw pixel data, then learn highly discriminative features such as edges, followed by basic shapes, complex shapes, patterns and textures.
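The “numerous identical replicas of the same neuron” idea is exactly a convolution: one small set of weights is slid across every position of the input. A minimal 1-D sketch:

```python
def conv1d(signal, kernel):
    """Slide one small 'neuron' (the kernel) across every position of the input."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A difference kernel responds wherever neighboring values change,
# i.e. it detects "edges" in the input.
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # [0, 1, 0, -1]
```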

[Figure: increasingly abstract features learned by successive convolutional layers. Source: http://stats.stackexchange.com/questions/146413]

Recurrent Neural Network

Recurrent Neural Networks (RNNs) make use of sequential information. Unlike traditional neural networks, where all inputs and outputs are assumed to be independent of one another, RNNs rely on what has previously been computed. An RNN can be conceptualized as a neural network unrolled over time: where a regular neural network has different layers, an RNN applies the same layer to the input at each timestep, using the output (i.e. the state) of the previous timestep as input. Connections between entities in an RNN form a directed cycle, creating a sort of internal memory that helps the model leverage long chains of dependencies.
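A minimal sketch of this unrolling, applying the same weights at every timestep and feeding the previous state back in:

```python
import math

def rnn_step(x, h_prev, w_x, w_h, b):
    """One timestep: the same weights are reused, and the previous state is fed back in."""
    return math.tanh(w_x * x + w_h * h_prev + b)

# Unroll over a short input sequence; the hidden state h carries
# information from earlier timesteps forward.
h = 0.0
for x in [1.0, 0.5, -0.5]:
    h = rnn_step(x, h, w_x=0.8, w_h=0.5, b=0.0)
print(round(h, 4))
```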

Recursive Neural Network

A Recursive Neural Network is a generalization of a Recurrent Neural Network, generated by applying a fixed and consistent set of weights repetitively, or recursively, over a structure. A Recursive Neural Network takes the form of a tree, while a Recurrent Neural Network takes the form of a chain. Recursive Neural Nets have been utilized in Natural Language Processing for tasks such as Sentiment Analysis.

[Figure: a Recursive Neural Network applied over a tree structure. Source: http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf]

Supervised Neural Network

For a supervised neural network to produce an ideal output, it must have previously been shown that output. It is ‘trained’ on a pre-defined dataset and, based on this dataset, can produce accurate outputs depending on the input it receives. You could therefore say that it has been supervised in its learning, having been given both the question and the ideal answer.

Unsupervised Neural Network

This involves providing a program or machine with an unlabeled dataset on which it has not previously been trained, with the goal of automatically discovering patterns and trends through clustering.

Gradient Descent

Gradient Descent is an algorithm used to find a local minimum of a function. Starting from an initial guess at the solution and using the gradient of the function at that point, we step the solution in the negative direction of the gradient and repeat this until the algorithm eventually converges at a point where the gradient is zero – a local minimum. We essentially descend the error surface until we arrive at a valley.
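A minimal numeric sketch, descending f(x) = x² from an initial guess:

```python
# Minimize f(x) = x**2, whose gradient is 2x, by repeatedly stepping
# in the negative direction of the gradient.
x = 5.0                 # initial guess
lr = 0.1                # step size
for _ in range(100):
    grad = 2 * x
    x -= lr * grad      # descend the error surface

print(round(x, 6))      # effectively 0: the bottom of the valley
```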

Word Embedding

Similar to the way a painting might be a representation of a person, a word embedding is a representation of a word, using real-valued numbers. Word embeddings can be trained and then used to derive similarities and relations between words. They are an arrangement of numbers representing the semantic and syntactic information of words in a format that computers can understand.

Word vectors created through this process manifest interesting characteristics that almost look and sound like magic at first. For instance, if we subtract the vector of Man from the vector of King, the result will be almost equal to the vector resulting from subtracting Woman from Queen. Even more surprisingly, the result of subtracting Run from Running almost equates to that of Seeing minus See. These examples show that the model has not only learnt the meaning and the semantics of these words, but also the syntax and the grammar to some degree.
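The King − Man + Woman example can be reproduced with toy two-dimensional vectors (hand-crafted here so that one axis encodes gender and the other royalty; real embeddings have hundreds of dimensions learned from text):

```python
# Toy 2-D embeddings, hand-crafted so the first axis encodes gender
# and the second encodes royalty. Real embeddings are learned from text.
vectors = {
    "king":  [ 1.0, 1.0],
    "queen": [-1.0, 1.0],
    "man":   [ 1.0, 0.0],
    "woman": [-1.0, 0.0],
}

def analogy(a, b, c):
    """Return the word whose vector is nearest to a - b + c."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    return min(vectors, key=lambda w: sum(
        (t - v) ** 2 for t, v in zip(target, vectors[w])))

print(analogy("king", "man", "woman"))  # queen
```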


So there you have it – some pretty technical deep learning terms explained in simple English. We hope this helps you get your head around some of the tricky terms you might come across as you begin to explore deep learning.










Data Science

 

Introduction

On Monday we showed you how we analyzed 1.8 million tweets associated with Super Bowl 50 in order to gauge the public’s reaction to the event. While the Denver Broncos and Carolina Panthers waged war on the field, a battle of ever-increasing popularity and importance was taking place off it. I am of course talking about the Super Bowl ads battle, where top brands pay top coin for a 30-second slot during one of sport’s greatest spectacles.

This post comes on the back of the ‘Text Analytics Delivers Game-Changing Customer Insight’ webinar that we ran in conjunction with our friends at RapidMiner. You can check out the video here.

With a viewership of 111.9 million in the United States alone (35% of the population), Super Bowl 50 was the third most-watched event in US history, coming in just behind Super Bowl 49 and Super Bowl 48. In fact, the Super Bowl accounts for the top seven US broadcast events of all time, so it’s easy to see why brands pay what they do to be involved, which is roughly $4.5 – $5 million for that 30 seconds of airtime alone. That’s over $166,000 per second.

So after analyzing viewer sentiment toward the pigskin throwers on the field, we thought it would also be cool to find out which brands brought home the bacon in the ads battle, this time using RapidMiner and AYLIEN Text Analysis.


Data Collection  – Twitter

Over the course of two days and nights, we collected 120,000 tweets that mentioned, or were related to, the brands that advertised during Super Bowl 50. We focused our attention on 15 top brands by analyzing sentiment and clustering the results to see how viewers reacted to the various ads on show. Ultimately, we wanted to uncover the major winners and losers based on viewer sentiment from collected tweets.

Using the RapidMiner Search Twitter operator, we gathered tweets related to our 15 brands and then got to work on prepping our data. To do this, we:

  • cleaned the tweets by removing links
  • removed retweets
  • kept the metadata we needed (user IDs, geolocation, hashtags and mentions)
  • removed non-subjective tweets, ensuring we concentrated only on opinionated, relevant tweets that would give us real insight into the opinions of the viewers
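The first two preparation steps above can be sketched as follows (the sample tweets are invented for illustration):

```python
import re

raw_tweets = [
    "RT @fan: Best ad ever! https://t.co/abc123",
    "That Amazon ad was brilliant https://t.co/xyz789",
    "Not impressed with that commercial",
]

def clean(tweet):
    """Strip links and trim leftover whitespace."""
    return re.sub(r"https?://\S+", "", tweet).strip()

# Drop retweets first, then clean the links out of what remains.
prepped = [clean(t) for t in raw_tweets if not t.startswith("RT ")]
print(prepped)
```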

Initially, we focused mainly on volume to get a handle on what exactly people were talking about, what brands they mentioned most, and how the brand-related chatter developed in the build-up to the game, during the game, and in the aftermath. As you can see from the graph below, there were clear and predictable spikes in chatter volume during the game itself. What is also interesting to see is how the brands managed to generate significant hype even before their ad was aired, and continued to do so for hours after the game had ended and the Panthers fans had cried themselves to sleep. Sorry Carolina 🙁

As you can see, Amazon completely dominated with just under 40% of all collected tweets mentioning the brand or their related keywords (Kindle, Echo, etc). The graph shows how the brand chatter develops before, during and after the game, with Amazon remaining top throughout in terms of volume.

One interesting observation we made from this graph was the sharp increase in chatter around Budweiser, in an otherwise (relatively) quiet period for the beer brand. We decided to do a bit of research on this spike and came across a tweet from Budweiser’s Head of Marketing Communications which quickly explained the sharp increase:

[Embedded tweet, February 8, 2016]

Hmmm! We’ll leave you to decide on the legitimacy of this claim but either way, it really shows the power of celebrities and the effect they can have on brand awareness with a simple mention.


K-means clustering of brand-related keywords

Next we wanted to find out what people were talking about when they tweeted about our 15 chosen brands. With the help of Thomas Ott, from the super-smart team at RapidMiner, we used RapidMiner’s text processing capabilities to create clusters of words using the k-means algorithm. This allowed us to understand what it was that people were talking about in each tweet.
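For a sense of how k-means groups items, here is a bare-bones sketch in plain Python on toy 2-D vectors (real text clustering would first turn each tweet into a much higher-dimensional term vector):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then re-average."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [[sum(d) / len(c) for d in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Two well-separated groups of toy term-frequency vectors.
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (5.0, 5.1), (5.2, 4.9)]
clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [2, 3]
```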

As an example, let’s take a look at our chatter-volume champions, Amazon.

[Figure: keyword cluster for Amazon-related tweets]

This keyword cluster nicely displays the words that were used in Amazon-related tweets. For organizations wanting to know what words and phrases customers are using in relation to their brand, this information can be extremely valuable.


The true voice of the customer

While it was interesting to know what keywords people were using and what brands they were tweeting about, what we really wanted to know was their opinion towards each brand and the sentiment of their tweets – whether it be positive, negative or neutral. We wanted to hear the true voice of the customer.

To achieve this, we utilized the Sentiment Analysis capabilities of our AYLIEN Text Analysis API to give us an indication of the polarity of the text. As expected, 60-70% of the tweets we collected were neutral, with viewers expressing neither positive nor negative sentiment in their tweets. The real insight, however, came from the tweets with positive and negative polarity. This is where we found some clear winners and losers in the Super Bowl ads battle.
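Once each tweet carries a polarity label, the breakdown is a simple tally. A sketch with invented labels:

```python
# Polarity labels as a sentiment classifier might return them for a
# batch of tweets (invented here), tallied into a percentage breakdown.
labels = ["neutral", "positive", "neutral", "negative", "positive",
          "neutral", "neutral", "positive", "neutral", "neutral"]

breakdown = {p: 100 * labels.count(p) / len(labels)
             for p in ("positive", "negative", "neutral")}
print(breakdown)  # {'positive': 30.0, 'negative': 10.0, 'neutral': 60.0}
```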

So let’s take a look at the good, the bad and the ugly from Super Bowl 50:

The Good

As you may have guessed from the previous graph showing the high volume of chatter around the brand, Amazon came out on top in the battle of the brands. The graph below shows viewer sentiment toward Amazon’s ad campaign, and that tall green spike represents a lot of love for what was a star-studded 30-second triumph for the retail giant.

This amount of positive sentiment was, of course, a huge plus for Amazon. However, we all know that love doesn’t pay the bills and the goal for this ad campaign was to boost sales for their latest gadget, the Amazon Echo.

Result: In a matter of days, the Echo rose to second place in the bestsellers list. Well played Amazon.


The Bad

At the opposite end of the scale, we had the brand that received the highest amount of negative sentiment toward their Super Bowl ad. In what was their first Super Bowl appearance, PayPal failed to inspire and ultimately paid the price for playing it too safe.

As you can see from the graph below, PayPal suffered a sharp increase in negative sentiment immediately after their ad aired.

While Amazon were basking in the glory of a superbly executed ad campaign with significant sales increases, PayPal were being mocked by the likes of AdWeek who mirrored general opinion that their Super Bowl offering was ‘safe’ and ‘boring’.


 

The Ugly?

One brand that certainly didn’t play it safe was Mountain Dew with their Puppy Monkey Baby ad. If you haven’t seen it, picture a dog head on a monkey torso with human baby legs, wearing a diaper. Yep.

Initial reaction proved to be mixed, with sentiment leaning more towards the negative side than positive as viewers perhaps found Mountain Dew’s hybrid creature a tad disturbing.



The blue line in the graph below shows polarity swaying from positive to negative, perhaps indicating a love-it-or-hate-it response from viewers.

Mountain Dew clearly went for the shock factor here, and while their ad may have drawn as much negative sentiment as it did positive, the viral appeal of this little monster cannot be denied. At the time of writing, the Puppy Monkey Baby ad has been viewed 22.8 million times on YouTube alone. Compare this to PayPal (1.7 million views) and Amazon (17.8 million views) and you can see how successful that shock factor has been.

Key Takeaways

In today’s world, if someone wants to express their opinion on a brand, product, service, or anything really, they will more than likely do so on social media. It is therefore important for organizations to perform social listening to gauge customer sentiment toward their brand, campaigns or even their competitors. There is a wealth of information published through user-generated content that can be accessed in near real-time using Text Analysis and Text Mining solutions and techniques.

  • Social media is the modern-day focus group
  • The business insight that can be mined from online chatter is often overlooked
  • User-generated content is plentiful, timely and often opinionated, which makes it extremely useful to brands
  • If you’re running a Super Bowl ad, fill it with A-list celebrities or a Puppymonkeybaby!








