# Sentiment Analysis Update: How have we improved?

Sentiment Analysis is a well-known task in Text Analysis, defined as the use of Natural Language Processing, Machine Learning and Computational Linguistics to identify and extract subjective information from source materials. It's also commonly known as opinion mining.

Extracting and understanding opinions from text is an extremely hard thing for machines to do; heck, it's even difficult for humans to decide whether a piece of text is positive or negative. There are a number of reasons for this: a piece of text can contain mixed sentiment, sarcastic tones, slang or short-hand writing, and so on.

Sentiment Analysis is an area of hot debate and active research in the data mining world. It's also a data analysis technique that's often bad-mouthed and dismissed as inaccurate and misleading.

So why is it such a hot topic then? Why haven’t people just given up on it? Why are companies and researchers still fixated on solving this problem?

In short, it's because of the opportunity out there. There is a wealth of information hidden in user-generated content (news articles, reviews, Tweets, Facebook posts, Instagram comments), and the sheer rate at which we're creating this sort of content online means human analysis just isn't able to keep up without the help of modern technology. Being able to mine text for opinions is big business for brands, governments and researchers. Analyzing opinions on social media is the modern-day focus group; the only difference is that you're getting honest feedback and opinions from outside of a controlled environment.

This is why, at AYLIEN, we're focused on holding our Sentiment Analysis models to the highest standard possible when it comes to accuracy (precision, recall and confidence), and why we're constantly evaluating and updating our approach to the problem.

We recently updated our sentiment model, and so far, following our testing and customer feedback, we're really happy with the improvements we've seen.

### Accuracy

So, firstly, we've seen an improvement in how accurate our system is. State-of-the-art performance for Sentiment Analysis systems on Twitter data is believed to be around 80% accuracy. Following tests on our updated models, we've seen an overall increase in accuracy of 7-8% compared to our previous model, which takes us into the ~80% range: closer to, and in some cases better than, state-of-the-art (yay!).

Pro tip: if anyone tells you their Sentiment Analysis is 100% accurate, especially on Social data…you should turn around and run as fast as you can.

### Confidence Scores

Second, we've also significantly improved how we calculate our confidence scores, to ensure our end users know how confident we are in each prediction we make. As you may have heard us say before, a good Sentiment Analysis solution should not only be accurate in its results, it should also know when it might be wrong; so the accuracy of the confidence score is as important as the actual prediction, if not more.
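As a concrete illustration of how a confidence score can be used downstream (this is our own sketch, not code from our API client; the result dicts mirror the shape of our API's output, and the 0.7 threshold is an arbitrary assumption you'd tune for your use case):

```python
# Sketch: routing sentiment predictions by confidence.
# The result dicts mimic the shape of the API output shown later in
# this post; the threshold below is an arbitrary illustrative value.

REVIEW_THRESHOLD = 0.7  # assumed cut-off, tune for your application

def route_prediction(result):
    """Return 'auto' if the polarity prediction is confident enough
    to use directly, otherwise 'human-review'."""
    if result["polarity_confidence"] >= REVIEW_THRESHOLD:
        return "auto"
    return "human-review"

results = [
    {"polarity": "negative", "polarity_confidence": 0.63},
    {"polarity": "positive", "polarity_confidence": 0.82},
]

for r in results:
    print(r["polarity"], route_prediction(r))
```

The point is that a well-calibrated confidence score lets you automate the confident cases and send only the uncertain ones to a human.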

Finally, we’ve seen some massive improvements in how we handle Negation in text.

### Negation and Mixed Sentiment

As we said at the beginning, one of the main challenges in understanding sentiment and opinions is the complexity involved in how we human beings express our thoughts and opinions, and form a message.

Often when people give feedback about something, it's a mix of things they liked and disliked. So, for instance, you might say "I like the battery life of this phone, but the screen sucks!".

Also, some messages are commonly expressed as a negation of another message: we typically say "I don't like the food" instead of "I dislike the food".

Both of these complexities pose challenges for Sentiment Analysis systems. It means you can't rely only on the "polar" words (e.g. "like", "love", "hate", etc.); you also need to take their context, and the general structure of the sentence, into account in order to make better judgements.

With this release we have fixed a lot of the issues our previous models had with negation.

### Let’s see a few examples:

Input:

“I don’t like their food”

Output:

```json
{
  "polarity": "negative",
  "subjectivity": "subjective",
  "text": "i dont like their food",
  "polarity_confidence": 0.6314665758824843,
  "subjectivity_confidence": 0.9999774309011896
}
```

Input:

“I don’t like their food, but the service is great”

Output:

```json
{
  "polarity": "positive",
  "subjectivity": "subjective",
  "text": "i dont like their food, but the service is great",
  "polarity_confidence": 0.8229126377773636,
  "subjectivity_confidence": 0.9999797608112301
}
```

Input:

“I like their food, but the service is terrible”

Output:

```json
{
  "polarity": "negative",
  "subjectivity": "subjective",
  "text": "i like their food, but the service is terrible",
  "polarity_confidence": 0.9981542081035445,
  "subjectivity_confidence": 0.9999999992706756
}
```

We enjoy working on hard problems at AYLIEN, and they don't come much harder than teaching machines to understand opinions in text. Sentiment Analysis is something we're constantly working on; we're regularly updating and tinkering with our models to offer the best, most accurate service we can.

So what’s next?

Well, we can’t tell you much, but we’ve been working hard on a ground-breaking Sentiment Analysis pipeline that we will be launching later this year. So stay tuned!

Give it a try: check out our Sentiment Analysis demo.
