
Introduction

As part of our blog series, ‘Text Analysis 101: a basic understanding for business users’, we will aim to explain how Text Analysis and Natural Language Processing work from a non-technical point of view.

For the first installment, we are going to discover how text is understood by machines, what methods are used in text analysis and why Entity and Concept extraction techniques are so important in the process.

Text Analysis

Text Analysis refers to the process of retrieving high-quality information from text. It involves Information Retrieval (IR) processes and lexical analysis techniques to study word frequency, distributions and patterns, and it utilizes information extraction and association analysis to attempt to understand text. The main goal of Text Analysis as a practice is to turn text into data for further analysis, whether from a business intelligence, research, data analytics or investigative perspective. Certain aspects of text can be identified with modern techniques, and these allow machines to understand a document, article or piece of text.

Technological advancements, greater computing power and investment in research have meant that Natural Language Processing techniques have evolved, performance has improved and adoption across the business world has grown dramatically. According to Alta Plana’s latest report, “Text Analytics 2014: User Perspectives on Solutions and Providers,” the Text Analytics market now has an estimated value exceeding $2bn.

Traditionally, NLP techniques focused on words. These techniques relied on statistical algorithms to analyze and attempt to understand text. However, there has been a push in recent times to equip machines with the capabilities to not just analyze, but to “understand” text. There are numerous approaches to the problem, some more popular and more accurate than others.

Document Representation Models – Bag of Words and Bag of Concepts

Traditionally, analysis systems were focused on words and they failed to identify concepts when attempting to understand text. The diagram below outlines how, as we move up the pyramid and consider concepts in our analysis, we move closer to machines extracting meaning from text.

Bag of Words

The bag-of-words model is a representation that has traditionally been used in NLP and IR. In this model, all grammar, sentence structure and word order are disregarded, and a piece of text, a document or a sentence is represented as a “bag of words”. The collection of words can then be analyzed using a document-term matrix for occurrences of certain words, in order to better understand the document based on its most representative terms. While analyzing words is somewhat successful, a greater focus on concepts within text has proven to increase a machine’s overall understanding of text.
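
To make this concrete, here is a minimal sketch in JavaScript (with invented example sentences) of how a bag-of-words document-term matrix can be built:

// Build a bag-of-words document-term matrix from a set of documents.
// Grammar and word order are discarded; only word counts remain.
function tokenize(text) {
  return text.toLowerCase().match(/[a-z']+/g) || [];
}

function documentTermMatrix(documents) {
  // Collect the vocabulary across all documents.
  var vocabulary = [];
  documents.forEach(function(doc) {
    tokenize(doc).forEach(function(word) {
      if (vocabulary.indexOf(word) === -1) vocabulary.push(word);
    });
  });
  // One row per document, one column per vocabulary word.
  var matrix = documents.map(function(doc) {
    var counts = vocabulary.map(function() { return 0; });
    tokenize(doc).forEach(function(word) {
      counts[vocabulary.indexOf(word)]++;
    });
    return counts;
  });
  return { vocabulary: vocabulary, matrix: matrix };
}

var dtm = documentTermMatrix([
  'John loves football',
  'football is a popular sport'
]);
console.log(dtm.vocabulary); // [ 'john', 'loves', 'football', 'is', 'a', 'popular', 'sport' ]
console.log(dtm.matrix);     // [ [1,1,1,0,0,0,0], [0,0,1,1,1,1,1] ]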

Bag of Concepts

Looking beyond just the words on the surface of a document can provide context to improve a computer’s understanding of text. As demonstrated in the pyramid above, analyzing the words alone can be seen as a base-level analysis, while considering concepts as part of the analysis goes a step further to improve overall understanding.

While a concept-based approach may provide greater insight by not relying on the words alone, combining both the BoW and BoC approaches can greatly improve performance and accuracy. This is especially true when we are dealing with a somewhat lesser-known sample of text.

You can read more about the Bag of Concepts approach here:

“Using Bag-of-Concepts to Improve the Performance of Support Vector Machines in Text Categorization”
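
As a toy illustration of the idea (the word-to-concept mapping below is invented for this example; a real system would derive it from a knowledge base):

// Map surface words to higher-level concepts and count the concepts.
var conceptDictionary = {
  'iphone': 'Mobile Phone',
  'galaxy': 'Mobile Phone',
  'football': 'Sport',
  'tennis': 'Sport'
};

function bagOfConcepts(text) {
  var counts = {};
  (text.toLowerCase().match(/[a-z']+/g) || []).forEach(function(word) {
    var concept = conceptDictionary[word];
    if (concept) counts[concept] = (counts[concept] || 0) + 1;
  });
  return counts;
}

console.log(bagOfConcepts('John plays football and tennis'));
// { Sport: 2 } -- two different words map to one shared concept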

To move towards a more concept-based model of Text Analysis we need to be able to identify entities and concepts within a text. In order to understand how this is done, it’s important to discuss what entities and concepts are and how we identify and utilize them from an analysis point of view.

Entities

An entity is something that exists in itself, a thing with distinct or independent existence.

Concepts

A concept can be defined as an abstract or generic idea generalized from particular instances.

But how can machines recognize entities and concepts in text?

Named Entity Recognition (NER)

Also known as Entity Extraction, NER aims to automatically locate and classify elements of text into predefined categories such as the names of persons, organizations and locations, expressions of time, quantities, monetary values, percentages, etc. The NER approach uses linguistic grammar-based techniques, statistical modelling techniques or both to identify and extract entities from text.

Consider the following piece of text as an example:

“Michael loved the Apple iPhone. He always admired Steve Jobs, but he couldn’t justify spending over $500 on a new phone.”

Using NER certain mentions of Entities can be identified in a sentence or entire piece of text, as is highlighted below:

“Michael [Person] loved the Apple [Organization] iPhone. He always admired Steve Jobs [Person], but he couldn’t justify spending over $500 [Money] on a new phone.”
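
As a toy illustration of the dictionary (gazetteer) flavour of this approach, the sketch below tags known surface forms; the entries are invented, and real NER systems rely on grammar-based and statistical models rather than a hard-coded list:

// A toy gazetteer-based entity tagger: it looks up known surface forms
// and a simple money pattern. Entries are invented for this example.
var gazetteer = {
  'Michael': 'Person',
  'Steve Jobs': 'Person',
  'Apple': 'Organization'
};

function tagEntities(text) {
  var tagged = text;
  Object.keys(gazetteer).forEach(function(name) {
    tagged = tagged.split(name).join(name + ' [' + gazetteer[name] + ']');
  });
  // Tag monetary amounts such as $500.
  return tagged.replace(/\$\d+/g, function(match) { return match + ' [Money]'; });
}

console.log(tagEntities('Michael loved the Apple iPhone.'));
// "Michael [Person] loved the Apple [Organization] iPhone."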

It isn’t always possible, however, to identify entities in a piece of text using NER exclusively. Written language isn’t always exact and trying to understand a piece of text without considering the context can lead to inaccuracies. Is a mention of Apple referring to the company, the fruit or even the artist Billy Apple? That is where disambiguation and concepts can add more clarity and accuracy to the analysis process.

Named Entity Disambiguation (NED)

Named Entity Disambiguation can be used to identify and extract concepts from text. Its approach to the problem differs from NER in that it doesn’t rely on grammar or statistics. Also known as Entity Linking, NED utilizes a knowledge base as a reference for identifying entities. This could be a public knowledge base like Wikipedia, or a training text, which is often domain specific.

The process is outlined simply below:

Step 1. Spotting: looking for surface forms like “apple” (the sequence of the letters a-p-p-l-e)

Step 2. Candidate generation: identifying potential candidates, e.g. Apple Inc., Apple (the fruit), Billy Apple, etc.

Step 3. Disambiguation: referencing a knowledge base and considering the context to identify a concept.
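
The sketch below walks through those three steps against a tiny invented knowledge base; a real linker would consult something the size of Wikipedia and use far richer context models:

// A toy entity linker: spot a surface form, generate candidates from a
// small invented knowledge base, then pick the candidate whose related
// terms overlap most with the surrounding context.
var knowledgeBase = {
  'apple': [
    { concept: 'Apple Inc.',    related: ['iphone', 'mac', 'steve', 'jobs'] },
    { concept: 'Apple (fruit)', related: ['tree', 'eat', 'juice'] },
    { concept: 'Billy Apple',   related: ['artist', 'art'] }
  ]
};

function disambiguate(surfaceForm, contextText) {
  var context = contextText.toLowerCase().match(/[a-z]+/g) || [];
  var candidates = knowledgeBase[surfaceForm.toLowerCase()] || [];
  var best = null, bestScore = -1;
  candidates.forEach(function(candidate) {
    var score = candidate.related.filter(function(term) {
      return context.indexOf(term) !== -1;
    }).length;
    if (score > bestScore) { bestScore = score; best = candidate.concept; }
  });
  return best;
}

console.log(disambiguate('Apple', 'Michael loved the Apple iPhone and his Mac'));
// "Apple Inc."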

Entities vs Concepts

For the most part, it is best to identify and extract both named entities and concepts in order to fully understand a piece of text. Entities may be common, well known and easy to identify, but there may also be concepts within your text that would be overlooked without the disambiguation process.

Identifying concepts does have some advantages over only considering entities as part of the entire analysis process. By referring to a knowledge base like Wikipedia, further information about a concept can be identified and utilized. For example, in an article that mentions Steve Jobs, iPhone, Mac and Palo Alto but never “Apple”, you could still identify “Apple” as a concept based on the information sourced from your knowledge base.

Concepts can also be used to pull additional information and insights from a knowledge base, providing an automated and straightforward way to enhance and augment any document. For instance, for every concept of type “place”, a map of that place could be added to the document, knowing the place’s exact latitude and longitude.

Being able to identify entities and concepts means key aspects can be identified and extracted from documents, articles, emails, etc., which allows machines to provide greater analysis and enhancement capabilities and a deeper understanding of text.

Our next blog in the series will focus on how text is classified and summarized automatically.






Introduction

This is a continuation of our getting up and running with the AYLIEN Text Analysis API blog series. In our first ‘getting started’ blog, we went through the process of signing up for the API, obtaining your Application ID and Application Key, and making calls with Node.js. For this blog, we are going to focus on working with the API using Java.

We’re going to walk you through how easy it is to perform some basic Text Analysis processes like detecting what language a piece of text is written in, analyzing the sentiment of a piece of text and, finally, generating some hashtags for a URL that can be used for maximum exposure when sharing content on social media.

To give you an overview of what can be achieved, we will first look at the code in action. We will then go through the code, section by section, to investigate each of the API endpoints used.

 

Overview of the code in action

The complete code snippet is given at the end of this blog for you to copy and paste. To run it, open a text editor such as Notepad or Sublime Text and copy and paste the snippet. Ensure you replace the YOUR_APP_ID and YOUR_APP_KEY placeholders in the code with the Application ID and Application Key you received when you signed up for the API.

Save the file as TextAPISample.java and then open a Windows command prompt.
Navigate to the folder where you saved the code snippet and compile the code by typing “javac TextAPISample.java”.

Note: You will need to have the Java Development Kit (JDK) installed to compile and run this example; you can download it here.

 

 

You can then run the code by typing “java TextAPISample”. Once you run it, you should receive the output as shown below.

 

[Screenshot: console output showing the detected language, the sentiment result and the suggested hashtags]

 

In this case, we have detected that the text is written in English and that the sentiment, or polarity, of the text is positive. We have also analyzed and generated hashtags for a URL that points to a BBC wildlife photography story.

The output above shows the code running in its entirety, but to highlight each feature/endpoint we will now go through the code snippet, section by section, to explain the workings of Language Detection, Sentiment Analysis and, finally, Hashtag Suggestion.

Language Detection

Using the language detection endpoint you can analyze a piece of text or a URL. In the function we have used in this demo code, the parameter “textOrUrl” controls whether the call is made specifying the text directly or as a URL.

public static Language getLanguage(String text, String textOrUrl)
{
    final Map<String, String> parameters = new HashMap<String, String>();
    parameters.put(textOrUrl, text);
    Document doc = callAPI("language", parameters);
    Language language = new Language();
    NodeList nodeList = doc.getElementsByTagName("lang");
    Node langNode = nodeList.item(0);
    nodeList = doc.getElementsByTagName("confidence");
    Node confNode = nodeList.item(0);
    nodeList = doc.getElementsByTagName("text");
    Node textNode = nodeList.item(0);
    language.setText(textNode.getTextContent());
    language.setLang(langNode.getTextContent());
    language.setConfidence(Double.parseDouble(confNode.getTextContent()));

    return language;
}

In this case we have specified that it should analyze the following text, “John is a very good football player!”, and as you can see from the output below, it determined that the text is written in English. Note: for all of the endpoints, the API returns the text which was analyzed for reference, and we have included it in the output in each case.

Result:

John is a very good football player!
Language: en (0.999998)

 

Sentiment Analysis

Similarly, the Sentiment Analysis endpoint can take a piece of text or a URL and analyze it.

public static Sentiment getSentiment(String text, String textOrUrl)
{
    final Map<String, String> parameters;
    parameters = new HashMap<String, String>();
    parameters.put(textOrUrl, text);
    Document doc = callAPI("sentiment", parameters);
    Sentiment sentiment = new Sentiment();
    NodeList nodeList = doc.getElementsByTagName("polarity");
    Node polarityNode = nodeList.item(0);
    nodeList = doc.getElementsByTagName("polarity_confidence");
    Node confNode = nodeList.item(0);
    nodeList = doc.getElementsByTagName("text");
    Node textNode = nodeList.item(0);
    sentiment.setText(textNode.getTextContent());
    sentiment.setPolarity(polarityNode.getTextContent());
    sentiment.setPolarityConfidence(Double.parseDouble(confNode.getTextContent()));
 
    return sentiment;
}

In this case, we have also specified that it should analyze the text “John is a very good football player!”. The API has determined that the sentiment of the piece of text is positive.

Result:

John is a very good football player!
Sentiment: positive (0.999984)

 

Hashtag Suggestions

Finally, the Hashtag Suggestion endpoint analyzes a URL and generates a list of hashtag suggestions:

public static Hashtags getHashtags(String text, String textOrUrl)
{
    final Map<String, String> parameters;
    parameters = new HashMap<String, String>();
    parameters.put(textOrUrl, text);
    Document doc = callAPI("hashtags", parameters);
    NodeList nodeList = doc.getElementsByTagName("hashtag");
    Hashtags hashtags = new Hashtags();
    List<String> hts = new ArrayList<String>();
    for (int i = 0; i < nodeList.getLength(); i++) {
      Node currentNode = nodeList.item(i);
      hts.add(currentNode.getTextContent());
    }
 
    hashtags.setHashtags(hts.toArray(new String[hts.size()]));
 
    nodeList = doc.getElementsByTagName("text");
    Node textNode = nodeList.item(0);
    hashtags.setText(textNode.getTextContent());
 
    return hashtags;
}

 

For hashtag suggestions, we have used an article about wildlife photography published on the BBC news website: http://www.bbc.com/news/science-environment-29701853. The hashtag endpoint first extracts the text from the URL (the text is returned for reference by the call, and the start of it is shown below) and then analyzes that text to generate hashtag suggestions.

Results:

Hashtags: #France #BBCWildlife #BBCNews #Infrared #ZoomLens #SerengetiNationalPark #UnitedStates #Elephant #BBC #MultipleExposure #GEO #UK #RockMusic #NaturalHistoryMuseumLondon #GeostationaryOrbit #Haze #FocalLength #BBC #UnitedKingdom #SouthAfrica #Serengeti #ChemicalFormula

The text analyzed for hashtag suggestions is shown here for reference…

Slumbering lions win top wildlife photo prize

A stark image of lions resting on a rock outcrop in the Serengeti has won the 2014 Wildlife Photographer of the Year (WPY) Award…

For more getting started guides and code snippets to help you get up and running with our API, visit our Getting Started page on our website. If you haven’t already done so, you can get free access to our API on our sign up page.

 

The complete code snippet:


import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.StringReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Map;
import java.util.HashMap;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.DocumentBuilder;
import org.xml.sax.InputSource;
import org.w3c.dom.Node;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
 
class Language {
  private String lang;
  private String text;
  private Double confidence;
 
  public String getLang() {
    return lang;
  }
  public void setLang(String lang) {
    this.lang = lang;
  }
 
  public String getText() {
    return text;
  }
  public void setText(String text) {
    this.text = text;
  }
  public Double getConfidence() {
    return confidence;
  }
  public void setConfidence(Double confidence) {
    this.confidence = confidence;
  }
}
 
class Hashtags {
  private String[] hashtags;
  private String text;
  public String[] getHashtags() {
    return hashtags;
  }
 
  public void setHashtags(String[] hashtags) {
    this.hashtags = hashtags;
  }
  public String getText() {
    return text;
  }
  public void setText(String text) {
    this.text = text;
  }
}
 
class Sentiment {
  private String polarity;
  private String text;
  private Double polarityConfidence;
 
  public String getPolarity() {
    return polarity;
  }
  public void setPolarity(String polarity) {
    this.polarity = polarity;
  }
  public String getText() {
    return text;
  }
  public void setText(String text) {
    this.text = text;
  }
  public Double getPolarityConfidence() {
    return polarityConfidence;
  }
  public void setPolarityConfidence(Double confidence) {
    this.polarityConfidence = confidence;
  }
}
 
class TextAPISample {
  private static final String APPLICATION_ID = "YOUR_APP_ID";
  private static final String APPLICATION_KEY ="YOUR_APP_KEY";
 
  public static void main(String[] args) {
    String text = "John is a very good football player!";
    String textOrUrl = "text";
    Language lang = getLanguage(text, textOrUrl);
    System.out.printf("n%sn",
        lang.getText());
    System.out.printf("Language: %s (%f)n",
        lang.getLang(), lang.getConfidence());
    Sentiment sent = getSentiment(text,textOrUrl);
    System.out.printf("n%sn",
        sent.getText());
    System.out.printf("Sentiment: %s (%f)n",
        sent.getPolarity(), sent.getPolarityConfidence());
    textOrUrl = "url";
    Hashtags hashtags = getHashtags("http://www.bbc.com/news/science-environment-29701853", textOrUrl);
    StringBuilder sb = new StringBuilder();
    for (String s: hashtags.getHashtags()) {
      sb.append(" " + s);
    }
    System.out.printf("nHashtags:%sn", sb.toString());
    System.out.printf("nThe text analyzed for hashtag suggestions is shown here for reference...n");
        System.out.printf("n%snn",
        hashtags.getText());
  }
 
  public static Language getLanguage(String text, String textOrUrl) {
    final Map<String, String> parameters;
    parameters = new HashMap<String, String>();
    parameters.put(textOrUrl, text);
    Document doc = callAPI("language", parameters);
    Language language = new Language();
    NodeList nodeList = doc.getElementsByTagName("lang");
    Node langNode = nodeList.item(0);
    nodeList = doc.getElementsByTagName("confidence");
    Node confNode = nodeList.item(0);
    nodeList = doc.getElementsByTagName("text");
    Node textNode = nodeList.item(0);
    language.setText(textNode.getTextContent());
    language.setLang(langNode.getTextContent());
    language.setConfidence(Double.parseDouble(confNode.getTextContent()));
 
    return language;
  }
 
  public static Hashtags getHashtags(String text, String textOrUrl) {
    final Map<String, String> parameters;
    parameters = new HashMap<String, String>();
    parameters.put(textOrUrl, text);
    Document doc = callAPI("hashtags", parameters);
    NodeList nodeList = doc.getElementsByTagName("hashtag");
    Hashtags hashtags = new Hashtags();
    List<String> hts = new ArrayList<String>();
    for (int i = 0; i < nodeList.getLength(); i++) {
      Node currentNode = nodeList.item(i);
      hts.add(currentNode.getTextContent());
    }
 
    hashtags.setHashtags(hts.toArray(new String[hts.size()]));
 
    nodeList = doc.getElementsByTagName("text");
    Node textNode = nodeList.item(0);
    hashtags.setText(textNode.getTextContent());
 
    return hashtags;
  }
 
  public static Sentiment getSentiment(String text, String textOrUrl) {
    final Map<String, String> parameters;
    parameters = new HashMap<String, String>();
    parameters.put(textOrUrl, text);
    Document doc = callAPI("sentiment", parameters);
    Sentiment sentiment = new Sentiment();
    NodeList nodeList = doc.getElementsByTagName("polarity");
    Node polarityNode = nodeList.item(0);
    nodeList = doc.getElementsByTagName("polarity_confidence");
    Node confNode = nodeList.item(0);
    nodeList = doc.getElementsByTagName("text");
    Node textNode = nodeList.item(0);
    sentiment.setText(textNode.getTextContent());
    sentiment.setPolarity(polarityNode.getTextContent());
    sentiment.setPolarityConfidence(Double.parseDouble(confNode.getTextContent()));
 
    return sentiment;
  }
 
public static Document callAPI(String endpoint, Map<String, String> parameters) {
    URL url;
    HttpURLConnection connection = null;

    try {
      // Build a form-encoded query string from the parameters map.
      StringBuilder sb = new StringBuilder();
      for (Map.Entry<String, String> e: parameters.entrySet()) {
        if (sb.length() > 0) { sb.append('&'); }
        sb.append(URLEncoder.encode(e.getKey(), "UTF-8")).append('=')
          .append(URLEncoder.encode(e.getValue(), "UTF-8"));
      }
      String queryString = sb.toString();

      // POST the parameters to the chosen endpoint, authenticating with
      // the Application ID and Key headers and requesting an XML response.
      url = new URL("https://api.aylien.com/api/v1/" + endpoint);
      connection = (HttpURLConnection)url.openConnection();
      connection.setRequestMethod("POST");
      connection.setRequestProperty(
          "Content-Type", "application/x-www-form-urlencoded");
      connection.setRequestProperty(
          "Content-Length", Integer.toString(queryString.getBytes().length));
      connection.setRequestProperty("Accept", "text/xml");
      connection.setRequestProperty(
          "X-AYLIEN-TextAPI-Application-ID", APPLICATION_ID);
      connection.setRequestProperty(
          "X-AYLIEN-TextAPI-Application-Key", APPLICATION_KEY);
      connection.setDoInput(true);
      connection.setDoOutput(true);

      DataOutputStream dos = new DataOutputStream(connection.getOutputStream());
      dos.writeBytes(queryString);
      dos.flush();
      dos.close();

      // Parse the XML response into a DOM Document.
      DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
      DocumentBuilder builder = factory.newDocumentBuilder();
      InputSource xis = new InputSource(connection.getInputStream());

      return builder.parse(xis);
    } catch (Exception e) {
      e.printStackTrace();
      return null;
    } finally {
      // Release the connection whether the call succeeded or failed.
      if (connection != null) { connection.disconnect(); }
    }
  }
}






Whether you’re a publisher, a content distributor or an advertiser, content is king if you are looking to increase engagement. As web users, we don’t visit and regularly return to websites, blogs and news sites if we don’t find them engaging or they don’t fulfill our needs. When we visit websites we generally want to learn something or be entertained. If businesses want to engage with audiences online, and more importantly with relevant audiences, they need to be creating high-quality, informative content.

 


What’s Changed?

As web users, the way we search for and consume information has dramatically changed, and publishers and content creators have had to adapt too. More content is being consumed and shared online today than ever before; according to a recent study by IBM, “90% of the data in the world today has been created in the last two years alone.” This content is also published and shared across numerous channels, such as blogs, platforms, news sites and social media, which means staying ahead of the curve and keeping informed has become harder than ever.

Traditionally, content distributors and even search engines would rely on serving relevant content to users based on their expressed requirements. Through a Google search, for example, we would enter our search term and be provided with a relevant result. It was a similar situation for advertisers and publishers: we would be served ads or promoted content based on our search terms, or on what we asked for.

Today things are a little different, as web users, we expect relevant content to be pushed to us and placed under our noses via our favourite blogs, social sites and even well targeted ads. We expect informative, relevant and sharable content to be automatically placed at our fingertips.

This new focus on content has also brought about some notable developments in the advertising and content distribution spaces. Content discovery platforms, publishers and advertisers have needed to adapt and not be left behind and have done a relatively good job of reacting to change and embracing technology.

Advancements in NLP, Machine Learning and Text Analysis are right at the heart of how content discovery and distribution has changed. Being able to analyze vast amounts of content, extract topics, entities, concepts and keywords, and even summarize vast amounts of text, allows for easier and more accurate categorization, discovery and distribution.

So Who Benefits?

With all of these advancements in technology being utilized to make the web better, it is difficult to know who benefits the most: publishers, advertisers, content distributors or web users.

Publishers

Providing relevant content to attract visitors to your site is one thing, but keeping visitors engaged and returning is even more important to publishers. Being able to suggest another relevant article or video to readers means visitors are more likely to spend time on your site, re-visit your site, consume more content and share more articles.

Advertisers

Semantic Advertising (which you can read about here) allows advertisers to serve more targeted ads, which results in higher CTRs. Being able to analyze content and automatically serve well-targeted, relevant ads means the ad publishers/networks, as well as the brands behind the ads, extend their reach, improve their relevancy and benefit from better-performing ad campaigns.

Content discovery/distribution platforms

Text Analysis and Natural Language Processing technologies allow content platforms to easily discover popular content, group or categorize it, and distribute it effectively on the right channels to the right target audience, resulting in a more engaged and growing user base.

Web Users

For web users, things have become a lot easier. We now utilise automated discovery tools, engage with content discovery platforms and follow influencers on social channels, for example, to be kept abreast of content we wish to consume. It has never been easier for us to discover and share content online. We can now consume and discover information, news articles, videos, services and products that are relevant to us, on whatever channel or in whatever format we choose, with little or no input ourselves.






Introduction

This is a continuation of our getting up and running with AYLIEN’s Text Analysis API blog series. In our previous ‘getting started’ blog, we went through the process of signing up for the API, obtaining your Application ID and Application Key, and making calls with Node.js. For this blog, we are going to focus on working with the API using Windows PowerShell.

We’re going to walk you through how easy it is to use PowerShell to perform some basic Text Analysis processes like analyzing the sentiment of a piece of text, detecting what language a piece of text is written in and, finally, generating some hashtags for a URL that can be used for maximum exposure on social media.

The script we’re going to use was put together by one of our Text Analysis API users, Doug Finke. Doug is the author of ‘PowerShell for Developers’, a six-time recipient of the Microsoft Most Valuable Professional (MVP) award for PowerShell, and works with former PowerShell Team member James Brundage at Start-Automating, a PowerShell specialist company delivering PowerShell automation, tools, consulting and training.

You can catch up with Doug at his blog Development in a Blink.

Getting Started

To begin, open Windows PowerShell ISE by typing “powershell ise” in the Windows Run box.

To open a new PowerShell script window, click File -> New.

Once you have your new window open, copy and paste the following PowerShell script into the script editor:


function Invoke-SentimentAnalysis {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [string]$Text,
        $ApplicationId,
        $ApplicationKey
    )
   Begin {
       if($env:AylienApplicationId)  {$ApplicationId=$env:AylienApplicationId}
       if($env:AylienApplicationKey) {$ApplicationKey=$env:AylienApplicationKey}
       $Headers=@{
           "X-AYLIEN-TextAPI-Application-ID"=$ApplicationId
           "X-AYLIEN-TextAPI-Application-Key"=$ApplicationKey
       }
   }
   Process {
       $address = "https://api.aylien.com/api/v1/sentiment?text=$($Text)"
       Invoke-RestMethod -Uri $address -Headers $Headers
   }
}

Sentiment Analysis

The script above performs a call to the Sentiment Analysis endpoint of the AYLIEN Text Analysis API to analyze the text contained within the $Text parameter, in this case ‘Have a nice day.’ Note that the text is interpolated directly into the query string; for input containing spaces or special characters it is safer to URL-encode it first, for example with [Uri]::EscapeDataString($Text).

You can invoke the script by running the following in the console pane of Windows PowerShell ISE:


$env:AylienApplicationId  = '<Specify Your App ID>'
$env:AylienApplicationKey = '<Specify Your App Key>'
$text = $(
   'Have a nice day.'
)
$text | Invoke-SentimentAnalysis | Format-Table -AutoSize

Note: Replace the <Specify Your App ID> and <Specify Your App Key> placeholders in the snippet with the App ID and Key you received during the sign-up process.

Once executed you should receive an output similar to the result below:

text: Have a nice day
subjectivity: Subjective
subjectivity_confidence: 0.9900886100760287
polarity: Positive
polarity_confidence: 0.75

Language Detection

To automatically detect the language that a piece of text is written in, you can invoke the Language Detection endpoint using the following function:


function Invoke-LanguageDetection {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [string]$Text,
        $ApplicationId,
        $ApplicationKey
    )
   Begin {
       if($env:AylienApplicationId)  {$ApplicationId=$env:AylienApplicationId}
       if($env:AylienApplicationKey) {$ApplicationKey=$env:AylienApplicationKey}
       $Headers=@{
           "X-AYLIEN-TextAPI-Application-ID"=$ApplicationId
           "X-AYLIEN-TextAPI-Application-Key"=$ApplicationKey
       }
   }
   Process {
       $address = "https://api.aylien.com/api/v1/language?text=$($Text)"
       Invoke-RestMethod -Uri $address -Headers $Headers
   }
}

Run the script again from PowerShell as follows:


$env:AylienApplicationId  = '<Specify Your App ID>'
$env:AylienApplicationKey = '<Specify Your App Key>'
$text = $(
   'Can you spot what language this is written in?'
)
$text | Invoke-LanguageDetection | Format-Table -AutoSize

You should be left with the following results:

text: Can you spot what language this is written in?
lang: en
confidence: 0.9999965739026125

Hashtag Suggestions

Finally, to get a list of suggested hashtags for a given URL you can use the function below:


function Invoke-HashtagExtraction {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [string]$Text,
        $ApplicationId,
        $ApplicationKey
    )
   Begin {
       if($env:AylienApplicationId)  {$ApplicationId=$env:AylienApplicationId}
       if($env:AylienApplicationKey) {$ApplicationKey=$env:AylienApplicationKey}
       $Headers=@{
           "X-AYLIEN-TextAPI-Application-ID"=$ApplicationId
           "X-AYLIEN-TextAPI-Application-Key"=$ApplicationKey
       }
   }
   Process {
       $address = "https://api.aylien.com/api/v1/hashtags?url=$($Text)"
       Invoke-RestMethod -Uri $address -Headers $Headers
   }
}

You can then run the script from PowerShell as follows:


$env:AylienApplicationId  = '<Specify Your App ID>'
$env:AylienApplicationKey = '<Specify Your App Key>'

$url = $(
   'http://www.networkworld.com/article/2686045/smartphone/iphone6-review-roundup-the-raves-come-quietly.html'
)

$url | Invoke-HashtagExtraction | Format-Table -AutoSize

You should receive the following output:

text: iPhone 6 review roundup – the …
language: en
hashtags: #BMW6Series, #Samsung, #Appleinc

More PowerShell function wrappers for the rest of the AYLIEN Text Analysis features can be found on GitHub.

For more getting started guides and code snippets to help you get up and running with our API, visit the Getting Started page on our website. If you haven’t already done so, you can get free access to our API on our sign up page.






In recent years, support functions, particularly at SaaS companies, have transformed from costly must-haves into revenue-securing, revenue-generating need-to-haves. Customer Support and Customer Success now contribute directly to company revenue by increasing sales and upgrades and, most importantly for SaaS companies in growth mode, by reducing churn.

What does good customer service look like?

According to a recent survey carried out by Zendesk, customers hold speed of response and speed of resolution in the highest regard when evaluating a customer support service.

As customers, we have become needier: we expect immediate responses from service providers, and overall we have become more open to using support channels other than the phone. We tweet about our frustrations when services go down, we email about account queries, we use chat boxes for immediate responses and we provide feedback through form submissions. SaaS companies, for example, rely on their support functions to provide personal, timely and cost-efficient support to their customer base, where and when their customers want it.

It’s not only the SaaS industry which has seen a shift in how much it relies on Customer Support. The same can be seen in the e-commerce and online retail industries. Companies like Zappos pride themselves on Customer Support and have successfully positioned it as a differentiator over their competitors.

Good customer service is speedy, focused on customer success and personalized.

Multiple channels

Traditionally, customer support was done over the phone, which often meant sitting on the end of the line on hold, only to speak to someone who would route you halfway around the world and back again just to take a note of your request and log a ticket. Today things are different.

We have become more digitally inclined in how we interact with service or product providers, and I know from my own experience that picking up the phone to call support is often a fallback for me. It’s pretty much the last thing I will do to try and resolve a problem, something I turn to only after I have exhausted every other channel, self-service, chat, email, even social media, looking for an answer to my query.

This has only made it more difficult for companies to stay on top of customer queries. In order to provide a seamless and efficient experience, support functions need to meet customers where they are and provide the same level of service across every channel, remaining personalized, efficient and informative throughout.

The shift towards a multi-channel support function has meant organisations have had to adapt to increase efficiencies and lower costs by adopting streamlined processes and procedures, often through the introduction of technology. The growth in recent years of Zendesk, RightNow and Intercom, companies focused on customer interaction and support, highlights the fact that organizations are investing more in their support functions and are moving from traditional support offerings to technology-driven processes that are less costly, more efficient and more intelligent.

Apart from the phone, the other common channels for customer support rely heavily on textual interactions; whether it’s email, social, chat or form submissions, customers are interacting with service providers through text.

How can we leverage the large volumes of text gathered in support interactions?

 

 

Email

Analyzing the text of an email can provide insight into the context, intent and sentiment of a customer’s query. Being able to automatically route account queries to accounts, support requests to support and sales queries to a sales rep not only improves efficiency, but also makes the interaction easier and more enjoyable for the customer.
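
As a toy sketch of that routing idea (the keyword lists and team names below are invented; a real system would use trained intent and sentiment models rather than simple keyword matching):

// Route an incoming email to a team based on crude keyword "intents".
var intents = [
  { team: 'Accounts', keywords: ['invoice', 'billing', 'refund'] },
  { team: 'Support',  keywords: ['error', 'broken', 'help', 'crash'] },
  { team: 'Sales',    keywords: ['pricing', 'upgrade', 'demo'] }
];

function routeEmail(body) {
  var words = body.toLowerCase().match(/[a-z]+/g) || [];
  var best = { team: 'General', score: 0 };
  intents.forEach(function(intent) {
    var matches = intent.keywords.filter(function(keyword) {
      return words.indexOf(keyword) !== -1;
    }).length;
    if (matches > best.score) best = { team: intent.team, score: matches };
  });
  return best.team;
}

console.log(routeEmail('My invoice is wrong, can I get a refund?')); // Accounts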

Social

Keeping on top of all comments, tweets and shares on social channels can be pretty difficult and some comments or interactions often need immediate and personalized attention, especially those left by frustrated or disgruntled customers. By analyzing tweets and comments, you can automatically determine which interactions are support queries, frustrations or even compliments about your product or service, allowing you to action them appropriately.

Form Submissions

Free form submissions are a great way to gather customer feedback or even customer support queries. While they provide a method for customers to leave feedback, making sense of free form submissions can be quite difficult. They gather a lot of noise (useless submissions) and are for the most part unstructured, which makes them difficult to search and report on. Having to manually trawl through, classify and monitor form submissions is extremely time-consuming and costly. As a result, they are often ignored and overlooked, even when they may hold extremely useful business insights.

By analyzing text, extracting keywords, entities and intent, and determining the sentiment of a query, whether it is an email, chat, social comment or form submission, you can effectively deal with customer queries and increase the productivity of support functions. Advancements in text analysis and natural language processing allow support functions to be more consistent, timely and even personalized, without losing efficiency.






Social Listening

It simply isn’t an option these days for businesses to ignore the voice of their customers on social channels. There is a huge amount of business insight hidden in text on social channels, but it can be difficult to block out the noise and extract that insight from social data. Buying signals, support queries, complaints, etc. can all be gleaned from social chatter and activity by properly analyzing the voice of customers and users online. For more on this, check out our blog on “Why sentiment analysis is important from a business perspective.”

Analyzing social data and listening to the voice of your customers can be hard and often involves costly software solutions and/or certain technical expertise to gather data, analyze it and visualize the results.

That’s pretty much why we built our Text Analysis add-on. We built it with the everyday analyst or marketer in mind. We wanted to provide a quick and easy way for our users to analyze text, without the hassle, cost and complications of traditional Text Analysis tools.

Our Text Analysis add-on is built on a package of machine learning and natural language processing APIs that allow you to perform sophisticated text analysis without any programming or technical expertise.

In this how-to guide we are going to demonstrate just how easy it is to collect tweets, analyze them and report on your findings from within Google Sheets.

If you haven’t used the add-on before, you can download it here, and if it’s your first time, check out our getting started tutorial to get up and running.

To build a social listening tool, you will need the following:

  • An inquisitive mind
  • Google Spreadsheets
  • AYLIEN Text Analysis add-on
  • Some way of gathering your tweets (Copy and paste, RSS feeds, Twitter Curator)

For the purpose of this blog, we are going to gather a sample of 100 tweets that mention Ryanair, analyze them, look for insights and graph our results. We will aim to automatically determine what language the tweets are written in, extract any mentions of locations and determine what the sentiment is towards Ryanair from this sample set of tweets.

Step 1 – Data Collection

Collect and gather your tweets in a blank spreadsheet. You can copy and paste the tweets from another source, use Twitter Curator to collect your tweets with the click of a button or, if you have the technical expertise, write a script to automatically mine Twitter. (Keep an eye out for our webinar and blog on how to build a basic Twitter mining tool.)

Step 2 – Analysis

Once you have your tweets laid out as desired in a Spreadsheet, start your Text Analysis Add-on. For a guide on how to get up and running with the add-on visit our tutorial page.

First things first: determine what language each tweet is written in by using the language detection function (=language(X)). Keep in mind you can drag the formula down through the rest of the column to analyze all the tweets automatically, which saves a lot of time and effort.

 


 

Then, extract any mentions of locations by using the location extraction function (=locations(X)) and, as above, drag the formula down through the rest of the column.

 


 

Lastly, use the sentiment analysis feature to find out whether the tweets are negative, positive or neutral. This can be done using the sentiment polarity function (=sentimentpolarity(X)).

 


 

Following this you should have a spreadsheet that looks like this:

 

[Screenshot: the spreadsheet with columns for the tweet text, language, locations and sentiment]

 

(Keep in mind the colour coding on the sentiment column is down to the formatting of the column and isn’t generated automatically.)

So far we have collected and analyzed our tweets, now all that is left to do is build some pretty graphs to visualize the data.

Step 3 – Reporting

The advantage of having this data in a spreadsheet is that it is extremely flexible. It can be shared, copied, combined with other data and reported on very easily.

We are going to create some basic reports based on the data we have gathered throughout the process. This will be done entirely within Google Spreadsheets by utilising the pivot table report. Pivot tables are a very handy way of preparing your data to be visualized in graphs; you can read more about them here.

To get started with your report, select the range of data you want to report on, choose Data in the main toolbar and click on Pivot Table Report.

 


 

As an example we are going to create a simple bar chart showing the different languages of the tweets in our dataset.

Once you have clicked on Pivot Table Report in the drop-down menu, a separate sheet called “pivot table 1” will open. In the sidebar of the sheet there is a reporting widget, which is where you choose how your report is laid out.

In this particular report, we want to get a breakdown of the different languages used in the sample set of tweets and figure out what language is used the most.

Sort your “rows” by language and, under “values”, choose language as well. The report widget defaults to summarizing by SUM, which will leave the table full of zeros; change this to “COUNTA” in order to display the count data.

 


 

Below is an example of what a basic pivot table should look like:

[Screenshot: a pivot table counting tweets by language]

 

In order to graph the results, choose the data you want to include by highlighting the appropriate cells in the table. Click on “insert” in your toolbar and choose “chart”.

You should be left with a simple bar chart like the “Tweets by Language” one below. You can get a bit more creative with how you customize your charts by adding colours and formatting.

You can choose from a wide range of bar charts, geo charts, pie charts etc… all of which are displayed in our completed graphs below.

Findings

Tweets by language: Using the language detection feature we could easily recognise that the majority of the 100 tweets sampled were written in English.


Sentiment of tweets: Here we have provided a chart that displays the percentage of tweets that were positive, negative and neutral. On close inspection we noticed the majority of the neutral tweets were general enquiries or news reports.


Geo locations: Here we have displayed mentions of locations which were extracted automatically from tweets.

 


 

Sentiment of tweets by location: From studying this graph it is pretty clear to see that, in the small sample of tweets we analyzed, there was a lot of negativity in tweets that also mentioned Corfu. On further investigation it became clear that there was in fact a delayed flight which left passengers stranded in Corfu at the time the sample tweets were collected.

You can download the add-on here, and for more information on our Text Analysis add-on, including videos, cheat sheets and tutorials, visit our tutorial page.






What is eDiscovery?

In order to understand how Text Analysis technology can help as part of the eDiscovery process, it is important to first understand what eDiscovery is and why it is important in the legal profession. Wikipedia describes legal discovery as “the pre-trial phase in a lawsuit in which each party…can obtain evidence from the opposing party.” eDiscovery is an umbrella term used to indicate the discovery process for electronic documents.

 


 

Given that the vast majority of information is now stored electronically in one form or another, the discovery process requires law firm associates to review text documents, email trails, etc. to determine whether they are relevant (responsive or non-responsive) to a particular case. It is essentially a data reduction and analysis task, which is time-consuming and therefore extremely costly.

Given the proliferation of electronic documents within a corporate environment and the sheer mass of e-documents within an organization’s data warehouse, one may have to consider documents numbering in the millions or tens of millions as part of a discovery process. It is almost impossible for a human being to trawl through such a vast number of documents with a fine-tooth comb without technological assistance. Natural Language Processing and Machine Learning technologies, therefore, are well placed to add smarts and automation to the process in order to save time, eliminate human error and reduce overall costs.

Text Analysis used in the process

Text Analysis practices can be used as part of an overall eDiscovery process to reduce time, increase accuracy and lower costs. Unsupervised and supervised methods can be used to achieve this goal.

Unsupervised Methods:

Machine Learning practices and the application of Text Analysis as part of the discovery process can help by allowing certain tasks, such as language detection, entity extraction, concept extraction, summarization and classification of documents, to be conducted automatically. Metadata created for individual documents can also be considered in terms of the overall document repository, to cluster documents by concept and to uncover duplicate and/or near-duplicate documents quickly with little or no heavy lifting.
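
As a crude sketch of the near-duplicate idea (word-shingle overlap in JavaScript; production systems use more scalable techniques such as minhashing):

// Compare two documents by the overlap of their 3-word shingles.
// Scores near 1 suggest duplicates or near-duplicates.
function shingles(text, size) {
  var words = text.toLowerCase().match(/[a-z]+/g) || [];
  var result = [];
  for (var i = 0; i + size <= words.length; i++) {
    result.push(words.slice(i, i + size).join(' '));
  }
  return result;
}

function jaccard(a, b) {
  var setA = shingles(a, 3), setB = shingles(b, 3);
  var intersection = setA.filter(function(s) { return setB.indexOf(s) !== -1; });
  var unionSize = setA.length + setB.length - intersection.length;
  return unionSize === 0 ? 0 : intersection.length / unionSize;
}

console.log(jaccard(
  'please find the signed contract attached to this email',
  'please find the signed agreement attached to this email'
)); // 0.4 -- a high overlap for such short texts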

Additionally, the metadata created can enable the automatic discovery of topics in documents and add a temporal dimension to see how a topic evolves over time; this process is known as topic modelling. Consider email threading as an example, i.e. taking what would otherwise be disparate emails and linking them together into a thread over time to see how the conversation evolved.

Supervised Methods:

While unsupervised methods are useful in the eDiscovery process, they will most likely never entirely replace the human aspect of discovery, and for the most part they don’t aim to be a complete replacement. They are more of a very smart and efficient aid in the process.

Major benefits are realized when predictive coding is combined with human review, a process known as Technology Assisted Review, or TAR. A sample set of documents is analyzed, usually by a senior attorney, and scored in terms of responsiveness to discovery requests for the case. eDiscovery software then applies mathematical algorithms and machine learning techniques to automatically analyze the rest of the documents and score them for relevance based on what it “learns” from the TAR process.

Scores generated through predictive coding can be used to automatically cull large numbers of documents from consideration without the need for human review.
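
As a highly simplified illustration of the scoring idea (a toy word-weighting model invented for this post, in no way representative of production eDiscovery software):

// Learn per-word weights from a small reviewed sample, then score
// unreviewed documents; higher scores lean responsive in this toy setup.
function tokenize(text) {
  return text.toLowerCase().match(/[a-z]+/g) || [];
}

function train(labeled) { // labeled: [{ text: ..., responsive: true/false }]
  var model = { pos: {}, neg: {}, posTotal: 0, negTotal: 0 };
  labeled.forEach(function(doc) {
    var side = doc.responsive ? 'pos' : 'neg';
    tokenize(doc.text).forEach(function(word) {
      model[side][word] = (model[side][word] || 0) + 1;
      model[side + 'Total']++;
    });
  });
  return model;
}

function score(model, text) {
  return tokenize(text).reduce(function(sum, word) {
    var p = ((model.pos[word] || 0) + 1) / (model.posTotal + 2); // smoothed
    var q = ((model.neg[word] || 0) + 1) / (model.negTotal + 2);
    return sum + Math.log(p / q);
  }, 0);
}

var model = train([
  { text: 'quarterly revenue forecast attached', responsive: true },
  { text: 'lunch on friday?', responsive: false }
]);
console.log(score(model, 'revised revenue forecast')); // positive => likely responsive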

Benefits

In recent years, the adoption of natural language processing and machine learning technologies as part of the eDiscovery process has been on the rise, mainly because it aids knowledge discovery, saves time and reduces costs.

Knowledge Discovery:

The sheer volume of documents and data to review as part of an eDiscovery process is massively overwhelming for a team of legal professionals, who might be searching for a specific line of text among millions of documents; sometimes they may not even know what they are looking for. Incorporating advanced and specialized technology into the process means the search and discovery effort can ensure no page is left unturned.

Time:

In most cases eDiscovery projects are time-bound, and teams work day and night to meet important deadlines. With limited time and huge volumes of data and text to get through, eDiscovery teams are often fighting an uphill battle. Technology can assist by processing large amounts of data in a fraction of the time it takes a team of legal professionals.

Cost:

The number of data sources to be analyzed and the size of the legal teams involved mean an eDiscovery project can often prove quite costly. The introduction of Text Analysis into the eDiscovery process means the time it takes and the number of professionals needed on an eDiscovery team can both be greatly reduced, which in turn reduces the cost of the overall project.

Conclusion

It seems technology will never fully replace the role of a legal expert in the eDiscovery process, but as machines and software get smarter the role of technology in the entire process is only going to grow.






Welcome to the second part of our blog series, Analyzing Text in RapidMiner. In the first part of the series, we built a basic setup for analyzing the sentiment of any arbitrary text, to find out whether it’s positive, negative or neutral. In this blog, we’re going to build a slightly more sophisticated process than the last one, which we can use to scrape movie reviews from Rotten Tomatoes and analyze them in RapidMiner.

In a nutshell, we’re going to:

  • Scrape movie reviews for The Dark Knight using the Web Mining extension for RapidMiner.
  • Run the reviews through AYLIEN Text Analysis API to extract their sentiment.
  • Compare the extracted sentiment values with the Fresh or Rotten ratings from reviewers to see if they follow the same pattern.

Requirements

  • RapidMiner v5.3+ (download)
  • Text Analysis API key (subscribe for free here)

Step 1: Extract Reviews from Rotten Tomatoes

The Web Mining extension comes with a set of useful tools designed to crawl the web and scrape web pages for information. Here we’re using the Process Documents from Web operator to scrape a review page, and we use XPath queries to extract the text of the reviews:

  • First, drag and drop a Process Control > Loop > Loop operator into your process. We will use the Loop operator to scrape multiple pages of reviews, which gives us more reviews to analyze.
  • Next, configure the Loop operator to run 5 times, which means we’re going to scrape 5 pages of The Dark Knight reviews – a total of 100 reviews.
  • Now double click the Loop operator and add the Web Mining > Process Documents from Web operator, which will fetch the contents of each review page and provide its HTML for further analyses.
  • Configure the newly added operator to fetch the Nth page of reviews, where N is the current iteration of the Loop operator. The url parameter should look like this: http://www.rottentomatoes.com/m/the_dark_knight/reviews/?page=%{iteration}
  • Process Documents from Web exposes a sub-process for further analysis of the page contents. Double click the operator to access this sub-process and add a Text Processing > Transformation > Cut Document operator, which will extract individual reviews from a single review page.
  • Configure the Cut Document operator to segment the page using the following XPath query: //h:table[@class='table table-striped']/h:tr
  • The Cut Document operator will expose each extracted segment in a sub-process, so let’s add a Text Processing > Extraction > Extract Information operator to extract the actual text of the review.
  • Now let’s connect everything and run the process to get our 100 reviews.

Step 2: Analyze Reviews using Sentiment Analysis API

Now that we have run the process and we have our reviews, it’s time to send them to Text API’s /sentiment endpoint and see if they are positive, negative or neutral.

  • Let’s URL-encode the reviews first. To do that, we’re going to use the Web Mining > Utility > Encode URLs operator.
  • Next we’ll send the encoded text to the Text API using the Web Mining > Services > Enrich Data by Webservice operator.
  • So now that we have our reviews and have sent them to the Text API, it’s time to run the entire process and analyze these 100 reviews!

As you can see, we get a polarity column that tells us whether each review is positive, negative or neutral.

Step 3: Extract Freshness scores and compare them to Sentiment values

What we accomplished in Step 2 is cool, but let’s evaluate the results by checking if the sentiment polarity scores match the “Freshness” scores given by Rotten Tomatoes reviewers.

For anyone who doesn’t know, the “Freshness” score on Rotten Tomatoes basically tells us whether a review is positive (Fresh) or negative (Rotten).

  • First things first, add a second XPath query to extract the Freshness score as a boolean value (Fresh/Rotten=not Fresh)
  • Before we can check the data for correlations, we must do a bit of a cleanup and pre-processing:
    1. Remove the text column after Sentiment Analysis is done, using the Select Attributes operator.
    2. Convert the polarity and fresh columns to Numerical columns so that for instance, Polarity=true becomes Polarity_true=1. For that, we’ll use the Nominal to Numerical operator.
  • Then we need to add a Modeling > Correlation and Dependency Computation > Correlation Matrix operator, which basically discovers statistical correlations between variables.
  • Finally, run the process again to produce a table similar to the one below.

What we see in the Correlation Matrix, is that polarity_positive has a positive correlation to fresh_true and polarity_negative has a positive correlation to fresh_false, which means we’ve predicted most of the polarity scores correctly.

That’s it: 100 reviews scraped and analyzed using RapidMiner and the AYLIEN Text Analysis API. Pat yourself on the back, good job!






Introduction

Getting up and running with AYLIEN’s Text Analysis API couldn’t be easier. It’s a simple three-part process, from signing up to calling the API. This blog will take you through the complete process of creating an account, retrieving your API Key and Application ID, and making your first call to the API.

Part 1: Signing up for a free account

Navigate to http://aylien.com/getting-started/ and click on the “Subscribe For Free” button. This will bring you to a sign-up form which will ask for your details in order to set up your account and generate your credentials.

By signing up, you will get access to our basic plan which will allow you to make 1,000 API calls per day for free. Note: There is no credit card needed to get access to our basic plan. 😉

Part 2: Retrieving your API Key and Application ID

Upon signing up you will receive an email with an activation link.

Clicking on the link will activate your account and direct you to a sign in page (developer.aylien.com). Use the credentials you set in the sign up process to login.

Once signed in you will be brought to the Text Analysis API page where your API Key and Application ID will be displayed. (Make sure you make a note of these.)

Next make your way to our getting started guide by clicking on the getting started button.

Part 3: Creating your first application

Our getting started guide is designed to get you up and running with the API and making calls as quickly and easily as possible. Here you will find the API documentation, an overview of features, a link to a demo and some code snippets.

We have included sample code snippets for you to use in the following languages:

  • Java
  • Node.js
  • Python
  • Go
  • PHP
  • C#
  • Ruby

To start making calls, while you’re on the getting started page, scroll down to the “Calling the API” section. Choose which language you wish to use and take a copy of the code snippet. In this example, we are going to use Node.js.

We are going to walk through two very simple examples of how to call the API: the first analyzing a simple piece of text for sentiment and language detection, and the second analyzing a URL.

First things first: copy your code snippet and paste it into a text editor. Replace the YOUR_APP_KEY and YOUR_APP_ID constant placeholders in the code with the “Key” and “App ID” from your credentials and save the file. In this case I have called it codesnippet.js.

const APPLICATION_KEY = "YOUR_APP_KEY",
APPLICATION_ID = "YOUR_APP_ID";

The code snippet is very simple; it sets up the parameters to use with the endpoints as ‘text’, with just one sentence to analyze, i.e. ‘John is a very good football player!’. We are going to make two calls to the API: one to analyze the sentiment (whether it’s positive, negative or neutral) and one to detect what language the text is written in.
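
For context, here is a minimal sketch of what the snippet’s call_api helper might look like under the hood. This is an illustrative reconstruction, not the exact code from the getting started page: it assumes the API accepts form-encoded POST parameters and returns JSON, reuses the APPLICATION_ID and APPLICATION_KEY constants defined above, and uses the same endpoint base and authentication headers shown in the other tutorials in this series.

var https = require('https'),
    querystring = require('querystring');

function call_api(endpoint, parameters, callback) {
  // Form-encode the parameters ('text' or 'url').
  var postData = querystring.stringify(parameters);
  var request = https.request({
    host: 'api.aylien.com',
    path: '/api/v1/' + endpoint,
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(postData),
      'X-AYLIEN-TextAPI-Application-ID': APPLICATION_ID,
      'X-AYLIEN-TextAPI-Application-Key': APPLICATION_KEY
    }
  }, function(response) {
    // Buffer the response body, then hand the parsed result to the callback.
    var body = '';
    response.on('data', function(chunk) { body += chunk; });
    response.on('end', function() { callback(JSON.parse(body)); });
  });
  request.write(postData);
  request.end();
}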

Sentiment Analysis:

var parameters = {'text': 'John is a very good football player!'};
call_api('sentiment', parameters, function(result) {
  console.log(result);
});

Language Detection:

call_api('language', parameters, function(result) {
  console.log(result);
});

To run the application on Windows, open a command prompt and run the snippet by typing “node” followed by the path to your file (assuming you have Node.js installed on your computer; if not, you can get it from http://nodejs.org/download/).

The results will be displayed in the command prompt as follows:

c:\Program Files\nodejs>node "c:\Users\User\Documents\codesnippet.js"
{ text: 'John is a very good football player!',
lang: 'en',
confidence: 0.9999970820218829 }
{ text: 'John is a very good football player!',
subjectivity: 'objective',
subjectivity_confidence: 0.9996820178364304,
polarity: 'positive',
polarity_confidence: 0.9999836594933543 }

So that’s a pretty simple example, but what if we want to do something a little more advanced, like analyzing a URL?

Let’s say we have an article online that we want to analyze. We want to summarize it, extract any entities mentioned and generate optimal hashtags for that article so we can be sure we maximize its exposure.

In this case we are going to analyze an article about the iPhone 6. There are just a couple of changes we need to make from the last example in order to summarize the article, extract the entities mentioned and generate some hashtags.

We need to change the parameters variable:

var parameters = {'url': 'http://www.networkworld.com/article/2686045/smartphone/iphone6-review-roundup-the-raves-come-quietly.html'}

And use alternative API calls. You can find a full list of features and endpoints in our documentation.

Summarization:

call_api('summarize', parameters, function(result) {
  console.log(result);
});

Entity Extraction:

call_api('entities', parameters, function(result) {
  console.log(result);
});

Hashtag Suggestion:

call_api('hashtags', parameters, function(result) {
  console.log(result);
});

Results will be shown as follows:

Article Summary:

  • Why the entry-level iPhone 6 has just 16GB of storage: The lack of a strong reaction, either positive or negative, to the iPhone 6 series is largely due to the fact that the iPhone 6 and iPhone 6 Plus don’t introduce a lot of revolutionary new features – there are hardware updates aplenty, of course, but they’re generally incremental upgrades, bringing Apple’s top-end devices into relative parity with the latest from Samsung, et al.
  • ‘I love the old iPhone size so much, and I’ve spent so much time with it, that it’s going to take longer than a week to adjust to a new size – especially so when I spent half the week using the ginormous iPhone 6 Plus.’
  • ‘USA Today tech columnist Ed Baig was wowed by the iPhone 6 Plus’ display, construction and generally high level of polish:
  • These are the phones Apple devotees have been waiting for: iPhones that measure up to what’s fast becoming the new normal – the large, modern smartphone display.
  • Make no mistake: The most important new thing about the iPhone 6 and iPhone 6 Plus is their size.

Entities:


{
    organization: ['Fine',
        'Samsung',
        'Apple',
        'Bloomberg Businessweek’s Joshua Topolsky'
    ],
    keyword: ['iPhone 6 and iPhone 6 Plus is their size',
        'iPhone 6 and iPhone',
        'iPhone 6 or iPhone',
        'iPhone size',
        'iPhones instantly make Apple',
        'iPhone 6 review',
        'iPhone 6 series',
        'iPhone user',
        'iPhone',
        'screen size',
        'phones Apple',
        'phones with larger screens',
        'size',
        'impressed by the 6 series',
        'Apple',
        'review',
        'Phones',
        'series',
        'screen',
        'users'
    ],
    date: ['Today'],
    person: ['John Gruber',
        'Gruber',
        'Ed Baig',
        'Jason Snell',
        'David Pierce',
        'Verge'
    ],
    product: ['iPhone']
}

Hashtag Suggestions:


{
    language: 'en',
    hashtags: ['#BMW6Series',
        '#Samsung',
        '#AppleInc',
        '#OriginalEquipmentManufacturer',
        '#NetworkWorld',
        '#JasonSnell',
        '#WaltMossberg',
        '#JohnGruber',
        '#Reachability',
        '#Headache',
        '#Phablet',
        '#Rave',
        '#JoshuaTopolsky',
        '#DaringFireball',
        '#USAToday'
    ]
}

There you have it, that’s how easy it is to get up and running with AYLIEN Text Analysis API.




