On Artificial Intelligence: Branches, Applications and Challenges

  • This blog is an adaptation of a talk, “Computer intelligence”, delivered by our founder Parsa Ghaffari (@parsaghaffari) and Kevin Koidl (@koidl), Ph.D., M.Sc. (Dipl. Wirtsch. Inf. TU), a research fellow at Trinity College Dublin and founder of Wripl.
  • The talk is a discussion of how computer, or artificial, intelligence works, its applications in industry and the challenges it presents. You can watch the original video here.


Artificial intelligence, or Computer intelligence [1], is hot in the tech scene right now: it’s a high priority for tech giants like Google and Facebook, journalists are writing about how it will take our jobs and our lives, and it’s even hot in Hollywood (although mostly in the technophobic fashion typical of 21st-century Hollywood).

In the industry, all of a sudden AI is everywhere and it almost looks like we’re ready to replace Marc Andreessen’s famous “software is eating the world” with “AI is eating the world”.

But what exactly are we talking about when we refer to Artificial or Computer intelligence?

AI could be defined as the science and engineering of making intelligent computers and computer programs. Since we don’t have a solid definition of intelligence that is not relative to human intelligence, we can define it as the ability to learn or understand things, or to deal with new or difficult situations. We also know what computers are: machines that are programmed to carry out specific tasks. So Computer Intelligence could be seen as a combination of these two concepts: an algorithmic approach to mimicking human intelligence.

Two branches of AI

Back in the 60s, AI got to a point where it could actually do things, and that created a new branch of AI that was more practical and pragmatic, and which was eventually adopted and pioneered by industry. The new branch (which we call Narrow AI in this article) had different optimization goals and success metrics compared to the original branch, now called General AI.



General AI

If your goal was to predict what’s going to happen next in the room where you’re sitting, one option would be to consult a physicist, who would probably take an analytical approach and use well-known equations from Thermodynamics, Electromagnetism and Newtonian Physics to predict the next state of the room.

A fundamentally different approach that doesn’t require a physicist’s involvement would be to set up as many sensors as possible (think video cameras, microphones, thermometers, etc.) to capture and feed all the data from the room to a supercomputer, which then runs some form of probabilistic modelling to predict the next state.
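As a toy illustration of the second approach, one could discretise the sensor readings and fit a simple Markov model that predicts the most likely next state from transition counts. This is only a sketch: the readings and state labels below are invented, and a real system would use far richer probabilistic models.

```python
from collections import Counter, defaultdict

def train_markov(states):
    """Count transitions between consecutive observed states."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(states, states[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, state):
    """Return the most frequently observed successor of `state`."""
    return transitions[state].most_common(1)[0][0]

# Hypothetical discretised temperature readings from a room sensor.
readings = ["cool", "warm", "warm", "hot", "warm", "cool", "warm", "hot"]
model = train_markov(readings)
print(predict_next(model, "warm"))  # "hot": the most common state after "warm"
```

The point of the sketch is that the model predicts purely from observed frequencies; at no point does it encode why warm rooms tend to get hotter.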

The results you get from the second approach would be far more accurate than the ones produced by the physicist. However, with the second approach you do not really understand why things are the way they are, and that’s what General AI is all about: understanding how certain things such as language, cognition and vision work, and how they can be replicated.

Narrow AI

Narrow AI is a more focused application of Computer Intelligence that aims to solve a specific problem and is driven by industry, economics and results. Common use cases you will certainly have heard of include Siri on your iPhone and self-driving cars.

While Siri can be seen as an AI application, that doesn’t mean the intelligence behind Siri can also power a self-driving car. The AI behind each is very different; one can’t do the other.

It’s also true that with Narrow AI the intelligence works by crunching information under set conditions for economic outputs. Siri, for example, can only answer certain questions: questions it already has the answer to, or can retrieve the answer to by referencing a database.
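A caricature of this closed-world behaviour, where an assistant can only answer questions it can look up, might look like the following sketch. The knowledge-base entries and the `answer` function are invented for illustration; real assistants use far more sophisticated retrieval.

```python
# Toy sketch of a Narrow-AI assistant: it answers only the questions
# whose answers it can look up; everything else is out of scope.
knowledge_base = {
    "what is the capital of ireland": "Dublin",
    "who founded aylien": "Parsa Ghaffari",
}

def answer(question):
    """Normalise the question and look it up; fall back if unknown."""
    key = question.lower().strip(" ?")
    return knowledge_base.get(key, "Sorry, I don't know that.")

print(answer("What is the capital of Ireland?"))  # Dublin
print(answer("Should I buy a self-driving car?"))  # Sorry, I don't know that.
```

However clever the lookup, the system’s competence ends exactly where its database does, which is what makes the intelligence “narrow”.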

Challenges of AI

Technical Challenges

As human beings, understanding visual and linguistic information comes to us naturally: we read a piece of text and we can extract meaning, intent, feelings and information; we look at a picture and we identify objects, colours, people and places.

However, for machines it’s not that easy. Take this sentence, for instance: “I made her duck”. It’s a pretty straightforward sentence, but it has multiple meanings. There are actually four potential meanings for that short sentence.

  • I cooked her some duck
  • I forced her to duck
  • I made her duck (the duck belonged to her)
  • I made her duck (made her a duck out of wood, for example)

When we interpret text we rely on prompts, either syntax indicators or just context, that help us predict the meaning of a sentence, but teaching a machine to do this is a lot harder. There is a lot of ambiguity in language that makes it extremely hard for machines to understand text, or language in general.
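To make the ambiguity concrete, here is a toy sketch (not a real parser): the same surface string is paired with several hand-written part-of-speech assignments and paraphrases, and nothing in the string alone tells a machine which reading to pick.

```python
# One surface sentence, several structured readings: the ambiguity comes
# from "made", "her" and "duck" each allowing more than one analysis.
sentence = "I made her duck"

# Each reading pairs a (hand-written) tag assignment with a paraphrase.
readings = [
    ({"made": "VERB (cook)",      "her": "PRON (dative)",     "duck": "NOUN"},
     "I cooked a duck for her"),
    ({"made": "VERB (cause)",     "her": "PRON (object)",     "duck": "VERB"},
     "I forced her to lower her head"),
    ({"made": "VERB (cook)",      "her": "PRON (possessive)", "duck": "NOUN"},
     "I cooked the duck that belonged to her"),
    ({"made": "VERB (fabricate)", "her": "PRON (dative)",     "duck": "NOUN"},
     "I built her a duck, e.g. out of wood"),
]

print(f"'{sentence}' has {len(readings)} readings:")
for tags, paraphrase in readings:
    print(f"  duck={tags['duck']}: {paraphrase}")
```

A human resolves this instantly from context; a machine has to score every tag assignment and pick the most plausible one, which is exactly where statistical NLP comes in.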




The same can be said for an image or picture, or visual information in general. As humans we can pick up and recognise certain things in an image within a matter of seconds: we know that there is a man and a dog in a picture, and we recognise colours and even brands. But it takes an intelligent machine to do the same.




Philosophical Challenges

One of the main arguments against AI’s success is that we don’t have a good understanding of human intelligence, and therefore are not able to fully replicate it. A convincing counter-argument, pioneered by the likes of Ray Kurzweil, is that intelligence or consciousness is an emergent property of the comparatively simpler building blocks of our brains (neurons), and that to replicate a brain or to create intelligence, all we need to do is understand, decode and replicate these building blocks.

Ethical Challenges

Imagine you’re in a self-driving car and it’s taking you over a narrow bridge. Suddenly a person appears in front of the car (say, after losing their balance), and to avoid hitting that person the AI must take a sharp turn that will result in the car falling off the bridge. If you hit the person, they will die; if you fall off the bridge, you will be killed.

One solution is for the AI to predict who’s more “valuable” and make a decision based on that. It would factor in things like age, job status and family status, and boil it down to a numerical comparison between your “worth” and the other person’s. But how accurate would that be? And would you ever buy a self-driving car that has a chance of killing you?


While some serious challenges in AI remain open, industry and the enterprise have latched on to the benefits that AI techniques like Natural Language Processing, Image Recognition and Machine Learning can bring to a variety of problems and applications.

One thing can be said for certain: AI has left the science and research labs and is powering developments in health, business and media. Industry has recognised the potential of Narrow AI and how it can change, enhance and optimize the way we approach problems and tasks as human beings.

[1] The border between AI and human intelligence is getting blurred, so eventually we might get to a point where intelligent behaviour manifested by a machine can no longer be labeled “artificial”. In that case, Computer Intelligence would be the better-suited term. That said, we use the terms Computer Intelligence and Artificial Intelligence interchangeably in this article.





Parsa Ghaffari

CEO and Founder of AYLIEN. Parsa is an AI, Machine Learning and NLP enthusiast whose aim is to make these techniques and technologies more accessible and easier to use for developers and data scientists. When he’s not working he likes to play chess (“parsabg”). Twitter: @parsaghaffari