
It’s not that AI is taking over the world but…

January 6, 2017

Artificial Intelligence (AI) has been in the news at a constant burn over the last year. Having worked with AI tools and been exposed to Japan's Fifth Generation project in the 1980s, I had some perspective and a latent interest in the domain, but not enough to spend my savings or my career on it.

In those (bad old?) days there was a dream of finding the universal AI solution, and of abandoning our classic programming languages soon after. None of that happened (and much of it, I suspect, had to do with the fact that our brains aren't really wired for the kind of languages that were touted as the future fifth generation languages).

AI back then was expected to drive a process completely, from start to finish. That never worked, because all the variance that exists in the real world could never be properly predicted and addressed. Many attempts I saw ended up in a mess of coding where the effort to extend the knowledge base grew exponentially with every increment to that knowledge base.

However, it looks like AI has undergone a revolution in which product managers have taken the baton from AI researchers. Instead of developing a generic AI solution and then looking for problems to solve, we now see AI evolving in very specific, narrow problem domains that are becoming all too important. Self-driving cars and personal assistants are two that come to mind.

So over the Christmas period I decided to take a closer look at one of these (I don't have a self-driving car, although I'd love one) and started investigating the world of chat bots. Chat bots are now quite common: Amazon Alexa, Apple Siri and Microsoft Cortana are but a few of them. They all rely on interpreting the spoken word and responding in an intelligent fashion. The difficulty here is twofold. First, the device needs to understand spoken sentences, which is quite hard to do in a generic way. Second, it needs to understand the intent of what was said and react to it in a way that matches our expectations. So when I ask 'what is the weather', I would like to know what the weather is like outside where I am, not on another continent.

I spent some time perusing frameworks and settled on API.AI to create a sample chat bot for booking rooms. It was quite efficient to develop a reasonably complete set of intents and responses and to tie this into a back end that would handle the details of actually booking rooms and managing extras for them (e.g. catering, projectors, etc.). From a creator's perspective the technology is quite straightforward: you create intents (book a room, cancel that room, …), add entities (such as the number of people, whether catering is required, etc.) and then add phrases that trigger the intent.
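To make that concrete, here is a rough sketch of how such an intent might be laid out, written as a plain Python structure. The field names, the entity references and the book_room name are my own illustration of the idea, not API.AI's actual schema.

```python
# Illustrative only: a "book a room" intent expressed as plain data.
# Field names and entity names are assumptions, not API.AI's real schema.
book_room_intent = {
    "name": "book_room",
    # Entities (parameters) to extract from the user's phrase.
    "parameters": {
        "date": "@sys.date",         # when the room is needed
        "attendees": "@sys.number",  # number of people
        "catering": "@catering",     # custom entity for catering options
    },
    # Example phrases that trigger the intent; the platform generalises
    # from these, so every possible wording need not be listed.
    "training_phrases": [
        "book a room for today",
        "I need a meeting room for 6 people tomorrow",
        "reserve a room with catering for Friday",
    ],
    # Response template with the extracted parameters filled in.
    "response": "OK, booking a room for $attendees people on $date.",
}
```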

What I really appreciated was that I could see the machine learning behind the scenes (API.AI mentions machine learning but doesn't push it front and centre). The machine learning takes your phrases and applies them to the language recognition. If I define the phrase 'book a room for today', it knows that 'today' is a date and will automatically recognise the phrase 'book a room for tomorrow' as well.
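The effect is that two differently worded requests resolve to the same intent, just with different parameter values. The toy parser below only illustrates that outcome; the real generalisation is done by the platform, and the result structure here is my own invention.

```python
from datetime import date, timedelta

def pretend_parse(phrase, today):
    """Toy stand-in for the agent: map a phrase to an intent plus a date."""
    offset = 1 if "tomorrow" in phrase else 0
    return {
        "intent": "book_room",
        "parameters": {"date": (today + timedelta(days=offset)).isoformat()},
    }

print(pretend_parse("book a room for today", date(2017, 1, 6)))
# {'intent': 'book_room', 'parameters': {'date': '2017-01-06'}}
print(pretend_parse("book a room for tomorrow", date(2017, 1, 6)))
# {'intent': 'book_room', 'parameters': {'date': '2017-01-07'}}
```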

I also suspect that the actual speech recognition is helped by this approach. Unless you are a very clear speaker without an accent (which is not me), the system will typically have multiple interpretations of what was said. So 'book a room' could be interpreted as 'Booker rule'. The machine learning engine will, however, score these interpretations against how well they match the defined phrases (and their automatic derivatives).
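A crude way to picture that re-scoring, assuming that overlap with the defined phrases is what matters, is the sketch below. The real engine is of course far more sophisticated than word overlap; this only shows the principle.

```python
# Toy re-scoring of speech-recognition hypotheses against defined phrases.
TRAINING_PHRASES = [
    "book a room for today",
    "reserve a meeting room",
    "cancel that room",
]

def overlap_score(candidate, phrase):
    """Fraction of the candidate's words that also appear in the phrase."""
    cand, ref = set(candidate.lower().split()), set(phrase.lower().split())
    return len(cand & ref) / len(cand) if cand else 0.0

def best_hypothesis(hypotheses):
    """Pick the hypothesis that best matches any of the defined phrases."""
    return max(
        hypotheses,
        key=lambda h: max(overlap_score(h, p) for p in TRAINING_PHRASES),
    )

print(best_hypothesis(["Booker rule", "book a room"]))  # -> 'book a room'
```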

Where does this lead us?

So I had lots of fun playing with this technology, but it also gave me much to think about. The people at API.AI have certainly demonstrated that AI and machine learning can considerably improve productivity in the software development domain. And this is by no means unique. A year back I played with IBM's Watson Analytics. In data analytics, when you start looking at a problem you spend a lot of time exploring data and building and testing hypotheses. Watson Analytics simply took the data and did a lot of that rather mind-numbing footwork, eliminating the obvious dead ends in a short time and letting you focus on the leads that looked promising.

So my guess is that we are set for a shake-up, and not just in my profession of software engineering. I think all of us will run into narrow AI tools focused on making us more productive. You need to organise a party? Well, your assistant may have a conversation with you and then start making suggestions on how to create a theme, when to start ordering the food and maybe even which music to select for you.

The original problem of universal AI is still with us, and I suspect it will be a long time before we make substantial inroads. Yes, we have seen chess AI and Go AI, but these are still narrow AI systems. To my mind the future lies in the concept of AI as an assistant that augments our abilities and lets us focus on our creative side. Successful AI implementations will be those where the designer understands what needs to be in the domain of the AI and what needs to be in the domain of the human being. A key issue will be the interface between the two. There is also the question of human dignity: we are very finely attuned to the distinction between what is helpful and what is overbearing. Get it wrong and we humans won't touch it.
