So, if it is already so pervasive, why is AI not old news? Why are we discussing it so much now and why are we pondering so excessively over our future with AI?
Three things may explain why it is coming up in nearly every technology strategy and roadmap discussion I am in.
First, AI is still in its absolute infancy. Everyone expects a lot more to come and that it will impact every aspect of our lives. There are huge benefits but there are also risks and pitfalls that we need to navigate. It is therefore only to be expected that people, both from a personal and a business perspective, think about the impact, whether positive or negative. Will AI help in the fight against complex diseases? When will my car be able to drive me to work? What will my future job look like? Can AI solve some of our productivity issues? Can AI help us provide better services to our clients?
Second, there is a sense that now is the time. Over the last few decades we have seen a convergence of huge data growth and a massive increase in available (and affordable) compute power, all while the science behind AI has advanced. So where are we actually now? What is the reality once the hype has died down?
Third, entertainment, popular science and journalism love to talk about the science fiction use cases of AI, so it naturally stays front of mind. The reality is – as per point one above – that we are still very far off most of the scenarios depicted in Ex Machina, Blade Runner, I, Robot and so forth. Most of these scenarios fall under AGI (Artificial General Intelligence, also called General AI or Human-Level AI).
So, what is the status now, in 2018?
The reality is that at the moment we are getting really good at what is often referred to as “Narrow AI” – AI that is programmed to serve a very narrow and specific purpose. This is where there are real advances, lots of available tools and services, and where we can solve a lot of known use cases in today’s world. It is already capable of changing the face (and guts) of many businesses. Harry Shum, Microsoft’s Executive VP, AI and Research, said not too long ago: “AI is going to disrupt every single business app – whether an industry vertical like banking, retail and health care, or a horizontal business process like sales, marketing and customer support.”
It is a fair statement (albeit without a timeline). And we can definitely see the reality behind it creeping into many of the technology strategy and roadmap conversations we are having. Indeed, Microsoft’s three big bets are currently Mixed Reality, Artificial Intelligence and Quantum Computing.
To summarise it from our practical perspective, we look at what we are doing, or are able to do, on top of the Microsoft ecosystem that we depend on for client solutions and products. Naturally, if you work with Amazon, Google, IBM or other platforms, the detail will be different.
Microsoft’s direction in relation to AI is to pursue a vision of democratising its potential. It is taking a “platform” approach, making available a series of services and tools with underlying infrastructure components, all delivered through its cloud technology stack (Azure).
Today, the AI platform consists of three core areas: AI Services, AI Infrastructure and AI Tools.
From ClearPeople’s perspective this puts us in a position now where we can efficiently deliver solutions that provide “smart experiences” (yes, you can call it AI if you like) to the end users.
Here are three straightforward and very practical examples that clients are using today in intelligent digital workspaces we have delivered:
- Q&A Bots: Utilising NLP (Microsoft’s Language Understanding Intelligent Service / LUIS) and the Bot Framework, we are able to rapidly roll out Q&A bots to our clients. Clients are provided with a self-service interface to continuously add more questions and answers, while the NLP and ML service layer trains, learns and improves the matching of questions with answers.
- Automatic Tagging: Our solutions can now include automatic tagging (or suggestions for tagging) of images and other media assets. This can be combined with advanced ranking and scoring against an enterprise taxonomy. It immediately improves the quality of tagging because a) assets get tagged more frequently and b) assets get tagged more consistently.
- Collaboration Explorer: Utilising the Graph API and machine learning services, users are presented with dashboards of relevant collaboration areas (based on the areas the user is in, email communication, social interactions and actual contribution to content).
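To make the automatic-tagging idea above concrete, here is a minimal sketch of how tags suggested by an image-analysis service might be ranked and scored against an enterprise taxonomy. The taxonomy terms, weights and function names are illustrative assumptions, not our actual implementation:

```python
# Hypothetical example: combine a vision service's confidence scores
# with enterprise-taxonomy weights to keep only relevant, well-scored tags.

# Illustrative taxonomy: term -> weight (preferred terms score higher).
ENTERPRISE_TAXONOMY = {
    "contract": 1.0,
    "invoice": 1.0,
    "meeting": 0.8,
    "whiteboard": 0.6,
}

def rank_suggested_tags(suggestions, taxonomy=ENTERPRISE_TAXONOMY, threshold=0.5):
    """Score each (tag, confidence) pair against the taxonomy and
    return tags above the threshold, best first."""
    scored = []
    for tag, confidence in suggestions:
        weight = taxonomy.get(tag.lower())
        if weight is None:
            continue  # drop tags that are not in the enterprise taxonomy
        scored.append((tag, round(confidence * weight, 2)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(tag, score) for tag, score in scored if score >= threshold]

# Tags as they might come back from an image-analysis API response:
suggestions = [("Meeting", 0.92), ("Whiteboard", 0.88), ("Sky", 0.99)]
print(rank_suggested_tags(suggestions))
# → [('Meeting', 0.74), ('Whiteboard', 0.53)]
```

Note how “Sky”, despite the highest raw confidence, is dropped because it is not a taxonomy term – which is exactly why taxonomy-aware scoring makes tagging more consistent.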
The point to make here is that if you unbundle the generic “AI” term and break it down into practical, narrow use cases, there is plenty of technology out there today that allows us to provide real, tangible benefit to end users. Think of it as cool tools that make the digital workspace a smart digital workspace, and which help us do our work better.
Microsoft recently released this one-minute marketing video. Nothing earth shattering or deep in terms of content – but the point is well made. The tools are there, we just need to do something with them.
Personally, I am keen on working with our clients to exploit the fast pace at which cognitive services are being made available (watch out for the next blog on this topic). Cognitive services have a big impact on how we interact with technology and will change how we think about delivering user experiences to end users in their day-to-day work life. Feel free to get in touch!