Artificial intelligence - tailored smart solutions today, true intelligence tomorrow

Digitization

28 April 2016 — The AI technology we see these days is tailored to specific uses and not yet scalable in a generally applicable manner. Nevertheless, AI is rapidly changing our day-to-day lives in a significant way.

Authors
Per Stenius, Client Director
Seoweon Yoo, Business Developer (Alumnus)

This article originally appeared as a post in LG CNS BLOG (www.lgcnsblog.com) on 28.04.2016. Reproduced here with the kind permission of LG CNS.

Artificial Intelligence (AI) has come to the center stage of public attention in Korea since Google’s AlphaGo famously defeated the world’s best human Go player, Lee Sedol. Part of the excitement stems from this being the first major public demonstration in which a computer had to recognize shapes and structures to beat its human counterpart. As such, it is a clear step forward from previous AI achievements in, for example, chess. A flurry of reactions followed in Korea and worldwide, ranging from excitement to fear. The Korean government announced plans to invest 3.5 trillion won (3 billion USD) across the public and private sectors to develop AI[1], and companies that use or plan to use AI have come under intense media spotlight. The fear that AI will take over humanity (a scenario commonly associated with “the singularity”), and how we should devise policies to prevent this, has also become a popular topic of discussion in social media. References are often made to movies depicting a grim future in which humans are overtaken by highly advanced artificial intelligence, such as Ex Machina, Her, The Terminator, or even the classic 2001: A Space Odyssey. All of these movies depict a near future in which artificial intelligence becomes so sophisticated that it no longer finds the need to coexist with humans (ultimately leading to the near extinction of humankind).

Many misconceptions exist, however, about how AI will affect our lives. Although the future is not likely to play out as it does in the movies, the foreseeable future will change in a fundamental and significant way because of AI. AI is rapidly gaining momentum, and we can expect to start seeing these changes now. We will examine these changes in this article, and also look into whether AI is really starting to approach true intelligence, or still remains merely a collection of highly sophisticated but individually tailored single-purpose solutions.

AI innovation enabled by faster processors, Big Data and novel algorithms

AI is “an area of computer science that deals with giving machines the ability to seem like they have human intelligence[2]”. Human intelligence encompasses a wide spectrum of cognitive tasks, ranging from simple rule-based logic, to pattern recognition, to more complex communication-based “human work”[3] (see figure 1). Traditional (non-AI) algorithms are able to carry out some logic-based tasks, such as arithmetic calculation, but they cannot handle more complex tasks such as understanding and summarizing a conversation, as the ability to synthesize is missing. In contrast, AI can encompass more complex, human-like cognitive tasks, such as recognizing images or speech. Examples of such tasks include the face recognition function used by Facebook, or how Gmail automatically detects scheduling requests in emails and adds them to Google Calendar. However, one can rightfully question the level of intelligence some of these tasks require; in many cases the underlying code is still more of a sophisticated traditional algorithm. AI that can handle more complex human work also exists today; consider X.ai, an artificial intelligence personal assistant that schedules meetings by conversing with meeting participants to find the ideal time and location[4]. Just recently, an AI entered and passed the first round of a national literary competition in Japan[5], showing the possibility of AI handling even intricate writing tasks that require a level of creativity.

Figure 1. Varieties of computer information processing[6]

Even though the idea of artificial intelligence has existed for a long time, the technology seems to have advanced suddenly in recent years and is already being integrated into our everyday lives. A good example of this ‘intellectual awakening’ of computers is the error rate on a popular image recognition challenge, shown in figure 2, which dropped dramatically over merely five years. Experts are calling this the “AI big bang”.

Figure 2. Error rates on image recognition 2011 – 2015[7]

The AI big bang can be attributed to three breakthroughs in technology in recent years[8][9].

  1. Cheap parallel computation technology: there have been significant advances in computing power, enabling execution of highly complex calculations extremely fast. The high demand for better video games led to an acceleration in innovation for graphics processing technology. GPUs, or graphics processing units, are chips with extremely high processing power; they are what made high-quality 3D interactive video games possible. However, the utilization of GPUs is not limited to games – more general-purpose computation is possible through GPU-accelerated computing[10]. Compared to CPUs (central processing units), GPUs can carry out vast numbers of similar calculations in parallel much faster. By combining a CPU and a GPU, leaving each to carry out the tasks it is specialized in, a much more powerful computer is achieved, capable of running more complicated algorithms in parallel – somewhat like the human brain’s neural network (a small illustrative sketch follows this list).
  2. Explosion of big data: artificial intelligence, like people, needs constant input and exposure to situations in order to learn and grow. This is why the explosion of big data (explained in detail in our previous blog post on Big Data) is an important reason why AI has started to grow so rapidly.
  3. “Teachable” and “self-learning” algorithms: while computation technology and big data are the underlying elements that help AI flourish, algorithms form the engine that brings AI to life. If we were to compare AI to a human brain, computing power would be the basic brain functionality, big data the experiences and exposure to new things, and the algorithms the IQ. In other words, the algorithms ultimately determine how “smart” an AI will be. With a basic algorithm, a computer has to be “spoon-fed” data, and the output will be proportional to the effort people put in. However, with a cleverer algorithm that enables the computer to “learn” as more data is entered (or as it goes through numerous trials and errors), one gets improved functionality: the program gets smarter with more data. Algorithms are the core of artificial intelligence work. In the following, three of the main concepts are discussed: artificial neural networks, deep learning, and unsupervised learning.
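
To make the role of parallel computation a bit more concrete, below is a small, illustrative Python sketch (our own toy example, not taken from any of the sources above). A neural-network layer boils down to a large matrix multiplication, and the sketch contrasts a naive sequential loop with NumPy’s vectorized routine; GPU libraries push the same idea much further by running thousands of these multiply-adds at once.

```python
import time
import numpy as np

# Toy illustration: a neural-network layer is essentially a matrix multiplication.
# Hardware that runs many multiply-adds in parallel (such as a GPU) speeds this up
# dramatically; here we only contrast a naive Python loop with NumPy's vectorized
# routine, which hands the work to optimized low-level code.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((64, 128))    # 64 samples, 128 input features
weights = rng.standard_normal((128, 32))   # one layer's weight matrix

# Naive, sequential triple loop.
start = time.perf_counter()
out_loop = np.zeros((64, 32))
for i in range(64):
    for j in range(32):
        for k in range(128):
            out_loop[i, j] += inputs[i, k] * weights[k, j]
loop_time = time.perf_counter() - start

# Vectorized matrix multiplication (what GPU libraries accelerate even further).
start = time.perf_counter()
out_vec = inputs @ weights
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s, vectorized: {vec_time:.6f}s")
assert np.allclose(out_loop, out_vec)
```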

Artificial neural networks: algorithms for artificial intelligence have shown significant development in the past few years, building on a new environment with better-than-ever processors and an abundance of data to test on. It started with the realization of “artificial neural networks[11]”. True to their name, artificial neural networks are modeled after the human biological neural network, comprising neurons connected together in an intricate net. Biological neural networks enable a higher level of abstraction in information processing compared to a traditional computer. For example, we can see a few images of a cat and recognize a cat the next time we see a similar image. A similar process happens in artificial neural networks when they are presented with thousands of cat images. The concept of artificial neural networks has existed for several decades already, but only recently has it become practical to apply, thanks to enhanced processing power and sufficiently large data sets.
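
As a rough illustration of the analogy, below is a minimal sketch (our own simplification, not production code) of a single artificial neuron: it weighs its input signals, sums them, and passes the result through an activation function, loosely mimicking how a biological neuron fires once its combined input is strong enough. Real networks simply wire thousands of these units together in layers.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed through a
    non-linear activation, loosely mimicking a biological neuron that fires
    once its combined input signal is strong enough."""
    activation = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-activation))   # sigmoid "firing strength"

# Three input signals (e.g., crude image features) with some learned weights.
x = np.array([0.8, 0.2, 0.5])
w = np.array([1.5, -2.0, 0.7])
b = 0.1

print(neuron(x, w, b))   # output between 0 and 1, like a graded firing rate
```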

Figure 3. Comparison of biological and artificial neural network[12]

Deep learning: to manage the extremely complicated network of different computations, developers came up with the concept of dividing the network into different layers and guiding the flow of information through them. For example, in determining whether a picture includes a cat, there would be different layers of questioning that the artificial brain has to go through before reaching that conclusion. The sequence of questions for identifying a cat could go “Does it have four legs?” → “Is it covered in fur?” → “Does it have long whiskers?” and so on. However, this process is never perfect from the beginning, as the logic may not be accurate all the time. Going back to the example, such an algorithm may say that certain types of dogs are cats, while hairless cats would not be recognized as cats at all. “Deep learning” is the technique developed to overcome this type of limitation[13]. In the AI world, “learning” is essentially a feedback process: the network gets feedback on the result it has produced and self-corrects when the result is wrong. This is achieved by adjusting the weights on the layers (the individual questions asked) so that the network converges on a statistically optimized model that increases the probability of getting the right answer. Compared to a traditional computer algorithm, a neural network can detect when its output is wrong and modify itself so that it gets the right answer next time. This implies that the more data is inputted and validated, the smarter the algorithm becomes.
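
The feedback loop described above can be sketched in a few lines of code. The example below (a deliberately tiny toy with a made-up four-example dataset, not anything from the sources cited here) trains a two-layer network by repeatedly comparing its output to the known right answers and nudging the weights in the direction that reduces the error – the essence of the self-correcting behavior deep learning relies on.

```python
import numpy as np

rng = np.random.default_rng(1)
# Four made-up training examples with their correct answers (the XOR pattern).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network: weights and biases start out random.
W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: push the data through the layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Feedback: how far off is the output from the known right answers?
    error = output - y

    # Self-correction: adjust every weight slightly in the direction that reduces the error.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(np.round(output, 2))  # the outputs should move toward the labels 0, 1, 1, 0
```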

For this type of deep learning to work, two conditions must be fulfilled: the network must be engineered so that it generalizes to new examples, and humans need to intervene by labeling the input data and determining whether the output is right or wrong. For instance, when teaching a network to recognize a cat, the programmer will direct the network to look at certain attributes that distinguish a cat image from other images. When the network returns a result, the programmer will supervise the training by telling the network whether it was right or wrong. The network learns right from wrong by adjusting the weights given to each attribute to minimize the chance of getting the answer wrong. Because of this level of supervision, some argue that the current realization of artificial intelligence is not ‘real’ AI – that the technology is not really a machine that can think on its own like a human, but rather just advanced computation.
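
In practice, this supervision is visible right in the code: the programmer hands the algorithm both the data and the correct labels. Below is a small illustrative sketch using the scikit-learn library (assuming it is installed); the small labeled digits dataset that ships with the library stands in for the cat images.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Supervised learning in miniature: every training image comes with a human-
# provided label (the "right answer"), and the model is corrected against it.
digits = load_digits()                       # small labeled image dataset
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)                  # learning is guided by the labels

print("accuracy on unseen images:", model.score(X_test, y_test))
```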

Unsupervised learning: a newer approach that tries to overcome these shortcomings and get one step closer to ‘real’ AI is unsupervised learning. While supervised learning requires a dataset containing the correct answers (labels), unsupervised learning does not require the correct answers ahead of time[14]. It is like throwing an algorithm out into the wild and letting it figure out the answers for itself. A good comparison is networks that recognize cat images (supervised learning, as in the example above) versus networks that learn to play video games (learning without labeled answers; strictly speaking, the game-playing setup is reinforcement learning, a closely related approach driven by reward signals rather than labels). When teaching a network to ace a video game like Breakout[15] this way, the programmer does not tell the algorithm to avoid dropping the ball, or that the goal is to eliminate all the bricks on the screen. Instead, the algorithm is simply directed to maximize the score. This method has the potential to find new, unconventional solutions that “think outside the box”, as human bias is not used to guide the algorithm. It is expected to be very useful once matured, since most data is not structured or labeled, and it is expected to open AI to new possibilities. Learning without labels could also be used to develop a general AI tool that solves complex “human” problems (also called “real AI”), instead of humans having to tediously assign specific tasks to an algorithm.
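
To give a flavor of learning from a score alone, here is a toy sketch of tabular Q-learning on a made-up five-cell corridor (our own simplified stand-in; the systems that learn real games use far more elaborate deep reinforcement learning). The agent is never told which move is correct – it only receives a reward when it reaches the goal, and gradually works out for itself that moving right maximizes the score.

```python
import random

# A made-up 5-cell corridor: the agent starts in cell 0 and gets a reward of +1
# only upon reaching cell 4. Nobody labels the "correct" move; the agent discovers
# by trial and error that moving right maximizes the score.
N_STATES, ACTIONS = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        if random.random() < 0.2:                       # explore occasionally
            action = random.choice(ACTIONS)
        else:                                           # otherwise exploit, ties broken at random
            best = max(Q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if Q[(state, a)] == best])
        nxt, reward, done = step(state, action)
        target = reward + 0.9 * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += 0.5 * (target - Q[(state, action)])  # nudge toward the score signal
        state = nxt

# After training, the learned policy prefers "right" in every cell on the way to the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```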

Significant business implications, inherent candidates for industry leadership

According to Nvidia’s CEO Jen-Hsun Huang in his keynote at the GPU Technology Conference 2016[16], artificial intelligence will represent a 500B USD opportunity over the next 10 years. He also predicted that AI will become the new platform for computing, replacing all of our “dumb” computers. Erik Brynjolfsson and Andrew McAfee argue in their book, The Second Machine Age, that AI will change all industries as fundamentally as the invention of the steam engine did during the industrial revolution. On a darker note, big names in technology and science, such as Stephen Hawking, Elon Musk, and Bill Gates, have famously raised concerns that AI could bring the end of humanity, or at least be “more dangerous than nuclear weapons[17]”. Whichever way the future actually pans out, it seems evident that AI is going to bring a fundamental and significant change, spanning all industries and aspects of human life.

The exciting (or worrying, depending on how you view it) news is that AI is already part of our everyday life in ways people might not even have noticed. Apple’s Siri and Amazon’s Alexa are AIs that act as personal assistants through voice commands; people now communicate comfortably with a machine through their phone or a speaker. Google’s search bar itself is AI: “RankBrain” is the name of the AI behind googling that makes searching a breeze[18]. An email or text alerting you to abnormal account login or credit card purchase activity is also AI, used for fraud detection. As the technology becomes more intricate and companies come up with new uses for AI, the number of industries touched by AI will increase significantly. Kevin Kelly is quoted as saying “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI[19]”. Applications will replace and augment cognitive tasks currently done by humans, changing the way people work across all industries. Figures 4 and 5 below compare the machine intelligence industry landscape as assessed in 2014 and in 2015. They show that AI is rapidly expanding its reach, across industries as well as functions.

Figure 4. Machine intelligence landscape 2014[20]

Figure 5. Machine intelligence landscape 2015[21]

As seen here, both old and new companies in a wide array of industries are quickly filling in the opportunities presented by the three factors enabling AI: better computing technology, big data, and improved algorithms. Wherever there is a big dataset, AI is now applicable to a certain degree. The caveat is that these applications are still very much objective-driven, and general usage of AI is limited, as they are mostly supervised deep learning AIs. Some even compare modern AI applications to an autistic brain that can show brilliant performance in a very specific field but is not very useful outside of that function.

Artificial intelligence works best (and in some cases is only possible at all) when there is a large amount of good-quality data to work on. Hence, large software companies that already possess large structured datasets have an advantage in AI. It is therefore quite natural that the search engine giant Google, which has practically infinite datasets, is currently leading the pack in artificial intelligence. It is leading so far ahead that oftentimes Google is the only company mentioned when discussing advances in AI these days. Nor is there any evidence that this market lead will be overturned anytime soon, due to Google’s unequivocal advantage in abundance of data and the consequent pull for top talent in the field.

Google has a strong advantage in understanding and responding to text queries, as search engines have been its forte for more than ten years. Another effort is related to its 2014 acquisition of a London-based deep learning company, DeepMind. This is the company behind the famous AlphaGo, and it is focusing on building artificial intelligence with reinforcement learning, an approach that, like unsupervised learning, does without labeled answers: it sets a goal and lets the AI loose to find the best way to reach that goal through trial and error. For example, DeepMind developed an AI that was programmed to find the best way to achieve a high score on Atari games, without being taught how the games work. The AI simply found its own way around each game through trial and error, and was able to score higher than any human player.

Other notable companies placing a high bet on artificial intelligence are all software-related in one way or another – Facebook[22], Microsoft[23] and Amazon[24] have been making impressive strides in advancing the technology; China’s search engine giant Baidu is also a strong contender[25]; and IBM is famously developing its proprietary AI, Watson, for various B2B uses[26]. Nvidia, a GPU company, is also a top contender on the platform level, offering full-stack solutions that encompass hardware, platform, and applications for AI. Digital disruptors in many industries are also picking up artificial intelligence and using it to create revolutionary industry-specific solutions. Examples include Tesla (automotive industry) with its self-driving technology[27], or Uber and Airbnb using artificial intelligence for dynamic pricing and demand forecasting[28][29]. Artificial-intelligence-as-a-service models are also popular, with large companies like Google[30] and IBM[31] jumping in, as well as startups. One notable example is Wit.ai, a natural language understanding startup that provides APIs which can be implemented in any application[32]. This “AI-as-a-service” model is also expected to accelerate the implementation of artificial intelligence applications across all industries.
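
To illustrate what “AI-as-a-service” looks like from a developer’s perspective, here is a purely hypothetical sketch: the endpoint, parameters and response format below are invented for illustration and do not document Wit.ai’s or any other provider’s real API. The point is simply that a single HTTPS call can hand a hard cognitive task, such as extracting intent from a sentence, to a hosted AI service.

```python
import requests

# Hypothetical illustration only: the endpoint, request fields and response shape
# are invented for this sketch and do not document any real provider's API.
API_URL = "https://api.example-nlp-service.com/v1/understand"   # made-up endpoint
API_KEY = "YOUR_API_KEY"                                        # placeholder

def understand(sentence):
    """Send a sentence to a hosted natural-language service and return the
    structured intent and entities it extracts."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": sentence},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# e.g. {'intent': 'schedule_meeting', 'entities': {'day': 'Tuesday', 'time': '3pm'}}
print(understand("Set up a meeting with the design team on Tuesday at 3pm"))
```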

An interesting twist in the landscape of artificial intelligence is that large companies like Google and Baidu are opening up their technology to the public[33][34]. The reason behind this is that companies want to crowdsource the improvement and application of AI. This strategy seeks to ensure that the current top players keep their position as the dominant platforms for AI, while also accelerating improvements and applications of artificial intelligence in many fields. Many startups incorporating AI in various industries were made possible by using tools provided by Google (like Tensorflow[35]) as a base. This forms an innovation ecosystem, in which startups frequently experiment with the technology as well as with business models[36]. They are developing artificial intelligence solutions that aim to solve specific problems with immediate results, like following up on a sales lead, or reading through legal documents and finding only the information relevant to a certain purpose. Meanwhile, large companies are fostering the technology by acquiring startups or attracting top talent to devise their own solutions. Through their more abundant resources – both data-wise and financially – they are tackling solutions that have a higher entry barrier, such as self-driving cars and drones, automated diagnostics, or genomics.
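
As a sense of how little code such a startup needs to write on top of the open platform, below is a minimal sketch using TensorFlow’s current Keras API (the library has evolved considerably since it was open-sourced, so treat this as an assumption about today’s interface rather than the 2016 one): a small digit classifier trained on the MNIST dataset that ships with the library.

```python
import tensorflow as tf

# Minimal sketch: a small image classifier built on the open-source platform.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=2, verbose=1)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```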

Changing the paradigm: from human vs. machine to human plus machine

Once AI unarguably beat humans in chess, some people worried that humans would give up on chess altogether. Instead, a new type of game developed, in which teams of humans working in tandem with AI software – as “cyborgs” – compete against each other, augmenting the thrill of the chess game with even better thought-out moves and an added human touch[37]. It seems likely that the same thing will happen with the game of Go. This symbolically demonstrates what the future of AI could look like: it will perhaps not be machines competing with humans, but humans finding new ways to work together with AI to create better results.

This is possible because AI is not human, although it emulates human intelligence. Humans have an anthropocentric tendency: we tend to imagine the unknown as similar to ourselves and to think of humans as the center of the universe. Because of this, we traditionally believed that AI would take human form and ultimately think “like a human”. However, modern AI is proving to be fundamentally different from human cognition in that it is solely focused on the objective and is not bound by biases or conventions. For example, at one point AlphaGo seemed to make a mistaken move, but in the end it turned out to be a highly calculated strategic move that led to its victory. In other words, ‘thinking like a human’ may not be the optimal way of thinking in terms of achieving our goals, and artificial intelligence may be able to complement that. AI will redefine what ‘being human’ is, as it replaces some cognitive tasks and at the same time highlights others that AI cannot replace.

Modern AI, even though it is merely the starting point, will see a boom over the coming years, and many industry platforms will be converted to AI-based ones, as AI presents productivity benefits unmatched by any pre-existing method. In the longer term, there will be further breakthroughs in algorithms that overcome the current shortfalls of AI (requiring a large amount of data to recognize a pattern, being inflexible in objective-setting, and more), and another shift in industries will ensue. As businesses, and as human beings, we should prepare for this fundamental change, but not fear it, because in the end AI will most likely augment our abilities, not supplant them.

The authors want to thank Milan Minsky for inspiring this article with good questions about whether AI has reached a state of general applicability, or remains more a collection of highly tailored, fit-for-purpose solutions.

Further reading and references:

This blog is based on a broad range of articles, books and reports. Some of the more interesting ones are listed below.

http://www.wired.com/2014/10/future-of-artificial-intelligence/

http://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence

http://www.forbes.com/sites/kevinmurnane/2016/04/01/what-is-deep-learning-and-how-is-it-useful/

http://colah.github.io/posts/2015-08-Understanding-LSTMs/

http://www.kdnuggets.com/2015/01/deep-learning-explanation-what-how-why.html

http://www.thirdway.org/report/dancing-with-robots-human-skills-for-computerized-work

http://www.gizmag.com/creative-ai-computational-creativity-challenges-future/36353/

The Second Machine Age: Work, progress, and prosperity in a time of brilliant technologies, Erik Brynjolfsson and Andrew McAfee, 2014, W.W. Norton & Company

Frank Levy and Richard J. Murnane, The New Division of Labor: How computers are creating the next job market (Princeton University Press, 2004)

Michael Spence, The Next Convergence: The future of economic growth in a multispeed world, (New York: Macmillan, 2011)


[1] http://www.zdnet.com/article/south-korea-promises-3b-for-ai-r-d-after-alphago-shock/
[2] http://www.merriam-webster.com/dictionary/artificial%20intelligence
[3] http://press.princeton.edu/titles/7704.html
[4] https://x.ai/
[5] http://bigthink.com/natalie-shoemaker/a-japanese-ai-wrote-a-novel-almost-wins-literary-award
[6] http://www.thirdway.org/report/dancing-with-robots-human-skills-for-computerized-work
[7] http://blog.clip.mn/2016/06/the-relevance-of-artificial-intelligence-to-digital-video-creation-consumption-and-monetization/
[8] https://www.technologyreview.com/s/513696/deep-learning/
[9] http://www.wired.com/2014/10/future-of-artificial-intelligence/
[10] http://www.nvidia.com/object/what-is-gpu-computing.html
[11] https://arxiv.org/ftp/cs/papers/0308/0308031.pdf
[12] Vinícius Gonçalves Maltarollo, Káthia Maria Honório and Albérico Borges Ferreira da Silva (2013). Applications of Artificial Neural Networks in Chemical Problems, Artificial Neural Networks – Architectures and Applications, Prof. Kenji Suzuki (Ed.), InTech, DOI: 10.5772/51275. Available from: http://www.intechopen.com/books/artificial-neural-networks-architectures-and-applications/applications-of-artificial-neural-networks-in-chemical-problems
[13] http://www.kdnuggets.com/2015/01/deep-learning-explanation-what-how-why.html
[14] http://www.forbes.com/sites/kevinmurnane/2016/04/01/what-is-deep-learning-and-how-is-it-useful/#7f9a7f1d10f0
[15] https://youtu.be/V1eYniJ0Rnk
[16] http://www.gputechconf.com/
[17] http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/
[18] http://www.dailymail.co.uk/sciencetech/article-3485456/Peering-Google-s-RankBrain-Report-gives-glimpse-AI-software-engineers-don-t-fully-know-works.html
[19] http://www.wired.com/2014/10/future-of-artificial-intelligence/
[20] https://medium.com/@shivon/the-current-state-of-machine-intelligence-f76c20db2fe1#.sen68s9rl
[21] https://www.oreilly.com/ideas/the-current-state-of-machine-intelligence-2-0
[22] http://www.psfk.com/2016/03/project-gutenberg-facebook-training-its-ai-using-childrens-stories.html
[23] http://www.informationweek.com/software/productivity-collaboration-apps/microsoft-positions-cortana-skype-ai-as-future-of-communications/d/d-id/1324956
[24] http://finance.yahoo.com/news/amazon-boost-artificial-intelligence-orbeus-154003909.html
[25] https://www.technologyreview.com/s/545486/chinas-baidu-releases-its-ai-code/
[26] http://www.businesscloudnews.com/2016/04/07/pfizer-utilizes-ibm-watson-for-parkinsons-research/
[27] http://fortune.com/2015/12/21/elon-musk-interview/
[28] https://newsroom.uber.com/inferring-uber-rider-destinations/
[29] http://www.businessinsider.com/airbnb-aerosolve-machine-learning-open-source-2015-6
[30] http://www.wired.com/2016/03/google-sharing-powerful-ai-everyone-cloud/
[31] http://www.forbes.com/sites/alexkonrad/2016/01/29/new-ibm-watson-chief-david-kenny-talks-his-plans-for-ai-as-a-service-and-the-weather-company-sale/#6fe8ef617f2c
[32] https://wit.ai/
[33] http://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine/
[34] http://www.wired.com/2016/03/google-sharing-powerful-ai-everyone-cloud/
[35] https://www.tensorflow.org/
[36] https://www.linkedin.com/pulse/why-digitalisation-changes-entrepreneurship-everything-erkko-autio
[37] http://www.wired.com/2014/10/future-of-artificial-intelligence/

Tags
Artificial intelligence