In 1950, Alan Turing, widely considered the father of theoretical computer science and artificial intelligence, published a seminal paper entitled “Computing Machinery and Intelligence” in which he posed a fundamental question: can machines think? In the paper, Turing predicted that by 2000 computers would be able to “think” and successfully answer questions posed by humans in a human-like manner. He was not far off the mark.

The pursuit of artificial intelligence is rooted in the history of philosophy and mathematics, reaching as far back as 1300; however, computer-based artificial intelligence, as we understand it today, arose in the mid-1950s following the publication of Turing’s milestone paper. The quest has been long and difficult: it has driven the development of entirely new areas of study, but it has also suffered many methodological and technological setbacks. After many dead ends and even outright failures, it was not until the early 1990s that machine learning, originally a subfield of artificial intelligence, emerged as the main route toward the development of “thinking” machines.

Machine learning is built on methods and models from statistics and probability theory. It first took off thanks to the possibility of sourcing and sharing information via the Internet and, more recently, has taken giant steps with the opportunities provided by big data, data mining and pattern-recognition algorithms. As the amount of available data continues to grow exponentially, computers are becoming increasingly powerful, capable of learning and of acquiring “experience.” That, in fact, is the essence of machine learning: enabling systems to learn from experience without being explicitly programmed.
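To make the idea of “learning without being explicitly programmed” concrete, here is a minimal sketch in Python: a one-nearest-neighbour classifier. The program contains no hand-written if/else rules for telling the categories apart; it simply labels new inputs by their similarity to previously seen, labelled examples. The data and names here are illustrative, not from the article.

```python
def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

# The system's "experience": labelled examples of (width, height) measurements.
experience = [
    ((1.0, 1.2), "small"),
    ((0.8, 0.9), "small"),
    ((5.1, 4.8), "large"),
    ((6.0, 5.5), "large"),
]

print(nearest_neighbour(experience, (0.9, 1.0)))  # prints "small"
print(nearest_neighbour(experience, (5.5, 5.0)))  # prints "large"
```

Adding more labelled examples improves the classifier without touching the code itself, which is precisely the point: the behaviour comes from the data, not from explicit programming.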

Familiar examples of machine learning applications can be found in the recommendation engines of online shopping and media platforms, which suggest books, songs or movies based on our previous selections. More complex machine learning systems are employed to run customer-care and help-desk phone services, to manage the purchase and sale of financial portfolios, and even to actively promote public safety by monitoring suspicious behaviour or identifying specific vehicles and individuals.

Moreover, machine learning is at the very core of the long-awaited self-driving car revolution, one of its most advanced and complex applications. Self-driving vehicles must not only “understand” the rules of driving and monitor the movements and signals of other cars and of road infrastructure, but must also learn to negotiate exceptions and make split-second decisions.

The good news is that an estimated 90% of road accidents are caused by human error, so computerized self-driving cars are expected to reduce accidents significantly and deliver far greater safety. Nonetheless, the biggest barrier to date to the introduction of self-driving cars is the handling of unexpected events, exceptions and even ethical dilemmas. This, in fact, is one of the new frontiers of machine learning and of the acquisition of “experience” by intelligent devices.

In the meantime, Deloitte’s new report on Technology, Media & Telecommunications Predictions indicates that the first step towards the integration of true artificial intelligence in vehicles will be the adoption of automatic emergency braking systems, which it predicts could save over 6,000 lives a year by 2022.

The report also predicts that, during 2017, over 300 million smartphones will be adopted equipped with state-of-the-art machine-learning neural networks that will vastly enhance a wide range of functions, including indoor navigation, photo classification, augmented reality, speech recognition and translation. Above all, machine learning will be able to accomplish these tasks even with limited or no connectivity.

And, of course, once this technology goes mobile, our vehicles will surely be the next beneficiaries.


For further information: