The beginning of AI and its first breakthroughs

Over the past few years, artificial intelligence has found its way into numerous areas of our lives. Finance, healthcare, retail, education and manufacturing are just a few of the industries that have integrated artificial intelligence capabilities into their businesses.

Clearly, we find ourselves on the brink of the Fourth Industrial Revolution, one that promises to dramatically change the society we live in. The future of mankind will be fundamentally shaped by the role technology plays in our lives and by how well we cope with the transformations this revolution brings. But to fully grasp those implications, one must also understand the premises and beginnings of AI.

The Imitation Game

British mathematician and code-breaker Alan Turing is often considered the father of computer science and artificial intelligence. In 1936, he described the Turing machine, an abstract model of computation that applies a predefined set of rules to derive a result from a set of inputs. The machine consists of a long tape divided into squares, each square holding a single symbol. Following the directions of an instruction table, a read/write head moves along the tape, reading and rewriting one symbol at a time.
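The components Turing described can be sketched in a few lines of code. The machine below, its rules and its task (incrementing a binary number) are invented for illustration; they are not taken from Turing's paper.

```python
# A minimal Turing machine simulator: a tape of symbols, a read/write
# head, and an instruction table mapping (state, symbol) to an action.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Run a Turing machine until it reaches the 'halt' state.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), 0 (stay) or +1 (right).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Instruction table for binary increment: scan to the rightmost digit,
# then turn trailing 1s into 0s until a 0 (or blank) can become 1.
rules = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
    ("carry", "_"): ("1", 0, "halt"),
}

print(run_turing_machine(rules, "1011"))  # 1011 (11) + 1 -> 1100 (12)
```

Despite its simplicity, this tape-and-table model is equivalent in power to any modern computer, which is why it remains the standard reference model in the theory of computation.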

Shot of Benedict Cumberbatch playing Alan Turing in "The Imitation Game" (2014)

Fourteen years later, Turing introduced the Turing Test, also known as the Imitation Game, considered the first attempt to measure whether machines can be called intelligent. According to Turing, a machine could be perceived as intelligent if it could mimic human behavior under specific conditions.

The Turing test involves three terminals physically separated from each other, with one terminal operated by a machine and the other two handled by humans. During the test, one of the humans asks questions, in a specified format and context, of both the other human and the computer. If, after a certain number of such exchanges, the questioner cannot determine which respondent is the human, the machine is considered to exhibit intelligence. Although his test has been criticized by many, Alan Turing remains the one who opened the door to the field that would soon be called AI.

The Dartmouth Workshop

The official birth of AI took place in the summer of 1956, when John McCarthy held the first workshop on artificial intelligence at Dartmouth College. The workshop's full name was the Dartmouth Summer Research Project on Artificial Intelligence, and its purpose was to discuss computers, natural language processing, neural networks, the theory of computation, abstraction and creativity.

MIT scientists, IBM employees and researchers from Carnegie Mellon University who attended the workshop soon became very optimistic about AI's future, and many advances were made in the following years. Between 1964 and 1966, MIT professor Joseph Weizenbaum developed the world's first chatbot, called Eliza. The program was meant to demonstrate the superficiality of communication between humans and machines, using a pattern-matching and substitution methodology.
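The pattern-matching-and-substitution idea is simple enough to show in miniature. The patterns and canned responses below are an invented toy in the spirit of Eliza, not the original DOCTOR script.

```python
import random
import re

# A toy ELIZA-style responder: match the user's sentence against a list
# of regular-expression patterns and substitute the captured fragment
# into a reflected question.
PATTERNS = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

def respond(sentence):
    """Return a reflected question from the first matching pattern."""
    for pattern, templates in PATTERNS:
        match = pattern.search(sentence)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please go on."  # fallback when nothing matches

print(respond("I feel tired today"))  # e.g. "Why do you feel tired today?"
```

The program has no understanding of its input at all, which was exactly Weizenbaum's point: convincing-looking conversation can emerge from purely mechanical text substitution.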

AI winters

However, even though everything seemed to be going in the right direction, interest and funding in AI research decreased significantly around 1974 and remained extremely low until the early 1980s. This period became known as the first AI winter.

Things started to brighten up again when the British government resumed funding AI research, in part to compete with the Japanese Fifth Generation Computer initiative. Another contributing factor was the advent of expert systems, which solved narrowly defined problems from single domains of expertise using vast databanks.
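At their core, many of those expert systems encoded domain expertise as if-then rules and applied them mechanically. The sketch below shows the general idea with an invented, heavily simplified rule base; real systems of the era held thousands of rules.

```python
# A toy rule-based "expert system" in the spirit of the 1980s: knowledge
# is stored as if-then rules, and a forward-chaining loop fires every
# rule whose conditions hold, adding its conclusion as a new fact,
# until no new facts can be derived.

def forward_chain(rules, facts):
    """Derive all conclusions reachable from the initial facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Tiny invented diagnostic rule base: ({conditions}, conclusion)
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

result = forward_chain(rules, {"fever", "cough", "short_of_breath"})
print(sorted(result))
```

The strength and the weakness of this approach are the same: the system is only as good as the hand-written rules, which is why expert systems worked well in narrow domains but never generalized.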

Yet, after a couple of financial setbacks, AI found itself in its second winter, which lasted until the early ’90s. Since then, the field of artificial intelligence has been extremely well funded and significant breakthroughs have been made.

AI vs. humans in different games

In 1997, artificial intelligence reached a major turning point. On May 11th, a computer built by IBM, known as Deep Blue, defeated world chess champion Garry Kasparov over six games, 3½–2½. While some have argued that the system used to beat Kasparov wasn’t technically ‘intelligent’, there is no denying it out-performed a human at analyzing chess moves and picking the right one. Deep Blue evaluated around 200 million positions a second at an average search depth of 8–12 plies, whereas human players are generally thought to examine around 50 candidate moves to various depths.
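To see what "ply depth" means, consider the classic minimax recursion: one ply is one half-move by one player, so an 8-12 ply search looks 8-12 half-moves ahead. The sketch below runs on an invented abstract game tree; Deep Blue itself used a far more elaborate alpha-beta search with dedicated chess-evaluation hardware.

```python
# Toy fixed-depth minimax search on an abstract game tree.

def minimax(node, depth, maximizing, children, evaluate):
    """Score `node` by searching `depth` plies ahead.

    children(node) lists successor positions; evaluate(node) scores a
    position from the maximizing player's point of view.
    """
    moves = children(node)
    if depth == 0 or not moves:
        return evaluate(node)
    scores = (minimax(m, depth - 1, not maximizing, children, evaluate)
              for m in moves)
    # The maximizing player picks the best outcome, the opponent the worst.
    return max(scores) if maximizing else min(scores)

# Invented example tree: each "position" is a number, the "moves" from n
# lead to n*2 and n*2+1, and the evaluation is just the number itself.
children = lambda n: [n * 2, n * 2 + 1] if n < 8 else []
evaluate = lambda n: n

print(minimax(1, 3, True, children, evaluate))  # 13
```

Each extra ply multiplies the number of positions to examine by the branching factor, which is why Deep Blue's 200 million evaluations per second were needed to reach 8-12 plies in tournament time.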

Deep Blue was the first machine to beat a reigning world chess champion

Since it was obvious that AI could beat humans at chess, there was growing interest in whether computers could beat humans at other games too. In 2016, a computing system developed by Google researchers defeated a top human player at the game of Go, an ancient Asian board game that demands strategy and intuition and is exponentially more complex than chess. The AI system, AlphaGo, matched its artificial wits against 18-time world champion Lee Se-dol and won, using deep neural networks and machine learning. The moment represented yet another milestone in the development of AI.

Fast forward to 2017: Google’s AI system won its second game against the world’s best Go player, 19-year-old Ke Jie of China. But this was only a first step, and shortly after that victory, researchers repurposed the system to conquer chess without human help. The repurposed program, called AlphaZero, needed only four hours of training to learn chess well enough to beat the world-champion chess program, Stockfish 8, in a 100-game match. According to its creators, the difference between AlphaZero and its competitors is that its machine-learning approach is given no human input apart from the basic rules of chess; the rest it works out by playing against itself over and over, reinforcing what it learns.

The first self-driving car

It might sound unbelievable to some of you, but the first self-driving car dates back to 1995, when Mercedes-Benz managed to drive a modified S-Class mostly autonomously for 1,678 kilometers (1,043 miles) from Munich to Copenhagen. The steering, throttle and brakes were controlled through computer commands based on a real-time evaluation of image sequences captured by four cameras with different focal lengths for each hemisphere. Since then, Mercedes-Benz has developed an in-house R&D program, culminating in the S 500 Intelligent Drive near-production concept car, based on the W222 generation of the “Sonder Klasse.”


But games and self-driving cars are not the only areas in which artificial intelligence has made an impact. From virtual assistants like Siri and Alexa to chatbots created by Facebook and Drift, AI is having a significant impact on people's lives. Major breakthroughs have also been made in healthcare, where machine learning, a subfield of artificial intelligence, can process large amounts of information in the blink of an eye. The same goes for payments and e-commerce, where pattern recognition can be used to identify fraudulent transactions.

No matter the field of business, it is clear that artificial intelligence can make a significant contribution and influence the way we live, interact and work. AI has been around for more than 60 years and has definitely had its ups and downs. Some might believe these prolific years represent just another boom period that will soon be followed by another winter. Yet the speed of change, the existing technological possibilities and the views of human experts all suggest that this time AI is here to stay. Machines today are capable of things we could not even have imagined 20-30 years ago. So, regardless of what we think of AI, the best, and probably the only, choice is to embrace the revolution.