History and AI

Before jumping into the history, let us first define AI (Artificial Intelligence) in its simplest form –

"Anything that can think or act like human"

1936

People were fascinated with the idea of building machines that could think and act like humans. The idea of machine intelligence sparked in the human mind long before actual digital computers were invented. In 1936, Alan Turing first proposed the idea of a hypothetical thinking machine, now called the Turing machine. This concept showed a way to embed intelligence into a machine through a set of defined rules.
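
To make this concrete, here is a minimal sketch (my own illustration, not from Turing's paper) of a machine whose entire behaviour comes from a table of rules. The rule table below simply flips the bits of a binary string; the state names, tape encoding, and the use of Python are illustrative assumptions.

```python
# A hypothetical illustration: the machine's whole "behaviour" is a rule
# table of (state, symbol) -> (symbol to write, direction to move, next state).
def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        if (state, symbol) not in rules:
            break                                  # no rule defined: stop
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Rule table (illustrative): flip every bit, then halt at the first blank cell.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules))          # -> 01001
```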

1950

Machine intelligence, being a promising and unexplored field, caught the interest of several curious minds and started booming. Intelligence was understood to be rule-obeying behaviour, but we did not know what those rules were. This raised the question of how to define intelligence exactly. And the answer is, we couldn't. But Turing played the imitation game. In 1950, Alan Turing devised a test (called the Turing test) to measure a machine's ability to imitate human intelligence: a human interacts with both a machine and another human, and if he/she cannot distinguish between the two, we say the machine has passed the Turing test and hence exhibits intelligence equivalent to that of a human.

1955

John McCarthy, known as the Father of AI, coined the term Artificial Intelligence (AI). He defined AI as the science and engineering of making intelligent machines. His work mainly focused on understanding human consciousness and our self-awareness of the choices we have. He tried to mimic the same in AI as much as possible and produced interesting work on robot consciousness and free will.

1958

A little later, people started realizing the importance of the biological structure of the human brain in contributing to intelligence. Inspired by the biology of neurons and a basic understanding of how they connect to each other in a complex system, Frank Rosenblatt created the perceptron. It was the simplest neural network, with only one layer connecting inputs to outputs. That sparked the beginning of neural networks in the history of Machine Learning (ML).
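
For readers curious how simple the original idea was, here is a minimal sketch of a Rosenblatt-style perceptron. The AND-gate data, learning rate, and epoch count are illustrative choices of mine, not details from Rosenblatt's work.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """One layer of weights from inputs to output, trained with the classic
    perceptron rule: adjust the weights only when a prediction is wrong."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0      # step activation
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Illustrative data: learn the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])    # -> [0, 0, 0, 1]
```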

1974

Soon, machines were able to solve logical problems such as playing checkers or doing algebra, which were considered 'hard' for a normal human being. Leading researchers assumed this was a sign of intelligence that could easily imitate humans. But they were wrong. The 'hard' logical problems turned out to be easier to solve than the 'easy' problems of imitating the skills of a one-year-old baby in terms of mobility and perception. This observation is famously known as "Moravec's paradox". Without realizing the complexity involved in basic human tasks like walking or picking up a pencil, people hyped the potential of AI to imitate human behavior. As the true nature of these problems became clear, the field suffered its first downfall (1974-1980), and its value collapsed in the eyes of government bureaucrats. Research funding in AI was cut back in what is termed the first AI winter of the 20th century.

1987

Despite the criticism, a few researchers continued to make advances. They realized the effort required to imitate humans. Tasks like picking up an object or walking are easy for humans to learn because the ability is encoded in us through billions of years of evolution and experience, but encoding the same information into machines is one of the hardest engineering problems to solve. Expert systems such as XCON, along with specialized LISP machines, were developed to solve such narrow AI problems. But by 1987, companies like Apple and IBM had advanced desktop computers; these became cheaper than the specialized AI hardware, which led to the collapse of the AI hardware market. This caused another downfall in AI history (1987-1993) – the second AI winter.

2000

Soon after, desktop computers became the standard hardware for AI, and researchers returned to neural networks. The beginning of the 21st century has been one of the most impactful periods in the history of AI and ML. Between 2000 and 2010, "Deep Learning" exploded, particularly through Convolutional Neural Networks (CNNs) in computer vision and through speech recognition in Natural Language Processing (NLP). CNNs proved useful in facial recognition, self-driving cars, document recognition, weather prediction, and more, while speech recognition gave rise to Apple's Siri, Google Assistant, Amazon's Alexa, and many others. This idea of using many layers of neurons to imitate the connections of the human brain revolutionized the AI industry, and with the introduction of fast GPUs, the implementation of deep neural networks became far more efficient.
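
As a rough illustration of the "many layers" idea, here is a minimal sketch of a tiny convolutional network. PyTorch is my choice of library here, and the layer sizes and 10-class output for 28x28 grayscale images are illustrative assumptions, not a specific model from this history.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A tiny stack of convolutional layers followed by a linear classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # learn local patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)                              # stacked layers of "neurons"
        return self.classifier(x.flatten(1))              # one score per class

model = TinyCNN()
dummy = torch.randn(1, 1, 28, 28)                         # one fake grayscale image
print(model(dummy).shape)                                 # -> torch.Size([1, 10])
```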

2015

AlphaGo, developed by Google DeepMind, beat a professional champion at the board game 'Go' in 2015 and went on to defeat the world champion Lee Sedol the following year. It was a historic moment, since it had been believed that a machine would never beat a human at such a complex game. To give an idea of its complexity, the number of possible configurations in a game of Go is more than the number of atoms in the universe. Terrific, right?! This boosted researchers' confidence in the vast potential of Deep Learning.

2021

DALL-E was born, followed by DALL-E 2 a year later. These are AI models that can create mind-blowing images from a text description provided by the user. They have cracked the code of generating images that never existed before, and they can also recreate an existing image with new variations. This shows how far AI has come in building relations between text descriptions and images.

AI has a very rich history of downfalls and misunderstandings. Still, it has progressed at an exponential pace since the beginning of the 21st century. There is much more to see and understand about how these models predict or generate outputs that we never thought possible. The term "Explainable AI", coined by DARPA, has gained a lot of popularity over the past three years. It provides a set of tools to interpret the results of AI models, and in my opinion it is really important that we understand the world from the perspective of AI in order to build trust in it. I believe the progress in AI will soon bring surprising things to the table. As Andrew Ng says in one of his talks, "AI is the new electricity" – it could really change the way we live in the coming years…
