What is the history of artificial intelligence (AI)?

It may sometimes feel like AI is a recent development in technology. After
all, it’s only become mainstream to use in the last several years, right?
In reality, the groundwork for AI began in the early 1900s. And although
the biggest strides weren’t made until the 1950s, they wouldn’t have been
possible without the work of early experts in many different fields.

Knowing the history of AI is important in understanding where AI is now and
where it may go in the future. In this article, we cover all the major
developments in AI, from the groundwork laid in the early 1900s, to the
major strides made in recent years.

What is artificial intelligence?

Artificial intelligence is a specialty within computer science concerned
with creating systems that can replicate human intelligence and
problem-solving abilities. These systems do so by taking in a myriad of
data, processing it, and learning from past experience in order to
streamline and improve in the future. *A conventional computer program, by
contrast, needs human intervention to fix bugs and improve its processes.
[Daily review is needed to keep straight where human involvement is a
must and, if so, where the job losses will be.]*
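The "learning from data" idea can be sketched in a few lines of Python.
This is a toy illustration only: the data points, learning rate, and model
are invented for the example, not taken from any real system.

```python
# Toy example: a model that improves by learning from past data.
# It fits y = w * x to noisy observations of roughly y = 2x,
# adjusting w a little after each observed error (gradient descent).

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]  # (x, y) pairs

w = 0.0            # initial guess for the model parameter
learning_rate = 0.01

for epoch in range(200):
    for x, y in data:
        error = w * x - y               # how wrong the current model is
        w -= learning_rate * error * x  # nudge w to reduce that error

print(round(w, 1))  # close to 2.0: the program "learned" the pattern
```

No human rewrote the program between epochs; the same code simply got
better as it processed more of its past data.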

The history of artificial intelligence:

The idea of “artificial intelligence” goes back thousands of years, to
ancient philosophers considering questions of life and death. In ancient
times, inventors made things called “automatons” which were mechanical and
moved independently of human intervention. The word “automaton” comes from
ancient Greek, and means “acting of one’s own will.” One of the earliest
records of an automaton comes from 400 BCE and refers to a mechanical
pigeon created by a friend of the philosopher Plato. Many years later, one
of the most famous automatons was created by Leonardo da Vinci around the
year 1495.

So while the idea of a machine being able to function on its own is
ancient, for the purposes of this article, we’re going to focus on the 20th
century, when engineers and scientists began to make strides toward our
modern-day AI.

Groundwork for AI: 1900-1950

In the early 1900s, there was a lot of media created that centered
around the idea of artificial humans. So much so that scientists of all
sorts started asking the question: is it possible to create an artificial
brain? Some inventors even built versions of what we now call “robots”
(though the word was coined in a Czech play in 1921), most of them
relatively simple. These were steam-powered for the most part, and some
could make facial expressions and even walk.

Dates of note:

1921: Czech playwright Karel Čapek released a science fiction play
“Rossum’s Universal Robots” which introduced the idea of “artificial
people” which he named robots. This was the first known use of the word.

1929: Japanese professor Makoto Nishimura built the first Japanese robot,
named Gakutensoku.

1949: Computer scientist Edmund Callis Berkeley published the book “Giant
Brains, or Machines that Think” which compared the newer models of
computers to human brains.

Birth of AI: 1950-1956

This range of time was when interest in AI really came to a head. Alan
Turing published his paper “Computing Machinery and Intelligence,” which
proposed a test of machine intelligence that came to be known as the
Turing Test, used by experts to measure computer intelligence. The term
“artificial intelligence” was coined and came into popular use.

Dates of note:

1950: Alan Turing published “Computing Machinery and Intelligence,” which
proposed a test of machine intelligence called the imitation game.

1952: Computer scientist Arthur Samuel developed a checkers-playing
program, the first program ever to learn a game independently.

1955: John McCarthy coined the term “artificial intelligence” in a
proposal for a workshop held at Dartmouth, which is how the phrase came
into popular usage.

AI maturation: 1957-1979

The time between when the phrase “artificial intelligence” was created, and
the 1980s was a period of both rapid growth and struggle for AI research.
The late 1950s through the 1960s was a time of creation. From programming
languages that are still in use to this day to books and films that
explored the idea of robots, AI became a mainstream idea quickly.



The 1970s saw similar advances, from the first anthropomorphic robot
built in Japan to the first example of an autonomous vehicle built by an
engineering grad student. However, it was also a time of struggle for AI
research, as the U.S. government showed little interest in continuing to
fund it.

Notable dates include:

1958: John McCarthy created LISP (acronym for List Processing), the first
programming language for AI research, which is still in popular use to this
day.

1959: Arthur Samuel coined the term “machine learning” while describing
how machines could be taught to play checkers better than the humans who
programmed them.

1961: The first industrial robot, Unimate, started working on an assembly
line at General Motors in New Jersey, tasked with transporting die
castings and welding parts onto cars (work deemed too dangerous for
humans).

1965: Edward Feigenbaum and Joshua Lederberg created the first “expert
system” which was a form of AI programmed to replicate the thinking and
decision-making abilities of human experts.

1966: Joseph Weizenbaum created the first “chatterbot” (later shortened
to chatbot), ELIZA, a mock psychotherapist that used natural language
processing (NLP) to converse with humans.

1968: Soviet mathematician Alexey Ivakhnenko published “Group Method of
Data Handling” in the journal “Avtomatika,” which proposed a new approach
to AI that would later become what we now know as deep learning.

1973: Applied mathematician James Lighthill delivered a report to the
British Science Research Council stating that progress in AI was far less
impressive than scientists had promised, which led to much-reduced
support and funding for AI research from the British government.

1979: The Stanford Cart, created by James L. Adams in 1961, became one of
the first examples of an autonomous vehicle. In ‘79, it successfully
navigated a room full of chairs without human interference.

1979: The American Association for Artificial Intelligence, now known as
the Association for the Advancement of Artificial Intelligence (AAAI),
was founded.


AI boom: 1980-1987

Most of the 1980s was a period of rapid growth and interest in AI, now
labeled the “AI boom.” This came from both breakthroughs in research and
additional government funding to support researchers. Deep learning
techniques and expert systems became more popular, both of which allowed
computers to learn from their mistakes and make independent decisions.

Notable dates in this time period include:

1980: First conference of the AAAI was held at Stanford.

1980: The first expert system came into the commercial market, known as
XCON (expert configurer). It was designed to assist in the ordering of
computer systems by automatically picking components based on the
customer’s needs.

1981: The Japanese government allocated $850 million (over $2 billion in
today’s money) to the Fifth Generation Computer project, aiming to create
computers that could translate, converse in human language, and express
reasoning on a human level.

1984: The AAAI warned of an incoming “AI Winter,” in which funding and
interest would decrease and make research significantly more difficult.

1985: An autonomous drawing program known as AARON is demonstrated at the
AAAI conference.

1986: Ernst Dickmanns and his team at Bundeswehr University Munich
created and demonstrated the first driverless car (or robot car). It
could drive up to 55 mph on roads without other obstacles or human
drivers.

1987: Commercial launch of Alacrity by Alactrious Inc. Alacrity was the
first strategy managerial advisory system, and used a complex expert system
with 3,000+ rules.

AI winter: 1987-1993

As the AAAI warned, an AI Winter came. The term describes a period of low
consumer, public, and private interest in AI which leads to decreased
research funding, which, in turn, leads to few breakthroughs. Both private
investors and the government lost interest in AI and halted their funding
due to high cost versus seemingly low return. This AI Winter came about
because of some setbacks in the machine market and expert systems,
including the end of the Fifth Generation project, cutbacks in strategic
computing initiatives, and a slowdown in the deployment of expert systems.

Notable dates include:

1987: The market for specialized LISP-based hardware collapsed due to
cheaper and more accessible competitors that could run LISP software,
including those offered by IBM and Apple. This caused many specialized LISP
companies to fail as the technology was now easily accessible.

1988: A computer programmer named Rollo Carpenter invented the chatbot
Jabberwacky, which he programmed to provide interesting and entertaining
conversation to humans.

AI agents: 1993-2011

Despite the lack of funding during the AI Winter, the early 90s showed some
impressive strides forward in AI research, including the introduction of
the first AI system that could beat a reigning world champion chess player.
This era also saw early examples of AI agents in research settings, as well
as the introduction of AI into everyday life via innovations such as the
first Roomba and the first commercially available speech recognition
software on Windows computers.

The surge in interest was followed by a surge in funding for research,
which allowed even more progress to be made.

Notable dates include:

1997: Deep Blue (developed by IBM) beat world chess champion Garry
Kasparov in a highly publicized match, becoming the first program to beat
a human chess champion.

1997: Speech recognition software developed by Dragon Systems was
released for Windows.

2000: Professor Cynthia Breazeal developed Kismet, the first robot that
could simulate human emotions with its face, which included eyes,
eyebrows, ears, and a mouth.

2002: The first Roomba was released.

2003: NASA landed two rovers on Mars (Spirit and Opportunity), and they
navigated the surface of the planet without human intervention.

2006: Companies such as Twitter, Facebook, and Netflix started utilizing AI
as a part of their advertising and user experience (UX) algorithms.

2010: Microsoft launched the Xbox 360 Kinect, the first gaming hardware
designed to track body movement and translate it into gaming directions.

2011: Watson, a question-answering computer created by IBM that used NLP,
won Jeopardy against two former champions in a televised game.

2011: Apple released Siri, the first popular virtual assistant.

Artificial General Intelligence: 2012-present

That brings us to the most recent developments in AI, up to the present
day. We’ve seen a surge in common-use AI tools, such as virtual assistants,
search engines, etc. This time period also popularized deep learning and
big data.

Notable dates include:

2012: Two researchers from Google (Jeff Dean and Andrew Ng) trained a
neural network to recognize cats by showing it unlabeled images and no
background information.

2015: Elon Musk, Stephen Hawking, and Steve Wozniak (along with over
3,000 others) signed an open letter urging the world’s governments to ban
the development (and later, use) of autonomous weapons for purposes of
war.

2016: Hanson Robotics created a humanoid robot named Sophia, who became
known as the first “robot citizen” and was the first robot created with a
realistic human appearance and the ability to see and replicate emotions,
as well as to communicate.

2017: Facebook programmed two AI chatbots to converse and learn how to
negotiate, but as they went back and forth they ended up forgoing English
and developing their own language, completely autonomously.

2018: Chinese tech group Alibaba’s language-processing AI outperformed
humans on a Stanford reading comprehension test.

2019: Google DeepMind’s AlphaStar reached Grandmaster in the video game
StarCraft 2, outperforming all but 0.2% of human players.

2020: OpenAI started beta testing GPT-3, a model that uses deep learning
to create code, poetry, and other such language and writing tasks. While
not the first of its kind, it was the first whose output was almost
indistinguishable from that created by humans.

2021: OpenAI introduced DALL-E, which can generate images from
natural-language descriptions, moving AI one step closer to understanding
the visual world.

What does the future hold?

Now that we’re back to the present, there is probably a natural next
question on your mind: so what comes next for AI?

Well, we can never entirely predict the future. However, many leading
experts talk about the possible futures of AI, so we can make educated
guesses. We can expect to see further adoption of AI by businesses of all
sizes, changes in the workforce as more automation eliminates and creates
jobs in equal measure, more robotics, autonomous vehicles, and so much more.

Automates repetitive learning and discovery through data. Instead of
automating manual tasks, artificial intelligence performs frequent,
high-volume, computerized tasks. And it does so reliably and without
fatigue. Of course, humans are still essential to set up the system and ask
the right questions.

Adds intelligence to existing products. Many products you already use will
be improved with artificial intelligence capabilities, much like Siri was
added as a feature to a new generation of Apple products. Automation,
conversational platforms, bots and smart machines can be combined with
large amounts of data to improve many technologies. Upgrades at home and
in the workplace range from security intelligence and smart cams to
investment analysis.

Adapts through progressive learning algorithms to let the data do the
programming.
Artificial intelligence finds structure and regularities in data so that
algorithms can acquire skills. Just as an algorithm can teach itself to
play chess, it can teach itself what product to recommend next online. And
the models adapt when given new data.
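A minimal sketch of this idea, behavior driven by accumulated data rather
than hand-written rules, might look like the following Python; the product
names and purchase events are invented for illustration:

```python
# Toy sketch of "letting the data do the programming": a recommender whose
# behavior is defined entirely by counts it updates as new data arrives.
from collections import Counter

purchase_counts = Counter()

def observe(product):
    """Learn from one new data point (a purchase)."""
    purchase_counts[product] += 1

def recommend():
    """Recommend whatever product the data currently supports most."""
    return purchase_counts.most_common(1)[0][0]

for product in ["chess set", "novel", "chess set"]:
    observe(product)

print(recommend())  # "chess set" today; the answer shifts as new data arrives
```

Nothing in the code names a favorite product; feed it different purchase
data and the same program recommends something else.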

Analyzes more and deeper data using neural networks that have many hidden
layers. Building a fraud detection system with five hidden layers used to
be impossible. All that has changed with incredible computer power and big
data. You need lots of data to train deep learning models because they
learn directly from the data.
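As a rough structural sketch (not a real fraud-detection model), a network
with five hidden layers is just a chain of weighted sums and
nonlinearities; the layer sizes and random weights below are placeholders
that a real system would learn from data:

```python
# Illustrative only: an input passing through five hidden layers.
import math
import random

random.seed(0)
layer_sizes = [4, 8, 8, 8, 8, 8, 1]  # input, five hidden layers, output

# One random weight matrix per layer-to-layer transition.
weights = [
    [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
]

def forward(x):
    for w in weights:
        # Each neuron: weighted sum of the previous layer, then tanh.
        x = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]
    return x[0]  # final score, squashed into (-1, 1)

score = forward([0.2, -0.1, 0.7, 0.3])
print(-1.0 <= score <= 1.0)  # True
```

The structure is trivial to write down; what needed "incredible computer
power and big data" was learning good values for those weight matrices.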

Achieves incredible accuracy through deep neural networks. For example,
your interactions with Alexa and Google are all based on deep learning. And
these products keep getting more accurate the more you use them. In the
medical field, AI techniques from deep learning and object recognition can
now be used to pinpoint cancer on medical images with improved accuracy.

Gets the most out of data. When algorithms are self-learning, the data
itself is an asset. The answers are in the data – you just have to apply
artificial intelligence to find them. With this tight relationship between
data and AI, your data becomes more important than ever. If you have the
best data in a competitive industry, even if everyone is applying similar
techniques, the best data will win. But using that data to innovate
responsibly requires trustworthy AI. And that means your AI systems should
be ethical, equitable and sustainable.

HENCE, I BEG TO DIFFER: where the population is growing in geometric
progression, employment can keep pace only through fast machines provided
by the human brain.  K RAJARAM IRS  5825

On Fri, 15 Aug 2025 at 14:44, Venkatachalam Subramanian <
[email protected]> wrote:

> Sure! I can provide an overview of Artificial Intelligence (AI) and how
> you can get started learning about it.
> What is Artificial Intelligence?
> Artificial Intelligence is the creation of computer systems with the
> ability to think, act, and learn like humans. Its main goal is to build
> software and machines that can perform complex tasks on their own, make
> decisions, and learn new things.
> It has several branches, the most important of which are:
> Machine Learning (ML): A subfield of AI in which computers are trained
> on data to learn on their own and to predict future events or make
> decisions. For example, detecting whether an email is spam or not.
> Deep Learning (DL): A specialized branch of machine learning. It works
> with a structure modeled on the neural networks of the human brain, and
> is widely used for complex tasks such as image recognition and voice
> recognition.
> Natural Language Processing (NLP): Helps computers understand, process,
> and generate human language. Examples include Google Translate, Siri,
> and Alexa.
> Computer Vision: The ability of computers to understand images and
> videos. Face detection and vehicle recognition are examples.
> How do you start learning?
> There are many ways to learn AI. You can choose the path that suits you
> best:
> Build a foundation in mathematics: Mathematics is essential for a good
> understanding of AI, particularly a solid grounding in linear algebra,
> statistics, and probability.
> Learn a programming language: Python is the most popular language for
> AI, thanks to its simple syntax, its many libraries, and its broad
> support. Python libraries such as NumPy, Pandas, Scikit-learn,
> TensorFlow, and PyTorch are especially useful.
> Use online courses and tutorials: Platforms such as Coursera, edX, and
> Udacity offer many high-quality courses on AI and machine learning.
> Some popular courses:
> "Machine Learning" by Andrew Ng (Coursera): A great starting point for
> AI.
> "Deep Learning Specialization" by Andrew Ng (Coursera): A comprehensive
> course on deep learning.
> Try small projects: After studying the theory, it is very important to
> put it into practice. You can gain hands-on experience by building
> small projects using datasets from platforms such as Kaggle.
> Read books and articles: Books such as "An Introduction to Statistical
> Learning" help you understand the theory in depth. Reading new AI
> research papers will also keep you up to date with the latest advances.
> Future opportunities for AI
> AI is used in many fields today, and its contribution will grow in many
> more in the future:
> Healthcare: disease diagnosis, drug discovery.
> Finance: fraud detection, stock-market prediction.
> Transportation: autonomous vehicles.
> Agriculture: automated crop management.
> I hope this information has given you a clear understanding of
> artificial intelligence. Which area would you like to focus on, or
> which specific topic would you like to learn more about?
>
> ---------- Forwarded message ---------
> From: Venkatachalam Subramanian <[email protected]>
> Date: Thu, 14 Aug, 2025, 5:34 pm
> Subject:
> To: Bhagya lakshmi <[email protected]>
>
>
> https://g.co/gemini/share/45159e3cd882
>
> --
> You received this message because you are subscribed to the Google Groups
> "Thatha_Patty" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion visit
> https://groups.google.com/d/msgid/thatha_patty/CAJgp%3Ddv9CTvamYdJX9tXp_6-7oGBQS3uQzrrjEYaktht6uDa%2Bw%40mail.gmail.com
> <https://groups.google.com/d/msgid/thatha_patty/CAJgp%3Ddv9CTvamYdJX9tXp_6-7oGBQS3uQzrrjEYaktht6uDa%2Bw%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>
