Demystifying Artificial Intelligence: Tracing its Origins, Types, and Everyday Marvels

"I think the biggest mistake we could make is to underestimate the potential of artificial intelligence." - Stephen Hawking, physicist

"Artificial intelligence is a fundamental technology that will transform every aspect of our lives." - Bill Gates, entrepreneur

"The first ultraintelligent machine is the last invention that man need ever make." - I. J. Good, mathematician

Introduction: Why I am Curious About AI

When you search the web for what people say about artificial intelligence, you find an endless flow of quotes. Many express fear or caution, while others are curious and inspiring or simply aim to present 'objective' facts, which can also be unsettling. One common thread emerges: almost all of them share the belief that AI will have a significant impact on our society. It's striking how this shared belief underpins the discussions surrounding artificial intelligence.

But what might that look like? Should I be afraid of the potential impact of AI? Will this lead humanity into a new golden age or destroy us? How advanced is AI today, and how and when did development and research start? I know Google has been working on this for a long time, and they certainly have the money to invest.

“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.” - Larry Page, co-founder of Google

They seem to be getting good at it, too. This article from the Washington Post is quite astounding. I really suggest reading the included Google Doc if you find the time.

It is likely that as time progresses and AI becomes more a part of our daily lives, the fear of the unknown will increase as well, largely driven by the media. But is this justified, or is it just a means to get more attention from the people who develop AI products for commercial use?

I think the best way to know is to understand the matter for oneself. That is my quest. To address these questions, I've chosen two distinct approaches. The first is trying to learn and make use of the technical aspects of AI and machine learning, hence my plan to learn Python. The second is trying to learn where AI came from (history) and where it might be going (philosophy). This is what I will start with right here and now.

The Basic Concept of AI

Definition of AI

What is artificial intelligence? The Oxford English Dictionary defines it as the capacity of computers or other machines to exhibit or simulate intelligent behavior, or the field of study concerned with this. Webster's defines it as a branch of computer science dealing with the simulation of intelligent behavior in computers, or the capability of a machine to imitate intelligent human behavior. In essence, it refers to technology that imitates human intelligence.

History of AI

The idea of artificial intelligence dates back quite a long time - even to ancient times, with myths and stories of artificial beings endowed with intelligence or consciousness by master craftsmen. Think of the golem or Frankenstein’s monster, for example. I found a very interesting website on the detailed history of AI if you want to dig deeper. Here is a short overview or summary of more “recent” major developments for the impatient:

  • In the 17th century, the French philosopher RenĂ© Descartes argued that animals were simply machines, though he famously doubted that any machine could genuinely think or use language the way humans do.
  • In the 19th century, the British mathematician Ada Lovelace wrote about the possibility of creating a machine that could generate music. - I just read about Ada Lovelace with my daughter in a wonderful book. She essentially pioneered computer programming by writing what is considered the first computer program - very cool. Here is the affiliate link to the book if you want to help me get filthy rich: Good Night Stories Rebel Girls 1.
  • In 1950, the British mathematician Alan Turing published a paper that laid the theoretical foundations for artificial intelligence. The paper, "Computing Machinery and Intelligence", introduces the Turing test, a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
  • Modern AI research arose in the 1950s; there was a surge of interest in artificial intelligence, and many early AI programs were developed. Look up the Logic Theorist program, for instance. Also, the Dartmouth Summer Research Project on Artificial Intelligence was held in 1956, which is considered to be the founding event of the field of artificial intelligence.
  • In the 1960s and 1970s, development around AI slowed down. Many of the early programs failed to live up to expectations, and funding was greatly reduced. ELIZA, the first chatbot, is certainly an exception to this. In addition, DENDRAL, the first expert system, was developed.
  • Interest in expert systems peaked in the early 1980s, and programs like MYCIN (developed in the 1970s to diagnose bacterial infections based on patient symptoms) showed promise. Their limitations soon became obvious, however - you could say they lacked intelligence - and the late 1980s brought the so-called AI Winter.
  • Interest in AI was reignited only when the back-propagation algorithm, a fundamental technique for training multi-layer neural networks, was popularized in 1986. Machine learning techniques advanced rapidly afterward.
  • Since the 1990s, there has been rapid progress in artificial intelligence, and AI is now being used in a wide variety of applications, including healthcare, finance, and transportation. Some milestones since then:
    • 1997: Deep Blue defeats Garry Kasparov, the world chess champion. This is a major milestone for artificial intelligence, as it shows that computers can now beat humans at a complex game.
    • 2011: Watson, a question-answering system developed by IBM, defeats two human champions on the game show Jeopardy!. This shows that artificial intelligence can now understand and respond to natural language.
    • 2016: AlphaGo, a Go-playing program developed by Google DeepMind, defeats the world champion Go player Lee Sedol four games to one. This is another major milestone for artificial intelligence, as Go is a much more complex game than chess.
    • While AlphaGo was trained on human games, a later version, AlphaGo Zero, was a self-taught system that never saw a human play. In 2017, AlphaGo Zero beat the original AlphaGo 100-0. The system taught itself strategies superior to any human that ever played the game, in just 3 days. Isn’t that crazy?!
    • November 2022: OpenAI releases ChatGPT to the public. Now everyone can use it and create with it. BAAAMMM!
    • August 2023: More and more AI tools have emerged. There are seemingly hundreds available, and the number is growing by the day.

Types of AI

The world of AI is vast and evolving rapidly. Many “buzzwords” float around that don’t make much sense to me yet. To get a clearer picture, I want to break AI down into some of its different types and categories.

Narrow vs. General AI

This is the most basic, broad categorization of all the different types of AI. Narrow AI is focused on a specific single task, at which it can get incredibly good. It is sometimes called Weak AI because it cannot learn or adapt to new situations. Good examples would be voice assistants, chatbots, and image recognition systems.

General AI, also referred to as Strong AI due to its ability to learn and adapt to situations, is, on the other hand, designed to perform a multitude of tasks. Even though full development hasn't been achieved yet, the goal is for them to perform essentially any task a human can. A good metaphor to differentiate the two would be to picture Narrow AI as a master of one instrument versus General AI as the maestro of an orchestra.

ANI, AGI and ASI

This is another categorization method I read about. Be aware that the latter two are just hypothetical concepts, and it is not clear if or when they will be achieved, but here is the breakdown of the terms:

  • ANI stands for Artificial Narrow Intelligence. This is the kind of AI that we now have, and it is made to carry out certain tasks like playing chess or translating languages. In contrast to humans, ANI lacks general intelligence and is unable to learn new skills or adjust to changing circumstances.
  • AGI stands for Artificial General Intelligence. This is a hypothetical type of AI that would be capable of performing any task that a human can. AGI would be able to learn and adapt to new situations, and it would have a level of intelligence that is indistinguishable from human intelligence. It is the kind of AI we are used to from science fiction movies and books.
  • ASI stands for Artificial Super Intelligence. This is the hypothetical ultimate or advanced version of AGI that would be more intelligent than any human being. It would be able to solve problems that are way beyond human capabilities, outthinking even the brightest of our minds. I have no idea what that would end up looking like.

AI - Learning Types

One of the major ways to differentiate AI models is the way that they learn or are taught. Here are the common ways:

  • Machine Learning: Machine learning allows computers to learn without being explicitly programmed. Data is used to train machine learning algorithms, which can then use that data to make predictions or decisions.
  • Deep Learning: Deep learning is a type of machine learning that uses artificial neural networks to learn from data. Neural networks are inspired by the human brain, and they can be used to solve a wide variety of problems, including image recognition, natural language processing, and speech recognition.
  • Reinforcement Learning: Reinforcement learning is the AI version of trial and error. Algorithms are rewarded for actions that result in desired outcomes, and they are penalized for actions that result in undesirable outcomes. This does sound odd - how do you reward an algorithm? I picture it like scoring points in a game for “positive” moves I make versus losing points for moves that don’t bring me closer to the goal.
  • Natural Language Processing (NLP): NLP is a subfield of AI that focuses on enabling machines to comprehend, interpret, and generate human language. It's the technology behind chatbots, language translation, and even text generation.
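To make the reward idea from the reinforcement learning bullet concrete, here is a minimal sketch of tabular Q-learning in Python. The whole setup (a five-state corridor, the reward values, the learning parameters) is my own toy illustration, not taken from any particular library - but it captures the "points in a game" intuition: the algorithm keeps a score table and nudges scores up whenever an action leads toward the goal.

```python
import random

random.seed(0)  # make the run reproducible

# A tiny corridor: states 0..4, goal at state 4.
# Actions: 0 = step left, 1 = step right.
# Reward: +1 for reaching the goal, 0 otherwise -- the "points" in the game.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action] = learned score

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise pick the best-scoring action so far.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Nudge the score toward the reward plus the discounted future value.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# After training, "step right" outscores "step left" in every state,
# because stepping right is always the shortest path to the reward.
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(GOAL)]
print(policy)
```

No explicit program tells the agent to walk right; that preference emerges purely from the reward signal propagating back through the score table, which is exactly the trial-and-error learning described above.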

Generative AI

There is currently a lot of buzz about Generative AI or Gen AI. This category of AI focuses on generating or creating new things like images, text, music, and even videos. It can be imagined as a digital artist that conjures up unique creations based on patterns it has learned from existing data.

Gen AI is what we are being exposed to more and more. Even though it is still in its early stages, it has massive potential to increase our creativity, productivity, and problem-solving abilities if used correctly, or to just do this sort of work for us in many cases if you are boring and lazy.

By the way, if you ask ChatGPT if it is a Generative AI, it will say it is not one in the typical sense but rather an NLP model, even though the lines between AI types are thin and the terms rather blurry.

Real-World Usage of AI and My Next Step

Today, AI is already all around us. Everybody has at least tried virtual assistants like Alexa, Siri, or Google Assistant. Even search engines use intelligent algorithms.

Another widely known application, mainly thanks to Tesla, is self-driving car systems. Drones rely on similar technology as well. Do you think they might use reinforcement learning for such systems? - kind of a scary thought.

AI systems are becoming more proficient in medical diagnosis and other expert fields like law and business. Bard and ChatGPT have both passed several official exams in these fields. Just search for “AI passes exam” and you will see. Now, can AI systems replace lawyers or doctors? Not likely, but it is easy to see how AI can aid them in their jobs quite a bit.

Chatbots are becoming increasingly useful. But other bots—robots and/or industrial machines—benefit greatly from becoming “smarter”, be it in manufacturing, healthcare, or other industries.

One field I am particularly interested in is AI usage for finance. From data analysis to fraud detection to algorithmic trading, I’m really excited to find out more about it. And there are so many more real-world applications that I could not possibly fit them all in this post.

In the next step, I want to look deeper into what AI consists of - the building blocks, if you will. So as soon as I get a chance to learn this, I will post my results here. Make sure you subscribe if you don't want to miss out.

Either way, I would be delighted to hear your thoughts and maybe to discuss the matter further, so please feel very welcome to leave a comment or feedback.

Thanks for reading. Until next time!

J.D.

© 2023 | J.D. | Let's Learn, Code, and Thrive!