
> History of AI_

From calculators to systems that seem to think

1943
Foundations

The First Neuron

McCulloch and Pitts published a mathematical model of an artificial neuron — showing that the logic of thought could be represented as on/off switches. Computers didn't even exist yet. But in a wartime paper that almost nobody read, the theoretical foundation for "thinking machines" was quietly laid. Every neural network alive today traces its lineage to this moment.
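In modern code their unit is almost trivially small. Here is a hedged sketch in today's notation (not their 1943 formalism): weighted inputs, a fixed threshold, an on/off output.

```python
def mcp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires (1) iff the weighted sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The logic of thought as on/off switches: whole logic gates as single neurons.
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```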

1950
The Big Question

The Turing Test

Alan Turing published "Computing Machinery and Intelligence" and posed the question that would haunt the next century: "Can machines think?" His proposal was elegant — if a machine could hold a text conversation indistinguishable from a human, it should be considered intelligent. Over 70 years later, the question remains unanswered. But the machines keep getting better at faking it.

1952
Machine Learning is Born

Samuel's Checkers

Arthur Samuel at IBM built a checkers program that learned from experience — and eventually beat its own creator. It studied thousands of board positions, developed its own evaluation strategies, and improved without being explicitly programmed for each move. Samuel later coined the term "Machine Learning" in 1959, giving a name to the idea that machines could teach themselves. The concept that would take over the world started with a board game.

1956
AI is Born

The Dartmouth Conference

John McCarthy gathered the brightest minds for a summer workshop and coined the term "Artificial Intelligence." An era of wild optimism — participants genuinely believed AGI was 20 years away. McCarthy, Minsky, Shannon, Rochester — legends in a room, dreaming big. Reality would prove far more stubborn than any of them imagined.

1958
First Learning

The Perceptron

Frank Rosenblatt invented the Perceptron — the first machine that could genuinely "learn" from examples. The New York Times ran the headline: "Navy reveals embryo of computer designed to read and grow wiser." The hype was enormous. Researchers declared it the dawn of machine intelligence. But a devastating critique was already brewing —
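The learning rule itself is a few lines: when the machine is wrong, nudge the weights toward the example. A minimal sketch of Rosenblatt's rule (illustrative, not his custom hardware):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt's rule: on each mistake, move the boundary toward the example."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred          # 0 if correct, +/-1 if wrong
            w += lr * error * xi
            b += lr * error
    return w, b

# Learns a linearly separable function (logical OR) purely from examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 1, 1, 1]
```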

1965
The First Expert System

DENDRAL

At Stanford, Edward Feigenbaum, Bruce Buchanan, and Nobel laureate Joshua Lederberg built DENDRAL — the first "expert system." It could identify organic compounds from mass spectrometry data, matching and sometimes exceeding the performance of human chemists. The idea was revolutionary: encode human expertise as rules, and the machine becomes the expert. This approach would dominate AI for two decades — and then spectacularly fail.
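The paradigm fits in a few lines: facts in, IF-THEN rules fired until nothing new follows. A toy forward-chaining sketch (the chemistry rules here are invented for illustration, not DENDRAL's real knowledge base):

```python
# Knowledge as IF-THEN rules: if all conditions are known facts, assert the conclusion.
rules = [
    ({"has_peak_at_43", "has_peak_at_58"}, "contains_ketone_group"),
    ({"contains_ketone_group", "formula_C3H6O"}, "compound_is_acetone"),
]

def infer(facts):
    """Forward chaining: keep firing rules until no new conclusions appear."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_peak_at_43", "has_peak_at_58", "formula_C3H6O"}))
# includes "compound_is_acetone": expertise encoded as rules, machine as expert
```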

1966
First Conversation

ELIZA

Joseph Weizenbaum created ELIZA at MIT. A simple rule-based program that mimicked a Rogerian therapist — it mostly just rephrased your own words back at you. Yet people became deeply captivated. Weizenbaum's own secretary asked him to leave the room so she could talk to it privately. The "ELIZA effect" revealed something unsettling: humans will project emotion and intelligence onto machines at the slightest provocation.
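The trick is visible in miniature: match a pattern, swap the pronouns, reflect the words back. A toy in ELIZA's spirit (not Weizenbaum's actual script):

```python
import re

# Swap first- and second-person words so the user's phrase mirrors back naturally.
reflections = {"i": "you", "my": "your", "am": "are", "me": "you"}

def respond(text):
    m = re.match(r"i feel (.*)", text.lower())
    if m:
        mirrored = " ".join(reflections.get(w, w) for w in m.group(1).split())
        return f"Why do you feel {mirrored}?"
    return "Tell me more."

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```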

1969
Collapse

The First AI Winter

Marvin Minsky and Seymour Papert published "Perceptrons" — mathematically proving that single-layer perceptrons couldn't solve even simple problems like XOR. The book was devastating. AI funding evaporated almost overnight. Researchers scattered to other fields. The dream of thinking machines, barely a decade old, froze solid. The First AI Winter had begun.
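The critique is easy to state concretely: no single straight line separates XOR's true cases from its false ones, so no single-layer perceptron can compute it. One hidden layer dissolves the problem, as this sketch with hand-picked weights shows:

```python
step = lambda z: 1 if z > 0 else 0

def xor_two_layer(a, b):
    """XOR as a two-layer threshold network: hidden units compute OR and NAND."""
    h1 = step(a + b - 0.5)        # OR
    h2 = step(-a - b + 1.5)       # NAND
    return step(h1 + h2 - 1.5)    # AND of the two hidden units

print([xor_two_layer(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 0] -- the "impossible" function, solved by adding one layer
```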

1970
Natural Language Understanding

SHRDLU — Intelligence in a Blocks World

Terry Winograd at MIT built SHRDLU — a program that could hold full English conversations about a virtual world of colored blocks. "Put the red block on the blue one." "Now move it." SHRDLU understood what "it" referred to. It handled context, pronouns, and ambiguity. The AI community was electrified. Human-level language understanding seemed imminent. But SHRDLU only understood its tiny blocks world. The real world — with its infinite ambiguity, sarcasm, and common sense — remained completely beyond reach. The optimism SHRDLU generated would contribute directly to the coming crash.

1973
Devastation

The Lighthill Report

The UK government commissioned mathematician James Lighthill to evaluate the state of AI research. His verdict was merciless: "In no part of the field have the discoveries made so far produced the major impact that was then promised." British AI funding was nearly completely eliminated. DARPA followed with similar cuts across the Atlantic. AI researchers became academic pariahs.

1980
Commercial Hope

The Expert Systems Boom

Expert systems roared back with a vengeance. DEC's XCON configured computer orders using over 10,000 IF-THEN rules, saving $25 million per year. Stanford's MYCIN diagnosed bacterial infections with 69% accuracy — better than most doctors — but was never deployed due to ethical and legal concerns. Corporations poured billions into AI. For a brief, shining moment, it looked like the winter was over. It wasn't.

1982
National Ambition

Japan's Fifth Generation Computer Project

Japan's MITI launched the Fifth Generation Computer Project through ICOT, spending 54 billion yen (~$400 million) over ten years. The goal: build a parallel inference machine running at 1 Giga-LIPS that would leapfrog Western computing. The US and UK panicked and launched rival programs. The project failed commercially — parallel Prolog never took off. But it trained an entire generation of Japanese computer scientists and proved that national AI ambition could reshape global research priorities.

1986
Seeds of Revival

Backpropagation

Rumelhart, Hinton, and Williams published the paper that made backpropagation practical for multi-layer neural networks. Finally, a way to train networks with hidden layers — to teach machines complex, layered representations of the world. The algorithm that would eventually power the deep learning revolution now existed. But the world wasn't ready. Data was scarce, computers were slow, and nobody was listening. The seeds were planted in frozen ground.
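The algorithm is the chain rule applied layer by layer: run the network forward, then propagate errors backward to assign blame to every weight. A minimal numpy sketch (a toy illustration, not the 1986 paper's exact setup), training a small network on XOR — the very task that froze the field in 1969:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# A 2-4-1 network learning XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for _ in range(5000):
    # Forward pass: inputs flow through the hidden layer to the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: errors flow back through the same weights (chain rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically converges to ~[0, 1, 1, 0]
```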

1987
Collapse Again

The Second AI Winter

Expert systems crumbled under their own weight — brittle, expensive, impossible to maintain. The LISP machine market collapsed overnight, sending companies into bankruptcy. "AI" became a toxic label that repelled investors. Researchers learned to survive by camouflage, deliberately renaming their work "machine learning," "informatics," or "computational intelligence." The dream went underground again.

1989
Proof of Utility

LeNet — A Neural Network That Reads

Yann LeCun at Bell Labs built LeNet — a convolutional neural network (CNN) that could read handwritten zip codes for the US Postal Service. Trained with backpropagation, it learned to recognize digits with remarkable accuracy. By the late 1990s, LeNet was processing millions of checks per day in ATMs across America. In the depths of the AI winter, while the mainstream had abandoned neural networks, LeCun's CNN quietly proved they had real-world value. If you deposited a check at an ATM in the 2000s, a neural network designed in 1989 was reading your handwriting. Some banks reportedly still run his code.
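The core operation of a CNN is a small filter sliding across the image, the same weights reused at every position. A minimal sketch of that convolution step (illustrative; LeNet itself stacked several such layers with pooling):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one filter over the image; weight sharing is what makes CNNs cheap."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

# A vertical-edge detector: the kind of feature a CNN's first layer learns.
edge = np.array([[1.0, 0.0, -1.0]] * 3)
digit = np.zeros((8, 8)); digit[:, 4:] = 1.0   # toy "image": dark left, bright right
print(conv2d(digit, edge)[0])  # strong response exactly where brightness changes
```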

1992
Reinforcement Learning Arrives

TD-Gammon — Learning by Self-Play

Gerald Tesauro at IBM created TD-Gammon — a program given nothing but the rules of backgammon. It played itself 1.5 million times and reached human world-champion level. No training data. No expert games to study. Pure self-play and reinforcement learning. More remarkably, TD-Gammon discovered strategies that humans had never considered in thousands of years of play. Professional backgammon players studied the program and changed how they played. Twenty-four years later, AlphaGo's legendary Move 37 — the move no human had ever played — was a direct descendant of what TD-Gammon proved possible.
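The engine under TD-Gammon is one update rule: nudge each state's value toward the value of the state that follows it. A tabular TD(0) sketch on a toy random walk (the principle in miniature; Tesauro's version used a neural network instead of a table):

```python
import random

# A 7-state chain: start in the middle, step randomly, win (+1) at the right end.
V = [0.0] * 7            # states 0..6; 0 and 6 are terminal
alpha = 0.1
for _ in range(10_000):
    s = 3
    while s not in (0, 6):
        s2 = s + random.choice((-1, 1))
        reward = 1.0 if s2 == 6 else 0.0
        # TD(0): move V(s) toward the one-step-lookahead target.
        V[s] += alpha * (reward + V[s2] - V[s])
        s = s2

print([round(v, 2) for v in V[1:6]])  # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```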

1995
Conversation Evolves

A.L.I.C.E.

Richard Wallace created A.L.I.C.E. and invented AIML — an open XML language for writing conversation rules. Released as open source, developers worldwide contributed tens of thousands of patterns. It won the Loebner Prize three times. The inspiration behind Spike Jonze's film "Her." Not intelligent — but proof that conversational AI still captivated the human imagination.

1997
A Different Kind of Intelligence

Deep Blue Defeats Kasparov

IBM's Deep Blue defeated world chess champion Garry Kasparov in a six-game match. But this wasn't neural networks or learning — it was brute-force search, evaluating 200 million positions per second. Kasparov said afterward that he sensed "a new kind of intelligence" across the table. He was facing a calculator. A phenomenally fast one. The world celebrated, but AI researchers knew: raw computation isn't thought.

1997
Memory Unlocked

LSTM — Teaching Machines to Remember

Sepp Hochreiter and Jürgen Schmidhuber solved one of neural networks' most stubborn problems: the vanishing gradient. Their Long Short-Term Memory networks could finally remember information over long sequences — hundreds or thousands of steps back. LSTM was largely ignored for years. Then it quietly powered everything: Siri's voice recognition, Alexa's language understanding, Google Translate's breakthroughs. The architecture that taught machines to remember would dominate sequence modeling until the Transformer arrived.
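A single LSTM step is three gates and an additive memory update; the additive path through the cell state is what keeps gradients alive over long sequences. A hedged numpy sketch of one step (simplified from the many published LSTM variants):

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step: gates decide what to forget, what to write, what to reveal."""
    n = h.size
    z = W @ np.concatenate([x, h]) + b
    f = sigmoid(z[:n])            # forget gate: keep or erase old memory
    i = sigmoid(z[n:2*n])         # input gate: admit new information
    o = sigmoid(z[2*n:3*n])       # output gate: what to expose
    g = np.tanh(z[3*n:])          # candidate memory
    c = f * c + i * g             # additive update: the gradient's "memory lane"
    h = o * np.tanh(c)            # hidden state passed to the next step
    return h, c

rng = np.random.default_rng(1)
d, n = 3, 4                       # input size, hidden size
W, b = rng.normal(scale=0.1, size=(4 * n, d + n)), np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for t in range(100):              # hundreds of steps without blowing up
    h, c = lstm_step(rng.normal(size=d), h, c, W, b)
print(h.round(3))
```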

2006
Dawn of Deep Learning

Hinton's Revival

Geoffrey Hinton published his landmark paper on "deep belief networks" — showing that multi-layer neural networks, long thought unworkable, could be trained greedily, layer by layer. Hinton deliberately avoided the toxic term "neural networks" and branded his approach "Deep Learning." A small team at the University of Toronto. Minimal funding. Years of ridicule from the mainstream AI establishment. But Hinton had been right all along, and everything was about to change.

2007
AI Hits the Road

The DARPA Urban Challenge

Eleven autonomous vehicles attempted 60 miles of urban traffic — merging, turning, parking, and obeying traffic laws — with no human intervention; six finished the course. Carnegie Mellon's "Boss" won. Stanford's "Junior" came second. The technology was rough, the speeds were slow, and safety drivers were on standby. But it proved autonomous driving was possible. Within a decade, Google would spin out Waymo, Tesla would ship Autopilot, and self-driving would become a trillion-dollar race.

2009
The Data Revolution

ImageNet

Fei-Fei Li at Stanford released ImageNet: millions of hand-labeled images, with the benchmark challenge built on it spanning 1.2 million images across 1,000 categories. It took years of painstaking work using Amazon Mechanical Turk. "If we show computers millions of examples, maybe they can learn to see." Previous AI research meant hand-crafting complex rules from tiny datasets. ImageNet was built to obliterate that paradigm — and it would, spectacularly, three years later.

2011
Breaking the Language Barrier

IBM Watson Wins Jeopardy

IBM's Watson crushed human champions Ken Jennings and Brad Rutter on the quiz show Jeopardy!, winning $1 million. The first time an AI "understood" natural language questions — puns, wordplay, obscure references — and answered them live on national television. Watson wasn't connected to the internet. It reasoned from 200 million pages of pre-loaded text. Jennings wrote on his answer screen: "I for one welcome our new computer overlords."

2012
The Deep Learning Revolution

AlexNet — The Moment Everything Changed

★ The turning point — this is where the modern AI era began

ImageNet Large Scale Visual Recognition Challenge 2012. Previous best error rate: ~26%. AlexNet error rate: 15.3%. A gap of nearly 11 percentage points. Nothing like it had ever been seen in the competition's history. AlexNet was built by Hinton's students Alex Krizhevsky and Ilya Sutskever — the latter would go on to co-found OpenAI. Their secret weapon: two gaming GPUs (GTX 580) and CUDA. Total hardware cost: about $1,000. "Stop writing rules. Show examples instead." This single result detonated the deep learning revolution. Every major tech company pivoted overnight. Google acquired Hinton's team the following year. Facebook hired Yann LeCun. The AI gold rush had begun.

2013
Words as Geometry

Word2Vec & The AI Arms Race

Google's Tomas Mikolov released Word2Vec, and a stunning equation emerged: "King - Man + Woman = Queen." Words could be represented as vectors in geometric space, and their relationships could be computed with simple arithmetic. A year later, Google acquired DeepMind for roughly $500 million — a startup with no products, only research papers and ambition. The AI arms race between tech giants had officially started. Talent became the scarcest resource on earth.
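The famous equation is ordinary vector arithmetic plus a nearest-neighbor search. A sketch with tiny invented vectors (real Word2Vec embeddings have hundreds of dimensions learned from billions of words; these toy values just make the geometry visible):

```python
import numpy as np

# Toy 4-d embeddings (hypothetical values): dimensions loosely encode
# [royalty, maleness, femaleness, something else].
vecs = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.9, 0.0]),
    "apple": np.array([0.0, 0.2, 0.2, 0.1]),
}

def nearest(target, exclude):
    """Return the word whose vector points most in the target's direction."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

# king - man + woman lands closest to queen.
result = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(result, exclude={"king", "man", "woman"}))  # queen
```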

2014
The Birth of Generation

GANs — AI That Creates

Ian Goodfellow, after an argument with friends at a bar in Montreal, went home and coded a new architecture in a single night. By morning, it worked. He called them Generative Adversarial Networks — two neural networks locked in competition: one forges, one detects. The forger gets better at fooling the detector. The detector gets better at catching fakes. Both improve endlessly. AI could now create photos, art, and music — not just classify. The era of generative AI had quietly begun.
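The adversarial loop can be shown end to end in one dimension: a forger g(z) = az + b tries to mimic data drawn from N(3, 1), while a logistic detector learns to tell real from fake. A hedged toy sketch (hand-derived gradients; real GANs are deep networks trained on images):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

a, b = 0.1, 0.0        # forger: g(z) = a*z + b, fed noise z ~ N(0, 1)
w, c = 0.0, 0.0        # detector: D(x) = sigmoid(w*x + c), P(x is real)
lr = 0.05

for step in range(5000):
    real = rng.normal(3.0, 1.0, 64)
    fake = a * rng.normal(0.0, 1.0, 64) + b
    z = (fake - b) / a

    # 1) Train the detector to score real high and fake low (gradient ascent).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake).mean()
    c += lr * ((1 - d_real) - d_fake).mean()

    # 2) Train the forger to fool the updated detector.
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w            # push fakes toward "looks real"
    a += lr * (grad * z).mean()
    b += lr * grad.mean()

print(round(b, 1))  # the forger's output mean drifts toward 3, the real mean
```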

2015
Breaking the Depth Barrier

ResNet — Deeper Than Human

Kaiming He at Microsoft Research introduced residual connections — a deceptively simple idea that let neural networks grow to 152 layers deep without collapsing during training. Previous networks topped out at around 20 layers. ResNet achieved 3.57% error on ImageNet. Human performance: 5.1%. For the first time in history, a machine could identify objects in photographs more accurately than the people who took them. The depth barrier was shattered.
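The residual idea in one line: output = input + F(input). If the extra layers help, F learns something; if not, F can fade toward zero and the block becomes a pass-through. A toy sketch (dense layers standing in for ResNet's actual convolutions with batch norm):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0)

def residual_block(x, W1, W2):
    """y = x + F(x): the block only has to learn the residual F, not the whole map."""
    return relu(x + W2 @ relu(W1 @ x))

rng = np.random.default_rng(0)
x = rng.normal(size=8)
# Stack 152 blocks: with small weights each is near-identity, so the signal
# (and, in training, the gradient) passes straight through instead of vanishing.
for _ in range(152):
    x = residual_block(x, rng.normal(scale=0.05, size=(8, 8)),
                          rng.normal(scale=0.05, size=(8, 8)))
print(np.linalg.norm(x).round(2))  # still a healthy, non-vanished signal
```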

2016
Creative Intelligence

AlphaGo — A Move No Human Had Played

Google DeepMind's AlphaGo defeated world Go champion Lee Sedol 4-1. In Game 2, Move 37 — a stone placed on the fifth line that no human would have considered, that commentators immediately called a mistake. It won the game. Go has more possible board positions than atoms in the observable universe. Brute force is mathematically impossible. AlphaGo didn't calculate — it intuited. Lee Sedol, visibly shaken, left the room for fifteen minutes. He later called Move 37 "a divine move." AlphaGo had thought.

2017
Foundation of Modern AI

Transformer — Attention Is All You Need

Eight researchers at Google Brain published "Attention is All You Need." The Transformer architecture computes relationships between all words in a sentence simultaneously — not one by one, but all at once. Previous AI processed language sequentially, word by word (RNNs, LSTMs). Transformers see everything in parallel — retaining long-range context, enabling massive parallelization on GPUs, and scaling to datasets previously unimaginable. GPT, BERT, Claude, Gemini, LLaMA — every modern large language model stands on the Transformer. This single paper may be the most consequential publication in the history of AI.
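The heart of the paper is scaled dot-product attention, small enough to write out in full. A minimal numpy sketch (single head, no masking or learned projections):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position attends to every other at once."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # all-pairs similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # mix values by relevance

# 5 tokens, 16-dim vectors: one matrix product relates all words simultaneously,
# which is exactly what makes Transformers parallelize so well on GPUs.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 16)) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 16)
```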

2018
The Pre-training Era

GPT-1 and BERT

OpenAI released GPT-1 in June, Google released BERT in October. A new paradigm crystallized: pre-train on massive text, then fine-tune on small task-specific data. No more building separate models for each problem. One foundation model, infinite applications. GPT-1 had 117 million parameters — modest by today's standards. But the architecture was right, and the insight was profound: language models trained on enough text develop emergent capabilities nobody explicitly programmed.

2019
The First Warning

GPT-2 — "Too Dangerous"

OpenAI refused to fully release GPT-2, citing risks of fake news generation. The first time AI researchers publicly declared their own creation "too dangerous to release." The AI community was split — half called it responsible caution, half called it marketing genius. The generated text was indistinguishable from human writing. Coherent paragraphs, consistent style, factual-sounding claims that were entirely fabricated. A preview of a world where you can't trust what you read.

2020
Intelligence at Scale

GPT-3 — The Scaling Laws

OpenAI released GPT-3: 175 billion parameters. Over 100x larger than GPT-2. Training cost estimated at $4.6 million. The real shock wasn't the size — it was few-shot learning. Show GPT-3 just a few examples of any task, and it could perform it. Programming, translation, poetry, legal analysis, SAT questions — tasks it was never explicitly trained for. Capabilities emerged spontaneously from scale, like intelligence condensing from raw data. "Scale is all you need" became the new mantra. The race to build bigger models consumed the industry.
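Few-shot learning requires no code and no gradient updates; the examples in the prompt are the program. A hypothetical illustration of the pattern:

```python
# No fine-tuning, no weight changes: the examples in the prompt ARE the program.
# (A hypothetical illustration; the phrasing and completion are invented.)
prompt = """\
English: cheese  ->  French: fromage
English: bread   ->  French: pain
English: apple   ->  French:"""

# Sent to the model verbatim, GPT-3 typically continues with " pomme",
# inferring the translation task purely from the two examples above.
```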

2020
Scientific Revolution

AlphaFold — Solving the Code of Life

DeepMind's AlphaFold2 crushed the CASP14 protein structure prediction competition — scoring 244.0 against the runner-up's 92.1. Nearly indistinguishable from experimental lab results. The 50-year "protein folding problem" — one of biology's grand challenges — was effectively solved. Not by biologists, but by the game-AI researchers who built AlphaGo. By 2022, AlphaFold had predicted the 3D structures of virtually all 200 million known proteins and released the database for free. Doing the same experimentally would have taken hundreds of millions of years. In 2024, creators Demis Hassabis and John Jumper won the Nobel Prize in Chemistry. AI had changed fundamental science forever.

2021
From Labs to Tools

DALL-E & Copilot — AI Gets Creative

OpenAI unveiled DALL-E: type a description, get an image. "An astronaut riding a horse in the style of Andy Warhol" — and there it was. GitHub launched Copilot, an AI that writes code alongside you, trained on billions of lines from open-source repositories. AI crossed a critical threshold: it moved from research labs into creative tools that ordinary people could use. Artists, writers, and programmers suddenly had AI collaborators. The question shifted from "can AI create?" to "who owns what AI creates?"

2022
The Inflection Point

ChatGPT — The World Changed

November 30, 2022. OpenAI released ChatGPT. 100 million users in 2 months — the fastest product adoption in human history. Instagram took 2.5 years. TikTok took 9 months. For the first time, talking to AI became part of ordinary people's daily lives. Students used it for homework. Lawyers used it for briefs. Programmers used it to debug code. 56 years after ELIZA, 4 years after GPT-1, 3 years after the "too dangerous" GPT-2. The age of conversational AI had arrived, and there was no going back.

2022
The Open Source Shock

Stable Diffusion — AI Art for Everyone

In August 2022, Stability AI released Stable Diffusion with fully open weights and code. Unlike DALL-E 2 or Midjourney, locked behind corporate APIs and waitlists, Stable Diffusion could run on any consumer GPU with just 2.4 GB of VRAM. The power to generate photorealistic images from text was suddenly free and unrestricted. The internet was flooded with AI-generated art overnight. Artists filed class-action lawsuits alleging copyright infringement. An AI-generated painting won a fine art competition, igniting a global debate about creativity and authorship. The question of whether AI should be open or controlled became the defining schism of the field.

2023
Beyond Text

GPT-4 — The Multimodal Leap

OpenAI released GPT-4: a multimodal model that could process both text and images. It scored in the 90th percentile on the bar exam. It could describe photographs, solve visual puzzles, and reason about charts and diagrams. The EU reached political agreement on the AI Act, the world's first comprehensive AI legislation. Anthropic released Claude 2. Google launched Gemini. Open-source models from Meta (LLaMA) democratized access. Safety debates intensified as capabilities accelerated beyond what anyone had predicted even a year earlier.

2024
System 2 Thinking

The Reasoning Revolution — OpenAI o1

OpenAI released o1 — the first model designed to "think before answering." Chain-of-thought reasoning, applied at inference time, let the model break complex problems into steps and verify its own logic. The results were staggering. On the AIME math competition: GPT-4o scored 12%. o1 scored 74-93%. On PhD-level science questions, o1 exceeded human expert performance. The model could spend minutes reasoning through a single problem, trading compute for accuracy. "System 2 thinking" had arrived for AI. Not just pattern matching — deliberate, step-by-step reasoning. Geoffrey Hinton's Nobel Prize in Physics that year felt like the universe acknowledging what had been built.
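The compute-for-accuracy trade can be felt with a deliberately crude stand-in: sample a noisy solver many times and take a majority vote. This is self-consistency sampling, not o1's learned chain-of-thought, but it shows why spending more inference-time compute buys accuracy:

```python
import random

def noisy_solver(answer=42, p_correct=0.6):
    """A stand-in for one 'reasoning path': right 60% of the time, else off by one."""
    return answer if random.random() < p_correct else answer + random.choice((-1, 1))

def solve(n_samples):
    """Sample several paths and return the majority answer."""
    votes = [noisy_solver() for _ in range(n_samples)]
    return max(set(votes), key=votes.count)

random.seed(0)
for n in (1, 5, 25, 125):
    trials = [solve(n) == 42 for _ in range(1000)]
    print(n, f"{sum(trials) / 10:.1f}% correct")  # accuracy climbs with compute
```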

2025
The Cost Revolution

DeepSeek — The $294K Shock

A Chinese lab called DeepSeek released R1 — a reasoning model that matched frontier performance. Reported training cost: $294,000 for the reasoning stage, on top of a base model costing under $6 million. Other labs were spending hundreds of millions. 671 billion parameters using Mixture-of-Experts, with only 37 billion active at any time. Fully open source, weights and all. Nvidia lost $593 billion in market value in a single day — the largest one-day loss in stock market history. Trump called it a "wake-up call" for American AI. The assumption that AI supremacy required unlimited capital was shattered overnight. China had proven that ingenuity could substitute for brute-force spending. The geopolitics of AI would never be the same.
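Mixture-of-Experts is the trick that separates total parameters from active ones: a router sends each token to a few experts and the rest stay idle. A toy top-k routing sketch (illustrative; vastly simpler than DeepSeek's production architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d, k = 8, 16, 2            # 8 experts, activate only the top 2 per token

experts = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_experts)]
router = rng.normal(scale=0.1, size=(n_experts, d))

def moe_layer(x):
    """Route the token to its top-k experts; the other 6 of 8 do no work at all."""
    logits = router @ x
    top = np.argsort(logits)[-k:]                      # indices of the k best experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()
    return sum(g * (experts[i] @ x) for g, i in zip(gates, top))

out = moe_layer(rng.normal(size=d))
# Total parameters: 8 expert matrices. Active per token: 2 -- the same trick
# that lets a 671B-parameter model keep only 37B parameters hot per token.
print(out.shape)
```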

2026
The New Frontier

The Age of AI Agents

AI systems stopped waiting for prompts and started acting. Autonomous agents that plan multi-step tasks, use tools, browse the web, write and execute code, and correct their own mistakes. The GPT-5 era brought models that could sustain complex goals over hours, not seconds. The agent market exploded to $7.6 billion. AI copilots embedded in cars, enterprise software, scientific research, and creative workflows. Deliberative Alignment — models that reason about their own safety constraints in real time — became the new frontier of AI safety. The boundary between tool and colleague blurred. For the first time, the question wasn't "what can AI do?" but "what should AI be allowed to do?" You are standing at the edge of that answer.
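Most agent frameworks share one control loop: the model plans a step, calls a tool, observes the result, and repeats until it declares the goal met. A hypothetical, stubbed-out sketch of that loop (names and tools invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str       # which tool to use, or "finish"
    argument: str   # what to pass it, or the final answer

def run_agent(model, tools, goal, max_steps=10):
    """The loop shared by most agent frameworks: plan, act, observe, repeat."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = model(history)                 # the model decides the next step
        if action.name == "finish":
            return action.argument              # goal achieved
        observation = tools[action.name](action.argument)
        history.append(f"{action.name}({action.argument!r}) -> {observation!r}")
    return None                                 # step budget exhausted

# Stub "model" and tool, just to show the control flow end to end.
def scripted_model(history):
    return (Action("search", "GDP of France") if len(history) == 1
            else Action("finish", "found it"))

print(run_agent(scripted_model, {"search": lambda q: "~$3 trillion"}, "look up a fact"))
```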

> KEY ARCHITECTS_

Alan Turing (1912–1954): Father of Computer Science

Created the Turing Test. Convicted of "gross indecency" in 1952 for his homosexuality and subjected to chemical castration. Died in 1954. The UK government formally apologized in 2009.

Claude Shannon (1916–2001): Father of Information Theory

Published "A Mathematical Theory of Communication" in 1948 and built the digital age. In 1950, built a maze-solving mouse called Theseus. Also spotted juggling while riding a unicycle through Bell Labs.

Arthur Samuel (1901–1990): Coined "Machine Learning"

Coined the term "machine learning" in 1959. His checkers program was the first to learn from experience — and the first to beat its own creator. Proof that machines could surpass their programmer.

John McCarthy (1927–2011): Named "Artificial Intelligence"

Organized the Dartmouth Conference and coined "AI." Developed LISP. Believed his entire life that AGI was "20 years away."

Marvin Minsky (1927–2016): Pioneer and Inadvertent Critic

Championed symbolic AI — then wrote "Perceptrons," which halted neural network research for over a decade. A figure of deep irony.

Geoffrey Hinton (1947–): Godfather of Deep Learning

Believed in neural networks through the darkest winters. 2018 Turing Award, 2024 Nobel Prize in Physics. Left Google in 2023 to speak freely about AI risks.

Yann LeCun (1960–): Pioneer of CNNs

Built LeNet for handwriting recognition in 1989. Chief AI Scientist at Meta. Won the 2018 Turing Award with Hinton and Bengio. Outspoken advocate for open-source AI.

Yoshua Bengio (1964–): Deep Learning Pioneer

Laid groundwork for attention mechanisms. 2018 Turing Award. Unlike Hinton and LeCun, stayed in academia. Now one of the loudest voices warning about AI existential risk.

Fei-Fei Li (1976–): Creator of ImageNet

Believed "data is the key." Built ImageNet and laid the foundation for the deep learning revolution. Director of Stanford's AI Lab.

Demis Hassabis (1976–): Co-founder of DeepMind

A child chess prodigy who grew up to build AlphaGo and AlphaFold. AlphaFold solved protein folding — a 50-year grand challenge — earning him the 2024 Nobel Prize in Chemistry.

Ian Goodfellow (1985–): Inventor of GANs

In 2014, after an argument with friends at a bar, went home and wrote the GAN code that night. It worked by morning. The night AI learned to create.

Ilya Sutskever (1986–): Co-founder of OpenAI

Key contributor to AlexNet. Believed in scaling before anyone else. Left OpenAI after internal conflict over safety vs. speed. Founded Safe Superintelligence Inc (SSI).

John Hopfield (1933–): The Physicist Who Reignited Neural Networks

A physicist who wandered into AI, applied equations from magnetic materials to neural networks in 1982, and single-handedly reignited the field after a decade of neglect. Won the 2024 Nobel Prize in Physics — 42 years after his breakthrough.

Judea Pearl (1936–): Father of Causal Reasoning

Formalized Bayesian networks in 1988, giving AI the ability to reason under uncertainty. Won the 2011 Turing Award. His son Daniel Pearl was the journalist murdered in Pakistan in 2002.

> THINGS THEY DON'T TEACH YOU_

💬

ELIZA was ~200 lines of code. Modern AI systems run to trillions of parameters. Yet people's tendency to emotionally bond with them hasn't changed.

♟️

After Deep Blue defeated Kasparov, Kasparov said he felt "a new kind of intelligence." Deep Blue was dismantled shortly after. No monument was built.

🎮

AlexNet ran on two gaming GPUs (each ~$500). Not a supercomputer. Not AI-specific hardware. The graphics cards gamers use.

🐦

Microsoft's Tay chatbot launched March 23, 2016. Within 24 hours it was making racist statements. Emergency shutdown. The internet had spoken.

📰

When GPT-2 was withheld as "too dangerous," half the AI community called it marketing hype. Nobody says that anymore.

🏆

ChatGPT reached 100M users in 2 months. Instagram: 2.5 years. TikTok: 9 months. Fastest adoption in history.

🇯🇵

Japan's Fifth Generation Computer project spent 54 billion yen (~$400M) building the world's fastest inference machines. By the time they finished, cheap PCs with GUIs had made them obsolete. The project's biggest success was training researchers, not building computers.

💰

DeepSeek-R1 achieved OpenAI o1-level reasoning for a reported $294,000 in training costs. US companies spent hundreds of millions for similar results. Nvidia's stock crashed on the news.

🎲

Arthur Samuel's 1959 checkers program beat its own creator. The first time a machine surpassed the human who programmed it. Samuel was delighted, not afraid.

🧬

AlphaFold predicted the 3D structure of virtually every known protein (~200 million). A problem biologists had worked on for 50 years was essentially solved in a weekend of compute.

🤖

The word "robot" comes from the Czech word "robota" (forced labor), coined by Karel Capek in his 1920 play R.U.R. The robots in the play rebel and destroy humanity. Fiction warned us first.

🔬

When Geoffrey Hinton won the 2024 Nobel Prize in Physics for neural networks, many physicists were outraged. "It's not physics!" they protested. The Nobel committee disagreed — the math of neural networks is rooted in statistical physics.

🚗

In 1995, two Carnegie Mellon researchers drove a neural-network-equipped minivan from Pittsburgh to San Diego. The AI steered for 98.2% of 2,850 miles. No GPS. Self-driving cars aren't a 21st century invention — they crossed America during the Clinton administration.

📞

In 2018, Google demoed Duplex: an AI calling a restaurant to book a table, complete with natural "umm"s and "uh-huh"s. The human on the phone had no idea. The first time AI passed a real-world Turing test. Google had to scramble to promise the AI would identify itself.

💎

Nvidia started as a gaming graphics card company. By June 2024, it hit $3 trillion market cap — briefly the most valuable company on Earth. It controls 92% of the data center GPU market. It didn't build AI. It sold the pickaxes in the AI gold rush.

🎨

In 2022, an AI-generated image called "Théâtre D'opéra Spatial" won first place at the Colorado State Fair art competition. The artist honestly disclosed he used Midjourney. Artists were furious, calling it "the death of creativity." The judges said they would have awarded it first place even knowing it was AI.

> NEW: AI HISTORY NARRATOR_

Meet CLIO — an AI that has "witnessed" all of AI history. Ask about any event, person, or mystery.

> Talk to CLIO

Now experience the AI of each era for yourself.

> Launch Retro AI Simulator