2023: Beyond Text

> GPT-4_

Multimodal AI. 90th percentile on the bar exam.

> DEEP DIVE_

On March 14, 2023, OpenAI released GPT-4, and the capabilities of large language models took another dramatic leap. For the first time, OpenAI's flagship model was multimodal, able to process both text and images as input. GPT-4 could analyze photographs, interpret charts, read handwritten notes, and reason about visual content with remarkable sophistication. On standardized tests, the results were extraordinary: GPT-4 scored in the 90th percentile on the Uniform Bar Exam, the 93rd percentile on the SAT Reading section, and earned scores of 4 or 5 on most AP exams. The gap between GPT-3.5 and GPT-4 was so large that it felt less like an incremental improvement and more like encountering a different kind of intelligence.

The same month saw a seismic event on the open-source front. Meta's LLaMA (Large Language Model Meta AI), a family of models ranging from 7 billion to 65 billion parameters, was leaked online after being released to researchers under a restrictive license. The leak democratized access to a powerful foundation model, and within weeks, the open-source community had produced dozens of fine-tuned variants: Alpaca from Stanford, Vicuna, and eventually an entire ecosystem of open models that could run on consumer hardware. The LLaMA leak accelerated the open-source AI movement more than any deliberate release could have, proving that competitive language models did not require billion-dollar budgets.

The competitive landscape fractured into a multi-front war. Anthropic launched Claude, built with a focus on safety using Constitutional AI techniques. Google launched Bard and, in December, announced Gemini, its most capable family of models to date. Meta committed to an open-source strategy with the subsequent LLaMA 2 and 3 releases. Chinese companies including Baidu (Ernie Bot), Alibaba (Qwen), and others released their own large language models. By the end of 2023, the concentration of AI capability that had existed in 2020, when only OpenAI and Google had frontier models, had shattered into a diverse ecosystem of competing approaches and philosophies.

The year also saw the beginning of serious regulatory action. The European Union's AI Act, the world's first comprehensive AI regulation, reached political agreement in December and moved toward final passage. In March, over 1,000 technology leaders, including Elon Musk and Steve Wozniak, signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, citing "profound risks to society and humanity." The letter was controversial; many AI researchers dismissed it as alarmist or impractical, while others argued it did not go far enough. Regardless of its impact, the letter symbolized a turning point: for the first time, the people building AI were publicly expressing fear about what they were building. The year 2023 was when AI stopped being a technology story and became a civilization story.