> The Second AI Winter_
Expert systems failed. The LISP machine market collapsed.
> DEEP DIVE_
The second AI winter arrived with brutal speed in 1987, and its immediate cause was the collapse of the LISP machine market. Throughout the early 1980s, companies like Symbolics, Lisp Machines Inc., and Texas Instruments had built specialized hardware designed to run LISP — the programming language of choice for AI research since John McCarthy invented it in 1958. These machines were expensive ($50,000 to $100,000 each) but powerful for their specific purpose, and corporations had bought them enthusiastically as part of the expert systems boom. Then the general-purpose workstation market caught up. Sun Microsystems, Apollo Computer, and others produced Unix workstations that could run LISP nearly as fast as dedicated machines at a fraction of the cost. Almost overnight, the market for LISP machines evaporated.
Symbolics, the poster child of the AI hardware boom, was hit hardest. Founded in 1980 by former MIT AI Lab researchers, the company had been the first to register a .com domain name (symbolics.com, on March 15, 1985) and had reached annual revenues of over $100 million. By 1988, it was hemorrhaging cash; it filed for bankruptcy in 1993. The broader AI hardware and software market, valued at approximately $450 million in the mid-1980s, collapsed to a fraction of that. Companies that had invested millions in expert systems found them expensive to maintain, difficult to update, and brittle in the face of real-world complexity.
The winter spread beyond the commercial sector into academia. DARPA, which had increased AI funding during the Fifth Generation scare, began cutting budgets sharply. The Strategic Computing Initiative was wound down. Research proposals that mentioned "artificial intelligence" were routinely rejected. Researchers learned to disguise their work under more palatable labels: "machine learning" became "statistical pattern recognition," "neural networks" became "adaptive systems," "knowledge representation" became "database theory." An entire field went into hiding, and a generation of graduate students was advised not to put "AI" on their resumes.
The human cost was significant. Researchers who had built careers around expert systems and symbolic AI found themselves professionally stranded. Some retooled and moved into adjacent fields. Others left academia entirely. The second AI winter also had a chilling effect on risk-taking: for the next 15 years, AI researchers tended to work on narrow, well-defined problems where they could demonstrate measurable progress, rather than pursuing the grand ambitions of the Dartmouth era. This conservatism was understandable, since the field had been burned twice, but it also meant that bold ideas about deep learning, reinforcement learning, and large-scale neural networks were pursued by only a tiny handful of stubborn believers, working in near obscurity. Those believers, as it would turn out, were building the future.