> The Perceptron_
Frank Rosenblatt invented the Perceptron — the first machine that could "learn."
> DEEP DIVE_
In 1958, Frank Rosenblatt, a 32-year-old psychologist at the Cornell Aeronautical Laboratory in Buffalo, New York, unveiled the Mark I Perceptron — the first machine that could genuinely learn from experience. Rosenblatt was handsome, charismatic, and wildly ambitious, a far cry from the typical reserved academic. He was a polymath who played piano, flew planes, and studied everything from astrophysics to brain anatomy. His perceptron was not a software simulation running on someone else's computer; it was a custom-built hardware device that filled an entire room, weighing in at several tons. It used a grid of 400 photocells (20 by 20) as its "eye," connected through a network of randomly wired potentiometers to a set of output units.
The New York Times reported the invention on July 8, 1958, with a headline that perfectly captured both the excitement and the hype: "New Navy Device Learns By Doing; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser." The U.S. Navy, which funded Rosenblatt's work through the Office of Naval Research, was thrilled. At a press conference, Rosenblatt declared that the perceptron would eventually "be able to walk, talk, see, write, reproduce itself, and be conscious of its existence." The media went wild. Here, finally, was a thinking machine — not a theoretical proposal or a philosophical argument, but a physical device that could learn to distinguish between shapes shown on cards held up to its camera.
The way the perceptron worked was elegant and, in hindsight, remarkably prescient. Each photocell was connected to a set of "association units" through randomly assigned positive or negative weights. These association units fed into a single output neuron that would fire if the weighted sum of its inputs exceeded a threshold. Learning occurred through a simple algorithm: when the perceptron made a correct classification, the weights were left alone; when it made an error, the weights were adjusted to make the correct answer more likely. Rosenblatt proved mathematically that this algorithm would converge — that is, if a solution existed, the perceptron would eventually find it. This "Perceptron Convergence Theorem" was a genuine breakthrough.
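The error-driven update described above can be sketched in a few lines. This is a hypothetical toy illustration of the learning rule, not the Mark I's actual wiring (the hardware adjusted motor-driven potentiometers; the variable names and the OR-gate training set here are invented for the example):

```python
# Sketch of Rosenblatt's perceptron learning rule: leave the weights
# alone on a correct answer, nudge them toward the target on an error.

def predict(weights, bias, x):
    # Fire (output 1) if the weighted sum of inputs exceeds the threshold.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=100):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in samples:
            error = target - predict(weights, bias, x)
            if error != 0:  # correct classification: no change
                errors += 1
                weights = [w + error * xi for w, xi in zip(weights, x)]
                bias += error
        if errors == 0:  # a full error-free pass: the rule has converged
            break
    return weights, bias

# Logical OR is linearly separable, so the Convergence Theorem
# guarantees the rule finds a solution.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(OR)
assert all(predict(w, b, x) == t for x, t in OR)
```

The key design point is visible in the `if error != 0` branch: learning happens only on mistakes, which is exactly the property Rosenblatt's convergence proof relies on.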
But there were limits that Rosenblatt himself did not fully appreciate. The perceptron was a single-layer network; it could only learn to classify patterns that were "linearly separable" — patterns that could be divided by a straight line (or hyperplane). This limitation was a ticking time bomb. Marvin Minsky, who had been Rosenblatt's classmate at the Bronx High School of Science, was already skeptical. The two men had a rivalry that was personal as well as intellectual — Minsky saw Rosenblatt as a showman making promises that science could not keep. In 1969, Minsky and Seymour Papert would publish a devastating critique that exploited this very limitation. The seeds of the perceptron's destruction were planted at the moment of its greatest triumph.
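The limitation can be made concrete with a small experiment (an illustrative sketch, not from the original sources): XOR is the classic pattern no straight line can separate. A brute-force search over candidate lines finds a separator for OR but none for XOR, no matter how fine the grid.

```python
# Illustration of linear separability: test whether any line
# w1*x1 + w2*x2 > b classifies every sample correctly.

def separable(samples):
    grid = [i / 2 - 5 for i in range(21)]  # coefficients -5.0 ... 5.0
    for w1 in grid:
        for w2 in grid:
            for b in grid:
                if all((w1 * x1 + w2 * x2 > b) == bool(t)
                       for (x1, x2), t in samples):
                    return True
    return False

OR_GATE  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR_GATE = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(separable(OR_GATE))   # True: OR is linearly separable
print(separable(XOR_GATE))  # False: XOR is not
```

The failure is not an artifact of the coarse grid: XOR requires both inputs alone to clear the threshold while their sum does not, a contradiction for any weighted sum. This is precisely the blind spot a single-layer perceptron cannot escape, and the one Minsky and Papert's critique would make famous.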