Author Nicole Villeneuve
February 26, 2026
When our gadgets slow down after an update or come with batteries that are impossible to replace, it’s hard not to wonder if they’re designed to fail in order to force us to buy something new. This is known as planned obsolescence — the idea that products are intentionally made to wear out and be replaced quickly. And while it may feel like a recent phenomenon, the practice of designing products with a built-in limited lifespan began much earlier than you may think.
One of the earliest examples of what we now know as planned obsolescence involved a seemingly mundane object: the light bulb. By the early 1900s, incandescent bulbs were capable of lasting up to 2,000 hours (or roughly 83 full days). That durability, while impressive, was bad for business. So in 1924, several major light bulb manufacturers, including General Electric, Philips, and Osram, formed a secret group known as the Phoebus cartel.
Together, they agreed that the bulbs they manufactured shouldn’t last longer than 1,000 hours; shorter-lived bulbs meant more frequent replacements and therefore steadier sales. The cartel disbanded during World War II, and was later uncovered by antitrust investigations that revealed the extent of its coordination. But the idea had been planted: Companies didn’t have to simply respond to the natural wear and tear of a product — they could decide how long it should last.
Annual Upgrades
The automobile industry adopted a similar strategy. In the mid-1920s, General Motors began rolling out annual changes to its cars. Unlike Ford, which offered little variation on the Model T for nearly two decades, GM introduced annual updates to its multiple brands, including Oldsmobile, Pontiac, Cadillac, Buick, and Chevrolet.
The updates were mostly cosmetic: new grilles, different fenders, or fresh color options, all of which caused the previous year’s model to look dated even if it still ran perfectly. GM president Alfred P. Sloan called his idea “dynamic obsolescence,” and the strategy to get people to buy more cars worked. In the 1930s, the average length of car ownership was about five years; by the mid-1950s, it was down to two or three years.
The term “planned obsolescence” first appeared in 1932. In the depths of the Great Depression, New York real estate broker Bernard London published several works on economics and industrial recovery, including a paper titled “Ending the Depression Through Planned Obsolescence.” In it, he proposed that consumer goods, including shoes, homes, and machines, as well as all products of manufacturing and agriculture, should have government-imposed expiration dates. Once these products reached the end of their legal lifespans, they would be turned in and destroyed.
That same year, industrial designers Roy Sheldon and Egmont Arens released their book Consumer Engineering: A New Technique for Prosperity. Like London, they wanted to boost U.S. manufacturing and saw an opportunity with products that were built to be used, disposed of, and then replaced. The idea was intended to keep factories and the economy running by requiring people to buy new things, and the concept set the stage for turning the replacement of goods into a deliberate part of product design — making planned obsolescence a key part of American manufacturing.
Postwar Consumerism
Some 20 years after London’s pitch, postwar prosperity saw U.S. consumerism soar, and planned obsolescence began to proliferate in American household goods. Appliances such as refrigerators, washing machines, and televisions flooded the market — as did profit incentives to keep people buying new ones. Companies wasted no time coming up with new shapes, styles, and colors that made older appliances look dated — changes made more for appearance than mechanical necessity.
In 1954, famed industrial designer Brooks Stevens (whose products include everything from the Oscar Mayer Wienermobile to wide-mouthed peanut butter jars) spoke about planned obsolescence at an advertising conference, defining it as “instilling in the buyer the desire to own something a little newer, a little better, a little sooner than is necessary.” Today, both engineered functional limits and deliberate design and marketing strategies continue to shape how we buy and replace the products in our lives.
American office life once looked very different than it does today. Through clouds of cigarette smoke, you’d see rows of desks lined up in open bullpens or, starting in the late 1960s, separated by cubicle partitions. It was common to find typewriters and many other analog gadgets on desktops through much of the 20th century, before bulky early computers took over in the 1990s.
Technology, of course, continued to advance, and as office work itself became faster, more specialized, and increasingly online, many formerly essential workplace objects faded into obscurity. Here are seven once-common office fixtures that have all but vanished with time.
The Rolodex
Before contact information lived in the cloud, it was kept in a Rolodex. Patented in 1956 by the New York office supply company Zephyr American, the Rolodex took its name from a simple idea: a rolling index of contacts. A small wheel mounted on a rotating base held index cards that kept names, numbers, and notes accessible and easy to update with the turn of the knob. Simple, yes — and also one of the most iconic office supplies of all time.
By the 1960s and ’70s, Rolodexes were standard issue on desks in sales offices, C-suites, and newsrooms. A well-used Rolodex was highly valuable; it signaled a robust professional network that was carefully built and maintained over many years. Throughout the 1980s and ’90s, lawsuits were even filed over employees copying or taking Rolodexes upon leaving a company.
As computers (and eventually smartphones) took hold, the Rolodex’s role faded, but the item never entirely went away. Today, the term “Rolodex” remains shorthand for a person’s collection of contacts, and as recently as 2013, Newell Brands, which manufactures Rolodexes, claimed that consumer demand remained high.
Overhead and Slide Projectors
For much of the 20th century, no office presentation felt complete without the whir of a projector. Primitive projection devices such as the “magic lantern” had existed since the 17th century, but projectors didn’t become a common tool until the mid-20th century. During World War II, the U.S. Army relied on overhead projectors for training, a practice that helped usher the technology into civilian life after the war.
Offices (and classrooms) quickly embraced the product. Overhead projectors relied on a lamp and a flat glass surface to cast clear transparencies onto walls or screens. They were used to share charts, diagrams, or notes, or even collaborate by writing on the transparencies in real time.
Starting in the 1960s (or earlier for some bigger companies), slide projectors filled a similar, but more polished, role. Using 35 mm slides arranged in carousels, they became staples in flashy sales pitches, their signature clacking sound immortalized in the memorable Mad Men episode “The Wheel” in 2007.
By the late 1990s, digital displays and computer software such as Microsoft’s PowerPoint, with its familiar slideshow functionality, rendered both formats all but obsolete.
Punch Clocks
Punch clocks were fixtures in all kinds of workplaces through the late 19th and 20th centuries, from factories to retail stores and, of course, offices. They emerged during the Industrial Revolution, when expanding workforces and hourly wages made keeping track of time a growing necessity. In the late 1800s, inventors began developing mechanical time recorders that stamped the time an employee arrived or left onto a paper card, turning time into something that could be monitored and paid out.
By the mid-20th century, punch clocks had become nearly universal in the workplace, replacing the timekeepers who manually tracked employees’ hours. Wall-mounted units with knobs or levers and slots for inserting cards sat beside rows of time cards. Some models rang bells; others used different colored ink for employees who were punctual and those who were late. As digital technology emerged in the 1980s and ’90s, paper cards and mechanical punches were replaced with online timesheets and the now-ubiquitous swipe badges.
Carbon Paper
Every time we copy someone on an email, we’re technically recalling a vintage office relic. The “CC” line traces its origins to carbon paper, once an essential tool for making duplicates of documents. Invented in the early 1800s and widely used by the end of the century, carbon paper was coated on one side with a mixture of black carbon and wax. It would be sandwiched between a piece of paper that was being written or typed on, and another underneath. Whatever text was on the top piece of paper transferred onto the page on the bottom — along with some smudges here and there. This was known as a carbon copy, the origin of “CC.”
Modern photocopiers arrived in offices in 1959 when Xerox debuted its 914 model. While it was certainly revolutionary, the machine remained expensive — it sold at about $29,500 a pop, which is almost $330,000 today — and was prone to catching fire. Needless to say, carbon paper didn’t disappear overnight, but by the end of the 1980s, copiers were streamlined and printers were accessible to most workplaces. Today, carbon copy paper remains relegated to specialty stationery stores that cater to niche clientele.
Adding Machines
Much to the chagrin of math teachers everywhere, arithmetic is something most of us outsource to a screen today. But before computers, somewhere between the abacus and the calculator was the adding machine. Early versions were imagined as far back as the 17th century, but reliable models didn’t appear until the late 19th century. In 1888, William Seward Burroughs, a former New York bank clerk (and grandfather of famed Beat writer William S. Burroughs), patented an adding machine that could accurately total long columns of numbers and print the results.
Burroughs called his machine the Arithmometer, and after gaining international attention at the 1900 Paris Exposition, it quickly took off. Within 10 years, an estimated 100,000 office workers in payroll, accounting, billing, and other departments counted on their adding machines. Electronic calculators arrived in the mid-20th century, and by the 1970s, microchip-powered devices made adding machines and their signature keystroke sounds a relic of the past.
Pneumatic Tubes
When The Jetsons debuted in 1962, it envisioned a world of futuristic conveniences such as robot maids and flying cars. One of the predictions for the year 2062 was that air-pressurized tubes would transport people like high-speed elevators. It wasn’t entirely science fiction: At the time, pneumatic tubes were quite common in offices, quickly ferrying documents, cash, and small parcels between rooms and floors.
A person would drop a cylinder into a tube and, propelled by air pressure at about 24 feet per second, it would arrive at its destination moments later. Entire transactions could happen almost instantly, making the tubes both practical and a bit magical. In their earliest days in the mid-1800s, these networks were indeed conceptualized as realistic transportation for humans and vehicles. Ultimately, that proved an impractical feat. Today, pneumatic tubes survive primarily in hospitals or, famously, as a trash removal system at Walt Disney World.
Dictation Machines
There was a time when important office correspondence wasn’t emailed, but spoken out loud. Early recording devices such as the Dictaphone, trademarked in 1907, allowed voices to be preserved on wax cylinders or magnetic tape for later playback. Managers dictated their intended documents aloud and typists produced them on the page. Some industries, including medicine and law, even maintained dedicated dictation rooms.
Word processors and personal computers began to disrupt this process in the late 20th century, putting drafting, editing, and formatting directly into the hands of the person composing the document. While dictation rooms have largely disappeared, dictation itself has not: Doctors and lawyers still rely on it today; only now, digital recorders and smartphone apps handle the transcription automatically, folding what was once a larger office workflow into a single device.
When Americans find themselves faced with an emergency situation, they instinctively reach for their phones to dial 911. These three digits are ingrained in the collective consciousness. Even toddlers know the number — take, for example, the case of little A.J. Hayes, a 3-year-old who made a lifesaving 911 call after his father accidentally stabbed himself with a chisel. Today, it’s hard to imagine any other number being used in times of crisis.
But not all that long ago, no universal emergency number existed in the United States. Instead, people had to call their local police station or fire department directly, sometimes desperately fumbling through phone books to find the correct number. It wasn’t until the 1960s that 911 was created, revolutionizing emergency response in the country. But why were these three particular numbers chosen?
A Nationwide Emergency Number
Up until the mid-20th century, there were two main ways Americans could get in touch with emergency services: by calling their local fire department or police precinct directly, or by pressing “0” for the operator. Both options were time-consuming, confusing, and unreliable. People were often unsure where exactly to call, and there was no guarantee that a police station would actually pick up.
In 1957, the National Association of Fire Chiefs recommended that a single set of numbers be used for reporting fires, as precious seconds — if not minutes — were being lost when panicked citizens struggled to find the right number. Then, in 1967, the President’s Commission on Law Enforcement and Administration of Justice, established by President Lyndon B. Johnson, recommended the creation of a nationwide number for reporting emergencies. After all, the United Kingdom had been using its universal emergency number, 999, since 1937 (it was the first country in the world to roll out such a system) — so why wouldn’t America do the same?
In 1968, in response to growing calls for a single emergency number, the Federal Communications Commission and AT&T announced that they had decided on a new, universal three-number sequence for emergency calls: 911. The number was chosen to satisfy multiple criteria. Most importantly, it was short, easy to remember, and quick to dial. On the rotary phones used at the time, “9” and “1” were both relatively easy to find, even in the dark. Dialing 911 was also faster than the 999 used across the pond, as “9” takes slightly longer to dial than “1” on a rotary phone. And because the two digits sit at opposite ends of the dial, accidentally dialing 911 was less likely.
Another important characteristic in 911’s favor was its uniqueness — it had never been previously authorized as an office code, area code, or service code. According to the National Emergency Number Association, the choice of 911, purely on a technical level, “best met the long-range numbering plans and switching configurations of the telephone industry” at the time.
In other words, 911 didn’t conflict with any existing or planned numbers and could be programmed into existing switches without major overhauls, making it a good fit for both the existing telephone infrastructure and the industry’s plans for future expansion. What’s more, AT&T had already helped standardize numbers such as 411 (the number for directory assistance), allowing it to use the same preexisting telephone system infrastructure to process and route 911 calls.
For the public, 911 was brief, catchy, and very easy to dial. And for the telephone industry, it offered a relatively hassle-free implementation. Dialing 911 connected callers to a central operator, who then routed the call to local emergency services by mapping landline phone numbers to addresses (or, today, by using location data from your cellphone). The nationwide rollout, however, proved to be a slow process.
The new universal emergency number was first dialed in 1968, but it took decades until most of the country could use it. One major obstacle was that 911 initially worked only through Bell System telephone companies; the nation’s many independent phone companies were not included in the emergency telephone plan. Local jurisdictions also had to fund the infrastructure upgrades, and some simply didn’t think the cost was worth it. By 1976, only 17% of the population had access to the service, rising to just 50% a decade later.
Eventually, however, 911 made its way across the United States, becoming arguably the most memorized and instantly recognizable number in the country. Today, around 240 million calls are made to 911 in the U.S. each year — more than 650,000 emergency calls every single day.
Author Kristina Wright
December 18, 2025
If you grew up with an analog clock on the wall, you might remember learning that the “little hand” marked the hour and the “big hand” showed the minutes. And you probably never questioned why the hands moved in a particular direction — they simply turned “clockwise.”
But the familiar movement of a clock’s hands is a human invention influenced by geography and centuries of history. To understand why clocks move the direction they do, we have to look back to a time before clockmakers designed the gears and pendulums of our analog clocks — back to when people first started watching and recording the sun’s slow, predictable arc across the sky.
Long before mechanical clocks existed, people measured time by watching shadows. The earliest timekeeping tools — such as a simple vertical stick called a gnomon — were used in places such as ancient Egypt and Mesopotamia to track the sun’s movement.
In the Northern Hemisphere, the sun rises in the east, arcs across the southern sky, and sets in the west. As the sun moves, the shadow cast by a gnomon shifts in a predictable path: facing west in the morning, north around noon, and east in the evening.
However, the direction of movement depends on the type of sundial. On a horizontal sundial, the kind most familiar in Europe, the shadow moves in the same direction as the hands on a modern clock. On vertical, south-facing sundials, or at different latitudes, the shadow can move in the opposite direction, and its path changes slightly with the seasons. In the Southern Hemisphere, where the sun arcs across the northern sky, many sundials naturally produce what we would call a “counterclockwise” motion.
But mechanical clocks were first developed in regions where horizontal dials were common, and shadows created a consistent pattern as the sun moved east to west. This visual rhythm became the template for future timekeeping. When early European clockmakers began building mechanical clocks, they chose to replicate the familiar motion of the sundial shadow. That choice fixed the direction that became known as “clockwise.”
By the 13th and 14th centuries, large mechanical clocks were appearing in churches and town squares across Europe. These early machines were crude by modern standards — powered by falling weights and regulated by simple mechanics — but they marked a major shift in how time was kept, because timekeeping was no longer tied to sunlight. Clocks could run indoors and after dark, offering a way to keep time that was independent of the sky.
Yet despite their mechanical innovations, these devices inherited the sundial’s sense of rotation. Early clocks typically used a single hand for the hour, and that hand moved in the same direction people had watched shadows move for generations. The mechanism might have been new, but the visual representation of time was familiar.
As Europe became a center of mechanical clock innovation, these conventions spread. When clocks were adopted in regions where sundial shadows moved the opposite way, the European standard still dominated.
By the 16th century, timekeeping was no longer limited to public towers. Advances in spring-driven mechanisms allowed for smaller, portable clocks that became increasingly refined, making it possible for individual homes and businesses to keep track of the time. By the 18th century, timekeeping was even more personalized with the development of pocket watches, which grew in popularity as the Industrial Revolution progressed.
Watchmakers built their devices to match the familiar direction of tower clocks, which themselves echoed sundial motion. Even as designs improved — adding minute hands, then second hands, and increasingly accurate escapements — the direction of motion remained consistent. Clockwise became the established system across the growing global clockmaking industry.
Industrialization Standardized Clocks
By the 19th century, the rise of industrial production, global trade, and expanding railroads created an urgent need for standardized timekeeping. Trains depended on coordinated schedules. Cities needed synchronized clocks. And manufacturers needed consistent designs.
As time zones were adopted and cities aligned their clocks, the sweep of the hands became even more standardized. Mass-produced wall clocks, factory whistles, station clocks, and schoolroom timepieces all reinforced the same familiar clockwise direction.
Over the centuries, counterclockwise clocks did appear here and there, though they remained rare. They were usually found in places where a local tradition of vertical sundials made the opposite motion feel natural. One famous example still exists today: the clock on the Jewish Town Hall in Prague. Its hands run counterclockwise, following the right-to-left direction of Hebrew script and echoing older local traditions. This unique timepiece shows how easily the convention could have developed differently.
While analog clocks have mostly been replaced by digital timekeeping, the familiar clockwise motion is now simply an artifact of Northern Hemisphere sundials and the fact that Europe became the birthplace of mechanical clocks. If clockmaking had first flourished south of the equator — where the sun’s shadows sweep the opposite way — our clocks, watches, and even our idea of what “forward” in time means might move counterclockwise instead.
Whether you grew up with 8-tracks and cassette tapes or digital playlists and podcasts, recorded sound probably feels like an ordinary part of everyday life. Today, it’s possible to capture a voice, replay it instantly, and send it around the world in seconds. But before the mid-19th century, the concept of preserving words after they had been spoken was the stuff of science fiction, because the means to do it hadn’t yet been invented. Indeed, to pinpoint the oldest known recording of a human voice, we have to go back almost 170 years, to a time when sound could be seen, but not yet heard.
For years, Thomas Edison was credited with the first recording of a human voice — and for good reason, even if it isn’t quite true. In 1877, he developed his famous phonograph, a device capable of both recording and replaying sound using tinfoil wrapped around a cylinder to inscribe and then reproduce vibrations. Edison’s goal was to make improvements to Alexander Graham Bell’s telephone, and his approach grew out of his experience with telegraphy and telephony.
Edison’s recording process involved sound waves entering a metal horn and causing a thin diaphragm to vibrate. Attached to the diaphragm was a needle that inscribed those vibrations onto a rotating cylinder covered in tinfoil. To hear the sound again, you simply reversed the process — the needle traced the grooves, the diaphragm vibrated, and the original voice reemerged from the machine.
Because Edison’s device produced immediate, audible results, history long placed him at the start of the sound-recording timeline. Yet as is the case with so many breakthroughs, this famous “first” was not the true beginning. Nearly 20 years earlier, another inventor had quietly laid the groundwork for recording sound.
In 1857 — nearly two decades before Edison’s phonograph — Parisian printer and bookseller Édouard-Léon Scott de Martinville patented a device he called the phonautograph. Scott de Martinville was a self-taught inventor who was fascinated by how the human ear and voice worked together. In the era of early photography, he wondered: What did sound look like?
The phonautograph was his answer. The machine consisted of a large horn that collected sound, a stretched membrane that vibrated with the air pressure, and a bristle or stylus attached to that membrane. When someone spoke or sang into the horn, the stylus etched those vibrations onto a sheet of paper or glass coated in soot from a lamp flame, preserving the pattern of the sound waves.
Scott de Martinville’s goal wasn’t to reproduce sound, because he had no means to play the recordings back. Instead, he wanted to make sound visible, by creating what he described as a visual record of the voice itself.
He imagined scientists studying the visual signatures of speech, singers analyzing their tones, and future generations reading the voices of the past like musical manuscripts. In his writings, he questioned whether one might “preserve for future generations some features of the diction of those eminent actors… who die without leaving behind the faintest trace of their genius?”
In April 1860, Scott de Martinville used his phonautograph to capture a voice singing the French folk song “Au Clair de la Lune.” The following year, he submitted the resulting soot-covered tracing to the Académie des Sciences in Paris for preservation.
At the time, no one could hear it. The lines were a pattern of vibration, their meaning visible but inaudible. It wasn’t until 2008 that the phonautogram could finally be played: A team of audio historians from the First Sounds project and scientists from the Lawrence Berkeley National Laboratory used high-resolution optical scanning of the soot-marked paper to translate the visual waveforms into digital audio.
What emerged was the faint but unmistakable sound of a human voice singing “Au Clair de la Lune.” The 20-second recording was marred by distortion and an irregular tempo, which at first led researchers to believe the singer was a woman or a child. But further analysis revealed that it was likely Scott de Martinville himself singing.
Because Scott de Martinville’s phonautograph produced the first known physical record of sound vibrations, which has since been converted into audible sound, it is recognized today as the earliest recording of the human voice ever made. The 2008 breakthrough effectively rewrote the first chapter of sound-recording history.
Scott de Martinville died in 1879, unaware that his sound tracings would one day be heard. Yet through the efforts of modern historians and engineers, his silent tracings finally became what he had imagined — a way to preserve sound for future generations.
Two decades after the phonautograph, another French inventor, Charles Cros, theorized a way to reproduce recorded sound. In April 1877 — months before Edison’s phonograph — he deposited a sealed letter with the Académie des Sciences describing his paleophone, a device that could both record and replay sound. Cros never built his invention, but his idea of capturing and reproducing the human voice became a reality with Edison’s phonograph in 1877.
Author Bess Lovejoy
August 22, 2025
In a world without cameras, biometric databases, or even consistent spelling, identifying individuals could be quite a complex challenge. Before photography helped fix identity to an image, societies developed a range of creative methods to determine who someone was — a task that could be surprisingly difficult, especially when that someone was outside their home community. From scars to seals to signatures, here’s how identity was tracked before photo IDs.
A name was the most basic marker of identity for centuries, but it often wasn’t enough. In ancient Greece, to distinguish between people with the same first name, individuals were also identified by their father’s name. For example, an Athenian pottery shard from the fifth century BCE names Pericles as “Pericles son of Xanthippus.” In ancient Egypt, the naming convention might have reflected the name of a master rather than a parent.
But when everyone shared the same name — as in one Roman Egyptian declaration in 146 CE, signed by “Stotoetis, son of Stotoetis, grandson of Stotoetis” — things could get muddled. To resolve this, officials turned to another strategy: describing the body itself.
Scars and Silk
Detailed physical descriptions often served as a kind of textual portrait. An Egyptian will from 242 BCE describes its subject with remarkable specificity: “65 years old, of middle height, square built, dim-sighted, with a scar on the left part of the temple and on the right side of the jaw and also below the cheek and above the upper lip.” Such marks made the body “legible” for identification.
In 15th-century Bern, Switzerland, when authorities sought to arrest a fraudulent winemaker, they didn’t just list his name. They issued a description: “large fat Martin Walliser, and he has on him a silk jerkin.” Clothing — then a significant investment and deeply symbolic — became part of someone’s identifying characteristics. A person’s outfit could mark their profession, social standing, or even their city of origin.
Uniforms and insignia served a similar function, especially for travelers. In the late 15th century, official couriers from cities such as Basel, Switzerland, and Strasbourg, France, wore uniforms in city colors and carried visible badges. Pilgrims and beggars in the late Middle Ages and beyond were also required to wear specific objects — such as metal badges or tokens — that marked their status and origin. Some badges allowed the bearer to beg legally or buy subsidized bread, offering both practical aid and visible authentication.
Seals also served as powerful proxies for the self. From Mesopotamian cylinder seals to Roman oculist stamps and medieval wax impressions, these identifiers could represent both authority and authenticity. In medieval Britain, seals were often made of beeswax and attached to documents with colored tags. More than just utilitarian tools, seals were embedded with personal iconography and could even be worn as jewelry.
In many cases, travelers also had to carry letters from local priests or magistrates identifying who they were. By the 16th century, such documentation became increasingly essential, and failing to carry an identity paper could result in penalties. This passport-like system of “safe conduct” documents gradually started to spread. What began as a protection for merchants and diplomats evolved into a bureaucratic necessity for everyday people.
As written records became more widespread in medieval Europe, so did the need for permanent, portable identifiers. Royal interest in documenting property and legal rights led to the proliferation of official records, which in turn prompted the spread of literacy. Even as early as the 13th century in England, it was already considered risky to travel far without written identification.
The signature eventually emerged as a formal marker of identity, especially among literate elites, and was common by the 18th century. Still, in a mostly oral culture, signatures functioned more as ceremonial gestures than verification tools.
For the European upper classes, heraldry functioned as a visual shorthand for identity and lineage. Coats of arms adorned not only armor and flags but also furniture, buildings, and clothing. Retainers wore livery bearing their master’s symbols, making them recognizable on sight. Even death could not erase this symbolism — funerals were staged with heraldic banners, horses emblazoned with arms, and crests atop hearses.
Yet heraldry could also be diluted or faked. By the 16th century, unauthorized use of coats of arms was widespread, and forgers such as the Englishman William Dawkyns were arrested for selling false pedigrees and impersonating royal officers. In time, heraldry gradually lost its power to function as a reliable ID system.
Memory and Proximity
In the end, one of the most effective early methods of identification was simply being known. In rural communities, neighbors kept tabs on each other through personal memory and observation. You didn’t need a signature if the village priest or market vendor had known you since birth.
But with the rise of cities and the disruption of industrialization, such personal networks broke down. The state stepped in with systems of registration, documentation, and, eventually, visual records. Identity, once rooted in familiarity and the body, became a matter of paperwork and policy.
Before the photograph fixed the face in official form starting around the late 19th century, people relied on a more fluid, and sometimes fragile, constellation of signs: names, scars, clothing, crests, seals, and simple familiarity. All were attempts to answer the same eternal question — who are you? — in the absence of a camera’s eye.
If you grew up in the U.S. before 24-hour television programming, you might remember falling asleep to the sound of the national anthem or waking up to the eerie tone of a test pattern. Local stations typically signed off late at night — often with patriotic imagery and music — before going dark or switching to a test card. Early risers or insomniacs who turned on the TV were thus greeted by screens filled with color bars or geometric patterns accompanied by a high-pitched tone, holding the airwaves until regular programming resumed at dawn.
These static images, known as test cards or test patterns, weren’t just placeholders. They were created as calibration tools for engineers — and unintentionally became enduring symbols of a bygone broadcast era. Here’s a look back at TV before 24/7 programming changed the way we watched the tube.
Programming Didn’t Always Run All Night
In the early decades of American television, viewers typically had access to only three to five local stations, and programming didn’t operate around the clock. In 1950, four networks — ABC, CBS, DuMont (which folded in 1955), and NBC — were producing just 90 hours of programming a week combined. Within a decade, the three remaining networks were producing about that much programming individually — about 12 to 13 hours per day.
Rather than broadcast dead air outside of programming hours, many stations displayed test cards. These images helped technicians adjust transmission quality and allowed viewers to fine-tune their analog sets. In the age of “rabbit ears” and vertical hold, achieving clear reception took a little finessing at home. Test cards served as visual guideposts, helping viewers align antennas and tweak picture settings to reduce flicker, ghosting, or image roll.
Test cards date back to the 1930s, when the BBC used printed cards placed in front of cameras to help engineers align signals. The BBC’s first test card, used in 1934, was a simple black circle with a line underneath it. The practice caught on in the U.S. in the 1940s as commercial broadcasting expanded. These early test cards were often mounted on easels in front of studio cameras during off-air hours.
Black-and-white patterns featured radial grids, concentric circles, and crosshairs to reveal image distortion, framing errors, or technical noise. With the arrival of color broadcasting in the 1960s and ’70s, test patterns evolved into electronically generated images such as the SMPTE (Society of Motion Picture and Television Engineers) color bars to test for hue, saturation, contrast, and brightness. Today, the calibration and troubleshooting of broadcast signals are handled by specialized test-signal generators, whose output is no longer broadcast to viewers.
Some test cards became more than just technical tools — they evolved into pop culture artifacts. In the U.S., the RCA “Indian head” test pattern — featuring a Native American figure flanked by geometric markings — was created in 1939 and ran for more than two decades, becoming a kind of visual shorthand for early American television. CBS had its own “bullseye” test pattern, a striking design with concentric circles and resolution wedges used to check focus and sharpness.
Depending on where you grew up, you might remember a specific local station’s test card — one that not only included the TV channel number and the affiliate’s call sign but also the location it was broadcasting from. For instance, ABC affiliate WJZ-TV in New York featured the Empire State Building on its black-and-white test card in the early 1950s, giving the broadcast a distinctly regional identity. These static images became a familiar part of daily television viewing, connecting viewers to their local stations and communities.
Across the Atlantic, the BBC’s Test Card F became one of the most iconic test patterns of all. Introduced in 1967, it featured a young girl named Carole Hersee (daughter of BBC engineer George Hersee) playing tic-tac-toe with a slightly unsettling clown doll named Bubbles. Airing for thousands of hours between 1967 and 1998, Hersee’s face became the most-broadcast image in U.K. television history.
For viewers of a certain age, test cards were an unofficial signal that the day was over. When the test pattern appeared, you knew it was time to turn off the TV and go to bed.
In the 1980s and ’90s, however, 24-hour programming took over. Infomercials, syndicated reruns, and late-night variety shows replaced those once-quiet hours. Cable, satellite, and eventually streaming made content seemingly infinite and calibration test patterns largely obsolete. Today, if a station goes off the air (usually due to weather or technical issues), you’re more likely to see a local station logo or a digital error message than a test pattern.
Even though test cards have mostly vanished from live broadcasts, they still crop up in pop culture as a kind of retro shorthand — an inside joke for those who know what they represent. On The Big Bang Theory, Sheldon Cooper’s SMPTE color bars T-shirt turned vintage broadcast tech into a fashion trend; the image has been widely reproduced and sold on everything from clothing to coffee mugs.
Some of us who remember when test cards signaled the end of the TV viewing day still have a strange fondness for their eerie stillness. In a world of endless choices and the seemingly judgmental “Are you still watching?” prompt, test cards remind us of a time when television showed us everything it had to offer — and then signed off for the night.
5 Things From ‘The Jetsons’ That Actually Exist Today
When The Jetsons first aired in 1962, it presented a futuristic world filled with imaginative technology that seemed purely fantastical to audiences at the time. Set 100 years in the future — in 2062 — it was Hanna-Barbera’s sci-fi counterpart to The Flintstones. But instead of going back to the Stone Age, it fast-forwarded a century to the Jetson family and their escapades in Orbit City. The show’s creators had free rein to playfully construct a future with any technological or societal advances their minds could conceive of, building a colorful world above the clouds.
Despite initially running for only one season (it was later revived in the 1980s), The Jetsons was highly influential both in terms of shaping the classic kitsch futurism aesthetic of the 1960s and for its wider role in science fiction. Writing for Smithsonian magazine on the show’s 50th anniversary, Matt Novak called the series the “single most important piece of 20th century futurism.” That’s a bold claim — especially for a cartoon — but The Jetsons had an uncanny ability to present possible future technologies in a very simple and entertaining way — and with all the technological optimism of the 1960s. And while the show’s creators weren’t the first to dream up most of the cartoon’s many inventions, they did help introduce them to a mainstream audience who might otherwise never have come across these ideas in less accessible works of science fiction.
Sadly, we’re still waiting for a viable flying car like the ones seen in Orbit City. But there are some futuristic concepts from the original season of The Jetsons that do actually exist today — and we didn’t even have to wait until 2062.
Video Calls
In the world of The Jetsons, video calling is a standard feature of daily life. George Jetson frequently communicates with his arrogant boss, Mr. Spacely, through a video screen, while family members regularly connect using visual communication devices in their home and on the go. This technology, which seemed revolutionary in the 1960s, has become entirely commonplace today, with Skype, Zoom, FaceTime, and more. Even more impressive is George Jetson’s video watch, a wrist-worn communication device much like modern smartwatches.
Flat-Screen TVs
Back in the 1960s, televisions were big, bulky boxes with screens barely large enough to justify the set’s cumbersome proportions. And yet, just seconds into the first episode of The Jetsons, we see Jane Jetson standing in front of a flat-screen TV elegantly suspended from the ceiling. It’s impressively similar to today’s flat-screens, which didn’t become popular until the 2000s.
Robot Vacuum Cleaners
The very first episode of The Jetsons introduces us to Rosie the Robot, the Jetson family’s robotic housekeeper. While we don’t have anything quite like Rosie in our homes just yet, we do have access to automated cleaning systems similar to those seen in the Jetsons’ residence. The show features various cleaning machines, including small robotic vacuums not too dissimilar to those we have now, such as the Roomba vacuum cleaner made by the company iRobot, which first came on the market in 2002.
Tanning Beds
Tanning beds only became commercially available in the U.S. in the late 1970s, before becoming hugely popular the following decade. But before that, they appeared in The Jetsons. In “Millionaire Astro,” episode 16 of season 1, we visit Gottrockets Mansion, the extravagant residence of billionaire businessman G.P. Gottrockets (who happens to be the first owner of Astro the dog before he joins the Jetson family). We see Gottrockets using a long bed where he can have a quick three-second tan. He even has three tanning settings to choose from: Miami, Honolulu, or Riviera. The instant results depicted in the cartoon are exaggerated for comedic effect, but the fundamental concept of a machine that delivers controlled UV exposure for tanning purposes proved remarkably prescient.
Capsule Endoscopy
In “Test Pilot,” episode 15 of season 1, George Jetson is sent for a physical exam checkup. His doctor, Dr. Radius, makes George swallow a tiny robotic pill called the Peekaboo Prober, which enters his body to scan his internal systems. This scene was well ahead of its time. It’s only in recent years that technology such as capsule endoscopy — for example, the ingestible PillCam — has become available. Capsule endoscopy uses tiny wireless cameras to take pictures of the organs in the body, as opposed to the more traditional and widely used endoscope, which is not wireless and therefore more invasive.
Submarines have come a long way in the last century. During World War I, their effectiveness became truly apparent, with German U-boats sinking more than 5,000 Allied ships, forever changing the nature of war at sea. Since then, submarine technology has advanced greatly, and today they perform a wide variety of tasks in our seas and oceans.
Civilian submarines engage in exploration, marine science, salvage operations, and the construction and maintenance of underwater infrastructure. In the military arena, meanwhile, submarines prowl the oceans undetected, capable in some cases of staying submerged for months at a time. Military submarines offer a range of capabilities, whether it’s reconnaissance, the covert insertion of special forces, silently attacking enemy surface ships, or — in the case of the most advanced nuclear submarines — strategic nuclear deterrence.
The use of submarines, however, predates World War I by longer than we might imagine. For many centuries, inventors and visionaries have conceived of vessels capable of moving underwater. These early ideas, ranging from theoretical designs to actual working prototypes, represent crucial steps in maritime technology. Here, we look at three submarine journeys that represent firsts of their kind, from the ancient world to the first use of a submarine in combat.
It’s hard to say with certainty when the first submarine journey occurred, partly because it depends on how, exactly, we define a submarine. If simply defined as a submersible craft used for warfare, it could be argued that the earliest documented case dates all the way back to Alexander the Great. According to Aristotle’s work Problemata, Alexander, or at least his divers, descended into the depths during the siege of Tyre in 332 BCE, possibly to destroy the city’s underwater defenses. Written works and paintings over the years have told legendary stories of Alexander exploring the sea in what could be described as a diving bell, bathysphere, or rudimentary submarine. But like many tales involving Alexander the Great, the story has been embellished over the centuries.
While it’s entirely possible that Alexander the Great used divers at Tyre, and possibly a rudimentary diving bell, it’s hard to give him credit for the first true submarine journey. That honor is more correctly bestowed on Cornelis Drebbel, a Dutch inventor who was invited to the court of King James I of England in 1604.
Drebbel had already made a name for himself as an engineer and inventor, having created various contraptions in Holland, including perpetual motion machines, clocks, and a self-regulating furnace. But it was during his time in England that Drebbel came up with arguably his most famous invention. In 1620, while working for the Royal Navy, Drebbel designed and built the first documented navigable submarine. None of his original records or engineering drawings survive, but there were enough eyewitness accounts to give us a good idea of how his submarine worked.
The basic design was like a rowing boat, but with raised sides that met at the top, and the whole vessel was covered in greased leather. It had a watertight hatch in the middle and was propelled by oars that passed through watertight leather seals in the sides. Large pigskin bladders were used to control diving and surfacing, while tubes floating at the surface of the water likely provided the crew with oxygen.
Drebbel built three submarines in total, each one bigger than the last — the final model had six oars and could carry 16 passengers. He tested them on the River Thames in London in front of thousands of spectators, one of whom was King James I. Contemporary accounts suggest that the king himself may have joined Drebbel on one of these test runs, which would make James I the first monarch to ride in a submarine. Accounts of the test runs, which continued until 1624, state that the submarine could travel underwater from Westminster to Greenwich and back — a three-hour trip with the boat traveling 15 feet below the surface.
Another hugely significant submarine journey took place a little more than a century and a half after Drebbel’s historic underwater excursions. In 1776, a one-man submersible called the Turtle became the world’s first combat submarine during the American Revolutionary War.
Designed by American inventor David Bushnell, this remarkable vessel was shaped somewhat like a large acorn, or, as Bushnell described it in a letter to Thomas Jefferson, as “two upper tortoise shells of equal size, joined together.” The submarine was propelled vertically and horizontally by hand-cranked and pedal-powered propellers operated by the single pilot.
On September 7, 1776, Sergeant Ezra Lee of the Continental Army piloted the Turtle in the first ever combat mission involving a submarine, an attempted attack on the British flagship HMS Eagle in New York Harbor. The Turtle was equipped with a mine that was to be fastened to the enemy ship’s hull, but Lee failed to attach the explosive charge. The perilous mission was a failure, as were subsequent missions involving the sub. It was, nevertheless, a pivotal moment in the history of warfare, with the Turtle demonstrating the potential of submarines in combat.
Author Kristina Wright
February 13, 2025
Developed in the late 1820s, photography revolutionized the way history could be documented, blending art and science to create lasting visual records. Early photographs were exclusively black and white, featuring stark, contrast-heavy images that showcased the technical brilliance of the new medium. By the 1880s, however, photographs began taking on a warm, brownish tint. This distinctive aesthetic, known as sepia toning, became a hallmark of photography, particularly portraiture, around the turn of the 20th century.
Sepia-toned photography was not just an aesthetic preference, but a direct result of technological advancements aimed at improving the longevity and visual quality of photographs. As pioneers in the field experimented with ways to improve the durability of their images, sepia toning emerged as a practical and widely adopted solution. The process extended the lifespan of photographs, preventing fading and deterioration over time. As a result, sepia-toned prints dominated photography for several decades.
Despite their brownish hues, these photographs are still considered a form of black-and-white photography. While the sepia toning process adds warmth to the monochromatic image, it doesn’t technically introduce additional colors.
In the early days of photography, creating an image was a complicated chemical process that required precise control over light-sensitive materials. Photographers used silver-based compounds, such as silver halides, to develop images on a variety of surfaces, including glass, metal, and paper. When exposed to light, these silver compounds would undergo a chemical reaction and form a visible image.
Despite their remarkable ability to capture detail, early photographs were highly susceptible to environmental damage. Over time, exposure to light, heat, and air caused the silver particles to oxidize, leading to fading and discoloration of the photographs.
To address this issue, photographers developed a technique known as “toning,” a process that involved treating photographic prints with chemical solutions both to enhance their color and to improve their longevity. Sepia toning, named after the ink from the cuttlefish species Sepia officinalis, became one of the most effective and widely adopted methods of toning.
This process replaced some of the sensitive metallic silver in a print with silver sulfide, a more stable compound that was less prone to oxidation and fading. The chemical transformation not only gave the photographs their characteristic warm, brownish hue but also extended their lifespan, making it possible to preserve images for generations to come in an era when photography was an expensive and time-consuming process.
Sepia toning gained popularity in the 1880s as photographers experimented with ways to create prints that were visually appealing as well as long-lasting. In fact, sepia-toned photographs last 50% to 100% longer than untoned black-and-white images. However, there was no universal formula for creating sepia-toned images, so each photographer had to develop their own chemical combination. This resulted in a variety of brown hues, ranging from light golden brown to dark reddish brown. The toning process remained in widespread use well into the 20th century, allowing countless photographs from that era to survive to the present day.
It Was Aesthetically Appealing as Well as Practical
While sepia toning was primarily adopted for its preservation benefits, its aesthetic qualities also contributed to its widespread appeal. The warm, brownish hues of sepia prints conveyed a sense of elegance and timelessness. Unlike the stark contrast of black-and-white images, sepia tones were considered more flattering, making them particularly popular for portraits and sentimental keepsakes. The process deepened contrast, refined tonal gradation, and offered a softer, dreamlike image compared to black-and-white photography.
Culturally, sepia-toned photographs came to define the late-Victorian and Edwardian eras, a period marked by industrial progress and significant social and cultural changes in both Europe and the United States. Photography during this time captured moments from everyday life, such as family portraits, weddings, christenings, and postmortem photos of deceased loved ones, as well as architecture, nature scenes, and historical events. As a result, sepia-toned photography became linked to collective memories of the past, reinforcing its association with history and nostalgia.
The dominance of sepia-toned photographs began to fade in the early 20th century as advancements in technology introduced more efficient and affordable ways to preserve images. Black-and-white photography became the standard because it was simpler, faster, and less costly than the additional chemical treatment required for sepia toning. By the 1920s and ’30s, photographers increasingly preferred gelatin silver prints, which produced high-quality black-and-white images without the need for sepia toning.
The decline of sepia toning continued with the rise of color photography in the mid-20th century. As color film became more widely available and affordable in the 1940s and ’50s, both sepia and black-and-white photography fell out of fashion. Sepia tones remained relevant in some artistic and archival contexts, but today that nostalgic aesthetic is often recreated digitally rather than through chemical processing.