In a world without cameras, biometric databases, or even consistent spelling, identifying individuals could be quite a complex challenge. Before photography helped fix identity to an image, societies developed a range of creative methods to determine who someone was — a task that could be surprisingly difficult, especially when that someone was outside their home community. From scars to seals to signatures, here’s how identity was tracked before photo IDs.
A name was the most basic marker of identity for centuries, but it often wasn’t enough. In ancient Greece, to distinguish between people with the same first name, individuals were also identified by their father’s name. For example, an Athenian pottery shard from the fifth century BCE names Pericles as “Pericles son of Xanthippus.” In ancient Egypt, the naming convention might have reflected the name of a master rather than a parent.
But when everyone shared the same name — as in one Roman Egyptian declaration in 146 CE, signed by “Stotoetis, son of Stotoetis, grandson of Stotoetis” — things could get muddled. To resolve this, officials turned to another strategy: describing the body itself.
Scars and Silk
Detailed physical descriptions often served as a kind of textual portrait. An Egyptian will from 242 BCE describes its subject with remarkable specificity: “65 years old, of middle height, square built, dim-sighted, with a scar on the left part of the temple and on the right side of the jaw and also below the cheek and above the upper lip.” Such marks made the body “legible” for identification.
In 15th-century Bern, Switzerland, when authorities sought to arrest a fraudulent winemaker, they didn’t just list his name. They issued a description: “large fat Martin Walliser, and he has on him a silk jerkin.” Clothing — then a significant investment and deeply symbolic — became part of someone’s identifying characteristics. A person’s outfit could mark their profession, social standing, or even their city of origin.
Uniforms and insignia served a similar function, especially for travelers. In the late 15th century, official couriers from cities such as Basel, Switzerland, and Strasbourg, France, wore uniforms in city colors and carried visible badges. Pilgrims and beggars in the late Middle Ages and beyond were also required to wear specific objects — such as metal badges or tokens — that marked their status and origin. Some badges allowed the bearer to beg legally or buy subsidized bread, offering both practical aid and visible authentication.
Seals also served as powerful proxies for the self. From Mesopotamian cylinder seals to Roman oculist stamps and medieval wax impressions, these identifiers could represent both authority and authenticity. In medieval Britain, seals were often made of beeswax and attached to documents with colored tags. More than just utilitarian tools, seals were embedded with personal iconography and could even be worn as jewelry.
In many cases, travelers also had to carry letters from local priests or magistrates identifying who they were. By the 16th century, such documentation became increasingly essential, and failing to carry an identity paper could result in penalties. This passport-like system of “safe conduct” documents gradually started to spread. What began as a protection for merchants and diplomats evolved into a bureaucratic necessity for everyday people.
As written records became more widespread in medieval Europe, so did the need for permanent, portable identifiers. Royal interest in documenting property and legal rights led to the proliferation of official records, which in turn prompted the spread of literacy. Even as early as the 13th century in England, it was already considered risky to travel far without written identification.
The signature eventually emerged as a formal marker of identity, especially among literate elites, and was common by the 18th century. Still, in a mostly oral culture, signatures functioned more as ceremonial gestures than verification tools.
For the European upper classes, heraldry functioned as a visual shorthand for identity and lineage. Coats of arms adorned not only armor and flags but also furniture, buildings, and clothing. Retainers wore livery bearing their master’s symbols, making them recognizable on sight. Even death could not erase this symbolism — funerals were staged with heraldic banners, horses emblazoned with arms, and crests atop hearses.
Yet heraldry could also be diluted or faked. By the 16th century, unauthorized use of coats of arms was widespread, and forgers such as the Englishman William Dawkyns were arrested for selling false pedigrees and impersonating royal officers. In time, heraldry gradually lost its power to function as a reliable ID system.
Memory and Proximity
In the end, one of the most effective early methods of identification was simply being known. In rural communities, neighbors kept tabs on each other through personal memory and observation. You didn’t need a signature if the village priest or market vendor had known you since birth.
But with the rise of cities and the disruption of industrialization, such personal networks broke down. The state stepped in with systems of registration, documentation, and, eventually, visual records. Identity, once rooted in familiarity and the body, became a matter of paperwork and policy.
Before the photograph fixed the face in official form starting around the late 19th century, people relied on a more fluid, and sometimes fragile, constellation of signs: names, scars, clothing, crests, seals, and simple familiarity. All were attempts to answer the same eternal question — who are you? — in the absence of a camera’s eye.
If you grew up in the U.S. before 24-hour television programming, you might remember falling asleep to the sound of the national anthem or waking up to the eerie tone of a test pattern. Local stations typically signed off late at night — often with patriotic imagery and music — before going dark or switching to a test card. Early risers or insomniacs who turned on the TV were thus greeted by screens filled with color bars or geometric patterns accompanied by a high-pitched tone, holding the airwaves until regular programming resumed at dawn.
These static images, known as test cards or test patterns, weren’t just placeholders. They were created as calibration tools for engineers — and unintentionally became enduring symbols of a bygone broadcast era. Here’s a look back at TV before 24/7 programming changed the way we watched the tube.
Programming Didn’t Always Run All Night
In the early decades of American television, viewers typically had access to only three to five local stations, and programming didn’t operate around the clock. In 1950, four networks — ABC, CBS, DuMont (which folded in 1955), and NBC — were producing just 90 hours of programming a week combined. Within a decade, the three remaining networks were producing about that much programming individually — about 12 to 13 hours per day.
Rather than broadcast dead air outside of programming hours, many stations displayed test cards. These images helped technicians adjust transmission quality and allowed viewers to fine-tune their analog sets. In the age of “rabbit ears” and vertical hold, achieving clear reception took a little finessing at home. Test cards served as visual guideposts, helping viewers align antennas and tweak picture settings to reduce flicker, ghosting, or image roll.
Test cards date back to the 1930s, when the BBC used printed cards placed in front of cameras to help engineers align signals. The BBC’s first test card, used in 1934, was a simple black circle with a line underneath it. The practice caught on in the U.S. in the 1940s as commercial broadcasting expanded. These early test cards were often mounted on easels in front of studio cameras during off-air hours.
Black-and-white patterns featured radial grids, concentric circles, and crosshairs to reveal image distortion, framing errors, or technical noise. With the arrival of color broadcasting in the 1960s and ’70s, test patterns evolved into electronically generated images such as the SMPTE (Society of Motion Picture and Television Engineers) color bars to test for hue, saturation, contrast, and brightness. Today, the calibration and troubleshooting of broadcast signals are handled by specialized test-signal generators, though they are not broadcast to viewers.
Some test cards became more than just technical tools — they evolved into pop culture artifacts. In the U.S., the RCA “Indian head” test pattern — featuring a Native American figure flanked by geometric markings — was created in 1939 and ran for more than two decades, becoming a kind of visual shorthand for early American television. CBS had its own “bullseye” test pattern, a striking design with concentric circles and resolution wedges used to check focus and sharpness.
Depending on where you grew up, you might remember a specific local station’s test card — one that not only included the TV channel number and the affiliate’s call sign but also the location it was broadcasting from. For instance, ABC affiliate WJZ-TV in New York featured the Empire State Building on its black-and-white test card in the early 1950s, giving the broadcast a distinctly regional identity. These static images became a familiar part of daily television viewing, connecting viewers to their local stations and communities.
Across the Atlantic, the BBC’s Test Card F became one of the most iconic test patterns of all. Introduced in 1967, it featured a young girl named Carole Hersee (daughter of BBC engineer George Hersee) playing tic-tac-toe with a slightly unsettling clown doll named Bubbles. Airing for thousands of hours between 1967 and 1998, Hersee’s face became the most-broadcast image in U.K. television history.
For viewers of a certain age, test cards were an unofficial signal that the day was over. When the test pattern appeared, you knew it was time to turn off the TV and go to bed.
In the 1980s and ’90s, however, 24-hour programming took over. Infomercials, syndicated reruns, and late-night variety shows replaced those once-quiet hours. Cable, satellite, and eventually streaming made content seemingly infinite and calibration test patterns largely obsolete. Today, if a station goes off the air (usually due to weather or technical issues), you’re more likely to see a local station logo or a digital error message than a test pattern.
Even though test cards have mostly vanished from live broadcasts, they still crop up in pop culture as a kind of retro shorthand — an inside joke for those who know what they represent. On The Big Bang Theory, Sheldon Cooper’s SMPTE color bars T-shirt turned vintage broadcast tech into a fashion trend; the image has been widely reproduced and sold on everything from clothing to coffee mugs.
Some of us who remember when test cards signaled the end of the TV viewing day still have a strange fondness for their eerie stillness. In a world of endless choices and the seemingly judgmental “Are you still watching?” prompt, test cards remind us of a time when television showed us everything it had to offer — and then signed off for the night.
When The Jetsons first aired in 1962, it presented a futuristic world filled with imaginative technology that seemed purely fantastical to audiences at the time. Set 100 years in the future — in 2062 — it was Hanna-Barbera’s sci-fi counterpart to The Flintstones. But instead of going back to the Stone Age, it fast-forwarded a century to the Jetson family and their escapades in Orbit City. The show’s creators had free rein to playfully construct a future with any technological or societal advances their minds could conceive of, building a colorful world above the clouds.
Despite initially running for only one season (it was later revived in the 1980s), The Jetsons was highly influential, both in shaping the classic kitsch futurism aesthetic of the 1960s and in its wider role in science fiction. Writing for Smithsonian magazine on the show’s 50th anniversary, Matt Novak called the series the “single most important piece of 20th century futurism.” That’s a bold claim — especially for a cartoon — but The Jetsons had an uncanny ability to present possible future technologies in a very simple and entertaining way — and with all the technological optimism of the 1960s. And while the show’s creators weren’t the first to dream up most of the cartoon’s many inventions, they did help introduce them to a mainstream audience who might otherwise never have come across these ideas in less accessible works of science fiction.
Sadly, we’re still waiting for a viable flying car like the ones seen in Orbit City. But there are some futuristic concepts from the original season of The Jetsons that do actually exist today — and we didn’t even have to wait until 2062.
Video Calls
In the world of The Jetsons, video calling is a standard feature of daily life. George Jetson frequently communicates with his arrogant boss, Mr. Spacely, through a video screen, while family members regularly connect using visual communication devices in their home and on the go. This technology, which seemed revolutionary in the 1960s, has become entirely commonplace today, with Skype, Zoom, FaceTime, and more. Even more impressive is George Jetson’s video watch, a wrist-worn communication device much like modern smartwatches.
Back in the 1960s, televisions were big, bulky boxes with screens barely large enough to justify the set’s cumbersome proportions. And yet, just seconds into the first episode of The Jetsons, we see Jane Jetson standing in front of a flat-screen TV elegantly suspended from the ceiling. It’s impressively similar to today’s flat-screens, which didn’t become popular until the 2000s.
Robot Vacuum Cleaners
The very first episode of The Jetsons introduces us to Rosie the Robot, the Jetson family’s robotic housekeeper. While we don’t have anything quite like Rosie in our homes just yet, we do have access to automated cleaning systems similar to those seen in the Jetsons’ residence. The show features various cleaning machines, including small robotic vacuums not too dissimilar to those we have now, such as the Roomba vacuum cleaner made by the company iRobot, which first came on the market in 2002.
Tanning beds only became commercially available in the U.S. in the late 1970s, before becoming hugely popular the following decade. But before that, they appeared in The Jetsons. In “Millionaire Astro,” episode 16 of season 1, we visit Gottrockets Mansion, the extravagant residence of billionaire businessman G.P. Gottrockets (who happens to be the first owner of Astro the dog before he joins the Jetson family). We see Gottrockets using a long bed where he can have a quick three-second tan. He even has three tanning settings to choose from: Miami, Honolulu, or Riviera. The instant results depicted in the cartoon are exaggerated for comedic effect, but the fundamental concept of a machine that delivers controlled UV exposure for tanning purposes proved remarkably prescient.
In “Test Pilot,” episode 15 of season 1, George Jetson is sent for a physical exam checkup. His doctor, Dr. Radius, makes George swallow a tiny robotic pill called the Peekaboo Prober, which enters his body to scan his internal systems. This scene was well ahead of its time. It’s only in recent years that technology such as capsule endoscopy — for example, the ingestible PillCam — has become available. Capsule endoscopy uses tiny wireless cameras to take pictures of the organs in the body, as opposed to the more traditional and widely used endoscope, which is not wireless and therefore more invasive.
Submarines have come a long way in the last century. During World War I, their effectiveness became truly apparent, with German U-boats sinking more than 5,000 Allied ships, forever changing the nature of war at sea. Since then, submarine technology has advanced greatly, and today they perform a wide variety of tasks in our seas and oceans.
Civilian submarines engage in exploration, marine science, salvage operations, and the construction and maintenance of underwater infrastructure. In the military arena, meanwhile, submarines prowl the oceans undetected, capable in some cases of staying submerged for months at a time. Military submarines offer a range of capabilities, whether it’s reconnaissance, the covert insertion of special forces, silently attacking enemy surface ships, or — in the case of the most advanced nuclear submarines — strategic nuclear deterrence.
The use of submarines, however, predates World War I by longer than we might imagine. For many centuries, inventors and visionaries have conceived of vessels capable of moving underwater. These early ideas, ranging from theoretical designs to actual working prototypes, represent crucial steps in maritime technology. Here, we look at three submarine journeys that represent firsts of their kind, from the ancient world to the first use of a submarine in combat.
It’s hard to say with certainty when the first submarine journey occurred, partly because the answer depends on how, exactly, we define a submarine. If simply defined as a submersible craft used for warfare, it could be argued that the earliest documented case dates all the way back to Alexander the Great. According to Aristotle’s work Problemata, Alexander, or at least his divers, descended into the depths during the siege of Tyre in 332 BCE, possibly to destroy the city’s underwater defenses. Written works and paintings over the years have told legendary stories of Alexander exploring the sea in what could be described as a diving bell, bathysphere, or rudimentary submarine. But like many tales involving Alexander the Great, the story has been embellished over the centuries.
While it’s entirely possible that Alexander the Great used divers at Tyre, and possibly a rudimentary diving bell, it’s hard to give him credit for the first true submarine journey. That honor is more correctly bestowed on Cornelis Drebbel, a Dutch inventor who was invited to the court of King James I of England in 1604.
Drebbel had already made a name for himself as an engineer and inventor, having created various contraptions in Holland, including perpetual motion machines, clocks, and a self-regulating furnace. But it was during his time in England that Drebbel came up with arguably his most famous invention. In 1620, while working for the Royal Navy, Drebbel designed and built the first documented navigable submarine. None of his original records or engineering drawings survive, but there were enough eyewitness accounts to give us a good idea of how his submarine worked.
The basic design was like a rowing boat, but with raised sides that met at the top, and the whole vessel covered in greased leather. It had a watertight hatch in the middle and was propelled by oars that came out the side through watertight leather seals. Large pigskin bladders were used to control diving and surfacing, while tubes that floated at the surface of the water likely provided the crew with oxygen.
Drebbel built three submarines in total, each one bigger than the last — the final model had six oars and could carry 16 passengers. He tested them on the River Thames in London in front of thousands of spectators, one of whom was King James I. Contemporary accounts suggest that the king himself may have joined Drebbel on one of his test runs, making James I the first monarch to ride in a submarine. Accounts of the test runs, which continued until 1624, state that the submarine could travel underwater from Westminster to Greenwich and back — a three-hour trip with the boat traveling 15 feet below the surface.
Another hugely significant submarine journey took place a little more than a century and a half after Drebbel’s historic underwater excursions. In 1776, a one-man submersible called the Turtle became the world’s first combat submarine during the American Revolutionary War.
Designed by American inventor David Bushnell, this remarkable vessel was shaped somewhat like a large acorn, or, as Bushnell described it in a letter to Thomas Jefferson, as “two upper tortoise shells of equal size, joined together.” The submarine was propelled vertically and horizontally by hand-cranked and pedal-powered propellers operated by the single pilot.
On September 7, 1776, Sergeant Ezra Lee of the Continental Army piloted the Turtle in the first ever combat mission involving a submarine. He used the vessel in an attempted attack on the British flagship HMS Eagle in New York Harbor. The Turtle was equipped with a mine that was to be attached to the hull of the enemy ship, but Lee was unable to fix the explosive charge in place. The perilous mission was a failure, as were subsequent missions involving the sub. It was, nevertheless, a pivotal moment in the history of warfare, with the Turtle demonstrating the potential of submarines in combat.
Developed in the late 1820s, photography revolutionized the way history could be documented, blending art and science to create lasting visual records. Early photographs were exclusively black and white, featuring stark, contrast-heavy images that showcased the technical brilliance of the new medium. By the 1880s, however, photographs began taking on a warm, brownish tint. This distinctive aesthetic, known as sepia toning, became a hallmark of photography, particularly portraiture, around the turn of the 20th century.
Sepia-toned photography was not just an aesthetic preference, but a direct result of technological advancements aimed at improving the longevity and visual quality of photographs. As pioneers in the field experimented with ways to improve the durability of their images, sepia toning emerged as a practical and widely adopted solution. The process extended the lifespan of photographs, preventing fading and deterioration over time. As a result, sepia-toned prints dominated photography for several decades.
Despite their brownish hues, these photographs are still considered a form of black-and-white photography. While the sepia toning process adds warmth to the monochromatic image, it doesn’t technically introduce additional colors.
In the early days of photography, creating an image was a complicated chemical process that required precise control over light-sensitive materials. Photographers used silver-based compounds, such as silver halides, to develop images on a variety of surfaces, including glass, metal, and paper. When exposed to light, these silver compounds would undergo a chemical reaction and form a visible image.
Despite their remarkable ability to capture detail, early photographs were highly susceptible to environmental damage. Over time, exposure to light, heat, and air caused the silver particles to oxidize, leading to fading and discoloration of the photographs.
To address this issue, photographers developed a technique known as “toning,” a process that involved treating photographic prints with chemical solutions both to enhance their color and to improve their longevity. Sepia toning, named after the ink from the cuttlefish species Sepia officinalis, became one of the most effective and widely adopted methods of toning.
This process replaced some of the sensitive metallic silver in a print with silver sulfide, a more stable compound that was less prone to oxidation and fading. The chemical transformation not only gave the photographs their characteristic warm, brownish hue but also extended their lifespan, making it possible to preserve images for generations to come in an era when photography was an expensive and time-consuming process.
Sepia toning gained popularity in the 1880s as photographers experimented with ways to create prints that were visually appealing as well as long-lasting. In fact, sepia-toned photographs last 50% to 100% longer than untoned black-and-white images. However, there was no universal formula for creating sepia-toned images, so each photographer had to develop their own chemical combination. This resulted in a variety of brown hues, ranging from light golden brown to dark reddish brown. The toning process remained in widespread use well into the 20th century, allowing countless photographs from that era to survive to the present day.
It Was Aesthetically Appealing as Well as Practical
While sepia toning was primarily adopted for its preservation benefits, its aesthetic qualities also contributed to its widespread appeal. The warm, brownish hues of sepia prints conveyed a sense of elegance and timelessness. Unlike the stark contrast of black-and-white images, sepia tones were considered more flattering, making them particularly popular for portraits and sentimental keepsakes. The process deepened contrast, refined tonal gradation, and offered a softer, dreamlike image compared to black-and-white photography.
Culturally, sepia-toned photographs came to define the late-Victorian and Edwardian eras, a period marked by industrial progress and significant social and cultural changes in both Europe and the United States. Photography during this time captured moments from everyday life, such as family portraits, weddings, christenings, and postmortem photos of deceased loved ones, as well as architecture, nature scenes, and historical events. As a result, sepia-toned photography became linked to collective memories of the past, reinforcing its association with history and nostalgia.
The dominance of sepia-toned photographs began to fade in the early 20th century as advancements in technology introduced more efficient and affordable ways to preserve images. Black-and-white photography became the standard because it was simpler, faster, and less costly than the additional chemical treatment required for sepia toning. By the 1920s and ’30s, photographers increasingly preferred gelatin silver prints, which produced high-quality black-and-white images without the need for sepia toning.
The decline of sepia toning continued with the rise of color photography in the mid-20th century. As color film became more widely available and affordable in the 1940s and ’50s, both sepia and black-and-white photography fell out of fashion. Sepia tones remained relevant in some artistic and archival contexts, but today that nostalgic aesthetic is often recreated digitally rather than through chemical processing.
The QWERTY keyboard layout is so common that most of us never stop to question its unusual arrangement of letters. But when we do look down at our keyboards, we might find ourselves struggling to understand the logic behind the layout: Why does the top row begin with the letters Q, W, E, R, T, Y?
Found on nearly every computer, laptop, and smartphone worldwide (at least in countries that use a Latin-script alphabet), this seemingly random configuration of keys has an interesting history — though perhaps not the one most people have been led to believe.
During the 19th century, inventors came up with various kinds of machines designed to type out letters. Most of these machines, however, were large and cumbersome, often resembling pianos in size and shape. In some cases, they proved highly valuable to people with visual impairments, but for general use they were inefficient, being much slower than simple handwriting.
Enter Christopher Latham Sholes, an American inventor who, in 1866, was working alongside Carlos Glidden on developing a machine for numbering book pages. Sholes was inspired to build a machine that could print words as well as numbers, and he and Glidden soon received a patent for their somewhat ungainly prototype. The contraption had a row of alphabetized keys that, when struck, swung little hammers with corresponding letters embossed in their heads. The keys, in turn, struck an inked ribbon to apply the printed letters to a sheet of paper. It was far from the perfect solution, however, so Sholes persevered.
By 1872, Sholes and his associates had produced the first-ever practical typewriter. Rather than an alphabetized row of keys, this new typewriter featured a four-row layout with what was then a QWE.TY keyboard (with a period where the R is today). In 1873, Sholes sold the manufacturing rights to the Remington Arms Company, which further developed the machine. It was marketed as the Remington Typewriter — complete with the slightly altered QWERTY key layout. It became the first commercially successful typewriter, and in so doing made the QWERTY keyboard the industry standard.
Here’s where things get a little foggy. An often-repeated explanation for the QWERTY keyboard layout is that it was designed by Sholes to slow typists down in order to prevent typewriters from jamming. If you’ve ever used an old-fashioned typewriter, you might have noticed how easy it is to strike letters in quick succession, or simultaneously, which can cause the type bars to become stuck together.
But would Sholes really have sabotaged his own keyboard layout to hamper the speed of operators? In reality, there is little to no hard evidence of any deliberate slowdown. And the patent that first mentions the QWERTY layout offers no explanation of why the keys were placed in such a manner. If it was indeed designed to cause fewer jams, this feature would likely have been a primary attribute of the patent. What’s more, at the time that Sholes patented the QWERTY layout, there were no “touch typists,” so no one was typing particularly quickly, let alone speed-typing.
So, why did QWERTY become standard? One convincing and particularly thorough answer comes from Kyoto University researchers Koichi Yasuoka and Motoko Yasuoka. In their 2010 paper “On the Prehistory of QWERTY,” the researchers conclude that the mechanics of the typewriter did not influence the keyboard design. Instead, the QWERTY system was a rather circuitous result of how the first typewriters were being used.
Among the most prominent adopters of early typewriters were telegraph operators, whose priority was to quickly transcribe messages. These operators found an alphabetical keyboard arrangement to be confusing and inefficient for translating Morse code. The Kyoto paper suggests the QWERTY keyboard evolved over several years as a direct result of the input provided by these telegraph operators. According to the researchers, “The keyboard arrangement was incidentally changed into QWERTY, first to receive telegraphs, then to thrash out a compromise between inventors and producers, and at last to evade old patents.”
In other words, QWERTY came about through a combination of factors. There was direct input from telegraph operators, as well as compromises made between inventors and producers — practical changes made by the manufacturers who actually had to mass produce the product. (The keyboard layout on Sholes’ original prototype, for example, was altered by the mechanics at Remington.) And then there were old patents to contend with. In 1886, a new company called Wyckoff, Seamans & Benedict (WS&B) released the Remington Standard Type-Writer No. 2. To avoid existing keyboard layout patents, the company slightly altered the design, placing M next to N and swapping around C and X. This became the QWERTY format keyboard we use today.
While the precise intentions behind Sholes’ QWERTY design are at least partially lost to history, there is undoubtedly a strong argument against the popular belief that it was designed to slow typists down.
What we know for sure is that the QWERTY layout is remarkable for its staying power. Despite the development of potentially more efficient keyboard layouts — most prominently the Dvorak layout, designed in the 1930s, which is supposed to be faster and more comfortable — QWERTY has remained the global standard. By the time alternative layouts were proposed, millions of people had already learned to type on QWERTY keyboards, and switching would have required massive retraining efforts for typists, not to mention a lot of grumbling from the general public.
So, for now at least, QWERTY is very much here to stay — at least on laptops and PCs. Researchers at the University of St Andrews in Scotland have developed a split-screen keyboard they claim can increase typing speeds for touchscreen users who type with their thumbs. Known as KALQ (the letters at the bottom right of that keyboard), its layout is a radical departure from the QWERTY system, which has been unchanged for about a century and a half. Will KALQ gain a foothold? Only time, and more typing, will tell.
Ancient Egypt was home to more than 100 pyramids, many of which still stand today. One of the oldest monumental pyramids in Egypt, the Step Pyramid of Djoser, was built sometime between 2667 BCE and 2648 BCE and began a period of pyramid construction lasting more than a thousand years. The most famous monuments are found at the Giza complex, home to the Great Pyramid, the Pyramid of Khafre, and the Pyramid of Menkaure, all built during the Fourth Dynasty around 2600 to 2500 BCE — the golden age of ancient Egypt.
The Egyptian pyramids stand as one of humanity’s most remarkable architectural achievements, and their incredible precision and massive scale have confounded researchers for centuries. Despite numerous theories and extensive archaeological research, the exact methods of their construction remain a subject of scholarly debate. How did ancient Egyptians erect pyramids using millions of massive blocks weighing as much as 2.5 tons each? And how, more specifically, did they move those blocks up the superstructure?
To this day, there is no known historical or archaeological evidence that resolves the question definitively. While popular speculation often veers into fantastical explanations — yes, including aliens — serious historians and archaeologists have given much thought as to how these monumental structures might have been erected using the technological capabilities of the time. Here are three of the most likely construction theories.
The first historical account of the construction of the pyramids came from the ancient Greek historian Herodotus in the fifth century BCE. In his Histories, he wrote that the Great Pyramid took 20 years to build and demanded the labor of 100,000 people. Herodotus also wrote that after laying the stones for the base, workers “raised the remaining stones to their places by means of machines formed of short wooden planks. The first machine raised them from the ground to the top of the first step. On this there was another machine, which received the stone upon its arrival and conveyed it to the second step,” and so on.
These “Herodotus Machines,” as they later became known, are speculated to have used a system of levers or ropes (or both) to lift blocks incrementally between levels of the pyramid. Egyptian priests told Herodotus about this system — but it’s important to note that this was a long time after the construction of the Great Pyramid, so neither the priests nor Herodotus were actual eyewitnesses to its construction. It is certainly feasible, however, that the machines he described may have been used, either by themselves or, more likely, in conjunction with other methods.
Archaeologists generally agree that a system of ramps must have been used, in some form or another, to drag the millions of blocks into their positions in the various pyramids. While there’s no remaining physical evidence of external ramps at the Great Pyramid, traces can be seen around some of the other Old Kingdom structures. One initial theory posited the use of a single, linear ramp built on one side of the pyramid, which would have been gradually raised as the pyramid progressed. But, considering that an 8% slope is about the maximum possible slope for moving such heavy blocks, a ramp such as this would have needed to be about a mile long to reach the top of the pyramid. That would mean constructing the ramp would have been as much of a task as building the pyramid itself, making the theory unlikely, especially for larger structures such as the Great Pyramid.
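The scale problem with a single linear ramp is easy to verify with rough arithmetic. Below is a minimal back-of-the-envelope sketch in Python (purely illustrative; the article contains no code), assuming the Great Pyramid’s original height of roughly 481 feet — a commonly cited figure not given in the text above — together with the 8% maximum slope mentioned here.

```python
# Rough check of the single-ramp theory: how long would a straight ramp
# need to be at an 8% grade to reach the top of the Great Pyramid?
# Assumes an original height of ~481 feet (a commonly cited figure).
pyramid_height_ft = 481
max_slope = 0.08  # ~8% grade: about the steepest practical for hauling heavy blocks

horizontal_run_ft = pyramid_height_ft / max_slope                      # run needed at that grade
ramp_length_ft = (pyramid_height_ft**2 + horizontal_run_ft**2) ** 0.5  # actual sloped length

print(f"Horizontal run needed: {horizontal_run_ft:,.0f} ft")
print(f"Ramp length: {ramp_length_ft:,.0f} ft (~{ramp_length_ft / 5280:.2f} miles)")
```

At those numbers the ramp works out to a little over 6,000 feet — just beyond a mile — which is why a single straight ramp is considered impractical for the largest pyramids.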
There are a few other ramp theories, however. One proposes that a series of switchback, or staircase, ramps could have been built, zigzagging up just one face of the pyramid. Traces of these have been found at the Sinki, Meidum, Giza, Abu Ghurob, and Lisht pyramids. Other archaeologists have speculated that a spiraling ramp could have been used, which would have wrapped around the entire pyramid. This is a popular theory, although it does have its problems. As with the straight or linear ramp, this ramp would potentially have been a mile long if it was used to build the Great Pyramid, making it in itself a massive construction project. It’s also not clear how workers could have maneuvered 2.5-ton blocks around the corners of the ramp. Archaeologist Mark Lehner did make a practical demonstration of how this could be done, but it was on a far, far smaller pyramid.
In 2007, the French architect Jean-Pierre Houdin offered a new theory to solve the mystery of how Egypt’s Great Pyramid was built. Using archaeological research combined with 3D computer modeling, he argued that the Egyptians used an internal — rather than external — system of ramps to build the pyramid. Workers, he theorized, used an outer ramp to build the first 141 feet, then constructed a spiraling inner ramp to carry stones to the apex of the 482-foot-tall pyramid. Using this technique, Houdin argued, the pyramid could have been built by around 4,000 people instead of 100,000, as other theories suggest.
The internal ramp theory was met with much intrigue, with some archaeologists skeptical and others arguing that it’s as valid as any other concept. Interestingly, Houdin’s proposal would explain why no evidence of ramps has been found at the Great Pyramid: They could be still there, hidden inside the structure. He and his team continue to work on the hypothesis, in search of definitive evidence of internal ramp structures with the help of 3D scanning technology.
Whether concrete proof will ever be found to support one particular theory is yet to be seen. It’s entirely possible, of course, that the Egyptians utilized various construction techniques when building the pyramids, combining levers and both external and internal ramps — as well as other possible methods — to overcome whatever challenges came their way.
Somewhere in the vicinity of Pisa, Italy, around 1286, an unknown craftsman fastened two glass lenses to a frame likely made of wood or bone to create the first eyeglasses.
With approximately two out of three adults in the United States today requiring some form of visual aid, it’s safe to say that invention has been well received. But even though 1286 is well before any of us first discovered the splendor of improved eyesight, it’s relatively recent in the larger picture of human existence. So how did people with subpar vision get by before there was a convenient LensCrafters to pop into?
There’s not much historical evidence explaining how our prehistoric ancestors fared in the absence of visual aids, so we’re left to use some combination of deduction and common sense to determine how, say, a sight-impaired individual would keep up with the pack in a group of hunter-gatherers.
A person with imperfect vision could still be useful to a group simply because sharp eyesight to read signs, documents, and the like wasn’t necessary in prehistoric times. As civilization progressed, those with visual impairments could even find their condition produced certain advantages. A myopic (nearsighted) person, for example, could find themselves steered toward a craftsman role for their ability to focus on detail.
That said, humankind used visual aids for many centuries before the first eyeglasses appeared in the Middle Ages. Here are a few of the tools that helped those dealing with hyperopia (farsightedness) and other sight-related challenges.
Archaeological digs in the eastern Mediterranean area have uncovered the existence of plano-convex lenses (flat on one side and rounded on the other) made of glass and rock crystal that date back to the Bronze Age. The most well known example is the Nimrud lens, which was found in the remains of an Assyrian palace in modern-day Iraq. While it’s unknown what these lenses were used for, some of them magnify objects between seven and nine times, rendering them useful for work on items in close quarters.
In his book Renaissance Vision From Spectacles to Telescopes, Vincent Ilardi suggests that the presence of holes or “resting points” on some of these lenses indicates they may have been propped up in a way that allowed artisans to use their hands. Additionally, he offers the discovery of a 5,300-year-old Egyptian ivory knife handle with carved microscopic figures as evidence that ancient Egyptians had a means for providing vision enhancements.
These weren’t the only civilizations to discover uses for lenses. A 2.3-gram convex crystal lens was found in the tomb of a son of Chinese Emperor Liu Xiu, who lived in the first century CE. Its creation was fostered by the optical studies published centuries earlier by Chinese scholars, including the philosopher Mozi and King Liú Ān.
The ancient Romans seemed to believe that emeralds and other green gemstones could be used as a means for soothing the eyes and possibly providing visual aid. One famous example appears in Pliny the Elder's Natural History, where the author notes how Emperor Nero watched a gladiatorial event with the assistance of a smaragdus — an emerald or similar gem. However, historians largely believe that the stone may not have magnified the spectacle so much as it helped to block the glare of the sun.
In his first volume of Naturales Quaestiones (Natural Questions), the Roman philosopher Seneca described the magnifying effect of water in glass: “Letters, however small and dim, are comparatively large and distinct when seen through a glass globe filled with water.” His contemporary Pliny the Elder also noted the lenslike qualities that resulted from combining these elements, although his observation was related to the potential for flammability when focusing the sun's rays through a glass water vessel. While the result may have been satisfactory, Seneca's lack of follow-up thoughts suggests that this particular method of magnification likely wasn't widely used.
In the same sentence describing the magnification qualities of the water vessels, Seneca wrote, "I have already said there are mirrors which increase every object they reflect." Indeed, concave metal, glass, or crystal mirrors could be used to magnify smaller texts for those with vision problems. As noted by Ilardi in Renaissance Vision From Spectacles to Telescopes, French author Jean de Meun wrote of the efficacy of this tool in the late-13th-century poem “Le Roman de la Rose,” although there aren’t many other documented uses of mirrors for this purpose.
A major development in the area of visual tools came with the invention of reading stones. Often credited to ninth-century Andalusian scholar Abbas ibn Firnas, the concept of curved glass surfaces being used to magnify print was discussed at length in Arab mathematician Ibn al-Haytham's 1021 Book of Optics, which later received a wider audience with its translation to Latin.
Typically made from quartz, rock crystal, and especially beryl, reading stones were fashioned in a plano-convex shape, with the flat side against the page of a book and the rounded top providing a clear view of the lettering below. Initially used to assist the elderly with faltering vision, the stones became popular among younger readers as well, especially as beryl was said to possess magic and healing powers.
Among the surviving examples of such reading stones are the 11th- to 12th-century Visby lenses discovered in Gotland, Sweden, in 1999. Along with providing excellent magnification of tiny text, many of these quartz lenses are mounted in silver, suggesting a decorative purpose as well.
It's unknown if the Visby lenses were the work of a local professional or somehow made their way from Muslim regions where other reading stones first appeared. Regardless, the quality of the images generated by these artifacts, and the craftsmanship that went into their creation, underscores how people were hardly left grasping in the dark for assistance in the days before eyeglasses became commonplace.
World War II was an unprecedented time for advancements in aviation technology, and fighter aircraft played a crucial role in the conflict’s outcome. Fighter planes — the so-called “knights of the sky”— were agile, powerful aircraft designed primarily for air-to-air combat, whether in dramatic dogfights against enemy fighters or while intercepting enemy bombers.
The demands of the war pushed fighter designs to new heights, resulting in planes that were faster, more maneuverable, and more lethal than ever before. And with air superiority often proving pivotal on any given front, from the Battle of Britain to the Battle of Kursk, these machines and their brave pilots helped shape the course of history.
Here we look at five World War II fighter planes — from Britain, the U.S., Germany, Japan, and Russia — that left an indelible mark on aviation history.
Supermarine Spitfire
Both the Supermarine Spitfire and the Hawker Hurricane played crucial roles during the Battle of Britain, defending British airspace against wave after wave of German bombers and fighters. The sturdier Hurricane was often tasked with intercepting enemy bombers and engaging in ground attack missions, while the Spitfire, with its superior speed and agility, had the edge when engaging enemy fighters.
Both planes were vital, but the elegant Spitfire is regarded by many as the most iconic fighter aircraft of all time. The Spitfire evolved as the war progressed, from the early Mk I to, finally, the Mk 24. More powerful engines, improved armaments, and enhanced aerodynamics allowed the plane to remain competitive against newer Axis designs. Not only was it an engineering marvel, but the Spitfire also became an enduring symbol of British resistance and ingenuity.
The Messerschmitt Bf 109 was the primary fighter aircraft of the German Luftwaffe throughout World War II. Designed in the mid-1930s, it was continually developed during the war to increase performance, stability, and firepower. For a time, it largely outmatched its enemies, including during the invasions of Poland and France. It met its first true rivals during the Battle of Britain, when it came up against the Spitfire and the Hurricane.
The Bf 109 was faster in a dive than both the Spitfire and the Hurricane and, in most circumstances, could also outclimb them. But the British fighters, in the hands of a skilled pilot, could outturn the Bf 109, and they also had greater range — a factor that proved crucial during the Battle of Britain. In total, some 34,000 Bf 109s were built, making it the most-produced fighter aircraft in history.
The American P-51 Mustang is widely considered the finest all-around piston-engined fighter of World War II. Originally designed for the British, early models of the Mustang were not good at high altitude. That changed completely with the installation of the British Rolls-Royce Merlin engine, which transformed the aircraft into a world-class fighter. In December 1943, the first P-51B and P-51C Mustangs entered combat in Europe, providing vital long-range, high-altitude escort for the U.S. bombing campaign against Germany. This drastically reduced the previously crippling losses suffered by U.S. bombers.
In 1937, the engineering team at Mitsubishi submitted a design for a lightweight, all-metal, low-wing monoplane that would match any other fighter then in production. When the fighter went through trials two years later, it exceeded all expectations, demonstrating exceptional maneuverability, range, rate of climb, and acceleration. The Mitsubishi A6M Zero soon became the primary fighter aircraft of the Japanese during World War II.
When it first appeared in combat, the Zero could outmaneuver any plane it encountered. It wasn’t until 1943 that the Allies fielded fighters capable of defeating the Zero in aerial combat. These included new and much faster U.S. fighters such as the F6F Hellcat and F4U Corsair, which began to wrest air superiority back from the lightly armored Zero. Toward the end of the war, many Zeros were converted into kamikaze aircraft.
The Yakovlev Yak-3 was a single-engine, single-seat Soviet fighter that entered service relatively late in the war, with the first iteration not flying until March 1944. Despite its late arrival, the Yak-3 quickly gained a reputation as one of the war’s finest dogfighters. One of the smallest and lightest fighter aircraft of the entire conflict, it had an impressive power-to-weight ratio, was highly agile, and was easy to maintain, making it popular among pilots and ground crews alike.
In combat, the Yak-3 proved more than a match for the best German fighters, including the Bf 109 and Focke-Wulf Fw 190. During one large dogfight in June 1944, 18 Yak-3s clashed with about 24 German aircraft. The Yaks downed 15 German aircraft while losing only one of their own. Despite seeing less action than some other World War II fighters due to its late introduction, the Yak-3 stands as one of the war’s most effective and influential designs.
When we think about old, silent films, we’re likely to picture the choppy, fast-paced movements of Charlie Chaplin or Buster Keaton, or perhaps the newsreel footage of Babe Ruth hitting a home run and seemingly zipping around the bases at 40 miles per hour. As talented as these individuals were, they weren’t capable of moving at speeds far beyond the range of normal people. So why do they appear that way on film?
Film Only Provides the Illusion of Movement
To answer this question, we need to go back to some of the basics of filmmaking. Throughout the history of cinema, movie cameras have never been able to faithfully capture real-life movement. Rather, they record a series of still images in rapid succession, and replay them at speeds fast enough to trick the human mind into perceiving movement.
The number of individual images (or frames) displayed in one second of film is known as the frame rate, measured in frames per second (fps). Thomas Edison, who patented (but didn’t invent) the movie camera, noted that film needed to be shown at a speed of at least 46 fps to provide the illusion of movement. But in the early days of cinema, this proved too pricey to be practical, and some filmmakers found that the visual illusion could be sustained — and expensive celluloid film stock conserved — with frame rates closer to 16 fps, or even as low as 12 fps. While this speed was considered fast enough for a movie camera of that era, it is noticeably slower than the 24 fps rate that later became commonplace for both filming and projecting. And when old footage filmed at 16 fps or lower is played back at modern speeds, the motion on screen appears noticeably faster.
In the 1920s, the major Hollywood studios largely used the Bell and Howell 2709 movie camera. This model featured a hand crank that churned through half a foot of film with each revolution. As such, turning the handle twice per second equated to a shooting speed of 16 fps, which became the general standard for the era.
That said, many silent film directors deliberately ordered sequences shot at slower or faster speeds — “undercranked” or “overcranked” — for effect. A scene filmed at a slower speed (e.g., 16 fps) and then projected at a faster speed (e.g., 24 fps) will appear sped up, while slow motion results from filming at an even faster speed and then projecting at the standard rate.
Studios often provided theaters with specific instructions for projection speeds that were higher than the filmed speed to make the motion look faster than normal, generally for comedic purposes or action sequences. And yet many of these theaters projected the films at even higher speeds in order to cram more showings and boost the number of paying customers in a single day. The end result of a film shot at, say, somewhere between 14 and 18 fps and projected at 22 fps or faster is the sped-up motion typical of movies from this era.
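To make the arithmetic concrete, here is a minimal sketch in Python (purely illustrative; the article itself contains no code) of the apparent speed-up when footage shot at one frame rate is projected at another: the factor is simply the projection rate divided by the capture rate.

```python
# The apparent speed-up of silent footage is just the ratio of playback
# frame rate to capture frame rate.
def apparent_speedup(capture_fps: float, projection_fps: float) -> float:
    """How many times faster motion appears when footage shot at
    capture_fps is played back at projection_fps."""
    return projection_fps / capture_fps

# Footage cranked at 16 fps but replayed at the later 24 fps standard:
print(f"16 fps shown at 24 fps: {apparent_speedup(16, 24):.2f}x real speed")

# The range described above: shot at 14-18 fps, projected at 22 fps or faster.
for shot_fps in (14, 16, 18):
    print(f"{shot_fps} fps shown at 22 fps: {apparent_speedup(shot_fps, 22):.2f}x real speed")
```

Running the sketch gives factors of roughly 1.2x to 1.6x for the era’s typical combinations, which is the jerky, sped-up look we associate with silent films today.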
Though Edison and other innovators had previously attempted to match sound to moving pictures with limited success, the successful rollout of the Vitaphone with the Warner Bros. features Don Juan (1926) and The Jazz Singer (1927) showed that “talkies” were here to stay.
The Vitaphone solved the problem of sound and picture synchronization with a mechanical turntable that simultaneously powered a phonograph record alongside a film projector. A 16-inch record played at a speed of 33.3 rotations per minute (rpm) delivered 11 minutes of sound, matching the 11 minutes of visuals produced by 1,000 feet of film played at 24 fps. As sound ushered in a new era of cinema, Hollywood studios sought to standardize film speeds to preserve the uniform quality of their expensive productions, and 24 fps became the standard frame rate across the industry (notably slower than Edison’s original recommendation of 46 fps).
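The synchronization math is simple to check. The sketch below assumes the standard figure of 16 frames per linear foot of 35mm film — a detail not stated in the article — to confirm that a 1,000-foot reel at 24 fps runs about 11 minutes, the same as one side of a 16-inch Vitaphone disc.

```python
# Check of the Vitaphone numbers above, assuming the standard 16 frames
# per linear foot of 35mm film (an industry figure not stated in the article).
FRAMES_PER_FOOT = 16
reel_length_ft = 1_000
projection_fps = 24

frames_per_reel = reel_length_ft * FRAMES_PER_FOOT        # 16,000 frames per reel
picture_minutes = frames_per_reel / projection_fps / 60   # minutes of picture per reel

print(f"A {reel_length_ft:,}-foot reel at {projection_fps} fps runs {picture_minutes:.1f} minutes,")
print("matching the ~11 minutes of sound on a 16-inch disc at 33 1/3 rpm.")
```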
The reason for this exact number varies: Some sources note the depreciation of audio fidelity at lower speeds, or the fact that 24 is divisible by several numbers, enabling cinematographers to easily mark off smaller increments of film. Yet it’s likely that 24 fps was accepted throughout the industry largely because of Vitaphone's initial success at that number.
Naturally, the equipment that once was considered cutting edge in Hollywood has long since been replaced by new and better creations. The all-important Vitaphone quickly became obsolete with the emergence of sound-on-film technology in the 1930s, and the Bell and Howell 2709 camera fell out of production by 1958. Even the longtime staple of celluloid film was largely pushed aside by major studios with the rise of digital movies in the 2000s.
Yet the old-fashioned 24 fps film rate remains standard across the industry, despite the ability of modern cameras to shoot at far greater speeds. One major reason for this is simply that audiences have become accustomed to the distinct look of movies shot at that speed (as evidenced by the critical uproar over the visuals from the 48 fps version of 2012’s The Hobbit).
This means we’ll likely continue watching movies filmed at this standard for the foreseeable future. And in the meantime, we can also enjoy the old clips of Charlie, Buster, and other luminaries of the silent era, the jerky movements and clumsy intertitles serving as a reminder of how far this beloved form of storytelling has come over the past century.