Author Tony Dunnell
January 15, 2025
The QWERTY keyboard layout is so common that most of us never stop to question its unusual arrangement of letters. But when we do look down at our keyboards, we might find ourselves struggling to understand the logic behind the layout: Why does the top row begin with the letters Q, W, E, R, T, Y?
Found on nearly every computer, laptop, and smartphone worldwide (at least in countries that use a Latin-script alphabet), this seemingly random configuration of keys has an interesting history — though perhaps not the one most people have been led to believe.
During the 19th century, inventors came up with various kinds of machines designed to type out letters. Most of these machines, however, were large and cumbersome, often resembling pianos in size and shape. In some cases, they proved highly valuable to people with visual impairments, but for general use they were inefficient, being much slower than simple handwriting.
Enter Christopher Latham Sholes, an American inventor who, in 1866, was working alongside Carlos Glidden on developing a machine for numbering book pages. Sholes was inspired to build a machine that could print words as well as numbers, and he and Glidden soon received a patent for their somewhat ungainly prototype. The contraption had a row of alphabetized keys that, when struck, swung little hammers with corresponding letters embossed on their heads. The hammers, in turn, struck an inked ribbon to apply the printed letters to a sheet of paper. It was far from a perfect solution, however, so Sholes persevered.
A few years later, in 1872, Sholes and his associates produced the first practical typewriter. Rather than an alphabetized row of keys, this new typewriter featured a four-row layout with what was then a QWE.TY keyboard (with a period where the R is today). In 1873, Sholes sold the manufacturing rights to E. Remington and Sons, which further developed the machine. It was marketed as the Remington Typewriter — complete with the slightly altered QWERTY key layout. It became the first commercially successful typewriter, and in so doing made the QWERTY keyboard the industry standard.
Here’s where things get a little foggy. An often-repeated explanation for the QWERTY keyboard layout is that it was designed by Sholes to slow typists down in order to prevent typewriters from jamming. If you’ve ever used an old-fashioned typewriter, you might have noticed how easy it is to strike letters in quick succession, or simultaneously, which can cause the type bars to become stuck together.
But would Sholes really have sabotaged his own keyboard layout to hamper the speed of operators? In reality, there is little to no hard evidence of any deliberate slowdown. And the patent that first mentions the QWERTY layout contains no direct mention of why the keys were placed in such a manner. If it was indeed designed to cause fewer jams, this feature would likely have been a primary attribute of the patent. What’s more, at the time that Sholes patented the QWERTY layout, there were no “touch typists,” so no one was typing particularly quickly, let alone speed-typing.
So, why did QWERTY become standard? One convincing and particularly thorough answer comes from Kyoto University researchers Koichi Yasuoka and Motoko Yasuoka. In their 2010 paper “On the Prehistory of QWERTY,” the researchers conclude that the mechanics of the typewriter did not influence the keyboard design. Instead, the QWERTY system was a rather circuitous result of how the first typewriters were being used.
Among the most prominent early adopters of the typewriter were telegraph operators, whose priority was to quickly transcribe messages. These operators found an alphabetical keyboard arrangement confusing and inefficient for translating Morse code. The Kyoto paper suggests the QWERTY keyboard evolved over several years as a direct result of the input provided by these telegraph operators. According to the researchers, “The keyboard arrangement was incidentally changed into QWERTY, first to receive telegraphs, then to thrash out a compromise between inventors and producers, and at last to evade old patents.”
In other words, QWERTY came about through a combination of factors. There was direct input from telegraph operators, as well as compromises made between inventors and producers — practical changes made by the manufacturers who actually had to mass-produce the product. (The keyboard layout on Sholes’ original prototype, for example, was altered by the mechanics at Remington.) And then there were old patents to contend with. In 1886, a new company called Wyckoff, Seamans & Benedict (WS&B) released the Remington Standard Type-Writer No. 2. To avoid existing keyboard layout patents, the company slightly altered the design, placing M next to N and swapping around C and X. This became the QWERTY format keyboard we use today.
While the precise intentions behind Sholes’ QWERTY design are at least partially lost to history, there is undoubtedly a strong argument against the popular belief that it was designed to slow typists down.
What we know for sure is that the QWERTY layout is remarkable for its staying power. Despite the development of potentially more efficient keyboard layouts — most prominently the Dvorak layout, designed in the 1930s, which is supposed to be faster and more comfortable — QWERTY has remained the global standard. By the time alternative layouts were proposed, millions of people had already learned to type on QWERTY keyboards, and switching would have required massive retraining efforts for typists, not to mention a lot of grumbling from the general public.
So, for now at least, QWERTY is very much here to stay — at least on laptops and PCs. Researchers at the University of St Andrews in Scotland have developed a split-screen keyboard they claim can increase typing speeds for touchscreen users who type with their thumbs. Known as KALQ (the letters at the bottom right of that keyboard), its layout is a radical departure from the QWERTY system, which has been unchanged for about a century and a half. Will KALQ gain a foothold? Only time, and more typing, will tell.
Somewhere in the vicinity of Pisa, Italy, around 1286, an unknown craftsman fastened two glass lenses to a frame likely made of wood or bone to create the first eyeglasses.
With approximately two out of three adults in the United States today requiring some form of visual aid, it’s safe to say that invention has been well received. But even though 1286 is well before any of us first discovered the splendor of improved eyesight, it’s relatively recent in the larger picture of human existence. So how did people with subpar vision get by before there was a convenient LensCrafters to pop into?
There’s not much historical evidence explaining how our prehistoric ancestors fared in the absence of visual aids, so we’re left to use some combination of deduction and common sense to determine how, say, a sight-impaired individual would keep up with the pack in a group of hunter-gatherers.
A person with imperfect vision could still be useful to a group, simply because prehistoric life didn’t require the sharp eyesight needed for reading signs, documents, and the like. As civilization progressed, those with visual impairments could even find their condition produced certain advantages. A myopic (nearsighted) person, for example, could find themselves steered toward a craftsman role for their ability to focus on detail.
That said, humankind used visual aids for many centuries before the first eyeglasses appeared in the Middle Ages. Here are a few of the tools that helped those dealing with hyperopia (farsightedness) and other sight-related challenges.
Archaeological digs in the eastern Mediterranean have uncovered plano-convex lenses (flat on one side and rounded on the other) made of glass and rock crystal that date back to the Bronze Age. The best-known example is the Nimrud lens, which was found in the remains of an Assyrian palace in modern-day Iraq. While it’s unknown what these lenses were used for, some of them magnify objects between seven and nine times, rendering them useful for close work on small items.
In his book Renaissance Vision From Spectacles to Telescopes, Vincent Ilardi suggests that the presence of holes or “resting points” on some of these lenses indicates they may have been propped up in a way that allowed artisans to use their hands. Additionally, he offers the discovery of a 5,300-year-old Egyptian ivory knife handle with carved microscopic figures as evidence that the ancient Egyptians had some means of vision enhancement.
These weren’t the only civilizations to discover uses for lenses. A 2.3-gram convex crystal lens was found in the tomb of a son of Chinese Emperor Liu Xiu, who lived in the first century CE. Its creation was fostered by the optical studies published centuries earlier by Chinese scholars, including the philosopher Mozi and King Liú Ān.
The ancient Romans seemed to believe that emeralds and other green gemstones could be used as a means for soothing the eyes and possibly providing visual aid. One famous example appears in Pliny the Elder's Natural History, where the author notes how Emperor Nero watched a gladiatorial event with the assistance of a smaragdus — an emerald or similar gem. However, historians largely believe that the stone may not have magnified the spectacle so much as it helped to block the glare of the sun.
In his first volume of Naturales Quaestiones (Natural Questions), the Roman philosopher Seneca described the magnifying effect of water in glass: “Letters, however small and dim, are comparatively large and distinct when seen through a glass globe filled with water.” His contemporary Pliny the Elder also noted the lenslike qualities that resulted from combining these elements, although his observation related to the potential for flammability when focusing the sun's rays through a glass water vessel. While the result may have been satisfactory, Seneca's lack of follow-up thoughts suggests that this particular method of magnification likely wasn't widely used.
In the same sentence describing the magnification qualities of the water vessels, Seneca wrote, "I have already said there are mirrors which increase every object they reflect." Indeed, concave metal, glass, or crystal mirrors could be used to magnify smaller texts for those with vision problems. As noted by Ilardi in Renaissance Vision From Spectacles to Telescopes, French author Jean de Meun wrote of the efficacy of this tool in the late-13th-century poem “Le Roman de la Rose,” although there aren’t many other documented uses of mirrors for this purpose.
A major development in the area of visual tools came with the invention of reading stones. Often credited to ninth-century Andalusian scholar Abbas ibn Firnas, the concept of using curved glass surfaces to magnify print was discussed at length in Arab mathematician Ibn al-Haytham's 1021 Book of Optics, which later reached a wider audience through its translation into Latin.
Typically made from quartz, rock crystal, and especially beryl, reading stones were fashioned in a plano-convex shape, with the flat side against the page of a book and the rounded top providing a clear view of the lettering below. Initially used to assist the elderly with faltering vision, the stones became popular among younger readers as well, especially as beryl was said to possess magic and healing powers.
Among the surviving examples of such reading stones are the 11th- to 12th-century Visby lenses, discovered in Gotland, Sweden, in 1999. Along with providing excellent magnification of tiny text, many of these quartz lenses are mounted in silver, suggesting a decorative purpose as well.
It's unknown if the Visby lenses were the work of a local professional or somehow made their way from Muslim regions where other reading stones first appeared. Regardless, the quality of the images generated by these artifacts, and the craftsmanship that went into their creation, underscores how people were hardly left grasping in the dark for assistance in the days before eyeglasses became commonplace.
Author Kristina Wright
June 13, 2024
The typical home contains dozens of items made of glass, from canning jars in the pantry to decorative knickknacks on the mantel to the windows and doors that insulate our living spaces and filter light. Despite the surge of plastic products following World War II, this delicate translucent material has maintained its essential place in our homes and daily routines.
Historically, the glass items that we now take for granted were highly coveted luxuries, accessible for a time only to the rich and royal. The evolution of glassmaking has transformed the material from an exclusive work of art to a ubiquitous aspect of modern living. But when — and how — did humans first learn to make this valuable, versatile material?
To understand how glass is made, we must first understand what glass is and how it differs from other materials. Rather than a single, uniform substance, glass is an unusual state of matter that looks like a solid but acts like a liquid. Glass forms when a molten substance is cooled so rapidly that its atoms are unable to organize into the latticelike crystalline structure characteristic of a solid. At the same time, these atoms lack the ability to move randomly, as they would in a liquid. Because of these unusual properties, glass falls into a category of its own, neither fully solid nor fully liquid, referred to as a rigid liquid or an amorphous solid.
While we typically think of glass as a human invention, it also occurs naturally. Lightning strikes, meteorite impacts, volcanic eruptions, and even some sea creatures can produce natural glasses that are similar in composition to human-made glass. Natural glass is formed when silica-rich sand or rocks are heated to high temperatures and rapidly cooled. Examples include obsidian, created by the rapid cooling of volcanic lava; tektites and impactites, formed by the impact of meteorites; fulgurites, created by lightning striking sand; and even the siliceous (silica) skeletons produced by certain types of sea sponges and algae.
Ancient glass was made from three primary ingredients: sand (silicon dioxide or silica), an alkali oxide (typically soda ash or natron), and lime. When combined and heated to 2,400-2,700 degrees Fahrenheit, the ingredients fused to form glass.
The earliest evidence of glassmaking includes objects such as beads, pendants, and inlays that were cast in open molds, but glass may have initially occurred as an accidental byproduct in the workshops of Bronze Age metalworkers and ceramicists. While archaeologists have found glass artifacts produced by the Egyptians and Phoenicians dating back to the second millennium BCE, current theory suggests that the first human-made glass dates back even further, to Mesopotamian craftsmen during the third millennium BCE, some 4,000 to 5,000 years ago.
Researchers have long debated the birthplace of glassmaking, in part because glass products, as well as the raw materials used for the glassmaking process, such as blocks of colored glass called ingots, were frequently exchanged along trade routes. But advances in archaeology and chemistry have helped researchers better pinpoint the origins of glassmaking to ancient Mesopotamia (modern-day Iraq).
Situated between the Tigris and Euphrates rivers, Mesopotamia was well suited for trading its glass materials and glassmaking knowledge. By the second millennium BCE, artisans had developed sophisticated techniques to shape glass and create vessels. One such method, called core forming, involved creating a hollow vessel by covering a mud core with glass and then removing the hardened mud. Core-formed objects have been found in Mesopotamia, as well as Egypt, where glassmaking also flourished.
Glass trading between Egypt and Assyria was mentioned in the Amarna Letters, a set of cuneiform clay tablets dating to the 14th century BCE that were excavated in the ancient desert city of Amarna, in modern-day Egypt. Not surprisingly, chemical analysis of glass excavated in Amarna found the glass was of Mesopotamian origin. Similarly, the presence of cobalt glass beads in Scandinavian Bronze Age tombs has provided evidence of a complex trading system linking Mesopotamia and Egypt with the Nordic Bronze Age cultures. These findings highlight the extensive trade networks of the ancient world, and demonstrate the far-reaching influence of Mesopotamian glassmaking, even as other cultures began to adapt and develop their own glassmaking methods.
How Did People Wake Up on Time Before Alarm Clocks?
Timekeeping technology has come a long way from ancient Egyptian sundials, and with it, so has the ability to wake up at whatever precise time might be needed for work, school, or appointments — even if we often ignore a ringing alarm in favor of snoozing for just 10 more minutes. While the demands of modern society are certainly more rigid than they once were, people have long had various reasons to keep a tight schedule, and at times they had to rely on more than just the crow of the rooster or the chirping of birds at dawn to make sure they were up to meet the day.
The most basic way people woke up before the invention of alarms was strictly biological in nature. Long before the advent of mechanical clocks or artificial light, people lived in harmony with the natural rhythms of day and night. Two biological processes dictate this natural sleep-wake cycle: homeostasis and circadian rhythms. Homeostasis governs our body’s drive for sleep, which increases the longer we’re awake and dissipates once we fall asleep, eventually signaling when it’s time to wake up. Circadian rhythms, meanwhile, control alertness and drowsiness throughout the day, influenced by light (more alert) and darkness (sleepy time).
This isn’t the only internal body process that served as a primal wake-up call before alarm clocks: Some people relied on their bladders. In his 1984 book about the life of 19th-century Lakota warrior White Bull, biographer Stanley Vestal noted, “Indian warriors could determine in advance their hour of rising by regulating the amount of water drunk before going to bed.” Of course, these bodily functions still exist as natural wake-up calls, but circadian rhythms often get disrupted by modern light sources such as screens, and given the strict nature of our 21st-century work schedules, one’s bladder might not be the most reliable alarm.
Another historical method of awakening also involves water, though not drinking it in large quantities. In the fourth century BCE, the ancient Greek philosopher Plato notably refined the timekeeping tool known as the “clepsydra,” or “water thief,” into a primitive alarm clock. Throughout the night, one basin of water would empty into another. When the water rose to the desired level, it was then siphoned into another container, emitting a whistling sound and ensuring neither Plato nor the students of his famed academy slept through his lectures. Another creative yet complicated hydraulic device was the “Water-Driven Spherical Birds’-Eye-View Map of the Heavens,” one of the first known mechanical clocks, created in 725 CE by Chinese mathematician and inventor Yi Xing. Water flowed through its system of wheels, hooks, pins, shafts, locks, and rods; the wheel made a full revolution every 24 hours, and different sounds marked specific times — a bell chimed every hour, and a drumlike beat sounded every 15 minutes.
Clocks using weights or pendulums, including devices with built-in alarm functionality, became much more accurate throughout the 17th and 18th centuries, but they remained largely out of reach for most people. Then, during Britain’s Industrial Revolution, the shift from agriculture to factory work created a demand for reliable timekeeping for more people than ever. Urbanization meant fewer people waking to the rooster’s crow, so instead, they depended on factory whistles, or, for those who could afford it, on “knocker-uppers” to ensure they woke up on time.
Knocker-uppers were people who carried sticks, poles, or peashooters to tap on windows and wake the sleeping workers who had hired them as personal alarms. The knocker-uppers themselves were often night owls, waking for the day at 4 p.m. and retiring to bed after getting the laborers off to work. Eventually, electric alarm clocks became readily accessible to most people, but even as the gadgets gained popularity throughout the 1930s and ’40s, knocker-uppers were still found in some pockets of industrial England all the way up to the 1970s.
5 Common Items From Colonial America You’ve Never Heard Of
Author Kristina Wright
April 10, 2024
Life in colonial America was undeniably challenging, and early settlers had to be resilient and resourceful in order to survive. Many of the items that colonists used in day-to-day life were either brought from Europe or based on tools they had used in their old lives. While some remnants of the colonial era, such as spinning wheels and quill pens, remain a part of our collective memory, many lesser-known items have faded into obscurity or been replaced by modern innovations. Here are five once-common objects you may not have heard of before, each of which served an important role in sustaining family life and building communities in colonial America.
A simple, durable tablet used as a primer for children’s studies, the hornbook originated in England around 1450 and was a staple of early childhood education in colonial America. Hornbooks were crafted by affixing a single page of parchment or paper onto a paddle-shaped wooden board and covering it with a translucent protective sheet made from an animal’s horn. This sheet was made by soaking the horn in cold water until it separated into layers, then heating and pressing a layer into a thin, clear pane. A fundamental lesson was printed on the paper, such as the alphabet in lowercase and capital letters, simple vowel-consonant combinations, Roman numerals, and religious texts. Hornbooks remained popular well into the era of mass-printed books because they were both sturdy and functional.
Dating back to ancient Greece, the salt cellar (or salt-box) was a practical and decorative piece of tableware that colonists brought with them to the Americas. Traditionally, a salt cellar was not only useful for storing salt, but could also signify the hierarchy of those who were seated at the table. Those who were seated “above the salt,” typically closer to the host at the head of the table, were considered esteemed guests, while individuals of less prominence, including children, were seated “below the salt,” toward the middle or opposite end of the table from the host.
Sugar was considered a luxury in colonial America. The crop was generally harvested by enslaved people on sugar cane plantations in the Caribbean, then shipped to refineries where it was processed and shaped into cones that were wrapped in paper and distributed throughout the colonies. Sugar nippers were specialized steel tools that were used to cut chunks off a sugar “loaf,” which could weigh as much as 10 pounds. The smaller pieces were then crushed into granulated sugar using a mortar and pestle.
Shaped like a frying pan with a long handle and hung by the fireplace when not in use, a warming pan was typically made of wrought iron and fitted with a decorative brass or copper lid. Warming pans were popular throughout Europe by the 17th century, and were indispensable household items for settlers in the northern American colonies during the harsh winter months. Before retiring for the evening, a family would fill the warming pan with hot coals or embers, place it between the layers of bedding, and gently move it around to warm the sheets. The pans themselves were considered valuable family heirlooms that were handed down from generation to generation.
The flail is an ancient hand tool that colonial farmers used for threshing grain, such as wheat, barley, and oats. Commonly used until the widespread availability of mechanical threshers in the mid-19th century, the flail consisted of two wooden sticks of different lengths joined by a short, flexible thong made of rope, leather, or chain. The longer rod, called a handstaff or helve, provided leverage, while the shorter rod, called the beater, was used to strike the grain over and over, separating the edible part of the grain from the husks or chaff. Using a flail, one person could thresh around seven bushels of wheat, 15 bushels of barley, or 18 bushels of oats in one day.
Credit: Visual Studies Workshop/ Archive Photos via Getty Images
Author Bennett Kleinman
April 3, 2024
Love it?86
Since the fourth millennium BCE, when urban civilizations first appeared in ancient Mesopotamia, humans have strived to achieve proper dental hygiene. Yet the nylon-bristled toothbrush we use today didn’t come along until the 1930s. For the thousands of years in between, people relied on rudimentary tools that evolved with scientific knowledge and technological advancements over time. Some of the earliest toothbrush predecessors date as far back as 3500 BCE. Here’s a look at how people kept their teeth clean before the modern toothbrush.
Sometime around the year 3500 BCE, the ancient Babylonians (in modern-day Iraq) created a tool known as a “chew stick.” This simple, handheld piece of wood is considered the earliest known direct predecessor of the toothbrushes we use today. Chew sticks were wooden twigs cut to approximately 5 or 6 inches long. One end of the stick was then softened in boiling liquid to help separate the fibers, creating an almost brushlike effect. Individuals would chew on these sticks to freshen their mouths, as the frayed fibers would slide between the teeth and help loosen debris. Many early Arab cultures used a specific shrub called Salvadora persica (also known as the “toothbrush tree”) to create their chew sticks, which they called miswak. The shrub is particularly aromatic and was thought to have a stronger mouth-freshening effect than other plants.
Around this same time, civilizations in Mesopotamia, Egypt, and elsewhere in the ancient world also used early versions of a toothpick to keep their teeth clean. These were often made of thin pieces of wood, though in later years, wealthy individuals began crafting toothpicks from brass and silver for added opulence and durability. In ancient Greece, toothpicks were known as karphos, roughly meaning “blade of straw,” suggesting the Greeks may have used coarse fibers such as straw in addition to wood.
Before handheld teeth-cleaning devices became widespread, many ancient civilizations relied on a dental cream made of various ingredients, including the ashes of oxen hooves, myrrh, egg shells, and pumice. This cream was first developed in ancient Egypt sometime before 3000 BCE, and was mixed with water and then rubbed into the teeth to help remove debris. Around 1000 BCE, flavorful ingredients such as herbs and honey were added to the mix by the Persians. Another thousand years after that, the ancient Greeks and Romans improved the cream even further by adding abrasive elements such as crushed bone and oyster shell to really help get rid of stuck-on debris.
The world of dental hygiene was forever changed thanks to the ingenuity of ancient China. During the Tang dynasty (618 to 907 CE), coarse hair fibers from the back of a hog’s neck were attached to a handle made of bamboo or bone, creating an early handheld toothbrushing device. Centuries later in 1498, Emperor Hongzhi of the Ming dynasty formally patented this hog’s hair toothbrush. For the next several centuries, artisan toothbrush makers around the world drew inspiration from ancient China and continued to heavily rely on coarse boar’s hair. In fact, toothbrush bristles didn’t see another major change until the middle of the 20th century.
Boar’s hair toothbrushes didn’t really take off in Western Europe until the 17th century — before that, Europeans often relied on toothpicks, as well as rags rolled in salt or soot, as they believed the coarse substances helped remove debris from teeth. But around the mid-1600s, French dentists began pushing people to take better care of their teeth as they made scientific advances on the topic of dental hygiene. In 1649, an English politician named Ralph Verney, who was living in Paris at the time, was asked by a friend back in England to see if he could get his hands on some French-made “little brushes for making cleane of the teeth,” which were becoming more common throughout the region.
There was a bit of pushback in British society against using toothbrushes, however, as some people worried about the cost and hassle of replacing these devices as they wore down over time. Instead, some promoted cheaper alternatives that could be easily crafted at home — the 1741 British text The Compleat Housewife advised wrapping a cloth around your finger and using that to clean your teeth instead, a method that had also been used in ancient Greece and Rome. But these detractors were in the minority, and the production of handheld tools began to expand. By 1780, Englishman William Addis had realized how to improve upon the available toothbrushes of the time, which used locally sourced animal fibers. Addis began importing boar’s hair directly from Siberia and northern China, as the boars from those cold climates had coarser, more durable hair to withstand the elements, which was believed to be more effective at cleaning teeth. Addis founded his namesake company shortly thereafter — which is still in operation today — and began successfully producing toothbrushes on a much larger scale.
As brushing one’s teeth became increasingly popular in Britain, that fervor spread to the British colonies in the Americas. Toothbrushes even caught the eye of future Presidents George Washington and Thomas Jefferson, the latter of whom instructed a London-based colleague to bring him “½ doz. Tooth brushes, the hair neither too strong nor too weak” as well as “½ doz. do. with the strongest hair, such as hog’s bristle.”
It wasn’t until the 19th century, however, that toothbrushes really became popular among everyday Americans, who previously relied on coarse cloths, toothpicks, and other rudimentary methods that were common back in Europe. This was in part due to the rise of a middle class with disposable income to spend on dental care products. The Industrial Revolution also increased commercial production of toothbrushes, which made the product available on a wider scale. On November 7, 1857, dental school graduate H.N. Wadsworth became the first American to successfully patent a toothbrush design, and large-scale production of toothbrushes began throughout the country by the mid-1880s. One of the earliest known American-made toothbrushes was produced by the Florence Manufacturing Company of Massachusetts. Known as the “Pro-phy-lac-tic,” the brush looked quite similar in design to many modern toothbrushes, though coarse animal hair was still used for the bristles.
The First Nylon Toothbrush
On February 24, 1938, the U.S.-based DuPont chemical company changed the world of dental care with the debut of a new toothbrush featuring delicate nylon bristles instead of coarse hog’s hair. This was the first nylon toothbrush in the world, fitting since DuPont invented nylon to begin with. The synthetic bristles were just as effective at cleaning teeth, if not more so, and made using the tool more pleasurable. The first commercial nylon-bristled toothbrush was called Dr. West’s Miracle Tuft, and DuPont maintained exclusive rights to nylon bristles until the 1940s. Once that exclusivity expired, other manufacturers began incorporating the material into their own designs, and nylon bristles eventually became the standard.
Since the middle of the 20th century, Las Vegas has been known as the capital of the American id. Gambling has long been at the center of its appeal, as nicknames such as “Sin City” and “Lost Wages” suggest. “What happens in Vegas stays in Vegas” is the city’s well-known slogan, while others have remarked, “Las Vegas is where losers come to win, and winners come to lose.”
Rising up from the Nevada desert, the city’s built environment is so extravagant that it’s difficult to imagine a time when its spectacle did not exist, fully formed. Let’s go back and trace the origins of this uniquely American city.
Even though Las Vegas occupies a unique place in American culture, its metropolitan origin was sparked by the same thing that gave rise to many other U.S. cities: the development of the railroad. The area that includes present-day Nevada became a United States territory with the signing of the Treaty of Guadalupe Hidalgo in 1848, which ended the U.S. war with Mexico. Despite its location in the basin of the Mojave Desert, the site of what is now Las Vegas was a sort of oasis — a valley that included a water source in the form of artesian springs.
The water source was the selling point for railroad magnate and U.S. Senator William Clark. In 1902, he bought 2,000 acres of land and water rights in order to create a waypoint for the San Pedro, Los Angeles & Salt Lake Railroad he incorporated to connect those cities. The railroad line through Nevada began construction in 1904, and the following year, Clark auctioned off parcels of his land, which was located east of the railroad tracks.
Around the same time, civil engineer John T. McWilliams was attempting to build a township west of the railroad tracks. Though he was working with far less acreage than Clark — 80 acres to Clark’s 2,000 — the development provoked competition and intensified Clark’s efforts to build his own township. Clark offered refunds on the $16 train fare to town in order to attract buyers. Newspaper advertisements promised, “Get into line early. Buy now, double your money in 60 days,” though accounts differ on which of the two men commissioned the ad.
Ultimately, McWilliams couldn’t really compete. After all, Clark owned the water rights and far more land, and he had a major stake in the railroad. On September 5, 1905, a fire almost completely consumed McWilliams’ townsite, ensuring that the competition between the two was short-lived; development would be concentrated on Clark’s side, east of the railroad tracks. Clark formed the Las Vegas Land & Water Company with his partners, and vowed, “I will leave no stone unturned and spare myself no personal effort to do all that lies within my power to foster and encourage the growth and development of Las Vegas.”
Clark’s dramatic statement might sound like a natural lead-up to building the bombastic city we know today. But that’s not quite what happened. Over the next 25 years, Las Vegas settled into an existence as a quasi-company town, with the railroad and mining as its main industries and a population of about 2,300. Clark sold his share of the railroad to Union Pacific in 1921, living in retirement for four more years until his death at age 86.
The 1920s were a tumultuous decade for Las Vegas nearly from the outset. In 1921, Union Pacific cut 60 jobs in the wake of its acquisition of the railroad. President Warren G. Harding’s incoming administration also meant new appointments to the Railroad Labor Board, and the board approved a series of wage cuts for railroad workers. In the meantime, a post-World War I downturn in mining further impacted Las Vegas. Then, in what is largely viewed as a retaliatory move, Union Pacific moved its repair shops out of Las Vegas and to Caliente, Nevada, costing hundreds of jobs.
With a dire economic outlook impacting the entire state, Nevada revisited the legalization of gambling, which had been legal from 1869 up until 1910. With greater public support for relegalizing gambling than in previous efforts, a bill to legalize “wide open” gambling passed both the state Assembly and Senate, and on March 19, 1931, Governor Fred Balzar signed it into law. That same year, divorce laws were loosened to permit anyone with a six-week residency in the state to legally divorce. And just one year earlier, construction had begun on the Hoover Dam, bringing an influx of thousands of workers to the area, many of whom would take the short trip to Las Vegas to try their luck with the newly legalized games. With this confluence of events, the Las Vegas we know today began to take shape.
Organized Crime and the Strip
The decriminalization of gambling made Las Vegas an attractive destination to experienced gambling operators, some of whom were running criminal enterprises in other states. One such figure was the archetypal crooked cop Guy McAfee, a Los Angeles vice squad officer who fled to Las Vegas to escape prosecution for running gambling and prostitution rings — the exact vice he was supposed to be policing. Arriving in town in 1938, he bought the Pair-O-Dice Club on Highway 91 and renamed it the 91 Club, delaying its grand opening to 1939 in order to coincide with Ria Langham's six-week residency for divorcing Clark Gable.
McAfee was responsible for two enduring pieces of Las Vegas culture: He opened the Golden Nugget on Fremont Street, ushering in an era of grandiose casinos, and he is also credited with nicknaming Highway 91 “the Strip.” The Golden Nugget opened in 1946, about a year after the Nevada Legislature created the first casino license stipulating a 1% tax rate on gross gaming revenue in excess of $3,000.
The lucrative gaming industry began to attract heavier organized crime players beyond McAfee. Benjamin “Bugsy” Siegel arrived in Las Vegas intending to create a base of operations for the notorious Syndicate, which, at the time, was led by Meyer Lansky during a period when Salvatore “Lucky” Luciano was in prison. Using funds from the Syndicate, Siegel became the primary stakeholder in the construction of a casino on Highway 91 to rival the Golden Nugget. Siegel wanted it to depart from the Old West aesthetic of most casinos of the time, and instead be patterned after the tropical resorts the Syndicate backed in Havana, Cuba. He dubbed it the Flamingo, and hoped it would set a new standard for opulence in line with his own worldview. “Class, that’s the only thing that counts in life,” he once said. “Without class and style, a man’s a bum, he might as well be dead.”
Lavish attention to detail and poor business management contributed to enormous cost overruns, and bad luck compromised the Flamingo’s opening and its ability to quickly recoup costs. Maybe because of the money, maybe for a number of other possible motives that are debated to this day, Bugsy Siegel was gunned down while reading the newspaper in a Beverly Hills mansion on June 21, 1947. The murder was a national sensation, covered in tabloids and TIME magazine alike. LIFE magazine ran a gruesomely iconic full-page photo of the crime scene in its article about the murder. The case, Crime Case #46176 in the Beverly Hills Police Department, is still open and unsolved.
Tellingly, other Syndicate bosses took over the Flamingo within minutes of Siegel’s murder. The resort eventually became profitable — so much so that the Syndicate began building more casino-resorts on the Strip. Organized crime had taken hold in Las Vegas, and the era of the swanky, entertainment-oriented hotel-casino was born. The mob invested in more casinos; the Sands Hotel and Casino opened in 1952 and brought in the “Rat Pack” (Frank Sinatra, Dean Martin, Sammy Davis Jr., Joey Bishop, and Peter Lawford) for a high-profile residency. The Dunes, Riviera, and New Frontier opened in 1955; the Tropicana followed in 1957, and the Stardust opened a year later. Each had ties to organized crime syndicates from around the country.
Despite the sensational murder of Bugsy Siegel, the mob’s involvement in casinos, hotels, restaurants, and other Vegas businesses expanded, and more gangsters arrived in the city throughout the 1960s and ’70s. But in the late ’60s, billionaire Howard Hughes bought a series of mob-connected casinos — the Desert Inn, the Sands, Castaways, Frontier, the Silver Slipper, and Landmark — that shifted the balance of casino ownership in the city from mob-connected to corporate-owned. In 1969, the Nevada Legislature promoted corporate ownership of casinos, and in 1970, Congress passed the Racketeer Influenced and Corrupt Organizations Act (commonly known as RICO), which aided the U.S. Justice Department in cracking down on organized crime.
During the ’70s, high-profile car and restaurant bombings between rival gangs unsettled the city to the point of attracting the attention of the FBI. The Nevada Gaming Commission and the Nevada Gaming Control Board refocused on organized crime, and Governor Mike O’Callaghan made it a point of emphasis. A RICO case focused on mobster Anthony Spilotro and Frank “Lefty” Rosenthal, whose connections ran from Chicago mob families to others throughout the Midwest. By 1981, Spilotro’s operations had been broken up, and the mob was all but finished in Las Vegas.
The Rise of the Corporate Mega-Resort
Billionaire businessman Kirk Kerkorian bought the Flamingo in 1967, and in 1969, he opened the massive International Hotel. It was the largest hotel in the country, with 1,500 rooms and a 4,200-seat showroom. For its grand opening, he brought in Barbra Streisand, and then followed that by bringing in Elvis Presley for a famed residency — 837 consecutive sold-out performances over seven years — that set an enduring record. The same year, Kerkorian bought Hollywood’s venerable MGM Studio, and set out to build a themed resort in Las Vegas based on the production house.
With all of the buying and building, Kerkorian incurred enormous costs, so to help balance the ledger, he sold the Flamingo (and later the International Hotel as well) to the Hilton Hotel Corporation. The success of the Flamingo Hilton caught the attention of other major hotel corporations, such as Sheraton and Holiday Inn, and they too began opening casino-hotels in the city. In 1973, Kerkorian opened the MGM Hotel-Casino, which eclipsed the International Hotel in grandeur, boasting 2,100 rooms, eight restaurants, two showrooms, and the (at the time) world’s largest casino. It was the largest resort in the world, and Las Vegas’ first mega-resort.
During the rest of the ’70s and into the ’80s, development on the Strip stagnated. But Las Vegas itself was growing: From 1985 to 1995, the city’s population nearly doubled, increasing to around 368,360. Using junk bonds in 1989, developer Steve Wynn reinvigorated the Strip by building the most ostentatious mega-resort yet: the Mirage Resort and Casino. A 29-story Y-shaped tower with 3,044 rooms, a 1,500-seat showroom, and waterfalls, it also had a simulated volcano that would “erupt” every 15 minutes after sundown. That same year, Kerkorian announced plans for a new MGM Grand, which was completed in 1993 and took the mantle as Las Vegas’ largest casino, with even more over-the-top touches including a lion habitat and a heavyweight boxing arena.
An Entertainment Capital
The 1990s were a transitional era in Vegas, as many of the midcentury casino icons were razed to make way for new family-friendly mega-resorts, representing a commitment to broader entertainment tourism rather than gambling alone. The Sands was imploded and replaced by the Venetian; similarly, the Dunes was replaced by the Bellagio, and the Hacienda was replaced by Mandalay Bay Resort. In true Las Vegas fashion, each implosion was a spectator event. The Hacienda implosion was even scheduled for 9 p.m. on December 31, 1996, in order to coincide with the new year on the East Coast. Most of the casino implosions were televised, and the videos can still be viewed on local TV news channel websites.
Today, Las Vegas continues to broaden its scope. Professional sports leagues have ended their historical aversion to placing teams in the city, as seen in the NHL awarding the city an expansion team, the Vegas Golden Knights, in 2017; the WNBA’s San Antonio Stars relocating to Las Vegas and becoming the Aces in 2018; and the NFL’s iconic Raiders franchise relocating to Las Vegas in 2020. Major League Baseball’s Athletics are likely to follow. Las Vegas is now also known as a city with an excellent fine-dining scene, with a number of chefs named semifinalists for the 2024 James Beard Awards. And the only place in town the mob exists now is in a museum.
What 6 Major State Capitals Looked Like 100 Years Ago
One hundred years is a long time in the life of a city. New technologies emerge and wane, people come and go, cultural factors ebb and flow. But not all cities change at the same rate; some stay comparatively similar to their older incarnations, while others become drastically different. Here’s a glimpse at what a few iconic state capitals looked like a century ago.
Atlanta, Georgia
Atlanta was named after the Western and Atlantic Railroad, for which it was a terminus. In the early 20th century, the city was well established as a major railway hub, and the downtown was built around its first train station. Hotels were concentrated in an area near the station (called, fittingly, Hotel Row) in order to serve train travelers, and by the 1920s, masonry high-rises created the city’s skyline.
Like many cities during this period, Atlanta was beginning to expand its roads in order to accommodate increasing numbers of cars. In the 1920s, the city built three major viaducts to allow traffic to bypass the high number of railroad crossings. The Central Avenue, Pryor Street, and Spring Street (later renamed Ted Turner Drive) viaducts not only improved vehicle safety, but also led to development outside the city’s downtown core.
Boston, Massachusetts
Though Boston was established as a colonial port city as early as 1630, a wave of immigration between 1880 and 1921 fueled a population boom and a sense of transition similar to what many younger cities were facing at the time. The expanding population spurred a building boom, and changes wrought by the Industrial Revolution were at the forefront. The industrialization of nearby Springfield, Massachusetts, led to a high population of mechanics and engineers in that city, and it became a hub for the nascent automotive industry. Rolls-Royce selected Springfield as the site of its U.S. factory, and many other early auto manufacturers were based in the area. In fact, Massachusetts claimed to have manufactured more cars at the beginning of the 20th century than Detroit, Michigan. Cars were particularly popular in Boston — more so than in many other cities — and 1 in 8 Bostonians were car owners by 1913. This led to the construction of a large number of buildings dedicated to automobiles, including garages, repair shops, car dealerships, and more.
In terms of architecture, the city’s affluent Beacon Hill neighborhood appears very similar today to how it looked in the 1920s, with well-preserved colonial-style and Victorian buildings. However, little remains of Boston’s once-abundant theater district, which reached a peak count of 40 theaters by 1935.
Nashville, Tennessee
Nashville has a storied history as a center of American popular music, but that history was in its very infancy 100 years ago. The famous Grand Ole Opry didn’t begin until the middle of the 1920s, first broadcasting as “the WSM Barn Dance,” and at the time, it was hardly the institution it would become. In those days, it was purely a radio show broadcast from the WSM studio on the fifth floor of the National Life and Accident Insurance building, with only as many spectators as could fit in the limited confines of the station’s Studio A.
Unlike other major capitals, Nashville wasn’t a city of high-rises — the 12-story Stahlman Building was the tallest building from the time of its completion in 1908 until the L&C Tower was built in the 1950s — and many of the low-rise brick and masonry buildings from the last century are preserved today. This is particularly true along the First Avenue front of the Cumberland River, and along Second Avenue, formerly known as Market Street.
Austin, Texas
Though Austin’s population began steadily growing around the end of the 19th century, in 1920 it was only the 10th-largest city in Texas, with a population just under 35,000. Its visual focal point was the Texas State Capitol Building (the UT Tower didn't exist yet), and the surrounding downtown consisted of low- and mid-rise buildings with brick or stone facades — an aesthetic that was more “Main Street” than “metropolis.” Cars weren’t quite as dominant in Austin as in larger cities of the time, and horse-drawn carriages were still seen on the streets.
Phoenix, Arizona
Phoenix is another city that had a relatively small population in 1920 — just around 29,000 — but it was still the largest city in a state that had existed for only eight years. Because of this, Phoenix had the flashiness and bustle of an up-and-coming city, despite its small size. The city’s first skyscraper, the Heard Building, was even the site of a stunt climbing performance shortly after it was built. At just seven stories, however, the Heard Building might not have qualified as a skyscraper in larger cities. The 10-story Luhrs Building surpassed it in height when it opened in 1924, and the 16-story Hotel Westward Ho became the city’s tallest building in 1928. It held that title for more than 30 years, as the vast availability of land surrounding Phoenix disincentivized vertical construction in favor of outward expansion.
Sacramento, California
Sacramento is often overshadowed by other iconic California cities, but 100 years ago it boasted a downtown of ornate classical architecture, was home to the largest manufacturing train yard in the western United States, and served as a major retail hub for the region. Vital downtown structures of the time — such as Sacramento City Hall, Memorial Auditorium, the California State Life Building, and the Federal Building — were all built during a construction boom that occurred between 1912 and 1932. But there isn’t much evidence of this architectural period today, as even some surviving buildings, such as Odd Fellows Hall, have been remodeled with simpler midcentury-style facades.
We Tried Writing With a Quill, and Here’s What We Learned
The use of quill pens dates back to the sixth century CE, when the feathers of large birds — primarily geese, turkeys, swans, and even crows — replaced the reed pens that had been used previously. Though it’s an obsolete writing utensil today, the quill pen remains a symbol of education, literature, and artistic expression. Many important historical documents were written using quill and ink, from the Magna Carta to the Declaration of Independence, and white quills are still laid out every day the U.S. Supreme Court is in session.
In pop culture, the Harry Potter series has helped generate interest in the old-fashioned writing instrument, and Taylor Swift, noting the influences of Charlotte Brontë and period films, has referred to some of her music as “Quill Pen Songs.” “If my lyrics sound like a letter written by Emily Dickinson’s great-grandmother while sewing a lace curtain, that’s me writing in the quill genre,” she explained at the Nashville Songwriter Awards in 2022.
So what is it actually like to write with the quill pens of yore? To answer that question, I turned to the internet for authentic supplies and expert advice, and set out scribbling. Here’s what I learned from the experience.
First, What Is a Quill?
A traditional quill pen consists of a feather that has been trimmed to around 9 inches long, stripped of its barbs, and cleaned of membrane and wax inside and outside the hollow barrel. The quill is then dried, typically by curing it in sand, and the tip is shaped into a nib with a channel split (cut) into it to hold the ink.
The earliest fluid inks were carbon-based black inks that are believed to have originated in China around 2700 BCE. Iron gallotannate (iron gall) ink eventually replaced carbon-based ink and became the primary ink used with quill pens from the Middle Ages until the beginning of the 20th century. Iron gall ink is a permanent, deep purple-black or blue-black ink that darkens as it oxidizes, and is made from iron salts and gallotannic acids from organic sources (such as trees and vegetables). The Codex Sinaiticus, written in the fourth century CE and containing the earliest surviving manuscript of the Christian Bible, is one of the oldest known texts written with iron gall ink.
The Writing Technique Is Familiar — But With More Blotting
Though the quill pen was the primary writing implement for more than 1,500 years, it’s an awkward tool for a modern writer to learn to use. Thankfully, hand-cut goose quills and iron gall ink are readily available online for the curious scribe to purchase.
I was eager to start writing once I acquired my pen and ink, but I hadn’t considered what type of paper to use. I felt a bit like Goldilocks while trying different types: Watercolor paper was too absorbent, printer paper and a coated writing tablet weren’t absorbent enough (though the pen glided across both beautifully), but a good-quality sketchbook offered just the right amount of absorbency.
On the first attempt, I dipped the pen in the jar of ink and then removed the excess liquid by rubbing the barrel of the feather along the rim of the ink jar. I found this didn’t remove enough ink to prevent drips, however, so I used a paper towel to blot the excess.
Once that was done, using the quill pen was no different from using my favorite metal-tipped ink pen. I held the quill the same way and applied about the same amount of pressure (at least initially) to the paper to write.
A Soft Backing Helps Prevent Breakage
Once I found the right paper to use with my quill pen, I practiced loading the tip with ink and writing the alphabet and short sentences. My initial attempts were challenging because using a quill pen requires frequently dipping the pen tip back into the jar for more ink, which affected the quality and consistency of the letters. In trying to finish a word or sentence before replenishing the ink, I would find myself pressing too hard on the quill tip, which quickly dulled the point and resulted in me cracking my first pen.
With only one quill remaining, I sought the advice of more experienced quill pen users. They recommended using a felt cushion underneath the writing paper to preserve the quill’s point; I didn’t have any felt on hand, so I used an old linen napkin instead. I expected it to be more difficult to write with a soft backing rather than on a solid tabletop, but I was amazed at how much easier and smoother it was to write. And I didn’t crack my pen!
The Ink Needs Time to Dry, But Sand Can Speed Up the Process
I smeared the ink on many of my early efforts by trying to move or stack the paper too soon. Blotting paper was (and still is) a popular way of preventing ink from smearing, but my attempts to use a clean piece of paper on top of my iron gall ink still resulted in smudges.
However, I had good luck with a technique that predates blotting paper: sand. In this case, I used sterile terrarium sand from the craft store and sprinkled it over my still-wet ink. The sand absorbed the wet ink in a matter of minutes and, once I shook off the sand, my quill ink writing was dry and (relatively) smudge-free. (Which was more than I could say for my hands and shirtsleeves.)
Successfully writing with a quill pen took more practice and patience than I initially expected, especially considering I’ve been a lifelong fan of sending handwritten letters. But once I got the hang of it, there was something very soothing about the rhythm of the old-fashioned process.
It’s easy to take for granted today, but the emergence of broadcast radio was a seismic shift in early 20th-century culture. Born out of ship-to-shore wireless telegraph communication at the turn of the 20th century, broadcast radio represented an entirely new pastime by the time it began to mature in the 1920s. The golden age of radio was the period from the 1920s to the 1950s when the medium was at its absolute peak in both program variety and popularity. Radio grew massively during this era: In 1922, Variety reported that the number of radio sets in use had reached 1 million. By 1947, a C.E. Hooper survey estimated that 82% of Americans were radio listeners.
In addition to the music, news, and sports programming that present-day listeners are familiar with, radio during this period included scripted dramas, action-adventure series such as The Lone Ranger, science fiction shows such as Flash Gordon, soap operas, comedies, and live reads of movie scripts. Major film stars including Orson Welles got their start in radio (Welles became a household name in the wake of the infamous panic sparked by his 1938 broadcast of The War of the Worlds), and correspondents such as Edward R. Murrow established the standard for broadcast journalism. President Franklin D. Roosevelt used the medium to regularly give informal talks, referred to as fireside chats, to Americans listening at home. But radio was also largely influenced by advertisers, who sometimes wielded control of programming right down to casting and the actual name of the program, resulting in some awkward-sounding show titles, such as The Fleischmann’s Yeast Hour. The golden age of radio was a combination of highbrow and lowbrow content, offering both enduring cultural touchstones and popular ephemera — much like the television that eclipsed it. Read on for five more facts from this influential era.
The First Radio Jingle Was a Song About Wheaties
The first known radio advertisement was a real-estate commercial for the Hawthorne Court Apartments in Jackson Heights, Queens, broadcast by New York station WEAF in August 1922. There’s a bit of disagreement over whether the duration of the ad was 10 minutes or 15 minutes, but fortunately for listeners, it wasn’t long before the ad format was pared down considerably. In 1926, when General Mills predecessor Washburn-Crosby was looking for a way to boost the languishing sales of Wheaties, it turned to its company-owned radio station in Minneapolis (WCCO) for what ended up being a much shorter form of commercial. WCCO head of publicity Earl Gammons wrote a song about the cereal called “Have You Tried Wheaties?” and Washburn-Crosby hired a barbershop quartet to sing it, thus creating the first radio jingle.
Due to limited recording capabilities during the first three years of the ad campaign, the Wheaties Quartet (as they were known) performed the jingle live at the station every time the commercial aired. The decidedly manual campaign worked: The Minneapolis-St. Paul area came to account for more than 60% of Wheaties’ total sales. When the ad campaign was expanded nationally, sales of Wheaties increased throughout the country, establishing the effectiveness of the jingle and the influence of advertising on the medium. By 1948, American advertisers were spending more than $100 million per year (around $1.2 billion today) on radio commercials.
Photo credit: Graphic House/ Archive Photos via Getty Images
The “Big Three” Networks Were Born in Radio
In 1926, RCA, the Radio Corporation of America, bought the radio station WEAF from AT&T and combined it with its existing New York-area station, WJZ. The combined assets established RCA’s broadcast network, dubbed the National Broadcasting Company, or NBC. On November 1 that same year, NBC officially became two networks: NBC Red (extending from WEAF) and NBC Blue (extending from WJZ). The upstart networks soon had a competitor. In 1927, frustrated talent agents Arthur Judson and George Coats, unable to land a contract to get their clients work with NBC, formed their own radio network, United Independent Broadcasters. The network quickly changed its name to Columbia Phonograph Broadcasting Company after a merger with Columbia Phonograph and Records. Unfortunately for Judson and Coats, they were no more effective as would-be radio network moguls than they were as radio talent agents: The network operated at a loss, and it wasn’t long before Judson sold it to a relative who had been an initial investor, William S. Paley. On January 29, 1929, Paley shortened the network’s name to the Columbia Broadcasting System, or CBS.
The same year, NBC established the country’s first coast-to-coast radio infrastructure, but antitrust concerns eventually caught up with it: In 1941, the FCC ordered the company to sell either the Red or Blue network. Years of appeals followed, finally resulting in NBC electing to sell the Blue network in 1943 to Life Savers candy company owner Edward J. Noble. Noble renamed it the American Broadcasting Company, and ABC was born.
Photo credit: Hulton Archive/ Hulton Archive via Getty Images
A Ventriloquist Show Was One of Radio’s Biggest Hits
A form as visual and illusion-based as ventriloquism seems like a poor fit for an audio-only medium, but from 1937 to 1957, The Edgar Bergen and Charlie McCarthy Show was an American radio institution. It was the top-rated show for six years of its run, and in the top seven for all but its final five years. Ventriloquist Edgar Bergen started in vaudeville, and it was his guest appearance on Rudy Vallée’s Royal Gelatin Hour in 1936 that introduced him to the radio audience. The appeal of the show was Bergen’s vaudevillian skill at performing multiple comedic voices, and his quick and salacious wit as Charlie, roasting celebrity guests and using the dummy’s nonhuman innocuousness to get away with censorship-pushing double entendres. Though the show included a live studio audience, Bergen all but dropped the traditional ventriloquism requirement of not moving his lips while voicing Charlie. As he reasoned, “I played on radio for so many years… it was ridiculous to sacrifice diction for 13 million people when there were only 300 watching in the audience.”
FM Radio Was Invented During the Golden Age
Inventor Edwin H. Armstrong earned prestige for creating the regenerative circuit in 1912, a modification to the vacuum tube that led to the dawn of modern radio. In the late 1920s, he set out to find a way to eliminate static from broadcasts, and received initial support in the endeavor from RCA President David Sarnoff. Sarnoff allowed Armstrong to use the RCA radio tower atop the Empire State Building to conduct experiments, and Armstrong agreed to give RCA first rights to the resulting product. When Armstrong demonstrated his static-free invention in 1935, what he unveiled was an entirely new broadcast technology using frequency modulation (FM) instead of the existing AM band.
Sarnoff, however, had wanted an improvement to AM, and saw FM as a threat to both RCA’s existing AM infrastructure and the emerging television technology RCA was investing in: He feared it would render AM equipment obsolete, and that FM radios would compromise the nascent market for television sets. Instead of embracing FM, RCA withdrew its support of Armstrong. With no support elsewhere in the broadcast industry, Armstrong set up his own fledgling FM station in hopes of promoting high-fidelity radio, but he spent years in court mired in a byzantine tangle of regulatory and patent battles. FM eventually caught on, of course, but not until after radio’s golden age had passed: The FCC didn’t even approve a standard for FM stereo broadcasting until 1961.
Photo credit: Gene Lester/ Archive Photos via Getty Images
The Last Shows of the Golden Age Ended in 1962
On September 30, 1962, the last two scripted radio shows signed off for the final time on CBS. That day, the detective series Yours Truly, Johnny Dollar ended a run that had begun in 1949, and the mystery-drama Suspense concluded a 20-year run that had begun on June 17, 1942. As its longevity suggests, Suspense was particularly venerable; it was a Peabody Award winner whose scripts drew from classical literature, stage plays and screenplays, and entirely original material. Suspense attracted top guest stars such as Humphrey Bogart, Bette Davis, Cary Grant, Bela Lugosi, Rosalind Russell, and James Stewart. CBS even produced a television adaptation that began airing in 1949, but it was canceled in 1954, outlasted by the original radio version.