While they’re rarely seen today, variety shows, with a genial host introducing an eclectic array of singers, comedians, jugglers, and the like, were once among the most popular programs on television — and before that, on radio, and before that, on stage. They’re a remnant of another time, before a remote control or the click of a mouse could point our drifting attention toward a different channel.
Until relatively recently, variety shows were a prominent part of American culture. Here’s a look at how this form of showmanship rose with the times, but failed to keep pace as the entertainment industry evolved.
From the Stage to Radio
Variety acts have been part of the American theater tradition since at least the 18th century, when they were used to keep audiences amused between sets of the main show. They emerged as independently staged productions by the 1840s, and by the early 1880s, the variety show extravaganza known as vaudeville was en route to becoming the country’s most popular form of entertainment.
With the burgeoning prevalence of radio in the 1920s, performers who made their living on stage began showcasing their skills over the airwaves. The medium’s first mainstream variety show belonged to singer and bandleader Rudy Vallée, who provided music, interacted with guest stars, and unveiled a dramatic sketch as part of The Fleischmann’s Yeast Hour beginning in October 1929.
Vallée was credited with discovering top talents such as Eddie Cantor, who brought in a studio audience to liven up his own radio program. Stars such as Ed Wynn, Fred Allen, and Bing Crosby also enjoyed success as variety show hosts during this era.
Television’s first variety show surfaced on NBC in May 1946, courtesy of Standard Brands, which had been a major sponsor of the format on radio. Hour Glass featured an ever-changing array of guest stars who introduced comedic sketches, musical acts, and commercials. But squabbling between the network and sponsor ultimately led to the show’s cancellation in February 1947.
NBC tried its hand at variety again in June 1948 by bringing the long-running radio program Texaco Star Theatre to television, and soon settled on Milton Berle as its permanent host. A well-traveled if moderately heralded veteran of vaudeville, radio, and movies, Berle found this weekly gig a perfect platform for his brand of outlandish physical comedy. Texaco Star Theatre quickly emerged as the top show on TV, to the point where restaurants and movie theaters reportedly closed while Berle commandeered the attention of would-be customers from 8 p.m. to 9 p.m. on Tuesday nights.
Twelve days after Texaco Star Theatre’s TV debut, Toast of the Town premiered on CBS with a markedly different host. Awkward and notorious for forgetting names, Ed Sullivan hardly looked the part of a man tasked with entertaining the masses. Yet from the very first episode, which featured the Toastettes dance troupe and the comedy duo of Dean Martin and Jerry Lewis, it was clear its host possessed a sharp eye for talent. The show was renamed in his honor in 1955, and Sullivan took care to balance the presentation of children’s fare with up-and-coming rock ’n’ roll acts such as Elvis Presley and the Beatles. The show served as a kingmaker of American tastes until going off the air in 1971.
The successes of Berle and Sullivan spurred an onslaught of TV variety shows, many of which carried their audiences from previous incarnations. Arthur Godfrey brought his competition program Arthur Godfrey’s Talent Scouts from CBS Radio to TV in late 1948, and soon afterward premiered the more traditional variety show Arthur Godfrey and His Friends. The Red Skelton Show became another successful radio-to-TV hit beginning in 1951, with its star’s stable of comic characters such as Junior the Mean Widdle Kid proving just as engaging on camera.
Not every established star was able to make the leap to a weekly TV format, however. Comedian Joe E. Brown, who possessed face-distortion abilities to rival those of Berle, couldn’t keep The Buick Circus Hour afloat for more than a season in the early 1950s. And for all his accomplishments as a singer and actor, Frank Sinatra twice failed to gain significant traction with a variety show that decade.
While old entertainment standbys such as Dean Martin enjoyed success with their programs in the 1960s, variety shows experienced a lag until being jolted by a wave of new talent later in the decade.
Chief among these was The Carol Burnett Show, which boasted the comedic and musical talents of its titular star as well as a strong supporting cast. Known for its recurring characters and spoofs of popular movies, as well as the signature ear tug and Tarzan yell of its host, The Carol Burnett Show claimed a whopping 23 Emmy Awards over its run from 1967 to 1978.
Also debuting in 1967, The Smothers Brothers Comedy Hour featured the sibling-rivalry schtick of folk singers Tom and Dick Smothers. While the show included traditional variety elements such as musical production numbers, the co-hosts and their guests rankled CBS executives with their barely veiled drug references and criticism of the Vietnam War, until the network abruptly canceled the show in 1969. Meanwhile, Rowan & Martin’s Laugh-In, which premiered in January 1968, also poked fun at cultural mores and pushed the envelope with its risqué jokes, although it managed to hang on until 1973.
With the launch of The Flip Wilson Show in 1970, the variety format had its first successful Black host. Wilson excelled at character creation, particularly with the fan-favorite Geraldine Jones, and his show was the country’s second-most-watched program for two of its four seasons.
Musicians came to dominate variety shows in the 1970s, with Cher and Sonny Bono, Donny and Marie Osmond, and Tony Orlando and Dawn enjoying solid ratings as hosts. Other notable entries from the period include The Muppet Show, which featured the brilliant puppet work of Jim Henson and his crew from 1976 to 1981, and Saturday Night Live, which embarked on its endless run in 1975.
However, the format was clearly running on fumes by the time Pink Lady and Jeff, which starred a pair of non-English-speaking Japanese pop stars, went on and off the air in 1980. The long-running Hee Haw, which first introduced a country flavor to the genre back in 1969, continued airing original shows until 1992, while The Statler Brothers also enjoyed success among the country set on the Nashville Network through most of the 1990s. Otherwise, few variety shows managed to make a dent in the public consciousness across the final years of the 20th century.
The format’s demise is often blamed on the rise of cable television and the accompanying fragmentation of audiences: About 23% of American households received basic cable TV service in 1980, a number that jumped to 60% by the end of the decade. However, it’s clear that changing tastes have also dictated viewing preferences, as 21st-century audiences have tuned in to traditional networks to watch reality TV competitions such as America’s Got Talent, which features the disparate singing, dancing, and athletic performances that would have been welcomed on The Ed Sullivan Show, albeit without the competition element.
Variety shows continue to find their fans in international markets, and multitalented artists including Maya Rudolph and Neil Patrick Harris have attempted to revive them with limited success in recent years. And while SNL shares some elements with traditional variety shows, it is mostly focused on sketch comedy. Otherwise, variety has largely been relegated to old video clips and other forms of nostalgia, waiting for the day when the curtain rises again and this once-celebrated form of entertainment is called back into the spotlight.
The role of first lady has often been seen as ceremonial, rooted in hospitality, social engagement, and the management of White House events. The president’s spouse is expected to host gatherings, welcome dignitaries, and accompany the president at official functions — duties that are social in nature, but still help shape public perception of the presidency and project the values of the nation.
The women who have held this position have never been confined to protocol, however. Without a formal title or salary, many first ladies have carved out their own platforms — championing causes, serving as cultural ambassadors, and guiding national conversation. Each made the role her own, and some redefined it entirely. Here are five first ladies who changed the game during their time in the White House.
In the early years of the nation, most first ladies had little or no formal education, reflecting the limited opportunities available to women at the time. Louisa Adams, the wife of John Quincy Adams, America’s sixth president, broke this pattern. She was the first person in the position to receive structured schooling, studying at a convent school in France from 1781 to 1783 and at a boarding school in England from 1784 to 1789. It wasn’t until decades later that Lucy Hayes, the wife of Rutherford B. Hayes, became the first U.S. first lady to have earned a college degree. In 1850, she graduated with a liberal arts degree from Cincinnati Wesleyan Female College, marking a milestone at a time when higher education for women was still rare.
Later first ladies expanded this legacy with advanced academic achievements. Laura Bush was the first with a master’s degree, receiving a Master of Library Science from the University of Texas at Austin in 1973. Hillary Clinton became the first to hold a law degree, having earned her Juris Doctor from Yale Law School in 1973, a path later followed by Michelle Obama, the first Black first lady, who earned her Juris Doctor from Harvard Law School in 1988. Jill Biden reached the highest level of academic achievement to date, completing a Doctor of Education in Educational Leadership at the University of Delaware in 2007.
The First To Have Her Own Staff
In 1901, Edith Roosevelt, the wife of President Theodore Roosevelt, became the first U.S. first lady to hire her own staff. Recognizing the increasing demands of the White House and her own desire to manage them effectively, she appointed Isabella “Belle” Hagner as her social secretary. Hagner’s responsibilities included handling correspondence, planning events, and managing the first lady’s schedule. Before Roosevelt, the first lady’s duties were often managed informally by family members, friends, or existing White House staff, so this move established the necessity of a dedicated team to assist in the multifaceted duties of the position.
Having a staff marked an important shift in how the role of first lady was perceived and executed. In 1961, Jacqueline Kennedy became the first U.S. first lady to hire a dedicated press secretary. She appointed Pamela Turnure to manage media inquiries, arrange interviews, and coordinate public appearances, professionalizing the role of first lady. In 1978, Rosalynn Carter became the first to establish a formal Office of the First Lady, which provided dedicated staffers and resources to support her initiatives. This move institutionalized the role, allowing future first ladies to operate with an official workspace and organizational support within the White House.
The First To Use Media as a Tool for Public Influence
Eleanor Roosevelt, the wife of Franklin D. Roosevelt, transformed the role of first lady by becoming the first to use media strategically to communicate directly with the public. She held regular press conferences — a whopping 348 over 12 years — and in 1935 launched her own syndicated newspaper column, “My Day,” which appeared six days a week. Through this platform, she became the first U.S. first lady to speak independently to millions nationwide, offering her perspectives on current events, social issues, and public policy.
Roosevelt’s example demonstrated that the first lady could extend her influence beyond ceremonial duties, using emerging technologies to connect with citizens and advocate for causes. Her pioneering use of media served as a model for future first ladies to shape public opinion and cultivate an independent public persona. In 1962, Jacqueline Kennedy became the first U.S. first lady to give a televised tour of the White House with the hour-long special A Tour of the White House With Mrs. John F. Kennedy, which was also the first documentary specifically marketed to a female audience.
Betty Ford transformed the first lady’s role by openly discussing her breast cancer diagnosis while in office in 1974, a time when such topics were rarely mentioned publicly. By sharing her experience, she encouraged women nationwide to get screened and helped break the stigma surrounding breast cancer. Her openness had a significant impact, prompting many women to seek medical advice and screenings, a phenomenon known as the “Betty Ford blip.”
After leaving the White House, she continued her health-related advocacy, speaking candidly about addiction and founding the Betty Ford Center, further solidifying her legacy as a champion for public health and personal honesty. Ford’s willingness to address these issues became a model for future first ladies to engage with health topics in meaningful ways, influencing public discourse and policy.
From the very start, first ladies have used their position to work toward the betterment of the country. Martha Washington, the first U.S. first lady, actively supported the welfare of Revolutionary War soldiers, offering assistance to veterans in need and making herself approachable to those seeking help. Her example set an early precedent for using the White House as a platform for social advocacy.
Over time, first ladies expanded this role: Eleanor Roosevelt broke new ground by publicly championing human rights and speaking at international forums, while Lady Bird Johnson was the first to formally announce an agenda, promoting President Lyndon B. Johnson’s vision for a “Great Society,” including a war on poverty and initiatives to beautify public spaces. Together, their efforts helped define advocacy as a central expectation of the first lady.
Subsequent first ladies continued this legacy, focusing on causes that reflected both personal passions and national priorities. Nancy Reagan launched the “Just Say No” campaign to prevent drug abuse among children, and Barbara Bush championed literacy and family support initiatives. In each case, the cause reflected the first lady’s own perspective and priorities, showing how the office can shape public life in diverse and lasting ways.
Theft has long been a part of the human experience. Examples of its prevalence can even be found in ancient mythology: Prometheus stole fire from the gods; Odysseus and Diomedes snuck into Troy to steal the Palladium; and in Hindu mythology, Garuda stole the vase of Amrita from the gods to free his mother from Kadru, the mother of serpents.
Of course, theft is very much a real-world concern as well, with the most audacious holdups — the likes of bank jobs, diamond heists, or great train robberies — sometimes gaining almost legendary status. But in the annals of crime, few heists, if any, have captured the world’s imagination quite like the disappearance of Leonardo da Vinci’s “Mona Lisa” from the Louvre in 1911. Not only did the theft of this Renaissance masterpiece shock the art world, but the painting also holds the distinction of being the most valuable object ever stolen.
On August 21, 1911, one of the most audacious heists in art history was successfully brought to fruition. The man responsible for this daring crime was Vincenzo Peruggia. The Italian wasn’t a criminal mastermind by trade — he was, in fact, a painter and decorator. But Peruggia had worked in the Louvre previously, and one of his jobs was constructing glass cases to protect works of art. He was therefore familiar with the entire museum and had insider knowledge of how to quickly and quietly remove a painting from the wall.
On the evening of Sunday, August 20, Peruggia entered the Louvre dressed in the same kind of white work overalls worn by the museum caretakers. He then hid inside a storage closet, where he remained until the following morning, when the Louvre was closed and foot traffic was light. At around 7:15 a.m., Peruggia poked his head out, checked to see if the coast was clear, and then headed straight for the nearby Salon Carré, where the “Mona Lisa” was housed. Then, he simply took the painting off the wall, carried it to an adjacent service stairwell, and removed the relatively small wood panel from its protective glass frame. Hiding it under his overalls, he walked out of the museum undetected.
Initially, staff at the Louvre didn’t realize the “Mona Lisa” had been stolen. It wasn’t uncommon for paintings to be removed for cleaning or photography, so the blank space that the portrait normally occupied was ignored for more than a day. But then, when people started to ask questions, the whole story exploded, sparking an international manhunt and unprecedented media coverage. At the time, the “Mona Lisa” wasn’t nearly as famous as it is today. But the media frenzy surrounding the theft catapulted the painting to new heights, and all of a sudden there was global public interest in Leonardo’s enigmatic masterpiece.
French police soon began interrogating hundreds of suspects. The poet Guillaume Apollinaire was arrested and imprisoned thanks to his associations with the light-fingered Joseph Gery Pieret, a man known to have stolen small items from the Louvre before. Apollinaire then implicated none other than Pablo Picasso, who had recently bought stolen sculptures from Pieret. (We don’t know if Picasso was aware they were stolen.) At one point, the American tycoon and art lover J.P. Morgan was also suspected of commissioning the theft. All of these men were found to be innocent, and the investigation dragged on.
For more than two years, the “Mona Lisa” sat hidden in Peruggia’s modest Paris apartment, where he would often take it out to gaze upon the masterpiece — he poetically stated, after his arrest, “I fell a victim to her smile and feasted my eyes on my treasure every evening. I fell in love with her.” Eventually, Peruggia tried to sell the painting, which led to his downfall. The potential buyers, upon seeing the genuine “Mona Lisa,” reported the thief to the authorities. On December 11, 1913, police arrested Peruggia and the “Mona Lisa” was returned to its rightful place in the Louvre.
Since the theft of the “Mona Lisa,” many other famous and valuable paintings have been stolen. For example, Edvard Munch’s “Scream,” worth $120 million, was stolen twice, in 1994 and 2004, and “The Concert” by Johannes Vermeer, valued at $200 million, was stolen in 1990. Of course, it’s not only paintings that get the big bucks. A $3 million violin — known as the Davidoff-Morini Stradivarius — was nabbed in 1995. And in 2003, the Saliera, a sculpture and functional salt cellar worth about $57 million, was taken from a museum in Vienna. Even Dorothy’s ruby red slippers from The Wizard of Oz were stolen, before later selling at auction for $28 million. And back in 1907, someone stole the Irish Crown Jewels — worth at least $5.5 million today — from Dublin Castle, in a case that remains unsolved.
The “Mona Lisa,” however, is in a different league. It’s hard to put a price on the famed painting, but it did receive an insurance valuation of $100 million in 1962, equivalent to about $865 million today — the highest insurance valuation ever recorded for a painting. This alone makes the “Mona Lisa” the most valuable object ever stolen, as recognized by Guinness World Records. More recently, in 2014, experts valued the painting at a whopping $2.5 billion, far eclipsing the value of any other stolen object in history.
The roots of museums reach back thousands of years. From Mesopotamian princesses to Renaissance aristocrats, humans have long been drawn to collect, preserve, and display the material traces of their world. But exactly how old is this tradition? And which institution deserves the title of the first museum in history?
The earliest evidence of what we might recognize as a museum comes from the city of Ur, in modern-day Iraq. Once a flourishing port on the Euphrates River and the heart of ancient Sumerian civilization, Ur is also remembered as Abraham’s hometown in the Bible.
In the 1920s, British archaeologist Charles Leonard Woolley led excavations at Ur, uncovering treasures that dazzled the public: gold and lapis-inlaid jewelry, royal tombs, and evidence of elaborate funeral rites. Then, in 1924, Woolley stumbled upon something quieter but no less revolutionary.
Inside the ruins of a palace, he and his team found chambers belonging to Ennigaldi-Nanna, daughter of King Nabonidus, the last ruler of the Neo-Babylonian Empire. Among the rubble lay a puzzling collection: an inscribed black boundary stone from 1400 BCE, fragments of a king’s statue from 2250 BCE, bronze figurines, and clay tablets dating centuries earlier. The items spanned more than a millennium of Mesopotamian history.
What tied them together was a small clay drum inscribed in four languages. The text identified the origins of one of the objects and explained how it had been unearthed. To Woolley, this was unmistakably a museum label — the first known to history. He concluded that Ennigaldi had curated a collection of antiquities, deliberately displayed for their historical value.
Little is known about her motives, though her father was fascinated by the past and even conducted excavations himself. Ennigaldi also served as a priestess of the moon god Sin and may have overseen a scribal school for elite women. Whether motivated by scholarship, religion, or royal prestige, her collection, assembled around 530 BCE, stands as the earliest known public museum.
The idea of gathering and displaying objects wasn’t unique to Mesopotamia. In the Greek and Roman worlds, temples often doubled as repositories of offerings, art, and plundered treasures. Sculptures also decorated public gardens, forums, and bathhouses.
The Museum of Alexandria, founded in the third century BCE, was another important milestone in the history of museums. Despite its name, this mouseion — literally a shrine to the Muses — resembled more of a research institute than a gallery. Attached to the famous Library of Alexandria, it housed scholars, lecture halls, and dining rooms rather than artifacts. Still, the institution’s prestige meant that later Europeans revived the word “museum” to describe their own collections.
After Ennigaldi’s experiment, it took nearly 2,000 years before anything resembling a museum emerged in Europe. During the Renaissance, wealthy European nobles, merchants, and scholars began assembling “cabinets of curiosity” or “wonder rooms.” These eclectic collections brought together fossils, coins, artworks, scientific instruments, and exotic artifacts from voyages abroad.
The goal was encyclopedic: to contain the world’s marvels in miniature. Visitors might marvel at a stuffed dodo, a mummy’s hand, or a carved gem, all jumbled together without concern for taxonomy. John Tradescant, a gardener to English royalty, famously opened his home and garden in South London to display such a collection, later catalogued and published as the Musaeum Tradescantianum.
These cabinets were largely private, their accessibility subject to the whim of the owner. Yet they laid the foundation for the idea that collections could be displayed for education as well as prestige.
The leap from private collections to public institutions occurred from the 15th through 18th centuries. A turning point came in 1471, when Pope Sixtus IV donated ancient bronze sculptures to the people of Rome. This gift formed the nucleus of the Capitoline Museums, which officially opened in 1734 and are often considered the first truly public art museums.
Another watershed was the Ashmolean Museum in Oxford, England. Built to house the collections of the wealthy antiquary Elias Ashmole and the Tradescant family, it opened its doors in 1683 and is frequently cited as the first modern museum. Its displays included antiquities, coins, zoological specimens, and the (now-lost) body of the last dodo seen in Europe.
By the mid-18th century, museums were firmly established. The British Museum, founded in 1753 from the collection of Irish physician and naturalist Hans Sloane, institutionalized the model: publicly accessible repositories dedicated to collecting, preserving, and classifying the world’s knowledge. The Louvre, transformed into a national museum during the French Revolution in 1793, underscored the museum’s political as well as cultural significance.
So, what was the first museum? The answer depends on how you define “museum.” If we mean the first known collection curated for historical interest and accompanied by labels, the title belongs to Princess Ennigaldi’s collection in Ur. If we mean the first public institution designed to grant ordinary people access to cultural treasures, the Capitoline Museums of Rome claim the honor. And if we mean the first modern museum in the European sense — systematic, educational, and encyclopedic — the Ashmolean in Oxford stands out.
Each reflects a different stage in humanity’s enduring desire to preserve the past and make meaning from material culture. The museum has never been a single invention, but rather a recurring impulse across time: the need to gather objects, organize them, and invite others to see what stories they tell. In that sense, the first museum may not be a place at all, but an idea that keeps resurfacing whenever people try to make sense of history through things.
“They don’t make them like they used to.” You’ve likely heard this common refrain or even said it yourself before. Maybe it was a grumble about modern disposability, but perhaps it was a wistful reflection on how many parts of daily life have changed.
Old houses in particular can be full of reminders of how life once looked. Over the years, some domestic features that made sense for their eras have faded away as habits, technology, and tastes evolved. Here are seven once-common house fixtures that have all but disappeared.
For generations of kids, a laundry chute was less about dirty socks and more about fun. Who didn’t dream of sliding or sending toys down one like a secret passage? For the people in charge of the household chores, though, they were the ultimate convenience. Laundry chutes first appeared in the United States sometime around the late 1800s. They were inspired by similar systems in wealthy Victorian-era homes in England, which were an evolution of industrial chutes used for mail and coal.
While laundry chutes were initially common only in upper-class houses with staff, by the 1930s, they had become a beloved fixture of middle-class homes, too. But by the mid-1960s, their popularity was on the decline. Rising construction costs in the 1970s further pushed builders to cut out extras, and as modern washers and dryers migrated upstairs into their own rooms, the need for basement-bound chutes all but disappeared.
Intercom Systems
They may seem quaint now compared to sophisticated smart home systems, but there was a time when built-in intercoms were a handy home tool. As postwar suburban homes grew bigger in the 1950s and ’60s, families could use them to communicate with each other from different parts of the house. A voice from the kitchen would announce dinner, remind someone to take out the trash, or check who was home — all without anyone leaving their room.
Intercoms such as the ones made by NuTone were primarily fixtures of upscale homes. They were marketed as a luxury convenience that could also broadcast AM and FM radio throughout the house. But the novelty of intercoms didn’t last long, and by the 1980s, some major intercom manufacturers such as GE had ceased making them.
In the 19th and early 20th centuries, a little door on the side of the house was a common sight for many American families, but it wasn’t for letters or for a pet to scurry in and out. These built-in compartments, which usually had a pair of doors (one opening to the outside of the house and one to the inside), were for milk. This was during an era when milkmen delivered glass bottles of fresh milk to the home. The milk doors were a receptacle for new bottles, and a place for families to return empty ones and leave the milkman’s payment.
Milk delivery continued for decades throughout the U.S. But by the 1960s and ’70s, it was competing with a new reality. Suburban sprawl made delivery routes longer and more costly, supermarkets offered cheaper one-stop shopping, and refrigerators became standard in nearly every home. Daily milk delivery was no longer needed, and remaining milk doors became little more than a symbolic portal to the past.
Perhaps one of the strangest bygone house features is the built-in razor-blade disposal. Prior to the early 20th century, men would need to visit the barber for a fresh shave, since doing so at home was considered too risky. But thanks to the invention of the safety razor in 1904, at-home shaving became the norm. While this was a major leap forward in convenience, the disposable razor blades were a nuisance when it came time to throw them out. Too sharp (and potentially contaminated) for the regular trash, they were instead placed into the wall.
That’s right — small openings were incorporated into the back of medicine cabinets or in the walls above sinks. Once the blades went in, they didn’t come out. They often stayed inside the walls, accumulating until someone decided to renovate somewhere down the line. By the mid-1970s, fully disposable razors with the blade built right in became popular, and these strange little slots went out of style.
Love them or hate them, popcorn ceilings were once everywhere. The infamous finish first became popular in the 1950s, when postwar homes were being built quickly and cheaply to accommodate booming populations. Construction crews appreciated the ease and affordability of the ceiling style, since it doesn’t require sanding or finishing multiple layers, and homeowners liked that it hid imperfections and even provided some minor soundproofing.
Popcorn ceilings were the norm for almost 30 years — and you may even have some in your own home today — but you likely won’t see them in homes built after 1980. A major reason? The material itself. Early formulations of the spray-on mixture often contained asbestos, a group of naturally occurring minerals that can pose serious health risks when disturbed. As maligned as their reputation may be, though, anyone who grew up under popcorn ceilings still probably has a soft spot for them — right along with shag carpets and avocado-green kitchen appliances.
Phone Nooks
Before cellphones and even cordless home phones, the household telephone typically stayed in one spot. In the early 20th century, to make room for this emerging technology, homes were sometimes built with a dedicated phone nook (also called a phone niche). These small, recessed wall spaces were usually found in a hallway or near the stairs. They often had a compact shelf for a heavy rotary phone and space for a small seat to tuck underneath.
If the house wasn’t built with a phone nook, cabinets could be ordered from catalogs such as Sears and installed anywhere homeowners desired. But as phones became smaller, lighter, and eventually mobile, there was no longer a need to anchor them to a single place. Today, phone nooks that remain are often used as storage for items such as mail, keys, tchotchkes, and houseplants.
Ironing was once a more frequent chore for many households, so having a simple setup was helpful. Enter the fold-out ironing board: a shallow, built-in cabinet, often in the kitchen, that concealed a board that could be pulled down when needed. This space-saving design choice was common in 1920s homes, along with Murphy beds and phone nooks.
Of course, ironing boards still exist, but over time many fabrics became easier to launder and wear without wrinkling, and everyday clothing became less formal. As a result, the need for these built-in boards declined. Today, a surviving fold-out ironing board cupboard might make a better spice rack, but these relics are nonetheless a reminder of the changing domestic habits that keep households evolving.
Because U.S. presidents are often among the most famous and critiqued people of their era, they have frequently garnered nicknames for policies or activities that defined their persona — some of which are more well known than others.
George Washington, for one, was sometimes called the “American Cincinnatus,” after the Roman statesman who prioritized the well-being of the republic over personal gain. Andrew Jackson was dubbed “Sharp Knife” by the Muscogee (Creek) Nation for his ruthless negotiating tactics. And Abraham Lincoln became known on the 1860 campaign trail as the “Rail-Splitter,” for his early years of hardscrabble labor on the frontier.
While some nicknames are self-explanatory, others are more confounding when taken without context from the period in which they originated. Here’s a look at how 10 of the more unusual nicknames stuck to U.S. presidents.
James Monroe: “The Last Cocked Hat”
Although he was younger than many of the renowned Founding Fathers, James Monroe is generally lumped in with that group due to his service in the American Revolution and in the administrations of George Washington, Thomas Jefferson, and James Madison. As such, he was one of the final public figures to carry the torch of that era, and his insistence on adhering to the late-18th-century fashions of a powdered wig and tricorn hat, even as he served as president well into the following century, led to him being called “The Last Cocked Hat.”
When President William Henry Harrison died after just one month in office in 1841, thrusting the job into the hands of the vice president for the first time in the nation's history, people were unsure as to whether John Tyler should be treated as the acting president or as a VP with expanded powers. There was nothing ambiguous about the matter to Tyler, who firmly believed he was simply the president, case closed. Yet his habit of upsetting both the Whigs he was ostensibly representing and the Democrats he had initially aligned with prompted critics across the political spectrum to refer to him by the decidedly less respectful title of "His Accidency."
James Buchanan's nickname of "Ten-Cent Jimmy" originated during his tenure as a senator from Pennsylvania in the early 1840s, when he allegedly advocated for laborers to receive a pittance of just 10 cents for a day of hard work. The miserly label was revived for the 1856 presidential race, even making its way into a song for candidate John C. Frémont, although it wasn't enough to derail Buchanan's path to victory.
During the 1880 presidential campaign, supporters sought to play up James A. Garfield's humble origins with songs such as "Boatman Jim" highlighting his teenage job on the vessels that navigated the Ohio and Erie Canal. Left unmentioned in these ditties were the candidate's difficulties in this line of work: According to some reports, Garfield fell overboard 14 times in a six-week span, the final instance leading to a serious fever that forced him to quit.
To steal a line from Paul McCartney, Chester A. Arthur was "born a poor, young country boy," but he dressed the part of a man with a taste for material riches. The transformation began in college, and by the time Arthur was running the country in the early 1880s, the press was having a field day with reports of him owning 80 pairs of trousers and splurging on hats. The sartorial presentation led to Arthur being dubbed the "Dude President" — with "dude" referring to a conspicuously fashionable man — along with related nicknames such as "Elegant Arthur" and "Gentleman Boss."
Benjamin Harrison: "The Human Iceberg"
Although he proved adept enough at campaign speeches, the scholarly Benjamin Harrison disliked small talk and was accused of being distant and aloof in person. As a result, in addition to contending with epithets that poked fun at his relatively diminutive 5-foot-6 stature, President Harrison also endured nicknames that played on his lack of personal warmth, such as "The Human Iceberg" and "The Refrigerator."
Even after Spain was widely (and likely erroneously) blamed for the explosion of the USS Maine off the coast of Cuba in February 1898, President William McKinley sought a diplomatic solution to tensions with the European country. The measured approach didn't sit well with bloodthirsty opponents in Congress and the press, who called the president "Wobbly Willie" for his refusal to take a firm stance in defense of the red, white, and blue. Although the president soon gave Congress the go-ahead to declare war, the nickname resurfaced whenever McKinley was thought to be wavering on an issue.
Often simply known by the initials FDR, Franklin D. Roosevelt earned a new nickname from the press late in the second term of his presidency. Cagey about whether he planned to run for a third term — at the time, there were no presidential term limits — Roosevelt was dubbed "The Sphinx" after the famed Egyptian landmark that supposedly guarded the answer to a great riddle. FDR even saw his likeness lampooned with a papier-mâché Sphinx sculpture during a December 1939 journalists' dinner, although he resisted answering the riddle until confirming his third-term candidacy just before the July 1940 Democratic National Convention.
Also widely known by his initials, Lyndon B. Johnson briefly took on a harmless if unflattering nickname after ascending to the presidency in November 1963. Apparently eager to demonstrate his energy-saving ways, Johnson made a habit of trekking through the White House to shut off lights in empty rooms, prompting amused observers to dub him "Light Bulb" Johnson. The appellation reappeared during the 1964 presidential campaign, with opponents distributing paraphernalia reading, "Turn off Light Bulb Johnson."
While seemingly as derogatory as his longtime alias of "Tricky Dick," bestowed for allegations of political scheming, Richard Nixon's law school nickname of "Iron Butt" originated from a place of admiration. The moniker apparently stemmed from Nixon's reputation as one of the most studious members of his class. When Nixon discussed his career prospects with an older student, the elder insisted he would be able to handle the professional workload just fine because of his "iron butt," built to withstand long hours in a seat.
The golden age of Hollywood was an era of glamorous stars and timeless films, but behind the sparkle was a somewhat less romantic reality. From 1934 until the late 1960s, films were subject to strict moral scrutiny and censorship under the Motion Picture Production Code, better known as the Hays Code.
Named for politician Will H. Hays, who served as president of the Motion Picture Producers and Distributors of America, the set of rules was born from scandal. A series of high-profile controversies in the 1920s convinced the public that Hollywood was reckless, immoral, and a dangerous influence on the general public. Fearing government censorship, Hollywood studios opted to police themselves instead.
On paper, the Hays Code guidelines promised protection for impressionable viewers. In practice, this meant a long list of oddly specific rules primarily targeting crime, profanity, or anything sexual in nature, many of which reflected social anxieties of the era but seem outlandish and outdated today. Here are five of the strangest rules from Hollywood’s Hays Code days.
It’s a sound most of us associate with children or silly teasing, but in 1930s Hollywood, making a “raspberry” sound was forbidden. The juvenile act of placing one’s tongue between the lips and blowing (also known at the time as the “Bronx cheer”) was deemed a “vulgar expression” and listed under the Hays Code’s profanity section. Other gestures of mockery were flagged as well, such as using the middle finger.
Of course, the rules didn’t just forbid gestures. Many words were also considered profane, including “cripes,” “lousy,” and “damn.” In fact, one of the most famous lines from 1939’s Gone With the Wind — “Frankly, my dear, I don’t give a damn” — almost didn’t make it past the censors at the Production Code Administration (PCA), which ultimately allowed the line due to its literary roots.
Hollywood had strict orders when it came to religion: Ministers, priests, and other clergy could not be portrayed as comic relief or sinister villains. This rule didn’t just reflect the societal norms at the time; it was also due to the personal influence of Hays, a pillar of the Presbyterian church, as well as Joseph Breen, who oversaw the PCA’s enforcement of the Hays Code for decades and was himself a devout Catholic.
This isn’t to say that clergy couldn’t be portrayed as flawed or unconventional, but they couldn’t be made into a laughing stock. Popular films such as Going My Way (1944) and The Bells of St. Mary’s (1945) fit the mold perfectly: Bing Crosby’s beloved character Father Chuck O’Malley was portrayed as approachable and human but always dignified. The rule extended to language, too: The words “God,” “Lord,” “Jesus,” and “Christ” could be said only if they were used reverently.
Dancing is everywhere in classic Hollywood films, and the Hays Code even recognized it as “an art” and “a beautiful form of expressing human emotions.” But even this beautiful art form had its limits at the time. Any dancing that suggested sexual activity or that was “intended to excite the emotional reaction of an audience” could get a movie canned. More specifically, dancing that “involved excessive upper-body movement while the feet stayed still” was considered indecent. Suggestive hip-swaying? No way, especially not without the added distraction of moving feet.
High-energy, dazzling routines full of spins, lifts, and fancy footwork were a hallmark of the era, however. Film critic David Denby has even suggested that the magic between screen legends Fred Astaire and Ginger Rogers might never have been the same without the limitations of the Hays Code. According to Denby, what could have been rote “scenes of passion” were instead dance numbers that became their own form of courtship through dizzyingly dynamic sequences.
It’s one of the most natural (and dramatic) moments of human life, but Hollywood was not allowed to depict childbirth “in fact or in silhouette” under the Hays Code. Pregnancy itself was not specifically off-limits in the official list of rules, but Olga J. Martin, one of Breen’s former secretaries, confirmed in her book Hollywood’s Movie Commandments that it was certainly frowned upon and was almost always removed from films since it was considered “improper for public discussions.”
A woman in labor did make it onto the big screen, however, and in an iconic film at that. In 1939’s Gone With the Wind, Olivia de Havilland’s character Melanie goes into premature labor. The audience doesn’t see much — a pained face and a vague silhouette. But given the bluntness with which the subject matter was banned, it’s a wonder the PCA, albeit after a back-and-forth with the film’s producer David O. Selznick, allowed the scene to exist at all.
No Toilets
Old Hollywood was a time of many taboos, but toilets might be the most absurd. For decades, in the name of decorum, bathroom interiors, especially toilets, did not appear on screen. Broader “toilet gags” — which encompassed any form of potty humor as well as certain bodily functions — were also banned under the Hays Code’s profanity section.
It wasn’t until Alfred Hitchcock’s 1960 film Psycho that a toilet — and a flushing one at that — finally appeared on screen. It was just one of the film’s many snubs to the Hays Code, which at the time was being challenged by filmmakers and enforced less and less, a gradual process that began after Breen retired in 1954.
Throughout the 1950s and 1960s, the Hays Code steadily lost its grip: As television and foreign films outside its jurisdiction became more accessible, filmmakers kept pushing boundaries. By 1968, the code was officially replaced by the Motion Picture Association of America (MPAA) rating system that is still in use today.
Have you ever wondered if your last name might have royal connections? Whether, through the centuries, your surname has traveled through the noble bloodlines of ancient empires and medieval kingdoms? Today, with genealogy websites and online surname databases, it’s easier than ever to trace a name’s history. But while millions of people around the world might be carrying monikers that once graced the halls of power, they often do so without realizing any potentially regal heritage.
Sometimes, the connection might seem obvious — if your last name is Tudor, Windsor, Habsburg, or Plantagenet, it’s not unreasonable to consider a royal connection. But those aren’t the only surnames with links to the kings and queens of yore. Here are seven common last names in the U.S. that may suggest a royal — or at least noble — lineage.
The name York is of Anglo-Saxon origin, and it’s a relatively common last name in the United States. It comes from the historic county of Yorkshire in northern England, which in turn gave its name to the House of York, a royal dynasty that provided three kings of England in the 15th century. The house was a cadet branch (a junior line of a noble, royal, or otherwise powerful family) of the House of Plantagenet. The House of Lancaster was also a cadet branch of the Plantagenets, and the two houses fought against each other in the Wars of the Roses. Lancaster is a moderately common surname in the United States.
The Bruce family was an old Scottish family of Norman French descent (coming from Bruis or Brix in France), to which two kings of Scotland belonged — most notably Robert the Bruce, who freed Scotland from English rule in the 14th century. Today, the Bruce name remains prominent in Scotland and is also a relatively common surname in the U.S.
Some well-known American celebrities have the surname Howard, including Ron Howard, his daughter Bryce Dallas Howard, and the celebrated American soccer player Tim Howard. There’s a chance that they, like other Howards, might have a connection to the historic Howard family, a notable aristocratic English family whose head, the Duke of Norfolk, is the premier duke and hereditary earl marshal of England. It’s not quite royalty, but the surname’s extensive historical connection to the English monarchy is nothing to sneeze at.
In 2000, Russell was ranked the 93rd most popular surname in the United States. Today’s Russells might have a connection with one of the most influential aristocratic lineages in Britain. While not of royal blood, the historic Russell family has centuries of noble heritage with plenty of royal links. The family became prominent under the Tudor sovereigns, and the senior line has held the title of Duke of Bedford since 1694. Lord John Russell served as British prime minister twice in the mid-19th century — and his grandson was the philosopher Bertrand Russell.
After rising to prominence toward the end of the 1200s, Clan Campbell went on to become arguably the most successful clan in Scottish history. Through staunch support of Robert the Bruce and later Stewart monarchs, the Campbell family developed significant and long-standing royal connections. In 1871, John Campbell, Marquess of Lorne, married Princess Louise, the sixth child of Queen Victoria, further cementing the Campbell family’s ties with royalty. According to the 2010 census, Campbell is the 47th most common surname in the United States today.
Nguyen
In Vietnam, as much as 40% of the country’s population has the surname Nguyen. This incredibly high percentage is likely due to the Vietnamese tradition of showing loyalty to a leader by taking that leader’s family name. The Nguyen dynasty, which ruled Vietnam from 1802 to 1945, was the very last Vietnamese dynasty, which largely accounts for the surname’s extreme popularity in Vietnam. In the U.S., an estimated 2.3 million people identified as Vietnamese as of 2023, which helps explain why Nguyen is the 38th most-common surname in America. So, anyone with the last name Nguyen is linked, at least in name, with the Nguyen dynasty — although the chances of a direct bloodline are slim.
The surname Spencer has been in use since at least the mid-13th century. It comes from the word “dispenser,” meaning “steward” — which in itself can be a court title or position. Then there’s the Spencer family, which was elevated to the English nobility by James I in 1603. Since then, there have been many lords, ladies, and earls from the Spencer line, and the family frequently moved in royal circles. Georgiana Spencer, Duchess of Devonshire (1757–1806), had an informal but influential role advising the future King George IV, and also developed a friendship with the Queen of France, Marie Antoinette. Two of the most prominent figures of the 20th century also came from the Spencer family tree: Winston Churchill and Diana, Princess of Wales, the first Spencer to marry directly into the royal family.
For generations, American children learned to loop their letters into graceful, flowing words. Notes passed in class, signatures practiced on notebooks, the elegance of a handwritten letter — all of it once depended on cursive. Yet for much of the last two decades, cursive seemed destined to fade into history.
The decline was especially sharp after 2010, when cursive was omitted from the Common Core education standards. Typing skills were prioritized instead, and many schools quietly dropped cursive instruction altogether. An entire cohort of students grew up with little or no exposure to this form of penmanship. In 2022, former Harvard President Drew Gilpin Faust recalled that in one of her history seminars, two-thirds of the students admitted they couldn’t read or write cursive. So how did cursive, once a cornerstone of education, fall out of favor? And is there any chance it will return?
Why Cursive Faded
For much of the 19th and 20th centuries, and even earlier, penmanship was regarded as a marker of both education and refinement. Historically, handwriting instruction — including cursive — was considered a cornerstone of elementary education. It was seen not only as a practical skill but as a way to instill discipline, patience, and even character in young students.
The reasons for cursive’s decline are layered. Some educators argue that while handwriting in general aids child development, cursive is no more beneficial than writing in print. The digital shift also played a role: By the mid-2000s, schools were investing heavily in computer labs and keyboarding classes.
When the Common Core omitted cursive from its K-12 standards, many districts saw little reason to keep it. (Although school curriculums are set at the state and not federal level in the U.S., 41 states agreed on the Common Core standards.) Teachers prioritized developing digital skills and “teaching to the test” — meeting the demands of standardized testing.
What We Lose Without Cursive
Though cursive may be less practical than other forms of writing today, nostalgia lingers. Cursive is more than just pretty penmanship; it connects us to the past in a literal way. The Declaration of Independence, the Bill of Rights, soldiers’ letters home, and many historical diaries are all written in cursive. Without the ability to read it, future generations risk losing direct access to these documents.
In her widely read Atlantic article, Faust argued that this loss is not trivial. Relying on experts or software to “translate” old handwriting puts distance between people and their own history. Even in everyday life, not being able to read a grandparent’s letters or decipher a handwritten recipe represents a small but meaningful rupture in cultural continuity.
There’s also the simple tactile joy of cursive. Anyone who remembers the thrill of getting their first fountain pen, or the way a signature can feel like an extension of the self, understands its emotional weight. Cursive embodies the idea of writing as a personal expression, not just a utilitarian skill.
Fortunately, cursive is not gone just yet. In fact, it’s staging a comeback. As of 2025, 25 U.S. states require some form of cursive instruction in schools. California, which stopped teaching it in 2010, reinstated the requirement in January 2024 for grades one through six.
Supporters see this as more than a nostalgic gesture, arguing that neuroscience research supports the idea that writing in cursive activates brain pathways that help with learning and language development.
Even so, instruction is inconsistent. In many of the states that require cursive instruction, the requirement is neither enforced nor funded, meaning teachers must squeeze cursive into already crowded lesson plans.
The return of cursive instruction doesn’t mean children will stop learning to type, and digital literacy remains essential. But California’s move — and similar laws in other states — suggests that many educators and policymakers believe there is still room for the old alongside the new.
Perhaps cursive’s survival is not so much about utility as it is about connection. In a world where so much of our communication vanishes into screens, cursive offers something tangible — ink pressed into paper, loops and swirls carrying the imprint of a hand. It links students not just to their own learning but to the generations before them.
We may never return to a time when cursive was the default way of writing, but its resurgence suggests that we are not ready to let go of it entirely. Cursive endures as a bridge between past and present, history and memory, function and art. And as long as children are still being taught to form their letters in flowing script, the story of cursive is not over yet.
The Purple Heart is the oldest and arguably most famous military award in the U.S. Its origins stretch back to 1782 and the Badge of Military Merit — a heart made of purple cloth — which became the modern Purple Heart in 1932. The medal is awarded to U.S. military service members who have been wounded or killed as a result of enemy action. In total, more than 1.8 million Purple Heart medals have been presented.
However, it’s far from the only military decoration in the U.S. In fact, the U.S. military maintains an extensive system of honors and awards. There are more than 100 decorations, including medals, service ribbons, ribbon devices, and specific badges, recognizing various forms of service, valor, achievement, and dedication. These acknowledge everything from the highest acts of heroism in combat to meritorious service in peacetime operations.
Let’s take a look at some of the most prestigious and notable U.S. military decorations. Each one, in its own way, recognizes the exceptional service and sacrifice displayed by members of the armed forces — and in some cases, civilians.
Medal of Honor
The Medal of Honor is the highest medal for valor in combat that can be awarded to members of the U.S. armed forces. While the Purple Heart is awarded to U.S. military service members who have been wounded or killed as a result of enemy action, the Medal of Honor is for acts of extraordinary valor. Created in 1861, the medal recognizes the bravest of the brave. Since its inception, more than 3,500 service members have received the Medal of Honor. Only 19 have received it twice — five of those recipients were Marines with Army units who received both the Army and Navy versions of the medal, and 14 others received it for separate acts of supreme valor.
The recommendation process for receiving the Medal of Honor can be complex, taking more than 18 months as it passes up the chain of command. It’s ultimately approved or disapproved by the president of the United States, who personally awards the medal.
The Service Crosses represent the second-highest level of military decorations for valor, honoring extraordinary acts of heroism in action against an enemy. The crosses come in three variations: the Army’s Distinguished Service Cross, the Air Force Cross, and the Navy Cross. The Navy Cross is awarded to members of both the Navy and the Marine Corps, as well as members of the Coast Guard when operating under the authority of the Department of the Navy.
The Silver Star is the third-highest military combat decoration. It is awarded for displaying gallantry while engaged in action against an enemy of the United States. This decoration recognizes acts of valor that, while extraordinary, do not quite meet the criteria for the Medal of Honor or Service Crosses.
The Distinguished Service Medal is a family of awards, with separate versions for the different branches of the military, as well as a version for the Department of Defense (the Defense Distinguished Service Medal). Unlike the Army’s Distinguished Service Cross, which is awarded only for heroism in combat, the Distinguished Service Medal is also awarded for noncombat services — officially described as “exceptionally meritorious service to the government in a duty of great responsibility.”
Bronze Star
The Bronze Star is given for acts of heroism, acts of merit, or meritorious service in a combat zone. When awarded specifically for acts of valor, the medal is decorated with a “V” device. Civilians can receive the Bronze Star as well. For example, the war reporter Joseph Galloway received the award for heroically rescuing a badly wounded soldier while under fire during the Vietnam War. Ernest Hemingway was also awarded the Bronze Star for his service as a war correspondent during World War II. Within the hierarchy of military awards, the Purple Heart is ranked immediately behind the Bronze Star.
The Legion of Merit is awarded to members of the armed forces for “exceptionally outstanding conduct in the performance of meritorious service to the United States.” When it was established in 1942, the Legion of Merit was the first U.S. decoration created to honor both American service members and citizens of other nations. To this day, it is given to foreign officials of high rank or foreign military advisers for services to the United States.
Distinguished Flying Cross
The Distinguished Flying Cross, created in 1926, is America’s oldest military aviation award. The medal is awarded to any officer or enlisted person of the armed forces for heroism or extraordinary achievement while participating in aerial flight. The first Distinguished Flying Cross medal was presented by President Calvin Coolidge to Charles Lindbergh on June 11, 1927, in recognition of the aviator’s solo flight across the Atlantic Ocean.
The Air Medal was created during World War II, partly as an effort to boost morale among U.S. Army Air Forces crews during the dark days of the war. It was meant to recognize actions that were “meritorious” but did not reach the level of “heroism or extraordinary achievement” required to be awarded the Distinguished Flying Cross. The Air Medal’s award criteria have shifted considerably over the years, with the medals awarded for service during World War II often regarded as particularly significant due to the extreme dangers faced during the conflict.
Advertisement
Advertisement