Before cable was the norm, and long before streaming services were even an idea, network television ruled the airwaves. With fewer choices, viewers coalesced around a small number of shows in a way that’s practically unheard of in today’s fragmented media landscape.
That was especially true in the 1960s, when countercultural forces were butting up against decades of tradition — a phenomenon that could be seen in the stories shown on the small screen in living rooms across America. Let’s take a trip to the past with these eight TV shows that dominated the 1960s:
Arguably the decade’s defining television program, The Andy Griffith Show was a ratings juggernaut throughout its entire run. It never placed lower than seventh in the Nielsen ratings, and it aired its finale while still at No. 1 — a feat achieved only once before (I Love Lucy) and once since (Seinfeld).
The show was inherently nostalgic, with Griffith once stating, “Though we never said it, and though it was shot in the ’60s, it had a feeling of the ’30s … of a time gone by.” Even if you aren’t old enough to have grown up watching it, The Andy Griffith Show has been in syndication for so long that there’s a good chance its theme song was still a part of your childhood.
The Addams Family (1964-66) and The Munsters (1964-66)
The ’60s were a golden age for spooky family sitcoms. And while The Addams Family has had a longer afterlife (in the form of two cult-classic ’90s movies, the Netflix series Wednesday, and two recent animated films), The Munsters fared better in the ratings when the two shows were on at the same time.
As for why the two hits came to an end within a month of each other, Eddie Munster actor Butch Patrick points the finger at a certain caped crusader: “I think Batman was to blame,” he said in a 2019 interview. “Batman just came along and took our ratings away.” Both shows lasted two seasons and aired nearly the same number of episodes (64 for The Addams Family, an even 70 for The Munsters), making them perfect for a back-to-back binge session.
Bewitched (1964-72) and I Dream of Jeannie (1965-70)
Why limit yourself to one witchy show when you can have two? Whether you preferred Samantha (Elizabeth Montgomery) or Jeannie (Barbara Eden), you were spoiled for choice when it came to supernatural sitcoms in the ’60s. Bewitched was the country’s second-highest-rated show during its debut season, was named one of TV Guide’s Top 50 Shows, and inspired a movie starring Nicole Kidman — but many viewers dreamed of Jeannie just the same.
Though it began in the 1950s and ended in the ’70s, this long-running Western was at its peak in the ’60s. It hit No. 1 by 1964, stayed there for three years, and didn’t drop out of the top 10 until 1971 — all while helping redefine the genre to a generation of viewers.
The series followed the Cartwright family as they faced one moral dilemma after another during and just after the Civil War, with the action taking place near Lake Tahoe in Virginia City, Nevada. And as was the case for a lot of shows on this list, its earworm of a theme song became a hit unto itself. Airing 431 episodes over 14 seasons, Bonanza is the second-longest-running Western in TV history, behind only Gunsmoke.
What was meant to be a three-hour tour turned into a three-season classic. One of the decade’s most iconic series, Gilligan’s Island still airs reruns today and continues to inspire debate in the form of “Ginger or Mary Ann?” More than anything, though, it made people laugh.
The show drew solid ratings during its initial run, climbing as high as No. 3 in its first season, and has only grown in popularity since its cancellation. It spawned a number of made-for-TV movie sequels and animated spinoffs, the most bizarre of which has to be 1981’s The Harlem Globetrotters on Gilligan’s Island.
A dimension not only of sight and sound but of mind, The Twilight Zone inspired too many other TV series and movies to list. Episodes such as “Time Enough at Last” and “It’s a Good Life” are no less haunting today than they were 60 years ago, when Rod Serling introduced viewers to an enduring sci-fi classic that has since been revived three times (in 1985, 2002, and 2019) — never to nearly the same effect — and that inspired a 1983 movie and an appropriately terrifying theme park ride.
Though it wasn’t a runaway hit as far as ratings were concerned, The Twilight Zone proved so influential that it’s impossible to tell the story of 1960s television without it. Frequently cited among the greatest TV shows of all time, it has aged as well as, if not better than, any other series of its era.
For more than two millennia, readers have enjoyed the brief, morally pointed tales known as Aesop’s fables. For many of us, these stories were among the first we heard as children, alongside Mother Goose rhymes and the fairy tales of the Brothers Grimm. Long before we knew anything about ancient Greece, we learned that a steady pace could win the race, that dishonesty would cost us others’ trust, and that pride often comes before a fall.
Stories such as “The Tortoise and the Hare,” “The Boy Who Cried Wolf,” and “The Lion and the Mouse” have circulated in classrooms, children’s books, and popular culture. Their appeal lies in their simple, short narratives, often featuring animals with human traits, that deliver clear, memorable lessons. And they tend to stay with us — many of us can still recall a favorite fable and the moral it carried.
Yet while the fables themselves are widely known, the figure to whom they are attributed — Aesop — remains uncertain. Was Aesop a real person? And if so, who was this mysterious fabulist?
The First Mentions of Aesop
Ancient sources place Aesop in the Greek world of the late seventh to mid-sixth centuries BCE, often describing him as an enslaved storyteller. But the evidence is limited, indirect, and sometimes contradictory — a mix of early references, later embellishments, and literary tradition, making Aesop one of antiquity’s more elusive figures.
The earliest surviving mentions of Aesop appear in Greek texts written more than a century after he supposedly lived. The fifth-century BCE historian Herodotus refers to Aesop as an enslaved person on the island of Samos and notes that he was killed at Delphi. The account is brief and lacks detail, but it is widely treated as the earliest historical reference.
In the fourth century BCE, Aristotle mentions Aesop in Rhetoric, portraying him as a storyteller whose fables could be used as persuasive examples in political contexts, and cites one such fable attributed to Aesop, involving a fox and a hedgehog. This suggests that by Aristotle’s time, Aesop was already linked with a recognizable body of moral storytelling used for public argument and instruction.
Later ancient writers expand on these details, portraying Aesop as an enslaved man who gained freedom through intelligence and wit and who used fables to comment indirectly on social and political life. However, these accounts vary in detail and reliability, and none provides a verifiable biography.
By the early centuries CE, Aesop’s life had become the subject of extended narrative. The most influential example, the Greek text Life of Aesop, likely composed between the first and second centuries CE, presents a dramatic and often comic biography rather than a strictly historical account.
In this version, Aesop is depicted as physically unattractive but exceptionally clever — an enslaved person who rises to prominence through wit and storytelling. The narrative places him in the company of figures such as Solon and Croesus and ends with his execution at Delphi after a conflict with local residents.
Modern scholars generally treat the Life of Aesop as fictional or semifictional, reflecting popular storytelling traditions of the time. While it preserves some elements found in earlier sources — such as Aesop’s status as an enslaved storyteller and his death at Delphi — most of its details are considered later inventions. In this way, Aesop himself became a kind of fable character: a clever underdog, using his words and wit to challenge those in power.
These Famous Fables Weren’t Written Down at First
The stories attributed to Aesop were not recorded during his lifetime. In classical Greece, short fables circulated orally and were commonly used as rhetorical tools in speeches, debates, and instruction. Their brevity and clarity made them easy to remember, repeat, and adapt for different situations. Similar storytelling traditions also existed beyond Greece, particularly in India and the Middle East, where collections of moral tales developed independently and later intersected with Aesop’s stories through translation and cultural exchange.
A written collection of fables attributed to Aesop was probably compiled in the fourth century BCE by Demetrius of Phalerum, though this version has not survived. Over time, other writers helped preserve and reshape the material. The Roman writer Phaedrus, in the first century CE, turned many of the fables into Latin verse, while the Greek poet Babrius produced similar poetic versions.
Because these stories were transmitted across generations and cultures, they evolved. New fables were added, others were adapted, and many were likely never connected to a single original author. As a result, “Aesop” gradually became less a historical voice and more a name attached to a broader storytelling tradition.
Whether or not Aesop can be identified as a single historical individual, his cultural role is clear. By the classical period, he was already regarded as the archetypal teller of fables — a reputation that only grew in later centuries.
In the medieval and early modern periods, collections of Aesopic fables were translated, adapted, and widely printed across Europe. With the spread of the printing press, they became standard tools of moral and literary education, shaping how generations of readers encountered storytelling and ethical lessons.
So was Aesop real? Modern scholarship takes a cautious view. He may have been a real storyteller in the Greek world of the sixth century BCE, but the details of his life cannot be firmly established. Over time, his name became attached to a growing and evolving body of stories. Like Homer, Aesop blurs the line between history and legend — a figure whose identity has been absorbed into the tradition he represents.
Few social rituals are as widespread or instinctive as clinking glasses after a toast. At weddings, dinners, and bars and pubs around the world, we reach across the table, touch glasses with a satisfying clink and a quick “cheers,” and take a sip. But where does this familiar custom actually come from? Let’s take a look at its origins and try to sort myth from reality.
The most common origin story goes something like this: In medieval times, clinking cups or glasses hard enough would cause liquid to slosh and spill from one vessel into another, so if your drinking companion had poisoned your cup, they’d be consuming poison too. As such, the clinking was a way to show that no drinks had been spiked, whether with belladonna, hemlock, arsenic, mercury, or any other common toxin — poison being a popular way of eliminating one’s rivals in the Middle Ages, especially among the nobility.
Despite being widely repeated, this theory doesn’t make much sense if you think about it — and, indeed, it’s almost certainly not true. Both Snopes and Ripley’s have debunked the theory, concluding that all versions of this explanation are false. The logistics alone are problematic. Even if a cup or glass were filled to the brim — which in many cases it would not be — most of the clinking spillage would land on the floor, not in your companion’s cup. And if some drops of ale- or wine-diluted poison did enter, would it be enough to cause much harm? Perhaps not.
What’s more, as Snopes points out, the practice of toasting to someone’s health dates back to the ancient world at least — well before individual glasses were common. In those times, everyone typically drank from shared vessels, rather than carrying around their own glass or cup. Producing your own private drinking vessel at a communal table would likely raise suspicion, rather than guard against it.
One alternative theory, of no precise origin, suggests that clinking glasses was meant to frighten away evil spirits. In medieval Europe, there was a superstitious belief that evil spirits lurked in alcohol or hovered around celebrations. The high-pitched sound of touching glasses, according to the theory, would chase them away. It’s a nice idea, and there may be some truth to it, but there’s scant evidence that it explains the origin of the toast ritual, in whole or in part.
Another theory suggests the practice was a way to complete the sensory experience of drinking. Sipping wine and toasting already involved sight, touch, smell, and taste — and the clink added sound, the last of the five senses. Historian Margaret Visser argues that clinking grew in popularity during the 17th century, when Venetian glassmakers perfected the art of clear, resonant crystal. For the first time, drinking vessels produced a beautiful ringing tone when struck together, and that sensory pleasure became part of the ritual.
The most accurate answer to why we clink glasses is also the least satisfying: Nobody really knows for sure. Toasting to someone’s health is an ancient ritual, rooted in Greek and Roman drinking culture and quite likely in even older traditions — and these ancient civilizations may well have knocked their mugs and cups together in rowdy celebration or solemn tribute.
The more delicate clink likely became fashionable in the 17th century, when new glassware made the sound more appealing — and possibly because it gave people a way to maintain the communal spirit of shared drinking in an age of individual cups. The one thing that seems almost certain is that the poison theory holds no water, let alone any wine. Despite being the most widespread theory — repeated at dinner tables and now online — it is almost certainly a myth.
If you grew up in the 1950s, ’60s, or ’70s, you might remember starting the day with a bowl of frosted cereal and a glass of orange juice, followed by a sandwich on soft white bread for lunch. At the time, packaging and advertisements emphasized that these foods were nutritionally balanced, vitamin-enriched, and backed by modern science.
Many foods promoted as “healthy” in decades past rose to prominence during specific cultural moments: the post-World War II convenience boom, the rise of industrial food processing, and the low-fat movement of the 1970s and ’80s. In each case, marketing often outpaced scientific understanding. Looking back at these former “health foods” reveals how dramatically nutrition advice and public perception can shift over time — and how easily the label of “healthy” can be shaped by trends, rather than evidence.
Margarine surged in popularity beginning in the 1940s and especially through the 1960s and ’70s, when concerns about heart disease began to enter public consciousness. As early as the ’50s, public health messaging increasingly warned against saturated fats, and margarine — made from vegetable oils — was positioned as the modern, healthier alternative to butter.
Advertising leaned heavily on nutrition science. Packaging and print ads used phrases such as “heart-healthy,” “cholesterol-conscious,” and “made from pure vegetable oils.” Some campaigns featured endorsements from doctors or referenced emerging research about cholesterol, even when that research was still developing or incomplete. Margarine was presented not just as a butter substitute but as a proactive choice for protecting one’s heart.
What consumers didn’t realize was that many early margarines were produced through partial hydrogenation, creating trans fats. These fats were later found to raise LDL (“bad”) cholesterol and lower HDL (“good”) cholesterol — essentially the opposite of what the marketing promised. The widespread use of margarine as a health food was based on a simplified understanding of fat and heart disease, combined with persuasive messaging that emphasized innovation over long-term evidence.
From the 1940s through the 1990s, fruit juice — especially orange juice — was marketed as an essential part of a healthy daily routine. Campaigns promoted it as a concentrated source of vitamins, particularly vitamin C.
Juice companies frequently aligned themselves with vitality and performance, and advertisements featured active families, athletes, and growing children, implying that juice was a foundational health food for energy and development. Some messaging even suggested juice could help prevent illness, leaning into its vitamin content as a kind of nutritional safeguard.
The reasoning seemed sound at the time. Fruit is healthy, so fruit juice must be as well — perhaps even more so, because it was processed and standardized. What was overlooked, however, was the role of fiber. Juicing removes most of the fiber found in whole fruit, leaving behind concentrated natural sugars that are absorbed quickly by the body. The result is a drink that behaves metabolically more like a sugary beverage than a whole food.
Powdered drink mixes such as Tang rose to prominence in the 1960s, marketed not just as convenient but as modern, science-backed nutrition. Tang, in particular, benefited from its association with the U.S. space program. This connection to astronauts gave Tang a powerful health connotation. Marketing emphasized added vitamins — especially vitamin C — and framed the drink as a smart way for families to support energy, growth, and overall wellness. The messaging leaned heavily on innovation and fortification, suggesting that a technologically enhanced product could be just as good as, or even better than, whole foods.
Other powdered mixes of the era, including instant breakfast drinks such as Carnation Instant Breakfast, were even more explicitly positioned as “healthy.” Marketed as convenient meal substitutes, they were promoted to busy families as a way to ensure children received essential nutrients — even if they skipped a traditional breakfast.
In reality, many of these powdered drinks were largely composed of sugar or refined carbohydrates, with added vitamins used to bolster their nutritional image. While they did provide certain nutrients, their reputation as healthy drink options was driven more by marketing than by a balanced nutritional profile.
Breakfast cereals were among the earliest foods to be marketed as health foods. In the late 19th and early 20th centuries, companies such as Kellogg promoted them as part of a clean, balanced diet, often tied to digestive health. These early cereals were relatively plain and minimally sweetened, reflecting broader health reform movements of the time.
In the 1940s through the 1970s, cereal marketing shifted as products became more refined and significantly sweeter. Even so, they were promoted as nutritious, with packaging emphasizing added iron, B vitamins, and other nutrients. Advertising targeted both parents and children — highlighting “essential vitamins and minerals” and “fuel” for the day — while mascots and sports tie-ins suggested strength and performance.
The idea was that fortification could make up for processing. As a result, even sugar-laden cereals were framed as healthy, despite their high sugar content and reliance on refined grains, which can lead to quick spikes in blood sugar and short-lived energy.
Granola emerged as a health food icon in the 1960s and ’70s, closely tied to the natural foods movement and counterculture. It was marketed as a return to simpler, more “natural” eating — often associated with outdoor lifestyles, self-sufficiency, and holistic wellness.
Unlike many other foods promoted as healthy, granola’s health reputation didn’t come from industrial marketing alone. It was also promoted through word of mouth, co-ops, and early health food stores. Packaging and messaging emphasized whole ingredients such as oats, nuts, and dried fruit, along with ideas of purity, energy, and fiber.
However, as granola entered the mainstream, commercial versions began to include significant amounts of added sugar, oils, and sweeteners to improve taste and texture. Despite its wholesome image, many versions became calorie-dense and sugar-heavy, diverging from the simpler recipes that originally defined the snack.
Low-fat yogurt became a major “health food” during the 1970s and ’80s, when dietary fat — especially saturated fat — was widely viewed as the primary cause of weight gain and heart disease. Yogurt was already associated with calcium and digestive health, but the low-fat versions were marketed as an ideal choice for dieting and heart-conscious consumers.
Yogurt advertising emphasized weight management, often featuring slim figures, fitness themes, and language like “guilt-free” or “light.” The issue arose when fat was removed and replaced with added sugars to maintain flavor. Many low-fat yogurts became highly sweetened and, with the addition of fruit and other flavorings, shifted closer to dessert than to a balanced health food. In the end, the emphasis on cutting fat overshadowed the importance of overall nutritional balance.
From the 1930s to the 1960s, white bread was widely promoted as a triumph of modern food science. Industrial baking processes produced soft, uniform loaves that were seen as cleaner, safer, and more advanced than traditional whole-grain breads.
The marketing campaign for Wonder Bread emphasized nutrition during the “wonder years” of childhood, highlighting the bread’s protein, minerals, carbohydrates, and vitamins. Advertisements suggested that white bread could help kids grow strong and healthy, while also promoting its soft texture and easy digestibility as ideal for everyone in the family.
At the time, refinement was associated with progress. The removal of bran and germ was seen not as a loss, but as an improvement — making bread more appealing and easier to digest. Only later did it become clear that this process also removed fiber and other beneficial nutrients.
Falling between the heyday of the Western Roman Empire and the onset of the Renaissance, the Middle Ages have an unflattering reputation as something of a backward epoch of human civilization. Wars raged across Europe, serfs toiled in backbreaking service to feudal lords, and diseases wiped out villages with little hope of preventing the next outbreak.
While the negative connotations may not be entirely fair, few would dispute that medieval citizens lived in more primitive conditions than their modern counterparts, and that the day-to-day necessities for survival were markedly different. As such, the era produced certain professions that filled important needs of the time but seem quite unusual in hindsight. Here are six of the strangest.
Although the widespread belief that rats were the main carriers of the bubonic plague has largely been debunked, these critters nevertheless did spread disease in medieval urban centers and otherwise proved legitimate pests by feasting on food supplies. Thus arose the era’s version of the exterminator: the rat catcher.
Known to travel from town to town with a few of their rodent victims suspended from a stick, skilled rat catchers deployed methods that included setting traps in infested areas and unleashing dogs or ferrets on their quick-footed targets. It’s worth noting that while rat catchers were in demand in the Middle Ages, the profession reached its pinnacle in the crowded streets of Victorian London, with practitioners such as Jack Black achieving renown for their prowess in the field.
Regularly used by the Romans before fading from the public record, the treadwheel crane enjoyed a revival during the Middle Ages as a means for constructing the magnificent cathedrals and palaces that still mark the European landscape today. This device was operated by one or two people walking in what was essentially a giant hamster wheel, with a rope that wound around the wheel axle used to lift stone blocks weighing thousands of pounds.
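For a rough sense of how a person’s footsteps could hoist stone weighing thousands of pounds, consider the mechanical advantage of a simple wheel and axle, which is just the ratio of the two radii. The dimensions below are illustrative assumptions for a typical medieval treadwheel, not measurements of any surviving crane:

$$ \text{MA} = \frac{r_{\text{wheel}}}{r_{\text{axle}}} \approx \frac{2\ \text{m}}{0.2\ \text{m}} = 10 $$

On those figures, a 150-pound walker leaning their full weight into the rim could, friction aside, hold roughly 1,500 pounds on the rope, and pairing the wheel with pulley blocks multiplied the lift further.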
Contrary to what may seem intuitive, the job of the treadwheel worker was not as physically taxing as, say, that of the hod carrier who bore heavy loads as they scaled the scaffolding of a developing structure. Furthermore, this particular task provided a source of employment for blind people, who wouldn't be spooked by working at the great heights these cranes sometimes reached. Nevertheless, the heavy wheel could be difficult to control, and serious injury could result if it malfunctioned or the operator had the misfortune of losing their balance.
Pardoner
Leveraging the concept of indulgences, in which a practitioner of Catholicism receives a lesser punishment for sins by demonstrating acts of atonement or providing a monetary donation, the medieval Catholic Church began assigning pardoners to essentially serve as door-to-door salesmen of these spiritual boosters. While this arrangement suited the needs of both parish and parishioners, it also enabled a system in which corrupt officials and frauds padded their pockets by doling out as many indulgences as possible.
The infamy of this profession prompted Geoffrey Chaucer to skewer its contradictions in “The Pardoner’s Tale,” one of The Canterbury Tales, and the indulgence trade later became a chief target of Martin Luther’s Ninety-five Theses. It would be another half-century, however, before Pope Pius V eliminated the job market for pardoners by abolishing the sale of indulgences for good in 1567.
While the medieval period didn't offer its inhabitants anything close to the one-stop shopping options of the internet age, it at least provided a convenient solution for those in dire need of simultaneously dealing with a toothache and getting a haircut. The emergence of the barber-surgeon came about after clerics, who handled an array of surgical and medical procedures by the early Middle Ages, were forbidden from drawing blood by the Fourth Lateran Council of 1215. Barbers were trained to pick up the slack due to their experiences with razors, and many became adept at bloodletting, pulling teeth, and even amputation. The proliferation of these practitioners led to the formation of England's barber-surgeon trade guild in 1540, although the special skills required by the medical side of the profession resulted in the separation of these titles by the early 1800s.
“Fulling” is the process by which cloth fibers are condensed and strengthened for transformation into wearable garments, a practice that dates back to at least ancient Egypt and remains in use today. Although it generally involves the folding and mashing of cloth in a soapy liquid, the medieval means of undertaking this activity were somewhat less benign. A medieval fuller was required to stomp around on a cloth pile submerged in a vat of stale urine, which was deemed the rinsing liquid of choice because its high levels of ammonium salts were ideal for removing grease and softening the fabric. Since the process could take several hours, the fuller spent long stretches huffing, puffing, and breathing in the fumes from the bodily waste splashing about below.
A medieval court could be entertained by an array of jesters, jugglers, and acrobats, but nestled under that general umbrella of entertainers was a subset of performers who earned a living by breaking wind. Such was the case with an English gentleman known as Roland the Farter, who, according to the 13th-century Liber Feodorum (Book of Fees), regularly performed a dance at King Henry II's annual Christmas pageant that ended with the delivery of “one jump, one whistle, and one fart.”
Roland was rewarded handsomely for his proclivities, receiving a manor in Suffolk and up to 100 acres of land. But lest you think this was just one individual with a singular skill set, there are also records of a contemporary collection of flatulists in Ireland, known as braigetori, who enjoyed similar acclaim. Indeed, this was a profession that endured even as the Middle Ages gave way to the high-minded civilizations that followed. Joseph Pujol became a celebrity in France in the late 19th and early 20th centuries for his riotous rear-end performances under the moniker of Le Pétomane, while Paul Oldfield, aka Mr. Methane, recently farted his way into Guinness World Records for his lengthy career in the field.
War is full of logistical challenges, one of the major concerns — in conflicts both ancient and modern — being how to feed the armies doing the fighting. Whether it’s Roman legionaries, British Redcoats, or modern infantry, soldiers have always needed a reliable supply of food to maintain both their energy levels and morale. As the old saying goes, “An army marches on its stomach.”
Military rations have existed since at least the time of ancient Rome, when soldiers received 2 pounds of bread a day, sometimes with meat, olive oil, and wine. Today, U.S. troops are provided with MREs — “Meals, Ready-to-Eat” — which are carefully tested, formulated, and packaged rations designed to sustain soldiers during training and military operations. These MREs have a shelf life of three years and can survive being dropped from an aircraft. But not every soldier is a fan of these pouches of food, which they sometimes refer to as “Meals, Rarely Edible” or “Meals Rejected by the Enemy.”
While modern MREs don’t often come with glowing reviews, some foods created specifically for soldiers — or adopted and popularized by the military — have become beloved by the civilian population. Here are six foods that managed to find their way from the ration pack to supermarket shelves across America.
In the 1930s, Forrest Mars Sr. (the son of Mars founder Franklin Clarence Mars) was traveling in Europe. According to confectionery legend, it was during this time that he observed soldiers eating chocolate pellets surrounded by a sugar shell during the Spanish Civil War. Inspired, he took the concept back to the United States where, in 1941, M&Ms were born.
With World War II already underway, M&Ms were initially made specifically for the U.S. military, providing an ideal way for soldiers to carry energy-rich chocolate in tropical climates without it melting. In 1947, the candy was made available to the public, and its popularity has never waned since.
Cheesy puff snacks and the chaos of warfare may seem diametrically opposed, but it’s the military we must thank for the former’s existence — including the top-selling brand, Cheetos. Their origin can be traced back to the Natick Soldier Systems Center, a U.S. Army research complex responsible for the development of the U.S. military’s food, clothing, and shelters.
One of the items to come out of the research complex was processed and powdered cheese, which was created for military use in World War II. According to Anastacia Marx de Salcedo, author of Combat-Ready Kitchen: How the U.S. Military Shapes the Way You Eat, by the end of the war “a whole little industry had sprung up to support this dehydrated cheese.” And one of the very first products to use the cheesy powder was the now-ubiquitous Cheetos.
Instant coffee isn’t a modern invention — it was created and patented as a “coffee compound” by an Englishman named John Dring in 1771. It later appeared in cake form during the American Civil War, before being refined by Japanese chemist Satori Kato, whose soluble coffee was introduced to the public at the Pan-American Exposition of 1901.
Instant coffee as we know it today became commonplace during World War I, providing a quick and easy taste of home for soldiers on the front lines. The U.S. military began buying as much as 37,000 pounds of coffee powder each day, at which point the instant coffee industry skyrocketed. Then, in 1938, Nestlé introduced its new product, Nescafé. With the outbreak of World War II, Nescafé was included in the emergency rations of every U.S. soldier, further cementing the popularity of the instant brew.
In 1942, the U.S. Army offered a lucrative contract to any company that could figure out a process for producing palatable frozen orange juice — deemed necessary for keeping vitamin C levels at acceptable levels among the troops. It wasn’t until 1945 that a viable process was created, at which point the Florida Foods Corporation won the contract to produce 500,000 pounds of orange juice concentrate.
However, the war ended before the product was shipped. Finding itself sitting on a giant mound of oranges, Florida Foods changed its name to the Vacuum Foods Corporation a year later and began selling the nation’s first concentrated frozen orange juice, which it called Minute Maid (in reference to the product’s quick and easy preparation). Despite initially slow sales, Minute Maid eventually took off — with the help of a catchy jingle from Bing Crosby — paving the way for similar frozen products.
The first successful frozen prepackaged meal — what was later dubbed the “TV dinner” — hit supermarket shelves in 1953. Produced by Swanson, it was a Thanksgiving meal of turkey, cornbread stuffing, and peas. But while Swanson took TV dinners to the masses, the concept was originally developed for the military.
In 1945, Maxson Food Systems manufactured the first complete frozen meal — known as “Strato-Plates” — specifically to be reheated for troops and other passengers on long military flights. A typical meal was a basic three-part combo of meat, vegetable, and potato, each housed in its own compartment on a plastic plate. Maxson’s frozen meals never made it to the civilian retail market, but they were the precursors of the TV dinner, despite no TVs being involved.
Though Spam wasn’t invented specifically for the military, war rations certainly helped popularize it. When Spam was created in 1937 by Hormel Foods Corporation, it was seen as a way to increase sales of unprofitable pork shoulder, which was then considered an undesirable cut. Initially, sales of Spam were poor, partly because people had doubts about canned meat being safe for consumption. Then World War II began, and the U.S. military saw Spam as a perfect addition to its soldiers’ rations, being affordable, filling, easily portable, and shelf stable.
Spam accompanied U.S. troops all over the world, with 100 million cans shipped out to the Pacific theater alone. It became something of a culinary icon during the war, and went on to become an enduring supermarket staple — today, more than 9 billion cans of Spam have been sold.
Recorded human history is just a tiny blip on the temporal radar. The Pyramids of Giza were built around 2500 BCE, but 4,500 years ago seems like yesterday when you compare it to the 300,000 years our species, Homo sapiens, has been around. Since our earliest ancestors didn’t leave as much behind, we know very little about how they lived.
“Human,” however, is a genus, not a species, and the history of humanity includes much more than sapiens. With new genetic research, we’re learning more about our ancestry all the time — but much of what we learn just raises even more questions. How closely related are we to Neanderthals? When did humans start making art? What were our ancestors’ day-to-day lives like? Pique your curiosity with these seven facts about the very earliest humans on Earth.
Scientists first theorized in the 19th century that humans originated in Africa, and modern genetic science has largely confirmed that to be the case, though researchers are still working to determine the exact geography. Scientists also disagree on when and how humans dispersed. One early theory suggested that our Homo sapiens ancestors started to leave Africa around 60,000 years ago. Most non-African humans today can trace their origins back to a large exodus around that time, but smaller migrations may have started much earlier. Fossil evidence shows that groups of foragers arrived in Asia around 120,000 years ago, and brought skills such as deep-sea fishing and cave art with them. Other fossil discoveries, including a 210,000-year-old skull found in Greece, suggest some humans left Africa even earlier.
Early Human Species Mated With One Another Frequently
Homo sapiens, Neanderthals, and Denisovans — the last of which is an early human species discovered in 2008 — share a common ancestor, Homo heidelbergensis. This species also probably came from Africa, but had reached modern-day Israel more than 700,000 years ago. It has been theorized that when one group of Homo heidelbergensis left Africa more than 400,000 years ago, some moved west to Europe and evolved into Neanderthals; others moved east to Asia and became Denisovans. Those who stayed in Africa evolved into Homo sapiens.
When Homo sapiens eventually left Africa, they encountered Neanderthals and Denisovans and started reproducing, and Denisovans and Neanderthals mated with one another, too. Those two species are now extinct, but they live on in modern human DNA, which contains a significant amount of both — most non-Africans are between 1% and 4% Neanderthal, and many people with Southeast Asian and Pacific Island heritage are up to 5% Denisovan. Recent research shows that people of African descent have some Neanderthal DNA, too, likely a result of back-and-forth migration.
Paleolithic People Nurtured Their Pets
Scientists are still puzzled about when and how humans first befriended dogs, but we know it was a very long time ago. One of the earliest pet puppers lived 14,000 years ago in modern-day Germany — and it wasn’t even a working dog, demonstrating a close bond based more on affection than function.
In 2018, researchers analyzed a Paleolithic grave that contained two humans and two dogs. The younger of the two dogs was around 27 weeks old (a little over 6 months) when it died, and it showed evidence of an illness that suggests it wouldn’t have even lived that long without human intervention. Puppies are vulnerable to a serious illness called canine distemper, which is almost always fatal if untreated. The dog in the grave showed a serious case of distemper that developed when it was just 3 or 4 months old, yet it went on to live another two to three months. The discovery implies a nurturing, loving relationship with dogs that stretches back thousands of years.
Just as modern humans occasionally like a change of scenery, Mesolithic people in Scotland ventured out on short trips — though more likely for hunting purposes than sightseeing and leisure. A campsite uncovered on the Mar Lodge Estate historic site in the Scottish Highlands seemed designed for a single short-term stay. Archaeologists found evidence of a campfire and limited tool-making, and believe a very small group of people stayed there just once, for only a day or two. A similar site was discovered in Nairn, Scotland, about 45 miles away, in 2011. Another excavation at the Mar Lodge Estate site found a longer-term Mesolithic settlement, but archaeologists don’t think it was used year-round because of harsh winter conditions.
It’s easy to imagine early hominids spreading out or just grabbing the nearest cave, but at least one group of humans from the Neolithic era built pretty close quarters around 9,000 years ago. In Çatalhöyük, located in central Turkey, early people built mud-brick houses up against each other and on top of each other; researchers believe the site could have as many as 16 layers of homes. Each home had a hearth, oven, and platforms that were likely used for sleeping. The settlement didn’t have streets, so people traveled across the rooftops, which is also where they entered their homes. Residents kept their homes tidy, and dumped their trash in small open areas between buildings. Cemeteries were also part of these vertical cities; people buried their dead beneath the floor.
Humans Have Been Making Art for More Than 40,000 Years
Southeast Asia is home to some incredibly ancient cave art, including the oldest known figurative drawings — art that depicts a specific form instead of just being abstract. One cave painting in Borneo depicting a four-legged animal dates back at least 40,000 years, but potentially as many as 52,000 years. Scientists suspect it’s a banteng, a species of cattle that still exists in both captivity and the wild. Another painting, from the Indonesian island of Sulawesi, at least 44,000 years old, shows a buffalo being hunted by half-human, half-animal creatures. A cave painting of a Sulawesi warty pig is at least 45,500 years old. Abstract art, meanwhile, goes back even further: Cave art in Spain, potentially made by Neanderthals, is at least 64,000 years old.
In the last 150 years, scientists have identified some very early human species. Homo habilis, for instance, discovered in 1960, lived between 2.4 million and 1.4 million years ago. Homo rudolfensis, discovered in 1986, lived between 1.9 million and 1.8 million years ago. And Homo erectus, discovered in 1890, lived between 1.89 million and 110,000 years ago. In more recent years, genetic research has turned up other previously unknown species that we are still learning about. Scientists identified the Denisovans as a separate species in 2008 based on genetic sequencing of fossils. Other species of ancient humans have been found only in the genome. One human species started breeding with Homo sapiens around 43,000 years ago. Some are much, much older: When Neanderthals and Denisovans first reached Europe and Asia hundreds of thousands of years ago, so-called “super archaics” were already there. Neanderthals and Denisovans mated with them and eventually replaced them, just as modern humans would eventually do to Neanderthals and Denisovans.
Furniture isn’t just about form and function — it’s a reflection of how we live. As technology evolves and lifestyles shift, pieces that were once considered household essentials can quietly fade into obscurity. From furniture designed around now-outdated technology to those that catered to social customs of another era, many former decor staples have all but disappeared from modern homes.
If you’ve ever tried to fit a heavy television into a hulking TV cabinet or spent an hour at the “gossip bench” catching up with an old friend, you’re not alone. These once-popular furnishings tell the story of how we used to live. A few may linger in basements, guest rooms, or antique shops, quietly reminding us of how much things have changed. Here are seven kinds of furniture that were once common but are now rarely seen. How many have you owned?
Popular in the 1970s and ’80s, waterbeds promised a futuristic sleep experience with their wavelike motion and adjustable temperature. Though patented in California in the late 1960s, the concept had already rippled through science fiction — sci-fi author Robert Heinlein described similar beds in his novels years earlier, imagining them as ideal for both comfort and hygiene.
Once marketed as both cutting-edge and sexy, waterbeds quickly gained popularity, peaking in 1987 when they accounted for nearly 20% of all mattress sales in the U.S. One memorable slogan captured the era’s enthusiasm: “Two things are better on a waterbed. One of them is sleep.” But the charm faded as waterbeds’ drawbacks mounted: heavy frames, tricky maintenance, awkward moves, and a constant risk of leaks. Though rarely seen today, waterbeds remain a quirky relic of a bygone era — a ripple in sleep history that once made big waves.
Once a symbol of organized sophistication, the secretary desk combined elegance and style with utility. Featuring a hinged, fold-down writing surface and a nest of drawers, cubbies, and sometimes even hidden compartments, it served as the ideal command center for managing household correspondence, finances, and daily affairs.
The name is derived from the French secrétaire à abattant, a drop-front desk. This form rose to prominence in 18th-century France and later found popularity in U.S. homes in the 19th and early 20th centuries, often passed down as family heirlooms. But as handwritten letters and billing statements gave way to email and online banking, the need for a dedicated writing nook waned. Today, the secretary desk is more likely to be admired in antique shops or repurposed as an accent piece — a charming nod to a time when “inbox empty” involved an actual wooden drawer.
Telephone Tables
The telephone table, affectionately known as a “gossip bench,” was the most popular spot in the house in mid-20th-century America. This compact piece of furniture was designed for one very specific mission: managing the family’s single landline telephone. Evolving out of the telephone stand of the 1890s, not long after the telephone’s invention in 1876, it combined a small seat with a tabletop for notepads and pens, and often a built-in shelf or cubby for the bulky phone book. It was the designated spot where people went to share news, plan social events, or simply chat for hours.
As cordless phones freed us from the seat and mobile phones untethered calls from our homes entirely, the telephone table was rendered obsolete. Today, it stands as a charming reminder of a time when phone calls were events — and if you weren’t at home to get the call, you might miss all the good gossip.
For decades, the dressing table, also known as a vanity table or vanity, was more than just a piece of furniture — it was a dedicated beauty retreat. With its attached mirror, small drawers, and delicate design, the vanity offered women a personal space to style their hair, apply makeup, and prepare for the day or a glamorous night out. The concept dates back centuries, with origins in late-17th-century Europe, where aristocratic dressing tables were elaborate symbols of refinement.
During the art deco movement of the early 20th century, when beauty routines were marketed as both necessary and aspirational, vanity tables found a place in American homes. Hollywood helped cement the allure: Silver-screen stars were often photographed perched gracefully at their vanities, surrounded by perfume bottles and powder puffs.
By the late 20th century, however, the rise of modern bathrooms with bright lighting, expansive counters, and built-in storage gradually made the bedroom vanity redundant. As homes evolved for efficiency and space-saving, the once-essential vanity became more of a decorative throwback than a daily necessity. While vanity tables can still be found in vintage-inspired bedrooms, they’re mostly a charming reminder of the glamour of a bygone era.
By the 1960s, most American households had a television — but that didn’t mean anyone wanted to see it on display. Early sets were bulky, unattractive, and often viewed as disruptive to a well-appointed living room. So, manufacturers dressed them up in polished wood consoles to make the machines look like part of the living room decor.
As TVs got bigger and the gadgets piled on — VCRs, cable boxes, gaming consoles, stereo receivers, speaker systems — the simple disguise no longer cut it. Enter the TV cabinet and, later, the full-blown entertainment center: hulking structures designed not just to hold a TV, but to contain and conceal the entire home media system. These cabinets peaked in popularity in the 1980s and ’90s, often spanning entire walls and weighing more than the electronics they held. Not only were they practical, but they also soothed lingering anxieties that TV-watching was lazy, lowbrow, or somehow a design failure. Closing the cabinet doors felt like restoring order — and taste — to the room. But as televisions got sleeker and streaming services replaced stacks of movie boxes, the furniture built to hide it all quickly lost its relevance. Now TV cabinets are seen as bulky, old-fashioned relics — like building a closet just to store your iPad.
Chaise Longues
Once the epitome of leisure and elegance, the chaise longue, from the French for “long chair,” has a history stretching back to ancient Egypt, Greece, and Rome. The form evolved through centuries of European refinement, particularly in 16th-century France, where it became a fixture in aristocratic homes as a symbol of comfort and status. By the 19th century, it had taken on new life in Victorian parlors as the fainting couch — a semi-reclined piece thought to offer a graceful landing spot for corseted women who found themselves short of breath.
In the first half of the 20th century, the chaise had a solid run in U.S. homes, especially in bedrooms, sunrooms, and formal living spaces. It offered a spot for reading, relaxing, or striking a languid pose à la an Old Hollywood starlet. It paired comfort with glamour, and no therapist’s office or upscale photo shoot was complete without one.
But as rooms got smaller and furniture leaned more functional, the chaise longue became harder to justify. It takes up the footprint of a recliner or armchair — with less versatility. And unlike a sofa or sectional, it’s typically built for one, making it less practical for families or shared lounging. The chaise still has its fans, but these days it’s mostly reclining in retirement.
Once a fixture in midcentury homes, smoking stands were compact, single-purpose pieces designed to hold everything a well-prepared smoker might need: an ashtray, matches, lighters, and a stash of cigarettes or cigars. Often crafted from wood or metal with decorative flair, smoking stands stood proudly beside armchairs, inviting guests to light up without leaving the conversation. From the 1920s through the 1950s, they were as common as coffee tables — a mark of hospitality and, often, masculine style.
Over time, these more elaborate tables gave way to simpler pedestal ashtray stands — functional but stripped of the frills — as smoking became more routine and less ceremonial. But as the dangers of tobacco use became widely recognized and indoor smoking fell out of favor, even those disappeared. Public bans, health awareness, and minimalist living left little space — literally or socially — for dedicated smoking furniture. Now they’re rarely seen outside of antique shops and estate sales, a reminder of when lighting up was trendy.
The Most Popular Baby Name Every Year of the Last Century
Over the past hundred years, baby-naming trends have largely been shaped by family traditions and popular culture. Classic names such as Mary, John, Betty, and James often appear repeatedly in family trees, passed down out of respect for previous generations and a desire to keep family legacies alive. By the latter half of the 20th century, parents found baby name inspiration in popular culture, including films, theater, and music. The name Jennifer, for instance, began its climb in the U.S. thanks to the George Bernard Shaw play The Doctor’s Dilemma, which debuted on Broadway in 1927. Today, Olivia and Liam are the reigning favorites, and it’s likely only a matter of time before names that are already in the top 10 — such as Mia, Mateo, Evelyn, and Elijah — claim the No. 1 spots.

Here is a fascinating look at the most popular girls’ and boys’ names of the last century, based on data collected by the U.S. Social Security Administration from Social Security card applications.
1920s
1924: Mary, Robert
1925: Mary, Robert
1926: Mary, Robert
1927: Mary, Robert
1928: Mary, Robert
1929: Mary, Robert
The “Roaring ’20s” brought new cultural, economic, and sexual freedoms for women, but the most popular female names of the Greatest Generation — those born between 1901 and 1927 — didn’t reflect this newfound sense of liberation. Mary remained the most popular girls’ name from 1924 to 1929, just as it had been since 1900. A biblical name that appears in both the Old and New Testaments, Mary is the anglicized form of Maria and originated from the Hebrew Miryam. In 1924, the name Robert, favored by European royalty and nobility in the Middle Ages, replaced John, another common biblical name, as the most popular boys’ name, ending John’s decades-long run at the top of the list.
1930s
1930: Mary, Robert
1931: Mary, Robert
1932: Mary, Robert
1933: Mary, Robert
1934: Mary, Robert
1935: Mary, Robert
1936: Mary, Robert
1937: Mary, Robert
1938: Mary, Robert
1939: Mary, Robert
The end of the 1920s represented the beginning of the Silent Generation, which continued until 1945. For children born in the 1930s, it was an era marked by scarcity, sacrifice, and traditional values. This generation is also known as the Traditionalist Generation, and the most popular baby names of the 1930s reflect that characteristic: As in the second half of the 1920s, Mary and Robert remained the most popular female and male names, respectively, throughout the decade.
1940s
1940: Mary, James
1941: Mary, James
1942: Mary, James
1943: Mary, James
1944: Mary, James
1945: Mary, James
1946: Mary, James
1947: Linda, James
1948: Linda, James
1949: Linda, James
With the end of the Silent Generation in 1945, a new era of baby boomers — so named for the “baby boom” that happened after World War II — was born. After decades as the No. 1 female name, Mary lost that designation when Linda claimed the top spot in 1947, boosted by the popularity of Jack Lawrence’s 1946 song “Linda.” For boys’ names, Robert’s 16-year reign at the top came to an end at the beginning of the decade, when James — a staple of the top 100 most popular names thanks to both its biblical and royal associations — took the No. 1 spot in 1940 and stayed there until 1952.
1950s
1950: Linda, James
1951: Linda, James
1952: Linda, James
1953: Mary, Robert
1954: Mary, Michael
1955: Mary, Michael
1956: Mary, Michael
1957: Mary, Michael
1958: Mary, Michael
1959: Mary, Michael
Linda continued to be the top female name at the start of the 1950s but swapped places with Mary (again) in 1953, when Mary reclaimed the No. 1 spot and stayed there for the rest of the decade. Meanwhile, three different boys’ names held the No. 1 spot during the 1950s. James remained at the top for three years before Robert returned as the No. 1 name for just one year in 1953. In 1954, Robert slipped to No. 2 behind Michael — a biblical name historically associated with emperors, kings, and saints — which remained the most popular male name for the rest of the decade.
1960s
1960: Mary, David
1961: Mary, Michael
1962: Lisa, Michael
1963: Lisa, Michael
1964: Lisa, Michael
1965: Lisa, Michael
1966: Lisa, Michael
1967: Lisa, Michael
1968: Lisa, Michael
1969: Lisa, Michael
In 1960 and 1961, Mary was once again the most popular female name in the nation. In 1961, Lisa appeared on the list of the top five most popular names for the first time, coming in second behind Mary. The following year, Lisa surpassed Mary as the No. 1 female name, and stayed in the top spot for the rest of the decade. David, which first appeared in the top five most popular male names in 1948 and continually ranked between No. 2 and No. 5 for the next 12 years, finally took the top spot in 1960. Its reign was short-lived, though: David was bumped in 1961 by perennial favorite Michael, making 1960 the only year David has ever held the No. 1 spot, despite being in the top five 39 times over the past century.
1970s
1970: Jennifer, Michael
1971: Jennifer, Michael
1972: Jennifer, Michael
1973: Jennifer, Michael
1974: Jennifer, Michael
1975: Jennifer, Michael
1976: Jennifer, Michael
1977: Jennifer, Michael
1978: Jennifer, Michael
1979: Jennifer, Michael
Generation X, which includes people born between 1965 and 1980, introduced a generation of “latchkey kids,” children of dual-income or divorced parents whose free-range childhoods led them to be characterized as independent, resourceful, and cynical. Jennifer, which first appeared in the top five most popular female names in 1968, took the top spot in 1970 and stayed there throughout the 1970s and into the 1980s. Michael, which had dominated the top of the list since the mid-1950s, continued as No. 1 throughout the decade. Three names — James, Christopher, and Jason — vied for the No. 2 spot over the course of the decade, but neither Christopher nor Jason ever reached No. 1, unlike James, which had previously held that spot for 13 years.
1980s
1980: Jennifer, Michael
1981: Jennifer, Michael
1982: Jennifer, Michael
1983: Jennifer, Michael
1984: Jennifer, Michael
1985: Jessica, Michael
1986: Jessica, Michael
1987: Jessica, Michael
1988: Jessica, Michael
1989: Jessica, Michael
The year 1981 marked the start of Generation Y, better known as millennials. This generation grew up during a time of major technological advancements, including personal computers and internet access. During the early 1980s, Jennifer continued to hold the top spot as the most popular female name, but 1985 saw another three-syllable “J” name take the lead: Jessica. The name may have appealed to parents who wanted a trendier “J” name than Jennifer, but the first recorded instance of Jessica actually dates back to William Shakespeare’s play The Merchant of Venice. Meanwhile, Michael enjoyed another decade at No. 1, while Christopher remained No. 2 throughout the 1980s and into the 1990s, never quite surpassing the popularity of Michael.
1990s
1990: Jessica, Michael
1991: Ashley, Michael
1992: Ashley, Michael
1993: Jessica, Michael
1994: Jessica, Michael
1995: Jessica, Michael
1996: Emily, Michael
1997: Emily, Michael
1998: Emily, Michael
1999: Emily, Jacob
In 1993, the World Wide Web software was released into the public domain, introducing a whole new way of learning and communicating. The millennial generation came to a close in the mid-1990s, and 1997 is considered the beginning of Generation Z, also called “zoomers.” Jessica bounced between No. 1 and No. 2 for most of the decade before dropping out of the top five in 1998. Ashley, a name more commonly given to boys prior to the 1960s, held the lead in 1991 and 1992, then slipped to No. 2, No. 3, and eventually No. 5 over the rest of the decade. Emily, another appealing “-ly” name, entered the top five in 1993 and, beginning in 1996, held the No. 1 spot for 13 years. And after reigning as the most popular male name for 44 of the previous 45 years, Michael was dethroned in 1999 by Jacob, a name that had first appeared in the top five in 1995.
2000s
2000: Emily, Jacob
2001: Emily, Jacob
2002: Emily, Jacob
2003: Emily, Jacob
2004: Emily, Jacob
2005: Emily, Jacob
2006: Emily, Jacob
2007: Emily, Jacob
2008: Emily, Jacob
2009: Isabella, Jacob
The year 2001 marked the beginning of the third millennium CE, but the occasion didn’t have much impact on the most popular baby names for the youngest Gen Zers. Though computers and the internet changed the way millennials lived and worked, Generation Z is considered the first true digital native generation, as this cohort has never known a time when the internet wasn’t as close as their smartphone. Still, for older Gen X and millennial parents, sticking with tradition was the name of the name game in this booming digital era. Emily held the No. 1 spot for nearly the entire decade, until Isabella, which first appeared in the top five in 2006, took the lead in 2009. And Jacob, a biblical name with a more contemporary feel, replaced Michael after the latter’s long reign and remained No. 1 throughout this decade and into the next.
2010s
2010: Isabella, Jacob
2011: Sophia, Jacob
2012: Sophia, Jacob
2013: Sophia, Noah
2014: Emma, Noah
2015: Emma, Noah
2016: Emma, Noah
2017: Emma, Liam
2018: Emma, Liam
2019: Olivia, Liam
Those born in the 2010s are shaping up to be known as Generation Alpha. The oldest members are just entering their teen years, and the qualities that will define their cohort are still taking shape. That fact might be reflected in the number of names — four for girls and three for boys — that held the No. 1 spot over the decade. The 2010s also saw a trend toward vintage and vintage-inspired names. Isabella started the decade at No. 1, just as it had ended the previous one, but was displaced in 2011 by Sophia, which stayed at the top for three years before being replaced by Emma. Emma then held the No. 1 spot for five years, before Olivia claimed it to finish the decade. Continuing its popularity from the previous decade, Jacob remained the most popular male name for three years before being unseated by Noah. Four years later, in 2017, Liam — a derivative of William that first appeared on the top five list in 2013 as the third most popular male name — replaced Noah as the most popular name for the rest of the decade.
2020s
We’re only a few years into this century’s ’20s, but the most popular baby names reflect how trends have shifted since the start of the new millennium. So far in this decade, Olivia and Liam are holding steady at No. 1, but old-fashioned names are making a notable comeback. Henry, which ranked as the 18th most popular male name in 1924, climbed to No. 8 in 2023, and Evelyn, which was the 12th most popular female name in 1924, was No. 9 in 2023. On the other hand, Robert and Mary, the two most popular names 100 years ago, now rank 89th and 135th in popularity, showing how much trends have changed in the last hundred years.
By Tony Dunnell, April 2, 2026
In 1789, George Washington became the first president of the United States. Since then, 44 other individuals have served as commander in chief, each leaving a political legacy to be analyzed and judged in the course of time. But their legacies are not only political — they’re also familial. The number of children each president had is often overlooked, but on a personal level, few things could be more important. And in two cases, presidential children — John Quincy Adams and George W. Bush — went on to become presidents themselves, combining the familial with the political.
With that in mind, here’s a look at how many children each U.S. president had. For the sake of clarity, this list is ordered by the total number of known biological children only. Fostered and legally adopted children are noted but not counted in the totals, in part because legal adoption did not exist in the United States until 1851. George Washington, for example, had no biological children but raised Martha Washington’s two children from a previous marriage (as well as her four grandchildren and several nieces and nephews), though none were legally adopted.
From the five presidents (including Washington) with no known biological children to the commander in chief with the most kids at 15, here’s a list of all the U.S. presidents in order of the number of children born to them.
Five presidents fathered no known biological children. In some cases, this was likely due to infertility caused by medical issues, such as the tuberculosis infection Washington suffered before he was married. James Buchanan, meanwhile, remains the only U.S. president who never married.
George Washington: 0 (2 stepchildren)
James Madison: 0 (1 stepchild)
Andrew Jackson: 0 (1 unofficially adopted child)
James K. Polk: 0
James Buchanan: 0
In recent decades, the average number of children per U.S. family has hovered around two — a big difference from a century ago, when that number was closer to seven. It’s perhaps no surprise, then, to see some more modern presidents in this range of one to three children, including Richard Nixon, Bill Clinton, George W. Bush, and Barack Obama.
Harry S. Truman: 1
Warren G. Harding: 1
Bill Clinton: 1
Millard Fillmore: 2
William McKinley: 2
Calvin Coolidge: 2
Herbert Hoover: 2
Dwight D. Eisenhower: 2
Lyndon B. Johnson: 2
Richard Nixon: 2
George W. Bush: 2
Barack Obama: 2
William Howard Taft: 3
Franklin Pierce: 3
Chester A. Arthur: 3
Woodrow Wilson: 3
Benjamin Harrison: 3
James Monroe: 3
Many presidents fathered four or more children, but many also suffered the loss of a child. Child mortality rates were once far higher than they are now, and this painful loss was not uncommon even in presidential families. Abraham Lincoln, Martin Van Buren, Zachary Taylor, Theodore Roosevelt, and Franklin D. Roosevelt are among the presidents who lost a child. Lincoln lost two sons during his lifetime, a loss that may have contributed to his famous “melancholy,” a condition now thought to have been clinical depression.
John Quincy Adams: 4
Abraham Lincoln: 4
Ulysses S. Grant: 4
Gerald Ford: 4
Jimmy Carter: 4
Ronald Reagan: 4 (and 1 adopted)
Joe Biden: 4
John F. Kennedy: 4
Andrew Johnson: 5
Grover Cleveland: 5 (and possibly 1 additional child out of wedlock)
Donald Trump: 5
John Adams: 6
Theodore Roosevelt: 6
Franklin D. Roosevelt: 6
George H.W. Bush: 6
Zachary Taylor: 6
Martin Van Buren: 6
One president stands head and shoulders above the rest when it comes to procreating: John Tyler, who fathered 15 children across two marriages. Another notable figure here is Thomas Jefferson, who had six children with his wife of 10 years, Martha Jefferson, and likely also fathered six more children with the enslaved woman Sally Hemings. Jefferson’s alleged relationship with Hemings has been debated for more than two centuries, but DNA evidence strongly suggests that Jefferson fathered at least one of Hemings’ sons, and it’s possible that he was the biological father of all of her children.
Thomas Jefferson: 6 (and possibly 6 additional children with Hemings)
James A. Garfield: 7
Rutherford B. Hayes: 8
William Henry Harrison: 10
John Tyler: 15