Credit: Neil Munns - PA Images/ PA Images via Getty Images
Author Nicole Villeneuve
December 10, 2025
Every December, shopping malls, markets, parades, and office parties call in an army of Santas to headline their Christmas festivities. Donning the red suit is no small responsibility: The best Santas must know the correct way to say “ho ho ho,” how to squint those magical eyes and apply the right amount of makeup for rosy cheeks, and how to care for that signature beard — and also stay calm and react safely should a little one try to give it a yank. So how are these skills perfected? Welcome to the strange world of Santa schools.
Credit: Bettmann Archive via Getty Images
While the history of Santa Claus dates back to the fourth century CE and a generous man known as St. Nicholas, professional Santas are just a little more than a century old. The idea of Santas-for-hire began taking shape in the late 19th century, when American department stores began turning holiday shopping into a full-fledged production.
By 1910, any store with a toy department was expected to have a jolly, white-bearded man for children to tell their Christmas wishes to. As stores scrambled to fill the red suit, one boy in Albion, New York, found himself captivated by the character. That boy, Charles W. Howard — who first played Santa in a school play as a child and never grew out of the role — would go on to teach generations of others how to properly become St. Nick.
After spending time playing Santa in several upstate New York stores, Howard grew disillusioned with what he saw among many of his fellow Santas: cheap suits, unkempt beards, and a lack of storytelling flair. So in 1937, Howard opened the Charles W. Howard Santa School right in his own home. With just three inaugural students, it was a humble operation — and deeply idiosyncratic.
Credit: R. Mathews/ Hulton Archive via Getty Images
During their schooling, prospective Santas studied the season’s must-have toys and treasured stories, and were coached on how to handle a kid if they cried. No detail was overlooked: Howard even hired a reindeer expert from Michigan to teach the Santas how to properly feed and handle the Christmassy creatures. His belief that Santa Claus required both heart and craft showed in his deeply earnest, almost spiritual teachings.
Howard’s passion and skills paid off: He was hired to play Santa in the Macy’s Thanksgiving Day Parade in New York City from 1948 to 1965, and served as a consultant on the 1947 classic film Miracle on 34th Street. When Howard died in 1966, his school moved to Michigan and continued under new leadership, eventually becoming the flagship institution known as the “Harvard of Santa Schools” in what was an increasingly crowded field.
By the 1960s, Santa schools had multiplied: After Howard’s home-based operation took off, similar programs sprouted up across the country, and the world of Santa training has only grown more elaborate since. There are now symposiums on beard bleaching, as well as marketing and business lessons for those looking to take their Santa role pro. The child-first approach that informed Howard’s earliest teachings has carried into modern Santa schools: Many offer the services of early childhood educators and specialists with experience working with children facing anxiety, loss, or other challenges.
Even as Christmas wish lists and Santa-experience expectations evolve, the core thread remains the same: Santa shouldn’t just be imitated, he should be embodied. “He errs who thinks Santa enters through the chimney,” Howard once said. “Santa enters through the heart.” Nearly a century later, his sincerity lives on. Strange as Santa schools may seem, they also keep the magic of Christmas alive, turning Kris Kringle into something much more than a costume: a presence that lives vividly in a child’s imagination.
Credit: MSPhotographic/ iStock via Getty Images Plus
Author Paul Chang
December 4, 2025
Once a staple of diners and TV dinners, Salisbury steak has quietly disappeared from American menus in recent decades. What began as a 19th-century “health food” became a frozen dinner icon, only to fall victim to changing tastes. Here’s a look back at the history of this once-proud patty.
Credit: Graphic House/ Archive Photos via Getty Images
The Origins of Salisbury Steak
Salisbury steak — seasoned ground beef patties mixed with breadcrumbs or other ingredients — was invented by James Salisbury, a New York physician who was fascinated by the relationship between diet and health. In the 1850s, he conducted a series of self-experiments in which he exclusively ate a single food for a few days or weeks. His first test, a diet of only baked beans, produced disastrous results: “I became very flatulent and constipated, head dizzy, ears ringing, limbs prickly, and was wholly unfitted for mental work,” he wrote in The Relation of Alimentation and Disease (1888). Next came oatmeal and other staples, but it was ground beef, which he called “muscle pulp of beef,” that finally delivered the results he sought.
His prescription was simple: broiled beef patties, served with simple seasonings such as butter, salt, pepper, Worcestershire sauce, and lemon. This recipe, he wrote, “affords the maximum of nourishment with the minimum of effort to the digestive organs.” Vegetables, on the other hand, were not only unnecessary but also harmful in his view; Salisbury declared that vegetarians had “less nervous and muscular endurance than meat eaters.”
During the Civil War, Salisbury applied his nutrition theories to Union soldiers, who were fed a diet of beef patties instead of the usual hard biscuits, which tended to cause digestive issues. In the decades following the war, Salisbury’s diet evolved from medical prescription to one of America’s first fad diets, spurred on by the rise of wellness culture and increased public interest in healthy food. (Other notable 19th-century foods believed to promote health included graham crackers, which were originally intended to suppress the libido, and Kellogg’s cornflakes, which were served by their inventor John Harvey Kellogg at his world-famous sanitarium.)
World War I was a further boon for the Salisbury steak, as the name became preferred to the German-sounding “hamburger.” By the mid-20th century, the dish had become a blue-collar classic. Hearty, inexpensive, and easy to prepare in large batches, it appeared on diner menus and Navy cookbooks across America.
Then came the postwar boom — and with it, the rise of television. In 1953, the Swanson company introduced the TV dinner, a frozen meal packaged in a compartmentalized aluminum tray. Within its first year of production, more than 10 million trays were sold in the U.S., and the Salisbury steak was one of the TV dinner’s most enduring offerings. For a generation of midcentury families, Salisbury steak was the taste of modern life: a hot meal ready in minutes, eaten in front of the television.
Credit: Jerry Cooke/ Corbis Historical via Getty Images
So What Happened to Salisbury Steak?
The same forces that made Salisbury steak famous would later make it unfashionable. In the 2000s, Americans began to fall out of love with processed foods, in part due to growing awareness of their negative health effects. In 2012, New York City Mayor Michael Bloomberg proposed a ban on large sodas in the city, and First Lady Michelle Obama encouraged children to get active through her “Let’s Move!” campaign. The Salisbury steak, which was by then synonymous with microwaved TV dinners, fell victim to its association with cheap, unhealthy meals.
That said, not all processed foods suffered this fate, and another reason for Salisbury steak’s decline may be that it was outcompeted by its more convenient cousin, the burger, which rose to prominence as one of America’s most popular and iconic meals during the latter half of the 20th century. With the spread of fast-food chains such as McDonald’s and Burger King, many customers in the mood for a beef patty simply reached for a hamburger instead.
Though Salisbury steak has faded from the cultural spotlight in recent years, it is viewed with nostalgia by many who grew up with it and can still be found in home kitchens across America.
Credit: PhotoQuest/ Archive Photos via Getty Images
Author Tony Dunnell
December 4, 2025
The oral tradition of nursery rhymes goes back to at least the 13th century. But the golden age came in the 18th century, when many of the most famous verses emerged and became established in the colorful (and sometimes creepy) canon of classics we still hear today. While many of these rhymes seem, at first glance, like innocent childhood entertainment — simple, silly verses passed down through generations to delight young ears — they often have surprisingly complex backstories.
Despite being aimed at children, many classic nursery rhymes are far darker, and in some cases more subversive, than they may appear, touching on everything from medieval taxes to religious persecution. Here’s a look at the hidden origins of five famous nursery rhymes, revealing how even the most innocent-sounding verses can offer a fascinating window into the past.
Credit: Buyenlarge/ Archive Photo via Getty Images
“Baa, Baa, Black Sheep”
The earliest printed version of “Baa, Baa, Black Sheep” dates back to 1744, but the rhyme is likely much older than that. The words, which have barely changed over the centuries, appear to tell a simple story of wool being delivered to three different people: the master, the dame, and a little boy. Historians believe, however, that the nursery rhyme actually alludes to a medieval wool tax that existed in England from 1275 up to the 1500s. The tax demanded that wool producers deliver a third of their product to the king (the master), and a third to the church (the dame), leaving only a third for the farmer — an arrangement seen as entirely unfair at the time. The specific mention of a black sheep possibly adds another layer, as black wool was less valuable than white because it couldn’t be dyed.
Credit: Historical/ Corbis Historical via Getty Images
“Here We Go Round the Mulberry Bush”
As is the case with many nursery rhymes, the origins of this playground favorite are disputed. But according to historian R.S. Duncan, the song was invented by female prisoners at England’s Wakefield Prison around two centuries ago. Duncan — a former governor of Wakefield Prison — explained how the inmates used to walk their visiting children around a mulberry tree planted in the prison’s courtyard, where the women were allowed to exercise. They invented the rhyme to help pass the time and keep their children occupied. Adding credence to this origin theory is the fact that a mulberry tree has stood at the prison since at least the 19th century.
Credit: Buyenlarge/ Archive Photos via Getty Images
“Ring Around the Rosie”
“Ring Around the Rosie” (or “Ring a Ring o’ Roses”) has perhaps the most widely repeated and infamous origin story of any nursery rhyme. According to popular belief, the rhyme refers to the Great Plague of London in 1665 or possibly earlier outbreaks of the bubonic plague in England, with “rosie” representing the rash, “posies” being the flowers carried to mask the smell of death, and “all fall down” symbolizing the widespread fatalities.
This origin story, however, is actually a modern invention and one now largely debunked. The rhyme’s association with plagues didn’t emerge until the mid-1900s, and there’s little to no evidence, beyond simple speculation, to connect it with the plague — which makes folklorists and scholars highly skeptical of the theory. The rhyme most likely originated as a simple children’s party game, in which children hold hands in a circle and follow fun actions (“We all fall down!”). According to folklorist Philip Hiscock, games such as these may have been invented to skirt Protestant bans on dancing in both Britain and North America in the 19th century.
“Goosey Goosey Gander”
Whichever way you look at it, “Goosey Goosey Gander” is a strange rhyme, one that somewhat cryptically involves a goose and an old man getting thrown down some stairs. The earliest recorded version of the rhyme dates back to 1784, and as with other nursery rhymes, its origin is open to debate.
One of the most compelling theories involves the religious persecution of Roman Catholics during the reign of England’s King Henry VIII, when wealthy Catholic families often had priest holes in which to hide members of the clergy. If a priest was discovered, he would be forcibly taken from the house (“thrown down the stairs”) and possibly put to death. An alternative theory links the rhyme to the closing of brothels in London during Henry VIII’s reign. Prostitutes were often known as “geese” at the time, and when the brothels were shut down, they were forced to work elsewhere, possibly explaining the significance of “Whither shall I wander … in my lady’s chamber.”
“Mary Had a Little Lamb”
“Mary Had a Little Lamb” is a bit of an outlier in the world of nursery rhymes. Not only are its origins well documented, but it’s also a refreshingly innocent rhyme that’s actually appropriate for children, with no disturbing backstory. The poem was written by American writer Sarah Josepha Hale — who, incidentally, was the force behind the creation of the national Thanksgiving holiday — and published in 1830. It is now generally accepted that the nursery rhyme was based on a real incident at a small school in Newport, New Hampshire.
One day, a student brought her pet lamb to the school at her brother’s urging. The lamb proved too distracting in class and had to wait outside, but it stayed nearby until school was dismissed, then ran to Mary for attention. This heartwarming event inspired the poem, which later became one of the most popular nursery rhymes in the English-speaking world. It gained even greater fame when Thomas Edison recorded it in 1877 as the first song ever captured on his phonograph.
The idea of a muse — someone who serves as a profound source of artistic inspiration — is far from new. Muses were an everyday part of ancient Greek culture, whose mythology recognized nine muses — all of them goddesses — covering every branch of the arts. They are mentioned in Homer’s Odyssey, which was completed sometime around 675 BCE, and the concept never went out of fashion after that. In modern times, muses can still be found throughout the arts — even in rock ’n’ roll. And while men have certainly served as sources of artistic inspiration, the most legendary muses in rock, just like those of ancient Greece, have been women.
Behind some of the genre’s most unforgettable songs stand influential muses — lovers, partners, friends, objects of obsession — who sparked the creative fires that led to classic rock tracks. Here we look at some of rock’s greatest muses — figures who energized albums, defined eras, and occasionally provoked the frustrated smashing of guitars.
Credit: Michael Ward/ Hulton Archive via Getty Images
Pattie Boyd
Pattie Boyd was the archetypal 1960s “It” girl and arguably rock’s most legendary muse. The English model married George Harrison in 1966 and inspired him to write a handful of classic Beatles songs, including “I Need You,” “If I Needed Someone,” “Something,” and “For You Blue.” Boyd and Harrison divorced after a decade and she married their mutual friend Eric Clapton in 1979, a union that inspired songs such as “Wonderful Tonight” and “She’s Waiting.” They divorced in 1989, at which point Clapton began working on his album Journeyman (which included a song written by Harrison). One track on the album, “Old Love,” was about his ex-wife, proving the potency of Boyd’s enduring role as a muse.
Marianne Faithfull
In the 1960s, Marianne Faithfull emerged as the ethereal muse of Mick Jagger, inspiring numerous songs by the Rolling Stones, including “Wild Horses” and “You Can’t Always Get What You Want.” Being attached to the Stones came with plenty of turmoil and turbulence, but Faithfull survived the excesses of the rock ’n’ roll lifestyle while contributing significantly to the band’s creative evolution. Faithfull was far more than just a muse, though — she was a rock star and music icon in her own right, and one of the leading female artists of the British Invasion.
Yoko Ono
Yoko Ono was an avant-garde artist who became John Lennon’s wife, collaborator, and muse. She was unfairly blamed for the breakup of the Beatles, a stigma that has yet to be fully shed — being labeled a “Yoko” is still the death knell of many a rock muse. But without Ono as his creative partner and inspiration, Lennon wouldn’t have penned songs such as the Beatles’ “Don’t Let Me Down” and “I Want You (She’s So Heavy).” Fans also would have missed out on Lennon’s most famous and beloved solo track, “Imagine,” for which Ono received a co-writing credit.
Mary Austin
In 1969, just a year before Queen formed, Freddie Mercury met Mary Austin at the fashion boutique where she worked in Kensington, London. They began a long-term romantic relationship that ended when Mercury told Austin that he was gay, but their friendship remained strong. Austin became Mercury’s lifelong soulmate and muse, and he wrote several songs about her, most notably “Love of My Life.” In 1985, six years before his death, Mercury said, “All my lovers asked me why they couldn’t replace Mary, but it’s simply impossible. The only friend I’ve got is Mary, and I don’t want anybody else.” Austin was by Mercury’s side when he died, and he left her half of his $75 million estate.
Bebe Buell
In the 1970s, Bebe Buell caused quite a buzz. As well as being a model and Playboy magazine’s “Playmate of the Month” in November 1974, she mixed with many of the most legendary bands and musicians of the decade — not as a groupie, as she has often said, but as a muse. Enough songs to fill an entire album are believed to have been written about Buell, including Todd Rundgren’s “Can We Still Be Friends,” Prince’s “Little Red Corvette,” and Elvis Costello’s “Human Hands.” Her other high-profile rock star relationships included Mick Jagger, Iggy Pop, Jimmy Page, Rod Stewart, and Steven Tyler (Buell is Liv Tyler’s mother) — so it’s likely she inspired more rock ’n’ roll tunes than we know.
Credit: Gianni Penati/ Conde Nast Collection via Getty Images
Linda McCartney
Linda Eastman met Paul McCartney in 1967 while on a photo assignment in London. The two hit it off almost immediately, beginning one of the greatest love stories in pop music history. Linda McCartney became Paul’s wife, soulmate, and musical partner in the band Wings, forging a personal and artistic relationship that spanned three decades until Linda’s death in 1998. She inspired many of Paul’s songs, including “Maybe I’m Amazed,” “My Love,” “Heart of the Country,” and “The Lovely Linda.” Paul once said of his wife, “Any love song I write is written for Linda.”
Credit: Buyenlarge/ Archive Photos via Getty Images
Author Timothy Ott
December 4, 2025
The bugle has long been part of military life, historically used to signal commands and guide troops before taking on its modern ceremonial role. Even for those who never served in the armed forces, the powerful, piercing sound can call to mind a mounted cavalry officer blowing into their instrument from a hilltop, or a uniformed soldier playing a somber melody to saluting troops gathered around a flagpole.
Whatever emotion it triggers, the bugle is often associated with the U.S. military — which makes sense, given that the most well-known bugle songs, such as “Taps,” originated in the armed forces. But how did this specific instrument come to define Uncle Sam’s musical leanings?
Credit: duncan1890/ DigitalVision Vectors via Getty Images
Bugles Came to America With the Revolutionary War
The use of horns for warfare dates back to at least the early years of the Roman Empire, when a predecessor of the bugle known as the buccina was among the brass instruments that sounded out at military ceremonies and when marching into battle. Centuries later, in the 1750s, light infantry battalions in the German state of Hanover adapted a semicircular hunting horn for military use. This instrument was picked up by the English in the following decades, taking on the name of bugle horn.
During the American Revolution, Continental Army troops were trained in the traditional European methods of military signaling, which generally involved the drum and fife. Soldiers learned that a distinct drum beat known as “The Reveille” would wake them from a night’s slumber, while another rhythm, “To Arms,” meant it was time to grab their weapons and prepare for combat.
While the sounds of English drumbeats filled the battlefields of the Revolutionary War, the Americans were also exposed to the blares of the bugle, the piercing instrument used by the Redcoats to spur troop movement and as a means of psychological warfare. Per one account, during the Battle of Harlem Heights in September 1776, attacking British troops blew their horns in a fashion that suggested the end of a fox chase.
Bugles Increased in Popularity During the War of 1812
At the start of the War of 1812, the U.S. Army had only one unit, the Rifle Regiment, that used the bugle instead of the fife and drum. By then, the instrument had evolved from its original semicircular form into the now-familiar trumpet shape, and it was on its way to becoming a standardized feature of military activity.
The 1812 manual A Hand Book for Riflemen by American author William Duane included music for 61 bugle signals, many of which originated with the British. Meanwhile, early military bands found space to incorporate the instrument into their repertoires: The U.S. Marine Corps Band added a bugle in 1812, while the United States Military Academy Band soon counted two buglers among its ranks.
Following the war, the U.S. military began studying French martial strategies, including instrument signaling. An 1825 military manual included 16 new bugle calls. Among them was one that summoned all uniformed personnel to gather, known as “Assembly.” And in 1835, Major General Winfield Scott included 22 bugle calls in his influential Infantry Tactics manual, including the one that eventually formed the basis of “Taps.”
Credit: Buyenlarge/ Archive Photos via Getty Images
A List of Calls Was Streamlined After the Civil War
During the Civil War, bugles figured prominently both in the military bands that boosted soldiers’ morale and in the combat units that depended on the instrument to be heard above the din of cannon and musket fire. Two buglers or trumpeters were authorized for each company or battery in a Union regiment; those in the infantry had more than 50 calls to learn, and members of the cavalry and artillery divisions were each assigned more than 30.
However, the sheer number of signals proved confusing and difficult for combatants to remember, and in the war’s aftermath, Major Truman Seymour was tasked with auditing and refining the expansive collection. Seymour streamlined the calls and delivered what became the U.S. Army’s definitive list of bugle calls in 1867.
Even during times of peace, these calls came to define everyday life for people living in Army camps or frontier outposts in the late 19th century. A typical day would start with the bugler’s early morning call of “Reveille,” while the playing of “Mess Call” beckoned listeners to lunch. The notes of “Recall” marked the end of afternoon duties, before the nighttime call of “Tattoo” indicated it was time to hit the hay.
Bugles Fell Out of Use as Radio Communications Improved
Even as the bugle was enjoying its heyday as a centerpiece of Army communications, developing wireless technology threatened to upend the ancient practice of using music to dictate the course of battle.
Buglers were still used to move regiments during World War I, with one even marking the end of fighting at 11 a.m. on November 11, 1918, by playing “Taps.” However, the bugle was largely phased out for signaling purposes by field radios during World War II.
Today, the spirit of the military bugle remains alive thanks in part to performing units such as the United States Marine Drum and Bugle Corps. Uniformed buglers have otherwise seen their ranks diminish, as Army bases typically play recordings over a PA system in place of a live musician. And yet, those calls remain part of daily rituals for many service members, the notes of “Retreat,” “To the Color,” and other stalwarts sounding the same to 21st-century ears as they did in the late 19th century.
From the downright shocking to the utterly bizarre, some facts about history are particularly fascinating. Did you know the U.S. had a president before George Washington, or that Americans used to live inside giant tree stumps? If you missed these facts the first time, don’t worry — we’ve got you covered. Read on for the 25 most popular facts we sent on History Facts this year.
When Congress declared war on Japan on December 8, 1941, more Americans than ever before heard the call of duty. Some 16.1 million U.S. citizens served in the military by the time World War II ended in 1945, representing 12% of the total population of 132 million at the time.
Before the logging industry, the trees in old-growth forests were hundreds of feet tall, with gnarled bases and trunks that could measure more than 20 feet across. To fell these forest giants, loggers would build platforms 10 to 12 feet off the ground, where the tree’s shape was smoother. The massive remaining stumps had soft wood interiors and sometimes even hollow areas, so it was relatively easy to carve out the center of a stump and turn it into a building, such as a barn, post office, or even the occasional home.
In 1800, tea was the most popular drink among Brits — something of a problem for the British Empire, as all tea was produced in China at the time. And so the English did something at once sinister and cunning: They sent a botanist to steal tea seeds and bring them to India, a British colony at the time. One historian called it the “greatest single act of corporate espionage in history.”
In 1919, Dwight D. Eisenhower, then a lieutenant colonel in the Tank Corps, learned of the U.S. Army’s plan to test the capabilities of its transport vehicles by moving 80 military vehicles across the country. After joining the expedition, he dutifully submitted a report analyzing the quality of the roads encountered along the way. Decades later, Ike made the development of America’s highways a centerpiece of his domestic agenda upon being elected U.S. president in 1952. His vision became the Interstate Highway System that crisscrosses the nation today.
For most of human history, you were either a child or an adult. The word “teenager” first entered the lexicon in 1913, appropriately enough, but it wasn’t until decades later that it took on its current significance. Three developments in the mid-20th century had a major influence on the creation of the modern teenager: the move toward compulsory education, which got adolescents out of farms and factories and into high school; the economic boom that followed World War II; and the widespread adoption of cars among American families.
The ancient Romans are known for many innovations that were ahead of their time, and some that seem ahead of even our time. Case in point: Concrete used in some ancient Roman construction is much stronger than most modern concrete, surviving for millennia and getting stronger, not weaker, over time. The secret ingredient? The sea. Builders mixed this ancient mortar with a combination of volcanic ash, lime, and seawater, creating a material that essentially reinforced itself over time, especially in marine environments.
Though George Washington is indisputably the first president of the United States, he technically wasn’t the first person in the federal government with the title of “president.” Washington was elected under the government formed by the ratification of the U.S. Constitution in 1788, but the Constitution wasn’t the only government-forming document in the nation’s history. Ratified in 1781, the Articles of Confederation — the United States’ first constitution — formed what’s known as the Confederation Congress. This early governing body was led by a president who held a one-year term, the first of whom was Samuel Huntington of Connecticut.
The well-coiffed men of the Victorian era often had some truly impressive beards. The look was partially driven by the desire to appear manly and rugged, but beards were also seen as a way to ward off disease. At the time, many doctors endorsed the miasma theory of disease, which (incorrectly) held that illnesses such as cholera were caused by bad air. Facial hair, it was believed, could provide a natural filter against breathing in so-called “miasms.”
Credit: Heritage Images/ Hulton Archive via Getty Images
When grocery owner Sylvan N. Goldman rolled out the first shopping carts in 1937, he expected a runaway hit. But the reaction wasn’t exactly enthusiastic. Women, already used to pushing strollers, weren’t eager to push another one at the store. Men, on the other hand, preferred not to push something stroller-like at all. Goldman even hired store greeters to hand shoppers a cart, and paid actors to walk around shopping with them until the idea finally caught on.
The earliest loaf of bread ever discovered is a whopping 8,600 years old, unearthed at Çatalhöyük, a Neolithic settlement in what is now southern Turkey. While excavating the site, archaeologists found the remains of a large oven, and nearby, a round, organic, spongy residue among some barley, wheat, and pea seeds. After biologists scanned the substance with an electron microscope, they determined that it was a very small loaf of uncooked bread.
When the Mayflower passengers finally reached the shores of the New World, they spent a few weeks scouring the region for a spot to bunker down for the winter. As one passenger wrote in their journal, “[W]e could not now take time for further search or consideration, our victuals being much spent, especially our beer.” The Pilgrims promptly began building what became Plymouth Colony, with a brew house unsurprisingly among the first structures to be raised.
It’s easy to assume that Baby Ruth candy bars were named for the famed baseball player George Herman “Babe” Ruth Jr. Indeed, even the Great Bambino assumed as much at the time. After all, the nougaty confection debuted in 1921, after the ballplayer became a household name. But according to the official, legal explanation of the moniker, Baby Ruth bars were named after a different Ruth altogether: “Baby” Ruth Cleveland, the daughter of former U.S. President Grover Cleveland.
Credit: Photo Josse/Leemage/ Corbis Historical via Getty Images
For the thousand or so years that encompassed the Middle Ages, people in Western Europe sometimes slept in two shifts: once for a few hours usually beginning between 9 p.m. and 11 p.m. and again from roughly 1 a.m. until dawn. The hours in between were a surprisingly productive time known as “the watch.” People would complete tasks and chores, check on any farm animals they were responsible for, and take time to socialize.
Credit: H. Armstrong Roberts/ Retrofile RF via Getty Images
For most of human history, a birthday was just another day, and many people didn’t even know when theirs was. Ancient societies sometimes recorded births within noble or wealthy families, but systematic recordkeeping was rare. It wasn’t until the 1530s in England that churches were mandated to document baptisms. Similar practices appeared in colonial America, but birth registration didn't become widespread until the early 1900s.
The Dust Bowl wasn’t entirely confined to the Great Plains. Some of the dust storms that resulted from the natural disaster were so extreme that their clouds reached cities more than 1,500 miles away on the East Coast. Boston, Massachusetts, even saw red snow, the result of red clay dust from the storms becoming suspended in the atmosphere.
As you may expect, “sept” is a prefix with Latin roots that means “seven.” And indeed, September was originally the seventh month of the early Roman calendar, which began the year in March. That calendar was used in ancient Rome for hundreds of years before the debut of the Julian calendar in 46 BCE. By then, January and February had been added to the start of the year, bumping September to ninth place, but nobody changed its name.
We’ll never definitively know what presidents such as George Washington or Abraham Lincoln sounded like, since there are no audio recordings of their voices. The oldest existing recording of a U.S. president is the voice of Benjamin Harrison, the 23rd commander in chief, giving remarks at a diplomatic event. Harrison served from 1889 to 1893, and the audio recording dates to around his first year in office. His voice was captured on a wax cylinder phonograph, a recording device developed by Thomas Edison in the late 1880s.
When you think of synchronized swimming, you may picture the glittering “aquamusicals” of the 1940s and ’50s. But the idea of choreographed aquatic performance actually dates back nearly two millennia — to the watery amphitheaters of ancient Rome. Roman rulers were obsessed with turning water into spectacle, even flooding the Colosseum to do so. The Roman poet Martial described a performance in which women portraying Nereids, or sea nymphs, dove and swam in formation across the Colosseum’s waters.
If you belonged to a well-to-do family in colonial America, you might have draped your floor in richly painted oilcloth, or even imported carpets from across the Atlantic. Most people, however, settled for simpler floor coverings, such as straw matting or sand. The latter came with a bonus feature: You could turn it into decor if you were feeling creative, drawing fun designs in the sand as a temporary decoration.
Edinburgh Castle sits atop an imposing rock outcropping called Castle Rock. Ancient people started using the outcropping in the Bronze Age, and flattened its top around 900 BCE. But what they didn’t know is that hundreds of millions of years prior, that rock was the inside of a volcano. The volcano went dormant (and eventually extinct) roughly 340 million years ago, and the magma inside solidified, creating a rock formation that’s exceptionally sturdy and erosion-resistant — the perfect location for a stronghold.
Credit: The Picture Art Collection/ Alamy Stock Photo
When the San José first set sail in 1698, it probably wasn’t expecting to be making headlines three centuries later. The Spanish navy ship met its watery end off the coast of Cartagena, Colombia, with 200 tons of gold and emeralds aboard. Now known as the “holy grail” of shipwrecks, it’s presumed to be worth as much as $18 billion, which explains why several different entities have laid claim to the wreck since its discovery in the 1980s.
Though it’s often seen as a quintessentially American custom today, tipping has its roots in the feudal societies of medieval Europe. In the Middle Ages, wealthy landowners occasionally gave small sums of money to their servants or laborers for extra effort or good service. The gesture later evolved into a more formal custom: By England’s Tudor era, guests at aristocratic households were expected to offer “vails” to the household staff at the end of their stay.
The terra-cotta army in China is a collection of more than 7,000 life-size clay soldiers created in the third century BCE, each made with remarkable unique detail. But there used to be yet another layer of detail: Originally, these figures were painted in various vibrant colors. After the statues were sculpted, fired, and assembled, artisans applied lacquer (derived from a lacquer tree), followed by layers of paint made from cinnabar, malachite, azurite, bone, and other materials mixed with egg.
Among the European aristocracy in the 16th and 17th centuries, especially in France and England, handkerchiefs were meant for display, whether tucked in a pocket, carried in the hand, or flourished as part of an elaborate social ritual. These were no ordinary hankies; they were made with intricate lacework and fine embroidery. Wealthy Europeans posed for portraits with their hankies, bequeathed them in wills, and included them in dowries.
In 1815, Mount Tambora in Indonesia erupted with extraordinary force. The fallout dimmed the sun worldwide, lowering temperatures and devastating harvests. As a result, food prices soared, and horses were slaughtered for meat or starved for lack of feed. This sudden scarcity of transport led to an innovation. In 1817, German inventor Baron Karl von Drais unveiled his Laufmaschine, or “running machine” — a simple two-wheeled wooden frame that riders straddled and propelled by pushing their feet along the ground. Like modern bicycles, it could travel far faster than walking, even on muddy post-rain roads.
Ever since the first Neanderthal fossils were identified in the mid-19th century, these ancient humans have generally been portrayed as brutish and inarticulate — interpretations steeped in the racial prejudices of Victorian-era anthropology. But we now know more about this extinct species than ever before, and new discoveries tell a very different story.
Neanderthals lived in groups, cared for their communities, and likely used sound and speech in far more complex ways than once imagined. Today, scientists are combining fossil evidence, computer modeling, and genetics to find out if Neanderthals really could talk — and if so, what they sounded like. By studying Neanderthal anatomy and their hearing range, researchers are reconstructing this lost voice — and, in doing so, uncovering clues about how the capacity for language evolved in us.
Neanderthals (Homo neanderthalensis) were a remarkably adaptable species of early humans who lived across Europe, the Middle East, and parts of Central Asia from roughly 400,000 to 40,000 years ago. They are our closest extinct relatives — genetically distinct from modern humans (Homo sapiens), yet descended from the same ancestral population. Fossil and archaeological evidence shows that Neanderthals thrived for hundreds of millennia, forming resilient, interconnected communities.
Skilled toolmakers and problem-solvers, Neanderthals were capable of shaping stone, bone, and wood into tools for hunting and daily life. They controlled fire, built shelters, made and wore clothing, and even created ornamental and symbolic objects. Burials and healed injuries, meanwhile, hint at compassion and care within their groups.
Physically, these early humans were powerful and well adapted to cold climates, with broad noses, strong limbs, and compact builds that conserved heat. Yet their anatomy also held clues to a sophisticated capacity for communication — including skulls, throats, and inner ears that closely resemble our own.
One key clue to Neanderthal vocal ability comes from a small U-shaped bone in the throat, called the hyoid. This bone anchors the tongue and supports the muscles involved in speech.
A remarkably well-preserved hyoid from the Kebara 2 skeleton — unearthed in a cave in northern Israel in 1983 and dated to about 60,000 years ago — is nearly identical in shape to that of modern humans. This suggests that Neanderthals had the anatomical foundation necessary for controlled, speechlike vocalizations rather than mere grunts or animalistic calls.
The discovery of the Kebara 2 hyoid triggered renewed debate over whether Neanderthals possessed a “modern” vocal tract — and more refined modeling in recent decades has helped the idea gain traction in anthropological circles.
Neanderthals’ hearing ability also contributes to what we know about how they may have communicated. CT scans and 3D reconstructions of Neanderthal ear canals and middle-ear bones reveal that their hearing range peaked between 4 and 5 kHz — a frequency band important for perceiving the subtle consonant sounds of human speech. This finding indicates that Neanderthals were attuned to hear the same sound range that Homo sapiens use in spoken language, suggesting their communication system may have been more developed than previously thought.
Credit: Bettmann Archive via Getty Images
Reconstructing the Neanderthal Voice
Using CT scans and 3D models of Neanderthal skulls, vocal tracts, and hyoid bones, scientists are now building detailed reconstructions of how these ancient humans produced sound. Their vocal anatomy was similar but not identical to that of modern humans. Neanderthals had slightly shorter vocal tracts and differently angled skull bases, both of which influence how air resonates as it moves through the throat and mouth. These anatomical differences likely shaped the quality of their speech, producing a voice that would have been recognizably human but with a narrower dynamic range — fewer sharp contrasts between certain vowels and a generally more uniform resonance.
Even so, anatomical modeling shows that Neanderthals could have produced a wide array of consonant and vowel-like sounds, including core vowel targets such as “ee,” “ah,” and “oo.” In other words, they had the physical capacity for speech, though their voices may have sounded flatter or more nasal than a modern human’s voice.
What remains uncertain is the complexity of the language they used. Because we can rely only on models derived from fossil anatomy, scientists cannot determine how expressive or syntactically flexible Neanderthal speech might have been. Recent research suggests they likely could speak, but they may not have used metaphors or the kind of layered syntax that modern humans rely on to express abstract ideas, emotional nuance, or the relationship between concepts.
By contrast, early Homo sapiens — who lived alongside Neanderthals for thousands of years — may have developed greater vocal flexibility and more complex syntax. That linguistic agility likely gave our ancestors an edge: Language helped them organize larger groups, share knowledge, and build cultures that endured. Even so, the divide wasn’t vast. The two groups interbred, shared tools, and may have exchanged words. If you stood beside a Neanderthal 50,000 years ago, their voice might have sounded unfamiliar — but recognizably human.
Credit: Christine_Kohler/ iStock via Getty Images Plus
Author Bess Lovejoy
November 26, 2025
Tests are rarely enjoyable, but imagine taking one in the late 1800s, long before multiple-choice options or standardized curricula. Back then, school exams could be long, demanding, and startlingly wide-ranging. You might be asked to diagram sentences, explain the circulation of the blood, name the capitals of ancient empires, or sketch a map — all before lunch.
One window into this world is The New Common School Question Book, compiled by Wisconsin superintendent Asa H. Craig. Published in 1899 with earlier versions dating back to 1872, this question book was used by candidates preparing for teacher exams, teachers writing tests for students, and common school (public school) students of various ages — common school was generally grades 1 through 8 — studying for those tests.
The book’s thousands of questions, which are available in the Library of Congress archive, span a dizzying list of subjects — U.S. history, geography, English grammar, letter writing, written arithmetic, bookkeeping, drawing, inventions, government, physiology, and more.
The result is a vivid snapshot of what 19th-century Americans considered essential knowledge. Some questions still feel familiar, while others reflect a considerably different world.
So, could you pass a school exam from the 1800s? Let’s find out. Note: The questions and answers below are verbatim and may reflect the knowledge or biases of the time.
A. Maine, New Hampshire, Vermont, New York, Michigan, Minnesota, North Dakota, Montana, Idaho, Washington.
Q. How great is the earth’s annual motion?
A. About 68,000 miles an hour.
Q. What is essential in every syllable?
A. A vowel.
Q. Which are the five largest islands in the world?
A. Australia, Greenland, Borneo, New Guinea and Madagascar.
Q. What do the words per cent mean?
A. By the hundred.
Q. What nations explored the country now known as the United States?
A. The Spaniards, English, French and Dutch.
Q. What state is the geographical center of the United States?
A. Kansas, if we do not include Hawaii in the United States. It is also the center of population.
Q. What is the meaning of “sargasso”?
A. It is a Spanish name, meaning grassy.
Q. What part of North America is in the same latitude as England and Ireland?
A. The southern part of Canada.
Q. How are the teeth set in the jaw?
A. With long fangs, so that they may not easily be started from their places.
Q. How and by whom was alcohol discovered?
A. It is said that Paracelsus, a chemist of the fourteenth century, accidentally discovered alcohol, and upon testing its power boasted of having found the essence of life, the power to cure the weak, and the great benefactor of mankind.
Q. Define a fibre.
A. It is a thread of exceeding fineness, and may be round or flattened.
Q. Where and when was the first white child of English parents born in America? What was her name?
A. At the temporary settlement on Roanoke Island in 1587. Virginia Dare.
Q. Who said “I would rather be right than President” and why did he say it?
A. It was an expression of Henry Clay when his friends insisted that to advocate the compromise would lessen his chances for the Presidency. This step demanded great moral courage, as it required a partial surrender of his cherished theories of protection and an open breach with many political friends.
Q. What is the area of a circle whose diameter is 1 foot 1 inch?
A. 132.73 square inches.
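A quick check of that figure, treating the diameter of 1 foot 1 inch as 13 inches and using the standard area formula (the symbols d, r, and A are simply shorthand introduced here):

\[
d = 13 \text{ in}, \qquad r = \tfrac{d}{2} = 6.5 \text{ in}, \qquad A = \pi r^{2} = \pi \times 42.25 \approx 132.73 \text{ square inches}
\]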
Q. How are parts of Western Texas occupied?
A. By herds of wild horses.
Q. What became of [John Wilkes] Booth’s accomplices?
A. Harold, Payne, Atzerodt, and Mrs. Surratt were hanged; Arnold, Mudd and O’Laughlin were imprisoned for life, and Spangler was sentenced for six years.
Q. What is an impersonal verb?
A. A verb having person and number without a subject; as, methinks, meseems.
Q. Name five of the principal articles exported by the people of the United States.
A. Cotton, wheat, pork, cheese, machinery.
Q. The time since noon is 7/17 of the time to 4 o’clock p.m.; what is the time?
A. 10 minutes past 1 o’clock p.m.
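Here is one way the book’s answer works out, letting t stand for the number of minutes past noon and taking the stretch from noon to 4 p.m. as 240 minutes:

\[
t = \tfrac{7}{17}\,(240 - t) \;\Rightarrow\; 17t = 1680 - 7t \;\Rightarrow\; 24t = 1680 \;\Rightarrow\; t = 70 \text{ minutes, i.e., 1:10 p.m.}
\]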
Q. What law is impressed on all animal beings?
A. The law of continual change.
Q. What is a standard unit?
A. A unit of measure from which the other units of the same kind may be derived.
Q. If 3 gallons of brandy, at $3 a gallon, and 5 quarts of alcohol, at 40 cents a gallon, be mixed with 1/2 gallon of water, for what must the mixture be sold a gallon to gain 37 per cent?
A. $2.74.
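The figure is easier to trust with the arithmetic laid out. This check assumes the 37 per cent gain is reckoned on the total cost of the ingredients and that the water costs nothing:

\[
\begin{aligned}
\text{cost} &= 3 \times \$3.00 + 1.25 \times \$0.40 = \$9.50 \\
\text{volume} &= 3 + 1.25 + 0.5 = 4.75 \text{ gallons} \\
\text{price per gallon} &= \frac{1.37 \times \$9.50}{4.75} \approx \$2.74
\end{aligned}
\]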
Q. What can be said of the fisheries of the Columbia River?
A. They are an immense industry.
Q. What is the aim of drawing?
A. The aim of drawing is to secure culture through the senses by which we apprehend the forms of things.
Q. What are the wastes of Patagonia?
A. Sterile tracts covered with sand and gravel.
Q. To whom does the honor of having first established religious freedom in America belong?
A. To the Roman Catholics of Maryland, by the “Toleration Act” of 1649.
Q. What great calamity visited the people of America in August 1793?
A. The yellow fever broke out in Philadelphia, and raged with such virulence that within three months, out of a population of 60,000, no less than 4,000 perished.
Q. Why should care be banished from the table?
A. Care or grief restrains digestion. The nervous action holds the nourishing organs of the system back. But with merriment and pleasant thoughts the opposite is the case.
Q. A boy [was] hired to a mechanic for 20 weeks on condition that he should receive $20 and a coat. At the end of 12 weeks the boy quit work, when it was found that he was entitled to $9 and the coat; what was the value of the coat?
A. $7.50.
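For readers who want to see where that value comes from, here is one way to set it up, with w standing for the boy’s weekly pay and c for the value of the coat (both symbols are simply shorthand introduced here):

\[
20w = 20 + c, \qquad 12w = 9 + c \;\Rightarrow\; 8w = 11, \;\; w = \$1.375, \;\; c = 20w - 20 = \$7.50
\]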
This Ancient Civilization Was More Advanced Than Rome
Credit: Dorling Kindersley/ Dorling Kindersley RF via Getty Images
Author Timothy Ott
November 26, 2025
Few history buffs need to be reminded of the accomplishments of the Roman Empire, which contributed lasting innovations in construction, publishing, law, and many other fields. Far less known and understood, however, is the Indus Valley civilization, which sprang up around the Indus River and its adjacent waterways in modern-day Pakistan and eventually stretched into parts of modern-day India and Afghanistan.
The Indus Valley’s peak years lasted from approximately 2600 BCE to 1900 BCE, around the same time that cities in Mesopotamia and Egypt were thriving. At the height of the civilization, the Indus people enjoyed advancements that not only surpassed those of their contemporaries but also rivaled — and in many cases outshone — those of ancient Rome, which arose more than a millennium later.
One major obstacle to studying the Indus Valley civilization is that, unlike the cuneiform script of Mesopotamia and the hieroglyphs used by the Egyptians, the Indus people’s distinct system of writing has yet to be deciphered. But while that has prevented historians from gaining significant insight into the minds of these ancient denizens, archaeological discoveries have provided plenty of evidence for their advanced thinking.
Unlike the chaotic pathways of Mesopotamia, the streets of Indus Valley cities were laid out in a grid oriented along north-south and east-west axes, with roads intersecting at right angles, which allowed for an orderly flow of traffic in population centers that hosted upwards of 35,000 residents.
Main thoroughfares could reach 30 feet wide to allow for the passage of carts, while the entrances of houses were stationed off narrower alleyways, away from the busy streets. Most homes received water furnished by a private well and were typically positioned around a central courtyard to provide an area for light and ventilation.
The cities themselves were built on massive stone platforms, in some cases covering more than 80,000 square feet, to remain above the floods of the Indus River. One of the largest cities, Mohenjo-daro, is famed for its Great Bath measuring nearly 900 square feet; the ruins of this ancient hub are now a UNESCO World Heritage Site. And the Lothal archaeological site, another ancient city, features a basin around 700 feet long and 100 feet wide that is believed to be the world’s first dockyard.
Underlying all this construction was a system of measurement that followed carefully delineated ratios and led to the creation of oven-baked bricks of identical size. This, in turn, led to standard sizes for streets and buildings that could be found across urban centers throughout the Indus Valley.
Cities in the Roman Empire — which began in 27 BCE, more than a thousand years after the Indus Valley’s decline — were also renowned for adhering to a grid system, known as centuriation, and for a well-planned layout that placed forums and amphitheaters at the intersection of major thoroughfares. Yet Rome itself was not so carefully organized, with its collection of narrow, winding streets that emerged amid the hilly, swampy terrain of the original settlement. These conditions proved problematic as the city swelled to more than a million residents by the imperial period, with many Romans packed into multistory apartment buildings known as insulae that were susceptible to fires.
Perhaps the most impressive features of Indus Valley cities are the pioneering indoor plumbing and waste management systems that helped curb the spread of diseases. Virtually every home had an indoor washroom and latrine, with brick pavement floors packed tightly to prevent leaking and sloped to ensure proper drainage. Waste traveled through terra-cotta pipes that were routed to covered drain ditches that ran along city avenues and into an underground sewage system that flowed out of the city. Screens were installed at various areas for the collection of solid waste, while other points along the sewage network had holes and removable stones that allowed for inspection.
A millennium later, Rome also had an extensive underground sewer system, as well as a dedicated waste-management labor force and even a collection of public toilets for its citizens. However, the indoor plumbing structure that was enjoyed by just about every Indus Valley city-dweller was a luxury that was available only in Rome’s more expensive neighborhoods.
One of the major mysteries of the Indus Valley civilization is the dearth of surviving structures pointing to a clear ruling authority, such as a palace or royal tomb for a monarch, or a temple for a dominant religious organization.
The lack of such grand edifices matches up with other architectural features of Indus Valley cities: Homes, while differing in size, did not vary so drastically as to highlight major discrepancies in wealth between residents. What’s more, public buildings such as the Great Bath were located in easily accessible areas that underscored the idea of communal sharing.
This has prompted some historians to posit that Indus Valley communities were organized by the principles of a heterarchy, with contributions coming from different groups of people, as opposed to the top-down demands of a hierarchy. And that could well have fostered a system of social equality that would have been unheard of in Rome, with its aristocratic class of patricians holding control in the early days of the republic, before power concentrated in the hands of an emperor in the imperial era.
For all their achievements, the Indus people couldn’t stave off the demise that felled Rome and the other great cultures of antiquity; the civilization began to decline around 1700 BCE due to a combination of factors including climate change and dwindling trade. Nevertheless, this often-overlooked culture has received its belated due for being far ahead of its time, a testament to the impressive record it left behind even as more remains to be revealed.
Though exceptions do exist — Abraham Lincoln and Andrew Johnson had no formal education at all — most U.S. presidents have earned at least an undergraduate degree. And in the majority of cases, their fields of study were well aligned with the role of POTUS.
Subjects such as history, political science, law, and economics have long been common choices for a career in politics, while earlier leaders often followed a broad liberal arts education. But not all U.S. presidents chose subjects that were a natural fit for a future in the Oval Office. Here are four presidents whose fields of study might seem surprising for the commander in chief.
James Madison: Hebrew
James Madison had an inquisitive mind long before he became the fourth president of the United States and the “Father of the Constitution.” As a teenager, he was sent to the College of New Jersey — which later became Princeton University — where he studied Latin, Greek, and theology, and read the Enlightenment philosophers. He completed the required three-year course of study in two years, but remained for an additional year to study Hebrew.
At the time, Madison was considering a career as a clergyman, and a knowledge of Hebrew was important for biblical scholarship. That career, of course, never materialized, and Madison went on to become a statesman, diplomat, U.S. founding father, and president of the United States. He remains the only POTUS to speak Hebrew, and one of 20 U.S. presidents (out of 45) to speak a second language.
Credit: Historical/ Corbis Historical via Getty Images
Herbert Hoover: Geology
As a young man, Herbert Hoover was determined to go to the newly established Stanford University in Palo Alto, California. He studied hard and barely scraped through the university’s entrance exam. He was initially interested in mechanical engineering but soon changed his major to geology.
Hoover graduated in 1895 but struggled to find a job. Eventually, however, his career as a mining engineer took off, taking him around the world, including to Australia and China. He went on to earn a fortune — around $4 million by 1914 — from investments in mines in Australia, Russia, and Myanmar, and income from consulting operations around the world. Despite his highly specialized technical background, Hoover eventually moved into politics, becoming president in 1929.
Jimmy Carter: Nuclear Physics
America’s 39th president was born in 1924 in the small farming town of Plains, Georgia. He was educated in local public schools, studied engineering at Georgia Southwestern College and the Georgia Institute of Technology, and later received a Bachelor of Science from the United States Naval Academy in 1946. Carter then became a submariner, rising to the rank of lieutenant.
At the time, America’s nuclear submarine program was in its infancy. Carter was chosen to work on the program and was assigned to Schenectady, New York, to undertake graduate studies in reactor technology and nuclear physics at Union College. These highly specialized studies eventually led him to be selected as a senior officer on the USS Seawolf, the world’s second nuclear submarine. But when Carter’s father passed away, he was forced to resign to take over the family peanut farm. Nuclear science wasn’t particularly useful in the world of peanut farming, but it is certainly unique in the history of presidential qualifications.
Credit: Historical/ Corbis Historical via Getty Images
Lyndon B. Johnson: Teaching
When he was 12 years old, Lyndon B. Johnson famously told his classmates, “You know, someday I’m going to be president of the United States.” It was a bold claim, and one that initially seemed quite unlikely. The young Johnson went to summer courses at Southwest Texas State Teachers College, where he failed to impress. He then drifted about, did odd jobs — including manual labor on a road crew — and started getting into drunken fights, eventually leading to his arrest.
In 1927, Johnson got his life back on track. He returned to Southwest Texas State Teachers College and was assigned a teaching job at a tiny school in a very impoverished and largely Latino area, where he witnessed the extremes of both poverty and racism. He excelled in the role, and his experiences there had a profound effect on him, instilling a lifelong commitment to addressing poverty and advocating for civil rights. Johnson graduated in 1930 with a Bachelor of Science in history and a certificate of qualification as a high school teacher. He soon decided, however, that a teaching career wasn’t for him, turning instead to a life in politics that ultimately led to the White House — just as he had predicted as a child.