Once a staple of diners and TV dinners, Salisbury steak has quietly disappeared from American menus in recent decades. What began as a 19th-century “health food” became a frozen dinner icon, only to fall victim to changing tastes. Here’s a look back at the history of this once-proud patty.
The Origins of Salisbury Steak
Salisbury steak — seasoned ground beef patties mixed with breadcrumbs or other ingredients — was invented by James Salisbury, a New York physician who was fascinated by the relationship between diet and health. In the 1850s, he conducted a series of self-experiments in which he ate a single food exclusively for a few days or weeks at a time. His first test, a diet of only baked beans, produced disastrous results: “I became very flatulent and constipated, head dizzy, ears ringing, limbs prickly, and was wholly unfitted for mental work,” he wrote in The Relation of Alimentation and Disease (1888). Next came oatmeal and other staples, but it was ground beef, which he called “muscle pulp of beef,” that finally delivered the results he sought.
His prescription was simple: broiled beef patties, served with simple seasonings such as butter, salt, pepper, Worcestershire sauce, and lemon. This recipe, he wrote, “affords the maximum of nourishment with the minimum of effort to the digestive organs.” Vegetables, on the other hand, were not only unnecessary but also harmful in his view; Salisbury declared that vegetarians had “less nervous and muscular endurance than meat eaters.”
During the Civil War, Salisbury applied his nutrition theories to Union soldiers, who were fed a diet of beef patties instead of the usual hard biscuits, which tended to cause digestive issues. In the decades following the war, Salisbury’s diet evolved from medical prescription to one of America’s first fad diets, spurred on by the rise of wellness culture and increased public interest in healthy food. (Other notable 19th-century foods believed to promote health included graham crackers, which were originally intended to suppress the libido, and Kellogg’s cornflakes, which were served by their inventor John Harvey Kellogg at his world-famous sanitarium.)
World War I was a further boon for the Salisbury steak, as the name became preferred to the German-sounding “hamburger.” By the mid-20th century, the dish had become a blue-collar classic. Hearty, inexpensive, and easy to prepare in large batches, it appeared on diner menus and in Navy cookbooks across America.
Then came the postwar boom — and with it, the rise of television. In 1953, the Swanson company introduced the TV dinner, a frozen meal packaged in a compartmentalized aluminum tray. Within its first year of production, more than 10 million trays were sold in the U.S., and the Salisbury steak was one of the TV dinner’s most enduring offerings. For a generation of midcentury families, Salisbury steak was the taste of modern life: a hot meal ready in minutes, eaten in front of the television.
So What Happened to Salisbury Steak?
The same forces that made Salisbury steak famous would later make it unfashionable. In the 2000s, Americans began to fall out of love with processed foods, in part due to growing awareness of their negative health effects. In 2012, New York City Mayor Michael Bloomberg proposed a ban on large sodas in the city, and First Lady Michelle Obama’s “Let’s Move!” campaign encouraged children to eat better and exercise more. The Salisbury steak, by then synonymous with microwaved TV dinners, fell victim to its association with cheap, unhealthy meals.
That said, not all processed foods suffered this fate. Another reason for Salisbury steak’s decline may be that it was outcompeted by its more convenient cousin, the burger, which rose to prominence as one of America’s most popular and iconic meals during the latter half of the 20th century. With the spread of fast-food chains such as McDonald’s and Burger King, customers in the mood for a beef patty increasingly reached for the handheld version instead.
Though Salisbury steak has faded from the cultural spotlight in recent years, it is viewed with nostalgia by many who grew up with it and can still be found in home kitchens across America.
Ancient Egypt was an industrious civilization, with extensive trade networks, a highly productive agricultural economy, and ambitious construction projects that ranged from shipbuilding to monumental architecture. It existed for more than 3,000 years, expanding from a scattering of hunter-gatherer settlements to become one of the greatest empires in the world, famed for its hieroglyphic writing, advanced mummification, and, of course, the pyramids.
In order to fuel such industry, one thing was highly important: a solid breakfast. Luckily for the Egyptians, the fertile Nile River Valley provided abundant crops, helping transform Egypt into one of the most powerful agrarian civilizations of the ancient world. Annual flooding of the Nile created fields so fertile that in a good season, Egypt produced enough food to feed every person in the country with ease and still have a surplus.
So, there was little excuse for a laborer to skip breakfast, or for a pharaoh’s first meal of the day to be anything short of satisfactory. Here’s a look at what ancient Egyptians consumed in the morning before a long day in the Land of Ra.
Bread was a fundamental staple of the ancient Egyptian diet. According to William Rubel in Bread: A Global History, more bread-related artifacts have been found from ancient Egypt than from any other period. The Egyptians made their bread from emmer wheat, one of the first crops domesticated in the region. Emmer is a hulled wheat, making it more difficult to turn into flour than other varieties. It was ground on flat stones called querns and then baked in ovens.
Despite the popularity of bread in ancient Egypt, archaeologists have discovered one notable drawback. During the bread-making process, airborne sand often got into the mix, resulting in widespread dental problems caused by the abrasive nature of sand and grit in this common breakfast food.
Bread for breakfast seems totally normal to our modern sensibilities — but beer for breakfast? Not so much. In ancient Egypt, though, beer was a primary source of nutrition, consumed daily by the working classes. (The upper class typically favored wine.) Unlike modern brews, the beer of ancient Egypt was often opaque, thick, and more like a soup or gruel.
Made from barley or emmer wheat, the beer was typically quite low in alcohol content, around 3% to 4% — but still enough to start the day with a spring in one’s step. (Beer prepared for religious festivals and celebrations was normally of better quality and had a higher alcohol content.) The beverage was also sometimes issued as payment, and laborers — including those who built the pyramids of Giza — were provided with a daily ration of about 10 pints.
Green Onions
What goes well with bread and beer for breakfast? Green onions, of course. In ancient Egypt, a simple breakfast of bread, beer, and green onions was fairly typical, providing enough energy for a laborer to get through much of the day. Green onions, also called scallions, were among the most common vegetables in ancient Egypt, and were valued for more than just their flavor and nutritional value. The Egyptians believed that onions warded off evil spirits, protected against the evil eye, and also cured most ailments, making them a fine way to start the day.
Garlic
As they did with green onions, the ancient Egyptians regarded garlic as having culinary, medicinal, and protective qualities. Garlic formed part of the daily diet of many Egyptians, and was considered a key food for maintaining and increasing strength. Because of this, it was given to workers involved in heavy labor, including those who built the pyramids. It’s uncertain whether the Egyptian elites consumed garlic with such frequency, but historians do know that it was used in rituals. Well-preserved garlic cloves were also discovered in the tomb of Tutankhamun.
Egypt is the world’s top producer of dates today, yielding around 1.7 million tons annually. And dates were just as popular back in ancient times. They could be eaten fresh or dried for preservation, and the fruit’s high sugar content made it an ideal sweetener. Dates were sometimes baked into bread, and may occasionally have been added as a flavoring agent in beer — providing some necessary sweetness to counteract the green onions and garlic. When dates weren’t available, figs could serve a similar role in any ancient Egyptian breakfast.
While most ancient Egyptians used dates and figs as sweeteners, those who could afford it used honey. Ancient beekeepers collected honey by making cylindrical hives out of mud and clay, which they kept on rafts to allow for seasonal movement along the river. Wealthier Egyptians could afford this superior sweetener, which they would eat for breakfast with their bread, or use to sweeten beer and cakes. Honey nut cakes, made with tiger nuts (a type of tuber), were particularly popular among the aristocracy.
The ancient Egyptians also valued honey for its medicinal qualities, using it to treat wounds and soothe coughs. And, when mixed with garlic and the âfai plant, honey was considered highly effective at keeping ghosts at bay, making it the perfect breakfast ingredient for anyone looking for a little extra protection for the day.
Take a look around your kitchen and chances are you’ll spot a few brand names you’ve known your whole life. From Keebler cookies to Campbell’s soup, certain monikers feel like part of the family.
Food branding — the use of a distinctive name or mark to identify a product — emerged in the late 19th century as industrialization made large-scale food production and packaging possible. Before then, most foods were sold in bulk, with no consistent labeling. Branding introduced the idea of reliability and reputation, allowing consumers to recognize and trust particular producers.
A few food and drink companies, however, had already established identities long before branding became widespread. Some of these brand names predate the industrial era itself and have remained in continuous use for centuries. How many do you have in your pantry?
In 1706, English merchant Thomas Twining opened Tom’s Coffee House in London and began offering fine-quality tea alongside the typical coffee and hot chocolate, in the hopes of standing out from the competition. Social convention prohibited women from visiting coffeehouses, so Twining expanded his business in 1717 to include a coffee and tea shop where women could buy their tea directly.
The Twinings Tea logo, created in 1787, is recognized as the oldest continuously used unaltered corporate logo in the world, and the company has operated from the same address — 216 Strand in London — since its founding. It was acquired by Associated British Foods in 1964, though members of the Twining family are still involved in its operations.
Perhaps surprisingly, the baking chocolate in our favorite homemade desserts owes its name to its founder, not its use. Baker’s Chocolate began in 1764 in Dorchester, Massachusetts, when Irish immigrant John Hannon, a chocolate maker, and physician James Baker established a small mill on the Neponset River to grind cacao beans into drinking chocolate. When Hannon disappeared on a cocoa-buying voyage in 1779, Baker assumed full control and began marketing the product under his name.
In 1823, the company was formally incorporated as Walter Baker & Company, named for Baker’s grandson Walter. At the time, chocolate was consumed primarily as a beverage — Baker’s didn’t start manufacturing sweetened chocolate bars until 1849. The company went through multiple ownership changes during the 19th and 20th centuries but continued producing chocolate under the Baker’s name, which remains in use today as part of the Kraft Heinz Company.
The company now known as King Arthur Baking traces its origins to 1790, when Boston merchant Henry Wood founded Henry Wood & Company to import and distribute flour. In 1896, the company — by then operating as Sands, Taylor & Wood — introduced its own flour brand under the name King Arthur Flour at the Boston Food Fair.
The company relocated to Norwich, Vermont, in the late 20th century and officially adopted the name King Arthur Flour Co. Employee-owned since 2004, the company updated its name to King Arthur Baking Company in 2020, maintaining continuity of its 19th-century brand name while reflecting its broader range of baking products.
Keebler began in 1853 when German immigrant Godfrey Keebler opened a small bakery in Philadelphia. From there, he took on business partners and eventually incorporated under the name Keebler-Weyl Baking Company. In 1927, the business became part of the United Biscuit Company of America, and in 1936, Keebler-Weyl became the first commercial baker of Girl Scout Cookies — a tradition continued today by Little Brownie Bakers, a division of Keebler. The parent company adopted the Keebler brand name for all of its products in 1966, but it wasn’t until 1969 that the first Keebler elf — dubbed J.J. Keebler — was introduced in a television commercial.
The company now synonymous with tomato soup began in 1869 in Camden, New Jersey, as a partnership between Joseph A. Campbell, a fruit and vegetable merchant, and Abraham Anderson, a commercial canner and packer. The business was initially called Anderson & Campbell and produced canned tomatoes, vegetables, and jellies. In 1895, the company introduced its first jar of ready-to-eat beefsteak tomato soup. Two years later, in 1897, chemist John T. Dorrance developed the firm’s condensed soup process, allowing it to be packaged in cans, and the following year the company introduced its iconic red-and-white soup can label. In 1922, the name changed to The Campbell Soup Company, and in 2024 it became The Campbell’s Company.
At the recommendation of his uncle, Charles A. Pillsbury invested in a struggling Minneapolis flour mill in 1869. The business thrived under his management and he expanded operations along the Mississippi River, renaming the company Charles A. Pillsbury and Co. In 1889, the company was purchased by an English financial syndicate and merged with other holdings to become the Pillsbury-Washburn Flour Mills Company, Ltd. The Pillsbury family eventually regained ownership in the 1920s and incorporated the business as the Pillsbury Flour Mills Company in 1935. Ownership changed hands in subsequent decades and the historic company was acquired by General Mills in 2001.
The Quaker Oats story began in 1877, when Henry Seymour and William Heston founded the Quaker Mill Company in Ravenna, Ohio, and trademarked the Quaker Oats name and image of a Quaker man to distinguish their oats from competitors. The struggling mill was bought a few years later, in 1881, by entrepreneur Henry Parsons Crowell, who turned the brand into a national name with the first magazine ad campaign for a breakfast cereal.
In 1888, the Quaker Mill Company joined six other oat producers to form the American Cereal Company, which pioneered modern packaging and marketing. By 1891, Quaker was printing recipes — such as Oatmeal Bread — right on its boxes, another industry first. The company officially became The Quaker Oats Company in 1901, and a century later, after a 2001 acquisition by PepsiCo, the name evolved to Quaker Foods and Beverages.
10 Defunct Fast-Food Restaurants That People Used To Love
There’s something timeless about fast food: the neon-lit parking lots, the scent of grilled burgers and deep-fried chicken, the thrill of unwrapping something hot and delicious. The experience is as satisfying today as it was 50 years ago.
Americans’ love affair with fast food began with White Castle, the first chain fast-food restaurant, founded in 1921 in Wichita, Kansas. Famous for its small “slider” burgers, standardized production, and spotless kitchens, White Castle created the template for fast food as we know it — and it’s still going strong today, with 345 locations across the U.S.
A few decades later, McDonald’s took the concept nationwide, opening its first franchised restaurant in 1955. Today, Mickey D’s operates more than 13,000 U.S. locations and 28,000 internationally, serving billions of burgers, fries, and shakes in more than 100 countries. Along the way, dozens of other fast-food chains emerged, but not all stood the test of time.
From burger empires to roast beef innovators and seafood specialists, here are 10 defunct fast-food restaurants that once ruled the roadsides and mall food courts before disappearing (or nearly so). How many of these vintage chains have you visited?
At its peak in the 1970s, Burger Chef was one of America’s largest fast-food chains, boasting more than 1,200 locations across 38 states. Founded in 1958, it earned a loyal following for its flame-broiled burgers, creative advertising, and its Funmeal, a kids’ combo that debuted before McDonald’s introduced the Happy Meal.
The chain’s reputation took a hit after a tragedy in 1978, when four employees were abducted and killed in a crime that remains unsolved. The tragedy, combined with growing competition from Burger King and McDonald’s and some corporate missteps, sent the chain into decline, and it was eventually sold to Hardee’s in 1982.
With more than 300 locations across 19 states and several countries, Red Barn was once a major player in the fast-food landscape. Established in Ohio in 1961, this beloved dining spot was memorable for its barn-shaped buildings and its jingle, “When the hungries hit / Hit the Red Barn.” The chain pioneered salad bars and self-service beverage stations long before they became industry standards, and it introduced the cartoon characters Chicken Hungry, Big Fish Hungry, and Hamburger Hungry.
Despite also introducing a double-decker burger, the Big Barney, years before the McDonald’s Big Mac, Red Barn couldn’t withstand corporate mergers and rising competition in the 1970s, and the chain went bankrupt in 1986.
Burger Queen was founded in 1956 in Winter Haven, Florida — two years after Burger King’s debut — serving burgers, fries, milkshakes, and later, fried chicken, breakfast, and salad bar items. At its peak, the chain had more than 200 U.S. locations and a cartoon mascot, Queenie Bee, introduced in 1971.
But legal battles with Burger King and Dairy Queen complicated growth, and the chain rebranded as Druther’s in 1980 to better reflect its broader menu. After forming an equity partnership with Dairy Queen in the late 1980s, most Druther’s locations became Dairy Queen franchises, leaving only one independent Druther’s in Campbellsville, Kentucky.
Founded in 1969 in Columbus, Ohio, this restaurant was named after English actor Arthur Treacher and capitalized on the British pop culture craze with its fast-food take on fish and chips. At its height, the chain had 826 locations across the U.S., many featuring something that hadn’t been seen before in the fast-food world: a baked potato bar.
Unfortunately for the chain, the 1970s “Cod War” between the U.K. and Iceland sent fish prices soaring, while changing consumer tastes and declining interest in fried seafood further hurt profits. By the late 1980s, most locations had closed or been rebranded. Today, only three stand-alone Arthur Treacher’s restaurants remain.
Launched by Bresler’s Ice Cream Company in 1954 in Chicago, Henry’s Hamburgers once rivaled McDonald’s, with more than 200 locations across 35 states. Known for its “10 burgers for a dollar” deal, as well as 15-cent burgers, 10-cent fries, and colorful signage, it was a family favorite during the fast-food boom.
But inconsistent franchise oversight, failure to modernize for drive-thru trends, and declining sales doomed the brand by the mid-1970s. Today, a single Henry’s remains in Benton Harbor, Michigan — and has offered drive-thru service since 1988.
Founded in 1967 in Springfield, Ohio, Rax Roast Beef quickly became a formidable rival to Arby’s — expanding to more than 500 locations across 38 states by the mid-1980s. Known for its roast beef sandwiches and buffet-style salad bars, Rax aimed to elevate fast food with upscale decor and a diverse menu.
However, a misguided rebrand featuring the controversial “Mr. Delicious” mascot and an expanded menu that included pizza and Chinese food led to confusion and declining sales. Bankruptcy filings and ownership changes followed, and by the 1990s, most locations had closed. Today, only six Rax restaurants remain: one in Illinois, one in Kentucky, and four in Ohio.
Sandy’s got its start in 1958 in Peoria, Illinois, when a group of local entrepreneurs were denied a McDonald’s franchise. With Scottish-themed branding, a tartan-clad mascot named Sandy, and signature items such as the “Big Scot” burger, the chain emphasized thrift, speed, and friendly service. At its height, Sandy’s boasted more than 240 locations across 20 states.
In 1972, the chain merged with Hardee’s, and by 1979, the Sandy’s name was phased out — though a few independent franchises carried the brand into the 1980s under names such as Zandy’s and Bucky’s.
Wetson’s, founded in Levittown, New York, in 1959, grew into a regional hamburger chain with 72 locations at its peak in the 1960s and early 1970s. The brand was known for its budget pricing, the “Big W” signature sandwich, and clown mascots named Wetty and Sonny — plus slogans such as “Look for the Orange Circles.”
As national chains such as McDonald’s and Burger King expanded into the New York market, though, Wetson’s steadily lost ground. By the mid-1970s, it had merged with Nathan’s Famous — after which most Wetson’s locations were closed or rebranded.
Promising healthier fast-food options, D’Lites offered lower-calorie burgers, diet shakes, and other lighter takes on drive-thru favorites long before wellness culture went mainstream. After the first location opened in Norcross, Georgia, in 1978, the chain rode the early ’80s fitness craze to expand rapidly, reaching more than 100 locations across 19 states.
Overexpansion and high costs eventually led to bankruptcy in 1986. By 1987, most locations were either closed or acquired and converted into Hardee’s locations, as Hardee’s sought to grow quickly in the same regions. Short-lived though it was, D’Lites helped pioneer the idea that fast food could be healthier.
Pup ’n’ Taco
Founded in 1956 as a drive-in stand in Pasadena, California, Pup ’n’ Taco served an unusual but much-loved menu of hot dogs, tacos, burgers, pastrami sandwiches, and tostadas. The first fast-food restaurant officially named Pup ’n’ Taco opened in Pasadena in 1965, and at its peak the chain had 108 locations, mostly in Southern California and also in Albuquerque, New Mexico.
In 1984, Taco Bell (then part of PepsiCo) acquired 99 of the Pup ’n’ Taco locations, largely for their real estate, which effectively ended the chain. A few stores in New Mexico survived the deal under the name Pop ’n’ Taco, two of which lasted until 2013.
Few traditions feel as universal as gathering around a frosted cake, lighting candles, and singing “Happy Birthday.” While the ritual seems timeless, the story of why we eat cake on our birthdays stretches back thousands of years — winding through ancient temples, Roman banquets, German children’s parties, and American kitchens.
The word “cake” comes from the Old Norse kaka, but cakes in the ancient world looked quite different from today’s airy, sugar-laden desserts. Early cakes were dense, breadlike creations sweetened with honey, enriched with eggs or cheese, and flavored with nuts, seeds, or dried fruits such as raisins or figs. Archaeological and textual evidence shows that cakes were baked in Mesopotamia more than 4,000 years ago, and the Roman writer Cato described cakes wrapped in leaves and served at weddings and fertility rites.
But cakes weren’t just food — they were often sacred offerings. The Greeks presented honey cakes and cheesecakes to their gods, sometimes decorated with candles. One common offering to Artemis, goddess of the moon and the hunt, was the amphiphon, a round cheesecake topped with glowing candles meant to mimic the moon. Romans, too, baked cakes for religious purposes, including the libum, a mixture of cheese, flour, and egg baked on bay leaves as an offering to household gods. In these early forms, cakes linked the human and divine, symbolizing gratitude, fertility, or cosmic cycles.
Though cakes were plentiful, birthdays were only inconsistently celebrated in antiquity. While there’s some evidence for these annual celebrations in ancient Sumer, they weren’t common in ancient Greece. The Romans were the first to mark personal birthdays, though only for men — women’s birthdays weren’t celebrated until the Middle Ages. Roman citizens honored relatives and friends with feasts, and men turning 50 received a special cake made with wheat flour, nuts, honey, and yeast.
Still, birthday cake traditions were limited and inconsistent during ancient times. For centuries, cakes remained associated with weddings, festivals, and offerings to the gods rather than private anniversaries.
The modern birthday cake owes its clearest debt to 18th-century Germany. There, children’s birthdays were marked with Kinderfeste — festive gatherings that included a sweet cake crowned with candles. Each candle represented a year of life, plus one extra for the year to come, a tradition that still survives. The candles were lit in the morning, replaced throughout the day, and finally blown out in the evening after dinner. Much like today, children were encouraged to make a wish as the smoke carried their hopes skyward.
This ritual blended ancient practices, such as Greek candlelit offerings and Roman birthday cakes, into something recognizably modern: a sweet centerpiece, flames to mark the passage of time, and the magical moment of blowing out candles.
Even with Kinderfeste in Germany, birthday cakes weren’t for everyone, as the food was considered a luxury treat. For most of history, cakes required expensive ingredients such as refined sugar, fine flour, and butter. They were labor-intensive to make and decorated with painstaking artistry. In the early 19th century, American cookbook author Catharine Beecher suggested apple bread pudding as a child’s birthday treat — simple, hearty, and inexpensive. However, children’s birthday parties weren’t common celebrations in the U.S. until after the Civil War.
The Industrial Revolution played a decisive role in democratizing cake. As ingredients became cheaper and more widely available, bakeries could mass-produce cakes and sell them at affordable prices. And for home cooks, birthday cakes reached new heights of extravagance by the early 20th century. In 1912, the popular Fannie Farmer Cookbook included a full-page photo of a “Birthday Cake for a Three-Year-Old,” complete with angel cake, white icing, candied rose petals, sliced citron leaves, and sugar-paste cups for candles. These elaborate confections reflected both rising prosperity and a more ambitious form of cake decorating.
Today’s birthday cakes are often simple — such as a frosted sheet cake with a name piped across the top (a tradition that began around the 1940s) — but the symbolism has a long and rich history. Cakes mark abundance, indulgence, and festivity. Candles represent the passage of time, each flame a year lived, each puff a wish cast into the unknown. The act of gathering around the cake, singing together, and celebrating life’s milestones connects us across centuries to ancient rites of gratitude and hope.
From moonlit cheesecakes for Artemis to a 3-year-old’s angel cake in Fannie Farmer’s kitchen, birthday cakes tell a story about how humans celebrate time, community, and the sweetness of life. Every slice is part of a lineage that is both sacred and ordinary — proof that sometimes the simplest rituals carry the deepest history.
Humans are superstitious creatures by nature, with many strange habits that seem entirely illogical. We avoid walking under ladders or opening umbrellas indoors in fear of bad luck. We knock on wood to prevent disappointment. We shun the number 13 and we can’t quite decide whether black cats are good or bad omens. None of these actions makes much practical sense, and the same is true for a range of superstitions involving food.
Food is a necessity that keeps us functioning and alive, but eating is also a cultural experience, rich with symbolic gestures, long-held traditions, and curious rituals. These include plenty of superstitions believed to bring luck, prosperity, health, wealth, and a range of other supposed benefits. And while modern science may dismiss these practices as mere folklore with no logical basis, there are plenty of common food-based superstitions we just won’t let go.
Here are six superstitions involving food, all of which demonstrate the human desire to find greater meaning or significance in the otherwise simple and essential acts of cooking, eating, and sharing meals.
According to one common food superstition, if you accidentally spill salt, you should immediately throw a pinch of it over your left shoulder. The origins of this strange belief aren’t entirely clear. It may date back to the ancient world, when salt was a highly prized commodity among peoples such as the Romans and Sumerians, and spilling it was therefore frowned upon.
Later, during the Renaissance, Leonardo da Vinci created one of his most famous works, “The Last Supper,” in which Judas Iscariot is portrayed knocking over a container of salt with his elbow, suggesting that the connection between spilled salt and bad luck was well established by that time. But why do we throw the spilled salt over our left shoulder? The common belief today is that the devil and evil spirits are said to lurk over the left shoulder, and the pinch of jinx-reducing salt is destined for their eyes.
In the American South, eating black-eyed peas on New Year’s Day is a common tradition and superstition said to bring luck and prosperity throughout the year ahead. When enslaved Africans brought black-eyed peas to America, the beans were initially used as food for livestock and enslaved people only. Black-eyed peas gained wider acceptance during the Civil War, when they were one of the few foods left untouched by Union troops, who considered them animal feed. Southerners therefore managed to survive on black-eyed peas during the winter, and so began the association with good luck and prosperity, and the tradition of eating them on New Year’s Day.
Breaking the wishbone of a cooked chicken or turkey is a common Christmas practice in the United Kingdom and a Thanksgiving tradition in the United States. Objectively, it’s a very strange thing to do, no matter how much luck might be up for grabs: It involves making a wish while pulling the bird bone in two with another person, and the person who gets the bigger piece will have their wish granted.
The origins of this peculiar superstition are debatable. One common theory goes all the way back to the ancient Etruscans, who saw birds as potent oracles. They extracted wishbones from chickens, dried them in the sun, and then touched the bones as a form of divination. The Romans picked up this practice later, by which time the bone was being snapped in half, possibly to double its power. The Romans then introduced the concept to Britain, and it eventually found its way to the Americas via English settlers.
It’s hard to say whether the superstition really has such a long and storied history with a direct link all the way back to the Etruscans. We do know that the wishbone-breaking tradition as we know it today existed at least as early as the 17th century or early 18th century, when the bone was known as a “merrythought.” The term “wishbone” first appeared a century or so later.
Statistically, about one in every thousand eggs has a double yolk. So, if you regularly crack open and cook eggs, there’s a chance you’ll come across one at some point. For the superstitious, an egg with two yolks is widely considered a sign of impending good luck. It’s not known where or when this superstition emerged, but the reasoning behind it is clear to see. Eggs have long been associated with life, rebirth, and potential, making a double yolk a natural candidate for a symbol of abundance, prosperity, and good luck. Double yolks are sometimes regarded as a sign of an upcoming marriage, or that a woman will soon become pregnant with twins.
The superstitious tradition of blowing out candles on birthday cakes has surprisingly ancient origins. Some historians believe it goes as far back as the ancient Greeks, who may have made cakes adorned with lit candles to honor Artemis, the goddess of the hunt and moon. This, in turn, was adopted by the Romans, who helped spread the tradition.
In modern Europe, the ritual of celebrating birthdays with a cake and candles — as we do today — has been around since at least the 18th century. One of the first documented accounts comes from the 1746 birthday party of Count Ludwig von Zinzendorf, a German bishop, who had a massive cake covered in candles. At this point, in Germany at least, the act of extinguishing the candle flames was seen as a way to carry desires up to the gods — not dissimilar to how we blow out candles and make a wish today.
The tradition of eating a dozen grapes on New Year’s Eve comes from Spain, and while the origins are still debated, it dates back to at least the 1880s. According to some historians, the Spanish bourgeoisie decided to imitate the French New Year’s celebration of drinking champagne by skipping the middleman and going straight for the grapes. Others suggest the tradition began as a shrewd marketing tactic by grape growers who had a surplus harvest to unload in the early 1900s.
Either way, the ritual caught on: By eating one grape at each of midnight’s 12 clock chimes, you are destined for a lucky year, with each grape representing one of the 12 forthcoming months. The custom soon spread throughout Latin America, in countries as diverse as Cuba, Mexico, Puerto Rico, Argentina, and Peru. Some Latino populations in the U.S. have also maintained the grape-eating superstition.
From our modern vantage point, the culinary options of bygone cultures are sometimes difficult to comprehend. It seems that hungry people gobbled down anything they could get their hands on, including dormice (rodents), beaver tails, and fish bladder jam.
But while some of the choices seem unusual in hindsight, we can at least grasp their nutritional value. Other foods, however, were just downright dangerous to the human digestive system, and certainly wouldn’t have been on the menu had the consumer been aware of the consequences. Here are five toxic foods that people unwittingly used to eat.
Offering a rich source of vitamins, protein, and fatty acids, seafood is generally considered among the healthiest cuisine to eat — unless, of course, the specimens being consumed contain sky-high concentrations of heavy metals. Such was the case with the Atlantic cod and harp seals that comprised the meals of Stone Age settlers in northern Norway’s Varanger Peninsula around 5000 to 1800 BCE.
According to a recent study, cod bones from the settlement contained levels of cadmium up to 22 times higher than contemporary recommended limits, while seal bones showed similarly dangerous levels of lead. While it might seem strange that wildlife carried such heavy-metal contamination in an era well before industrialization, the study authors suggest this was the result of climate change. It’s possible the thaw at the end of the last ice age (which lasted from roughly 120,000 to 11,500 years ago) produced rising sea levels that carried metal-rich soil into the water.
It’s well known that the ancient Romans enjoyed their wine, but it’s possible a component of the winemaking process fueled ill health in a manner that went beyond the typical hangover. The Romans made a sweet, syrupy substance called defrutum, which was prepared by boiling unfermented grape juice. This syrup was used as a preservative for wine and fruit, as well as in sauces for dishes of pork, veal, and lentils, as described in the famed Roman cookbook De Re Coquinaria.
The problem was in the sweetener’s preparation: Ancient scholars including Cato and Pliny the Elder called for the syrup to be boiled in a lead-lined pot, inevitably resulting in the syrup’s absorption of lead. Although the hazards of lead poisoning were known to the Romans, it apparently never occurred to these great minds that they were endangering the public with their instructions.
Nowadays, a typical Easter meal might include a ham and a sampling of the chocolate left by the Easter Bunny, but for Christians in medieval England, the holiday was incomplete without the serving of the tansy. The dish was named for its primary ingredient, the yellow-flowered tansy plant, which was mixed with herbs and a hearty helping of eggs to produce what was essentially a large, sweet omelet.
Coming on the heels of Lent, the tansy not only provided a welcome change from the strict diet of lentils and fish consumed over the previous 40 days, but was also said to provide relief from the gas-inducing Lenten meals. Despite its purported medicinal qualities, the plant is actually mildly toxic — its thujone content is dangerous to humans in high doses. Although the poison didn’t hinder the long-standing popularity of the tansy on dinner tables, people are generally dissuaded from eating the plant today.
You could highlight an array of foods in Victorian England that would fail to pass muster under any food safety laws, from the lead chromate found in mustard to the arsenic compounds used to color confectionery. However, given its ubiquity in households of the era, the most egregious example may well be bread.
Seeking to create thick loaves of an appealing white hue, Victorian bakers mixed in such ingredients as ground-up bones, chalk, and plaster of Paris. Another common additive was alum, an aluminum-based compound that inhibited digestion and contributed to the malnutrition rampant among the poorer population. Although the dangers of adulterated edibles were known among the more educated members of the public, there was little stopping the food producers and distributors who ignored these health risks in favor of profits.
Rhubarb Leaves
Known for its reddish stalk and tart flavor, rhubarb in the hands of a capable chef can be used to create delicious pies, sauces, and jams. That is, the stalks can be turned into such kitchen delights — the thick green leaves are chock-full of toxic oxalic acid and therefore not meant for ingestion. Unfortunately, this fact was not well known a century ago, as rhubarb leaves were recommended as a source of vegetation during the food shortages of World War I.
Consumed in small but regular doses, the leaves inhibit the beneficial effects of calcium and trigger the buildup of calcium oxalate, leading to kidney stones. While a human would normally have to eat something like 6 pounds of the stuff to experience the more acute effects (including vomiting, diarrhea, and kidney failure), there was at least one reported case of oxalic acid poisoning during the rhubarb leaf’s brief run as a lettuce substitute.
According to popular legend, the English aristocrat John Montagu, the 4th Earl of Sandwich, was engaged in an all-night card game in 1762 when he became distracted by hunger pangs. Not wanting to stop playing, he instructed his servant to bring him a snack of beef between two slices of bread, allowing him to satiate the twin desires of filling his belly and raking in more dough.
While he was hardly the first person in history to consider eating food in this fashion — Montagu may have been inspired by culinary creations in Turkey and Greece — the earl’s idea caught on across English high society and led to the honor of having his name affixed to this particular bread-based meal.
The sandwich soon spread to other social strata across Europe and in the American colonies, its popularity underscored by increasing appearances in cookbooks through the 19th and 20th centuries. However, numerous once-popular foods have failed to survive to the present day, and the same goes for certain old-fashioned sandwiches; some of them are just too bizarre for modern palates. Here are six sandwiches that were (mostly) pushed aside by modern diners in favor of tastier options.
In the U.S. in the 19th and early 20th centuries, oysters were a popular sandwich filling. Sandwiches known as “oyster loaves” were featured in Mary Randolph’s 1824 cookbook and guide The Virginia Housewife, and in numerous entries in Eva Green Fuller’s 1909 Up-To-Date Sandwich Book. The first and most basic recipe from Fuller’s book instructs readers to add a dash of Tabasco sauce, lemon juice, and oil to chopped raw oysters (without specifying measurements), slather the mixture on white bread, and then top it off with a lettuce leaf.
A sandwich aficionado by the name of Barry Enderwick has dug up a trove of old cookbooks to recreate forgotten specialties on his "Sandwiches of History" social media channels, and the yeast sandwich is one such bygone dish. Yeast is typically used for the processes of fermenting beer and leavening bread, and it's unusual to find it as a featured ingredient of a dish. But in the 1930s, there was a push on the part of Fleischmann's to promote the nutritional benefits of its product, resulting in an entry in Florence A. Cowles' 1001 Sandwiches (1936). Cowles calls for five drops of "table sauce" to be added to a cake of compressed yeast, with the resulting paste spread on a cracker or bread. Regardless of whether the table sauce was meant to be ketchup, Worcestershire sauce, or another ingredient, this sandwich did little to help the ultimately unsuccessful attempt to enhance American cravings for yeast-filled meals.
The pickle sandwich is actually a bit of a misnomer. Yes, there are (chopped) pickles in here, but the instructions in 1916's Salads, Sandwiches and Chafing Dish Recipes also call for mixing the brined vegetable with whipped cream, mayonnaise, and grated horseradish, and adding chopped cooked beef to the mixture atop buttered bread.
Popcorn is a popular snack, so why not pair these bite-sized goodies with yummy buttered toast to produce an extra-delectable dish? That seems to have been the idea behind the recipe in 1909's Up-to-Date Sandwich Book, except the dish also calls for readers to add "five boned sardines, a dash of Worcestershire, and enough tomato catsup to form a paste."
Yes, the toast sandwich is exactly what it sounds like. According to the bestselling Beeton's Book of Household Management from 1861, the recipe simply calls for inserting a piece of cold toast between two slices of buttered bread and seasoning it all with salt and pepper (although the author, Isabella Beeton, helpfully suggests that it could be livened up with slices of meat). While clearly a relic of an era of different tastes, the toast sandwich was revived by the United Kingdom's Royal Society of Chemistry in 2011, and has also surfaced on the menu of the Fat Duck restaurant in Bray, Berkshire, England.
Mashed Potato Sandwich
This creation, also known as the "Greatest Man Sandwich in the World," is attributed to the one and only Gene Kelly. While it's unclear where the recipe first appeared, or whether it was invented or adopted by the famed performer, it is clear that this sandwich is not for folks looking to limit their carbs. The recipe requires a thick layer of leftover mashed potatoes to be spread on buttered French bread and topped with onion slices, mayonnaise, and a hearty dose of salt and pepper; the product is then browned in a broiler. According to one recipe, it was to be enjoyed with the "nearest mug of beer."
Since most of us walk into a grocery store with our minds fixated on the items needed to fill up the fridge and pantry, it’s rare that we take the time to marvel at the wonders of modern food shopping. Whether it’s a small neighborhood mart, a chain supermarket, or a gargantuan superstore, today’s grocery stores offer a dizzying range of brands for any given product, allowing discerning shoppers to make a choice based on price, ingredients, or even packaging. All necessary (and unnecessary) items can be wheeled in a cart to a checkout line, where a friendly employee will happily tabulate the items and accept various forms of payment. There are also self-checkout stations, where you can scan your items yourself and be on your way even faster.
Of course, such a process would have been completely alien to early humans who relied on hunting and gathering their food. And it likely would be fairly shocking even to the people accustomed to earlier forms of food shopping. Here’s a look at what grocery stores were like before the rise of Publix, Whole Foods, Trader Joe’s, and the other popular stores we frequent today.
According to Michael Ruhlman’s book Grocery: The Buying and Selling of Food in America, the earliest grocery depots in the U.S. were the country stores that surfaced in the 17th century. Along with offering a limited supply of culinary staples such as sugar, flour, and molasses, these markets provided a smorgasbord of other necessities of colonial America, including hardware, soap, dishes, pots, saddles, harnesses, shoes, and medicine. By the early 19th century, these stores — originally constructed from logs and mud — were largely replaced by newer frame buildings, which contained cellars that were large enough to house casks of whale oil and also cool enough to store eggs, butter, and cheese.
By the middle of the 19th century, the general store was a common sight across the small towns of the expanding United States. Similar to the country store, general stores stocked goods that both satiated hunger and catered to other crucial needs of paying customers. Food items included coffee beans, spices, honey, oatmeal, and dried beans, many of which were kept in barrels and required measuring the desired amount on (often inaccurate) scales. The stores also offered nonedible wares, including cloth, buttons, undergarments, hats, lamps, rifles, and ammunition.
Normally featuring at least one large display window, these stores were typically packed with goods piled high on shelves and tables amid the boxes and barrels stuffed into available spaces. A front counter displayed smaller items as well as such contraptions as a coffee grinder, scales, and the cash register. As general stores typically doubled as community centers, they were usually fitted with a stove to warm inhabitants during cold-weather months and often featured chairs for those who planned to stay and chat.
Even so, general stores were neither the most comfortable nor the most sanitary places. Customers often dragged in dirt and animal waste from the unpaved roads outside, while cast-iron stoves could produce a layer of soot over the displayed wares.
Although the term "grocery" referred to a tavern or a saloon through much of America's early history, it came to signify the emporiums that were appearing in more densely populated urban centers by the late 19th century.
These grocery stores typically carried around 200 products, and occupied 500 to 600 square feet of floor space. As with general stores, these shops featured items piled high on shelves, with clerks reaching for everything presented to them on a list and continuing to measure out both liquid and solid goods from barrels.
Although most proprietors were willing to offer a line of credit — a precursor to today’s credit cards — these shops provided limited choices, typically stocking only one type of each product. Also unlike most modern grocery stores, they largely sold dry goods, meaning customers typically needed to make separate trips to a butcher, bakery, and produce peddler to round out the full array of dining needs.
The grocery model began to change when the Great Atlantic & Pacific Tea Company, owners of the A&P grocery chain, opened the first of its "economy stores" in 1912. Along with eliminating the credit lines and free delivery offered by independent retailers, A&P stores became known for offering popular brands such as Eight O'Clock Coffee.
The Tennessee-based Piggly Wiggly chain introduced additional innovations to the model with the opening of its first store in 1916. Most groundbreaking was its procedure of "self service." While goods still lined the lone aisle that snaked through the store, they were now within reach of customers who picked out the price-marked items themselves instead of delegating the responsibility to a clerk. The customers then placed the item in a wood basket — also a novel feature — before bringing everything to a checkout counter.
The streamlined processes resulted in lower costs for customers, and in turn produced an explosion in sales for retailers. The number of grocery chain stores increased from 24,000 to around 200,000 between 1914 and 1930, with the biggest, A&P, accounting for 16,000 of those outlets.
Although the 1920s saw the rise of "combination stores" that provided a wider range of meats, produce, and dairy products under one roof, the first true supermarket arrived with the opening of New York City's King Kullen in 1930.
A former employee of the Kroger grocery chain, Michael Cullen decided to launch his new enterprise in a slightly more remote section of Jamaica, Queens. The location allowed for cheaper real estate, which meant that Cullen could open a larger store and provide space for the machines that were transforming American lives: automobiles.
The first King Kullen covered 6,000 square feet and offered 10 times the amount of food products as most existing markets, much of which was sold out of packing crates. It was soon surpassed by New Jersey's Big Bear, which featured a 15,000-square-foot grocery surrounded by areas leased to vendors selling tobacco, cosmetics, and auto parts.
Although other features of the contemporary shopping experience had yet to be invented — shopping carts arrived in the late 1930s, the barcode system in the 1970s — the blueprint for the modern grocery store was in place, with supermarkets eventually expanding to stock 50,000 items, and colossal chains swelling to a footprint of more than 200,000 square feet.
Nowadays, as we casually stroll down the block-length aisles of superstores beneath towering shelves of colorful boxes, examine the fine print on packages in the organic section, or nibble on the free samples of cheese and dumplings, we can spare a thought for the evolution of this essential part of everyday life.
Few things are as refreshing as an ice-cold drink on a hot day. Indeed, ice is an essential part of the beverage industry today, with the global ice maker market valued at more than $5 billion. In the United States alone, the average person consumes nearly 400 pounds of ice per year.
Despite its popularity, most of us have probably never thought about how ice-cold drinks evolved into an everyday necessity. This simple pleasure has a long and interesting history shaped by ancient ingenuity, global trade, and evolving technology. From emperors importing ice from distant mountains to entrepreneurs revolutionizing its distribution, the journey of the ice in our drinks is a story of innovation that dates back to the first human civilizations.
Long before refrigerators and freezers, ancient civilizations found ingenious ways to keep drinks cool. The earliest recorded instance of ice storage dates back to the reign of Shulgi, the king of Ur in Mesopotamia from 2094 to 2046 BCE. Shulgi named the 13th year of his reign “Building of the royal icehouse/cold-house” (years were often named after a significant event), suggesting the construction of an icehouse during that period.
In China, the practice of harvesting and storing ice dates back to at least 1100 BCE. During the Zhou dynasty (1046 to 256 BCE), the royal court established a specialized department responsible for collecting natural ice blocks each winter and storing them in icehouses for use in the warmer months. This stored ice was used to cool food and beverages, including wine, and was also used in medical treatments.
Over time, ice collection became an organized practice, with officials overseeing its storage and distribution. Around 400 BCE, the Persians took preservation a step further by constructing yakhchals — massive, domed icehouses made of heat-resistant mud brick. These structures allowed them to store ice year-round, even in the arid desert climate. By carefully directing water into shallow pools that froze overnight, they amassed ice supplies that could later be used to cool drinks or create early versions of frozen treats.
Around the world, chilled beverages were a luxury enjoyed by the wealthy — one that did not become widely accessible for centuries. The Romans imported massive blocks of ice and snow from the Alps, transporting them over long distances to cool drinks and prepare chilled delicacies, a practice that continued into the fourth century CE. This symbol of extravagance was primarily reserved for elite members of society, including Emperor Nero, who was known to be fond of ice-cold refreshments. He ordered ice and snow to be brought down from the mountains and mixed with fruit juices and honey, an early prototype of sorbet.
During the Heian period in Japan (794 to 1185 CE), aristocrats preserved natural ice in underground icehouses known as himuro, allowing them to store and use it during the hot summer months. Historical texts such as The Pillow Book mention the consumption of ice in the imperial court, emphasizing its exclusivity among the noble class. The government even maintained official icehouses, and records from the period suggest that ice was carefully rationed and distributed among the elite. Unlike in Europe, where ice was primarily used for cooling drinks, in Japan it was also consumed directly in the form of finely shaved ice, sometimes drizzled with natural sweeteners.
During the European Renaissance, the use of ice in beverages and food was particularly popular in Italy and France. In Italy, ice was harvested from the Alps and transported to cities, where it was stored in underground cellars or specially built icehouses known as neviere. These structures, often insulated with straw or sawdust, allowed ice to be preserved for extended periods. Wealthy households and royal courts used this stored ice to chill drinks, enhancing the dining experience. Records from the 16th century indicate that Italians used ice to cool wines and other beverages, a practice that became increasingly fashionable among the aristocracy.
By the 17th century, the development of ice storage facilities had expanded beyond the nobility, with vendors in Italian cities selling ice or chilled water to customers during the summer months. This period also saw the refinement of frozen desserts, with early recipes for flavored ices and sorbets appearing in Italy and later spreading to France. But the primary function of ice remained its role in cooling beverages, a practice that continued to evolve with advancements in ice harvesting and transportation.
The “Ice King” Helped Make Ice Accessible in the U.S.
For most of history, ice was difficult to come by, making chilled drinks a rare luxury. That changed in the early 19th century when a Boston businessman named Frederic Tudor saw an opportunity to monetize ice production. Dubbed the "Ice King," Tudor began harvesting ice from frozen ponds on his father’s New England farm, packing it in sawdust and shipping it to warm climates such as the Southern states and the Caribbean islands. His business took off and the modern commercial ice trade was born.
At first, ice was primarily used in bars and hotels catering to the social elite. Cocktails such as the mint julep and sherry cobbler became wildly popular in the United States, served over crushed or shaved ice. The sight of frost-covered glasses became a symbol of sophistication and modernity. By the late 19th century, thanks to improved ice-harvesting techniques and the rise of commercial production, ice became more affordable and accessible in the U.S. Then in the first half of the 20th century, home refrigerators allowed more and more families to store ice, making it possible for iced drinks to become a staple at American dinner tables.