Few traditions feel as universal as gathering around a frosted cake, lighting candles, and singing “Happy Birthday.” While the ritual seems timeless, the story of why we eat cake on our birthdays stretches back thousands of years — winding through ancient temples, Roman banquets, German children’s parties, and American kitchens.
The word “cake” comes from the Old Norse kaka, but cakes in the ancient world looked quite different from today’s airy, sugar-laden desserts. Early cakes were dense, breadlike creations sweetened with honey, enriched with eggs or cheese, and flavored with nuts, seeds, or dried fruits such as raisins or figs. Archaeological and textual evidence shows that cakes were baked in Mesopotamia more than 4,000 years ago, and the Roman writer Cato described cakes wrapped in leaves and served at weddings and fertility rites.
But cakes weren’t just food — they were often sacred offerings. The Greeks presented honey cakes and cheesecakes to their gods, sometimes decorated with candles. One common offering to Artemis, goddess of the moon and the hunt, was the amphiphon, a round cheesecake topped with glowing candles meant to mimic the moon. Romans, too, baked cakes for religious purposes, including the libum, a mixture of cheese, flour, and egg baked on bay leaves as an offering to household gods. In these early forms, cakes linked the human and divine, symbolizing gratitude, fertility, or cosmic cycles.
Though cakes were plentiful, birthdays were only inconsistently celebrated in antiquity. While there’s some evidence for these annual celebrations in ancient Sumer, they weren’t common in ancient Greece. The Romans were the first to mark personal birthdays, though only for men — women’s birthdays weren’t celebrated until the Middle Ages. Roman citizens honored relatives and friends with feasts, and men turning 50 received a special cake made with wheat flour, nuts, honey, and yeast.
Still, birthday cake traditions were limited and inconsistent during ancient times. For centuries, cakes remained associated with weddings, festivals, and offerings to the gods rather than private anniversaries.
The modern birthday cake owes its clearest debt to 18th-century Germany. There, children’s birthdays were marked with Kinderfeste — festive gatherings that included a sweet cake crowned with candles. Each candle represented a year of life, plus one extra for the year to come, a tradition that still survives. The candles were lit in the morning, replaced throughout the day, and finally blown out in the evening after dinner. Much like today, children were encouraged to make a wish as the smoke carried their hopes skyward.
This ritual blended ancient practices, such as Greek candlelit offerings and Roman birthday cakes, into something recognizably modern: a sweet centerpiece, flames to mark the passage of time, and the magical moment of blowing out candles.
Even with Kinderfeste in Germany, birthday cakes weren’t for everyone, as the food was considered a luxury treat. For most of history, cakes required expensive ingredients such as refined sugar, fine flour, and butter. They were labor-intensive to make and decorated with painstaking artistry. In the early 19th century, American cookbook author Catharine Beecher suggested apple bread pudding as a child’s birthday treat — simple, hearty, and inexpensive. However, children’s birthday parties weren’t common celebrations in the U.S. until after the Civil War.
The Industrial Revolution played a decisive role in democratizing cake. As ingredients became cheaper and more widely available, bakeries could mass-produce cakes and sell them at affordable prices. And for home cooks, birthday cakes reached new heights of extravagance by the early 20th century. In 1912, the popular Fannie Farmer Cookbook included a full-page photo of a “Birthday Cake for a Three-Year-Old,” complete with angel cake, white icing, candied rose petals, sliced citron leaves, and sugar-paste cups for candles. These elaborate confections reflected both rising prosperity and a more ambitious form of cake decorating.
Today’s birthday cakes are often simple — such as a frosted sheet cake with a name piped across the top (a tradition that began around the 1940s) — but the symbolism has a long and rich history. Cakes mark abundance, indulgence, and festivity. Candles represent the passage of time, each flame a year lived, each puff a wish cast into the unknown. The act of gathering around the cake, singing together, and celebrating life’s milestones connects us across centuries to ancient rites of gratitude and hope.
From moonlit cheesecakes for Artemis to a 3-year-old’s angel cake in Fannie Farmer’s kitchen, birthday cakes tell a story about how humans celebrate time, community, and the sweetness of life. Every slice is part of a lineage that is both sacred and ordinary — proof that sometimes the simplest rituals carry the deepest history.
Humans are superstitious creatures by nature, with many strange habits that seem entirely illogical. We avoid walking under ladders or opening umbrellas indoors in fear of bad luck. We knock on wood to prevent disappointment. We shun the number 13 and we can’t quite decide whether black cats are good or bad omens. None of these actions makes much practical sense, and the same is true for a range of superstitions involving food.
Food is a necessity that keeps us functioning and alive, but eating is also a cultural experience, rich with symbolic gestures, long-held traditions, and curious rituals. These include plenty of superstitions believed to bring luck, prosperity, health, wealth, and a range of other supposed benefits. And while modern science may dismiss these practices as mere folklore with no logical basis, there are plenty of common food-based superstitions we just won’t let go of.
Here are six superstitions involving food, all of which demonstrate the human desire to find greater meaning or significance in the otherwise simple and essential acts of cooking, eating, and sharing meals.
According to one common food superstition, if you accidentally spill salt, you should immediately throw a pinch of it over your left shoulder. The origins of this strange belief aren’t entirely clear. It possibly dates back to the ancient world, including the Romans and Sumerians, when salt was a highly prized commodity and therefore spilling it was frowned upon.
Later, during the Renaissance, Leonardo da Vinci created one of his most famous works, “The Last Supper,” in which Judas Iscariot is portrayed knocking over a container of salt with his elbow, suggesting that the connection between spilled salt and bad luck was well established by that time. But why do we throw the spilled salt over our left shoulder? The common belief today is that the devil and evil spirits lurk over the left shoulder, and the pinch of jinx-reducing salt is destined for their eyes.
In the American South, eating black-eyed peas on New Year’s Day is a common tradition and superstition said to bring luck and prosperity throughout the year ahead. When enslaved Africans brought black-eyed peas to America, the beans were initially used as food for livestock and enslaved people only. Black-eyed peas gained wider acceptance during the Civil War, when they were one of the few foods left untouched by Union troops, who considered them animal feed. Southerners therefore managed to survive on black-eyed peas during the winter, and so began the association with good luck and prosperity, and the tradition of eating them on New Year’s Day.
Breaking the wishbone of a cooked chicken or turkey is a common Christmas practice in the United Kingdom and a Thanksgiving tradition in the United States. Objectively, it’s a very strange thing to do, no matter how much luck might be up for grabs: It involves making a wish while pulling the bird bone in two with another person, and the person who gets the bigger piece will have their wish granted.
The origins of this peculiar superstition are debatable. One common theory goes all the way back to the ancient Etruscans, who saw birds as potent oracles. They extracted wishbones from chickens, dried them in the sun, and then touched the bones as a form of divination. The Romans picked up this practice later, by which time the bone was being snapped in half, possibly to double its power. The Romans then introduced the concept to Britain, and it eventually found its way to the Americas via English settlers.
It’s hard to say whether the superstition really has such a long and storied history with a direct link all the way back to the Etruscans. We do know that the wishbone-breaking tradition as we know it today existed at least as early as the 17th or early 18th century, when the bone was known as a “merrythought.” The term “wishbone” first appeared a century or so later.
Statistically, about one in every thousand eggs has a double yolk. So, if you regularly crack open and cook eggs, there’s a chance you’ll come across one at some point. For the superstitious, an egg with two yolks is widely considered a sign of impending good luck. It’s not known where or when this superstition emerged, but the reasoning behind it is clear to see. Eggs have long been associated with life, rebirth, and potential, making a double yolk a natural candidate for a symbol of abundance, prosperity, and good luck. Double yolks are sometimes regarded as a sign of an upcoming marriage, or that a woman will soon become pregnant with twins.
The superstitious tradition of blowing out candles on birthday cakes has surprisingly ancient origins. Some historians believe it goes as far back as the ancient Greeks, who may have made cakes adorned with lit candles to honor Artemis, the goddess of the hunt and moon. This, in turn, was adopted by the Romans, who helped spread the tradition.
In modern Europe, the ritual of celebrating birthdays with a cake and candles — as we do today — has been around since at least the 18th century. One of the first documented accounts comes from the 1746 birthday party of Count Ludwig von Zinzendorf, a German bishop, who had a massive cake covered in candles. At this point, in Germany at least, the act of extinguishing the candle flames was seen as a way to carry desires up to the gods — not dissimilar to how we blow out candles and make a wish today.
The tradition of eating a dozen grapes on New Year’s Eve comes from Spain, and while the origins are still debated, it dates back to at least the 1880s. According to some historians, the Spanish bourgeoisie decided to imitate the French New Year’s celebration of drinking champagne by skipping the middleman and going straight for the grapes. Others suggest the tradition began as a shrewd marketing tactic by grape growers who had a surplus harvest to unload in the early 1900s.
Either way, the ritual caught on: By eating one grape at each of midnight’s 12 clock chimes, you are destined for a lucky year, with each grape representing one of the 12 forthcoming months. The custom soon spread throughout Latin America, in countries as diverse as Cuba, Mexico, Puerto Rico, Argentina, and Peru. Some Latino populations in the U.S. have also maintained the grape-eating superstition.
From our modern vantage point, the culinary options of bygone cultures are sometimes difficult to comprehend. It seems that hungry people gobbled down anything they could get their hands on, including dormice (rodents), beaver tails, and fish bladder jam.
But while some of the choices seem unusual in hindsight, we can at least grasp their nutritional value. Other foods, however, were just downright dangerous to the human digestive system, and certainly wouldn’t have been on the menu had the consumer been aware of the consequences. Here are five toxic foods that people unwittingly used to eat.
Offering a rich source of vitamins, protein, and fatty acids, seafood is generally considered among the healthiest foods to eat — unless, of course, the specimens being consumed contain sky-high concentrations of heavy metals. Such was the case with the Atlantic cod and harp seals that made up the meals of Stone Age settlers in northern Norway’s Varanger Peninsula around 5000 to 1800 BCE.
According to a recent study, cod bones from the settlement contained levels of cadmium up to 22 times higher than contemporary recommended limits, while seal bones showed similarly elevated levels of lead. While it might seem strange that wildlife carried the risk of heavy metal contamination in an era well before industrialization, the study authors suggest this was the result of climate change. It’s possible the thaw at the end of the last ice age (which lasted from roughly 120,000 to 11,500 years ago) produced rising sea levels that carried soil containing these heavy metals into the water.
It’s well known that the ancient Romans enjoyed their wine, but it’s possible a component of the winemaking process fueled ill health in a manner that went beyond the typical hangover. The Romans made a sweet, syrupy substance called defrutum, which was prepared by boiling unfermented grape juice. This syrup was used as a preservative for wine and fruit, as well as in sauces for dishes of pork, veal, and lentils, as described in the famed Roman cookbook De Re Coquinaria.
The problem was in the sweetener’s preparation: Ancient scholars including Cato and Pliny the Elder called for the syrup to be boiled in a lead-lined pot, inevitably resulting in the syrup’s absorption of lead. Although the hazards of lead poisoning were known to the Romans, it apparently never occurred to these great minds that they were endangering the public with their instructions.
Nowadays, a typical Easter meal might include a ham and a sampling of the chocolate left by the Easter Bunny, but for Christians in medieval England, the holiday was incomplete without the serving of the tansy. The dish was named for its primary ingredient, the yellow-flowered tansy plant, which was mixed with herbs and a hearty helping of eggs to produce what was essentially a large, sweet omelet.
Coming on the heels of Lent, the tansy not only provided a welcome change from the strict diet of lentils and fish consumed over the previous 40 days, but also was said to provide relief from the gas-inducing Lenten meals. Despite its purported medicinal qualities, the plant is actually mildly toxic, its thujone-laden essential oil dangerous to humans in high doses. Although the poison didn’t hinder the long-standing popularity of the tansy on dinner tables, people are generally dissuaded from eating the plant today.
You could highlight an array of foods in Victorian England that would fail to pass muster under any food safety laws, from the lead chromate found in mustard to the arsenic compounds used to color confectionery. However, given its ubiquity in households of the era, the most egregious example may well be bread.
Seeking to create thick loaves of an appealing white hue, Victorian bakers mixed in such ingredients as ground-up bones, chalk, and plaster of Paris. Another common additive was alum, an aluminum-based compound that inhibited digestion and contributed to the malnutrition rampant among the poorer population. Although the dangers of adulterated edibles were known among the more educated members of the public, there was little stopping the food producers and distributors who ignored these health risks in favor of profits.
Rhubarb Leaves
Known for its reddish stalk and tart flavor, rhubarb in the hands of a capable chef can be used to create delicious pies, sauces, and jams. That is, the stalks can be turned into such kitchen delights — the thick green leaves are chock-full of toxic oxalic acid and therefore not meant for ingestion. Unfortunately, this fact was not well known a century ago, as rhubarb leaves were recommended as a source of vegetation during the food shortages of World War I.
Consumed in small but regular doses, the leaves inhibit the beneficial effects of calcium and trigger the buildup of calcium oxalate, leading to kidney stones. While a human would normally have to eat something like 6 pounds of the stuff to experience the more acute effects (including vomiting, diarrhea, and kidney failure), there was at least one reported case of oxalic acid poisoning during the rhubarb leaf’s brief run as a lettuce substitute.
According to popular legend, the English aristocrat John Montagu, the 4th Earl of Sandwich, was engaged in an all-night card game in 1762 when he became distracted by hunger pangs. Not wanting to stop playing, he instructed his servant to bring him a snack of beef between two slices of bread, allowing him to satiate the twin desires of filling his belly and raking in more dough.
While he was hardly the first person in history to consider eating food in this fashion — Montagu may have been inspired by culinary creations in Turkey and Greece — the earl’s idea caught on across English high society and led to the honor of having his name affixed to this particular bread-based meal.
The sandwich soon spread to other social strata across Europe and in the American colonies, its popularity underscored by increasing appearances in cookbooks through the 19th and 20th centuries. However, numerous once-popular foods have failed to survive to the present day, and the same goes for certain old-fashioned sandwiches; some of them are just too bizarre for modern palates. Here are six sandwiches that were (mostly) pushed aside by modern diners in favor of tastier options.
In the U.S. in the 19th and early 20th centuries, oysters were a popular sandwich filling. Sandwiches known as “oyster loaves” were featured in Mary Randolph’s 1824 cookbook and guide The Virginia Housewife, as well as in numerous entries in Eva Green Fuller’s 1909 Up-To-Date Sandwich Book. The first and most basic recipe from Fuller’s book instructs readers to add a dash of Tabasco sauce, lemon juice, and oil to chopped raw oysters (without specifying measurements), slather the mixture on white bread, and then top it off with a lettuce leaf.
A sandwich aficionado by the name of Barry Enderwick has dug up a trove of old cookbooks to recreate forgotten specialties on his "Sandwiches of History" social media channels, and the yeast sandwich is one such bygone dish. Yeast is typically used for the processes of fermenting beer and leavening bread, and it's unusual to find it as a featured ingredient of a dish. But in the 1930s, there was a push on the part of Fleischmann's to promote the nutritional benefits of its product, resulting in an entry in Florence A. Cowles' 1001 Sandwiches (1936). Cowles calls for five drops of "table sauce" to be added to a cake of compressed yeast, with the resulting paste spread on a cracker or bread. Regardless of whether the table sauce was meant to be ketchup, Worcestershire sauce, or another ingredient, this sandwich did little to help the ultimately unsuccessful attempt to enhance American cravings for yeast-filled meals.
This one's actually a bit of a misnomer. Yes, there are (chopped) pickles in here, but the instructions in 1916's Salads, Sandwiches and Chafing Dish Recipes also call for mixing the brined vegetable with whipped cream, mayonnaise, and grated horseradish, and adding chopped cooked beef to the mixture atop buttered bread.
Popcorn is a popular snack, so why not pair these bite-sized goodies with yummy buttered toast to produce an extra-delectable dish? That seems to have been the idea behind the recipe in 1909's Up-to-Date Sandwich Book, except the dish also calls for readers to add "five boned sardines, a dash of Worcestershire, and enough tomato catsup to form a paste."
Yes, this is exactly what it sounds like. According to the bestselling Beeton's Book of Household Management from 1861, the recipe simply calls for inserting a piece of cold toast between two slices of buttered bread and seasoning it all with salt and pepper (although the author, Isabella Beeton, helpfully suggests that it could be livened up with slices of meat). While clearly a relic of an era of different tastes, the toast sandwich was revived by the United Kingdom's Royal Society of Chemistry in 2011, and has also surfaced on the menu of the Fat Duck restaurant in Bray, Berkshire, England.
Mashed Potato Sandwich
This creation, also known as the "Greatest Man Sandwich in the World," is attributed to the one and only Gene Kelly. While it's unclear where the recipe first appeared, or whether it was invented or adopted by the famed performer, it is clear that this sandwich is not for folks looking to limit their carbs. The recipe requires a thick layer of leftover mashed potatoes to be spread on buttered French bread and topped with onion slices, mayonnaise, and a hearty dose of salt and pepper; the product is then browned in a broiler. According to one recipe, it was to be enjoyed with the "nearest mug of beer."
Since most of us walk into a grocery store with our minds fixated on the items needed to fill up the fridge and pantry, it’s rare that we take the time to marvel at the wonders of modern food shopping. Whether it’s a small neighborhood mart, a chain supermarket, or a gargantuan superstore, today’s grocery stores offer a dizzying range of brands for any given product, allowing discerning shoppers to make a choice based on price, ingredients, or even packaging. All necessary (and unnecessary) items can be wheeled in a cart to a checkout line, where a friendly employee will happily tabulate the items and accept various forms of payment. There are also self-checkout stations, where you can scan your items yourself and be on your way even faster.
Of course, such a process would have been completely alien to early humans who relied on hunting and gathering their food. And it likely would be fairly shocking even to the people accustomed to earlier forms of food shopping. Here’s a look at what grocery stores were like before the rise of Publix, Whole Foods, Trader Joe’s, and the other popular stores we frequent today.
According to Michael Ruhlman’s book Grocery: The Buying and Selling of Food in America, the earliest grocery depots in the U.S. were the country stores that surfaced in the 17th century. Along with offering a limited supply of culinary staples such as sugar, flour, and molasses, these markets provided a smorgasbord of other necessities of colonial America, including hardware, soap, dishes, pots, saddles, harnesses, shoes, and medicine. By the early 19th century, these stores — originally constructed from logs and mud — were largely replaced by newer frame buildings, which contained cellars that were large enough to house casks of whale oil and also cool enough to store eggs, butter, and cheese.
By the middle of the 19th century, the general store was a common sight across the small towns of the expanding United States. Similar to the country store, general stores stocked goods that both satiated hunger and catered to other crucial needs of paying customers. Food items included coffee beans, spices, honey, oatmeal, and dried beans, many of which were kept in barrels and required measuring the desired amount on (often inaccurate) scales. The stores also offered nonedible wares, including cloth, buttons, undergarments, hats, lamps, rifles, and ammunition.
Normally featuring at least one large display window, these stores were typically packed with goods piled high on shelves and tables amid the boxes and barrels stuffed into available spaces. A front counter displayed smaller items as well as such contraptions as a coffee grinder, scales, and the cash register. As general stores typically doubled as community centers, they were usually fitted with a stove to warm inhabitants during cold-weather months and often featured chairs for those who planned to stay and chat.
Even so, general stores were neither the most comfortable nor the most sanitary places. Customers often dragged in dirt and animal waste from the unpaved roads outside, while cast-iron stoves could produce a layer of soot over the displayed wares.
Although the term "grocery" referred to a tavern or a saloon through much of America's early history, it came to signify the emporiums that were appearing in more densely populated urban centers by the late 19th century.
These grocery stores typically carried around 200 products, and occupied 500 to 600 square feet of floor space. As with general stores, these shops featured items piled high on shelves, with clerks reaching for everything presented to them on a list and continuing to measure out both liquid and solid goods from barrels.
Although most proprietors were willing to offer a line of credit, a precursor to today’s credit cards, they provided far more limited choices than today’s stores, typically stocking only one type of each product. Also unlike most modern grocery stores, these early shops largely sold dry goods, meaning customers typically needed to make separate trips to a butcher, bakery, and produce peddler to round out the full array of dining needs.
The grocery model began to change when the Great Atlantic & Pacific Tea Company, owners of the A&P grocery chain, opened the first of its "economy stores" in 1912. Along with eliminating the credit lines and free delivery offered by independent retailers, A&P stores became known for offering popular brands such as Eight O'Clock Coffee.
The Tennessee-based Piggly Wiggly chain introduced additional innovations to the model with the opening of its first store in 1916. Most groundbreaking was its procedure of "self-service." While goods still lined the lone aisle that snaked through the store, they were now within reach of customers, who picked out the price-marked items themselves instead of delegating the responsibility to a clerk. The customers then placed the items in a wooden basket — also a novel feature — before bringing everything to a checkout counter.
The streamlined processes resulted in lower costs for customers, and in turn produced an explosion in sales for retailers. The number of grocery chain stores increased from 24,000 to around 200,000 between 1914 and 1930, with the biggest, A&P, accounting for 16,000 of those outlets.
Although the 1920s saw the rise of "combination stores" that provided a wider range of meats, produce, and dairy products under one roof, the first true supermarket arrived with the opening of New York City's King Kullen in 1930.
A former employee of the Kroger grocery chain, Michael Cullen decided to launch his new enterprise in a slightly more remote section of Jamaica, Queens. The location allowed for cheaper real estate, which meant that Cullen could open a larger store and provide space for the machines that were transforming American lives: automobiles.
The first King Kullen covered 6,000 square feet and offered 10 times as many food products as most existing markets, many of them sold out of packing crates. It was soon surpassed by New Jersey's Big Bear, which featured a 15,000-square-foot grocery surrounded by areas leased to vendors selling tobacco, cosmetics, and auto parts.
Although other features of the contemporary shopping experience had yet to be invented — shopping carts arrived in the late 1930s, the barcode system in the 1970s — the blueprint for the modern grocery store was in place, with supermarkets eventually expanding to stock 50,000 items, and colossal chains swelling to a footprint of more than 200,000 square feet.
Nowadays, as we casually stroll down the block-length aisles of superstores beneath towering shelves of colorful boxes, examine the fine print on packages in the organic section, or nibble on the free samples of cheese and dumplings, we can spare a thought for the evolution of this essential part of everyday life.
Few things are as refreshing as an ice-cold drink on a hot day. Indeed, ice is an essential part of the beverage industry today, with the global ice maker market valued at more than $5 billion. In the United States alone, the average person consumes nearly 400 pounds of ice per year.
Despite its popularity, most of us have probably never thought about how ice-cold drinks evolved into an everyday necessity. This simple pleasure has a long and interesting history shaped by ancient ingenuity, global trade, and evolving technology. From emperors importing ice from distant mountains to entrepreneurs revolutionizing its distribution, the journey of the ice in our drinks is a story of innovation that dates back to the first human civilizations.
Long before refrigerators and freezers, ancient civilizations found ingenious ways to keep drinks cool. The earliest recorded instance of ice storage dates back to the reign of Shulgi, the king of Ur in Mesopotamia from 2094 to 2046 BCE. Shulgi named the 13th year of his reign “Building of the royal icehouse/cold-house” (years were often named after a significant event), suggesting the construction of an icehouse during that period.
In China, the practice of harvesting and storing ice dates back to at least 1100 BCE. During the Zhou dynasty (1046 to 256 BCE), the royal court established a specialized department responsible for collecting natural ice blocks each winter and storing them in icehouses for use in the warmer months. This stored ice was used to cool food and beverages, including wine, and was also used in medical treatments.
Over time, ice collection became an organized practice, with officials overseeing its storage and distribution. Around 400 BCE, the Persians took preservation a step further by constructing yakhchals — massive, domed icehouses made of heat-resistant mud brick. These structures allowed them to store ice year-round, even in the arid desert climate. By carefully directing water into shallow pools that froze overnight, they amassed ice supplies that could later be used to cool drinks or create early versions of frozen treats.
Around the world, chilled beverages were a luxury enjoyed by the wealthy — one that did not become widely accessible for centuries. The Romans imported massive blocks of ice from the Alps, transporting them over long distances to cool drinks and prepare chilled delicacies. This symbol of extravagance was primarily reserved for elite members of society, including the first-century CE Emperor Nero, who was known to be fond of ice-cold refreshments. He ordered ice and snow to be brought down from the mountains and mixed with fruit juices and honey, an early prototype of sorbet.
During the Heian period in Japan (794 to 1185 CE), aristocrats preserved natural ice in underground icehouses known as himuro, allowing them to store and use it during the hot summer months. Historical texts such as The Pillow Book mention the consumption of ice in the imperial court, emphasizing its exclusivity among the noble class. The government even maintained official icehouses, and records from the period suggest that ice was carefully rationed and distributed among the elite. Unlike in Europe, where ice was primarily used for cooling drinks, in Japan, it was also consumed directly in the form of finely shaved ice, sometimes drizzled with natural sweeteners.
During the European Renaissance, the use of ice in beverages and food was particularly popular in Italy and France. In Italy, ice was harvested from the Alps and transported to cities, where it was stored in underground cellars or specially built icehouses known as neviere. These structures, often insulated with straw or sawdust, allowed ice to be preserved for extended periods. Wealthy households and royal courts used this stored ice to chill drinks, enhancing the dining experience. Records from the 16th century indicate that Italians used ice to cool wines and other beverages, a practice that became increasingly fashionable among the aristocracy.
By the 17th century, the development of ice storage facilities had expanded beyond the nobility, with vendors in Italian cities selling ice or chilled water to customers during the summer months. This period also saw the refinement of frozen desserts, with early recipes for flavored ices and sorbets appearing in Italy and later spreading to France. But the primary function of ice remained its role in cooling beverages, a practice that continued to evolve with advancements in ice harvesting and transportation.
The “Ice King” Helped Make Ice Accessible in the U.S.
For most of history, ice was difficult to come by, making chilled drinks a rare luxury. That changed in the early 19th century when a Boston businessman named Frederic Tudor saw an opportunity to monetize ice production. Dubbed the "Ice King," Tudor began harvesting ice from frozen ponds on his father’s New England farm, packing it in sawdust and shipping it to warm climates such as the Southern states and the Caribbean islands. His business took off and the modern commercial ice trade was born.
At first, ice was primarily used in bars and hotels catering to the social elite. Cocktails such as the mint julep and sherry cobbler became wildly popular in the United States, served over crushed or shaved ice. The sight of frost-covered glasses became a symbol of sophistication and modernity. By the late 19th century, thanks to improved ice-harvesting techniques and the rise of commercial production, ice became more affordable and accessible in the U.S. Then in the first half of the 20th century, home refrigerators allowed more and more families to store ice, making it possible for iced drinks to become a staple at American dinner tables.
From the mid-18th century until the mid-20th century, turtle soup was one of the most luxurious dishes in European and American cuisine. It frequently appeared on the tables of wealthy families and was served at dinners held by prominent politicians. While turtle soup is still considered a delicacy in some parts of the world, it has become all but obsolete in America. But why, exactly, did this dish enjoy more than 200 years as a prized culinary staple?
The first Europeans to eat turtle were not aristocrats, but sailors and explorers in the late 17th century. The green sea turtles found in the Caribbean Sea and Atlantic Ocean were initially seen as merely suitable sustenance for long journeys at sea. But as Indigenous peoples of the Caribbean islands taught European seafarers more about turtles, the simple food became seen as “exotic” and desirable. It caught the attention of the upper class in Europe, and before long, turtle meat became a coveted luxury on the continent.
By the early 18th century, Britain’s taste for turtle had extended to the American colonies, and while recipes for turtle casseroles and other dishes were prominent in cookbooks from the era, turtle soup was the most popular. Turtle’s delicate, veal-like taste and rich, gelatinous texture made it ideal for slow-simmered broths and stews.
With demand for turtle soup surging, overfishing quickly became a problem. The Caribbean sea turtle population began to decline rapidly in the early 1800s. In the U.S., diamondback terrapins from the country’s eastern shores became a popular substitute for sea turtles. Abraham Lincoln even served terrapin stew at his second inauguration in 1865. The dish, according to an 1880 edition of The Washington Post, was “an important part of any Washington dinner laying claim to being a pretentious affair.”
By the turn of the 20th century, “mock” turtle soup, made with calf’s head instead of turtle meat, had become almost as beloved as the real thing. But the appetite for real turtle soup remained, and overharvesting of turtle populations continued, particularly as canned turtle soup became an increasingly mainstream product.
The Fall of a Delicacy
When Prohibition started in 1920 and the manufacture, sale, and transportation of alcohol was banned in the U.S., a key ingredient of turtle soup, sherry, was no longer available. What’s more, without alcohol sales, many of the country’s fine dining establishments — the very ones that helped keep turtle soup flourishing among the upper class — struggled to remain open. As they gradually closed, the once-ubiquitous terrapin treat went with them.
In 1973, the Endangered Species Act made it illegal to kill sea turtles in U.S. waters — bad news for turtle soup fans but good news for sea turtles. By 2019, their population had rebounded, increasing by 980%. Even though a vintage recipe for turtle soup remained in a reissue of the classic American cookbook The Joy of Cooking as late as 1997, the hearty dish all but disappeared from menus and dining tables by the 1980s.
Whether you’re enjoying a glass of cabernet with a meal or downing IPAs with friends, you’re taking part in the multifaceted, multicultural act of alcohol consumption that dates back many thousands of years.
Indeed, although the dangers of excessive drinking are well known, and even small amounts of alcohol are now believed to come with health risks, imbibing has been part of the fabric of human existence since the dawn of recorded time. Some anthropologists argue that alcohol featured prominently in social customs that facilitated the rise and progression of civilizations. Others suggest that civilization itself was formed as a result of people settling in one area to domesticate crops for the production of alcohol.
Because spirits such as whiskey or vodka require a more complex distillation process, beer and wine (and wine’s less-prominent cousin, mead) are the earliest forms of alcohol, dating from a time before any of humanity’s famous names, wars, or inventions etched themselves into history. That sets up the ultimate bar debate: Which of these two ancient libations is older?
Early Humans Likely Discovered Alcohol by Accident
To let some of the air out of the suspense, we’ll note that it’s difficult to pinpoint when people first began drinking wine or beer, since proto-versions of both drinks can be formed with little to no human intervention.
Ethanol, or drinking alcohol, is created through the fermentation process that takes place when sugar meets yeast. In the case of beer, that occurs when a grain such as barley is exposed to moisture and its starches are converted into sugar, priming it for fermentation by deliberately introduced or naturally occurring yeast. Similarly, crushed or even overripe fruits with high sugar content, including grapes or figs, will naturally begin to ferment, creating the basis for wine.
It’s likely that early humans (or even animals) stumbled upon the intoxicating effects of fermented grains and fruits, and maybe even figured out how to replicate the experience by leaving their collected wares out in the elements for too long. We can only speculate on the concoctions that may have been experimentally produced by pre-Neolithic people, although they were almost certainly different from the beers and wines that emerged under more controlled conditions in later epochs.
Unsurprisingly, such conditions were well established by the civilizations that introduced writing and other major advancements in math and science: the ancient Mesopotamians and Egyptians.
Along with the 5,000-year-old remnants of barley beer discovered at the Godin Tepe ruins in modern-day Iran, evidence of a Sumerian drinking culture has surfaced in the cuneiform receipts of beer sales as well as in the “Hymn to Ninkasi,” an ode to the Sumerian beer goddess, which includes a recipe for brewing. Grain-rich beer provided nutritional benefits for Bronze Age drinkers, and may have been safer to consume than the water from rivers, which could be contaminated with animal waste.
Beer also figured prominently in the lives of Egyptians around the same time; it’s believed that workers on the pyramids of Giza received more than 10 pints per day in rations, while even children consumed low-alcohol varieties.
Meanwhile, non-native wine grapes were grown in both areas, although wine was typically reserved for the palates of royalty and priests. The Egyptians were the first known culture to document their winemaking, and left behind records of the process to go with jars of their handiwork in the burial chambers of rulers and other prominent figures.
A 9,000-Year-Old Wine-Beer Hybrid Was Found in China
Of course, humans lived in settlements for thousands of years before these celebrated civilizations emerged, and alcohol played a part in many such early cultures.
Thanks to the discovery of drinking vessels in prehistoric Andean sites, archaeologists believe that the popular South American maize-based beer known as chicha may have been around since 5000 BCE. Going back even further, the detection of wine residue in jars and grape pollen in the soil around two sites near Tbilisi, Georgia, dating to around 6000 BCE showed that the residents of these former villages were among the earliest known wine producers.
To date, the earliest confirmed chemical evidence of an alcoholic concoction is neither specifically beer nor wine, but something of a combination of the two: As detailed in a 2004 paper, an examination of 9,000-year-old pottery jars from the Jiahu Neolithic site in northern China revealed the residue of a fermented beverage of rice, honey, and fruit.
Meanwhile, ongoing discoveries continue to push the beginnings of boozy beverages further and further into the past.
While it was once thought that humans domesticated grapevines no earlier than 8,000 years ago, a 2023 study of the DNA of more than 3,500 vine samples showed that wine grapes were domesticated in the Caucasus region of Western Asia and Eastern Europe as far back as 11,000 years ago. Table grapes were also domesticated in the Middle East around that same time, and it was these crops that were crossbred with wild versions to launch the wine varieties that became popular throughout the continent.
The idea of an 11,000-year-old wine is certainly impressive, but the archaeological record suggests the possibility of an even older brew: In 2018, a Stanford University research team found 13,000-year-old remains of starch and plant particles called phytoliths, residues consistent with wheat and barley beer production, at a Natufian gravesite near modern-day Haifa, Israel. Although critics believe the evidence points to breadmaking, the Stanford team contends that both bread and a thick, grog-type beer were created at this site.
For now, we’ll give the edge in this battle of seniority bragging rights to beer. But with more discoveries sure to pop up in the coming years, it's likely this debate will be revived — and continue past many a closing hour.
Most of us don’t give a whole lot of thought to the habit of finishing a satisfying meal with a dessert of something sweet — we’re too busy savoring the delectable mouthfuls of cake, custard, or ice cream.
Yet this is a clear culinary tradition that many people follow. While some may elect to eat sweets before a main course, and others simply dig into pie or brownies at any time of the day, most adhere to the standard operating procedure of dessert after the main course at lunch or dinner. But how and when did this order come about? Why do we eat sweets after a savory meal, and not the other way around?
To start somewhere close to the beginning, the craving for sweets is biological. Our hominid ancestors derived more energy from ripe, sugar-rich fruit than from unripe fruit, and humans evolved a hardwired association between sweetness and pleasure.
This primal craving perhaps explains why sweets have traditionally featured in the religious ceremonies of many cultures. As described in Michael Krondl’s Sweet Invention: A History of Dessert, Mesopotamian cooks prepared cakes as an offering to the goddess Ishtar. Similarly, Hindus throughout India have presented a sugar and milk concoction known as pedha to deities such as Kali for more than two millennia.
At times, the ritual of serving sweet dishes at distinct intervals has translated to something similar to the modern idea of dessert. After a day of fasting in celebration of Krishna’s birthday, Hindus traditionally indulge in treats such as bhog kheer, a pudding, or shrikhand, a sugar-flavored yogurt. In Turkey, the end of fasting at Ramadan means an opportunity for celebrants to sink their teeth into baklava, a beloved pastry.
Of course, the preparation and consumption of sweets has long been a part of secular mealtimes as well. The Deipnosophists, a work from the third-century Greek writer Athenaeus of Naucratis, describes an array of honey-coated fare served over a series of lavish banquets. However, the now-commonplace notion of specifically relegating such sweeter foods to the end of a meal has its origins in France.
Diners in Medieval France Enjoyed Meal-Ending Sweets
According to Sweet Invention, the term "dessert" appears in French cookbooks as far back as the 14th century; the French loanword is the noun form of the verb desservir, meaning "to remove what has been served." In the Late Middle Ages, dessert was distributed after the dishes of the main meal had been cleared, although these edibles weren't necessarily of the sweet variety.
The serving that followed dessert and concluded the meal, known as the issue, was more likely to consist of sweet foods such as fruit or spiced candies. Both the dessert and issue fell under the category of entremets, smaller portions that appeared between or after the main courses.
For European diners of the Late Middle Ages, it was common to see dishes of meat and cakes served together as the main course; there was little attempt to separate these foods of radically different tastes and textures. This remained the case even after sugar became more widely available on the continent, and influential Renaissance-era Italian cooks began showering all varieties of meals with healthy doses of the valuable commodity.
Desserts Emerged as a Distinct Course in the 17th Century
By the 17th century, there was a growing distinction between sweet and savory courses among French culinary practitioners, and with it arrived the modern notion of the proper way to end a meal. Dessert even earned an entry in the 1690 edition of the Dictionnaire Universel, defined as "the last course placed on the table... Dessert is composed of fruits, pastry, confectionery, cheese, etc."
Recipe books of the era also devoted increasing quantities of print to instructions for pastries, jams, and fruit dishes. However, the preparation of these meal-ending foods fell under a different jurisdiction than that of the chefs in charge of the main courses. Desserts were handled by confectioners who worked in the kitchen’s "office," or pantry.
Although the office was initially considered a subordinate branch of the kitchen pecking order, its confectioners came to be considered artisans in their own right thanks to the sculptural desserts served at the ostentatious dinners of King Louis XIV and other royals. Dessert as an art form arguably reached its peak in the early 19th century with the creations of French chef Antonin Carême, who built massive replicas of pyramids, temples, and fountains out of sugar mixtures.
The French Revolution Led to Modern Dining Customs
The guidelines for dessert were changing even as Carême was producing his classically inspired pièces montées. The fall of the ruling class with the French Revolution meant that the chefs who once toiled in palace kitchens became unemployed. While some were able to find wealthy benefactors, others spurred the transformation of the public dining house by launching new eateries around Paris.
These restaurants introduced the concept of service à la russe, in which each customer ordered individual dishes to his or her liking, delivered one course at a time. Meanwhile, the rise of cafés and tea houses throughout the city further popularized the concept of single-portioned desserts.
By the arrival of the 20th century, the habit of polishing off a meal with dessert, whether at home or at a restaurant, had taken root in the country. And given France's powerful influence on culinary customs, it wasn't long before this sweet finishing touch at mealtime became standard across the rest of Europe, as well as on the other side of the Atlantic.
As far as ephemera is concerned, few things are as temporary as snack foods from the past. Snacking itself is an evanescent experience, a fleeting moment of between-meal indulgence or an inattentive nosh during a spectator event. But snacks are also a major part of American culture; snacking has doubled since the late 1970s, and according to the 2024 USDA survey “What We Eat in America,” 95% of American adults have at least one snack on any given day.
The idea of snacking has distinctly 20th-century origins. Eating between the traditional three meals per day was frowned upon during the 19th century, and proto-snack street foods of the time (such as boiled peanuts) were considered low class. But the Industrial Revolution, combined with a more enterprising spirit around the turn of the 20th century, created business opportunities for packaged, transportable foods, which were often marketed as novel expressions of modern technology.
As the nascent snack market emerged and grew, companies introduced countless products with varying degrees of success. Some, such as Cracker Jack (which debuted in 1896) and Oreos (which debuted in 1912 as a nearly exact imitation of the earlier Hydrox cookies), endure to this day. But history is littered with the wrappers of discontinued snacks. Here are some long-gone treats we’d love to see make a comeback.
The Schuler Candy Company made this distinctive chocolate and cherry candy bar from 1913 to 1987. Each Cherry Hump bar contained two cherries, cordial, and fondant, and was double-coated in dark chocolate. In an unusual final step in their production, the bars were aged for six weeks in order for the runny cordial and thicker fondant to meld and reach a cohesive state. Even after aging, though, the filling remained runnier than that of other candies, and this ended up being its undoing.
When Schuler became a subsidiary of Brock Candy Company in 1972, Brock sought to update the production and distribution methods of Cherry Humps, and chose bulky high-volume pallet shipments instead of the previous method of fanning out multiple shipments to smaller distribution lots. What made sense on paper for efficiency was disastrous in practice for a product as fragile as Cherry Humps: The candy often arrived at its destination badly damaged, with visibly sticky packaging from leaking cordial.
Instead of shoring up the candy’s packaging to protect it, or adjusting the shipping method, Brock changed the candy’s recipe to make it sturdier. The new recipe did away with the enticingly juicy cordial and fondant filling, instead setting firmer cherries in a layer of dense white nougat. There was no more gooeyness to speak of, and in turn, no more of the candy’s signature appeal. Sales steadily declined and Cherry Humps were discontinued.
Introduced at the beginning of the 1960s, Baronet was a vanilla creme sandwich cookie Nabisco advertised in a memorable (if repetitive) jingle as being made with milk and “country good” eggs. The appeal of Baronet cookies was in their approximation of a homemade treat: The shortbread wafers had a natural buttery flavor, and they sandwiched a filling that Nabisco claimed tasted “just like the icing on a cake.”
While they might sound like a vanilla equivalent of Nabisco’s much more famous Oreos, Baronet cookies weren’t as sweet, and they had a less ornate design than Oreos, with gently scalloped edges and embossed lines radiating from the center of the cookie. They also tended to be a little more economical than Oreos.
Though today’s grocery store shelves are stocked with a number of Oreo variants that stretches the imagination, the cookie was known for decades in a single variety: the classic chocolate wafers with vanilla creme. But when Oreos were introduced in 1912, this wasn’t the only version available; there was also a version with a lemon creme filling.
Sometimes referred to as lemon meringue, this chocolate-lemon combination is an unusual one for today’s palate, as orange tends to be the more common citrus to pair with chocolate. Then again, perhaps the chocolate-lemon combination was unusual for the early-20th-century palate as well: By the 1920s, the lemon filling was discontinued. Still, considering the sometimes bizarre reaches of today’s Oreo flavor offerings, it’s surprising the chocolate-lemon combination hasn’t made a comeback.
Tato Skins were a potato chip snack introduced by Keebler in 1985. Unlike most potato chips, they were made from dehydrated potato flakes (as opposed to potato slices), which made them somewhat similar to Pringles. But what differentiated Tato Skins from every other potato chip was right there in the name: The ingredients included the skin of the potato, in order to give the snack a taste that approximated a baked potato. The chips even had a rough “skin side” when flipped over.
Tato Skins initially came in three flavors: cheddar cheese and bacon, sour cream and chives, and “baked potato” (i.e., no seasoning other than salt), with a barbecue flavor added to the lineup in 1987. By 2000, though, the chips had been discontinued — perhaps a victim of cuts during a turbulent era of acquisition and sale from 1996 to 2000. While Keebler’s Tato Skins division was purchased by a company that reintroduced the snack as TGIFridays Potato Skins later in 2000 (thanks to a licensing deal with the eponymous restaurant chain), the newer version was formulated differently from the original.
Jell-O Pudding Pops were frozen pudding on a stick, first test-marketed on a regional basis in 1978 and introduced nationally during the early 1980s. They were developed by Jell-O parent company General Foods in order to respond to changes in eating patterns during the era. Desserts had been losing ground to portable afternoon snacks, so Pudding Pops were a way to reconfigure General Foods’ Jell-O Pudding as a portable snack. Accordingly, Pudding Pops were originally available in four classic pudding flavors: chocolate, vanilla, banana, and butterscotch. They had a smooth texture that was similar to a Fudgesicle, but with a slightly richer creaminess.
The product was a huge success from a sales standpoint. Five years after entering the market, Pudding Pops were bringing in around $300 million in annual sales. But there was trouble on the back end: Since Jell-O was generally a dry goods company, its manufacturing and distribution processes were not originally equipped to handle frozen foods. As a result, Pudding Pops required added expenditures all along the supply chain, and the snack didn’t reach profitability targets despite the encouraging sales figures.
By 1994, Pudding Pops had vanished from the freezer aisle. In 2004, Jell-O licensed Pudding Pops to Popsicle, a seemingly ideal partnership for solving the previous supply chain issues. However, Popsicle changed the formulation, using its own molds for the signature narrow and rounded-off Popsicle shape, rather than the wider and flatter shape of the original Pudding Pops. Despite the potential for the relaunched product to take advantage of a market for nostalgia, the Popsicle-branded Pudding Pops never caught on, and were discontinued in 2011.