From our modern vantage point, the culinary options of bygone cultures are sometimes difficult to comprehend. It seems that hungry people gobbled down anything they could get their hands on, including dormice (rodents), beaver tails, and fish bladder jam.
But while some of the choices seem unusual in hindsight, we can at least grasp their nutritional value. Other foods, however, were just downright dangerous to the human digestive system, and certainly wouldn’t have been on the menu had the consumer been aware of the consequences. Here are five toxic foods that people unwittingly used to eat.
Offering a rich source of vitamins, protein, and fatty acids, seafood is generally considered among the healthiest foods to eat — unless, of course, the specimens being consumed contain sky-high concentrations of heavy metals. Such was the case with the Atlantic cod and harp seals that made up the meals of Stone Age settlers on northern Norway’s Varanger Peninsula from around 5000 to 1800 BCE.
According to a recent study, cod bones from the settlement contained levels of cadmium up to 22 times higher than contemporary recommended limits, while seal bones showed similarly dangerous levels of lead. While it might seem strange that wildlife carried the risk of heavy metal contamination in an era well before industrialization, the study authors suggest this was the result of climate change. It’s possible the thaw from the last ice age (between about 120,000 and 11,500 years ago) produced rising sea levels that carried metal-laden soil into the water.
It’s well known that the ancient Romans enjoyed their wine, but it’s possible a component of the winemaking process fueled ill health in a manner that went beyond the typical hangover. The Romans made a sweet, syrupy substance called defrutum, which was prepared by boiling unfermented grape juice. This syrup was used as a preservative for wine and fruit, as well as in sauces for dishes of pork, veal, and lentils, as described in the famed Roman cookbook De Re Coquinaria.
The problem was in the sweetener’s preparation: Ancient scholars including Cato and Pliny the Elder called for the syrup to be boiled in a lead-lined pot, inevitably resulting in the syrup’s absorption of lead. Although the hazards of lead poisoning were known to the Romans, it apparently never occurred to these great minds that they were endangering the public with their instructions.
Nowadays, a typical Easter meal might include a ham and a sampling of the chocolate left by the Easter Bunny, but for Christians in medieval England, the holiday was incomplete without the serving of the tansy. The dish was named for its primary ingredient, the yellow-flowered tansy plant, which was mixed with herbs and a hearty helping of eggs to produce what was essentially a large, sweet omelet.
Coming on the heels of Lent, the tansy not only provided a welcome change from the strict diet of lentils and fish consumed over the previous 40 days, but also was said to provide relief from the gas-inducing Lenten meals. Despite its purported medicinal qualities, the plant is actually mildly toxic, the thujone in its oils dangerous to humans in high doses. Although the poison didn’t hinder the long-standing popularity of the tansy on dinner tables, people are generally dissuaded from eating the plant today.
You could highlight an array of foods in Victorian England that would fail to pass muster under any food safety laws, from the lead chromate found in mustard to the arsenic compounds used to color confectionery. However, given its ubiquity in households of the era, the most egregious example may well be bread.
Seeking to create thick loaves of an appealing white hue, Victorian bakers mixed in such ingredients as ground-up bones, chalk, and plaster of Paris. Another common additive was alum, an aluminum-based compound that inhibited digestion and contributed to the malnutrition rampant among the poorer population. Although the dangers of adulterated edibles were known among the more educated members of the public, there was little stopping the food producers and distributors who ignored these health risks in favor of profits.
Rhubarb Leaves
Known for its reddish stalk and tart flavor, rhubarb in the hands of a capable chef can be used to create delicious pies, sauces, and jams. That is, the stalks can be turned into such kitchen delights — the thick green leaves are chock-full of toxic oxalic acid and therefore not meant for ingestion. Unfortunately, this fact was not well known a century ago, as rhubarb leaves were recommended as a source of vegetation during the food shortages of World War I.
Consumed in small but regular doses, the leaves inhibit the beneficial effects of calcium and trigger the buildup of calcium oxalate, leading to kidney stones. While a human would normally have to eat something like 6 pounds of the stuff to experience the more acute effects (including vomiting, diarrhea, and kidney failure), there was at least one reported case of oxalic acid poisoning during the rhubarb leaf’s brief run as a lettuce substitute.
According to popular legend, the English aristocrat John Montagu, the 4th Earl of Sandwich, was engaged in an all-night card game in 1762 when he became distracted by hunger pangs. Not wanting to stop playing, he instructed his servant to bring him a snack of beef between two slices of bread, allowing him to satiate the twin desires of filling his belly and raking in more dough.
While he was hardly the first person in history to consider eating food in this fashion — Montagu may have been inspired by culinary creations in Turkey and Greece — the earl’s idea caught on across English high society and led to the honor of having his name affixed to this particular bread-based meal.
The sandwich soon spread to other social strata across Europe and in the American colonies, its popularity underscored by increasing appearances in cookbooks through the 19th and 20th centuries. However, numerous once-popular foods have failed to survive to the present day, and the same goes for certain old-fashioned sandwiches; some of them are just too bizarre for modern palates. Here are six sandwiches that were (mostly) pushed aside by modern diners in favor of tastier options.
In the U.S. in the 19th and early 20th centuries, oysters were a popular sandwich filling. Sandwiches known as “oyster loaves” were featured in Mary Randolph’s 1824 cookbook and guide The Virginia Housewife, and oyster sandwiches filled numerous entries in Eva Green Fuller’s 1909 Up-To-Date Sandwich Book. The first and most basic recipe from Fuller’s book instructs readers to add a dash of Tabasco sauce, lemon juice, and oil to chopped raw oysters (without specifying measurements), slather the mixture on white bread, and then top it off with a lettuce leaf.
A sandwich aficionado by the name of Barry Enderwick has dug up a trove of old cookbooks to recreate forgotten specialties on his "Sandwiches of History" social media channels, and the yeast sandwich is one such bygone dish. Yeast is typically used for the processes of fermenting beer and leavening bread, and it's unusual to find it as a featured ingredient of a dish. But in the 1930s, there was a push on the part of Fleischmann's to promote the nutritional benefits of its product, resulting in an entry in Florence A. Cowles' 1001 Sandwiches (1936). Cowles calls for five drops of "table sauce" to be added to a cake of compressed yeast, with the resulting paste spread on a cracker or bread. Regardless of whether the table sauce was meant to be ketchup, Worcestershire sauce, or another ingredient, this sandwich did little to help the ultimately unsuccessful attempt to enhance American cravings for yeast-filled meals.
This one's actually a bit of a misnomer. Yes, there are (chopped) pickles in here, but the instructions in 1916's Salads, Sandwiches and Chafing Dish Recipes also call for mixing the brined vegetable with whipped cream, mayonnaise, and grated horseradish, and adding chopped cooked beef to the mixture atop buttered bread.
Popcorn is a popular snack, so why not pair these bite-sized goodies with yummy buttered toast to produce an extra-delectable dish? That seems to have been the idea behind the recipe in 1909's Up-to-Date Sandwich Book, except the dish also calls for readers to add "five boned sardines, a dash of Worcestershire, and enough tomato catsup to form a paste."
Yes, this is exactly what it sounds like. According to the bestselling Beeton's Book of Household Management from 1861, the recipe simply calls for inserting a piece of cold toast between two slices of buttered bread and seasoning it all with salt and pepper (although the author, Isabella Beeton, helpfully suggests that it could be livened up with slices of meat). While clearly a relic of an era of different tastes, the toast sandwich was revived by the United Kingdom's Royal Society of Chemistry in 2011, and has also surfaced on the menu of the Fat Duck restaurant in Bray, Berkshire, England.
Mashed Potato Sandwich
This creation, also known as the "Greatest Man Sandwich in the World," is attributed to the one and only Gene Kelly. While it's unclear where the recipe first appeared, or whether it was invented or adopted by the famed performer, it is clear that this sandwich is not for folks looking to limit their carbs. The recipe requires a thick layer of leftover mashed potatoes to be spread on buttered French bread and topped with onion slices, mayonnaise, and a hearty dose of salt and pepper; the product is then browned in a broiler. According to one recipe, it was to be enjoyed with the "nearest mug of beer."
Since most of us walk into a grocery store with our minds fixated on the items needed to fill up the fridge and pantry, it’s rare that we take the time to marvel at the wonders of modern food shopping. Whether it’s a small neighborhood mart, a chain supermarket, or a gargantuan superstore, today’s grocery stores offer a dizzying range of brands for any given product, allowing discerning shoppers to make a choice based on price, ingredients, or even packaging. All necessary (and unnecessary) items can be wheeled in a cart to a checkout line, where a friendly employee will happily tabulate the items and accept various forms of payment. There are also self-checkout stations, where you can scan your items yourself and be on your way even faster.
Of course, such a process would have been completely alien to early humans who relied on hunting and gathering their food. And it likely would be fairly shocking even to the people accustomed to earlier forms of food shopping. Here’s a look at what grocery stores were like before the rise of Publix, Whole Foods, Trader Joe’s, and the other popular stores we frequent today.
According to Michael Ruhlman’s book Grocery: The Buying and Selling of Food in America, the earliest grocery depots in the U.S. were the country stores that surfaced in the 17th century. Along with offering a limited supply of culinary staples such as sugar, flour, and molasses, these markets provided a smorgasbord of other necessities of colonial America, including hardware, soap, dishes, pots, saddles, harnesses, shoes, and medicine. By the early 19th century, these stores — originally constructed from logs and mud — were largely replaced by newer frame buildings, which contained cellars that were large enough to house casks of whale oil and also cool enough to store eggs, butter, and cheese.
By the middle of the 19th century, the general store was a common sight across the small towns of the expanding United States. Similar to the country store, general stores stocked goods that both satiated hunger and catered to other crucial needs of paying customers. Food items included coffee beans, spices, honey, oatmeal, and dried beans, many of which were kept in barrels and required measuring the desired amount on (often inaccurate) scales. The stores also offered nonedible wares, including cloth, buttons, undergarments, hats, lamps, rifles, and ammunition.
Normally featuring at least one large display window, these stores were typically packed with goods piled high on shelves and tables amid the boxes and barrels stuffed into available spaces. A front counter displayed smaller items as well as such contraptions as a coffee grinder, scales, and the cash register. As general stores typically doubled as community centers, they were usually fitted with a stove to warm inhabitants during cold-weather months and often featured chairs for those who planned to stay and chat.
Even so, general stores were neither the most comfortable nor the most sanitary places. Customers often dragged in dirt and animal waste from the unpaved roads outside, while cast-iron stoves could produce a layer of soot over the displayed wares.
Although the term "grocery" referred to a tavern or a saloon through much of America's early history, it came to signify the emporiums that were appearing in more densely populated urban centers by the late 19th century.
These grocery stores typically carried around 200 products and occupied 500 to 600 square feet of floor space. As with general stores, these shops featured items piled high on shelves, with clerks fetching every item on a customer’s list and continuing to measure out both liquid and solid goods from barrels.
Although most proprietors were willing to offer a line of credit (a precursor to today’s credit cards), they provided far more limited choices than today’s stores, stocking only one type of each product. Also unlike most modern grocery stores, these early shops largely sold dry goods, meaning customers typically needed to make separate trips to a butcher, a bakery, and a produce peddler to round out the full array of dining needs.
The grocery model began to change when the Great Atlantic & Pacific Tea Company, owners of the A&P grocery chain, opened the first of its "economy stores" in 1912. Along with eliminating the credit lines and free delivery offered by independent retailers, A&P stores became known for offering popular brands such as Eight O'Clock Coffee.
The Tennessee-based Piggly Wiggly chain introduced additional innovations to the model with the opening of its first store in 1916. Most groundbreaking was its procedure of "self service." While goods still lined the lone aisle that snaked through the store, they were now within reach of customers, who picked out the price-marked items themselves instead of delegating the responsibility to a clerk. The customers then placed the items in a wooden basket — also a novel feature — before bringing everything to a checkout counter.
The streamlined processes resulted in lower costs for customers, and in turn produced an explosion in sales for retailers. The number of grocery chain stores increased from 24,000 to around 200,000 between 1914 and 1930, with the biggest, A&P, accounting for 16,000 of those outlets.
Although the 1920s saw the rise of "combination stores" that provided a wider range of meats, produce, and dairy products under one roof, the first true supermarket arrived with the opening of New York City's King Kullen in 1930.
A former employee of the Kroger chain stores, Michael Cullen decided to launch his new enterprise in a slightly more remote section of Jamaica, Queens. The location allowed for cheaper real estate, which meant that Cullen could open a larger store and provide space for the machines that were transforming American lives: automobiles.
The first King Kullen covered 6,000 square feet and offered 10 times as many food products as most existing markets, much of the stock sold straight out of packing crates. It was soon surpassed by New Jersey's Big Bear, which featured a 15,000-square-foot grocery surrounded by areas leased to vendors selling tobacco, cosmetics, and auto parts.
Although other features of the contemporary shopping experience had yet to be invented — shopping carts arrived in the late 1930s, the barcode system in the 1970s — the blueprint for the modern grocery store was in place, with supermarkets eventually expanding to stock 50,000 items, and colossal chains swelling to a footprint of more than 200,000 square feet.
Nowadays, as we casually stroll down the block-length aisles of superstores beneath towering shelves of colorful boxes, examine the fine print on packages in the organic section, or nibble on the free samples of cheese and dumplings, we can spare a thought for the evolution of this essential part of everyday life.
Few things are as refreshing as an ice-cold drink on a hot day. Indeed, ice is an essential part of the beverage industry today, with the global ice maker market valued at more than $5 billion. In the United States alone, the average person consumes nearly 400 pounds of ice per year.
Despite its popularity, most of us have probably never thought about how ice-cold drinks evolved into an everyday necessity. This simple pleasure has a long and interesting history shaped by ancient ingenuity, global trade, and evolving technology. From emperors importing ice from distant mountains to entrepreneurs revolutionizing its distribution, the journey of the ice in our drinks is a story of innovation that dates back to the first human civilizations.
Long before refrigerators and freezers, ancient civilizations found ingenious ways to keep drinks cool. The earliest recorded instance of ice storage dates back to the reign of Shulgi, the king of Ur in Mesopotamia from 2094 to 2046 BCE. Shulgi named the 13th year of his reign “Building of the royal icehouse/cold-house” (years were often named after a significant event), suggesting the construction of an icehouse during that period.
In China, the practice of harvesting and storing ice dates back to at least 1100 BCE. During the Zhou dynasty (1046 to 256 BCE), the royal court established a specialized department responsible for collecting natural ice blocks each winter and storing them in icehouses for use in the warmer months. This stored ice was used to cool food and beverages, including wine, and was also used in medical treatments.
Over time, ice collection became an organized practice, with officials overseeing its storage and distribution. Around 400 BCE, the Persians took preservation a step further by constructing yakhchals — massive, domed icehouses made of heat-resistant mud brick. These structures allowed them to store ice year-round, even in the arid desert climate. By carefully directing water into shallow pools that froze overnight, they amassed ice supplies that could later be used to cool drinks or create early versions of frozen treats.
Around the world, chilled beverages were a luxury enjoyed by the wealthy — one that did not become widely accessible for centuries. Around 300 to 400 CE, the Romans were importing massive blocks of ice from the Alps, transporting them over long distances to cool drinks and prepare chilled delicacies. This symbol of extravagance was primarily reserved for elite members of society; centuries earlier, the first-century emperor Nero was already known for his fondness for ice-cold refreshments. He ordered ice and snow to be brought down from the mountains and mixed with fruit juices and honey, an early prototype of sorbet.
During the Heian period in Japan (794 to 1185 CE), aristocrats preserved natural ice in underground icehouses known as himuro, allowing them to store and use it during the hot summer months. Historical texts such as The Pillow Book mention the consumption of ice in the imperial court, emphasizing its exclusivity among the noble class. The government even maintained official icehouses, and records from the period suggest that ice was carefully rationed and distributed among the elite. Unlike in Europe, where ice was primarily used for cooling drinks, in Japan, it was also consumed directly in the form of finely shaved ice, sometimes drizzled with natural sweeteners.
During the European Renaissance, the use of ice in beverages and food was particularly popular in Italy and France. In Italy, ice was harvested from the Alps and transported to cities, where it was stored in underground cellars or specially built icehouses known as neviere. These structures, often insulated with straw or sawdust, allowed ice to be preserved for extended periods. Wealthy households and royal courts used this stored ice to chill drinks, enhancing the dining experience. Records from the 16th century indicate that Italians used ice to cool wines and other beverages, a practice that became increasingly fashionable among the aristocracy.
By the 17th century, the development of ice storage facilities had expanded beyond the nobility, with vendors in Italian cities selling ice or chilled water to customers during the summer months. This period also saw the refinement of frozen desserts, with early recipes for flavored ices and sorbets appearing in Italy and later spreading to France. But the primary function of ice remained its role in cooling beverages, a practice that continued to evolve with advancements in ice harvesting and transportation.
The “Ice King” Helped Make Ice Accessible in the U.S.
For most of history, ice was difficult to come by, making chilled drinks a rare luxury. That changed in the early 19th century when a Boston businessman named Frederic Tudor saw an opportunity to monetize ice production. Dubbed the "Ice King," Tudor began harvesting ice from frozen ponds on his father’s New England farm, packing it in sawdust and shipping it to warm climates such as the Southern states and the Caribbean islands. His business took off and the modern commercial ice trade was born.
At first, ice was primarily used in bars and hotels catering to the social elite. Cocktails such as the mint julep and sherry cobbler became wildly popular in the United States, served over crushed or shaved ice. The sight of frost-covered glasses became a symbol of sophistication and modernity. By the late 19th century, thanks to improved ice-harvesting techniques and the rise of commercial production, ice became more affordable and accessible in the U.S. Then in the first half of the 20th century, home refrigerators allowed more and more families to store ice, making it possible for iced drinks to become a staple at American dinner tables.
From the mid-18th century until the mid-20th century, turtle soup was one of the most luxurious dishes in European and American cuisine. It frequently appeared on the tables of wealthy families and was served at dinners held by prominent politicians. While turtle soup is still considered a delicacy in some parts of the world, it has become all but obsolete in America. But why, exactly, did this dish enjoy more than 200 years as a prized culinary staple?
The first Europeans to eat turtle were not aristocrats, but sailors and explorers in the late 17th century. The green sea turtles found in the Caribbean Sea and Atlantic Ocean were initially seen as merely suitable sustenance for long journeys at sea. But as Indigenous peoples of the Caribbean islands taught European seafarers more about turtles, the simple food became seen as “exotic” and desirable. It caught the attention of the upper class in Europe, and before long, turtle meat became a coveted luxury on the continent.
By the early 18th century, Britain’s taste for turtle had extended to the American colonies, and while recipes for turtle casseroles and other dishes were prominent in cookbooks from the era, turtle soup was the most popular. Turtle’s delicate, veal-like taste and rich, gelatinous texture made it ideal for slow-simmered broths and stews.
With demand for turtle soup surging, overfishing quickly became a problem. The Caribbean sea turtle population began to decline rapidly in the early 1800s. In the U.S., diamondback terrapins from the country’s eastern shores became a popular substitute for sea turtles. Abraham Lincoln even served terrapin stew at his second inauguration in 1865. The dish, according to an 1880 edition of The Washington Post, was “an important part of any Washington dinner laying claim to being a pretentious affair.”
By the turn of the 20th century, “mock” turtle soup, made with calf’s head instead of turtle meat, had become almost as beloved as the real thing. But the appetite for real turtle soup remained, and overharvesting of turtle populations continued, particularly as canned turtle soup became an increasingly mainstream product.
The Fall of a Delicacy
When Prohibition started in 1920 and the manufacture, sale, and transportation of alcohol was banned in the U.S., a key ingredient of turtle soup, sherry, was no longer available. What’s more, without alcohol sales, many of the country’s fine dining establishments — the very ones that helped keep turtle soup flourishing among the upper class — struggled to remain open. As they gradually closed, the once-ubiquitous terrapin treat went with them.
In 1973, the Endangered Species Act made it illegal to kill sea turtles in U.S. waters — bad news for turtle soup fans but good news for sea turtles. By 2019, their population had rebounded, increasing by 980%. Even though a vintage recipe for turtle soup remained in a reissue of the classic American cookbook The Joy of Cooking as late as 1997, the hearty dish all but disappeared from menus and dining tables by the 1980s.
Whether you’re enjoying a glass of cabernet with a meal or downing IPAs with friends, you’re taking part in the multifaceted, multicultural act of alcohol consumption that dates back many thousands of years.
Indeed, although the dangers of excessive drinking are well known, and even small amounts of alcohol are now believed to come with health risks, imbibing has been part of the fabric of human existence since the dawn of recorded time. Some anthropologists argue that alcohol featured prominently in social customs that facilitated the rise and progression of civilizations. Others suggest that civilization itself was formed as a result of people settling in one area to domesticate crops for the production of alcohol.
Because spirits such as whiskey or vodka require distillation, a more complex process developed much later, beer and wine (and wine’s less-prominent cousin, mead) are the earliest forms of alcohol, dating from a time before any of humanity’s famous names, wars, or inventions etched themselves into history. Which sets up the ultimate bar debate: Which of these two ancient libations is older?
Early Humans Likely Discovered Alcohol by Accident
To let some of the air out of the suspense, we’ll note that it’s difficult to pinpoint when people first began drinking wine or beer, since proto-versions of both drinks can be formed with little to no human intervention.
Ethanol, or drinking alcohol, is created through the fermentation process that takes place when sugar meets yeast. In the case of beer, that occurs when a grain such as barley is exposed to moisture and its starches are converted into sugar, which deliberately introduced or naturally occurring yeast then ferments. Similarly, crushed or even overripe fruits with a high sugar content, such as grapes or figs, will naturally begin to ferment, creating the basis for wine.
It’s likely that early humans (or even animals) stumbled upon the intoxicating effects of fermented grains and fruits, and maybe even figured out how to replicate the experience by leaving their collected wares out in the elements for too long. We can only speculate on the concoctions that may have been experimentally produced by pre-Neolithic people, although they were almost certainly different from the beers and wines that emerged under more controlled conditions in later epochs.
Unsurprisingly, such conditions were well established by the civilizations that introduced writing and other major advancements in math and science: the ancient Mesopotamians and Egyptians.
Along with the 5,000-year-old remnants of barley beer discovered at the Godin Tepe ruins in modern-day Iran, evidence of a Sumerian drinking culture has surfaced in the cuneiform receipts of beer sales as well as in the “Hymn to Ninkasi,” an ode to the Sumerian beer goddess, which includes a recipe for brewing. Grain-rich beer provided nutritional benefits for Bronze Age drinkers, and may have been safer to consume than the water from rivers, which could be contaminated with animal waste.
Beer also figured prominently in the lives of Egyptians around the same time; it’s believed that workers on the pyramids at Giza received more than 10 pints per day in rations, while even children consumed low-alcohol varieties.
Meanwhile, non-native wine grapes were grown in both areas, although wine was typically reserved for the palates of royalty and priests. The Egyptians were the first known culture to document their winemaking, and left behind records of the process to go with jars of their handiwork in the burial chambers of rulers and other prominent figures.
A 9,000-Year-Old Wine-Beer Hybrid Was Found in China
Of course, humans lived in settlements for thousands of years before these celebrated civilizations emerged, and alcohol played a part in many such early cultures.
Thanks to the discovery of drinking vessels in prehistoric Andean sites, archaeologists believe that the popular South American maize-based beer known as chicha may have been around since 5000 BCE. Going back even further, the detection of wine residue in jars and grape pollen in the soil around two sites near Tbilisi, Georgia, dating to around 6000 BCE showed that the residents of these former villages were among the earliest known wine producers.
To date, the earliest confirmed chemical evidence of an alcoholic concoction is neither specifically beer nor wine, but something of a combination of the two: As detailed in a 2004 paper, an examination of 9,000-year-old pottery jars from the Jiahu Neolithic site in northern China revealed the residue of a fermented beverage of rice, honey, and fruit.
Meanwhile, ongoing discoveries continue to push the beginnings of boozy beverages further and further into the past.
While it was once thought that humans domesticated grapevines no earlier than 8,000 years ago, a 2023 study of the DNA of more than 3,500 vine samples showed that wine grapes were domesticated in the Caucasus region of Western Asia and Eastern Europe as far back as 11,000 years ago. Table grapes were also domesticated in the Middle East around that same time, and it was these crops that were crossbred with wild versions to launch the wine varieties that became popular throughout the continent.
The idea of an 11,000-year-old wine is certainly impressive, but the archaeological record suggests the possibility of an even older brew: In 2018, a Stanford University research team found 13,000-year-old remains of starch and plant particles called phytoliths, which result from wheat and barley beer production, at a Natufian gravesite near modern-day Haifa, Israel. Although critics believe the evidence points to breadmaking, the Stanford team contends that both bread and a thick, grog-type beer were created at this site.
For now, we’ll give the edge in this battle of seniority bragging rights to beer. But with more discoveries sure to pop up in the coming years, it's likely this debate will be revived — and continue past many a closing hour.
Most of us don’t give a whole lot of thought to the habit of finishing a satisfying meal with a dessert of something sweet — we’re too busy savoring the delectable mouthfuls of cake, custard, or ice cream.
Yet this is a clear culinary tradition that many people follow. While some may elect to eat sweets before a main course, and others simply dig into pie or brownies at any time of the day, most adhere to the standard operating procedure of dessert after the main course at lunch or dinner. But how and when did this order come about? Why do we eat sweets after a savory meal, and not the other way around?
To start somewhere close to the beginning, the craving for sweets is biological. Our hominid ancestors realized they derived more energy from ripe fruit with a higher sugar content than unripe fruit, and humans evolved with a hardwiring that connected sweetness to pleasurable feelings.
This primal need perhaps explains why sweets have traditionally featured in the religious ceremonies of many cultures. As described in Michael Krondl’s Sweet Invention: A History of Dessert, Mesopotamian cooks prepared cakes as an offering to the goddess Ishtar. Similarly, Hindus throughout India have presented a sugar and milk concoction known as pedha to deities such as Kali for more than two millennia.
At times, the ritual of serving sweet dishes at distinct intervals has translated to something similar to the modern idea of dessert. After a day of fasting in celebration of Krishna’s birthday, Hindus traditionally indulge in treats such as bhog kheer, a pudding, or shrikhand, a sugar-flavored yogurt. In Turkey, the end of fasting at Ramadan means an opportunity for celebrants to sink their teeth into baklava, a beloved pastry.
Of course, the preparation and consumption of sweets has long been a part of secular mealtimes as well. The Deipnosophists, a work from the third-century Greek writer Athenaeus of Naucratis, describes an array of honey-coated fare served over a series of lavish banquets. However, the now-commonplace notion of specifically relegating such sweeter foods to the end of a meal has its origins in France.
Diners in Medieval France Enjoyed Meal-Ending Sweets
According to Sweet Invention, the term "dessert" appears in French cookbooks as far back as the 14th century; the French loanword is the noun form of the verb desservir, meaning "to remove what has been served." In the Late Middle Ages, dessert was distributed after the dishes of the main meal had been cleared, although these edibles weren't necessarily of the sweet variety.
The serving that followed dessert and concluded the meal, known as the issue, was more likely to consist of sweet foods such as fruit or spiced candies. Both the dessert and issue fell under the category of entremets, smaller portions that appeared between or after the main courses.
For European diners of the Late Middle Ages, it was common to see dishes of meat and cakes served together as the main course; there was little attempt to separate these foods of radically different tastes and textures. This remained the case even after sugar became more widely available on the continent, and influential Renaissance-era Italian cooks began showering all varieties of meals with healthy doses of the valuable commodity.
Desserts Emerged as a Distinct Course in the 17th Century
By the 17th century, there was a growing distinction between sweet and savory courses among French culinary practitioners, and with it arrived the modern notion of the proper way to end a meal. Dessert even earned an entry in the 1690 edition of the Dictionnaire Universel, defined as "the last course placed on the table... Dessert is composed of fruits, pastry, confectionery, cheese, etc."
Recipe books of the era also devoted increasing quantities of print to instructions for pastries, jams, and fruit dishes. However, the preparation of these meal-ending foods fell under a different jurisdiction than that of the chefs in charge of the main courses. Desserts were handled by confectioners who worked in the kitchen’s "office," or pantry.
Although the office was initially considered a subordinate branch of the kitchen pecking order, its confectioners came to be regarded as artisans in their own right thanks to the sculptural desserts served at the ostentatious dinners of King Louis XIV and other royals. Dessert as an art form arguably reached its peak in the early 19th century with the creations of French chef Antonin Carême, who built massive replicas of pyramids, temples, and fountains out of sugar mixtures.
The French Revolution Led to Modern Dining Customs
The guidelines for dessert were changing even as Carême was producing his classically inspired pièces montées. The fall of the ruling class with the French Revolution meant that the chefs who once toiled in palace kitchens became unemployed. While some were able to find wealthy benefactors, others spurred the transformation of the public dining house by launching new eateries around Paris.
These restaurants introduced the concept of service à la russe, in which each customer ordered individual dishes to his or her liking, delivered one course at a time. Meanwhile, the rise of cafés and tea houses throughout the city further popularized the concept of single-portioned desserts.
By the arrival of the 20th century, the habit of polishing off a meal with dessert, whether at home or at a restaurant, had taken root in the country. And given France's powerful influence on culinary customs, it wasn't long before this sweet finishing touch at mealtime became standard across the rest of Europe, as well as on the other side of the Atlantic.
As far as ephemera is concerned, few things are as temporary as snack foods from the past. Snacking itself is an evanescent experience, a fleeting moment of between-meal indulgence or an inattentive nosh during a spectator event. But snacks are also a major part of American culture; snacking has doubled since the late 1970s, and according to the 2024 USDA survey “What We Eat in America,” 95% of American adults have at least one snack on any given day.
The idea of snacking has distinctly 20th-century origins. Eating between the traditional three meals per day was frowned upon during the 19th century, and proto-snack street foods of the time (such as boiled peanuts) were considered low class. But the Industrial Revolution, combined with a more enterprising spirit around the turn of the 20th century, created business opportunities for packaged, transportable foods, which were often marketed as novel expressions of modern technology.
As the nascent snack market emerged and grew, companies introduced countless products with varying degrees of success. Some, such as Cracker Jack (which debuted in 1896) and Oreos (which debuted in 1912 as a nearly exact imitation of the earlier Hydrox cookies), endure to this day. But history is littered with the wrappers of discontinued snacks. Here are some long-gone treats we’d love to see make a comeback.
The Schuler Candy Company made this distinctive chocolate and cherry candy bar from 1913 to 1987. Each Cherry Hump bar contained two cherries, cordial, and fondant, and was double-coated in dark chocolate. In an unusual final step in their production, the bars were aged for six weeks in order for the runny cordial and thicker fondant to meld and reach a cohesive state. Even after aging, however, the filling still had a more liquid texture than that of other candies, and this ended up being its undoing.
When Schuler became a subsidiary of Brock Candy Company in 1972, Brock sought to update the production and distribution methods of Cherry Humps, and chose bulky high-volume pallet shipments instead of the previous method of fanning out multiple shipments to smaller distribution lots. What made sense on paper for efficiency was disastrous in practice for a product as fragile as Cherry Humps: The candy often arrived at its destination badly damaged, with visibly sticky packaging from leaking cordial.
Instead of shoring up the candy’s packaging to protect it, or adjusting the shipping method, Brock changed the candy’s recipe to make it sturdier. The new recipe did away with the enticingly juicy cordial and fondant filling, instead setting firmer cherries in a layer of dense white nougat. There was no more gooeyness to speak of, and in turn, no more of the candy’s signature appeal. Sales steadily declined and Cherry Humps were discontinued.
Introduced at the beginning of the 1960s, Baronet was a vanilla creme sandwich cookie Nabisco advertised in a memorable (if repetitive) jingle as being made with milk and “country good” eggs. The appeal of Baronet cookies was in their approximation of a homemade treat: The shortbread wafers had a natural buttery flavor, and they sandwiched a filling that Nabisco claimed tasted “just like the icing on a cake.”
While they might sound like a vanilla equivalent of Nabisco’s much more famous Oreos, Baronet cookies weren’t as sweet, and they had a less ornate design than Oreos, with gently scalloped edges and embossed lines radiating from the center of the cookie. They also tended to be a little more economical than Oreos.
Though today’s grocery store shelves are stocked with a number of Oreo variants that stretches the imagination, the cookie was known for decades in a single variety: the classic chocolate wafers with vanilla creme. But when Oreos were introduced in 1912, this wasn’t the only version available; there was also a version with a lemon creme filling.
Sometimes referred to as lemon meringue, this chocolate-lemon combination is an unusual one for today’s palate, as orange tends to be the more common citrus to pair with chocolate. Then again, perhaps the chocolate-lemon combination was unusual for the early-20th-century palate as well: By the 1920s, the lemon filling was discontinued. Still, considering the sometimes bizarre reaches of today’s Oreo flavor offerings, it’s surprising the chocolate-lemon combination hasn’t made a comeback.
Tato Skins were a potato chip snack introduced by Keebler in 1985. Unlike most potato chips, they were made from dehydrated potato flakes (as opposed to potato slices), which made them somewhat similar to Pringles. But what differentiated Tato Skins from every other potato chip was right there in the name: The ingredients included the skin of the potato, in order to give the snack a taste that approximated a baked potato. The chips even had a rough “skin side” when flipped over.
Tato Skins initially came in three flavors: cheddar cheese and bacon, sour cream and chives, and “baked potato” (i.e., no seasoning other than salt), with a barbecue flavor added to the lineup in 1987. By 2000, though, the chips had been discontinued — perhaps a victim of cuts during a turbulent era of acquisition and sale from 1996 to 2000. While Keebler’s Tato Skins division was purchased by a company that reintroduced the snack as TGIFridays Potato Skins later in 2000 (thanks to a licensing deal with the eponymous restaurant chain), the newer version was formulated differently from the original.
Jell-O Pudding Pops were frozen pudding on a stick, first test-marketed on a regional basis in 1978 and introduced nationally during the early 1980s. They were developed by Jell-O parent company General Foods in order to respond to changes in eating patterns during the era. Desserts had been losing ground to portable afternoon snacks, so Pudding Pops were a way to reconfigure General Foods’ Jell-O Pudding as a portable snack. Accordingly, Pudding Pops were originally available in four classic pudding flavors: chocolate, vanilla, banana, and butterscotch. They had a smooth texture that was similar to a Fudgesicle, but with a slightly richer creaminess.
The product was a huge success from a sales standpoint. Five years after entering the market, Pudding Pops were bringing in around $300 million in annual sales. But there was trouble on the back end: Since Jell-O was generally a dry goods company, its manufacturing and distribution processes were not originally equipped to handle frozen foods. As a result, Pudding Pops required added expenditures all along the supply chain, and the snack didn’t reach profitability targets despite the encouraging sales figures.
By 1994, Pudding Pops had vanished from the freezer aisle. In 2004, Jell-O licensed Pudding Pops to Popsicle, a seemingly ideal partnership for solving the previous supply chain issues. However, Popsicle changed the formulation, using its own molds for the signature narrow and rounded-off Popsicle shape, rather than the wider and flatter shape of the original Pudding Pops. Despite the potential for the relaunched product to take advantage of a market for nostalgia, the Popsicle-branded Pudding Pops never caught on, and were discontinued in 2011.
The ancient Babylonians, who lived in modern-day Iran, Iraq, Turkey, and Syria as far back as the early second millennium BCE, were among the earliest human civilizations. They pioneered such concepts as written language, legal justice, and mathematics, and the culinary arts are no exception: The world’s earliest written recipes survive today in the form of ancient Babylonian tablets dating to around 1730 BCE. The tablets were unearthed in the 1920s, but were initially thought to be medical texts. It wasn’t until the 1980s that researchers finally deduced that the artifacts actually comprised an early cookbook — in fact, it’s considered to be the oldest surviving cookbook in the world.
I was curious to see how these early recipes tasted — especially compared to contemporary dishes — so I set out to recreate a couple of them and try them myself. One of the three tablets contains a summary of 25 recipes for various stews and broths, while the other two describe those recipes in more detail. But the tablets still lack critical specifics, such as exact ingredient amounts and cooking times. The translation for one recipe, for instance, simply says, “Meat is not used. You prepare water. You add fat…” and so on, which would have been hard to replicate. Luckily, a team at Yale University used the information on the tablets, along with their scholarly knowledge of ancient Babylon, to compile several modern recipes that are considered accurate interpretations of those ancient meals.
Using their findings, I prepared two of the recipes at home: a lamb and beet stew, and a vegetarian broth whose name has been translated as “unwinding.” (Though it’s unclear why it’s called “unwinding,” experts suggest it could have been a dish people ate to relax.) Despite the ancient nature of these recipes, it wasn’t difficult to compile the ingredients, or at the very least suitable substitutes.
Lamb and Beet Stew
The primary dish I made was a lamb stew. Lambs were among several animals that were domesticated in ancient Mesopotamia, and were a key component of cuisine in the region. Mesopotamia was also an agricultural hot spot that came to be known as the Fertile Crescent — referring to the area’s rich, fertile soil — so it comes as little surprise that the lamb stew contained plenty of fresh veggies as well.
Ingredients
1 pound diced lamb leg
½ cup rendered fat (we used cow fat instead of sheep fat)
½ teaspoon salt
1 cup beer
½ cup water
1 small chopped onion
1 cup chopped arugula
1 cup spring onions
½ cup cilantro
1 teaspoon cumin
1 pound diced red beets
½ cup chopped leek
2 cloves garlic
Coriander seed, cilantro, and kurrat (we used spring onion) for garnish
For this dish, I had to make a few minor adjustments given the ingredients I had available. The recipe as written called for an Egyptian leek called kurrat, which is known for having a mild onion and garlicky flavor. Given the similar flavor profile, spring onions are a suitable alternative, so I used those instead.
The dish also calls for beer, which was a major component of Babylonian culture. It was difficult, of course, to find a beer that would have been exactly like the ones drunk by the ancient Babylonians, but I looked for an option that was as close as possible. Most of the beer recipes archaeologists have uncovered from ancient Babylon are made with barley. Tate Paulette, the author of In the Land of Ninkasi: A History of Beer in Ancient Mesopotamia, writes that “there’s no evidence for the use of hops” in Babylonian beer, unlike in modern beers. He also notes that early beers often included “date syrup, and additional flavorings,” giving them a sweetness. Brew Your Own magazine, meanwhile, points out that many ancient beers were sour in nature. Taking all this into account, I selected a fruity and sour ale to roughly replicate the taste profile of Babylonian beer.
The cooking process for this dish was rather simple, and not unlike how you’d prepare any modern stew. The major difference was how heat was applied — while cooks in ancient Babylon would’ve used an open flame, I used an electric stovetop.
“Unwinding” Broth
The “unwinding” recipe is a broth thickened with barley bread. The broth contains many of the same ingredients as the lamb stew, including salt, scallions, cilantro, garlic, and leeks. In addition, toasted and crushed barley seeds are formed into a dough, baked, and added to the broth. This bread, mind you, isn’t like the loaves we enjoy today: It has a hard, crunchy shell, akin to bread that’s partially gone stale, and a crumbly interior rather than the soft, pillowy texture of modern baked breads.
Ingredients
14 ounces barley seeds
¾ cup warm water
½ teaspoon salt
½ ounce spring onion
¼ ounce cilantro
2 cloves garlic
3 ½ ounces leek
2 tablespoons untoasted sesame oil
6 ¼ cups water
Creating the broth was a bit more complicated than making the stew, as it required baking a sourdough-style barley bread, which took multiple days. First, I soaked the barley seeds overnight, then crushed them into dust, mixed them with warm water to form a dough, and let the mixture sit for 12 hours. The next day, I baked the dough for 20 minutes before removing it and letting it sit on the counter to cool. Then I got to work preparing the broth itself, which proved to be extremely simple. After letting the mixture simmer for an hour, I crumbled the bread into the broth, resulting in a thicker texture.
The Result
After a simple yet lengthy cooking process, it was time to enjoy these classic Babylonian dishes. Both dishes were reminiscent of some stews and soups that you’d eat today, but the spices and seasoning were less complex, which created a less interesting flavor profile overall. Still, they were pretty tasty and filling.
I can say with confidence that the lamb and beet stew is something I’d gladly cook and eat again, though it wasn’t without its flaws. The lamb, beets, and herbs complemented each other quite nicely in terms of flavor, with the beets providing a natural sweetness that counteracted the earthiness of the meat. The textures were also on point, as the mix of chewy lamb and crunchy root vegetables worked out well.
However, some of the bites I took were too sweet, as there was a lack of earthy spices to even things out. Some simple seasoning, such as a bit more salt or some black pepper, would have done wonders in balancing out the recipe. Black pepper, however, is native to southern India, and it didn’t make its way to Mesopotamia until Asian traders brought it to the region. (It’s unclear exactly when this occurred, but records show that black pepper made its way from India to Egypt around 1500 BCE via trade routes that would have passed through Mesopotamia.) Still, the stew was a warm and delicious dish that would be perfect for a camping trip or a cold day.
The broth, on the other hand, was passable albeit rather uninteresting. It contained hints of onion and garlic, and the crumbled barley bread helped to thicken the otherwise thin consistency. There was nothing unpleasant about eating this dish, but little to excite the palate either. It was simply a vegetable stock with a bit of semistale bread mixed in. So while “unwinding” may be a good choice if you’re looking for something light and easily digestible, it isn’t a complex culinary delight like the lamb and beet stew.
In summary, I was surprised by how closely the cooking process resembled that of the low-and-slow dishes I frequently make in my kitchen. It’s also fascinating to think just how similar these flavors from 4,000 years ago are to those of our recipes today. Despite the simple flavor profile, both of these dishes were warm and filling, making them especially good for cooler days.
When we think of oranges today, we picture an almost perfectly round fruit with a sweet, citric, juicy interior that is, as the name suggests, orange in color. It’s a common fruit around the world; in the U.S., oranges consistently rank as the third-most-consumed fresh fruit behind bananas and apples, while orange juice ranks No. 1 among juices.
But oranges weren’t always as popular or as common as they are today — and they didn’t even look the same. The oranges we’re familiar with are the result of thousands of years of cultivation and selective breeding. Here, we peel back the layers of history to discover what oranges used to look like and how they evolved into the fruit we enjoy today.
The Origins of Oranges
Oranges, and all other citrus fruits, can trace their roots to the southeast foothills of the Himalayas. According to DNA evidence, the first citrus trees appeared in this region about 8 million years ago; from there, they spread across the Indian subcontinent and then to south-central China. These ancient citrus fruits, however, were nothing like the oranges we know today. They were smaller, often bitter, and came in a variety of shapes and colors, from knobby, yellow fruits akin to the modern citron to large, green, smooth-skinned citruses similar to the modern pomelo.
All the oranges, lemons, limes, and grapefruits we eat today are descendants of just a handful of ancient species, namely citrons, pomelos, and mandarins, all native to South Asia and East Asia. The sweet orange we know today, which accounts for about 70% of global orange production, is a cultivated hybrid of the ancient pomelo — a large, pale green or yellow fruit with a thick rind — and ancient mandarins, which were then only a little larger than olives. That original hybrid, however, would have been quite different from the oranges we have today.
Modern oranges are the result of extensive selective breeding over thousands of years. Early oranges would certainly not have had the uniformity of our modern varieties, and they were likely smaller, with looser, bumpier skin. In terms of taste, they could have been bitter or sweet or anywhere in between — the first known reference to what are now called sweet oranges comes from Chinese literature from around 314 BCE.
As for the color, not all early oranges were orange, with shades of the fruit ranging from pale yellow to green. In fact, even today, many oranges remain green when ripe, particularly in warmer parts of the world, since the color of an orange is related to temperature and environment, not ripeness. Oranges turn orange due to exposure to cold, and some oranges grown in warmer climes are picked green and treated with ethylene gas to make them more orange, a process known as degreening.
By the 11th century, oranges were being grown in Southern Europe, though they had a bitter flavor and were used primarily for medicinal purposes. Sweet oranges, similar to what we typically consume today, did not appear in Europe until the 16th century. These soon caught on, beginning a period of breeding programs and intensive cultivation that began to shape the orange into a fruit more closely resembling the common modern variety.
At this point, oranges wouldn’t necessarily have been as large, as uniformly sized, or as vibrantly orange as we are used to, but they would have been very much recognizable to us. Evidence for this is amply supplied in paintings by various artists from the 16th century onward — for example, Giuseppe Arcimboldo’s “Winter” (part of the “Four Seasons” set, 1563); Willem Kalf’s “Wineglass and a Bowl of Fruit” (1663); Luis Melendez’s “Still Life with Lemons and Oranges” (circa 1760); and Vincent Van Gogh’s “Still Life with Basket and Six Oranges” (1888).
As for the word “orange,” it referred to a fruit first and a color later. The word comes from the Old French word for the citrus fruit, pomme d’orenge, which in turn is thought to have come from the Sanskrit word nāranga via the Persian and Arabic languages (hence the Spanish word for orange, naranja). The use of orange as the specific description for a color started to appear in English in the 1500s — many centuries after humans started eating actual oranges.