8 of the Most Expensive Foods in History

Black caviar
Credit: Natalia Lisovskaya/ Adobe Stock
Author Bess Lovejoy

February 11, 2026

Throughout history, certain foods have carried astronomical price tags. Some were costly because they were difficult to grow or find, others because they traveled great distances along early trade routes, and still others because myth elevated them into coveted status symbols. 

From spices that once cost more than gold to tropical fruits displayed as signs of prestige, luxury foods reveal what different eras valued — and what they were willing to pay for. This list explores some of the most expensive and unexpected delicacies in history, tracing how scarcity, symbolism, and shifting tastes turned everyday ingredients into the edible treasures of their time.

Credit: orinocoArt/ Adobe Stock 

Saffron 

Sometimes called “red gold,” saffron has been prized for millennia, appearing in written records as early as 2300 BCE. In the 14th and 15th centuries, it was often worth more than gold by weight, and it’s currently the most expensive spice pound for pound, costing between $2,000 and $10,000 per pound. Its extraordinary cost comes from the staggering labor required to harvest it. Each Crocus sativus blossom yields just three fragile stigmas (the part of the flower that catches pollen), which must be handpicked during a brief fall bloom, ideally midmorning when the flowers fully open. Modern estimates suggest that producing 1 kilogram (2.2 pounds) of dried saffron requires 70,000 to 200,000 flowers and 370 to 470 hours of labor, which helps explain why the spice has historically fetched astonishing sums.

Saffron’s value also surged thanks to its many uses: Historically, it was a culinary cornerstone across Asia and the Mediterranean, a sacred dye in Hindu traditions, and a prized medicinal ingredient in ancient Rome (where Pliny the Elder claimed it was nearly a universal cure). Demand especially soared in medieval Europe; Venice dominated the saffron trade, and adulteration was taken so seriously that a merchant was once burned at the stake for trying to sell a phony product. 

The Origin of 7 Surprising Nicknames

“Polly Put the Kettle On” sheet music
Credit: ilbusca/ DigitalVision Vectors via Getty Images
Author Bess Lovejoy

February 5, 2026

Some of the most familiar nicknames in the English language seem to have little in common with the names they’re supposedly short for. How did Chuck become short for Charles, for instance? And why do some Margarets go by Peggy? Many of these curious nicknames carry centuries of linguistic history. Some emerged from playful medieval rhymes, others from sound shifts in everyday speech, and still others from migration, mishearing, or sheer convenience.

The word “nickname” itself also has a somewhat surprising origin, making it a perfect example of how these terms evolve. It comes from the Middle English ekename, meaning “also-name” or “additional name,” built on the Old English word eaca, meaning “an increase.” An ekename was literally an extra name added to the one a person already had. By about the 15th century, saying “an ekename” aloud became “a neke name,” which eventually led to its familiar form: “nickname.” 

The word “nickname” has nothing to do with the name Nicholas or the word “nick” — it’s a linguistic accident that stuck, like many nicknames themselves.

Credit: 14031857/ iStock via Getty Images Plus 

Peggy for Margaret

At first glance, Peggy seems like a bizarre detour from Margaret — a name that already comes with a slate of more intuitive nicknames, from Meg to Maggie to Margo. But Peggy follows a classic medieval pattern: rhyming nicknames.

In Middle English, Margaret was commonly shortened to Meg or Mog. From there, medieval English speakers — famous for inventing playful, rhymed pet names that switched up the first letters of a name — spun off new versions. Playing with “m” and “r” names was especially common. Meg became Peg, and Meggy became Peggy. 

Peggy isn’t the only unexpected nickname Margaret picked up. Educated English speakers in the early modern era also used Daisy, inspired by the French name Marguerite, which means “daisy.” And in an era when families routinely reused the same handful of given names, these nickname detours made practical sense; they were useful for telling one Margaret from another. Luckily for all the Peggys, theirs was a far more charming nickname than one of the era’s less-fortunate options: Some Margarets were called Maggot.

8 U.S. Presidents Who Struggled in School

JFK at Harvard, 1938
Credit: Hulton Archive/ Archive Photos via Getty Images
Author Timothy Ott

February 5, 2026

Given the importance of the position of president of the United States, you might expect those who have held the role to wield academic credentials that distinguish them from the general public. Some presidents, including Thomas Jefferson, John Quincy Adams, and Woodrow Wilson, to name a few, certainly demonstrated their advanced brainpower as students. Even many of those who came of age in the rough-and-tumble frontier years of the 19th century showed a capacity for learning in spite of limited opportunities, with Abraham Lincoln standing as the most famous example of a largely self-taught commander-in-chief.

Yet, there are also a fair share of presidents who either treated their school days as a necessary nuisance to slog through or required some extra assistance to avoid failing grades and expulsion. Here are eight U.S. presidents who encountered more adversity than they wanted in the halls of academia.

Credit: Universal History Archive/ Universal Images Group via Getty Images 

Zachary Taylor

Reared by a prominent landowning family outside Louisville, Kentucky, Zachary Taylor attended at least two local schools as a child. However, one has to question just how much the 12th president learned in these classrooms, as his earliest surviving writing (from when he was a young man) reveals severe deficiencies in spelling, grammar, and penmanship. 

Part of this may be attributable to the quality of schooling available on the Kentucky frontier, but it’s also likely this son of a Revolutionary War officer found his attention drawn to what he considered more exciting possibilities. Sure enough, Taylor struck out on what became a lengthy military career in 1808, although he was said to have developed a greater appreciation for education as he aged.

The Surprising History of Tarot Cards

Tarot Decks Keep Changing
Caption: Tarot card deck
Author Bess Lovejoy

February 5, 2026

For centuries, tarot cards have carried an aura of mystery. To many people, they are tools for reflection, storytelling, or spiritual insight, while to others, they are simply beautiful objects, or perhaps sources of fear and disdain. But few people are aware that tarot’s earliest history has nothing to do with divination. Long before the cards were used to peer into inner lives or imagined futures, they were created for a very different purpose: play.

Credit: DEA / G. CIGOLINI/ De Agostini via Getty Images 

A Renaissance Card Game

Tarot emerged in northern Italy in the early 15th century, at the height of the Renaissance, when card games were a fashionable diversion among aristocratic courts. Wealthy families commissioned ornate decks known as carte da trionfi, or “cards of triumph,” to play a game called tarocchi. Though the rules have not survived intact, the game appears to have combined skill, memory, and chance, and may have been a bit like bridge.

The most likely patron of the first tarot deck was Filippo Maria Visconti (1392-1447), the reclusive Duke of Milan, whose court delighted in symbolic display and intellectual play. Roughly 15 decks associated with the Visconti family survive today, most famously the Visconti-Sforza deck, dating from the mid-1400s. These cards were luxury objects, lavishly decorated with gold leaf and fine detail, designed to impress as much as to entertain.

Why Do Men’s and Women’s Shoes Have Different Sizes?

Close-up of people’s shoes at a bar
Credit: FPG/ Retrofile RF via Getty Images
Author Tony Dunnell

January 29, 2026

If you’ve ever tried on a pair of shoes in your size that didn’t even come close to fitting, it could be because they were labeled for the opposite sex. Men’s and women’s shoes have completely different sizing systems in the United States (unlike most of the world, where sizing is unisex). A women’s size 8 foot, for example, is roughly equivalent to a men’s 6.5 in U.S. sizing. This seemingly arbitrary system leaves many shoe-hunters scratching their heads. Why do we have different numbers for men’s and women’s shoes? Why not just use the same sizes for everyone, regardless of gender? 

Credit: H. Armstrong Roberts/ClassicStock/ Archive Photos via Getty Images 

The First Shoe Sizes

Though the first known description of a shoe-sizing system appears as early as 1688 in England, it wasn’t until the 19th and early 20th centuries that concerted efforts at shoe standardization took place. Manufacturing was becoming increasingly industrialized, and shoemakers were transitioning from custom-made footwear to mass production — which required standardized sizing systems. 

In the United States, the first detailed sizing system was introduced by New York businessman Edwin Simpson in the 1880s. He based his sizes on the existing barleycorn system in the U.K.: An inch had historically been defined in Britain as three barleycorns laid end to end, and the barleycorn had long served as a unit for measuring bespoke shoes. 

Each full U.S. shoe size increases by one-third of an inch (a barleycorn), so to turn a foot measurement into a size number, you multiply the foot length by three (to count how many of those one-third-inch measurements fit) and then subtract a fixed amount: 22 for men’s shoes and 21 for women’s shoes. For example, a man who wears a size 9 typically has a foot about 10 and one-third inches long, because 9 plus 22 is 31 (the barleycorn measurement), and 31 divided by 3 is 10.333 inches.

This subtraction keeps the size numbers small and convenient instead of producing much larger sizes such as 30 or 40. The slight difference in the amount subtracted for men’s versus women’s sizes reflects historical differences in how the two scales were set up. The details of who determined these subtraction amounts have been lost to history, but the resulting sizes are still used to this day.
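The barleycorn formula described above can be sketched in a few lines of Python. The offsets (22 for men, 21 for women) are the fixed subtractions the article cites; actual retail sizing varies by manufacturer, so this is an illustration of the historical formula, not a fitting guide.

```python
def us_shoe_size(foot_length_inches: float, offset: int) -> float:
    """Barleycorn sizing: 3 barleycorns (thirds of an inch) per inch,
    minus a fixed offset to keep the size numbers small."""
    return 3 * foot_length_inches - offset

def foot_length(size: float, offset: int) -> float:
    """Invert the formula: recover foot length in inches from a size."""
    return (size + offset) / 3

MEN, WOMEN = 22, 21  # historical subtraction amounts cited above

# A men's size 9 corresponds to a foot about 10 1/3 inches long,
# matching the article's worked example: (9 + 22) / 3 = 10.33.
print(round(foot_length(9, MEN), 2))   # 10.33
```

Running the size formula forward on that same 10 1/3-inch measurement returns size 9, confirming the two functions are inverses.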

Simpson’s measuring system was adopted by the Retail Boot & Shoe Dealers’ National Association about a decade later, providing the first nationwide shoe-sizing guidelines. By the 1920s, standardized measuring devices such as the RITZ Stick and the Brannock Device were adopted by stores across the country, further cementing standardized shoe sizes. 

6 Jobs That Paid More in the Past

American Airlines pilots, 1950
Credit: Ivan Dmitri/ Michael Ochs Archives via Getty Images
Author Kristina Wright

January 27, 2026

Today, the future is full of big promises, and careers in fields such as artificial intelligence, cybersecurity, and climate science are expected to produce the next generation of top earners. But history reminds us that many of the jobs that once paid really well no longer deliver the same economic security. Over the years, technology, deregulation, and shifting markets have steadily eroded pay and bargaining power. These six occupations are reminders that even the most lucrative careers can shift with the changing times.

Credit: Bettmann Archive via Getty Images 

Stockbroker

From the 1950s through the early 1980s, stockbrokers occupied an elite position in the American labor market. Trading commissions were fixed, information was scarce, and brokers acted as essential gatekeepers to financial markets. While precise figures are difficult to pin down because compensation was heavily commission-based and not consistently captured in government wage data, average salaries in 1969 ranged between $13,000 and $20,000, or $118,000 to $182,000 when adjusted for inflation. Top performers could earn much more, and regulations effectively guaranteed commissions.

Deregulation on May 1, 1975, known as “May Day,” eliminated fixed commissions and fundamentally changed the industry. Discount brokerages, online trading platforms, and algorithmic trading stripped individual brokers of their pricing power. But even with the new rules, many brokers continued to command impressive salaries into the 1980s, with an average salary of $79,000 in 1985 — or $242,000 in today’s dollars. Currently, the median salary for a securities, commodities, and financial services salesperson sits around $78,000 annually. Some brokers still earn commissions or performance-based bonuses on top of their base pay, but commissions are typically smaller and less central to earnings than they once were. While a small minority still earn very high incomes, the average stockbroker now makes a fraction of what the role once commanded.

5 Unbreakable Olympic Records

Jackie Joyner-Kersee hurdles, 1988
Credit: Gray Mortimore/Allsport/ Hulton Archive via Getty Images
Author Tony Dunnell

January 27, 2026

Every four years, the world watches as elite athletes push themselves to ever greater heights in search of Olympic gold — and, perhaps, even a new Olympic record. Some records, however, stand so far above the rest that they seem destined to endure forever. An “unbreakable” record, of course, is a little hard to prove, but some feats — such as the five below — are so exceptional that it seems unlikely they will be bested anytime soon.

Credit: Smiley N. Pool/ Houston Chronicle via Getty Images

Usain Bolt’s 100 Meters

During the 2012 London Games, Jamaican sprinter Usain Bolt set a new Olympic record time of 9.63 seconds in the 100-meter dash. The record has yet to be beaten at the Olympics and would have represented the absolute pinnacle of human speed if it weren’t for Bolt’s world record of 9.58 seconds, set at the 2009 IAAF World Championships in Berlin. (Bolt reached an astonishing 27.8 mph when he hit full stride.) 

Bolt is a towering figure, quite literally, among his rivals. The fastest sprinters tend to be short and compact compared with Bolt’s 6-foot-5 frame, which allowed him to complete a 100-meter race in around 41 steps, three to four fewer than his competitors. His perfect technique, peak competition form, and biomechanical uniqueness may never be seen again, making it unlikely that his record will be beaten in the foreseeable future. 

What My Life Would Have Cost in 1950

Grocery store cashier
Credit: H. Armstrong Roberts/ClassicStock/ Archive Photos via Getty Images
Author Bennett Kleinman

January 26, 2026

In 1950, the purchasing power of the U.S. dollar was more than 13 times greater than it is today, meaning your money went much further, at least when it came to certain expenses. For instance, the average cost of a brand-new Chevrolet sedan was just $1,450 that year, the equivalent of around $19,416 today when adjusted for inflation. The median price for a single-family home, meanwhile, was only $7,354, or around $98,474 in today’s money. (If only!)

That said, salaries were lower in the mid-20th century as well. The median salary was $3,135 (around $41,979 in today’s dollars) for white working individuals and $1,569 (around $21,009 today) for people of color — a discrepancy caused by the discriminatory hiring practices of the time. That’s compared to a nationwide median annual salary of $63,128 today.

But even accounting for the lower household income in 1950, the relative purchasing power was greater at the onset of the ’50s than it is today. For instance, it took about 2.5 years’ worth of paychecks for a person earning the average salary for white workers in 1950 to afford a new home, while the median cost of a new home today is nearly six times the average salary. 

It’s a refrain you hear a lot — life used to be much more affordable. Which got me wondering: What would my own lifestyle have cost if I lived in 1950? Would my monthly bills as a New York City resident be substantially easier to manage? 

To investigate, I took a look at the cost of housing, food, and even Yankees tickets in 1950 and input the values into inflation calculators from the U.S. Bureau of Labor Statistics as well as the Federal Reserve Bank of Minneapolis, taking the rough average of values from the start and end of the year 1950. Here’s what I found.
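The adjustment the calculators above perform reduces to a single ratio: multiply the historical price by today's consumer price index divided by the index for the historical year. A minimal sketch follows; the CPI values are rough, illustrative assumptions rather than the exact figures the article's calculators used (which averaged values from the start and end of 1950).

```python
def adjust_for_inflation(amount: float, cpi_then: float, cpi_now: float) -> float:
    """Convert a historical dollar amount into present-day dollars
    by scaling with the ratio of consumer price indexes."""
    return amount * (cpi_now / cpi_then)

CPI_1950 = 24.1    # approximate 1950 annual-average CPI-U (assumption)
CPI_TODAY = 320.0  # placeholder present-day CPI-U (assumption)

# Under these assumed index values, a $49 monthly rent in 1950
# scales to roughly $650 in today's dollars, in the same ballpark
# as the article's figure.
print(round(adjust_for_inflation(49, CPI_1950, CPI_TODAY)))
```

The same function handles every conversion in the section; only the historical amount changes.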

Credit: FPG/ Archive Photos via Getty Images 

Monthly Rent

According to housing data from the 1950 U.S. census, the average rent of an apartment in New York City that year was $49 per month (around $656 in today’s dollars). Unsurprisingly, rents were higher in Manhattan — the borough I live in — at $56 per month (equal to around $752 today). Rents on my exact block in the Murray Hill neighborhood were higher still, coming in at $88 per month (around $1,181 adjusted for inflation). 

Much to my chagrin, this is a far cry from what I’m paying monthly today. Let’s put it this way: While someone living in my building earning the median salary (for white workers) in 1950 would’ve put roughly a third of their annual pay toward rent, the percentage in my own case hovers closer to half.

It’s also worth noting that housing costs in 1950 varied substantially throughout Manhattan, depending on the neighborhood. Rents dipped as low as $14 per month (equal to around $192 today) in the city’s Two Bridges neighborhood — an area next to the Brooklyn Bridge that has no personal residences today. On the other end of the spectrum, the block between 58th and 59th streets along the East River was among the most expensive in the entire city, where average rents totaled $384 per month (around $5,144 adjusted). 

What Did People Keep in Their Medicine Cabinets in the 1950s?

Antique medicines and drugs
Credit: Pat Canova/ Alamy Stock Photo
Author Nicole Villeneuve

January 22, 2026

In the mid-20th century, the medicine cabinet was a fixture of many homes. Open one and you’d likely find a thermometer, a box of bandages, maybe a tin of aspirin — basics we still recognize today. But alongside those common essentials were remedies that didn’t just look different from modern products — they were also built on ideas modern medicine has long since abandoned.

These old-school treatments reflected the medical knowledge of the era, when the risks of certain heavy metals, narcotics, and chemicals weren’t fully understood. Here are six things people regularly stocked in their medicine cabinets in the 1950s that you rarely see anymore.

Credit: Emanuela/ Adobe Stock 

Mercurochrome

For many families in the 1950s, a flash of bright red wasn’t just a sign of a scrape or cut — it was also the remedy. Products such as Mercurochrome and Merthiolate were staples in medicine cabinets, used for disinfecting skinned knees and elbows or a nick from a kitchen knife. They came in glass bottles that showed off their unmistakable bright red-orange color. A streak of the liquid, applied to the skin with a glass dauber, meant your mishap was on the way to healing. 

Mercurochrome was first developed in the early 20th century, and as the name implies, it relied on mercury compounds to stop the spread of bacteria and help prevent infection. The dangers of mercury poisoning are well known today, but for much of the 20th century, mercury was widely used in medical treatments. Mercurochrome stayed on shelves (and as an add-on in products such as pretreated bandages) as late as 1998, when the U.S. Food and Drug Administration withdrew its approval of the product’s ingredients, and it was removed from the market.

What Exactly Is Feudalism?

Medieval lord receiving a grant of land
Credit: duncan1890/ DigitalVision Vectors via Getty Images
Author Bess Lovejoy

January 22, 2026

The concept of feudalism is probably a familiar one if you’ve encountered a medieval fantasy epic: Picture a stone castle overlooking fields of peasants tilling the soil while armored knights ride out at dawn. But what does the term actually mean, and how did feudalism work in practice? 

Feudalism is a term used to describe how power, land, and obligation were organized in much of medieval Europe. At its core, it refers to a system in which land was the main source of wealth and political authority, and control of that land was tied to personal relationships of loyalty and service.

In this arrangement, a monarch was considered the ultimate owner of all the land in their kingdom. The ruler granted large estates, called fiefdoms, to nobles in exchange for allegiance and military support. Those nobles could then distribute portions of their land to lesser nobles (such as knights), creating a layered hierarchy of obligation known as vassalage.

At the bottom of this structure were peasants and serfs, who worked the land and provided labor or goods in return for protection and the right to live on the estate. Power was exercised largely at the local level, rather than through a strong centralized government. For instance, a king might grant land to a duke, the duke to a knight, and the knight would then draw income and labor from the peasants who worked that land, creating a chain of obligation that ran downward while authority flowed upward. 

Credit: Hulton Archive via Getty Images 

It Started With the Fall of Rome

Europe’s feudal structure developed gradually in the centuries after the collapse of the Western Roman Empire around 476 CE. As Roman authority weakened, Western Europe experienced political fragmentation and repeated invasions by Germanic peoples, whose social structures emphasized personal loyalty to a leader rather than obedience to the laws of a centralized state.

Over time, Roman legal traditions blended with Germanic customs centered on allegiance and service. In an era marked by resource insecurity, warfare, and limited state power, people increasingly turned to powerful local lords for protection. In return, they offered these lords labor, military service, or political loyalty, reinforcing the link between landholding and authority.
