We recognize it instantly: two rounded lobes meeting at a point, the universal symbol of love. The heart shape is found everywhere — on greeting cards, jewelry, bumper stickers, and emoji keyboards. It even stars in tourism campaigns such as “I ❤ NY” and drives the $27.5 billion Valentine’s Day industry. But while the symbol represents deep emotion, it looks nothing like an actual human heart. So where did the symbol come from? The answer lies in a long history shaped by philosophy, nature, and art.
The familiar heart shape we recognize today wasn’t inspired by the anatomy of the human heart — it evolved from ancient beliefs about what the heart represented. Long before modern science defined the purpose of the heart, cultures across the world viewed the organ as the center of emotion, thought, and even the soul. Ancient people had little understanding of the importance of the brain, but they could feel the heart beating rapidly when emotions were heightened and understood the organ’s vital connection to sustaining life.
In ancient Egypt, the heart was believed to hold a person’s essence — including memory, intellect, and morality. Embalmers often left the heart inside the body or preserved it with special care, considering it far more important than the brain. Later, in Greek philosophy, Aristotle described the heart as the source of sensation and life itself. He believed it was the first organ to form in an embryo and the center of human emotion. The brain, in his view, existed only to cool the heart’s fiery temperament.
Five centuries after Aristotle, the Roman-era physician and philosopher Galen brought a more anatomical perspective to the discussion. He believed the heart was a pine cone-shaped three-chambered organ that produced the body’s vital spirit — a life-sustaining force carried through the arteries. Many of his ideas were inaccurate, inspired by philosophical tradition and limited scientific observation. With little access to human dissection, early artists and illustrators relied on metaphor, creating stylized, symmetrical shapes that reflected the heart’s symbolic role in the soul rather than its actual anatomy.
In geometric terms, the heart shape is known as a cardioid, and forms resembling it can be found throughout the natural world. In fact, one of the most compelling theories about the origin of the heart symbol points to ancient agriculture and the symbolism of plants, rather than human anatomy.
In the ancient city of Cyrene (modern-day Libya), a species of giant fennel known as silphium was so highly valued that it drove the economy of the region. Its uses were wide-ranging: Silphium was a prized seasoning, a powerful medicine, and, most notably, a natural contraceptive. It was so essential to life and love in the ancient world that it was stamped onto Cyrenian coins.
The plant’s seedpod, in particular, bore a strong resemblance to the heart symbol we recognize today — a smooth, symmetrical form tapering to a point. Because silphium was closely associated with love, sexuality, and fertility, some historians believe the shape of its seedpod may have directly influenced the development of the symbolic heart.
Other plants have also echoed the heart’s curves. Ivy leaves, long associated with fidelity and eternal love in Greek and Roman symbolism, display a heartlike silhouette. The fig leaf, tied to sensuality in ancient art and myth, and even water lily leaves, often shaped like rounded hearts, may have subtly reinforced the motif in art and decoration.
By the Middle Ages, the symbolic heart began to appear more frequently in religious art and literature in Europe, often representing divine or spiritual love. In Christian iconography, saints such as St. Augustine were depicted holding or pointing to flaming hearts, symbolizing their devotion to God. The Sacred Heart of Jesus, which emerged in medieval devotional art, emphasized themes of love, sacrifice, and compassion — establishing the heart as a spiritual symbol.
At the same time, more secular interpretations of the heart were gaining popularity. In illuminated manuscripts from the 13th and 14th centuries, lovers were sometimes shown exchanging hearts — both literally and metaphorically. A notable example appears in the 14th-century French manuscript Roman de la Poire (“Romance of the Pear”), in which a man offers his heart, shaped much like the modern symbol, to his beloved. These early heart illustrations were often red, symmetrical, and stylized — but they didn’t always follow today’s orientation. Some were shown with the point facing upward, while others were more angular or leaflike.
The Printing Press Helped Solidify the Symbol
The invention of the printing press in the 15th century helped standardize and spread the heart symbol across Europe. As printed materials became more widely available, so did visual motifs — and the heart, already associated with courtly love and devotion in art, literature, and music, was a popular choice. Early printers used woodcuts to embellish books and pamphlets with heart shapes in religious texts, emblems, and romantic illustrations. Around the same time, in 1498, Leonardo da Vinci created one of the first anatomically accurate drawings of the human heart — but his scientific rendering had little influence on the romanticized symbol already taking hold in popular culture.
As Valentine’s Day evolved into a romantic holiday in England and France, printers began mass-producing cards and tokens adorned with cupids, flowers, and heart motifs. In the 18th and 19th centuries, the heart was everywhere — on embroidery, porcelain, stationery, and jewelry. In the Victorian era, heart-shaped lockets and lace-trimmed valentines became cherished expressions of sentiment. No longer tied to anatomy or philosophy, the heart symbol fully transformed into a visual shorthand for love and emotional connection — a role it still plays today in greeting cards and text messages around the world.
Most of us, at some point, have probably seen a scattering of strange coins — perhaps at a yard sale or tucked away in a dusty drawer — and asked ourselves, “I wonder if those are worth anything?” For many of us, that’s about as close as we get to the fascinating world of coin collecting, or, to give it its technical name, numismatics. But there are plenty of proper numismatists out there. According to data from CivicScience, 38% of U.S. adults have collected coins at some point during their lives.
It’s easy to understand the appeal of collecting rare coins. For one, they offer a glimpse into the past. They can also be worth an awful lot of money. In numismatics, a decades-old manufacturing mistake can make you rich and a simple penny might pay off your mortgage. Part of the thrill of coin collecting is knowing that unexpected treasures can be hiding in plain sight, and that once-simple pocket change can become a highly collectible artifact worth thousands — or even millions — of dollars.
Here are 10 facts from the world of numismatics, from the oldest coin ever discovered to the most expensive ever sold.
The Lydian Lion is widely considered the world’s oldest coin. Minted around 600 BCE in the kingdom of Lydia (modern-day Turkey), these coins were made of electrum, a mixture of gold and silver. The creation of the Lydian Lion marked a truly significant milestone in economic history by establishing the concept of money as we know it today. A Lydian Lion coin is worth an estimated $2.5 million today due to its historical significance and rarity.
The Chain Cent Was the First Circulating Coin From the U.S. Mint
The chain cent was the first mass-produced, regular-issue coin to be struck at the fledgling U.S. Mint. Named for the interlocking chain design on its reverse, the chain cent was minted from February 27 to March 12, 1793, with only around 36,103 produced in total. Today, likely no more than 1,500 to 2,000 chain cents exist, with maybe only 10 or so in mint condition.
The First Official Silver Dollar Was Struck in 1794
The first silver dollar is known as the “Flowing Hair” dollar, as one side features a bust of Lady Liberty with noticeably flowing hair. It was produced by the U.S. Mint in 1794, and only 1,758 coins were struck in its first year. Far fewer exist today, making Flowing Hair dollars produced in 1794 rare and highly prized, even in imperfect condition. One of these rare coins went up for auction in 2013 and sold for $10 million — a world record at the time.
A Rare Gold Coin Proved a “Fake” Roman Emperor Was Real
In 1713, a collection of eight gold coins, of five different design types, was found in Transylvania (in modern-day Romania). One of the coins featured an image of an unknown Roman emperor with the name Sponsian. For hundreds of years, the coins, and therefore the emperor, were believed to be fake. But in 2022, scientists at University College London found evidence that the coin had been in circulation roughly 2,000 years ago. In doing so, they showed that Sponsian — whose name and portrait appear on the incredibly rare coin — was indeed a Roman emperor back in the third century.
A pattern coin is a prototype or proof of concept made to evaluate the design, composition, and technical aspects of a coin before it goes into mass production. They are normally produced in extremely limited quantities, potentially making them highly collectible. A prime example is the 1856 Flying Eagle cent. Only around 1,500 to 2,000 examples were ever minted, primarily to distribute to members of Congress and influential figures to gain support for the new coin design. As such, the rare 1856 Flying Eagle cent is highly collectible. In 2024, one was sold at auction for $312,000.
A Wartime Mistake Created a Rare Copper Penny
Due to copper shortages during World War II, most 1943 pennies were made from zinc-coated steel. But the U.S. Mint accidentally made a small batch of copper pennies, of which only around 40 are believed to exist today. That, of course, makes these 1943 copper pennies highly valuable, with one selling for $82,500 in 1996. Collector beware, however: There are plenty of fake 1943 copper pennies out there. If you think you’ve found one, first test it with a magnet. Genuine 1943 copper pennies are not magnetic, so if a coin sticks, it is made of steel and thus is not authentic.
A coin die is a specialized metal stamp used to create coins by pressing an image into a blank metal disc. A double die coin is produced when the coin die is not manufactured correctly. This error causes parts of the finished coin’s design to be slightly misaligned, with elements doubled in overlapping positions. In the world of numismatics, such errors can increase a coin’s value, and in many cases the more obvious and distinct the error, the more the coin will be worth. In 2023, a rare 1958 double die penny sold for $1.136 million.
Unpopular Initials Turned a Lincoln Cent Into a Collector’s Item
In 1909, the U.S. Mint in San Francisco began production of the Lincoln cent, the first everyday U.S. coin to feature an actual person. Controversy soon arose, however, as people objected to the designer’s inclusion of his own initials, VDB (for Victor David Brenner), on the coin, regarding them as too prominent and too self-promotional. Three days after the coin’s release, production was stopped and the initials removed. Only 484,000 examples of the 1909 VDB-initialed Lincoln cent were struck, making it quite rare — especially today, more than a century later.
A Missing “S” Added Half a Million to the Value of a 1975 Dime
In 1975, the San Francisco Mint produced more than 2.8 million proof sets. Three years later, eagle-eyed collectors realized that, due to a manufacturing error, two of the dimes in those sets were missing the “S” mintmark for San Francisco, instantly making them two of the rarest and most sought-after coins in the world. Both coins have since been sold at auction for around half a million dollars.
The World’s Most Expensive Coin Sold for $18.9 Million
In the midst of the Great Depression, the U.S. Mint in Philadelphia produced a $20 gold coin known as the 1933 Saint-Gaudens Double Eagle. But nearly all of the 445,500 Double Eagles struck that year were melted down before they could circulate, thanks to Franklin D. Roosevelt’s Executive Order 6102, which prohibited the private ownership of gold coins. Only 13 of these coins exist today, and they are extremely valuable. In 2021, one 1933 Double Eagle sold at auction for a whopping $18.9 million, making it the world’s most expensive coin.
In February 1836, an outnumbered band of Texan independence fighters faced a Mexican army in what would become one of the most storied conflicts in American history: the Battle of the Alamo. Although they lost the battle, the Texan fighters’ final stand became a historic symbol of resistance and freedom, immortalized in the famous battle cry, “Remember the Alamo!” Here’s a look back at why this fascinating battle was important — militarily, politically, and symbolically.
After winning independence from Spain in 1821, Mexico allowed pioneers from the expanding United States to settle in the northern Tejas region of Mexico that eventually became the state of Texas. Over the next decade, these “Texians,” as they were known at the time, enjoyed a relative degree of autonomy far from Mexico’s capital.
However, as the number of settlers grew, the Mexican government responded by prohibiting U.S. immigration and imposing tariffs on the Texas settlers, causing tensions to escalate. This eventually boiled over into armed clashes between the settlers and the Mexican government with the Battle of Velasco in 1832 — a prelude to the brewing Texas Revolution.
Against this backdrop, the Texas settlers believed that Antonio López de Santa Anna — a celebrated general vying for the Mexican presidency — backed their continued autonomy due to his Federalist campaign platform, which supported a division between federal and local governance. However, upon winning the presidency in 1833, Santa Anna did an about-face, abolishing the Mexican Constitution of 1824, which had enshrined the Federalist system, and seeking to centralize power as a military dictator. This was the final straw: On October 2, 1835, the Texas Revolution began in earnest with the Battle of Gonzales. The revolutionaries won their first fight, but the quest for independence was just beginning — and the stage for the Battle of the Alamo was set.
Named after the Spanish word for the cottonwood trees that surrounded it, the Alamo is a former Spanish mission used as a military fort starting in the early 19th century. In December 1835, a group of Texan volunteers captured the Alamo from Mexican forces. Its location in the town of San Antonio de Bexar (now San Antonio, Texas) was of strategic importance for supply lines and communications, making it one of the first frontier outposts to encounter the advancing Mexican army.
On February 23, 1836, General Santa Anna arrived at the Alamo with an army, intending to take back the fort and put down the revolution. Though estimates of the Mexican army’s size vary between 1,800 and 6,000 people, what’s not in dispute is that the Alamo’s defenders were greatly outnumbered with fewer than 200 fighters. Santa Anna’s demands for unconditional surrender were met with a cannon shot from the Alamo — and thus began a 13-day siege.
The Texan volunteers, led by Colonel James Bowie, a famous adventurer and knife fighter, and 26-year-old Lieutenant Colonel William B. Travis, came from all walks of life. In addition to early American settlers — including Davy Crockett, the legendary frontiersman and Tennessee congressman — their number included native-born San Antonians of Mexican heritage and European immigrants.
On February 24, surrounded by enemy forces, Travis penned one of U.S. history’s most famous letters. Addressed “To the People of Texas & All Americans in the World,” the letter was a passionate call for aid from supporters of the revolution, reprinted in newspapers all around the United States and even Europe. Seen by many as emblematic of the courage of the fighters, Travis’ missive ended with the battle’s only possible outcome: “victory or death.”
On March 6, Mexican forces breached the fort and overpowered the defenders. On Santa Anna’s orders to take no prisoners, almost all of the Alamo’s defenders were killed and their remains burned, including Bowie, Travis, and Crockett. The Mexican forces also suffered significant losses, estimated between 600 and 1,600. In the end, the outnumbered defenders held off the Mexican army for 13 days, buying time for Texas General Sam Houston to gather forces and prepare for future victories in the Texas Revolution.
Though the Battle of the Alamo was a military loss for the revolutionaries, it became a powerful symbol of resistance. When Texas forces led by Houston ultimately defeated Santa Anna’s army at the Battle of San Jacinto on April 21, 1836, ending the revolution and establishing Texas as an independent republic, the Texan fighters shouted the legendary rallying cry, “Remember the Alamo!” The Republic of Texas existed as an independent country for nearly a decade before being annexed by the United States in 1845 — an event that sparked the Mexican-American War.
The story of the Alamo has been invoked by U.S. leaders throughout history — including Franklin D. Roosevelt, Lyndon B. Johnson, and George W. Bush — to inspire courage, patriotism, and sacrifice. It was also immortalized in the 1960 film The Alamo starring John Wayne as Davy Crockett. Yet in recent years, some have argued that the traditional story of the Alamo omits darker motivations behind the Texas revolution: namely, the Texan settlers’ desire to continue using slave labor to cultivate cotton, which Mexico aimed to abolish. While the role of slavery in the story of the Alamo and the Texas Revolution continues to be a source of heated debate, the famous battle cry “Remember the Alamo!” endures as a call to stand firm even against overwhelming odds.
The apparent one-hit wonder of the U.S. Founding Fathers, John Hancock is largely known today solely for inscribing the first and largest signature at the bottom of the Declaration of Independence — an act that resulted in his name becoming a synonym for the legally identifying scribbles we apply to checks and other important forms today.
It may seem curious that Hancock’s name stands front and center among the signatures on this most cherished document of American history, ahead of far more famous founders such as Thomas Jefferson, John Adams, or Benjamin Franklin. Yet Hancock was very much a leading man of his time. His reputation rendered him worthy of the handwritten flourish that placed him first among the luminaries who called for independence on July 4, 1776, even if his memory has all but vanished beyond the contours of that famous signature.
As described in Brooke Barbier’s biography King Hancock, John Hancock was born on January 23, 1737, in Braintree, Massachusetts, the son and grandson of Harvard College-groomed ministers (also both named John). Hancock likely would have headed down a similar career path, but his life took a sharp turn after his father died in 1744, and he was sent to live with his wealthy uncle, Thomas, in the large port city of Boston.
A largely mediocre student — although he proved adept at penmanship — Hancock went to work for his uncle’s import-export business after graduating from Harvard in 1754. He became a partner after a stint overseas in London in the early 1760s, and then inherited the firm following Thomas’ death in 1764. As befitting a man who then ranked among the wealthiest in Boston, Hancock was named a city selectman in 1765, before earning election to the Massachusetts House of Representatives the following year.
His rising social and political clout came at a time of increasing tension with the British Parliament, as many Bostonians grew frustrated by the taxes imposed by the Sugar Act of 1764 and the Stamp Act of 1765. A merchant with strong business ties to London, Hancock did not share the rebellious viewpoints harbored by local firebrands such as Samuel Adams and James Otis. Nevertheless, he joined a network of merchants who agreed to stop importing British goods as long as the Stamp Act remained in effect, an endeavor that proved successful with the act’s repeal in 1766.
Hancock’s shift toward becoming a resistance leader continued following a new round of import duties imposed by the Townshend Acts in 1767. After drawing suspicions of smuggling from the local customs board, Hancock was celebrated for ejecting a pair of prying customs officials from one of his ships. Another of his vessels was seized two months later, and he was arrested in the fall of 1768 for smuggling, before getting the charges dropped the following spring with the help of lawyer and childhood friend John Adams.
Meanwhile, Hancock was among the group of assemblymen who helped draft the widely distributed “Circular Letter” that argued against the Townshend Acts, and he was also among the majority that voted to reject British demands to retract the letter. Two years later, following the violence of the Boston Massacre on March 5, 1770, Hancock chaired a committee that demanded the removal of British soldiers from Boston, and he agreed to oversee their safe and orderly withdrawal via the harbor.
Boston’s Leading Citizen
Although he took a step back from politics following the partial repeal of the Townshend Acts in 1770, Hancock again found himself at the center of a growing storm with the passage of the Tea Act in May 1773. After moderating committees that sought to block the latest unpopular tax, he endorsed — and may even have been a primary instigator of — the December 1773 destruction of cargo that became known as the Boston Tea Party.
After Parliament responded in early 1774 with the punitive Coercive Acts, which led to the shutdown of the port of Boston and elimination of the state assembly, the assembly members reorganized in October as the Provincial Congress, the first autonomous government in the colonies. Hancock was subsequently elected its president, making him the most powerful colonist in Massachusetts.
This also made him a target of increasingly agitated British authorities. As part of his famous midnight ride on April 18, 1775, Paul Revere made a beeline to Lexington, Massachusetts, which was hosting the Provincial Congress, to warn Hancock and Sam Adams that they were in danger of being captured. The two men slipped out of town, hours before the opening shots of the American Revolution.
A few weeks later, Hancock led a procession of carriages to Philadelphia as a delegate to the Second Continental Congress, receiving a rousing reception from colonists along the way. He was also warmly welcomed by the other delegates; after Continental Congress President Peyton Randolph was recalled to the Virginia Assembly, Hancock was deemed a suitable replacement by both the conservatives who valued his business acumen and the radicals who admired his track record of resistance. According to John Adams, Hancock also sought the position of commander in chief of the Continental Army, and was distressed when the job instead went to George Washington.
As president, Hancock mainly moderated congressional meetings and occasionally joined in the debates. His moderate stances infuriated John Adams and Samuel Adams, who pushed for independence from Britain, although Hancock found it harder to advocate for compromise after Parliament passed the Prohibitory Act in December 1775, cutting off all British trade with the colonies. By the summer of 1776, Hancock was on board with the momentum that was carrying the delegates toward declaring independence.
As president of the Continental Congress, Hancock was the first to sign the Declaration of Independence on August 2, 1776. According to popular legend, after he applied his famous signature to the engrossed copy of the document, he said something along the lines of, "Let King George read that without spectacles!" While that story is apocryphal, it is true that Hancock and Continental Congress Secretary Charles Thomson were the only two people whose names appeared on the original printed version of the document, initially leaving them the only two people liable for treason had the quest for independence been crushed by the British. Hancock's large and bold signature was likely deliberate — a signal of his courage and resolve in affixing his name to the rebellious document.
Although Hancock gamely pored through the voluminous congressional paperwork and worked to raise money and troops for the Revolution, he eventually tired of the long hours required of the job, worn down by his recurring battles with gout, and he requested a leave of absence in October 1777. He briefly returned to the Continental Congress the following summer, but left again after it became clear that the new president, Henry Laurens of South Carolina, had no intention of relinquishing the job.
Massachusetts Governor and U.S. Constitution Proponent
In 1780, Hancock easily defeated fellow merchant James Bowdoin to become the first nonroyal governor of Massachusetts. A beloved figure throughout the state for his years of service and acts of charity, the governor nevertheless saw his popularity tested as citizens struggled to pay the high taxes levied to settle the Revolutionary War debts. A sympathetic Hancock declined to crack down on tax collection and dragged his feet on signing bills that would add to the financial burden of the people. With the stresses of the job again taking a toll on his health, he stepped down from the position in 1785.
After Bowdoin took over as governor in 1786 and the discord erupted into Shays’ Rebellion, Hancock refused to join the group of wealthy merchants who funded an army to put down the uprising. Instead, after being reelected governor in 1787, he accepted a salary cut and pardoned all but two of the most egregious participants in the rebellion.
Hancock enjoyed one more moment in the limelight as the states weighed in on whether to approve the U.S. Constitution. Named president of the Massachusetts Ratifying Convention in January 1788, Hancock missed the first three weeks due to his poor health before making a dramatic entrance, carried by servants, to deliver a speech in favor of ratifying the Constitution with amendments. He delivered another stirring speech a few days later, urging unity no matter the outcome, before a final vote gave the pro-ratification side a narrow victory.
Said to have been interested in either the presidency or the vice presidency, Hancock lost out to Washington for the former role and to fellow Massachusettsan John Adams for the latter. Regardless, he remained an esteemed figure in his home state, and even found common ground with ally-turned-critic-turned-ally Samuel Adams over their shared distrust of federal government overreach. Holding office until his death on October 8, 1793, Hancock was honored with a funeral procession that drew a massive turnout of mourners and shut down his adopted home city of Boston for the day.
Ultimately, Hancock never logged the military service that made Washington a national hero and the first president, and he also lacked the intellectual gifts and ideological passions that enabled John Adams and Jefferson to succeed Washington and cement their legacies. Nevertheless, he was an accomplished businessman and battle-tested leader who left his fingerprints all over Revolution-era Massachusetts, making it fitting that he's uniquely remembered for another deft touch of finger work.
What’s the Real Story of Ben Franklin’s Kite Experiment?
It’s one of the most well-known moments in American history: Ben Franklin attaching a key to a kite during a thunderstorm to demonstrate the electrical nature of lightning. Yet, like a centuries-long game of telephone, the details of the celebrated 1752 experiment have been exaggerated or misinterpreted through countless retellings, creating a popular myth that may be more fiction than fact.
Look no further than the oil-on-canvas work “Benjamin Franklin Drawing Electricity From the Sky,” painted by Benjamin West in the early 19th century. In the painting, a confident Franklin raises his fist to receive a charge from a key suspended by a kite string, hair and cape billowing around him, as a team of cherubs wrestles with the string and another pair engages with some sort of electrical apparatus in the background.
This work of art encapsulates much of the myth surrounding the famous experiment. The dramatic portrayal clearly isn’t meant to be taken as a historically accurate re-creation, and West took several liberties with his depiction of the event. For one thing, Franklin was actually assisted in this endeavor by his young adult son William, not a team of cherubs. The inventor was also a relatively spry 46 at the time, not yet the wizened elder seen in the painting. And he likely undertook his experiment from the shelter of a shed, as opposed to being exposed to the elements of a thunderstorm.
What’s more, the kite and key story, retold to countless schoolchildren over the past two centuries and often repackaged as Franklin’s “discovery” of electricity, may not have taken place at all. While that’s certainly a more extreme interpretation of what happened, it also underscores the scarcity of verified details about the most famous experiment from one of the most famous figures in American history. So what really happened?
As told in Walter Isaacson’s Benjamin Franklin: An American Life, the titular polymath, then best known as a Philadelphia printer, turned his considerable intellectual gifts toward exploring the little-understood properties of electricity in the 1740s. Conducting an array of experiments with a Leyden jar, a simple capacitor fitted with a cork and wire, Franklin formed what became the single-fluid theory of electricity with his observation of a flow between “positive” and “negative” bodies with an excess or absence of the fluid.
Franklin also became intrigued by the similarities between electrical sparks and lightning, and devised ways in which to demonstrate their shared nature. In a 1749 collection of notes, later relayed in a 1750 letter to Franklin’s London business partner Peter Collinson, Franklin described how such a demonstration could be administered: “On the top of some high tower or steeple, place a kind of sentry-box, big enough to contain a man and an electrical stand. From the middle of the stand, let an iron rod rise and pass bending out of the door, and then upright 20 or 30 feet, pointed very sharp at the end. If the electrical stand be kept clean and dry, a man standing on it when such clouds are passing low, might be electrified and afford sparks, the rod drawing fire to him from a cloud.”
The European scientific community began seriously considering Franklin’s work around this time, particularly after a series of his letters and notes were published in the 1751 pamphlet Experiments and Observations on Electricity. In May 1752, French naturalist Thomas-François Dalibard followed Franklin’s proposed instructions for drawing sparks from a storm cloud, his success inspiring colleagues to produce their own demonstrations that proved the American’s theory that lightning is a form of electrical energy.
The Kite Experiment
Meanwhile, Franklin was impatiently waiting for the steeple of Philadelphia's Christ Church to be completed so he could use its lofty height for his own tests. Apparently unaware of the successes of the "Philadelphia experiments" in France, he eventually elected to forgo the steeple and embark on a modified version of his initial plans.
According to contemporary accounts of the event, Franklin and his son William trekked out to a field as a thunderstorm approached in June 1752. They brought a homemade silk kite with a sharp metal wire affixed to the top, a hemp string connected to the bottom, and a metal key attached to the hemp line. Franklin also tied a silk string to the hemp one, surmising that the dry silk would be insulated from the "electric fire" running through the wet hemp line and therefore be safe for holding.
With the kite aloft, Franklin waited for a telltale sign of electrical activity; none came until he noticed a few loose threads of the hemp string standing erect. Extending his hand toward the key, he felt the anticipated spark, and held up his trusty Leyden jar to collect more of the charge for later investigations.
Curiously, Franklin provided no immediate documentation of the experience. Even after learning of the French demonstrations sometime that summer, and reprinting a letter about them in a late August 1752 edition of his Pennsylvania Gazette, Franklin neglected to mention that he, too, had proved the electrical properties of lightning.
The first public notice of Franklin's efforts came with a letter he wrote to Collinson that October, which was reprinted in the Pennsylvania Gazette that month and later read before the Royal Society of London. The letter describes how a version of the electrical experiments in France "succeeded in Philadelphia," without mentioning that he was the one who pulled it off, and proceeds to list the steps for following through with his modified plan.
The only other contemporary account of Franklin and his kite came with the 1767 publication of scientist Joseph Priestley's The History and Present State of Electricity. Priestley's retelling, derived from his recent personal introduction to Franklin, provides most of the known plot points of the story: that it took place in June 1752, how the charged threads of the string hovered in the air, and even how the inventor was anxious about being ridiculed and as such told no one besides William of his intentions.
Even with the details furnished by Priestley, there remain a few important unresolved questions surrounding this event. For example, we don't know exactly when Franklin sent his kite into the turbulent sky. The date is often cited as June 10, 1752, although other dates in June, and even other months, have been floated. It's also unclear why Franklin himself said so little about the subject, and why his confirmation of the experiment's success failed to place him in the middle of it.
Given the lack of clarity, it was perhaps inevitable that someone would push the theory that it was all a hoax, as suggested by author Tom Tucker with his 2003 book Bolt of Fate. Tucker is in the clear minority in this opinion, however. Other historians have pointed out that Franklin's kite experiment was never disputed at the time, and that he had little reason to lie about his actions after acknowledging that the French beat him to the punch.
Even if the key and kite anecdote can't be proved beyond the existing evidence, there's no arguing that it was Franklin's ideas that opened the pathway to our modern understanding of electricity, and led to the practical — and lifesaving — invention of the lightning rod. In the meantime, the mythical element exemplified by West's painting nicely fills in the holes of a celebrated American origin story, illustrating the ingenuity of one of the cleverest Founding Fathers.
Does the Thumbs-Up Sign Come From Gladiator Fights?
The thumbs-up sign is one of the most instantly and universally recognized symbols of approval in modern Western culture. This ubiquitous gesture appears in everyday acknowledgments between friends and colleagues, in emojis across social media, and in numerous TV shows and movies, with famous fictional proponents including “The Fonz,” Borat, and Arnold Schwarzenegger’s T-800 in Terminator 2: Judgment Day.
Despite this popularity, a fair amount of confusion and misinformation surrounds the origin of the thumbs-up sign. Most notably, many people believe the gesture has its origins in ancient Roman gladiator fights, where spectators supposedly used a thumbs-up to spare defeated fighters and a thumbs-down to condemn them to death. This narrative has been reinforced by popular culture — particularly the 2000 Academy Award-winning movie Gladiator. The historical reality, however, is not nearly as clear cut as Hollywood would have us believe.
The Problem With Pollice Verso
It’s true that there’s a link between thumb gestures and gladiator fights in ancient Rome, but we don’t know exactly how the gesture was used. At the heart of the historical debate is the Latin phrase pollice verso, meaning “with a turned thumb.” This phrase appears in ancient Roman literature, including in connection with gladiatorial contests, but its exact meaning remains unclear to historians. We don’t know whether pollice verso referred to a thumb being turned up, turned down, held horizontally, or concealed inside the hand to indicate positive or negative opinions — or, in the arena, to signal whether a gladiator was spared or killed.
The ambiguity of ancient sources has allowed later interpreters to project their own meaning onto the gesture. The most significant example of this in the modern era is Jean-Léon Gérôme’s 1872 painting “Pollice Verso.” The painting brilliantly captures the power and drama of a gladiatorial contest, with one gladiator standing above his fallen opponent, who, lying stricken on the ground, raises two fingers to plead for mercy. In the stands of the Colosseum, Roman spectators, including an animated group of vestal virgins, signal death for the defeated gladiator with a thumbs-down gesture. The painting greatly popularized the idea that a thumbs-up signaled life, and a thumbs-down signaled death for a defeated gladiator.
It didn’t take long, however, for scholars to highlight the painting’s lack of a solid historical foundation in its portrayal of the gladiatorial contest. In 1879, a 26-page pamphlet titled Pollice Verso: To the Lovers of Truth in Classic Art, This Is Most Respectfully Addressed presented evidence against the historical accuracy of the thumb gestures in Gérôme’s painting.
Debate surrounding the gesture did not end there — to this day, historians argue about the meaning of the thumb sign in gladiatorial contests. In recent years, for example, Anthony Corbeill, a professor of classics at the University of Virginia, told Time that the commonly held belief about the thumbs-up gesture is incorrect. According to Corbeill, “Sparing [a gladiator] is pressing the thumb to the top of the fist and death is a thumbs-up. In other words, it’s the opposite of what we think.”
But widely held notions can be hard to shift, especially when Hollywood puts its full weight behind something. Ridley Scott’s hugely successful historical epic Gladiator cemented in the public consciousness the idea that the thumbs-up we use today came from ancient Rome. Gladiator was itself inspired by Gérôme’s “Pollice Verso” painting, which was shown to the director to convince him to make the movie. Scott in turn took some ideas from the painting, including the use of the thumb gesture, despite a lack of historical evidence for it. In the same way that “Pollice Verso” shaped public opinion regarding the thumbs-up sign in the 19th century, Gladiator reinforced the idea through the power of the modern-day blockbuster.
A Wartime Gesture
Despite the persistent Roman gladiator narrative, the modern positive meaning of the thumbs-up gesture likely developed through different cultural channels over many centuries. Though the exact origins are unknown, historians believe the gesture gained widespread recognition during the 20th century, particularly in English-speaking countries. And the reason for its spread might be found in war.
According to the Oxford English Dictionary, one of the first recorded instances of the thumbs-up gesture used to indicate approval occurs in the 1917 book Over the Top by Arthur Guy Empey, an American who served in the British army in World War I. In the book, he explains the thumbs-up sign as being “Tommy’s expression which means ‘everything is fine with me’” (“Tommy” being a slang term for a British soldier).
Another theory suggests the gesture was used by pilots to communicate with the person starting a propeller in early preflight checks. The gesture then became common among fighter pilots during World War II, who used it to indicate to the crew that everything was good and they were ready to take off. So, while the modern thumbs-up sign might not have come from the gladiatorial arena, it was likely popularized during the two largest conflicts of the modern age.
A look through the life journeys of all 45 people who have served as U.S. president reveals a general blueprint for ascending to the highest office in the land. Many spent a sizable chunk of their early careers in the military and/or as lawyers, before climbing the political ladder with increasingly prominent roles that garnered the national attention and support needed to make a successful run at the White House.
Of course, there is no one set path that leads to the presidency. Many future commanders in chief navigated unusual first jobs or failed ventures along the way. Here are nine early roles held by people who eventually became known for calling the shots from the Oval Office.
Abraham Lincoln held down an array of jobs during his young adult years in the town of New Salem, Illinois, although the one that often stands out to contemporary eyes is his stint as a tavern owner. To be specific, the venue Lincoln co-owned with his militia colleague William F. Berry was a “grocery,” a store that sold alcoholic beverages to be consumed on the premises. Because a license was needed for such transactions, Lincoln is sometimes described as the only licensed bartender to become president. Unfortunately, Berry supposedly spent too much time indulging in the liquor stockpile, and Lincoln sold his share of the store to his co-owner after less than a year. But the business relationship came back to haunt the future president when Berry died two years later, leaving Honest Abe responsible for the grocery’s debts.
Although he became Lincoln's most celebrated Civil War general and springboarded from that role to the White House, Ulysses S. Grant largely struggled in most of his other professional endeavors. One such endeavor, as described in Kate Havelin's biography Ulysses S. Grant, was his attempt to farm the 60 acres of land in St. Louis, Missouri, that had been gifted to him by his father-in-law. The appropriately named "Hardscrabble" farm initially produced little of value beyond its trees, which Grant chopped down and sold as firewood on the streets of St. Louis as his family's main source of income. By the time Hardscrabble finally began yielding quality crops, Grant was unable to capitalize due to an economic downturn and his own health problems, forcing him to sell the farm by the late 1850s.
Calvin Coolidge: Toy Maker
Unlike the previous presidents on this list, a 14-year-old Calvin Coolidge didn't need to support himself or a family while attending Black River Academy in Ludlow, Vermont. However, his father insisted on the fiscal lessons to be learned from employment, according to Hendrik Booraem's biography The Provincial, and so the elder Coolidge set up his son with a weekend job at the Ludlow Toy Manufacturing Company in 1886. It's unclear what specific duties the 30th president performed, but they likely included tasks such as sawing, gluing, and painting the wares. In his autobiography, Coolidge noted simply that he "came to know how toys and baby wagons were made" during his brief tenure at the factory.
Harry S. Truman: Haberdasher
Upon returning home from World War I, Harry S. Truman teamed up with his Army buddy Eddie Jacobson to open a men's clothing store on the ground floor of the Glennon Hotel in Kansas City, Missouri. A popular meeting place for other veterans, the Truman & Jacobson Haberdashery initially enjoyed brisk business, with Truman handling sales and bookkeeping and Jacobson overseeing inventory. However, the good fortune dissipated with the onset of a recession in the early 1920s, resulting in the store's closure in September 1922. Similar to Lincoln, Truman was saddled with the financial burden after his former partner declared bankruptcy, finally settling his debts in 1935.
Richard Nixon: Frozen Orange Juice Executive
After graduating from Duke Law School in 1937, Richard Nixon landed a job with the Wingert and Bewley law firm in his hometown of Whittier, California. However, the legal position apparently didn't quite satisfy his professional ambitions, and Nixon soon added the role of president of a frozen orange juice venture called the Citra-Frost Company to his résumé. According to eyewitness accounts, Nixon spent much of his free time diligently cutting and squeezing oranges in an attempt to make his side business a success. However, he couldn't quite solve the problem of proper packaging in a time before the development of frozen juice concentrate, and Citra-Frost went bankrupt after an attempt at storing the juice in plastic bags blew up a refrigerated boxcar.
As told in Volume 1 of Robert Dallek's Lone Star Rising biographical series, a 17-year-old Lyndon B. Johnson spent part of 1925 aiding his lawyer cousin in San Bernardino, California, with the hope of getting a leg up on a fledgling legal career. When it became apparent that the cousin couldn't provide the professional assistance he sought, Johnson spent about a month working as an elevator operator, before heading back home to Texas. Johnson later returned to San Bernardino during the 1964 presidential campaign and demonstrated that he remembered his vocational training from four decades earlier by taking passengers up and down the same elevator he once oversaw.
Gerald Ford: Football and Boxing Coach
After starring on a pair of national championship-winning football teams at the University of Michigan, Gerald Ford leaped at the chance to become an assistant football coach at Yale in 1935, with an eye toward earning his law degree from the esteemed institution. Things didn't initially go as smoothly as he'd hoped; the Yale administration denied his request to apply to the law school, believing he'd be too busy to juggle classes and professional responsibilities, before grudgingly allowing him to take a reduced course load in 1938. As part of his obligations, Ford also had to serve double duty as the school's boxing coach. As he recalled in Sports Illustrated in 1974, he took a crash course in the sport the summer before joining Yale by venturing "to the YMCA three times a week to get punched around by the Y's boxing coach." He wrote, "I didn't get good, but I got good enough to fool the Yale freshmen."
Ronald Reagan, of course, was a Hollywood actor before heading into politics, but even before that he was a celebrity in his hometown of Dixon, Illinois, for his exploits as a lifeguard. According to Anne Edwards' profile Early Reagan: The Rise to Power, the tall, athletic teenager embarked in 1926 on what became a regular summer gig on the banks of the Rock River, a tributary notorious for its strong undertow. Reagan frequently plunged into the water to pull out flailing swimmers, and afterward would carve a notch on a log to mark his success; by the end of seven summers, he had embedded 77 notches into the log. Reagan later referred to the experience in his autobiography as "one of the best jobs I ever had," citing the figure of 77 lives saved as "one of the proudest statistics of my life."
Bill Clinton: Grocery Stock Boy/Comic Book Salesman
As he described to Conan O'Brien in 2017, Bill Clinton landed one of his first jobs at a local Hot Springs, Arkansas, grocery store at age 13. However, the store owner was suspicious of his stock boy's left-handed ways, thinking it a sign of demonic influence, and insisted on right-hand usage until Clinton awkwardly knocked over two glass jars of mayonnaise. The pair eventually smoothed things over, and the owner even let the young assistant sell his used comic books out of the store. But that particular venture didn't work out in the long run for Clinton, who lamented that he never should have parted with the classic comics that surely would fetch hundreds of thousands of dollars from collectors today.
In the years before the Civil War, the United States was rapidly expanding geographically, economically, and politically. This era, roughly spanning from the end of the War of 1812 (which lasted until 1815) to the start of the Civil War in 1861, is often called the “antebellum” era, after the Latin term for “before the war.”
The term “antebellum” is also used more specifically in reference to the American South during this time, describing an idealized vision of plantation life and grand, columned estates that has been popularized by films such as Gone With the Wind. But the term has long been controversial, seen as a romanticization of a society built on slavery and racial oppression. In 2020, country band Lady Antebellum said that a newfound perspective on the “injustices, inequality, and biases Black women and men have always faced and continue to face everyday” prompted them to change their name to Lady A.
More broadly, the term refers to any period preceding a war, but it’s most often used in reference to the decades leading up to the U.S. Civil War. During this time period, the tension between reform and resistance reached a tipping point, setting the stage for one of the largest conflicts in American history.
A Nation Divided
One of the key issues leading up to the Civil War was whether slavery should be allowed to continue in the United States, and if so, where. While the Northern states were slowly moving toward abolition, the South’s economy was deeply tied to enslaved labor. Cotton production was booming, and it depended on the forced labor of millions of Black Americans.
Over the course of the 19th century, Congress passed a series of compromises to try to navigate these growing divisions. In 1820, the Missouri Compromise admitted Missouri as a slave state and Maine as a free one in order to maintain the balance. The Compromise of 1850 let California join as a free state but included a harsh law requiring the return of people who escaped slavery. And in 1854, the Kansas-Nebraska Act allowed residents in those territories to decide the issue for themselves.
These were ultimately attempts to maintain a delicate political balance and avoid Southern threats of secession, which were fueled by fears that the federal government would override state decisions. But they merely delayed conflict, and in some cases, these compromises actually highlighted how unstable the political climate had become: In the 1850s, a period of violent clashes known as Bleeding Kansas erupted between the opposing sides.
Westward Expansion
At the same time, the United States was growing fast. The 1803 Louisiana Purchase nearly doubled U.S. territory, and during the antebellum years, that push westward continued. Driven by a belief in “Manifest Destiny” — the idea that Americans had a divine right to settle the entire continent — settlers moved steadily into new areas, often displacing Indigenous peoples and redrawing territorial boundaries along the way. Each new territory raised the question of whether slavery would be allowed, and with each debate, the stakes and tensions continued to grow.
Along with deepening ideological divisions over the future of the country, the antebellum years also saw dramatic changes in American life. The rise of the manufacturing industry was reshaping the North, where factory work, canals, and railroads connected people like never before. In the agrarian South, meanwhile, economic life still revolved around large-scale plantations. As demand for cotton soared worldwide, plantations expanded and the South’s reliance on enslaved labor only grew deeper.
The period before the Civil War was also a time of rapid social reform. Advocates pushed for public education, the temperance movement gained ground, and women’s rights groups organized. For many people, no movement felt more urgent than abolition, and the voices for it grew louder. Abolitionist leaders such as Frederick Douglass and Sojourner Truth, both of whom were formerly enslaved, helped bring the realities of enslaved people into wider public view. This era also saw the rise of transcendentalism, led by influential figures such as Ralph Waldo Emerson and Henry David Thoreau, who often spoke out against slavery. The arts reflected the tensions of the time: hope for progress, but also fear of what was to come.
By the late 1850s, whatever fragile peace the country had maintained was unraveling. An 1857 Supreme Court decision in Dred Scott v. Sandford denied citizenship to Black Americans and ruled that Congress had no authority to stop slavery in the territories. The North was outraged, while the South celebrated.
In 1860, Abraham Lincoln ran for president on a platform that rejected the Dred Scott decision outright, calling it a “dangerous political heresy.” Lincoln’s election that November made the conflict unavoidable: South Carolina declared its secession from the U.S. the next month, and by February, six more Southern states had seceded as well. One month after Lincoln’s March 1861 inauguration, Confederate forces attacked Fort Sumter, marking the end of the antebellum years and the start of a new chapter in U.S. history.
Middle names are a strange concept. They often lie silent and unused, only to emerge when we fill out official forms and documents, providing an extra piece of proof as to who we are, despite our near-total disregard for the name in our daily lives. In the U.S., a majority of people have a middle name, but only around 4% of people are referred to by it. And, according to a poll by The Atlantic, only about 22% of Americans think they know the middle names of at least half of their friends or acquaintances.
A valid question therefore arises: Why do we have middle names? What’s the point, and who got us started with this seemingly superfluous naming process? Here, we take a look back through history to see when and why middle names emerged, and how they became commonplace.
Though historians don’t know exactly when middle names originated, we do know the ancient Romans used a naming system that, at times, involved what can be considered a middle name. Some Romans, especially members of the aristocracy, used a three-part naming system called tria nomina, consisting of a praenomen (personal name), nomen (family name), and cognomen (additional identifier). But the nomen, while having the same placement as a middle name, had a different function — as a family identifier, similar to modern surnames — so it’s not a clear precursor to the middle names we use today.
Instead, we have to fast-forward to medieval Europe. According to historian Stephen Wilson in The Means of Naming: A Social History, the custom of giving middle names emerged (or possibly reemerged) in Italy around the late 13th century. The naming practice became common among the Italian elite, who saw the middle name as extra real estate for honoring saints, family members, or political allies — offering a perceived spiritual or social boost.
The trend caught on, and by the Renaissance era it was increasingly common for wealthy families across Europe to include middle names during baptisms. From there, it filtered down through the social classes to become commonplace among rich and poor alike. In France, for example, more than half of all boys were given just a first name during the first decade of the 19th century. In the last decade of that century, less than a third had only a first name, while 46% had one middle name and 23% had two. By that time, middle names were common in Europe and had also traveled to the United States, helping cement their position as a standard part of Western naming practices.
As societies became more urbanized and bureaucratic, middle names became more than symbols of faith or prestige: They increasingly became practical necessities. The Industrial Revolution brought about unprecedented population growth in cities, and situations began to arise in which multiple people shared identical first and last names. Middle names therefore provided an additional layer of identification that helped distinguish between people in official records and legal documents such as birth certificates, marriage licenses, and property deeds. This practical importance reinforced the cultural expectation that children should receive a middle name at birth.
But that’s not to say middle names lost their former function. While they certainly help with administrative systems — and likely became commonplace to a certain extent due to this practicality — having a middle name was never so important that it became a legal requirement. To this day, middle names are primarily used for the same reason as back in the 13th century: as a way to honor someone, be it a godparent, relative, best friend, favorite pop star, or any other name that holds a certain significance.
In many other cases, parents choose a middle name for their child just because it sounds nice — and thanks to the middle name’s general anonymity, it presents an opportunity for parents to get creative, choosing a colorful moniker they wouldn’t necessarily be happy with as their child’s first and primary name.
Rose Bertin, an 18th-century French dressmaker often referred to as the world’s first fashion designer, once told her most famous client, Queen Marie Antoinette, “There is nothing new except what has been forgotten.”
Some gowns, however, steadfastly defy the passage of time and refuse to be forgotten. They are destined to remain fresh in the public consciousness, no matter how many years pass or how many trends come and go. Such dresses have transcended mere fashion to become enduring symbols of power or romance, tragedy or transformation, capturing moments in history in finely crafted fabric and thread.
Here’s a look at five iconic gowns from history, from the elaborate wardrobes of prerevolutionary France to the heights of Hollywood royalty.
Marie Antoinette was the epitome of excess in prerevolutionary France. Her name became synonymous with profligacy, promiscuity, and the decline of moral authority within the French monarchy, and her legendary quote, “Let them eat cake,” is widely known even today — despite there being no evidence that she ever uttered the words.
The French queen’s reputation for frivolity was only heightened by her magnificent court gowns. There was no holding back when it came to her dresses, which were often constructed using panniers — hoop skirts that added volume around the hips — giving her gowns impressively impractical width. Combined with luxurious silk garments, large box pleats, bodices, ribbons, bows, frills, and jewelry, the finished look was nothing short of glorious — and, for the increasingly irate revolutionaries, entirely inappropriate. Today we can still admire the splendor of Antoinette’s court dresses in various contemporary portraits, perhaps none more iconic than “Marie Antoinette in Court Dress” by French painter Elisabeth Louise Vigée Le Brun.
In 1840, England’s Queen Victoria married Prince Albert. It was a massive public occasion, and one that revolutionized bridal fashion. Rather than wear the traditional red ermine robe of state, Victoria decided to marry in white.
At the time, wedding dresses commonly came in a variety of colors; white dresses were rarer because they were so hard to keep clean. But as soon as Victoria stepped out in her white satin wedding gown, made with English Honiton lace — a deliberate choice to support and stimulate Britain’s lace industry — she established a tradition that endures to this day. While initially embraced only by wealthier brides, the trend of wearing a white wedding gown eventually spread across all economic levels, becoming standard in the 20th century and beyond.
Marilyn Monroe's “Happy Birthday, Mr. President” Dress
Marilyn Monroe wore an array of iconic dresses, but perhaps none pushed the boundaries as much as the skintight, crystal-embellished gown she wore to serenade President John F. Kennedy in 1962. It instantly became one of the most talked-about dresses in history, seen by many as nothing less than scandalous, especially when combined with the star’s sultry and flirtatious rendition of “Happy Birthday.”
The sheer, flesh-colored gown, designed by French fashion designer Jean Louis, was so formfitting that Monroe had to be sewn into it. Overall, it created the illusion of nudity while technically remaining modest. Along with the performance, the gown became a symbol of 1960s glamour and rebellion against conservative social norms, and remains iconic to this day.
In 1961, the romantic comedy film Breakfast at Tiffany’s was released in cinemas to widespread critical acclaim. The fashion world also took note, thanks to star Audrey Hepburn’s instantly iconic little black dress. Designed by Paris couturier Hubert de Givenchy, the sleek, floor-length black satin gown was custom-made for Hepburn. Accessorized with pearls, long black gloves, and a tiara, the dress represented a sophisticated minimalism that went on to influence fashion for decades. Building on French designer Coco Chanel’s introduction of the “little black dress” in the 1920s, Hepburn’s gown is now considered one of the most influential dresses in the history of 20th-century costume design.
On the night of June 29, 1994, Prince Charles confessed on British television that he had cheated on his estranged wife, Princess Diana, after their marriage had, in his words, “irretrievably broken down.” The same night as the broadcast, Diana attended the Vanity Fair gala at London’s Serpentine Gallery, wearing a slinky and daringly short black dress with an off-the-shoulder, cleavage-baring neckline — something rarely, if ever, worn by British royals at the time.
The press caught on immediately, quickly dubbing it Diana’s “revenge dress.” While never confirmed, it was widely regarded as Diana’s unspoken and yet very public response to Charles’ infamous television interview. In the 1998 book Diana: Her Life in Fashion, author Georgina Howell called the princess’ gown “possibly the most strategic dress ever worn by a woman in modern times.” Whatever the intention, the “revenge dress” remains one of the most famous dresses of the last half century.