The Irish writer and socialist Robert Wilson Lynd once wrote, “The belief in the possibility of a short, decisive war appears to be one of the most ancient and dangerous of human illusions.” It’s true — many wars drag on far beyond initial expectations, in some cases for centuries. For example, the conflict known as the Reconquista, in which Christian kingdoms fought the Moors to reconquer the Iberian territories, lasted a staggering 781 years. Many other conflicts have also spanned a century or more, perhaps most famously the Hundred Years’ War between England and France, which actually lasted for 116 years.
In the last two centuries, however, most wars have lasted an average of three to four months (though there are many exceptions, including World War I and World War II). But even these months-long conflicts seem lengthy in comparison to history’s shortest wars, which lasted just days, hours, or even minutes.
Slovenia declared its independence on June 25, 1991, becoming the first of the six republics to formally leave Yugoslavia (Croatia declared its independence the same day). Two days later, the Yugoslav People’s Army intervened, sending in an armored battalion. The ensuing conflict was of low intensity, casualties were light, and it ended after 10 days with the withdrawal of Yugoslav forces and a victory for the newly independent Slovenia. The war stayed short largely because of the simultaneous Croatian War of Independence: the Yugoslav People’s Army didn’t want to get bogged down in a lengthy conflict with Slovenia because Croatia, with its sizable ethnic Serb minority, was the priority. The war with Croatia was a far bloodier and more brutal affair, lasting four years and seven months.
Photo credit: Hulton Archive/ Archive Photos via Getty Images
Six-Day War (6 Days)
Some short wars are comparatively low on casualties and consequences, but that was not the case with the Six-Day War, also known as the June War or the Third Arab-Israeli War, which began on June 5, 1967. Friction between Israel and its Arab neighbors had been constant since the 1949 Armistice Agreements, and tensions became dangerously heightened when Egypt closed the Straits of Tiran to Israeli vessels and mobilized its military into defensive positions along the border with Israel. Israel launched a series of preemptive airstrikes against Egypt, and Syria and Jordan later entered the war on Egypt’s side. Despite lasting only six days (in part because Israel’s military objectives were limited), the war resulted in more than 20,000 Arab and almost 1,000 Israeli casualties, while fundamentally reshaping the region’s landscape in terms of territory and military strength.
On August 7, 2008, tensions between Russia and the former Soviet nation of Georgia spilled over into full-scale war. Russia invaded Georgia under the pretext of a “peace enforcement” operation in aid of the two Russian-backed, self-proclaimed republics — South Ossetia and Abkhazia — that existed within Georgian territory, unrecognized by most of the international community. The land, air, and sea invasion included incursions into undisputed Georgian territory, with Russia brazenly demonstrating its military strength in the region. Hundreds of Georgian servicemen and civilians were killed during the conflict, which ended on August 12 when French President Nicolas Sarkozy negotiated a ceasefire agreement. The Russo-Georgian War is regarded as the first European war of the 21st century and, in retrospect, a sign of further Russian aggressions to come.
The Hundred Hours’ War, also known as the Football War, was a brief but brutal conflict fought between El Salvador and Honduras in 1969. In the weeks prior to the war, both countries had played each other in a series of heated World Cup-qualifying soccer matches, all of which were followed by violent scenes. The matches were just a small part of rising tensions between the two countries, which had fallen out over land reform, immigration issues, and the expulsion of thousands of Salvadoran laborers from Honduras. Tensions reached a boiling point on July 14, 1969, when El Salvador launched a military attack against its Central American neighbor. After a little over four days of fighting and around 3,000 lives lost, international diplomatic efforts resulted in El Salvador reluctantly withdrawing its troops.
On August 25, 1896, the pro-British sultan of Zanzibar died. The British had their own preferred replacement in mind, but Prince Khālid ibn Barghash went against the British protectorate and took control of the sultanate. Two days later, the British arrived with two cruisers, three gunboats, and 150 marines, as well as about 900 Zanzibari soldiers. The new sultan, meanwhile, barricaded himself inside the palace, protected by almost 3,000 of his personal soldiers and supporters. The British cruisers bombarded the palace, setting it ablaze and causing around 500 casualties among the sultan’s soldiers; only one British soldier was seriously wounded. The whole engagement lasted between 38 and 40 minutes, at which point Barghash surrendered. It is considered the shortest war in recorded history.
The White House is undoubtedly one of Washington, D.C.’s most recognizable landmarks. But while many Americans have seen it in person or on the back of a $20 bill, there’s still much to be learned about the history of this iconic building. From its massive renovations to the many rooms that were converted into entertainment venues, the White House boasts a rich history that makes it one of the most remarkable places in the nation’s capital.
When George Washington took office as the first President of the United States, the White House was just a concept. In fact, original proposals called for an even grander “President’s Palace” that would have been four times bigger than the White House we know today. Architect James Hoban later proposed a more modest neoclassical design based on the Leinster House in Dublin, and he was chosen to spearhead the project. Upon its completion in 1800, John Adams became the first President to call the White House home, and the building’s legacy has only grown from there. Here are six little-known facts about the White House.
The White House Has a Bowling Alley, Movie Theater, and Pool
The White House is not only a place to conduct government business; it’s also the first family’s home. Over the years, Presidents and their families have repurposed some of the building’s 132 rooms into entertainment venues to make their lives more enjoyable. One such room is the White House bowling alley, which Harry Truman opened in 1947 in the West Wing. While Truman wasn’t a frequent bowler himself, White House staffers formed the White House Bowling League in 1950. Those original lanes were closed by Dwight D. Eisenhower in 1955, but years later, Richard Nixon opened a new bowling alley directly underneath the North Portico.
Other notable spaces found throughout the White House include a 40-seat movie theater, which was converted from a former cloakroom in the East Wing. During Bill Clinton’s administration, a third-floor sitting room was repurposed as a music room, where the President practiced playing saxophone. The White House also has a storied history of swimming pools; the first White House swimming pool was built indoors for Franklin D. Roosevelt in 1933, as he often swam for exercise in the wake of his polio diagnosis. In 1975, Gerald Ford commissioned the construction of an outdoor pool that he proudly showed off by taking a dip in front of reporters on July 5 of that year. This secluded escape is located just south of the West Wing, and remains open to Presidents and their families.
The White House wasn’t always treated with the same care and respect that it is today. In 1948, during the Truman administration, the building was deemed on the verge of structural collapse, forcing President Truman to move into the nearby Blair House (a building normally used as the President’s guest residence). Engineers determined that much of the deterioration dated to FDR’s presidency, as the Depression and World War II led Roosevelt to put off much-needed repairs.
Though Truman wasn’t pleased with the displacement, he also wasn’t in a rush to return to an unsafe building. The President authorized an extensive renovation that lasted from 1948 until 1952, which saw the digging of deeper foundations and the addition of a steel frame skeleton within the White House’s interior. Truman had previously installed a balcony outside the presidential living quarters, completed in early 1948. While most of the White House had to be revamped, this balcony was one of the few elements left untouched, because it had been so well built.
Photo credit: Stock Montage/ Archive Photos via Getty Images
The White House Requires 570 Gallons of Paint Every Few Years
Keeping the White House so white is an expensive and time-consuming process. The building requires a new coat of paint every four to six years, and recently underwent an extensive recoating in 2019. The cost of this massive paint job is around $85,000, as workers use a special German-made paint by Duron. The specific shade — called Whisper White — is used for the preservation of historic buildings, and costs up to $150 per gallon.
While $85,000 for new paint may seem like a lot, it’s actually a small portion of the total annual maintenance costs needed to keep the White House up and running. The 2023 Financial Services and General Government Appropriations Act set aside $2.5 million in annual funds for White House upkeep.
On August 24, 1814, the White House was set aflame by British troops as part of the Burning of Washington, an invasion during the War of 1812. The fire devastated the structure, and only two items salvaged that day remain in the modern White House: a full-length portrait of George Washington that was saved by First Lady Dolley Madison, and a small wooden medicine chest. The rebuilding process proved to be an arduous one, and Congress even considered temporarily moving the nation’s capital. However, in the end, the White House’s original architect, James Hoban, was tasked with rebuilding the President’s home, and repairs were completed in 1817.
Tragedy struck the White House yet again on Christmas Eve in 1929, as a fire swept through a storage area containing 200,000 government pamphlets. Aides were alerted to the blaze around 8 p.m., while President Herbert Hoover hosted a Christmas party downstairs. After being informed of the inferno, the President and his aides rushed to the executive offices to save important documents, the desk chair, and a presidential flag — just in time. More than 100 firefighters rushed to the White House, and despite working in freezing temperatures, they successfully put out the fire around 10:30 p.m. Though the building on the whole was still usable, the executive offices were heavily damaged and the press room destroyed, necessitating repairs that continued until the following spring.
The Building Wasn’t Officially Named the White House Until 1901
The White House was painted white in 1798 to protect the building’s sandstone exterior, and the press began colloquially referring to it as the “White House,” though it was just a nickname at the time. Throughout the 19th century, the building was formally known as the “President’s House” or “Executive Mansion,” two names that were used until Theodore Roosevelt took office in 1901.
On October 17, 1901, Roosevelt directed his secretary, George B. Cortelyou, to alert various cabinet departments of the building’s new name. Roosevelt’s bulletin mandated the change of “the headings, or date lines, of all official papers and documents requiring [Roosevelt’s] signature, from ‘Executive Mansion’ to ‘White House.’” The President believed that “Executive Mansion” was too generic a term used around the world. Roosevelt determined that by changing the name to the “White House,” the building would be instantly recognizable as the home of the President of the United States.
Jackie Kennedy Won an Emmy for Her Televised Tour of the White House
While President Truman worked to ensure the stability of the White House’s structure, it was First Lady Jacqueline Kennedy who revolutionized its interior. In 1941, long before she became a resident of the White House, Jackie toured the building with her mother and sister, and was dismayed by the lack of historical furnishings. Shortly after moving in with her husband in 1961, she made it her mission to overhaul the White House experience and create a more comfortable environment that also highlighted the building’s extensive legacy.
Enlisting the help of Americana collector Henry Francis du Pont, French designer Stéphane Boudin, and decorator Dorothy Parish, the First Lady began work on a massive restoration project. Her goal was not merely to redecorate but to showcase the history of the mansion and the country itself. From outfitting the Blue Room with French furniture that President James Monroe had ordered back in 1818, to redesigning the Treaty Room in a Victorian style, Jackie left almost no corner of the White House untouched. On February 14, 1962, Mrs. Kennedy led a guided tour of the building that was broadcast on CBS and NBC, drawing an estimated 80 million viewers and earning an honorary Emmy Award.
German-born physicist Albert Einstein (1879-1955) was so influential, his very name has become synonymous with genius. While working as a patent clerk in 1905 at the age of 26, Einstein submitted four papers to the German journal Annalen der Physik that changed humanity’s perception of time, gravity, and light. Today, historians mark the year as Einstein’s annus mirabilis, or “miracle year” — and he was just getting started.
Much of Einstein’s work is famously dense. Few people other than physicists need to fully comprehend the mind-bending ideas behind the general theory of relativity and Einstein’s other theories, but these discoveries form the bedrock of technologies the rest of us enjoy every day. Here are five ways Einstein’s ideas changed the world, and continue to provide a roadmap for humanity’s future.
GPS Would Be Impossible Without the General Theory of Relativity
Some 10,900 nautical miles above our heads, 31 satellites orbit Earth as part of the Global Positioning System (GPS) — but if it weren’t for Einstein, those satellites would be little more than space junk. The very foundation of GPS is accurate timekeeping, as satellites need to keep time to correctly log the distance from a ground-based receiver (such as your smartphone). GPS satellites are so precise, the atomic clocks on board are accurate to within three-billionths of a second, a feat impossible without Einstein’s special and general theories of relativity. The special theory of relativity states that time flows differently depending on velocity: because the satellites travel at about 8,700 miles per hour, their clocks “lose” about 7 microseconds per day compared to clocks on the ground. Einstein’s general theory of relativity — an idea published in 1915 that elaborates on his previous theory by throwing gravity into the mix — adds that distance from a source of mass, in this case the Earth, also affects the flow of time: sitting higher in Earth’s gravity well makes the satellites’ clocks “gain” about 45 microseconds per day, for a net gain of roughly 38 microseconds. (This also means that, technically speaking, your head ages slightly faster than your feet, because your feet are closer to the Earth, though on time scales that are ultimately negligible.) Today, GPS takes this “time dilation” into account, so satellites always know where you are when you open Google Maps.
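Those microsecond figures can be checked with back-of-the-envelope arithmetic. The sketch below uses approximate textbook values for the GPS orbit (about 20,200 km altitude, about 3.9 km/s orbital speed) and the first-order formulas for velocity and gravitational time dilation; it is an illustration, not a precision calculation.

```python
C = 299_792_458.0           # speed of light, m/s
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24          # mass of Earth, kg
R_EARTH = 6.371e6           # mean radius of Earth, m
R_ORBIT = R_EARTH + 20.2e6  # GPS orbital radius (~20,200 km altitude), m
V_SAT = 3.874e3             # GPS orbital speed (~8,700 mph), m/s
SECONDS_PER_DAY = 86_400

# Special relativity: a moving clock runs slow by roughly v^2 / (2c^2).
sr_loss = (V_SAT**2 / (2 * C**2)) * SECONDS_PER_DAY

# General relativity: a clock higher in Earth's gravity well runs fast by
# the difference in gravitational potential: (GM/c^2) * (1/r_ground - 1/r_orbit).
gr_gain = (G * M_EARTH / C**2) * (1 / R_EARTH - 1 / R_ORBIT) * SECONDS_PER_DAY

print(f"SR slowdown: {sr_loss * 1e6:.1f} microseconds/day")              # ~7
print(f"GR speedup:  {gr_gain * 1e6:.1f} microseconds/day")              # ~46
print(f"Net drift:   {(gr_gain - sr_loss) * 1e6:.1f} microseconds/day")  # ~38
```

Left uncorrected, a drift of ~38 microseconds per day would translate into position errors of several miles per day, since light covers about 11 km in 38 microseconds.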
The Explanation of the Photoelectric Effect Helped Make Modern Solar Power Possible
It probably comes as no surprise that Einstein won the Nobel Prize in physics in 1921, but what many people don’t realize is that the award wasn’t honoring the wunderkind’s groundbreaking general theory of relativity, but rather his revolutionary yet often overlooked explanation of the photoelectric effect. The initial discovery of the photoelectric effect came in 1887 from German physicist Heinrich Rudolf Hertz (yes, that Hertz), who noticed that when ultraviolet light hit a metal plate, it created sparks. What was puzzling was that each metal had a threshold frequency of light, below which no electrons were ejected no matter how intense the light. Then, in 1905, 26-year-old Einstein solved this conundrum by introducing a new conception of light, which he published in his first paper submitted to Annalen der Physik. He argued that light wasn’t just a wave, as most scientists then believed, but also a stream of particles, later known to science as “photons.” Einstein posited that each photon carries a fixed amount of energy determined by its frequency, and his theory — though derided for years — successfully explained the photoelectric phenomenon. Though solar cells predated Einstein’s discovery by several decades, it wasn’t until Einstein’s theory that scientists understood why they worked, which helped make solar panels even more efficient.
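Einstein’s relation for photon energy, E = hf, makes the threshold behavior easy to see: a photon ejects an electron only if its energy exceeds the metal’s “work function.” The sketch below uses standard physical constants and an illustrative work function for zinc (roughly 4.3 eV; exact values vary by surface and source).

```python
H = 6.626e-34   # Planck constant, J*s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(freq_hz: float) -> float:
    """Energy of a single photon of the given frequency, in eV (E = h*f)."""
    return H * freq_hz / EV

def threshold_frequency_hz(work_function_ev: float) -> float:
    """Minimum light frequency that can eject an electron from a metal."""
    return work_function_ev * EV / H

# Assumed work function for zinc, ~4.3 eV:
f_min = threshold_frequency_hz(4.3)
print(f"Threshold frequency for zinc: {f_min:.2e} Hz")  # ~1.04e15 Hz, in the ultraviolet
```

This is exactly what Hertz saw: no matter how bright the lamp, light below the threshold frequency produces no sparks, because no single photon carries enough energy.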
Lasers Were Developed Thanks to Einstein’s Quantum Theory of Radiation
Lasers (an acronym for “Light Amplification by Stimulated Emission of Radiation”) scan your groceries at the supermarket, make self-driving cars possible, and form the backbone of optical communication. And yes, we can thank Einstein for this one, too. In 1917, Einstein published a paper detailing his quantum theory of radiation. The theory states that atoms can be stimulated to change energy levels when hit with light of a specific frequency. If an already-excited atom is hit with a photon of that same frequency, it emits a second, coherent photon (traveling in the same direction) while the atom’s electron returns to its ground state. This means you can artificially create a sudden burst of coherent light as atoms discharge in a chain reaction, otherwise known as “stimulated emission of radiation” (the “ser” in “laser”). It wasn’t until after World War II that scientists found a practical use for Einstein’s discovery, developing the laser by placing the emitting material between mirrors to amplify the light.
Photo credit: MPI/ Archive Photos via Getty Images
The E=mc² Equation Formed the Scientific Basis for the Nuclear Bomb
The final discovery of Einstein’s “miracle year” was the concept that mass and energy are equivalent, and that their relationship can be explained with the elegantly simple equation E=mc², meaning energy equals mass times the speed of light squared. Describing mass as essentially super-dense energy, Einstein’s equation shows how even a small amount of mass at the atomic level can produce a tremendous amount of energy when multiplied by the speed of light squared — and you probably see where this is going.
This process explains how a neutron fired at a uranium atom splits it into smaller atoms while releasing a tremendous amount of energy. It’s known as nuclear fission, and when the process is controlled, it provides low-emission nuclear energy. When released in an uncontrolled state, it can be used to produce an atomic bomb. Einstein himself never worked on the Manhattan Project, the secret government program to make the first nuclear bomb, but he rubber-stamped the idea in a 1939 letter to Franklin D. Roosevelt that argued for the U.S. to make the bomb before Nazi Germany. Einstein later regarded that letter as the “one great mistake in my life.”
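The scale of E=mc² is easy to appreciate with a single multiplication. The sketch below converts one gram of mass entirely into energy and compares it with burning coal, assuming an illustrative energy density of about 24 MJ per kilogram of coal.

```python
C = 299_792_458.0  # speed of light, m/s

mass_kg = 0.001    # one gram of matter
energy_j = mass_kg * C**2  # E = m * c^2

# Compare with chemical combustion of coal (~24 MJ/kg, an approximate value).
COAL_J_PER_KG = 24e6
coal_equiv_tonnes = energy_j / COAL_J_PER_KG / 1000

print(f"E = mc^2 for 1 gram: {energy_j:.2e} J")                 # ~9.0e13 J
print(f"Equivalent to burning ~{coal_equiv_tonnes:,.0f} tonnes of coal")
```

One gram of mass corresponds to roughly 90 terajoules, which is why even the tiny mass deficits in nuclear reactions dwarf any chemical energy source.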
The E=mc² Equation Could Point to the Future of Energy
As previously described, nuclear fission works by breaking apart a heavy element, such as a uranium-235 atom, into two smaller atoms (such as krypton and barium). The reverse is also possible: if two light nuclei (e.g., hydrogen isotopes) can overcome their electrostatic repulsion, they fuse together to form a heavier helium-4 nucleus — sort of like fission in reverse. In accordance with the E=mc² equation, this process likewise releases a tremendous amount of energy and heat. This is known as nuclear fusion, and it’s the atomic science that is the energy-producing engine of stars.
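The bookkeeping behind fusion energy is again E=mc²: the products weigh slightly less than the reactants, and the missing mass emerges as energy. As an illustration, the sketch below works through the deuterium-tritium reaction (the one targeted by most experimental reactors, including ITER), using standard atomic masses in unified atomic mass units.

```python
# Mass-energy released in deuterium-tritium fusion: D + T -> He-4 + neutron.
# Masses are in unified atomic mass units (u); 1 u is equivalent to ~931.494 MeV.
M_DEUTERIUM = 2.014102
M_TRITIUM   = 3.016049
M_HELIUM4   = 4.002602
M_NEUTRON   = 1.008665
MEV_PER_U   = 931.494

# The "mass defect": the products weigh less than the reactants.
mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
energy_mev = mass_defect * MEV_PER_U

print(f"Mass converted to energy: {mass_defect:.6f} u")
print(f"Energy released per fusion: {energy_mev:.1f} MeV")  # ~17.6 MeV
```

Roughly 0.4% of the fuel’s mass is converted to energy in each reaction, which is why fusion is millions of times more energy-dense than any chemical fuel.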
On paper, nuclear fusion could provide the answer to humanity’s expanding energy needs. There’s no enriched material involved; nuclear proliferation with fusion reactors isn’t a worry; a meltdown is scientifically impossible; there’s no radioactive material produced as a byproduct; it’s completely carbon-free; and fusing atoms together releases 4 million times more energy than the chemical process of burning coal. There’s just one catch: Building a fusion reactor is immensely complicated. That’s never stopped people before, though. An international coalition of scientists and agencies is hard at work creating the International Thermonuclear Experimental Reactor, or ITER, which is set to go online in 2025.
Abraham Lincoln led the United States through the Civil War and helped bring about the abolition of slavery. But the 16th president also had a lighter side. Lincoln had a varied list of interests outside of politics — he was a farmer, carpenter, animal lover, and inventor — and was known for his keen sense of humor. Here are five funny facts you might not know about the man known as the Railsplitter, Honest Abe, and, according to many historians, the nation’s greatest president.
Photo credit: Fotosearch/ Archive Photos via Getty Images
Lincoln Was Known to Respond to Insults With Jokes
Abraham Lincoln stood out, visually. He was the tallest president, at 6 feet, 4 inches (at a time when the average American male was 2 inches shorter than today), and he had the largest feet of any president, at a size 14. Consequently, Lincoln’s political opponents frequently took absurd shots at his appearance. In 1860, The Houston Telegraph wrote that he had “the leanest, lankiest, most ungainly mass of legs, arms, and hatchet face ever strung upon a single frame.” The Southern Confederacy similarly published a poem stating that “his nose was as long and as ugly and big / as the snout of a half-starved Illinois pig.” Lincoln took it all with characteristic good humor and was not above the occasional self-deprecating joke. He once recounted a story in which someone called him a “self-made man,” to which he replied, “Well, all I’ve got to say is that it was a damned bad job.” And when Illinois Senator Stephen Douglas called him “two-faced” in a debate, Lincoln famously replied, “If I had another face, do you think I’d wear this one?”
Lincoln Grew His Beard at the Suggestion of an 11-Year-Old Girl
Lincoln’s opponents may have made fun of his appearance, but it was a letter from a supporter that led to him becoming the first fully bearded president. An 11-year-old named Grace Bedell saw a poster of a clean-shaven Lincoln that her father brought home from a county fair, and decided she needed to encourage the candidate to go for a glow-up. Bedell wrote Lincoln, “I have yet got four brothers and part of them will vote for you any way and if you let your whiskers grow I will try and get the rest of them to vote for you. You would look a great deal better for your face is so thin.” Lincoln wrote back to Bedell, seemingly considering her advice with the response, “Do you not think people would call it a piece of silly affectation?” He ultimately grew out his beard after being elected president in November 1860. Just a few months later, Lincoln met Bedell when his train tour stopped in New York, and let her know that she was behind his makeover: “You see,” Lincoln told her, “I let these whiskers grow for you, Grace.”
An animal lover, Lincoln owned dogs and cats throughout his life, and he let his sons Tad and Willie keep rabbits, turkeys, horses, and goats at the White House. The animal that got the most special treatment, however, was Lincoln’s cat Tabby, whom he let dine at the table, including once during a formal dinner at the White House. When Mary Todd Lincoln said it was “shameful in front of their guests,” the president replied, “If the gold fork was good enough for [former President James] Buchanan, I think it is good enough for Tabby.” Lincoln thought highly of his cat Dixie, as well; he once remarked that she was “smarter than my whole Cabinet!”
The stovepipe hat was one of Lincoln’s signature accessories, and the final hat he ever wore is now kept at the Smithsonian’s National Museum of American History. The top hat helped the 6-foot-4 president tower over crowds even more than he naturally did, but the adornment wasn’t just used for looks: The president actually kept documents in the hat while he was wearing it. Lincoln would often remove papers (letters from friends, as well as speeches) from his hat while addressing constituents, and he was also known to take documents from atop his head and throw them down in front of generals in anger. According to some historians, the phrase “keep it under your hat” — meaning to keep something secret — comes from Lincoln’s habit.
Photo credit: Three Lions/ Hulton Archive via Getty Images
Lincoln Was Granted a Patent for an Invention That Didn’t Work
In May 1849, right after the end of his term in the U.S. House of Representatives, Lincoln was granted a patent for “adjustable buoyant air chambers,” which were meant to help buoy boats over shoals. He got the idea from his time working as a ferryman, when on two different occasions he was on a riverboat that got stuck after running aground on the Mississippi River. Lincoln whittled the patent model himself, and submitted sketches showing how the invention would work. The air chambers would be attached to the side of the boat and inflated to lift the vessel over an obstruction — at least, that was the theory. The device was never produced, and it turned out that the amount of force needed to lower and fill the air chambers made it impractical. Nevertheless, Lincoln remains the only U.S. president ever to receive a patent.
Though she’s one of the most famous leaders of the ancient world, Cleopatra’s life is still shrouded in mystery. Cleopatra VII Thea Philopator ruled Egypt for 22 years as a powerful queen, and while her legacy is filled with tales of a goddess incarnate who seduced men to get what she wanted and had no problem killing anyone who got in her way (even her own siblings), much of this image is thanks to Hollywood and other pop culture depictions of the Egyptian queen. Actress Elizabeth Taylor famously played her in the big-budget 1963 film Cleopatra, and there have been numerous other portrayals of this enigmatic leader in art, fiction, and film — most of them filled with anachronisms and exaggerations and lacking in historical accuracy.
What historians do know is that when Cleopatra’s father, Ptolemy XII, died in 51 BCE, 18-year-old Cleopatra was named his successor. Over the course of her reign, she ruled alongside two of her brothers and her oldest son. She envisioned herself as the sole ruler of Egypt, however, and formed alliances with two of Rome’s most powerful generals in order to protect and maintain her power. In 47 BCE, she bore a son by Julius Caesar, nicknaming him Caesarion, or “little Caesar,” despite his illegitimacy. A few years later, in 44 BCE, Cleopatra’s relationship with Caesar came to an abrupt end when the Roman leader was assassinated, forcing her to develop new strategic alliances to secure her reign.
The Egyptian queen found a new political and romantic partner in Caesar’s friend and ally Mark Antony. With Antony, Cleopatra continued her political alliance with Rome, and they had three children together. However, Caesar’s adopted son Octavian declared war on the pair, leading to their untimely deaths. Cleopatra died in 30 BCE at age 39, as the last Egyptian queen and next-to-last Egyptian pharaoh. (Octavian had the last pharaoh, Cleopatra and Caesar’s 17-year-old son Caesarion, put to death just days later.) Octavian went on to become the first Roman emperor, dubbed Augustus Caesar, embracing his role as Caesar’s heir and ending the Ptolemaic kingdom.
It has been over 2,000 years since Cleopatra’s death, but her fascinating life still captures the imagination. Here are five popular myths about the Egyptian queen that separate the truth from the legend.
Photo credit: Hulton Archive/ Hulton Royals Collection via Getty Images
Myth: Cleopatra Was Ethnically Egyptian
Cleopatra is one of the best-known figures in Egypt’s history, but she wasn’t ethnically Egyptian. Though she was born in Alexandria, Egypt, around 69 BCE, Cleopatra’s lineage is traced to Macedonian Greece. She was the daughter of Ptolemy XII, a descendant of Ptolemy I Soter, a Macedonian general who served under Alexander the Great and founded the Ptolemaic dynasty that ruled in Egypt. Historians aren’t certain about the identity of Cleopatra’s mother, but theories suggest Cleopatra was the daughter of either Ptolemy’s first wife, Cleopatra V; his second wife, whose name is unknown; or a concubine.
Photo credit: Print Collector/ Hulton Archive via Getty Images
Myth: Cleopatra Wasn’t Prepared to Be Queen
Little is known about Cleopatra’s life before she became queen, but as a member of Ptolemaic royalty, she was highly educated and received a well-rounded Hellenistic education that included rhetoric, philosophy, astronomy, music, and Greek literature. She spoke around nine languages (Egyptian, Greek, Latin, Syrian, Arabic, Hebrew, Ethiopian, Persian, and Aramaic) and was the first of the Ptolemaic line to learn the Egyptian language. Praised for her intellect, she was knowledgeable in a wide variety of subjects, including economics, military strategy, law, and linguistics.
Photo credit: Print Collector/ Hulton Archive via Getty Images
Myth: Cleopatra Was Known for Her Beauty
Accounts of Cleopatra’s life often suggest she was a beautiful seductress, a myth likely started by Octavian to justify his ongoing rivalry with Mark Antony. Few ancient historians characterized Cleopatra as beautiful, and the existing artifacts bearing her likeness are inconsistent in their portrayal of the Egyptian queen. Some coins, for instance, show Cleopatra with more masculine features, such as a strong jaw, sloped forehead, and aquiline nose, perhaps as a way of emphasizing her leadership strength. Other artifacts present her with a more conventionally feminine appearance, accentuating rounded cheeks, stylishly curled hair, and a small chin. While legend attributed Cleopatra’s power to her beauty, it was her intellect and charisma that garnered her the devotion of others. In his 75 CE biography Life of Antony, Greek philosopher and historian Plutarch wrote of Cleopatra, “For her actual beauty, it is said, was not in itself so remarkable that none could be compared with her… but the contact of her presence, if you lived with her, was irresistible.”
Born into royalty, Cleopatra was the wealthiest woman in the world during her lifetime and is still one of the wealthiest people in all of history, with an assigned net worth of tens of billions in today’s currency. She identified as the living manifestation of the goddess Isis, adorning herself in beautiful fabrics and priceless jewels, and enjoying an extravagant and decadent lifestyle. But far from being a mere figurehead, Cleopatra was a savvy public relations expert, skilled at both political and military tactics. She took an active role in leading Egypt, using her intelligence and charisma to build valuable strategic alliances in order to protect Egypt’s independence from the Roman Empire. By establishing trade with Eastern nations, she grew Egypt’s economy and solidified its position as a world power.
In 31 BCE, Antony and Cleopatra were overwhelmed by Octavian’s formidable forces and lost the Battle of Actium, forcing the pair to retreat to Alexandria. As the war raged on, the lovers made a pact to take their own lives rather than risk capture by Octavian. When Octavian’s forces entered Alexandria in 30 BCE, Antony, believing Cleopatra was already dead, fell on his own sword. Cleopatra, however, was still alive and barricaded in the seaside mausoleum she was constructing for herself.
The most well-known legend about Cleopatra’s death features a grief-stricken queen coaxing a venomous viper or cobra to bite her. It is generally accepted that Cleopatra died by suicide, but the details of how it was executed may never be known. It seems unlikely that she would have used an imprecise method such as a snakebite, and many historians believe she may have drunk poison or used a toxic ointment instead. While Cleopatra might have grieved the loss of Antony in the days following his death, it was more likely the threat of being paraded through the streets as Octavian’s prisoner that motivated her to end her life.
Photo credit: Harold M. Lambert/ Archive Photos via Getty Images
Author Nicole Villeneuve
August 3, 2023
The objects we use in our everyday lives can easily be taken for granted. Simple conveniences such as lighting or the cars that get us from point A to point B are so ingrained in the day-to-day that we don’t stop to think about what life would be like without them — let alone how they even got here in the first place.
Some stories are more familiar than others: Thomas Edison famously toiled for years (and built on the work of others) before finalizing the first practical incandescent lightbulb, while the 1901 Mercedes, developed by Wilhelm Maybach at Daimler, became the prototype for all modern cars that followed. But what about our toothbrushes? Air conditioning? Or the most vital of daily tools, the intangible but indispensable Wi-Fi network? Read on to learn about the surprising origin stories of six everyday objects.
The Basis for Wi-Fi Was Invented By a Hollywood Starlet
The invention of Wi-Fi has sparked plenty of debates and disputes over the years. Various individuals and organizations contributed to its development, and while the specific inventor of Wi-Fi is a matter of contention, one unexpected notable figure played a significant role in laying the foundation that made it possible: actress and inventor Hedy Lamarr.
Lamarr is known for her Hollywood career in the 1930s and ’40s, but her accomplishments went beyond the silver screen. During World War II, she teamed up with composer George Antheil to create a secure communication system that would prevent signal interference by enemy forces. This “frequency hopping” system was intended to guide torpedoes, and is widely considered the precursor to not only Wi-Fi, but GPS and Bluetooth technologies as well. However, Lamarr and Antheil’s patent expired before the technology was ever put to use, and only in recent decades has the actress received the credit she deserves for enabling these transformative technologies.
Air Conditioning Came From a New York Printing Press
Air conditioning has become a necessity for living and working comfortably in increasingly hot temperatures, but the AC unit actually originated as a way to fix a faltering printing press. In 1902, an engineer named Willis Carrier was working at Buffalo, New York’s Sackett-Wilhelms Lithographing and Publishing Company. He was tasked with finding a solution to control the humidity levels in the plant, which were wreaking havoc on paper and ink quality. His solution, known as the Apparatus for Treating Air, marked the birth of modern air conditioning.
Carrier's system consisted of steam coils and an industrial fan. The cold water in the coils produced excess condensation, which would be blown out of the room to lower humidity and cool the air. It not only solved the printing problem, but also inadvertently introduced a revolutionary technology with wide-ranging applications. While other people had experimented with cooling technology before Carrier, his pioneering work impacted architecture, engineering, and everyday life for generations to come.
If there’s one item most people use every single day, it’s a toothbrush. While the humble oral hygiene tool dates back to ancient civilizations, when frayed twigs were used to scrub teeth, the model for bristle brushes as we know them today didn’t emerge until the late 15th century. According to the American Dental Association, a Chinese emperor is credited with the first such brush — stiff, coarse hog hairs set into a handle made of bone or bamboo.
Hog or horse hair toothbrushes continued to be used for hundreds of years; by the late 1700s, they were even being mass-produced. It wasn’t until nylon was invented by a team at DuPont in 1935 that the material — the world’s first fully synthetic fiber — was put into toothbrushes. By 1938, the revolutionary product hit the market. Called Dr. West’s Miracle-Tuft Toothbrush, it wasn’t initially a hit, since early nylon was still far too stiff and abrasive, and brushing one’s teeth still wasn't considered a daily necessity. By the end of World War II, Americans were influenced by the hygiene habits of returning soldiers, and by the time DuPont introduced softer bristles in 1950, a booming industry was born.
Photo credit: Hulton Archive/ Archive Photos via Getty Images
The First Barcode Was Drawn in the Sand on Miami Beach
In 1948, Joseph Woodland, an inventor and grad student at the Drexel Institute in Philadelphia, was posed a challenge by a local grocer who wanted to speed up his checkouts. Given the limited technology at the time, how could they automate the process? Woodland was keen on the challenge, and pondered the problem while visiting family in Miami Beach. One day, inspiration struck on the beach. As Woodland drew in the sand with his fingertips, a vision of elongated bars, inspired by Morse code, came to him. Using a black-and-white bull's-eye design, he created a code that machines could decipher, pulling both the product information and its price. While early experiments were successful, it took several more decades of work from many thinkers before an IBM engineer designed the rectangle-shaped barcode we know and use regularly today.
While the barcode was originally created to answer supermarkets’ need for a faster checkout process, its true significance ended up being its ability to offer statistical insight into product sales. This innovation revolutionized market research, offering more detailed insights into consumer preferences, while also making manufacturing more efficient. In 1992, President George H.W. Bush presented Woodland with a National Medal of Technology and Innovation for his contributions to American retail and beyond.
The First Disposable Diapers Were Handmade by a Mom
Baby diapers may not be an everyday object for everyone, but anyone who has used them is keenly aware of their indispensability. While disposable diapers are now a multibillion-dollar global business, they began as a humble homemade project. In 1947, Valerie Hunter Gordon was just about to welcome her third child when she decided she’d had enough of washing soiled cloth diapers. She went on the hunt for single-use options; to her surprise, there were none available, so she sat down and made them herself.
Using her Singer sewing machine at her kitchen table, Gordon fashioned the “Paddi” out of gauze, for absorption, with an outer nylon layer to hold the absorbent pad in place. (The nylon was actually a piece of parachute she got from her husband’s army base.) Snap closures were added for ease of use. As soon as her friends saw what she was making, they wanted some of their own; Gordon estimated she handmade more than 600 of them. The Gordons applied for a patent, and by 1949, they were producing the diapers in partnership with a U.K. company. By the 1960s, an American brand known as Pampers came along, and the Paddi business dwindled.
Photo credit: Lucas Oleniuk/ Toronto Star via Getty Images
Kleenex Started as Wartime Gas Mask Filters
Kleenex tissues, like many other new products during the 1920s, were a wartime innovation. With an increased demand for cotton supplies on the battlefield, American paper company Kimberly-Clark developed a cotton substitute made from wood pulp. The company called it cellucotton, and sent it overseas to use as bandages and as filters in gas masks. Following the war, Kimberly-Clark sought civilian applications for its abundance of cellucotton. One employee, Walter Luecke, was inspired by Army nurses who used it as makeshift disposable sanitary napkins. The company initially pushed back, claiming sanitary napkins were “too personal” to produce and market, but it eventually relented, and in 1920, released its first consumer product, the revolutionary Kotex sanitary pads. Shortly after the success of Kotex, the surplus of cellucotton was further adapted into a thinner, softer product that was released in 1924 and marketed as a cold cream and makeup remover: Kleenex.
Though it’s often thought of as a single trail, the Silk Road was actually a vast network of trade routes spanning multiple centuries and continents, connecting cultures as far as 6,000 miles away from each other. The network started around 138 BCE, when Han dynasty China sent out an envoy to make trading connections with other Asian countries. Over the next two centuries, trade routes extended westward through the Indian subcontinent, the Syrian desert, and the Arabian Peninsula, all the way to Greece and Rome. Some of these connections were made over land, but many were made by sea, too. This vibrant network lasted around 1,500 years, ending in 1453 CE when the Ottoman Empire closed off trade with the West — but not before the global exchange of goods and ideas changed the course of history. Here are seven of the most influential and sought-after things that were traded on the Silk Road.
Craftspeople in China had been raising silkworms and working with silk for thousands of years before the luxurious textile became a valuable commodity. Silk was so prized in ancient Rome that the 19th-century German geographer Ferdinand von Richthofen named the Silk Road after the coveted material. Silk reached India in the second century BCE, and in the third century CE, Persia became a major silk-trading hub that connected Europe to East Asia. The trade route spread the popular textile around the world, paving the way for the complex woven patterns of Byzantium and Iran. Silk production, however, remained a closely guarded secret in Asia even after Byzantine Emperor Justinian I had silkworms smuggled over in bamboo tubes.
Silk wasn’t the only fiber that changed hands along the Silk Road, however. Hemp, cotton, and wool were all popular items as well. The cultural exchange also included finished fabric and weaving techniques. Different types of clothing traveled between nations, too; trousers, which made horseback riding easier, originated in Mongolia, and various sorts of woven belts evolved throughout the era.
Photo credit: Print Collector/ Hulton Archive via Getty Images
Paper
It’s easy to take paper for granted now, but in the early days of the Silk Road, it was a new technology for many cultures. Early writing appeared on clay, bone, wax, and parchment, which was made from animal skins and was labor-intensive to create. The first known paper, made from mulberry fibers and other discarded materials, appeared in China during the Eastern Han dynasty (25 to 220 CE). Buddhist monks started sharing religious writing on paper because it was durable and easy to transport. It spread through religious communities first and eventually hit trade routes.
Paper was extraordinarily useful — merchants both sold it and used it themselves for recordkeeping — so it spread quickly. It was a popular item in its own right, as well as a means to convey other valuable commodities, such as scientific ideas and literature. Many regions set up their own paper industries; Baghdad, for example, became known for producing stationery. Paper production eventually reached Europe via Sicily and Spain, but Chinese paper remained a valuable export because it was considered higher quality.
Gunpowder is a carefully measured mix of potassium nitrate, charcoal, and sulfur, designed to burn quickly and trap enough gas to propel an object, be it a firework or a cannonball. It was a later addition to Silk Road trade routes, and its exact history is unclear, though it’s believed to have originated in China, where it was in use by the 10th century CE — and possibly a few centuries earlier — for signaling and fireworks. Its use in weaponry originated in China, too, starting between the 10th and 12th centuries CE, with a precursor to a gun made out of a bamboo reed. Full-fledged guns evolved by the end of the 13th century, and soon moved westward. Guns and gunpowder reached the Middle East by 1304 CE, and were introduced to Europe, including England and France, by the end of the 14th century CE.
Spices are among the oldest goods to make their way along the Silk Road; cinnamon was being traded throughout Asia as early as 2000 BCE. Many plants had limited distribution at that time, so specific seasonings became especially prized — nutmeg and cloves, for example, grew only in the Moluccas, a small group of Indonesian islands known at the time as the Spice Islands. Traders often made up dazzling stories about the origins of spices to drive up their intrigue and value. Spices such as cinnamon, cardamom, and ginger were so prized that the word “spice” is even derived from the Latin word for “special wares.” Around the turn of the second century CE, Alexandria, Egypt, then under Roman rule, became an important spice-trading hub, and soon the tasty goods spread northward to Greece. Spices reached northern Europe via Genoa and Venice starting around the 11th century.
The Silk Road saw a robust tea trade, too. Camellia sinensis, the plant that grows tea leaves, originated in Southeast Asia (roughly where China, India, and Myanmar meet today) and has been part of Chinese culture since at least as far back as the 10th century BCE. Its first trips on the Silk Road were eastward to Japan and Korea, where the plant began to be cultivated. Over the next several centuries, these East Asian nations developed a culture and ritual around both brewing and drinking tea. Associated pottery, such as teapots, followed tea as it spread to India, the Middle East, North Africa, and Europe.
If you’ve ever heard porcelain goods referred to as “china” — as in “china dolls” or “fine china” — it’s because porcelain was a distinctly Chinese art for many years. Sculpted from a special clay only available in a certain region of China at the time of the Silk Road, porcelain stood out from other ceramics for both its durability and its translucent white color. The form that became best known in the West was developed during the Yuan dynasty, which spanned the 13th and 14th centuries CE. The classic blue-and-white wares became prized collector’s items, especially in the Islamic world, and inspired similarly styled ceramics in other regions.
Glassware, meanwhile, traveled in the other direction. Glassmaking techniques, particularly with soda-lime glass, developed in the Mediterranean and Middle East starting around 3500 BCE, and examples of that work dating back to the first millennium BCE have been found in East Asia. Roman glass, such as purple glass mosaic bowls, was especially prevalent — Romans loved silk, so they may have swapped their glass for Chinese silk. While Chinese craftspeople produced glass beads in the first few centuries BCE, their glass was chemically distinct from Western imports. Romans worked with soda-lime glass, the most commonly made type of glass today, which isn’t particularly durable. Imagine keeping it intact for 5,000 miles!
The global exchange of ideas was just as impactful as the exchange of physical goods along the Silk Road. Astronomy, used for navigation, spread from India and ancient Iran. The Islamic Golden Age from the eighth century through the 13th century CE marked innovations in mathematics that we take for granted today — including the base 10 number system and decimal fractions — and it drew heavily from Greek and Indian knowledge. Science scholarship in Baghdad and Cairo also led to major advancements in medicine, enabled by knowledge, materials, and traditions from other civilizations. Alchemy was a spiritual precursor to some very real modern science, and led to discoveries in chemistry that eventually spread westward to Europe from scholars in the Middle East and India.
As goods exchanged hands, so did the knowledge of how to use and create them. Some crops, such as grapes, traveled eastward, while others, such as rice, traveled westward, along with information on how to cultivate them. Different metalworking techniques, including types of armor, spread as craftspeople traveled to sell their wares. Bakers from Central Asia opened shops in China and became part of the evolution of Chinese cuisine. And religious traditions, including Judaism, Buddhism, Zoroastrianism, Christianity, Islam, and local folk traditions, spread and influenced one another as missionaries traveled the vast Silk Road.
Photo credit: Ted Streshinsky Photographic Archive/ Corbis Historical via Getty Images
Author Nicole Villeneuve
August 3, 2023
In 1967, San Francisco’s Haight-Ashbury district became the home base for a burgeoning counterculture. Known as the “Summer of Love,” the social movement was defined by a collective rejection of mainstream values and an embrace of ideals centered around peace, love, and personal freedom. An estimated 100,000 young people descended on the area; these artists, musicians, and drifters — collectively referred to as “hippies” — created an unforgettable cultural shift, touching everything from the way we view the self, to innovations in music, fashion, and art, and our approach to making an impact on society. More than 50 years later, the Summer of Love still dances freely in America’s memory.
Contrary to its name, the Summer of Love actually kicked off in the wintertime. In January 1967, in San Francisco’s Golden Gate Park, more than 20,000 people who shared a desire for peace, personal empowerment, and unity gathered for an event called the Human Be-In. It was a loud and proud harbinger of the blossoming counterculture movement set to congregate in Haight-Ashbury just a few months later.
The idea for the Human Be-In — also known as the “Gathering of the Tribes” — sprang from the similar, but much smaller, Love Pageant Rally that was held on October 6, 1966 — the day that California made LSD illegal. Organizers Allen Cohen and Michael Bowen, co-founders of the underground newspaper the San Francisco Oracle, wanted to re-create the peace and unity of that day, only on a larger scale. Their aim for the Human Be-In was to spread positivity and bridge the counterculture’s anti-war and hippie communities, while raising awareness around the pressing issues of the time: questioning authority, rethinking consumerism, and opposing the Vietnam War. On January 14, 1967, the idea came together. Counterculture icons such as Beat poet Allen Ginsberg and LSD advocate Timothy Leary spoke to the masses — the latter famously urged participants to “turn on, tune in, drop out” — and the Grateful Dead, Jefferson Airplane, and other legends performed at the event. The optimism that collective action could have a tangible impact on society felt stronger than ever. San Francisco Chronicle columnist Ralph Gleason said it was “truly something new,” calling it “an affirmation, not a protest… a promise of good, not evil.” The wheels for the Summer of Love were in motion.
The Summer of Love not only introduced a cultural revolution — it also marked a turning point in pop culture. It made stars of some of music’s most enduring names and introduced major music festivals as we know them today. After the inaugural Human Be-In, other similar events unfolded around the world, laying the blueprint for large outdoor live performances. The first event to specifically call itself a music festival took place on June 10 and 11, 1967, on Mount Tamalpais in Marin County, just north of San Francisco. The KFRC Fantasy Fair and Magic Mountain Music Festival featured performances by the Doors, Jefferson Airplane, the Byrds, Steve Miller Band, and many others, and is considered America’s first true rock festival. One week later, another pivotal event — the centerpiece of the Summer of Love — changed live music forever.
The Monterey Pop Festival took place across three days, June 16, 17, and 18. Organized by influential figures in the music scene, including John Phillips of the Mamas and the Papas, former Beatles publicist Derek Taylor, and record producer Lou Adler, the event attracted upwards of 200,000 attendees over the weekend. Prior to the festival, the release of “San Francisco (Be Sure to Wear Flowers in Your Hair)” by Scott McKenzie, a song penned by Phillips to promote the event, garnered significant global attention, becoming not only a chart-topping hit, but a driving force in enticing young people to join the hippies in Haight-Ashbury that summer. Press coverage turned Monterey Pop into a worldwide media spectacle. Iconic images from the event captured in a 1968 documentary by D.A. Pennebaker became lasting symbols of the hippie movement. The festival also catapulted artists such as Jimi Hendrix, Janis Joplin, Otis Redding, and The Who to fame, thanks to their legendary performances during that weekend. Monterey became the template for the modern festival industry, showcasing emerging artists alongside blockbuster bands in a massive outdoor setting.
Photo credit: Graphic House/ Archive Photos via Getty Images
American Counterculture Was Catapulted Into the Mainstream
Although little attention had previously been given to the burgeoning free-love community, national media flocked to the Human Be-In and the events that followed. During the Summer of Love, scenes from Haight-Ashbury were reported on by major print and broadcast outlets across the world, instilling fear of strange new unknowns in some, inspiring others, and nonetheless firmly planting counterculture ideals and visuals front and center for America to see.
The cultural revolution was further bolstered by the music of the era. Psychedelic rock, folk, and protest songs became anthems of the movement, resonating with both the youth and older generations. Eventually, the anti-establishment sentiments and activism of the counterculture began to influence mainstream politics and social movements. Issues such as civil rights, environmentalism, gender equality, and opposition to the Vietnam War gained broader support and attention as these ideas permeated mainstream discourse. Though the Summer of Love was itself short-lived, its legacy continued to shape popular culture, fashion, music, and social norms for decades to come.
5 Facts About the Infamous Crime Duo Bonnie and Clyde
Photo credit: American Stock Archive/ Archive Photos via Getty Images
Author Kristina Wright
August 2, 2023
In January 1930, Clyde Barrow and Bonnie Parker met at a friend’s house in Dallas, Texas, and, as the legend goes, it was love at first sight. Their budding courtship was disrupted when Clyde was jailed a month later in Waco, but at Clyde’s request, Bonnie smuggled a gun into the jail, allowing Clyde and two other convicts to escape. It was a temporary freedom, however, as Clyde was soon captured in Ohio and extradited to Texas, serving almost two years in prison before being paroled in February 1932. Bonnie and Clyde were reunited soon after, and Bonnie became part of the Barrow Gang, which included several of Clyde’s friends, his brother Buck, and Buck’s wife, Blanche.
The news stories of Bonnie and Clyde’s criminal adventures captivated a downtrodden nation at the height of the Great Depression. Their outlaw antics and unlikely love story helped turn the gangster and his moll into folk heroes akin to Robin Hood and Maid Marian or Romeo and Juliet. But it wasn’t meant to last. After an increasingly violent crime spree that stretched almost two years, the pair was ambushed and killed by law enforcement in Louisiana in 1934. Their deaths made headlines across the nation, and thousands of people attended their funerals.
Over the years, the exploits of Bonnie and Clyde became synonymous with a kind of romantic lawlessness usually reserved for tales of the Wild West. Here are five surprising facts about one of the most infamous crime duos in American history.
Bonnie and Clyde were partners in crime who became immortalized in myth and legend — but they were never married, because Bonnie already had a husband. In 1926, just a few days before she turned 16, Bonnie Parker married her high school sweetheart, Roy Thornton. Their marriage was tumultuous and Thornton was often absent or in trouble with the law. The couple separated numerous times, and Bonnie’s mother Emma recommended divorce, but Bonnie refused. Though she was identified as “Mrs. Roy Thornton” in wanted posters and was still wearing her wedding ring when she was killed, Bonnie personally reverted to her maiden name, and her tombstone reads “Bonnie Parker.” Thornton, who was in prison for robbery when he learned of Bonnie’s death, said, “I’m glad they went out like they did. It’s much better than being caught.” Thornton was shot and killed three years later during an attempted prison break.
Their Own Photos Contributed to Their Notoriety as Outlaws
When law enforcement raided a Barrow Gang hideout in Joplin, Missouri, officers recovered a camera and undeveloped film. The prints were developed and a few of the shots of Bonnie and Clyde ran in newspapers and tabloids. In one, Bonnie pointed a rifle at Clyde; in another, she had a cigar in her mouth and was holding a revolver. The images contributed to the couple’s notoriety, leading newspapers to describe Bonnie as a “cigar-smoking gun-moll.” But Bonnie’s cigar was just a prop borrowed from another member of the gang. “Tell them I don’t smoke cigars,” she later told a police officer they’d taken hostage and released, when he asked what she wanted the press to know. As for the guns she posed with, there’s no evidence that Bonnie ever killed, or even fired at, anyone. The FBI describes Bonnie’s criminal association with Clyde this way: “Though she probably never fired a shot, she was his willing accomplice.”
Photo credit: Hulton Archive/ Archive Photos via Getty Images
They Both Wrote Poetry About Their Life of Crime
In 2019, a collection of written material connected to Bonnie and Clyde was auctioned, including several poems attributed to the couple. Before dropping out of school to marry Thornton, Bonnie showed academic promise and enjoyed writing poetry. Even when she joined the Barrow Gang, she continued to pen poems. In a final visit with her family less than two weeks before she and Clyde were killed in Louisiana, Bonnie gave a copy of her poem “The Story of Bonnie and Clyde” to her mother. In the final lines of her poem, Bonnie acknowledged their inevitable end: “Some day they’ll go down together / And they’ll bury them side by side / To few it’ll be grief, to the law a relief / But it’s death for Bonnie and Clyde.” In a poem attributed to Clyde, he wrote, “Bonnie’s just written a poem / The Story of Bonnie and Clyde. / So I will try my hand at poetry / with her riding by my side.” He ended the poem acknowledging that Bonnie was the better writer but, like her, he seemed resigned to their deadly fate.
One of the reasons Bonnie and Clyde were able to elude police for so long was that they were constantly on the run. With law enforcement pursuing them across the Midwest and Southwest — in Arkansas, Illinois, Iowa, Louisiana, Missouri, Oklahoma, Kansas, and Texas — the couple resorted to setting up camp in rural areas and sleeping in cars they had stolen. Once there was enough evidence to pursue federal interstate auto theft charges, the FBI became involved in the chase, and it was only a matter of time before the infamous duo was caught. Today, the stolen car in which Bonnie and Clyde died is on display outside Las Vegas. Visitors to Primm Valley Casino Resorts can see the bullet-riddled 1934 V8 Ford sedan as well as the shirt Clyde was wearing when he was killed. The casino is believed to have paid $250,000 for the car in 1988.
The 1967 Film Was a Hit, But It’s More Myth Than Truth
Following Bonnie and Clyde’s highly publicized deaths on May 23, 1934, their funerals were held on different days and they were buried in different cemeteries, despite Bonnie’s wishes. Their story was told, and retold, by family members and strangers, but for the most part it was just another tragic tale relegated to the annals of true crime history. The 1967 film Bonnie and Clyde, starring Faye Dunaway and Warren Beatty, changed that forever, resurrecting Bonnie and Clyde’s legend for the counterculture generation of the 1960s. The film portrayed the couple as glamorous, reckless gangsters who would rather die than surrender. However, Bonnie and Clyde biographer Jeff Guinn said the silver screen portrayal was “less than 5% historically accurate,” glossing over the grim reality of two young lovers from impoverished backgrounds who felt they had nothing left to lose. Despite the inaccuracies, Bonnie and Clyde became a seminal film of the New Hollywood era, hailed by film critic Roger Ebert as “a milestone in the history of American movies.”
During the 19th century, the discovery of gold in various parts of the United States sparked a phenomenon known as “gold fever.” An obsession with getting rich off the precious metal caused a stampede of hopefuls to rush to California and other areas where gold had been found. The 1848 discovery of gold in California wasn’t the first, but it led to the largest gold rush of the era, the California Gold Rush, which was later followed by the northern Klondike Gold Rush in Canada in the late 1890s. Hundreds of thousands of prospectors flooded to these regions, sometimes enduring treacherous journeys in search of fortune. The gold rushes led to the rapid development of towns and cities and irreversibly changed landscapes; they also left behind several fascinating, if sometimes overlooked, moments in history — moments that can only be born from the frenzied allure of striking it rich.
Photo credit: Hulton Archive/ Archive Photos via Getty Images
The California Gold Rush Bankrupted the Man Who Helped Start It
John Sutter was a Swiss-born businessman who played a crucial role in the California Gold Rush, but unlike some lucky business owners who profited from its riches, he ended up broke as a result. In 1848, Sutter was having a sawmill built along the American River in Coloma, California (near present-day Sacramento), when his carpenter discovered gold on the property. Though they tried to keep it a secret, news spread, and thousands of prospectors flocked to the area, trespassing on Sutter’s land, stealing his livestock, and causing extensive damage. Sutter’s attempts to profit from the gold rush were undermined not only by this recklessness, but also by the failure of his sawmill, due to every able-bodied worker’s preoccupation with finding gold. By 1852, he was bankrupt.
It Was the Largest Mass Migration in U.S. History at the Time
The California Gold Rush is one of the largest mass migrations in United States history. In 1848, when gold was discovered at Sutter’s Mill, it triggered a hopeful frenzy and attracted an unprecedented number of people to the region from all corners of the globe. A rush of prospectors, commonly referred to as "forty-niners," flooded California; by the mid-1850s, an estimated 300,000 people had arrived, meaning one in every 90 people in the United States was living in California at the time. This massive migration reshaped the social, economic, and cultural landscape of the region, leaving a lasting impact on California’s history.
Photo credit: Heritage Images/ Hulton Archive via Getty Images
Levi’s Jeans Have Roots in the Gold Rush
In 1853, Levi Strauss, a German immigrant, saw an opportunity for business in booming California, so he packed up and moved west to expand his family’s New York City-based wholesale dry goods store. Over the next 20 years, he built a successful business supplying the stores that had popped up during the gold rush with clothing, textiles, and other goods. Then, in 1873, Strauss, along with a customer of his who happened to be a tailor, began producing sturdy denim pants reinforced with copper rivets — the first blue jeans. Although the gold rush had passed, mining, logging, and other industries still needed durable workwear; by the end of the year, the pants had exploded in popularity, and by the 1920s, Levi’s “denim waist overalls” were the top-selling men’s work pants in the country. In 2016, in honor of Levi’s success, denim was designated as one of California’s official state symbols.
Photo credit: Rischgitz/ Hulton Archive via Getty Images
Ships That Brought Prospectors to San Francisco Are Still Buried Under the City
During the California Gold Rush, ships arrived on San Francisco’s waterfront one after the other, carrying eager prospectors seeking fortune. These vessels, often abandoned as their crews fled in search of gold, were eventually used as storage or were demolished and their materials repurposed to build stores, hotels, homes, and more as the city grew. Still others were sunk and buried as the city’s shoreline was expanded outward with landfill. To this day, beneath the bustling streets of San Francisco, the buried remnants of these gold rush-era ships remain, serving as a reminder of the city’s rich history and the transformative impact of the gold rush.
The Mayor of Seattle Quit to Chase Klondike Riches
The temptation of gold rush riches was irresistible to many people — and the mayor of Seattle was no exception. In 1897, Mayor William D. Wood made the bold decision to step down from politics and capitalize on the “stampede” of prospectors heading for Canada’s Yukon Territory by founding a successful company that transported miners to the region. Some 100,000 hopefuls attempted the arduous journey through northern Canadian and Alaskan terrain as part of the Klondike Gold Rush, but only around 30,000 made it.
Canada Required That Each Prospector Bring a Literal Ton of Goods to the Yukon
As thousands of hopefuls headed north during the Klondike Gold Rush, the Canadian government required each of them to bring a year’s worth of supplies. This included clothing, tools, camping equipment, and approximately 1,000 pounds of food. Altogether, it amounted to roughly a “ton of goods,” as the haul became known, which had to be carried in smaller loads across many trips, causing more than a few stampeders to turn back. The rule aimed to prevent shortages, protect prospectors from the harsh realities of the Yukon’s unforgiving environment, and ensure that those venturing into the wilderness were regulated and prepared. The Klondike Gold Rush ended almost as soon as it began, and many prospectors moved on to Alaska in 1899 after only two years in the Yukon.