Based on the way trends shift over time, you can often guess how old someone is by their name. Elmer, Willard, Fred, and Harold are currently the male names with the oldest median ages, so men with those names today will generally be older; for women, the names with the oldest median ages are Gertrude, Mildred, and Opal. These, of course, are all still in use today. But what is the oldest known name of all time, period?
The answer to that question can be found on a collection of tablets from the ancient Mesopotamian city of Uruk (in modern-day Iraq), which date back to approximately 3100 to 3000 BCE, more than 5,000 years ago. The text inscribed on the tablets describes transactions from the ancient Sumerian Temple of Inanna, as well as the name of the person who recorded the transactions: Kushim.
The tablets were written in cuneiform, a Sumerian script that evolved from pictographs. While pictographs were rudimentary line drawings consisting of shapes imprinted into clay using a fine-pointed tool, cuneiform was a more complex script that emerged around 3200 BCE when a wedge-shaped reed stylus replaced the pointed tool. Instead of pictures, a variety of angular shapes could be made with the stylus, representing words, sounds, and syllables.
Cuneiform tablets were often used for bureaucratic records, and historians believe that Kushim was likely an accountant or government official. On the Uruk tablet now known as MS 1717, an unusual combination of two symbols appears: the cuneiform symbols for the sounds “Ku” and “Sim.” Both sounds are known to scholars, but there is no known cuneiform word that corresponds to their combination. That fact, combined with the positioning of the symbols at the end of a sentence, suggests that the cuneiform “Ku-Sim” is the name of the individual responsible for the recorded transaction.
The Schøyen Collection in Oslo, where some of the Uruk tablets are housed, displays a translation of one of the transactions: “Beer production: 134,813 litres of barley to be delivered over 3 years (37 months) to the government official Kushim responsible for the brewery at the Inanna Temple in Uruk.” The MS 1717 tablet is also notable for another reason: Its inscriptions are the first known depiction of an industrial process.
Can we be certain that Kushim was truly someone’s name, though? While there is room for argument that Kushim may simply be a job title, there is additional evidence on the tablet in support of it being a name. The word “Sanga” appears in the text immediately preceding Kushim, which is a word known to historians as a Mesopotamian job title. This makes it unlikely that Kushim is a title, as it would be redundant in context.
It’s worth noting that historians do not consider Kushim to be the first-ever given name, but rather a contender for the oldest given name that exists on record — tracking the earliest human history doesn’t tend to yield simple and definitive answers. To that end, there are a few other possible oldest names, some of which appear on a tablet discovered in the ancient Sumerian city of Shuruppak (in modern-day Iraq). The Shuruppak tablet reads, “Two slaves held by Gal-sal” and then goes on to name them: “En-pap X and Sukkalgir.” Historians believe this artifact is likely one or two generations more recent than MS 1717, but in the context of thousands of years of antiquity, a timeframe of a few decades is difficult to prove with certainty. Another possible oldest name is that of the ancient Egyptian ruler Iry-Hor, who is thought to have lived contemporaneously with Kushim, Gal-sal, En-pap X, and Sukkalgir.
As for today, is there anywhere in the world where the name Kushim is still used? According to the demographics site Forebears, approximately 76 people currently have that name. It’s most common in Russia, but also appears in Uzbekistan. It’s the 2,168,711th most popular name in the world.
Long before the advent of mechanical showers, nail clippers, and electric razors, people’s personal grooming habits were far different than they are today. Many modern grooming tools weren’t invented until the late 19th and early 20th centuries, which left anyone who lived before that with limited options, creating some hygiene habits that seem quite unusual today. Here’s a look at some of the strangest personal grooming etiquette in the history of the Western world.
Credit: Rogers Fund, 1917/ The Metropolitan Museum of Art
Romans Tweezed Their Bodies
Hair removal has been practiced for ages, but some ancient Romans took the practice to a whole new level. In Roman society, body hair was viewed as an unfavorable trait for both genders. Romans believed maintaining a clean-shaven look helped differentiate them from the uncouth “barbarians” elsewhere in the region. Men also removed their body hair for athletic purposes; athletes were admired for their hairless aesthetic at the time, as smooth skin meant less for an opponent to potentially grab onto. Romans used pumice stone to remove stubble and an early razor called a novacula to achieve a closer shave. But tweezers were among the preferred methods when it came to getting rid of body hair — from head to toe.
Many people opted to painstakingly remove each strand of hair using a pair of tweezers, which produced a smoother effect than the rudimentary razors and scrubs that were available at the time. In 2023, archaeologists uncovered more than 50 sets of these grooming devices from the ancient Roman city of Wroxeter in modern-day England. Tweezers were a popular tool of choice because they were cheap to make and didn’t pose any risk of serious harm. That said, plucking out body hair with tweezers was a painful process: The Roman philosopher Seneca once wrote a letter complaining about the loud shrieks coming from the Roman baths where people had their hair pulled out. Rather than tweeze the hair themselves, many Romans relied on enslaved people to remove their hair for them, especially around the armpit area, where hair was considered particularly undesirable.
One Etiquette Book Suggested Using Bird Dung To Cure Baldness
In a passage in his 17th-century book The Path-Way to Health, British author Peter Levens suggested that bird droppings could be used to cure baldness. He wrote, “Take the ashes of Culver-dung in Lye, and wash the head therewith” — “Culver-dung” referring in this case to the droppings of a pigeon or dove. This was one of several questionable theories to promote hair growth that Levens proposed in his book, and by far the most unpleasant.
Levens also suggested several solutions for removing hair, and yes, they also required animal droppings. He wrote, “Take hard Cats dung, dry it and beat it to powder, and temper it with strong Vinegar, then wash the place with the same, where you would have no haire to grow.” Today, the idea of mixing cat poop and vinegar together sounds like one of the most unpleasant odors imaginable, and indeed it’s unclear how many contemporary readers, if any, actually tested Levens’ solution.
People Used “Ear Pickers” To Clean Ears, Fingers, and Teeth
Q-tips weren’t invented until 1923, but earwax has been a grooming concern for far longer. One of the best solutions for tackling clogged ears was the “ear picker” — an ornate, handheld tool that was common throughout Europe and the American colonies during the 16th and 17th centuries. The device was often made of a precious material such as gold, silver, or ivory, and was worn around the neck on a chain as a status symbol.
One end of a traditional ear picker was formed into a scoop shape that could be used to pry wax out of the ear canal. Doctors often advised people to remove earwax, which at the time was thought to cause deafness. The other end of a standard ear picker was pointed, and often used to dislodge food from teeth or clean dirt out from under the fingernails.
Wigs Were Greasy, Smelly, and Flammable
Throughout the 17th and 18th centuries, wigs were considered fashionable for both men and women. Many men used the hairpieces to hide premature balding, which was increasingly common in Western society at the time due to untreated cases of syphilis. These wigs were usually made from real human hair and then slathered with a pomade made of animal fat to style them and maintain their shape, and they were worn so often that they quickly became pretty disgusting.
Not only did the rancid animal fat cause wigs to stink to high heaven, but they also became breeding grounds for fleas and lice. The wigs were delicate and quite difficult to wash, so many wig-wearers simply persevered through the unpleasantness caused by odors or creatures. But the hazards didn’t end there; the animal fat also made wigs highly combustible, putting the wearer at serious risk if they stood near a candle or other open flame.
Royals Bathed in Milk
Milk baths were once a popular practice among elite members of society, to keep their skin looking soft and wrinkle-free. One popular legend claims that Cleopatra frequently bathed in sour donkey milk as part of her daily beauty ritual, though the veracity of this claim has long been debated. According to historian Adrian Goldsworthy, that story is likely a myth, and it was actually Poppaea, the wife of Roman Emperor Nero, who helped popularize the practice of bathing in milk. Centuries later, Queen Catherine Parr of England (the sixth and last wife of Henry VIII) was also known to take milk baths, which she enjoyed with “perfumes and scented oils,” according to historian Susan E. James. It’s also said that Queen Elizabeth I bathed in milk to maintain a pale complexion.
Indeed, paleness was such a popular beauty standard in the Western world that women even applied arsenic powder to whiten their faces and hair. This trend was made common in Europe during the 19th century, as doctors promoted “complexion wafers” made of arsenic — a toxic element believed to be harmless at the time — to achieve a fair skin tone, which was associated with the upper-class privilege of not working outdoors.
Credit: Three Lions/ Hulton Archive via Getty Images
Author Timothy Ott
September 19, 2024
These days, we take for granted the organized chaos that accompanies the U.S. presidential election every four years, from the lengthy nominating cycles and raucous party conventions to the relentless media coverage that analyzes the candidates’ every word and gesture. By that standard, the 1789 election that made George Washington the first American president was far quieter, but in some ways it was even stranger.
America’s first election looked very different from today’s presidential races: There were no official political parties, there was no campaigning, and nearly everyone wanted the same candidate to win. The election of 1789 served as a blueprint for how presidents would be picked in the United States — though many rules have changed since then. Here’s a look at this bizarre and historic experiment in democracy.
Credit: Fotosearch/ Archive Photos via Getty Images
We Can Thank This Election for the Electoral College
The origins of America’s first presidential election lay in the passionate discussions held by the delegates at the 1787 Constitutional Convention. Until that point in the nation’s brief history, the Articles of Confederation had proven inadequate as the basis for a unified central government, which lacked the power to levy taxes, regulate commerce, or enact foreign policies. As such, the convention delegates haggled over the details of a new system. “Federalists,” including James Madison and Alexander Hamilton, were eager to imbue the central government with a raft of powers, while “anti-Federalists,” such as George Mason, were leery of diminishing the rights of the individual states.
Although the participants came to agree on the creation of an office for the head of government, one major point of contention was just how this chief executive would be selected. Given the prevailing belief in the separation of powers, it was determined that a parliamentary system in which the legislature voted for an executive leader (like in Great Britain) was a bad idea. The convention’s delegates also reasoned that it wasn’t feasible to leave the vote directly up to the people, who harbored diverse interests and were likely to put forth an array of provincial candidates in lieu of a unifying national figure.
The delegates ultimately settled on a system of “electors,” now known as the Electoral College, to be appointed by each state according to a process of its choosing. The number of electors was equal to the state’s number of congressional representatives (ranging from three in Delaware to 12 in Virginia), for a total of 69 electors in all. As eventually stated in Article II, Section 1 of the Constitution, each of these electors was to vote for two people — at least one of them from another state — after which the leading vote-getter would become president and the runner-up would be vice president. If two candidates tied, or if anyone failed to accumulate a majority, the winner would be determined by the House of Representatives.
Adhering to the language of the Constitution, each state determined its own method for naming electors, which had to be chosen by January 7, 1789. In Connecticut, Georgia, and South Carolina, electors were appointed by the state legislatures, while in New Jersey, the governor and a privy council did the deed. In New Hampshire, residents voted on a list of candidates, and the legislature selected five of the top 10 finalists; in Massachusetts, the legislature chose one of the two candidates who received the most votes by residents in each of eight districts and appointed two additional electors at large.
Elsewhere, the populace largely determined the winning electors, with various regional wrinkles. Pennsylvania and Maryland both had statewide ballots, although the latter instituted a rule in which the majority of electors had to come from the western shore and the rest from the eastern shore. Delaware’s electors were determined by the winners of three districts, while Virginia split the votes among 12 electoral districts that were distinct from its 10 congressional districts.
Noticeably absent from this list are North Carolina and Rhode Island, the two states that had not yet ratified the Constitution and as such were not formally part of the United States. New York was a member of the union by then, but its legislature failed to agree on a process for determining electors by the January 7 deadline, rendering the state ineligible to participate in the election.
Credit: Ed Vebell/ Archive Photos via Getty Images
George Washington Was Basically the Only Serious Candidate
So who were the candidates to lead the nation from this brand-new office? The overwhelming favorite was George Washington, the Revolutionary War hero who had also presided over the 1787 Constitutional Convention. The only question was whether he would accept the job. The former general, who was in his mid-50s at the time, had expressed a preference for “living and dying a private citizen on my own farm,” although he seemed resigned to the likelihood of a return to public service.
John Adams, the former Massachusetts delegate and U.S. minister to Great Britain, emerged as a likely choice for vice president, in part because his Northern roots would provide a balance to the Southern sensibilities of the Virginia-born Washington. Meanwhile, anti-Federalists who had lingering misgivings about the Constitution coalesced around New York Governor George Clinton, who was a prominent advocate for states’ rights and a limited central government.
While support for Washington was strong, some of the Founding Fathers were concerned about a process that failed to distinguish between votes cast for president or vice president. Worried that anti-Federalist electors would siphon votes away from Washington and inadvertently tip the election to Adams, a Federalist faction led by Alexander Hamilton privately pressured select electors to name someone besides Adams as the second name on their ballots.
Washington Swept Into Office With a Unanimous Vote
Ultimately, the concerns about an unfavorable outcome proved overblown. When the votes were counted on April 6, 1789, Washington had appeared on all 69 electoral ballots, making him the unanimous choice for first U.S. president. Adams collected 34 votes, finishing second and becoming the first vice president. However, he was upset by what he considered a relatively meager total, and reportedly held a grudge against Hamilton after learning of the back-channel plot to limit his support. The remaining votes were spread among 11 names, with John Jay (9), Robert H. Harrison (6), John Rutledge (6), and John Hancock (4) all surpassing the three votes cast for Clinton.
Shortly after receiving news that a life of leisure would have to wait, Washington undertook the weeklong journey from Virginia to New York to take the oath of office at Manhattan’s Federal Hall on April 30, 1789. The inauguration being held in New York City was just one of many elements of the U.S. presidential election that later changed, along with the formation of political parties, the 1804 ratification of the 12th Amendment that separated voting for the president and vice president, and the emergence of a national identity spawned by the success of this once-novel system of government.
From hearty steaks to sugary snacks, the culinary preferences of U.S. presidents have always fascinated the American public. That’s perhaps no surprise, as the quirks of presidential palates offer a unique glimpse into the personalities behind the Oval Office. And when it comes to comfort foods and guilty pleasures, be it FDR’s love of grilled cheese sandwiches or Ronald Reagan’s obsession with jelly beans, you might find that presidents are more like us than you’d think. Here are the favorite foods of 14 U.S. presidents.
Credit: Alex Wong/ Getty Images News via Getty Images
George Washington: Hoecakes With Honey
George and Martha Washington often hosted guests at their home at Mount Vernon, with large spreads laid out for hungry visitors. Washington’s favorite dish was surprisingly simple and reflected his farming roots: He loved hoecakes, a type of cornmeal pancake. According to Martha Washington’s granddaughter Nelly Custis, he preferred them “swimming in butter and honey,” and would regularly eat them for breakfast.
Thomas Jefferson: Macaroni and Cheese
Thomas Jefferson's fondness for macaroni began during his time in France, and upon his return to America he imported a pasta machine to make his own. A recipe for macaroni written in Jefferson’s own hand still exists, and his instructions for creating something similar to modern mac and cheese are credited with popularizing the dish in the United States.
Abraham Lincoln: Apples
Abe Lincoln was known for his frugal eating habits, often to the dismay and concern of his wife, Mary Todd Lincoln. Some sources note his fondness for two particular dishes: chicken fricassee with biscuits, and oyster stew. But Lincoln’s favorite food might well have been apples, which, according to his friends, he ate with gusto on a daily basis.
William Howard Taft: Steak Breakfasts
William Howard Taft, who weighed 354 pounds when he took his oath of office, remains the heaviest person ever to occupy the White House. His weight owed much to his breakfast habits. According to his head housekeeper Elizabeth Jaffray, he began each day by eating a “thick, juicy 12-ounce steak” served alongside two oranges, buttered toast, and a “vast quantity of coffee, with cream and sugar.”
Franklin D. Roosevelt: Grilled Cheese Sandwiches
FDR’s White House was known for serving terrible food, due in part to First Lady Eleanor Roosevelt’s well-intentioned desire to show solidarity with regular Americans during the Great Depression. It was slim pickings for FDR, but he did very much enjoy classic grilled cheese sandwiches. He also liked hot dogs, which were famously served to the visiting king and queen of England.
John F. Kennedy: New England Fish Chowder
JFK was particularly fond of soup, and New England fish chowder was a favorite. In 1961, a young girl named Lynn Jennings wrote to President Kennedy asking what he liked to eat. He promptly replied and passed along the recipe for his preferred fish chowder. Kennedy was also known to be a fan of waffles for breakfast; in fact, his family waffle recipe is preserved in the National Archives.
Lyndon B. Johnson: Texas Barbecue
LBJ’s love of barbecue was legendary. In the 1950s and ’60s, he and Lady Bird Johnson hosted large Texas-style barbecues at their ranch along the Pedernales River in the Texas Hill Country. The president served brisket, ribs, and his favorite Texas-style chili to hundreds of guests from around the world.
Richard Nixon: Cottage Cheese and Ketchup
Richard Nixon was a cottage cheese devotee, initially as part of his diet regimen. It appears he grew to love the stuff and ate it on a daily basis, often alongside fresh fruit, wheat germ, and coffee. In one of the stranger presidential food combos, he also enjoyed his cottage cheese topped with ketchup and black pepper.
Gerald Ford: Pot Roast
Gerald Ford preferred hearty, homestyle cooking. His favorite meal was a pot roast with red cabbage, followed by butter pecan ice cream for dessert.
Jimmy Carter: Cheese Grits
Jimmy Carter's Southern roots showed in his love of grits — especially cheese grits. In 1976, his daughter Amy told The New York Times, “Daddy makes grits for breakfast, then breaks a couple of eggs into it and adds some cheese, and it's yummy.” The family dog was even named Grits.
Ronald Reagan: Jelly Beans
Ronald Reagan began eating jelly beans as part of his successful attempt to give up pipe smoking, but then his candy consumption became something of an obsession. While he was president, Reagan placed a standing order of 720 bags of jelly beans per month — that’s 306,070 total jelly beans — to be distributed among the White House, Capitol Hill, and other federal buildings.
George H.W. Bush: Pork Rinds
During his 1988 presidential campaign, George H.W. Bush expressed his love for pork rinds, which he at times enjoyed with a splash of Tabasco sauce. Sales of the fried pig skins soared, despite some sections of the media claiming his comment was nothing more than Bush attempting to appear down-to-earth, and that he actually preferred popcorn and martinis.
Bill Clinton: Fast Food
Before his heart issues and subsequent switch to veganism, Bill Clinton was famous for his love of fast food; he even took reporters on jogs to McDonald’s. He particularly enjoyed jalapeño cheeseburgers with lettuce, tomato, mayonnaise, pickles, and onions.
George W. Bush: Cheeseburger Pizza
In 2007, White House chef Cristeta Comerford told reporters about George W. Bush’s fondness for a peculiar pizza topping. “For dinner,” she explained, “the president loves what we call homemade ‘cheeseburger pizzas’ because every ingredient of a cheeseburger is on top of a margherita pizza.” Soon after, the topping could be found in pizzerias across the nation.
The image of a pirate with a peg leg, an eye patch, and a parrot on the shoulder is deeply ingrained in popular culture, from classic literature to Hollywood movies. The peg-legged pirate, in particular, has become an enduring trope — it’s easy to picture such a character standing on the deck of a pirate ship, growling, “Arr, me hearties” and “shiver me timbers” while a motley crew runs up the Jolly Roger.
But how much of this image is based on historical reality? Did many pirates actually have peg legs, or is this merely a romanticized myth perpetuated by literature and film? Here, we delve into the history of piracy — and maritime medicine — to separate fact from fiction.
Between 1650 and 1720 — a period often considered the golden age of piracy — more than 5,000 pirates sailed the seas and caused all kinds of havoc. It was an incredibly dangerous profession, not just due to battles and skirmishes, but also because of accidents, diseases, and the primitive medical care available at the time.
The nature of sea engagements often involved firing cannon broadsides at medium or close range between vessels, while muskets, pistols, swords, and grenades were used in close combat. All of this made the loss of limbs an occupational hazard, and injuries that resulted in amputations were not uncommon among sailors and pirates.
Surviving these injuries, however, was a long shot at best. On a pirate ship, the job of surgeon often fell to the ship’s carpenter or even the cook — considered the most qualified candidates simply because they were accustomed to cutting things. Lacking adequate skills and equipment, and relying on rum as an anesthetic, these makeshift surgeons carried out amputations with a low chance of success. Even if the patient survived the procedure, they would often die from infection. For the rare amputee who pulled through, a prosthetic remained a distant prospect: Even those who could afford a peg leg generally made do with crutches. Either way, their days of pirating would normally be over — making actual peg-legged pirates very rare indeed.
Credit: Bequest of Grace M. Pugh, 1985/ The Metropolitan Museum of Art
History’s Real Peg-Legged Pirates
While peg legs weren’t nearly as ubiquitous as popular culture suggests, they weren't entirely fictional either. We do know of a couple of documented cases involving high-profile pirates with prosthetic limbs.
The most famous is arguably François Le Clerc, a 16th-century French privateer. When not being paid by various European rulers to act on their behalf, Le Clerc sailed independently as the leader of a pirate fleet. He was one of the first notable pirates of the Caribbean and was known for his peg leg. His nicknames included “Jambe de Bois” and “Pata de Palo” (in French and Spanish, respectively), both of which translate as “wooden leg” or “peg leg.”
Another example is Cornelis Jol (1597-1641), a Dutch corsair, admiral, and privateer — in other words, pirate-adjacent — who was nicknamed “Houtebeen” (Dutch for “peg leg”) due to his wooden leg. It’s worth noting, however, that both Le Clerc and Jol were sailing the high seas before the golden age of piracy, which is the era typically associated with peg legs, parrots, and eye patches.
The Rise of the Peg-Legged Pirate in Popular Culture
The image of the peg-legged pirate is so pervasive in large part due to literature, in particular Robert Louis Stevenson’s 1883 adventure novel Treasure Island. The character of Long John Silver, with his amputated leg and pet parrot, became — and remains — the archetypal image of the pirate in the public imagination.
Stevenson’s original Long John Silver, however, was missing his leg entirely and moved around on crutches. It was only in later adaptations, including on stage and screen, that Silver was often portrayed with a wooden leg. This image began to permeate the public consciousness — so much so that many people erroneously believe that the original Silver had a peg leg, too. The iconic pirate image was most recently reinforced by the Pirates of the Caribbean movies (the character Hector Barbossa has a prosthetic leg) and the popular television series Black Sails, which features the character of John Silver with a peg leg. It’s safe to say, then, me hearties, that the image of the peg-legged pirate is very much alive, despite actual historical examples being rare indeed.
Credit: Culture Club/ Hulton Archive via Getty Images
Author Nicole Villeneuve
September 11, 2024
For three decades between 1455 and 1487, the House of Lancaster and the House of York vied for control of the English throne. The conflict originated in a period of instability during the reign of King Henry VI, whose struggles with mental illness left the throne vulnerable. Though this series of civil wars spanned more than 30 years, there were only about 15 months of active battle; the country was nonetheless mired in civil strife throughout the Wars of the Roses. The two rival houses were actually branches of the same family, the Plantagenet dynasty, and power shifted back and forth over the years of fighting.
In 1485, at the Battle of Bosworth, the final battle of the conflict, Henry Tudor — a tenuous descendant of the Lancastrian house through his mother, Margaret Beaufort — defeated Richard III of the House of York. He was crowned King Henry VII, effectively ending the wars and establishing the transformative Tudor dynasty. Centuries later, a romanticized vision of the dynastic struggle continues to influence literature, film, and television, including George R.R. Martin’s A Song of Ice and Fire series and its wildly popular television adaptation, Game of Thrones, which follows the warring houses of Stark and Lannister (sound familiar?). The question is: How exactly did this tumultuous period in English history come to be named after a beautiful and sweet-smelling flower?
Like many of history’s wars, the Wars of the Roses weren’t named until many years after the conflict ended. The moniker refers to the symbols of the two houses: a white rose for the House of York and a red rose for the House of Lancaster, although roses weren’t the only heraldic imagery used at the time of the fighting. Yorkists, for instance, were also known to sport Richard III’s white boar badge before and during his brief reign as king (1483-1485). Lancastrian monarch Henry VI featured an antelope on his royal badge, and when Henry VII won the Battle of Bosworth, he carried a banner featuring the family’s red dragon crest on a white and green background.
While the white rose was just one of many badges used by the Yorks, the red rose may have been absent during the civil wars altogether. Before 1485, red roses appeared only rarely in Lancastrian heraldic imagery. Historians believe the image was only made a prominent Lancastrian symbol by Henry VII and his advisors after his victory. Seeking to unify the nation — and to strengthen his shaky familial claim to the throne — Henry married a York princess, Edward IV’s daughter Elizabeth, six months after the Wars of the Roses had ended. Manuscripts from this era show that he purposefully resurrected the seldom-used red rose and, in a strategic move, combined it with the Yorkists’ white rose.
Together, the two roses made what came to be known as the Tudor rose, a prominent symbol of the Tudor dynasty. It was not only a powerful piece of political imagery, but also a symbol of unity and reconciliation. The Tudor rose became an enduring emblem, often incorporated into architecture and decor throughout England. Today, red and white roses — as well as red dragons — appear in the gardens at Hampton Court Palace, a historic Tudor residence.
The Tudor rose was firmly entrenched as a symbol of the English civil wars by the late 1400s, but the conflict’s modern name didn’t become common for another 400 years. Much of its popularity is attributed to the 19th-century writer Walter Scott, who, in his 1829 novel Anne of Geierstein, referred to the civil wars as “the wars of the White and Red Roses.” Scott was a famous novelist at the time, and his work resonated with the wider public and bolstered the Wars of the Roses’ name through the 19th century and beyond. He wasn’t the first to make the literary link, however. In his late-1500s Henry VI trilogy, William Shakespeare famously dramatized a scene in which characters pluck red and white roses to signify their loyalties. Influential Scottish philosopher David Hume also helped cement the connection. In a 1775 version of his bestselling book The History of England, Hume referred to the period as “the quarrel between the two roses.”
Credit: Scott McPartland/ Archive Photos via Getty Images
Author Timothy Ott
September 11, 2024
Ready-made food has been around for centuries, from ancient Rome’s takeout restaurants, known as thermopolia, to the bread, soup, and meat vendors that have populated the streets of metropolitan centers around the world since antiquity. However, the burgers, fries, wings, and milkshakes that constitute the typical fast-food meal today are a more recent invention — and a distinctly American one. Here’s a brief taste of how a colossal global industry took flight.
A direct predecessor of modern fast-food service was the automat, which fed urbanites in the northeastern U.S. in the early 20th century. Essentially a self-service cafeteria, the automat featured rows of windowed compartments along its walls, from which hungry customers could retrieve an array of prepared dishes by depositing a coin. Introduced in Berlin, Germany, in 1895, this new form of casual dining made its way to Philadelphia in 1902 courtesy of restaurateurs Joe Horn and Frank Hardart.
The concept hit its stride after Horn and Hardart debuted their service in the busy New York City neighborhood of Times Square in 1912, and then expanded to more than 80 locations across the Big Apple and Philly. However, the popularity of the automat began to decline as city dwellers increasingly migrated to the suburbs after World War II, and the service slowly fizzled out over the following decades (though it saw a comeback amid the COVID-19 pandemic).
White Castle Introduces the Hamburger Chain
As described in Adam Chandler's book Drive-Thru Dreams, a cook named Walt Anderson began churning out batches of small, square hamburgers from his stand in Wichita, Kansas, in 1916. Although Americans were leery of ground beef following the exposé of the meatpacking industry in Upton Sinclair's 1906 novel The Jungle, Anderson eased those concerns by preparing his food in public view. Locals quickly took to the compact 5-cent burgers, later known as "sliders," and Anderson soon opened two more stands. In 1921, he teamed with real-estate broker Billy Ingram to open a fourth location in a building that resembled a small castle, a look that partly inspired the new eatery's moniker, "White Castle."
As burger stands began popping up with increasing frequency throughout the decade, Anderson and Ingram sought to distinguish their chain through the comfort of uniformity. Regardless of location, diners enjoyed burgers and coffee that were prepared in accordance with exact instructions, amid interior dining areas marked by white-painted walls, stainless steel counters, and identically dressed servers.
At the same time that Kansans were familiarizing themselves with the White Castle brand, Texans were being introduced to a new form of dining service. Visitors hankering for a pork sandwich or other barbecue fare from the Dallas-based Pig Stand didn't even have to leave their vehicles, as they were greeted by smiling carhops who were ready to take their orders and retrieve meals for them. As mentioned in Drive-Thru Dreams, considering the number of cars on American roads grew from 9 million in 1920 to 23 million in 1931, it was only natural that drive-in restaurants became a staple across the American landscape in the following decades.
The next phase in driver-oriented dining came with the emergence of the drive-thru restaurant in the late 1940s. Although it's unclear exactly where this service originated, credit is often given to Red's Giant Hamburg on Route 66 in Springfield, Missouri, in 1948. That year also brought the drive-thru to Southern California with the very first In-N-Out Burger, which began expanding to other regional locations in the 1950s.
Meanwhile, another change in the industry was underway. Richard and Maurice McDonald built a successful burger drive-in restaurant in San Bernardino, California, in 1940, but eventually sought enhanced efficiency for their business. By 1948, the brothers had gotten rid of carhops and short-order cooks, divided food preparation into individual stations, simplified their menu, and replaced dishes and glassware with disposable versions. This "Speedee Service System," as it was called, enabled the brothers to rake in even larger profits, and directly inspired competitors such as Keith G. Cramer, who opened the chain that became Burger King in 1953.
Around that time, the development of the Interstate Highway System and migration to the suburbs that hastened the demise of automats also ignited a fast-food boom that was no longer primarily centered on hamburgers. Colonel Harland Sanders, who began experimenting with a pressure cooker at his gas station eatery in the 1930s, started franchising his Kentucky Fried Chicken recipe to restaurants in 1952. Brothers Dan and Frank Carney introduced the first Pizza Hut in Wichita in 1958. Glen Bell switched from burgers to tacos in 1954, before launching Taco Bell in Downey, California, in 1962. And 17-year-old Fred DeLuca teamed up with nuclear physicist Peter Buck to open the sandwich chain that became Subway in Bridgeport, Connecticut, in 1965.
With so many options for consumers to choose from, fast-food entrepreneurs sought to expand their businesses both nationally and overseas. Early pioneers in the international space included KFC, which brought the colonel's secret recipe to Canada in 1955 and Mexico in 1963, and A&W, which cracked the Asian market with shops in Malaysia and Japan in 1963.
As the 20th century pressed on, the fast-food industry's major players pursued various marketing and promotional techniques to reel in more customers. The now-defunct Burger Chef was the first to offer a burger, fries, and drink combo meal after the franchise launched in the late 1950s. Burger Chef also proved ahead of the game with the debut of their food-and-toy "Fun Meal" for kids in 1973, although they were unsuccessful at blocking McDonald's from nationally rolling out the Happy Meal in 1979.
While the mid-Atlantic chain Rays Kingburger claims it became the first fast-food restaurant to offer breakfast in 1972, McDonald's had already been selling breakfast items at select locations. The fast-food giant then upped the ante by introducing the eggs Benedict-based Egg McMuffin sandwich in 1972, before unveiling its national breakfast menu in 1977.
Following the onslaught of advertising among McDonald's, Burger King, and upstart Wendy's that marked the "burger wars" of the 1980s, Wendy's created a model for smaller chains to follow by introducing a 99-cent value menu in 1989. McDonald's also established a menu trend in 1987 with the introduction of "supersized" meals that provided bigger portions, before discontinuing the option in 2004.
The death of the supersized meal was partly due to shifting public tastes that sought healthier choices for quick dining. While the innovators at Burger Chef offered salads as far back as 1974, and Wendy's debuted a salad bar in 1979, the major franchises largely delivered the same high-sodium, high-calorie fare for decades. That began to change after Eric Schlosser's 2001 book Fast Food Nation and Morgan Spurlock's 2004 documentary Super Size Me called attention to rising obesity rates and environmental problems caused by the production and consumption of this type of food. While most fast-food joints opted not to follow Burger King’s introduction of a vegetarian patty in 2002, an examination of 20 popular chains between 2004 and 2015 revealed industry-wide efforts to add healthier items to menus, particularly for children. And the rise of franchises such as CAVA, Veggie Grill, and Native Foods provided diners with an array of genuinely nutritious options for quick-service meals. Today, regardless of whether consumers seek a lower-calorie grilled chicken wrap or the old standby of burgers and fries, fast food remains as popular as ever, with this American contribution to the culinary industry accounting for $978 billion in global sales in 2023.
Why Did Doctors Wear Beak Masks During the Bubonic Plague?
Author Tony Dunnell
September 11, 2024
Few images in medical history are as striking (or as creepy) as those of plague doctors with their long, beaked masks. This peculiar costume, worn by physicians during outbreaks of bubonic plague in Europe, has become an enduring symbol of the disease. But why did doctors wear these strange masks, which surely must only have added to the fear felt by people in times of suffering? What purpose did the design serve? Here’s the reasoning behind the mask, which came about in an age when the true nature of disease transmission was still shrouded in mystery.
Contrary to common belief, the plague doctor costume was not a medieval-era invention. Despite its common association with the Black Death — the name given to the bubonic plague pandemic that devastated Europe in the mid-1300s — there is no evidence to suggest it was worn during the 14th-century epidemic or at any point in the Middle Ages. It emerged much later, in the 17th century, when plague outbreaks were still common in Europe.
We know that the striking attire was worn in 1619 by the French physician Charles Delorme during an eruption of the bubonic plague in Paris. Delorme, whom some historians credit with inventing the outfit, described the plague doctor costume in full in a mid-17th century text, complete with leather hat, gloves, a waxed linen robe, boots, and a mask with glass eyes and beak.
Plague doctors across Europe soon adopted the outfit; they also carried a stick with which to remove the clothes of the infected. The look was so widely recognized in Italy that it became commonplace in Italian commedia dell’arte — an early form of comedic theater — and carnival celebrations, and it remains a popular costume today.
By far the most distinctive, and some would say ominous, aspect of the plague doctor costume was the mask with its long, birdlike beak. This relates to a prevailing view in medical science during the Middle Ages and in the centuries that followed: that diseases were spread through “miasma,” or bad-smelling air, that caused an imbalance in a person’s “humors,” or bodily fluids. (The miasma theory was later discarded when the germ theory of disease was developed.)
The shape and function of the beak were directly tied to this theory of miasma. Plague doctors filled their long beaks with strong-smelling herbs and flowers, including lavender and mint, or sponges soaked with vinegar or camphor. Some also stuffed their beaks full of theriac, a compound of more than 55 herbs and other components including viper flesh powder, cinnamon, myrrh, and honey. These aromatic substances, it was believed, would absorb the foul-smelling miasma, purifying the air as it traveled along the beak and protecting the wearer from inhaling the harmful air.
The concept behind the plague doctor’s outfit wasn’t entirely misguided. Creating a barrier between the wearer and the patient — and potentially contaminated air — was logical. Modern personal protective equipment (PPE) and hazmat suits are based on the same idea. The plague doctor costume could have even offered some protection against droplets from coughing (in the case of pneumonic plague) or contamination through splattered blood (from bubonic plague).
But fundamental flaws existed in the design. Bubonic plague, caused by the bacterium Yersinia pestis, is transmitted through the bite of an infected flea. The plague doctors, believing miasma to be the cause, were not aware of this. Their outfits may have helped protect against flea bites to some extent, but they were not specifically designed for this task. As for the beak masks, they too would have offered some protection, despite the flawed understanding of how contagious diseases spread. The simple fact that plague doctors wore masks was a positive — but stuffing their beaks full of herbs and powdered viper flesh was of no great use, apart from making the air smell somewhat nicer while they treated their patients.
Nostalgia is a powerful feeling, and it’s easy to spend hours reminiscing about days gone by. It’s especially fun to look back at — and of course, listen to — the music that was popular during our childhood. Since 1940, Billboard magazine has been compiling the most widely purchased and played songs year after year. Originally, the Billboard charts ranked songs based on various categories, such as record sales and radio playtime. But in 1958, Billboard unveiled the Hot 100 chart, which compiled those metrics into a single definitive ranking of the top tunes each week. Let’s take a look back at the most popular songs of each year of the past century, based on these lists and other early data.
The 1920s were the decade in which pop music became a distinct genre of its own. In fact, the term “pop music” was actually coined in 1926 to refer to any widely “popular” songs. But given that it was still such a nascent concept, there were no existing methods for tracking a song’s popularity over time. In fact, it wasn’t until the 1930s that anyone compiled an official weekly music chart that took into account sales and airplay. Despite the lack of an authoritative industry list, it’s possible to identify the most popular songs based on contemporary records such as The Billboard theatrical digest and historical compilations that generally reference the same tunes as the biggest hits in a given year. Here’s a look at the top tunes from each year of this formative decade.
1920 — “Swanee” by Al Jolson
1921 — “I Ain’t Got Nobody” by Marion Harris
1922 — “My Man” by Fanny Brice
1923 — “Down Hearted Blues” by Bessie Smith
1924 — “Rhapsody in Blue” by George Gershwin
1925 — “Sweet Georgia Brown” by Ben Bernie
1926 — “Bye Bye, Blackbird” by Gene Austin
1927 — “Stardust” by Hoagy Carmichael
1928 — “Blue Yodel No. 1 (T for Texas)” by Jimmie Rodgers
1929 — “Makin’ Whoopee” by Eddie Cantor
The 1930s were dominated by legendary artists such as Cab Calloway and Fred Astaire. In 1935, a program called Your Hit Parade debuted on radio, and published the first weekly music chart in the U.S., preceding the Billboard charts by five years. The list took into account multiple factors such as record sales and total radio plays, and it debuted with “Soon” by Bing Crosby in the No. 1 spot for the inaugural week of April 20, 1935. The top song of the entire decade, meanwhile, was “My Reverie” by Larry Clinton, which stayed at No. 1 for eight weeks in 1938.
1930 — “Happy Days Are Here Again” by Ben Selvin
1931 — “Minnie the Moocher” by Cab Calloway & His Cotton Club Orchestra
1932 — “Night and Day” by Fred Astaire and Leo Reisman
1933 — “Stormy Weather (Keeps Rainin’ All the Time)” by Ethel Waters
1934 — “Moonglow” by Benny Goodman
1935 — “Cheek to Cheek” by Fred Astaire
1936 — “The Way You Look Tonight” by Fred Astaire
1937 — “Once in a While” by Tommy Dorsey
1938 — “My Reverie” by Larry Clinton
1939 — “Over the Rainbow” by Glenn Miller
Billboard unveiled its first music chart in July 1940, and the song “I’ll Never Smile Again” — which featured a young Frank Sinatra on vocals — topped the chart for the first 12 weeks. From 1940 through 1943, Billboard only took into account retail sales for determining the top song, but later years saw the introduction of charts that tracked other metrics, such as total jukebox plays and radio play. Throughout the 1940s, orchestral bandleaders such as Tommy Dorsey, Artie Shaw, and Glenn Miller dominated the music scene in terms of retail sales and on-air playtime. Here’s a look at the most popular Billboard songs from the 1940s based on total sales, the one metric that was used throughout the decade.
1940 — “I’ll Never Smile Again” by Tommy Dorsey and His Orchestra with Frank Sinatra and the Pied Pipers
1941 — “Frenesi” by Artie Shaw and His Orchestra
1942 — “Moonlight Cocktail” by Glenn Miller and His Orchestra
1943 — “I’ve Heard That Song Before” by Harry James and His Orchestra with Helen Forrest
1944 — “Swinging on a Star” by Bing Crosby with John Scott Trotter and His Orchestra and the Williams Brothers Quartet
1945 — “Till the End of Time” by Perry Como with Russ Case and His Orchestra
1946 — “The Gypsy” by the Ink Spots
1947 — “Heartaches” by Ted Weems and His Orchestra with Elmo Tanner
1948 — “Mañana (Is Soon Enough for Me)” by Peggy Lee with Dave Barbour and the Brazilians
1949 — “Riders in the Sky (A Cowboy Legend)” by Vaughn Monroe and His Orchestra
The early 1950s continued many of the musical trends from the 1940s, as bandleaders and musical standards continued to dominate the top of the charts — that is, until Elvis Presley showed up and revolutionized the music scene. Elvis topped the charts in both 1956 and 1957, the two years that preceded the first-ever Hot 100 ranking. In 1958, the weekly Hot 100 chart debuted with Ricky Nelson’s “Poor Little Fool” in the No. 1 spot, though it was “Volare” by Domenico Modugno that ended up being the best-performing song that year, based on the number of weeks it spent on the chart, the number of records sold, and how much airplay it received.
1950 — “Goodnight Irene” by Gordon Jenkins and His Orchestra and the Weavers
1951 — “How High the Moon” by Les Paul and Mary Ford
1952 — “Cry” by Johnnie Ray and the Four Lads
1953 — “The Song From Moulin Rouge (Where Is Your Heart)” by Percy Faith and His Orchestra featuring Felicia Sanders
1954 — “Little Things Mean a Lot” by Kitty Kallen with Jack Pleiss and His Orchestra
1955 — “Cherry Pink and Apple Blossom White” by Pérez Prado and His Orchestra
1956 — “Don’t Be Cruel” by Elvis Presley
1957 — “All Shook Up” by Elvis Presley
1958 — “Volare (Nel Blu Dipinto Di Blu)” by Domenico Modugno
1959 — “The Battle of New Orleans” by Johnny Horton
The 1960s saw a seismic shift in the types of songs people listened to. While the year 1960 saw the orchestral “Theme From a Summer Place” top the Hot 100, it wasn’t long until pop acts began dominating the charts: The Beatles reached No. 1 for the first time with “I Want to Hold Your Hand” in 1964, and also produced the most popular song of 1968, “Hey Jude.” Some songs, such as 1965’s “Wooly Bully,” never actually reached No. 1, but still performed better in terms of overall sales and total airplay than any other song that year. By the end of the 1960s, pop and rock music had completely displaced orchestral tunes as the most popular music genres in the country.
1960 — “Theme From a Summer Place” by Percy Faith
1961 — “Tossin’ and Turnin’” by Bobby Lewis
1962 — “Stranger on the Shore” by Acker Bilk
1963 — “Sugar Shack” by Jimmy Gilmer and the Fireballs
1964 — “I Want to Hold Your Hand” by the Beatles
1965 — “Wooly Bully” by Sam the Sham and the Pharaohs
1966 — “The Ballad of the Green Berets” by Staff Sergeant Barry Sadler
1967 — “To Sir With Love” by Lulu
1968 — “Hey Jude” by the Beatles
1969 — “Sugar, Sugar” by the Archies
The 1970s were a time of rich musical diversity. The decade opened with the folksy vocal stylings of Simon & Garfunkel atop the charts, but by 1979, the Hot 100 was all about powerful drums and raging guitar solos, and the Knack’s “My Sharona” was the most popular song in the decade’s final year. During the years in between, legendary performers such as Barbra Streisand, Rod Stewart, and Paul McCartney’s Wings topped the charts.
1970 — “Bridge Over Troubled Water” by Simon & Garfunkel
1971 — “Joy to the World” by Three Dog Night
1972 — “The First Time Ever I Saw Your Face” by Roberta Flack
1973 — “Tie a Yellow Ribbon Round the Ole Oak Tree” by Dawn featuring Tony Orlando
1974 — “The Way We Were” by Barbra Streisand
1975 — “Love Will Keep Us Together” by Captain & Tennille
1976 — “Silly Love Songs” by Wings
1977 — “Tonight’s the Night (Gonna Be Alright)” by Rod Stewart
1978 — “Shadow Dancing” by Andy Gibb
1979 — “My Sharona” by the Knack
Acts such as George Michael, Prince, and the Police dominated the 1980s with their smash hit songs, many of which began to incorporate melancholic themes and relied heavily on minor keys. What’s notable about this decade is that pretty much every one of the top-performing songs continues to get widespread airplay today, proving just how much of an impact the 1980s had on the musical world.
1980 — “Call Me” by Blondie
1981 — “Bette Davis Eyes” by Kim Carnes
1982 — “Physical” by Olivia Newton-John
1983 — “Every Breath You Take” by the Police
1984 — “When Doves Cry” by Prince
1985 — “Careless Whisper” by Wham! featuring George Michael
1986 — “That’s What Friends Are For” by Dionne and Friends
1987 — “Walk Like an Egyptian” by the Bangles
1988 — “Faith” by George Michael
1989 — “Look Away” by Chicago
1990s
The 1990s saw the rise of R&B and rap music. Boyz II Men topped the Hot 100 in 1992 with “End of the Road,” and 1995 saw Coolio’s “Gangsta’s Paradise” climb to No. 1. Meanwhile, Elton John’s “Candle in the Wind 1997” topped the Hot 100 chart as a tribute to the late Princess Diana. In fact, that song is the second-highest-selling physical single of all time, with 33 million copies sold, around 17 million fewer copies than Bing Crosby’s “White Christmas.”
1990 — “Hold On” by Wilson Phillips
1991 — “(Everything I Do) I Do It for You” by Bryan Adams
1992 — “End of the Road” by Boyz II Men
1993 — “I Will Always Love You” by Whitney Houston
1994 — “The Sign” by Ace of Base
1995 — “Gangsta’s Paradise” by Coolio featuring L.V.
1996 — “Macarena (Bayside Boys Mix)” by Los del Río
1997 — “Candle in the Wind 1997” by Elton John
1998 — “Too Close” by Next
1999 — “Believe” by Cher
21st Century
As we push further into the 21st century, we’ve come a long way since the songs of the 1940s. This century has seen an eclectic mix of music top the Hot 100 charts, including the harder rock stylings of Nickelback, hip-hop tunes from Usher, and songs produced by larger-than-life celebrities such as Beyoncé and Justin Bieber. Here’s a look at the most popular songs from each year of the 21st century thus far.
2000 — “Breathe” by Faith Hill
2001 — “Hanging by a Moment” by Lifehouse
2002 — “How You Remind Me” by Nickelback
2003 — “In da Club” by 50 Cent
2004 — “Yeah!” by Usher featuring Lil Jon and Ludacris
2005 — “We Belong Together” by Mariah Carey
2006 — “Bad Day” by Daniel Powter
2007 — “Irreplaceable” by Beyoncé
2008 — “Low” by Flo Rida featuring T-Pain
2009 — “Boom Boom Pow” by the Black Eyed Peas
2010 — “Tik Tok” by Kesha
2011 — “Rolling in the Deep” by Adele
2012 — “Somebody That I Used to Know” by Gotye featuring Kimbra
2013 — “Thrift Shop” by Macklemore & Ryan Lewis featuring Wanz
2014 — “Happy” by Pharrell Williams
2015 — “Uptown Funk” by Mark Ronson featuring Bruno Mars
2016 — “Love Yourself” by Justin Bieber
2017 — “Shape of You” by Ed Sheeran
2018 — “God’s Plan” by Drake
2019 — “Old Town Road” by Lil Nas X featuring Billy Ray Cyrus
2020 — “Blinding Lights” by the Weeknd
2021 — “Levitating” by Dua Lipa
2022 — “Heat Waves” by Glass Animals
2023 — “Last Night” by Morgan Wallen
Author Tony Dunnell
September 4, 2024
When we think of William Shakespeare today, we picture a literary colossus who is widely regarded as the greatest dramatist — and arguably the greatest writer — who ever lived. His works have shaped not only the literary world for centuries, but also the English language itself. But how famous was the “Bard of Avon” during his own lifetime, from 1564 to 1616? The answer is perhaps not as straightforward as one would expect, considering his truly monumental status today. Shakespeare’s rise to enduring renown was certainly not immediate, and reflects the nature of fame in Elizabethan and Jacobean England.
Shakespeare was born in 1564 in the English town of Stratford-upon-Avon. At the age of 18, he married Anne Hathaway, and the couple had three children. It’s hard to say, however, exactly when Shakespeare’s career began or when he emerged on the London theater scene. The Taming of the Shrew is considered to be one of his earliest works, generally believed to have been written before 1592. It was in the mid-1590s that the playwright’s name started to become known, at least in literary circles. In 1593, his narrative poem Venus and Adonis became an overnight sensation. The witty, at times erotic poem was such a success that it remained, during his lifetime, Shakespeare’s most popular published work, and was widely commented upon and quoted in many journals, letters, and plays of the period.
Despite the success of Venus and Adonis, a single popular work wasn’t enough to make Shakespeare a household name. In reality, he was likely known among London’s theater set, and not far beyond. His reputation within this circle would have been bolstered by his role in the Lord Chamberlain’s Men, later known as the King’s Men under James I. This acting company was the most important and successful troupe of players in England’s Elizabethan era (Queen Elizabeth I’s reign from 1558 to 1603) and Jacobean era (King James I’s reign from 1603 to 1625). The troupe played almost continuously in London from 1594 to 1603, including performances at the royal court. Shakespeare wrote many of his plays during this period, producing an average of two a year, and these works formed the bulk of the company’s repertory. He also acted with them.
The Limits of Fame in Shakespeare’s Time
Venues such as the Globe Theatre in London — built by the Lord Chamberlain’s Men — could hold up to 3,000 people. Shakespeare’s plays, therefore, certainly drew large crowds during his time, and were popular with audiences across social classes. The success of his plays, however, didn’t necessarily translate into fame. In Shakespeare’s time, there was no mass media, and the concept of celebrity was not the same as we know it today. What’s more, even if he was well-known in London, Shakespeare wouldn’t have been a national celebrity — and certainly not an international celebrity — because back then there wasn’t nearly as much communication between cities, or even a city and its surrounding region, as there is today.
Shakespeare also wasn’t the only popular dramatist writing in the late 16th century, and he had significant competition — and in some cases rivalries — with various writers. Critics and theatergoers didn’t necessarily consider him superior to his contemporaries. He was respected in literary circles, but no more so than other prominent playwrights of the time, such as Christopher Marlowe, Ben Jonson, Thomas Middleton, and Thomas Kyd. So, while Shakespeare was certainly well known and respected in the theatrical world of his time, he was far from the only act in town — and his contemporary fame was not comparable to his posthumous legacy.
Credit: Scott Barbour/ Getty Images News via Getty Images
A Legacy Like No Other
Shakespeare’s enduring reputation owes much to developments that took place after his death in 1616, perhaps most notably the publication of the First Folio in 1623. This lavish book was the first published collection of Shakespeare’s plays. At the time, about half of Shakespeare’s plays had never previously appeared in print, including As You Like It, Julius Caesar, Macbeth, and The Tempest. Were it not for the First Folio, 18 of his plays might well have been lost forever. Not only did the collection preserve much of Shakespeare’s work, but it also helped ensure that Shakespeare became the universally revered cultural and literary icon — and household name — that he is today. As Ben Jonson wrote in his introduction to the First Folio, “The applause! Delight! The wonder of our Stage... He was not of an age, but for all time!”