Author Nicole Villeneuve
June 18, 2025
Summer vacation has been an integral part of American family life for more than a hundred years. This season of leisure is primarily due to one thing: School’s out. But why is there no school in summer? You may assume it’s a holdover from the country’s agrarian days, when children were required to help out on the family farm. But the summer vacation we know today actually had more to do with urban health concerns and public policy than with hay bales or cornfields. Here’s a look at the origin of summer vacation.
In the early 1800s, there was nothing even close to a standard school calendar in the United States. Communities ran schools as best suited their needs or abilities, leading to a very loose patchwork of local schedules. Rural schools were typically open in winter and summer in order to accommodate busy farm seasons that required planting in the spring and harvesting in the fall. (That said, even in the summertime there was plenty of farm work to do, and school attendance was often low.) City schools worked differently. Many operated almost year-round, taking only a short break each quarter. And the inconsistencies didn’t end there: Even within a single county, schools were extremely localized and their operational calendars could be starkly different.
By the 1840s, education reformers such as Horace Mann were pushing for change. In line with the broader Common School Movement, Mann thought that free, tax-supported schools should be accessible to all children, and standardizing the school calendar was a big piece of the puzzle. Reformers believed rural schools, with their agrarian schedules, were too inconsistent and offered too few instructional days. Growing urban populations, meanwhile, gave rise to a new model of schooling that grouped students by age in separate classrooms. This “age-graded” approach, which Mann had admired during a visit to Prussia, required a more coordinated school calendar to work smoothly across different districts.
Why Summers Off Made Sense
In the 19th century, many city schools were in need of renovations and repairs, which were best done in the summer months. The buildings were also, of course, not yet air conditioned, and their overcrowded, poorly ventilated rooms often fueled outbreaks of disease in the summer’s stifling heat. Increasingly, the schools’ wealthiest families took to fleeing the city’s soaring summer temperatures for vacation escapes, further driving social fragmentation and inequalities within the schools.
At the same time, medical theories claimed that too much mental exertion could overstimulate young minds and lead to exhaustion — or worse. Reformers saw summer as the best time to rest the mind. Summer vacation also addressed urban schools’ public health concerns and gave teachers a much-needed break. The Common School Movement sought to further professionalize the teaching profession, and the extended break gave educators time to take advantage of summer conventions and training programs.
By the end of the 19th century, states had begun implementing school funding eligibility rules, including a minimum number of instructional days. A 180-day calendar took shape, spanning late summer to late spring, with summertime carved out as a clear and consistent break lasting from about the end of June to the end of August. Between 1880 and 1920, the United States’ school calendar fully fell into alignment, and a national rhythm of summer break has been the norm ever since.
Author Kristina Wright
June 12, 2025
Furniture isn’t just about form and function — it’s a reflection of how we live. As technology evolves and lifestyles shift, pieces that were once considered household essentials can quietly fade into obscurity. From furniture designed around now-outdated technology to those that catered to social customs of another era, many former decor staples have all but disappeared from modern homes.
If you’ve ever tried to fit a heavy television into a hulking TV cabinet or spent an hour at the “gossip bench” catching up with an old friend, you’re not alone. These once-popular furnishings tell the story of how we used to live. A few may linger in basements, guest rooms, or antique shops, quietly reminding us of how much things have changed. Here are seven kinds of furniture that were once common but are now rarely seen. How many have you owned?
Popular in the 1970s and ’80s, waterbeds promised a futuristic sleep experience with their wavelike motion and adjustable temperature. Though patented in California in the late 1960s, the concept had already rippled through science fiction — sci-fi author Robert Heinlein described similar beds in his novels years earlier, imagining them as ideal for both comfort and hygiene.
Once marketed as both cutting-edge and sexy, waterbeds quickly gained popularity, peaking in 1987 when they accounted for nearly 20% of all mattress sales in the U.S. One memorable slogan captured the era’s enthusiasm: “Two things are better on a waterbed. One of them is sleep.” But the charm faded as waterbeds’ drawbacks mounted: heavy frames, tricky maintenance, awkward moves, and a constant risk of leaks. Though rarely seen today, waterbeds remain a quirky relic of a bygone era — a ripple in sleep history that once made big waves.
Once a symbol of organized sophistication, the secretary desk combined elegance and style with utility. Featuring a hinged, fold-down writing surface and a nest of drawers, cubbies, and sometimes even hidden compartments, it served as the ideal command center for managing household correspondence, finances, and daily affairs.
The name is derived from the French secrétaire à abattant, a drop-front desk. This form rose to prominence in 18th-century France and later found popularity in U.S. homes in the 19th and early 20th centuries, often passed down as family heirlooms. But as handwritten letters and billing statements gave way to email and online banking, the need for a dedicated writing nook waned. Today, the secretary desk is more likely to be admired in antique shops or repurposed as an accent piece — a charming nod to a time when “inbox empty” involved an actual wooden drawer.
Telephone Tables
The telephone table, affectionately known as a “gossip bench,” was the most popular spot in the house in mid-20th-century America. This compact piece of furniture was designed for one very specific mission: managing the family’s single landline telephone. An evolution of the telephone stand of the 1890s, itself a product of the telephone’s invention in 1876, this nifty piece combined a small seat with a tabletop for notepads and pens, and often a built-in shelf or cubby for the bulky phone book. It was the designated spot where people went to share news, plan social events, or simply chat for hours.
As cordless phones freed us from the seat and mobile phones untethered calls from our homes entirely, the telephone table was rendered obsolete. Today, it stands as a charming reminder of a time when phone calls were events — and if you weren’t at home to get the call, you might miss all the good gossip.
For decades, the dressing table, also known as a vanity table or vanity, was more than just a piece of furniture — it was a dedicated beauty retreat. With its attached mirror, small drawers, and delicate design, the vanity offered women a personal space to style their hair, apply makeup, and prepare for the day or a glamorous night out. The concept dates back centuries, with origins in late-17th-century Europe, where aristocratic dressing tables were elaborate symbols of refinement.
During the art deco movement of the early 20th century, when beauty routines were marketed as both necessary and aspirational, vanity tables found a place in American homes. Hollywood helped cement the allure: Silver-screen stars were often photographed perched gracefully at their vanities, surrounded by perfume bottles and powder puffs.
By the late 20th century, however, the rise of modern bathrooms with bright lighting, expansive counters, and built-in storage gradually made the bedroom vanity redundant. As homes evolved for efficiency and space-saving, the once-essential vanity became more of a decorative throwback than a daily necessity. While vanity tables can still be found in vintage-inspired bedrooms, they’re mostly a charming reminder of the glamour of a bygone era.
By the 1960s, most American households had a television — but that didn’t mean anyone wanted to see it on display. Early sets were bulky, unattractive, and often viewed as disruptive to a well-appointed living room. So, manufacturers dressed them up in polished wood consoles to make the machines look like part of the living room decor.
As TVs got bigger and the gadgets piled on — VCRs, cable boxes, gaming consoles, stereo receivers, speaker systems — the simple disguise no longer cut it. Enter the TV cabinet and, later, the full-blown entertainment center: hulking structures designed not just to hold a TV, but to contain and conceal the entire home media system. These cabinets peaked in popularity in the 1980s and ’90s, often spanning entire walls and weighing more than the electronics they held. Not only were they practical, but they also soothed lingering anxieties that TV-watching was lazy, lowbrow, or somehow a design failure. Closing the cabinet doors felt like restoring order — and taste — to the room. But as televisions got sleeker and streaming services replaced stacks of movie boxes, the furniture built to hide it all quickly lost its relevance. Now TV cabinets are seen as bulky, old-fashioned relics — like building a closet just to store your iPad.
Chaise Longues
Once the epitome of leisure and elegance, the chaise longue, from the French for “long chair,” has a history stretching back to ancient Egypt, Greece, and Rome. The form evolved through centuries of European refinement, particularly in 16th-century France, where it became a fixture in aristocratic homes as a symbol of comfort and status. By the 19th century, it had taken on new life in Victorian parlors as the fainting couch — a semi-reclined piece thought to offer a graceful landing spot for corseted women who found themselves short of breath.
In the first half of the 20th century, the chaise had a solid run in U.S. homes, especially in bedrooms, sunrooms, and formal living spaces. It offered a spot for reading, relaxing, or striking a languid pose à la an Old Hollywood starlet. It paired comfort with glamour, and no therapist’s office or upscale photo shoot was complete without one.
But as rooms got smaller and furniture leaned more functional, the chaise longue became harder to justify. It takes up the footprint of a recliner or armchair — with less versatility. And unlike a sofa or sectional, it’s typically built for one, making it less practical for families or shared lounging. The chaise still has its fans, but these days it’s mostly reclining in retirement.
Once a fixture in midcentury homes, smoking stands were compact, single-purpose pieces designed to hold everything a well-prepared smoker might need: an ashtray, matches, lighters, and a stash of cigarettes or cigars. Often crafted from wood or metal with decorative flair, smoking stands stood proudly beside armchairs, inviting guests to light up without leaving the conversation. From the 1920s through the 1950s, they were as common as coffee tables — a mark of hospitality and, often, masculine style.
Over time, these more elaborate tables gave way to simpler pedestal ashtray stands — functional but stripped of the frills — as smoking became more routine and less ceremonial. But as the dangers of tobacco use became widely recognized and indoor smoking fell out of favor, even those disappeared. Public bans, health awareness, and minimalist living left little space — literally or socially — for dedicated smoking furniture. Now they’re rarely seen outside of antique shops and estate sales, a reminder of when lighting up was trendy.
For those who live in the Northern Hemisphere, there’s something about the culmination of ever-warming weather, the full ripening of foliage, and the seemingly endless daylight hours that makes June a magical time of year. And while many of us harbor fond recollections of beachfront jaunts or backyard games that took place as spring swept into summer, June also comes with a collective well of memories of events that transpired within its 30-day span.
Here are seven historical events from Junes of both recent vintage and centuries past. From crucial battles to transformative treaties and political scandals, each has stood the test of time as a major checkpoint of the shared human experience.
June 15, 1215: King John Affixes His Seal to the Magna Carta
King John’s ascent to the English throne in 1199 seemingly brought nothing but trouble for the monarch, who subsequently lost control of several French territories, was excommunicated by Pope Innocent III, and faced an uprising from powerful barons who chafed at high taxes. Seeking to at least pacify the barons, John agreed to a series of terms that provided specific limits on the king’s power over matters of land ownership, debts, and the election of church officials, affixing his seal to the agreement on June 15, 1215.
Although peace between the two sides was short-lived, the charter was revised and reissued multiple times over the years, and the 1225 version of what is now known as the Magna Carta is recognized as the definitive issue. And while it was never intended to define the course of human rights beyond the immediate feudal concerns, the document did exactly that with its clauses that guaranteed basic liberties and justice for citizens, words that later influenced the United States Declaration of Independence and Constitution.
June 18, 1815: Napoleon Loses the Battle of Waterloo
Making a triumphant return from exile in March 1815, Napoleon Bonaparte embarked on what became his Hundred Days campaign to reestablish dominance over mainland Europe. His renewed military ambitions commenced in Belgium, where a split French army separately engaged forces under Prussian Field Marshal Gebhard von Blücher and Britain’s Duke of Wellington on June 16.
Two days later, Napoleon advanced on the duke’s men stationed outside the village of Waterloo, Belgium. However, the combination of superior defensive positioning and questionable tactical decisions by the attackers enabled the opposing force to hold steady until Prussian reinforcements arrived. By nightfall, not even Napoleon’s feared French Imperial Guard could stem the losing tide. Napoleon is said to have fled from the battlefield in tears, and he soon followed with his abdication and surrender to the British, resulting in another exile — this one permanent.
June 19, 1865: Federal Troops Enforce Emancipation in Texas, Inspiring Juneteenth
Although General Robert E. Lee’s surrender on April 9, 1865, is generally considered the end of the American Civil War, many Confederate-controlled areas continued resisting the inevitable defeat and lifestyle changes as long as possible. As such, emancipation of enslaved people was not actually enforced in Texas until June 19 that year, when Major General Gordon Granger and a federal force descended on Galveston with an order declaring an end to the institution of slavery once and for all.
Although many enslavers delayed relaying the news to their enslaved workers, or refused to comply until forced, members of Texas’ newly freed community marked the first commemoration of the date the following year. The regional celebrations gradually spread as Black people in the South established new roots throughout the country, although it was more than a century before Texas became the first of the 50 states to make Juneteenth a holiday in 1979, and it took another four decades from there for the day to become a federal holiday in 2021.
June 3, 1937: The Duke of Windsor Marries Wallis Simpson
When King George V died in January 1936, his eldest son followed long-standing tradition in the United Kingdom by assuming the crown as Edward VIII. However, Edward was romantically involved with American socialite Wallis Simpson, who had divorced her first husband and was still married to her second, and the new king refused to end their relationship despite warnings that the Church of England and Parliament would never accept her as queen.
Edward subsequently — to the shock of the nation — abdicated the throne on December 10, passing the kingdom to his younger brother George VI. The following day, he famously announced over the airwaves, “I have found it impossible to carry on the heavy burden of responsibility and to discharge the duties of king, as I would wish to do, without the help and support of the woman I love.” Simpson and Edward cemented their controversial union with a wedding at France’s Chateau de Cande on June 3, 1937, and they remained largely out of the limelight as George VI, followed by his daughter, Elizabeth II, carried on the royal line.
June 6, 1944: D-Day Commences on the Beaches of Normandy
Following a brief postponement due to bad weather, the Allied armies of World War II embarked on their long-planned crossing of the English Channel with the largest amphibious assault in history on June 6, 1944. The opening round of Operation Overlord, remembered simply as D-Day, brought more than 130,000 troops across 7,000 ships and landing craft to five beaches along the coastline of Normandy, France.
Despite a deception campaign designed to point German defenses toward other landing spots, resistance proved fierce at the section dubbed Omaha Beach, where U.S. forces faced a torrent of machine-gun fire raining down from steep cliffs. But Omaha and the other four beaches — Utah, Gold, Juno, and Sword — were secure by the end of the day, providing a base for some 850,000 troops to reach Normandy by the end of June, and generating the crucial momentum needed to help bring the war to a close within a year’s time.
June 17, 1972: The Arrest of Five Burglars at DNC Headquarters Ignites the Watergate Scandal
In the early morning hours of June 17, 1972, a security guard at the Watergate complex in Washington, D.C., summoned police to apprehend five men who had broken into the Democratic National Committee headquarters. Thus commenced perhaps the most consequential political scandal in U.S. history, which culminated with the imprisonment of several White House staffers and the resignation of President Richard Nixon.
The Nixon campaign’s links to the break-in were uncovered by Washington Post reporters Bob Woodward and Carl Bernstein (with an able assist from their insider informant, “Deep Throat”), and the dominoes began to fall with the convening of a Senate committee and appointment of a special prosecutor to probe the links in the spring of 1973. Despite Nixon’s repeated denials, it was his own words that provided the “smoking gun” for investigators by way of taped conversations in the Oval Office. After the Supreme Court rejected Nixon’s attempts to withhold all the tapes, and the House Judiciary Committee passed three articles of impeachment, the embattled president acknowledged the writing on the wall and announced he was stepping down on August 8, 1974.
June 18, 1983: Sally Ride Becomes the First American Woman in Space
Shortly before earning her doctorate in physics from Stanford University in 1978, Sally Ride came across a newspaper advertisement seeking recruits for NASA’s astronaut program; she soon became one of six women and 35 overall candidates accepted from a pool of 8,000 applicants, setting her on course for a historic space flight. The big moment came on June 18, 1983, when Ride blasted off with four crew members aboard the space shuttle Challenger to become the first American woman in space.
Following the successful six-day mission, which involved the use of the shuttle’s robot arm to deploy and retrieve a satellite, Ride returned to orbit the following October. She was even named to a crew for a third mission, before being diverted to an investigation team in the wake of the tragic Challenger explosion in January 1986. Ride went on to serve as a physics professor and director of the University of California’s Space Institute, before launching the Sally Ride Science outreach program for students. While she wasn’t the first woman to lift off from Earth — two Soviet cosmonauts, Valentina Tereshkova and Svetlana Savitskaya, preceded her — Ride’s journey nevertheless stands as a seminal moment in the American space program and an inspiration to those who dare to reach for the stars.
Author Kristina Wright
May 8, 2025
Strolling through an old European city or colonial American village, one structure often draws the eye before anything else: the clock tower. Often reaching high above rooftops and marketplaces, these architectural timekeepers have marked the passage of hours for centuries. While we no longer rely on them to schedule our days, their presence is more than nostalgic — it’s deeply symbolic.
Clock towers connect us to an era when time was a shared resource, when clocks were heard on the hour (and sometimes on the half-hour and quarter-hour) and seen from nearly every corner of a town center. Today, they stand as reminders of our shared past and of the beauty in building something meant to last.
From the animated figures of Munich’s Rathaus-Glockenspiel to the precision of Kyoto’s Seiko House Ginza clock tower to the somber chimes of Big Ben echoing through Parliament Square, these landmarks remind us that time isn’t just about minutes and hours, but also about memory, identity, and connection.
Having a clock tower was once a mark of prestige for towns and cities. In the Middle Ages and Renaissance era, public clocks symbolized a town’s wealth, technological abilities, and political status. These structures weren’t just functional — they were an architectural flex, built to impress both residents and visitors. The Zytglogge in Bern, Switzerland, is a good example. Originally constructed in the 13th century as a guard tower, it later became a grand astronomical clock, with rotating figures and intricate dials that still attract tourists to marvel at its construction.
Meanwhile, Venice’s Torre dell’Orologio, built in the 15th century, has a blue and gold astronomical face and two bronze figures that strike the bell, reflecting Venetian wealth and creativity. In the Middle East, the Ottoman-era Jaffa Clock Tower in modern-day Tel Aviv was one of several clock towers built to celebrate the 25th anniversary of Sultan Abdul Hamid II’s reign. Completed in 1903, it combined European clockmaking with local limestone and became a symbol of modernization in the region.
Before Watches, Clock Towers Were How People Kept Time
Before we could glance at a watch or unlock a phone screen, time was a communal experience. Most people didn’t own personal timepieces until the late 19th century, so towns relied on a clock tower to keep track of the minutes and hours. Positioned in market squares, atop town halls, or on church steeples, clock towers helped everyone from farmers and merchants to schoolchildren and their teachers stay on schedule.
In towns where personal timepieces were rare or unavailable, the clock tower served as the central, public authority on time. Its bell signaled the community’s schedule, helping people coordinate their activities. Markets opened, laborers began their work, and life flowed according to the bell's toll. This communal timekeeping was a reflection of the collective daily rhythm — time was not a personal concept but something shared by everyone in the town.
In medieval Europe, monastic communities initially set the pace of the day, following canonical hours with the aid of sundials and water clocks. But as secular life and commerce expanded, towns needed a visible — and audible — way to regulate daily activities. That’s when mechanical clock towers began to appear. One early example is the Salisbury Cathedral clock in England, believed to date back to 1386. It’s one of the oldest working mechanical clocks in the world and still runs today without a face because its purpose was simply to chime the hour.
The sound of clock tower bells became a constant in the lives of townspeople, creating a synchronized community rhythm. The bell’s toll, heard across the town, marked not only the hour but also the moments of the day. Whether it was indicating the start of work, a midday break, or the time to gather for prayer, the bell provided structure to the day. For agricultural or working-class communities, this auditory cue transformed time into a shared experience, where every day was divided by the same familiar toll, uniting people through a common schedule.
The bells weren’t interchangeable, either. Many were custom-cast, precisely tuned, and designed to carry across hills or cut through noisy streets. Their unique tones became part of a town’s identity. In Boston, the Old North Church still rings the same set of bells cast in 1744 — making them among the oldest in North America. These historic chimes continue to connect the modern city to its colonial history.
Clock Towers Were Built To Last
Clock towers were not only timekeepers but also public monuments, engineered to impress and endure. Early towers relied on intricate mechanisms of weights, gears, and escapements and were maintained by full-time clock keepers who wound the clocks and kept the bells running on schedule. Over time, technological innovations such as pendulums and electric motors improved precision, but many towns resisted modernization in favor of preserving the timepieces’ original craftsmanship.
This commitment to preservation says as much about the cultural role of clock towers as it does about their practical one. In Auxerre, France, the Tour de l’Horloge still operates with a 15th-century mechanism — a living link to a time when keeping time was a hands-on task. In Japan, the Shinkorō Clock Tower in Izushi, originally built in 1871, blends traditional castle architecture with Western clockwork and is the oldest Japanese-style clock tower in the world. In Prague, the medieval astronomical clock on the Old Town Hall has been carefully maintained since 1410, drawing crowds daily as its animated figures still mark the hour. And at Fort Monroe in Hampton, Virginia, two clock towers that are more than a century old still operate with their original mechanical systems, maintained by volunteers who wind them weekly. Though no longer essential for timekeeping, these clock towers — and the care they require — preserve a sense of continuity, tradition, and shared memory.
What the World Smelled Like Before Industrialization
Before the churn of factories and the tang of coal smoke came to dominate modern life during and after the Industrial Revolution, the smells of daily life were intensely organic, shaped by proximity to animals, bodies, plants, and decay. Urban and rural environments offered distinct olfactory experiences, but both were pungent, earthy, and changed with the seasons.
Once industrialization and modern sanitation systems had taken hold in the industrialized world by the mid-1800s (following a transformation that lasted about a century), the smells of waste, sewage, manure, and other organic materials were significantly less common, even in rural areas. Changes in agriculture, the decline of small cottage industries, and advances in chemistry also pushed scents away from earthy and toward synthetic. But understanding these historical odors offers a visceral glimpse into how people once experienced the world — as they say, “the nose knows.”
Before industrialization transformed cities in Britain and the U.S., urban areas were often crowded, unsanitary, and deeply aromatic environments. Unfortunately, some of the most dominant smells were related to waste, both human and animal.
In an era before modern plumbing, human waste flowed unchecked in waterways, pooled in cesspits, or was collected in “night soil” buckets to be used later as fertilizer. Open gutters often carried sewage and refuse, while heaps of offal and carts of dung were common sights — and smells — on city streets.
Animals were also a major contributor to the aromatic landscape, such as it was. Horses were ubiquitous in cities, and their manure (and occasional carcasses) filled the air with ammonia and other not-so-pleasant smells. By 1835, New York City was home to 10,000 horses, each producing 15 to 30 pounds of manure and a quart of urine a day. In her book Taming Manhattan: Environmental Battles in the Antebellum City, historian Catherine McNeur describes how rotten food and dead animals mixed with “enormous piles of manure to create a stench particularly offensive” in the heat of a New York summer.
Meanwhile, in England, the River Thames served as a dumping ground for sewage, emitting overpowering odors that were also especially ripe in the summer. As in America, streets were littered with horse manure, and industries such as tanneries and slaughterhouses contributed to the pervasive foul smells.
In fact, urban centers on both sides of the Atlantic were full of small-scale trades and markets — tanners, butchers, fishmongers — each adding their own pungency. Tanning leather required soaking hides in urine and lime, producing a rank, acrid scent. Butcher shops dumped blood and offal into gutters. In the U.S., industries such as slaughterhouses and leather tanners were called the “offensive trades” because of how they offended the nose, according to historian Melanie Kiechle.
Street vendors contributed too: Roasting chestnuts, boiling tripe, and frying fish could be welcome or foul scents, depending on your appetite and the weather.
What Rural Areas Smelled Like
The countryside offered a different aromatic profile — still strong, but more vegetal and animal in character. Farmland, livestock, and the rhythms of nature defined rural scentscapes.
Agricultural areas were steeped in the scent of animals. Cows, pigs, sheep, and chickens produced manure that lingered in the air, particularly near barns or pens. Hay, wet wool, and milking sheds had their own distinctive odors. Plowed fields released earthy, loamy smells. Manure was prized as fertilizer, especially human waste in places such as colonial New England, where the outhouse “night soil” was repurposed.
Crops also shaped rural smells. In spring and summer, the air might carry the scent of blooming clover, ripening grain, or the resin of pine forests. In autumn, apples, root vegetables, and fermenting fruit filled the air. Occasionally, the burning of wood or peat for heating and cooking added a smoky aroma, alongside smoke from hearths and outdoor fires, especially in the colder months.
But while the countryside generally smelled better than the city, not all its smells were good. Rotting vegetation, dead animals, and stagnant water introduced more offensive smells, while ponds, privies, and uncollected waste could create swampy or sulfurous odors.
In both rural and urban areas, humans contributed their own smells. In an age before deodorant, daily bathing, or sanitation infrastructure, body odors could be strong. People often wore wool, which retained scents. Scented pomades, herbal sachets, and handkerchiefs were used to mask unpleasant aromas, at least for those who could afford to buy them. Planting sweet-smelling flowers and greenery, such as lilac, was a common method for combatting bad smells, as was wearing a nosegay (a small bouquet of fragrant flowers) on one’s lapel.
Yet for most people, strong and not necessarily pleasant scents were simply part of life. Odors that would be considered offensive today were normal, even unnoticed, in the preindustrial world.
Urban reformers in the 18th and 19th centuries eventually began to link smell with disease (so-called “miasma theory”), sparking sanitation movements that transformed the olfactory landscape by the late 1800s. Today, of course, billion-dollar industries exist to deodorize and perfume both our bodies and living spaces. In the modern world, it’s nearly impossible to imagine how our ancestors tolerated the aromas that were once seen as simply part of being alive.
On March 13, 2013, the former Cardinal Jorge Mario Bergoglio, archbishop of Buenos Aires, Argentina, appeared for the first time as Pope Francis, the 266th head of the Roman Catholic Church. Following his death on April 21, 2025, the church turned once again to an ancient tradition that’s been in place for centuries: the election of a new pope.
There were some novelties associated with Pope Francis’ ascension to Bishop of Rome: He was the first pope from the Americas, as well as the first to assume the name of Francis. He was also the rare pope to take charge while his predecessor was still alive, after an aging Benedict XVI became the first pontiff in nearly 600 years to voluntarily resign.
But for all the unusual components of his particular case, Francis’ assumption of the papacy still adhered to the traditions of the church — some that are relatively new and others that have been faithfully followed for centuries. Here’s a look at exactly how the Catholic Church elects a new pope.
The origins of papal elections are a little murky. Some evidence suggests that St. Peter, generally considered the first pope, designated a group of two dozen priests and deacons to name his successor. Other sources say that the second pope, Linus, was elected from a pool of neighboring bishops and the Roman clergy.
After Roman Emperor Constantine I legitimized the spread of Christianity in Rome in the early fourth century, subsequent generations of European monarchs sought to exert control over the increasingly powerful post of bishop of Rome. This was exemplified by the actions of the Holy Roman Emperors Otto III and Henry III, who installed a combined half-dozen popes during their respective reigns in the late 10th and mid-11th centuries.
The first step toward the modern voting process came in 1059, when Pope Nicholas II decreed that only cardinal-bishops would be allowed to select a pope. Another major change came in 1274, when Pope Gregory X codified the “conclave” system that required voters to be sequestered until a new pope was chosen — a system still used today.
Later pontiffs continued to modify the rules as they saw fit. In 1970, Pope Paul VI determined that only cardinals below the age of 80 were eligible to vote, and in 1975, he capped the number of electors at 120.
While some form of a two-thirds voting majority has been required since 1179, Pope John Paul II decreed in 1996 that the winner could be determined by a simple majority after about 12 or 13 days of deadlocked voting. His successor, Benedict XVI, restored the full two-thirds requirement to prevent a bloc from simply holding on to a bare majority until that lower threshold became enough to decide the election.
Nowadays, after the death or resignation of a pope, the designated cardinals — including bishops and other church officials — are summoned to the Vatican. When all confirmed electors are accounted for, the conclave commences with the sequestering of cardinals in the Sistine Chapel. All phones, televisions, and communications with the outside world are removed, with only two doctors and select members of the Vatican staff permitted to enter the secure premises.
On the first day of the conclave, the cardinals celebrate Mass in the morning before proceeding to the chapel, where they have the option of holding a single round of voting in the afternoon. For this process, the most junior elector randomly selects the names of nine colleagues: three “scrutineers” to record the votes, three “infirmarii” to collect the ballots from any sick cardinals confined to their quarters, and three “revisers” to check the work of the others.
Each of the voters receives a ballot with the printed words Eligio in Summum Pontificem ("I elect as Supreme Pontiff") and a space to enter their choice. Instructed to write in a manner that does not clearly identify the voter, and forbidden from casting a vote for themselves, the electors then fold the paper twice so the name is concealed. Any baptized Roman Catholic man can be elected pope, although it's been a cardinal every time since 1379.
Each cardinal brings his ballot to an urn at the altar, and repeats the oath in Latin, which translates to, "I call as my witness Christ the Lord, who will be my judge, that my vote is given to the one who, before God, I think should be elected."
After all the cardinals have taken their turn, the designated scrutineers count the ballots to make sure the number corresponds with the known number of electors. One of them then reads the names on each ballot out loud, and pierces each page with a needle through the word Eligio (“I elect”) to collect the ballots on a single thread.
On the first day, voting is limited to that single afternoon round. On each subsequent day, a failed round of voting is immediately followed by a second: up to two successive rounds can be held during the morning session, with up to two more held in the afternoon.
If no winner emerges after successive rounds, the ballots are burned in a stove in the back of the chapel with a chimney that reveals the results to the outside world. Except on the first day, when there's just one round of voting, the burning happens after the conclusion of two voting rounds in the morning, and again after two rounds in the afternoon. The activation of a cartridge releases chemicals that produce black smoke, a signal that the election remains ongoing.
On the other hand, if a candidate receives two-thirds of the vote, that outcome will be signaled with white smoke from the burning ballots.
Once a two-thirds vote is achieved, the winner is asked whether he accepts the results, and what name he wishes to adopt as pope. He then receives the homage of his former fellow cardinals and is taken to be fitted for his papal robes in the Vatican’s Room of Tears, a small chamber next to the Sistine Chapel.
The new pontiff is then introduced to the world with an appearance on the balcony of St. Peter’s Basilica, a continuation of the line of succession that began when the building’s namesake was said to have accepted the enormous responsibility some two millennia ago.
In the 1960s, New York City’s Greenwich Village — the Manhattan neighborhood located roughly between Houston and 14th streets, from the Hudson River to Broadway — was a hub of American counterculture. Once an upscale residential area in the 1800s, the neighborhood had changed by the early 20th century as low-income tenement houses drove its wealthy residents to other parts of the city.
At the same time, the Village’s central location and low rents started attracting artists, writers, and bohemians from across the country. A community of creativity and political activism flourished in the local coffeehouses, and the neighborhood became a hub of the folk music, protests, and free-spirited style that came to define the 1960s counterculture. These photos are but a small glimpse into the people, places, and moments that made the Greenwich Village scene so iconic.
The Gaslight
MacDougal Street may well be the place that best captures the essence of 1960s Greenwich Village. Though just a few blocks long, the strip was home to a dense collection of coffeehouses and clubs that launched some of the best-known artists of the decade and beyond. Among its most revered venues was the Gaslight Cafe. Opened as the Village Gaslight in 1958, the low-ceilinged former coal cellar originally hosted readings by influential Beat poets including Allen Ginsberg and Diane di Prima before evolving into a cornerstone of the folk scene.
Getting a regular slot at the Gaslight meant earning the approval of insiders such as musician Dave Van Ronk, known as the “Mayor of MacDougal Street.” It also meant getting paid weekly. Though the space was far from glamorous, it was a launching pad for major talent, including Van Ronk, Len Chandler, and of course Bob Dylan, who, upon arriving in New York, said the Gaslight was the club he “wanted to play, needed to.” In 1966, the famed club even hosted Joni Mitchell’s first New York City performance.
Just steps away, Cafe Wha? built its own legacy as one of the first stages for artists such as Jimi Hendrix and Bruce Springsteen (and yes, Dylan, who performed there on his first day in NYC). Meanwhile, down the block, Gerde’s Folk City and the Cafe Au Go Go cemented the Village’s reputation as a hotbed of talent with regular performances from Pete Seeger, Emmylou Harris, Tim Buckley, Judy Collins, and many more.
Bob Dylan at the Bitter End
Today, the Bitter End calls itself the oldest rock ’n’ roll club in New York City, and it was certainly an early staple of Greenwich Village’s music scene. Opened in 1961 by future Hollywood film producer Fred Weintraub, the unassuming Bleecker Street coffeehouse quickly gained a reputation for showcasing new talent. One of the earliest acts to play the Bitter End was a then-unknown trio called Peter, Paul, and Mary. In late 1961, the club placed ads in The New York Times urging people to catch the group before they outgrew the intimate room. By January 1962, Peter, Paul, and Mary had signed a record deal, and later that year, they used the club’s signature exposed brick walls as the backdrop for their debut album cover.
When Dylan arrived in the Village from Minnesota in early 1961, he became one of the up-and-coming performers who graced the Bitter End’s stage in its early years. Though he began as one of many young folk singers on the scene, Dylan quickly emerged as the voice of a generation; his songwriting helped to define the 1960s folk revival and position Greenwich Village as the center of a cultural shift. Later, in the 1970s, Dylan returned to the Bitter End not just as a performer but as a fan, dropping by to watch others play or to shoot pool with peers such as Kris Kristofferson. The club briefly changed its name to the Other End under new owner Paul Colby in the 1970s and ’80s, but eventually returned to its original moniker. It remains the Bitter End to this day.
Allen Ginsberg Protests Vietnam
Greenwich Village was a breeding ground for the countercultural energy of the 1960s. Though the neighborhood had been home to artists and freethinkers for much of the 20th century, by the mid-1960s, the folk revival was in full swing, and musicians such as Joan Baez, Pete Seeger, and Phil Ochs were increasingly vocal in the anti-war movement. In March 1966, Vietnam War protests swelled across the country; tens of thousands of people marched on Fifth Avenue in New York City in what was then the largest anti-war demonstration to date.
More than 20,000 protesters were joined by some of Greenwich Village’s leading counterculture figureheads. In this photo, legendary Beat poet and vocal Vietnam War critic Allen Ginsberg is pictured in an Uncle Sam hat. He’s joined by members of the underground psychedelic folk rock group the Fugs. Formed by poets Ed Sanders and Tuli Kupferberg, the Fugs became a cult favorite in the 1960s Village scene thanks to their irreverent critiques of mainstream American culture, frequenting live clubs such as Cafe Wha?.
Even before the folk revival hit full swing, Washington Square Park was already a sanctuary for musicians and music lovers alike. Throughout the 1950s, people armed with guitars, banjos, bongos, and mandolins gathered every Sunday to sing and share folk, blues, and other styles of music.
But in the spring of 1961, NYC Parks Commissioner Newbold Morris began enforcing a city law requiring permits for musical performances in the park. While the crackdown was said to be supported by nearby residents concerned about crowds, many suspected that authorities wanted to silence the voices of a growing youth movement.
On Sunday, April 9, 1961, musicians gathered in the park even though permits had been denied. The protest was led by Izzy Young, a pillar in the music scene as the owner of Greenwich Village’s iconic Folklore Center. Police responded with force and arrests in a clash that became known as the Beatnik Riot. That Sunday marked a cultural turning point: a new generation pushing back against authority. The city eventually relented, and throughout the rest of the decade, Washington Square Park remained a hub for protest, performance, and political expression.
Bohemian Style
Bohemianism took hold in New York long before the Beat Generation or hippie countercultures. In the mid-19th century, Greenwich Village was home to writers such as Henry Clapp Jr., known as the “King of Bohemia” and credited with introducing the Parisian ideal of bohemianism — an artistic and non-materialistic lifestyle — to the American literary scene. A century later, the Village attracted a new wave of dreamers. Through the 1950s, the literary rebels known as the Beats inspired a subsequent beatnik subculture, and in the 1960s, hippies adapted similar ideals into a movement centered on personal freedom and social justice. Their shared bohemian sensibilities helped keep the Village a haven for free spirits and artistic freedom.
The fashion seen around the Village in the 1960s was both eclectic and deliberate, as well as comfortable and carefree. This 1967 snapshot captures the vibe: A young woman in a gingham dress and beads has her face painted by a fellow hippie in a bold-patterned headband and shirt, as both of them audition as extras for the Sidney Poitier film For Love of Ivy. By the end of the 1960s, the bohemian spirit that shaped Greenwich Village had spread far beyond it, reaching its peak just 100 miles upstate at the Woodstock Music Festival, where upwards of 500,000 people gathered in the name of peace, love, and music.
It’s hard to surpass the romance and adventure embodied by hidden treasure. The allure of lost riches has lived with us throughout human history, and the interest in such fables has never wavered — hence the enduring popularity of fictional works such as Robert Louis Stevenson’s Treasure Island and, more recently, the Indiana Jones movies.
Unlike many legendary troves — such as Montezuma’s treasure, which has fired the imagination of treasure hunters for centuries, despite little evidence as to its actual existence — some hidden riches are known to be very real, but their whereabouts are now tantalizingly lost. Here are five of these lost treasures, all of which continue to inspire treasure hunters and historians alike in their ongoing quests for discovery and long-lost riches.
Lost Fabergé Imperial Eggs
Few things in life are more jaw-droppingly lavish than Fabergé eggs, ornate decorations commissioned by Russian tsars and created by the jewelry company House of Fabergé between 1885 and 1917. The most well known and extravagant are the Imperial eggs, of which 50 were created but only 44 are known to have survived.
The most recent discovery came to light in 2011, when the long-lost Third Imperial Egg was accidentally discovered in an American flea market. It later sold for an undisclosed amount in 2014 after being valued at $33 million.
After the start of the Russian Revolution in 1917, the Bolsheviks ransacked and looted the imperial family’s palace and nationalized the House of Fabergé, and some of the Imperial eggs were lost. Researchers believe that as many as five Imperial eggs have been destroyed, but there’s still a chance that at least one Imperial egg is out there waiting to be found.
The Florentine diamond is a clear, pale-yellow diamond with nine sides and 126 facets, weighing 137.27 carats (27.454 grams). Originally from India, it later found its way to Europe. Legend has it that the magnificent jewel was owned by Charles the Bold, Duke of Burgundy, but he lost it when he fell at the Battle of Nancy in 1477.
The jewel’s first documented owner was Ferdinando II, the Grand Duke of Tuscany and a member of the powerful Medici family, in the mid-1600s. It then found its way into the hands of the storied Habsburgs, and by 1743 had become part of the Austrian crown jewels. There it remained until the fall of the Austro-Hungarian Empire after World War I. To keep the diamond safe, Emperor Charles I smuggled the stone to Switzerland, where he was being sent into exile.
This is where the fate of the Florentine diamond becomes unclear. Rumors abound, including one that claims a household servant or other person close to the imperial family took the stone and fled with it to South America. Another claims it was broken into smaller pieces and sold to unwitting jewelers. The hunt continues, with the diamond potentially worth around $12 million in today’s money — if it remains intact.
Before it disappeared during World War II (a fate suffered by many priceless works), the Amber Room was one of the most treasured artifacts in Russia. The room, sometimes referred to as the “Eighth Wonder of the World,” was an ornate chamber decorated with amber panels backed with gold leaf and mirrors. Originally designed in the early 18th century as a showpiece chamber for Frederick I, king of Prussia, it was then gifted to the Russian Tsar Peter the Great in 1716 and installed in Catherine Palace near St. Petersburg.
The room was quite a sight, with its 180-square-foot floor surrounded by walls decorated with 6 tons of glowing amber. But when World War II broke out, the Nazis looted the room and transported it to Königsberg Castle in what is now Kaliningrad, Russia. In early 1944, with the Allies closing in, the room was disassembled and crated for storage in the castle basement — and then the trail went cold.
Despite decades of investigation by both Russian and German authorities, the original Amber Room has never been recovered. Theories abound, with some believing it was destroyed during Allied bombing raids or loaded onto a ship or submarine that was then sunk. Alternatively, it might still be hidden in an abandoned German bunker or mine. With its finishing touches of gold and other semiprecious stones, historians estimate the room would be worth at least $142 million in today’s dollars, with some estimates as high as $500 million.
The Honjō Masamune is arguably the most significant lost sword in Japanese history. Crafted by the legendary swordsmith Gorō Nyūdō Masamune in the early 14th century, it represents the pinnacle of Japanese sword-making. The katana, with Masamune’s unique wavy hamon, or temper line, running along the blade, was passed down from one shōgun or general to another. It was wielded by the samurai general Honjō Shigenaga, the namesake of the sword, who won it in battle in 1561 by defeating its previous owner. From there, the sword was passed along to ever-more prestigious owners, cementing its legendary status.
Then, when Japan lost World War II and the Allies demanded the Japanese hand over all their weapons — including their swords — the Honjō Masamune was lost. Its owner at the time, Tokugawa Iemasa, turned in the sword to a police station in 1945. According to records, the sword was handed over to a Sgt. Coldy Bimore — but no such person ever seems to have existed. While it’s possible the Honjō Masamune was melted down or tossed into Tokyo Bay, there’s still a chance that one day it will be found, once again allowing Japan and the world to lay eyes on one of the finest swords ever made.
In 1783, George III, king of Great Britain and Ireland, established the Order of Saint Patrick as an equivalent order to England’s Order of the Garter and Scotland’s Order of the Thistle. Half a century later, William IV introduced a new set of regalia for the order, commonly known as the Irish crown jewels, that included a heavily jeweled badge, ceremonial collars, and an eight-pointed star. The items in the collection contained hundreds of gems and precious stones, including emeralds, rubies and, most notably, brilliant Brazilian diamonds.
On July 6, 1907, the Ulster King of Arms, Arthur Vicars, opened the safe in Dublin Castle where the crown jewels were housed, and made a terrible discovery: The jewels had been stolen. No one had checked on the collection in weeks, making it difficult to tie the crime to a culprit, and the investigating detectives were stumped — there was no sign of forced entry, suggesting the thief had their own key.
The king was enraged and the heist became a huge national scandal. Rumors spread and theories were put forward, with suspicion falling on everyone from Arthur Vicars himself to Francis Shackleton (the brother of Antarctic explorer Ernest Shackleton), who also possessed a key to the tower. Such was the mystery surrounding the robbery that even Arthur Conan Doyle, the creator of Sherlock Holmes, offered his assistance.
Despite the outcry and massive public attention, the crime was never solved and the crown jewels never recovered. It’s possible that the pieces were broken down and the precious stones sold off separately, but we may never know. What we do know is that the collection was worth around £50,000 in its day, equivalent to some $5 million today.
The idea of a love potion created to win the heart of an uninterested companion has been around for virtually as long as recorded history. While no one knows for sure when these elixirs first bubbled into existence, their development through the years, in many guises, serves as a snapshot for the cultures these creative concoctions have passed through.
The Ancient Greeks Set the Tone
According to Love Potions Through the Ages: A Study of Amatory Devices and Mores by Harry E. Wedeck, the ancient Greeks were among the earliest civilizations to foster the regular use of love potions, also known as “philtres.” The physician Xenocrates, who lived in the third century BCE, suggested that drinking the sap of the mallow plant would arouse passions in women.
The stimulating effects of the roots of the satyrion and mandrake plants were well known to both the Greeks and the Romans that followed. The Greek physician Dioscorides, who served as an army surgeon for the Roman Emperor Nero, wrote of how the mandrake root dipped in wine would help win over prospective lovers.
Even those who lacked wealth and power enjoyed access to love-inducing aids, as they could find various charms and concoctions in a seedy district of ancient Rome known as the Sabura. Yet the widespread availability of such philtres, with their varying degrees of effectiveness, could also be a source of trouble.
The poet Lucretius, a contemporary of Julius Caesar’s, was said to have been driven mad by a potion administered by his wife. Later, the Roman writer Apuleius stood trial for his alleged concoction of love potions to win the heart of a widow, with recipes including such stimulating seafood as spiced oysters, cuttlefish, and lobsters.
India’s Kama Sutra Provided Instructions for Reeling in Lovers
The Greeks and Romans weren't the only ancients to encourage the common use of love-inducing ingredients. The Kama Sutra, composed in North India around the third century CE, is famed as an erotic how-to manual, but also includes instructions for how to win over a mate. One such entry describes how a man who plays a reed pipe treated with the juices of various plants could ignite the passions of his love interest.
The Islamic cultures that flourished in the Middle East and Africa devised their own methods of romantic inducements culled from local specialties. The 11th-century physician and philosopher Ibn Sina (known in the West as Avicenna) wrote of a love brew composed of honey, pepper, and ginger.
Honey was also a prime ingredient cited in Muhammad ibn Muhammad al-Nefzawi's The Perfumed Garden, a 15th-century Arabic erotic manual. One foolproof recipe for summoning a lover, according to Nefzawi, involved heating purified honey with onion juice before adding water and pounded chickpeas. The resulting broth was to be taken in small doses just before going to bed, during a spell of cold weather.
Things Took a Turn Toward the Bizarre in the Middle Ages
The European Middle Ages marked a culmination of centuries of potion-making on the continent. Easily found in marketplaces, these philtres included many traditional plants and foods, but also crossed into the realm of the bizarre by incorporating human and animal body parts and fluids.
The 13th-century German polymath Albertus Magnus, known as Doctor Universalis, wrote of one such recipe: "If thou wilt that a woman bee not visious nor desire men, take the private members of a Woolfe, and the haires which doe grow on the cheekes or eyebrowes of him, and the haires which bee under his beard, and burne it all, and give it to her to drinke, when she knoweth not, and she shal desire no other man."
By the Elizabethan era in late-16th-century England, William Shakespeare was among those who dramatized the appeal of love potions among the populace. One such spell appears to comedic effect in A Midsummer Night's Dream, when the juice of a magical flower is dripped onto the eyes of the sleeping fairy queen Titania, enchanting her to fall in love with the hapless weaver Nick Bottom.
Yet love potions remained very much in existence beyond the fictional realm of the stage and printed page. King Louis XIV of France, who was known to partake in pleasures of the flesh at the Palace of Versailles, was famously administered such mind-warping tonics by a cunning mistress, Madame de Montespan. Using mixtures supplied by the alleged sorceress Catherine Monvoisin, Montespan laced the Sun King's meat and wine with concoctions of "blood, bones, intestines, along with parts of toads and bats," binding Louis XIV to her wiles long enough to produce seven children.
While many cultures continue to rely on local dishes designed to stimulate amorous feelings, the old-fashioned love potion made of roots and hairs and crafted to win over a longed-for mate endures primarily as a trope of popular culture. "Love Potion No. 9" was a hit song for the Clovers in 1959, as well as the title of a 1992 Sandra Bullock film. And Harry Potter fans might recall that the dreaded Lord Voldemort's parents were brought together by a philtre.
Of course, pharmaceuticals serve as something of a replacement for these traditional mixtures, from the libidinous inducements of Viagra to the warm and fuzzy feelings ignited by MDMA. Along those lines, scientists have hinted at new drugs that trigger the release of the "love hormone" oxytocin in affection-seeking users.
The difference is that these stimulants would be geared toward couples looking to rekindle a spark, as opposed to the old (and problematic) method of overriding the autonomy of a would-be lover. It remains to be seen how effective such doses would be, but given the long history of love potions as a remedy for unrequited love, it’s a good bet that people will be willing to find out.
When visiting a historic site — whether an ancient or medieval castle, cathedral, or statehouse — you may have noticed an eye-catching detail about the architecture: doors that are far larger than those found in modern buildings.
Today, a standard interior door is typically 80 inches (6 feet, 8 inches) tall and 28 inches to 36 inches wide, while exterior doors are usually the same height but range from 32 inches to 42 inches wide for single doors and 60 inches to 72 inches for double doors. Historically, however, door sizes varied widely, reflecting architectural styles and cultural priorities. Doors built on an impressive scale, often towering over their modern counterparts, adorned buildings of all kinds, but these oversized entryways weren’t just for aesthetics. Here are some reasons historical doors tend to be so large.
Large doors have long symbolized power, authority, and social hierarchy. In ancient civilizations including Mesopotamia, Egypt, and Rome, monumental doorways marked temples, palaces, and civic buildings, emphasizing their divine or political significance. The Ishtar Gate of Babylon, built in the sixth century BCE under King Nebuchadnezzar II, was a massive entryway adorned with glazed blue bricks and images of sacred animals. It served as both a protective barrier and a symbol of the city’s splendor. Similarly, Rome’s grand entrances, such as those of imperial forums and temples, reinforced the might of the empire. Medieval European cathedrals later adopted this tradition, using towering doors to inspire awe and humility.
A striking example is the set of bronze doors at the Basilica of St. John Lateran in Rome, originally part of the Curia Julia, the ancient Roman Senate House completed in 29 BCE. These massive doors were relocated to the basilica in the mid-17th century under Pope Alexander VII. Standing more than 25 feet tall, they reflect both the opulence of imperial Rome and the authority of the Catholic Church.
Beyond their symbolic role, large doors served as tangible displays of wealth and prestige. Only the richest individuals and institutions could afford the materials, labor, and craftsmanship required to create these grand entrances. One of the most famous examples is at the Palace of Versailles in France, built for King Louis XIV in the 17th century. The large gilded doors in the Hall of Mirrors and royal apartments were designed to reflect the king’s immense wealth and absolute power. Intricately adorned, these doors reinforced the exclusivity of the spaces they guarded, permitting only the most privileged individuals to pass through.
Another impressive example of large doors reflecting the owner’s wealth is at Hearst Castle in California, constructed by newspaper magnate William Randolph Hearst in the early 20th century. The estate features enormous bronze and carved wooden doors, many imported from Europe. By incorporating historic architectural elements, Hearst not only put his wealth on display, but also aligned himself with the grandeur of European aristocracy.
But it wasn’t just those with vast wealth who favored oversized doors for their homes. Historic townhouses in both Paris and London often featured oversized entryways, reflecting the architectural styles of certain periods. In Paris, during the Haussmannian era of the mid-19th century, buildings were designed with impressively large, ornately decorated entrances. Similarly, London’s Georgian-era townhouses of the 18th and early 19th centuries often featured oversized front doors with decorative moldings and fanlights. These features served as symbols of the wealth and status of those who lived there.
Beyond their symbolic and aesthetic roles, large doors also met important practical needs in historical architecture. In medieval castles and fortified cities, oversized doors allowed for the passage of horses, carriages, and large groups of people. These grand entryways balanced accessibility with security, often reinforced with heavy wood, iron, or bronze to withstand sieges while still permitting the movement of goods and troops. Religious structures, such as cathedrals and mosques, also required large doors to manage the flow of congregations and accommodate ceremonial entrances. Their scale complemented the grand architecture, creating visual harmony while inspiring awe and reverence.
A notable example is the Florence Baptistery in Italy, famous for its massive bronze doors, particularly the Gates of Paradise by sculptor Lorenzo Ghiberti. Standing more than 15 feet tall, these doors allowed for large crowds to enter during baptisms in the Middle Ages and Renaissance era. Similarly, the Pantheon in Rome features enormous bronze entrance doors, nearly 24 feet high, which facilitated the movement of worshippers and religious statues during ceremonies. Their durability and scale also showcased Roman engineering mastery.
The shift toward smaller doors began at the turn of the 19th century, as advancements in construction made oversized entrances less necessary. Earlier doors hung on heavy pivots and iron strap hinges and were built on a massive scale for structural strength, but the development of newer materials and designs allowed for more practical, durable doors without the need for excessive size.
Urbanization and industrialization further reinforced this downsizing trend. As cities became denser and streets narrower, space constraints made large doorways impractical. In the 20th century, standardized building materials and construction techniques led to more compact, functional doors in residential and commercial buildings. And as societies grew more egalitarian, the rise of the middle class and private homeownership reduced the appeal of large and imposing entryways.