Credit: Heritage Images/ Hulton Fine Art Collection via Getty Images
Author Nicole Villeneuve
June 18, 2025
Summer vacation has been an integral part of American family life for more than a hundred years. This season of leisure is primarily due to one thing: School’s out. But why is there no school in summer? You may assume it’s a holdover from the country’s agrarian days, when children were required to help out on the family farm. But the summer vacation we know today actually had more to do with urban health concerns and public policy than with hay bales or cornfields. Here’s a look at the origin of summer vacation.
In the early 1800s, there was nothing even close to a standard school calendar in the United States. Communities ran schools as best suited their needs or abilities, leading to a very loose patchwork of local schedules. Rural schools were typically open in winter and summer in order to accommodate busy farm seasons that required planting in the spring and harvesting in the fall. (That said, even in the summertime there was plenty of farm work to do, and school attendance was often low.) City schools worked differently. Many operated almost year-round, taking only a short break each quarter. And the inconsistencies didn’t end there: Even within a single county, schools were extremely localized and their operational calendars could be starkly different.
By the 1840s, education reformers such as Horace Mann were pushing for change. In line with the broader Common School Movement, Mann thought that free, tax-supported schools should be accessible to all children, and organizing the school calendars was a big piece of the puzzle. Reformers believed rural schools, with their agrarian schedules, were too inconsistent and the number of school days inadequate. Growing urban populations, meanwhile, gave rise to a new model of schooling that grouped students by age in separate classrooms. This “age-graded” approach, which Mann had admired during a visit to Prussia, required a more coordinated school calendar to work smoothly across different districts.
Credit: Underwood Archives/ Archives Photos via Getty Images
Why Summers Off Made Sense
In the 19th century, many city schools were in need of renovations and repairs, which were best done in the summer months. The buildings were also, of course, not yet air conditioned, and their overcrowded, poorly ventilated rooms often fueled outbreaks of disease in the summer’s stifling heat. Increasingly, the schools’ wealthiest families took to fleeing the city’s soaring summer temperatures for vacation escapes, further driving social fragmentation and inequalities within the schools.
At the same time, medical theories claimed that too much mental exertion could overstimulate young minds and lead to exhaustion — or worse. Reformers saw summer as the best time to rest the mind. Summer vacation also addressed urban schools’ public health concerns and gave teachers a much-needed break. The Common School Movement sought to further professionalize the teaching profession, and the extended break gave educators time to take advantage of summer conventions and training programs.
By the end of the 19th century, states had begun implementing school funding eligibility rules, including a minimum number of instructional days. A 180-day calendar took shape, spanning late summer to late spring, with summertime carved out as a clear and consistent break lasting from about the end of June to the end of August. Between 1880 and 1920, the United States’ school calendar fully fell into alignment, and a national rhythm of summer break has been the norm ever since.
The modern English word “history” comes partly from the Latin historia and partly from the French storie (or estoire), but those terms both trace their roots back to the same place. Not coincidentally, that place is one of the bedrocks of Western civilization: ancient Greece.
Going back about as far as we can, the word “history” can be traced to the ancient Greek verb οἶδα (oida), meaning “to know.” From there, the Greek ἵστωρ (histōr) arose, which had a variety of meanings depending on the context. As a noun, it could mean “wise man,” “judge,” “witness,” or simply “one who knows.” As an adjective, it meant “knowing” or “learned.”
The Greek word historia evolved from that histōr root. It originally meant “a learning or knowing by inquiry,” “an account of one’s inquiries,” or simply “inquiry” or “narrative.” That word was borrowed into Latin (also as historia), where it meant “narrative of past events, account, tale, story.” Interestingly, although historia was borrowed from Latin into Old English as stær (or ster or steor) to mean “history, narrative, story,” our modern English word “story” comes from the French storie or estoire.
Credit: PHAS/ Universal Images Group via Getty Images
As so often happens in English, the modern word is a blend of various influences and threads that can be hard to extricate. But etymologists (those who study the origin of words) note that in post-classical Latin, historia sometimes became istoria, which in Old French became estoire or estorie, meaning “story, chronicle, history.” Of course, it makes sense that “history” and “story” would be linked, given that history is itself a kind of story — perhaps the ultimate story.
According to Merriam-Webster, the term “history” first showed up in English sometime before the 12th century, with the meaning “a chronicle of significant events.” At that point, according to the Oxford English Dictionary, the word drew on both Latin and French.
The term “historian” came later — around the middle of the 15th century, likely from the medieval Latin word historianus. (Sadly, the Old English wyrd-writere, meaning “one who writes an account of events, a historian or historiographer,” never caught on.)
Of course, the world had historians long before the modern term was in use. The Greek scholar Herodotus, who lived in the fifth century BCE, is often called the first historian. He’s thought to have been the first to collect historical materials in a systematic and investigative way, to vet their accuracy, and to construct them into a narrative (or a historia).
The Histories (or Historíai in Greek), Herodotus’ lengthy account of the rise of the Persian Empire and the Greco-Persian Wars, is the work that made his reputation. Yet his legacy is controversial: Herodotus is called the “father of history” as well as the “father of lies,” a duality that points to the difficulties of reconstructing the past and the more problematic ways that history can be a story. But it’s worth remembering that whatever the challenges, the root of the term goes back to one of the most fundamental human impulses: the desire to know.
Credit: Sheridan Libraries/Levy/Gado/ Archive Photos via Getty Images
Author Timothy Ott
June 12, 2025
While the Coinage Act of 1792 established the United States Mint and the dollar as the unit of currency for the fledgling nation, it was quite some time before a fully standardized monetary system took root. In the meantime, people in possession of valuable metals continued to use them for transactions, local banks offered their own currency, and foreign money continued to flow until being banned as legal tender in 1857.
Despite this relative instability, Americans today would largely recognize the various forms of currency that exchanged hands in the 19th century, even if the designs and denominations of the coins and paper bills often differed from those in circulation today. Here’s a glimpse at what money looked like as the United States came of age.
The 1792 Coinage Act stipulated that all U.S. coins were to feature a depiction of the goddess Liberty on the front, while gold and silver coinage also required the display of an eagle on the back. As a result, the gold eagle ($10), half-eagle ($5), and quarter-eagle ($2.50) coins that went into circulation in the 1790s were all engraved in this fashion through the 1800s.
By the mid-19th century, the discovery of gold in California ushered a new wave of the precious metal into the economy, resulting in the creation of the new $1 piece, the $20 “double eagle,” and the octagonal $50 unit, aka the “slug.” Unusual denominations from this era include the $3 coin, which featured Liberty in a Native American headdress, and the $4 “Stella,” which displayed a five-pointed star instead of an eagle.
Although the silver half-dime, dime, quarter, half-dollar, and dollar pieces all surfaced by the mid-1790s, the U.S. Mint rarely churned out the first three in the early years of the 19th century, and the dollar was temporarily discontinued in 1804. Nevertheless, silver endured as an important component of the economy, with the full figure of the "Seated Liberty," accompanied by a shield and pole with a cap, adorning most coinage in the 19th century.
A clear outlier of the bunch was the 3-cent "trime" that was minted between 1851 and 1873, the smallest of all coins at just 14 millimeters in diameter and the first silver piece to not display an eagle on the reverse. Although the federal government attempted to make gold the sole monetary standard in 1873, the Bland-Allison Act of 1878 reintroduced silver into circulation, with the "Morgan Dollar," and its large profile of Liberty's face, emerging as a popular coin.
Although gold and silver dominated U.S. currency in the antebellum period, the first coins to enter circulation following the Coinage Act were the large cent and half-cent in 1793. The large cent, which was roughly the size of a modern half-dollar, displayed either the "Matron Head" or "Braided Hair" profile of Liberty for most of its 19th-century run, while the quarter-sized half-cent featured similar designs.
Both denominations underwent major changes in 1857; the cent was shrunk to its current size, while the half-cent was discontinued altogether. Another notable copper coin from the era was the short-lived 2-cent piece, which displayed a shield instead of Liberty, and became the first coin to feature the motto "In God We Trust" when it appeared in 1864.
As part of efforts to fund the Civil War, the Union issued a temporary series of Demand Notes in 1861, before switching over to United States Notes in 1862. Both became known as "greenbacks" for the distinct green ink used as an anti-counterfeiting measure. As the earliest widespread form of U.S. paper currency, these notes resembled current legal tender in some ways, including the green color, but with some noticeable design differences.
The initial versions of the $1 and $10 notes respectively featured Treasury Secretary Salmon P. Chase and President Abraham Lincoln on the front, while the reverse sides of various bills included extensive wording or patterns such as the crisscross sawhorse design. Additionally, greenbacks and other early paper currency were printed in the larger dimensions of 7 3/8 inches by 3 1/8 inches, until shrinking to 6 1/8 inches by 2 5/8 inches in 1929.
Of course, the Confederacy also needed to fund its own economy and war efforts, resulting in paper currency that predated the Union version by a few months. Printed in denominations ranging from 10 cents to $1,000, these notes featured Southern figures such as Confederate President Jefferson Davis and South Carolina First Lady Lucy Holcombe Pickens, as well as allegorical symbols of industry, commerce, and the South defeating the North.
However, there was a lack of uniform size and design across the printings, and many of these "greybacks," as they were known, went into circulation with nothing printed on the reverse side. What's more, these low-quality notes faced the double whammy of being easily counterfeited and lacking the backing of government reserves, rendering them virtually worthless by the end of the Civil War.
Along with reintroducing silver into circulation, the Bland-Allison Act of 1878 led to a form of paper currency that was redeemable in silver dollars on demand. These silver certificates resembled the greenbacks that were already in widespread distribution, although certain sets are highly coveted by collectors for their ornate designs.
Chief among these are the 1896 "educational series," which notably featured the allegorical image of "History Instructing Youth" on the $1 certificate, and "Electricity as the Dominant Force in the World" on the $5 denomination. The $1 silver certificate, which debuted in 1886, is also celebrated for being the first and only U.S. paper currency to date to feature a woman on the front: Martha Washington.
Credit: Cincinnati Museum Center/ Archive Photos via Getty Images
Author Nicole Villeneuve
June 12, 2025
Today, being outdoors on a hot, sunny day usually means traveling with a few sun-blocking essentials: sunscreen, sunglasses, and a hat. Though our knowledge of sun damage is relatively recent — it wasn’t until the 1800s that scientists began to understand ultraviolet rays’ harmful potential — humans have always tried to avoid the unpleasant sting of too much sun. Yet the first commercial sunscreens didn’t arrive until the 20th century — before that, people had to find other ways to prevent getting a sunburn.
While it’s hard to pinpoint exactly when people first began actively protecting themselves from the sun, evidence suggests that even in prehistoric times, attempts were made to cover the skin both to stay warm in cold weather and also to block the heat of the sun. People covered themselves with animal hides, plant fibers, and later, woven textiles.
By at least 3000 BCE, some societies started to rely on parasols and umbrellas not only as accessories but also for shade; in ancient Egypt, they were often made out of palm leaves or feathers. Egyptians also wore lightweight, loose-fitting linen garments and headdresses to shield themselves from the sun. In ancient Greece, people commonly wore wide-brimmed hats such as the petasos, protecting their faces and necks from direct sunlight.
Early humans also used primitive versions of sunscreen made from natural compounds. Red ochre, a type of claylike iron oxide, has been mixed with water and applied as a paste to the skin since the time of early Homo sapiens. This mixture was used for ceremonial reasons, but scientists believe it may also have served as a physical barrier against the sun.
Ancient Egyptians, meanwhile, used skin treatments made from ingredients such as rice bran (which absorbs UV light), jasmine (to help repair sun-damaged skin), and lupine (believed to lighten the complexion). Ancient Greece had its own approach: Olive oil was commonly applied to the skin between 800 and 500 BCE. While it offered limited protection, modern studies have found it has a natural sun protection factor (SPF) of about 8 — enough to slightly reduce burning, though far lower than today’s common SPF 30 (the minimum recommended by dermatologists).
As with ochre, other pastes were made from a variety of natural compounds, including mud and clay. These were used not only as camouflage or ceremonial decoration, but also as protection from the sun. Zinc oxide was used in India as early as 500 BCE, and water reeds and spices were turned into sunscreen by the Sama-Bajau peoples of Southeast Asia around 840 CE. Indigenous peoples in the Americas, meanwhile, used sunflower oil, pine needles, western hemlock bark, and deer fat, while thanaka, a mixture made from ground bark and water, has been used in Myanmar (formerly Burma) for more than 2,000 years.
Over time, in some societies — including in ancient Egypt and later in Europe and parts of Asia — protecting the skin became a mark of social status. A pale complexion signaled that you could afford to avoid outdoor manual labor and instead spend your days indoors or in the shade.
In Egypt, this desired look was achieved through use of parasols and topical skin treatments that blocked the sun. In Europe in the 16th century — particularly in France and England — upper-class women wore striking-looking visard masks to prevent sunburn and preserve that pale complexion.
The visard mask was made of black velvet with a silk interior lining, and its only features were a slight protrusion for the nose and small holes for the eyes and mouth. The masks weren’t just eerie to look at, either — they were rather unsettling to wear. Most versions didn’t have straps and were instead held in place by a bead or button gripped between clenched teeth; the wearer couldn’t speak while the mask was on.
By the early 1700s, visard masks had spread beyond the aristocracy, and beyond their intended purpose of preventing sunburns. Women of various social classes wore them, including sex workers, who often used them to discreetly enter public spaces such as theaters. In 1704, Britain’s Queen Anne even banned visard masks from the theater, but they’d already lost their status among the elite and eventually faded out of use.
By the end of the 19th century, dermatologists confirmed that prolonged exposure to the sun’s UV rays could inflame or burn the skin, and scientists began experiments to develop effective and suitable topical sun protection. In 1878, Austrian physician Otto Veiel promoted tannins — natural compounds found in many plants — as viable sun protection, but they also discolored the skin.
In 1891, a German doctor experimented with what was likely the first true attempt at a chemical sunscreen, a quinine-based ointment to treat skin sensitivity to sunlight. And in the early 1900s, German physician Paul Unna came up with another sunscreen precursor, a paste made of natural ingredients such as chestnut extract. Ultimately, these early products didn’t apply well, either discoloring the skin or going on too thick, and so the experiments to find a better solution continued.
It wasn’t until the mid-1900s that sunscreens began to resemble what we use today. In 1942, the U.S. military tapped the American Medical Association to study products or substances that could help protect soldiers from getting sunburned during particularly hot World War II Pacific campaigns. The solution was a thick, red, veterinary petroleum salve, also known as “red vet pet.” It was waterproof, durable, nontoxic, inexpensive, and, most importantly, relatively effective.
Florida pharmacist Benjamin Green had served in the Army Air Forces throughout the war, and in 1944, he began experimenting with ways to make the sticky substance more appealing. He added ingredients such as cocoa butter and coconut oil, creating a smoother, nicely scented lotion — the earliest version of what later became Coppertone. Throughout the 1950s and 1960s, sunscreen formulas continued to improve in texture and ingredients, offering broader protection against UVA and UVB rays, and by the 1970s and ’80s, sunscreen was widely marketed for sunburn protection and as a tanning aid.
Today, sun protection can be as subtle as a swipe of SPF lip balm or as advanced as UV-reflective clothing and tinted window film. The methods may have changed, but the instinct remains the same: When the sun beats down, we find ways to keep cool, stay covered, and avoid the burn.
Credit: Henry Guttmann Collection/ Hulton Archive via Getty Images
Author Bess Lovejoy
June 5, 2025
For the ancient Greeks, Romans, and folks in other cultures, dreams were far more than idle nighttime fancies. They were powerful, often sacred experiences that shaped lives, politics, religious practices, and art.
While ancient people likely dreamed about many of the same themes we do today — love, fear, death, power, the divine — their dreams were widely seen as significant messages, often believed to come directly from gods or supernatural forces. Ancient dreamers sought meaning in their visions, often finding answers to illness, moral dilemmas, or matters of state, and they acted on their dreams with great seriousness. Here’s a look at what people in ancient times likely dreamed about, and what they believed those visions meant.
Credit: Culture Club/ Hulton Archive via Getty Images
Divine Messengers and Prophecy
One of the most common types of dreams in antiquity featured divine or semidivine figures delivering a message — what later Roman thinkers such as the scholar Macrobius classified as “oracles,” and later scholars have called “epiphany dreams.” These dreams usually involved a god, ancestor, or venerable figure announcing future events or prescribing actions to take.
A prominent example is Penelope’s dream in Homer’s Odyssey, where she sees an eagle slay her flock of geese. The eagle speaks, revealing himself as Odysseus and foretelling his return and vengeance. In another example, from ancient Sumer, King Eanatum I dreamed that Ning̃irsu — the Sumerian god of thunderstorms and floods — told him he would triumph in a war. And in Egypt during the 15th century BCE, a deity told Prince Thutmose IV that he would become pharaoh if only he freed the Sphinx from the sand engulfing its body.
In some early Christian writing, dreams offered opportunities for moral instruction, although it can be hard to distinguish between sleeping dreams and what we’d now be more likely to call visions. But it wasn’t unusual for dreams to influence early religion: Ptolemy I Soter, one of Alexander the Great’s generals, had a dream of a giant statue that led him to found the cult of Serapis.
For many ancient people, especially those in the Greco-Roman world, dreams were not only forecasts but also instructions for therapy. The prominent Greek physician Galen reported that he had performed surgery based on instructions received in a dream. (In fact, Galen owed his entire career to a dream his father had.)
In Roman times, the orator Aelius Aristides kept extensive dream journals detailing his interactions with Asclepius, the god of healing. In his first such dream, the deity directed him to walk barefoot in cold weather. Aristides also wrote of being instructed to plunge into a freezing river in winter. Despite the bitter cold, he followed the divine command and emerged feeling renewed, with a “certain inexplicable contentment” that lasted through the day and night.
The temple of Asclepius in Pergamum, where Aristides spent years undergoing dream-based treatments, was a center of “incubation” — a practice in which patients slept in the temple in hopes of receiving a healing dream, or simply being healed by the god while they slept. Aristides believed his ailments were not only healed through these dreams, but that the dreams themselves revealed a deeper layer of his identity.
Not all dreams in the ancient world were seen as straightforward. Some were symbolic puzzles requiring interpretation. The second-century dream theorist Artemidorus, in his book Oneirocritica (or The Interpretation of Dreams), distinguished between direct (“theorematic”) and allegorical dreams. The former might show you exactly what was coming (for instance, dreaming of a shipwreck and waking up to discover it coming true), but the latter cloaked meaning in metaphor.
In symbolic dreams, one thing signified another — an eagle could mean a king; a journey, an impending change; a flood, internal unrest. Interestingly, interpretation relied not on fixed meanings but on context, such as who the dreamer was as well as their emotional state, social status, health, and personal concerns.
Dreams also fed ancient literature and drama. The playwright Aristarchus of Tegea reportedly wrote a tragedy at the command of the god Asclepius, who appeared in a dream after the playwright’s recovery from illness.
Homer’s famous epics are also infused with dream logic and imagery. In the Iliad Book 2, Zeus sends a deceitful dream to Agamemnon in the form of a person urging him to attack Troy. It’s an example of dreams as political tools used by gods to shape human affairs.
Of course, not everyone in the ancient world took dreams so seriously. In the fourth century BCE, the philosopher Diogenes the Cynic had this to say about people who got worked up about dreams: “They did not regard what they do while they are awake, but make a great fuss about what they fancy they see while they are asleep.”
Yet as with most things, Diogenes was an outlier. For many in the ancient world, dreams were seen as legitimate and often essential tools for navigating illness, ethics, and divine will. Ancient people dreamed of gods and ghosts, rivers and birds, death and healing, fear and redemption. And whether interpreted as prophecy, therapy, or metaphor, those dreams were treated with reverence and awe.
Credit: H. Armstrong Roberts/ClassicStock/ Archive Photos via Getty Images
Author Timothy Ott
June 5, 2025
Even with the decline of letter writing in the digital age, the ZIP code remains an American institution, a neat five-digit number that caps an address and, like an area code, can serve as a point of pride and prestige.
Given the ZIP code’s place as an oft-used and universally recognized symbol, it may come as a surprise that the ZIP, short for “Zone Improvement Plan,” isn’t all that old. The system was enacted on July 1, 1963, within many of our lifetimes and just a few months before another famous entity, the Beatles, also arrived in the United States.
But unlike the mop-topped quartet, the five-digit zoning plan wasn’t immediately welcomed by Americans. Here’s a look at how the ZIP code came to be, and ultimately overcame a bumpy start to emerge as a signature accomplishment of the United States Postal Service.
Credit: Bettmann via Getty Images
An Early Zoning System Came Out of World War II
Like many innovations, the ZIP code’s origins can be traced back to World War II. At the time, the Post Office Department, as the U.S. Postal Service was then called, was dealing with the loss of personnel to military duty, specifically the departure of experienced sorters who could properly funnel letters and parcels marked with incomplete addresses.
As a result, in 1943, the department assigned one- to two-digit zone numbers to more than 100 high-density urban areas across the country to help make sorting for these areas more efficient. The numbers were to be written in the recipient’s address after the city name, as in “Indianapolis 24, Indiana.”
Although the zone numbers provided some organizational relief, they only papered over the problem of keeping up with the ever-growing volume of mail. Fueled by the country’s postwar population and economic boom, the number of individual pieces of mail jumped from 33 billion in 1943 to 66.5 billion in 1962. By the latter date, a letter was handled by an average of eight to 10 postal employees, increasing the possibility of human error.
The issue wasn’t going unnoticed by the department’s employees. In 1944, a prescient postal inspector named Robert Moon sought to get ahead of the volume problem with his proposal of splitting the country into a network of regional processing centers, each marked by a three-digit code. Nine years later, another inspector, H. Bentley Hahn, completed a six-year study of the department’s outdated operations with a report titled “Proposed Reorganization of the Field Postal Service.”
But despite the modernization efforts of Postmaster General Arthur Summerfield, who introduced the country’s first automated post office in Providence, Rhode Island, in 1960, the department was still struggling to meaningfully address its problems as it faced down a new decade.
Credit: Roland Witschel/ picture alliance via Getty Images
The Zone Improvement Plan Is Born
The tide turned in the early 1960s with the appointment of Postmaster General J. Edward Day, who saw in Moon's long-overlooked proposal and the newly implemented West German postal code system a solution for improving delivery speed and eliminating backlog.
Under the new regime, the department devised a five-digit code that built on Moon's idea of regional processing centers to create a national network of delivery zones. The first digit of the new code denoted one of 10 large geographical regions of the country, in most cases encompassing several states. The second digit signified a smaller division within each region, which included either multiple states, one state, or a portion of a heavily populated state.
The third digit represented either a large city post office or a designated regional center within reach of urban centers and more remote outposts. The final two digits, which incorporated the 20-year-old zone numbers already in place, marked either a division of a big city post office or a smaller one directly served from a regional center.
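As a rough illustration of how those five digits fit together, here is a minimal sketch in Python. The function and field names are invented for this example, and the breakdown simply mirrors the description above rather than any official USPS parsing rule.

```python
def describe_zip(zip_code: str) -> dict:
    """Split a five-digit ZIP code into the parts described above.

    Illustrative only: the labels follow this article's summary of the
    1963 Zone Improvement Plan, not an official USPS decoding scheme.
    """
    if len(zip_code) != 5 or not zip_code.isdigit():
        raise ValueError("expected a five-digit ZIP code")
    return {
        "national_area": zip_code[0],      # one of 10 broad geographic regions
        "subdivision": zip_code[1],        # smaller division within that region
        "sectional_center": zip_code[2],   # large city post office or regional center
        "delivery_zone": zip_code[3:],     # echoes the 1943-era city zone numbers
    }

# Example: under this scheme, the old "Indianapolis 24, Indiana" address
# maps to something like 46224, read here as area 4, subdivision 6,
# sectional center 2, delivery zone 24.
print(describe_zip("46224"))
```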
Hahn, who helped devise the numerical system, was also tasked with helping to explain the finer details of the process in late 1962, before the plan’s official launch. Meanwhile, it was another executive, D. Jamison Cain, who conceived the acronym-friendly name of Zone Improvement Plan, as well as the ideal mascot to deliver the message to the public.
In the 1950s, the Cunningham and Walsh advertising agency designed a smiling cartoon mailman for a Chase Manhattan Bank mail campaign. The logo was acquired by AT&T and then the Post Office Department, where it was rechristened as "Mr. ZIP" and designated as the face of the department's new promotional push.
With the arrival of the ZIP code on July 1, 1963, the image of Mr. ZIP was plastered across post offices, mail trucks, brochures, and buttons worn by mail carriers as part of efforts to urge the public to include the five digits on all outgoing mail. He also appeared in a commercial that featured the voice of Broadway star Ethel Merman singing about the joys of the ZIP code to the tune of "Zip-a-Dee-Doo-Dah."
Despite the ubiquitous presence of the friendly mascot, there was some built-in resistance to adopting the new code. This was in part due to other recent widespread changes foisted on the public, including Social Security numbers being required for individual tax returns and phone numbers changing from the old alphanumeric system to all numerals. Bulk mailers also objected to ZIP code implementation, which required the investment of time and money to update their Automatic Data Processing equipment.
Credit: Rae Russel/ Archive Photos via Getty Images
The ZIP Code Was Widely Accepted by the End of the 1960s
The bulk mailers were forced to go with the flow via congressional enforcement, and the rest of the public soon got used to writing out the five-digit code thanks to persistent reminders from Mr. ZIP. According to a 1969 survey, some 83% of Americans were using ZIP codes by that time, while 90% believed the system to be a good idea.
Additionally, the ZIP code’s reach was already extending beyond the Post Office Department. The five digits became a focus of marketing companies, insurance firms, and the U.S. Census Bureau, all of which found the system beneficial to supplying more targeted data.
The Beatles had broken up by the time the Post Office Department reconfigured itself as the USPS in 1971, but the ZIP code was still going strong, the formidable five-digit system outlasting even the Fab Four as an enduring and thriving symbol of 1960s ingenuity.
Credit: Heritage Images/ Hulton Archive via Getty Images
Author Bess Lovejoy
June 5, 2025
Spend any time gazing at medieval European paintings, and one question tends to emerge: What is going on with those babies? Far from the sweet, chubby cherubs we might expect to see, these infants often resemble balding middle-aged men, complete with wrinkled foreheads and dour expressions. What could possibly explain this bizarre artistic choice? To understand, we have to dive into how European art — and the perception of children — evolved from the Middle Ages to the Renaissance.
The unsettling “man-baby” of medieval art wasn’t a mistake or the result of a lack of skill. These depictions were intentional, shaped by artistic and religious ideas of the Middle Ages (roughly defined as between the fifth century and 13th century). Chief among these ideas was the concept of the homunculus, Latin for “little man,” which influenced how artists portrayed Jesus Christ as an infant. In many medieval works, baby Jesus appears with a full adult face, sometimes even showing signs of male-pattern baldness. The idea was that Jesus, being divine, was fully formed and unchanged from birth (a notion referred to as “the homuncular Jesus”). This theological concept seeped into broader portrayals of children, especially since the majority of child depictions in medieval art were religious commissions — portraits of Jesus or the occasional saintly infant.
As a result, artistic conventions leaned heavily toward depicting children as miniature adults. There was little interest in anatomical accuracy or realism. Instead, medieval artists followed established norms that prioritized symbolic meaning and spiritual messaging over lifelike representation. These conventions flattened individuality; adults and children often looked similarly stylized.
It also didn’t help that painters in this period lacked full artistic freedom. Many were working within strict church guidelines or copying earlier models, so even if they had wanted to create more accurate depictions, they weren’t supposed to do so. This lack of realism means that children in medieval paintings are often difficult to recognize as such. Some appeared disproportionately large or small, while others simply looked like shrunken grown-ups.
Around the 14th century, things began to shift. The Renaissance — an intellectual and cultural movement that most scholars agree began in Florence, Italy — brought a renewed focus on nature, humanism, and realism in art. Artists became more interested in depicting the world as it actually appeared, rather than as a symbolic landscape.
This shift wasn’t instantaneous or universal. You could still stumble across a man-baby in a Renaissance-era painting if the artist was particularly committed to older styles. And the Renaissance itself was not a monolith; it didn’t blanket Europe uniformly or overnight. But the changes it ushered in had profound effects on artistic representation, including how babies were depicted.
One key factor was the rise of secular art. As Florence’s middle class grew in wealth and influence, more people could afford to commission portraits of their families, including their children. And understandably, they didn’t want their toddlers immortalized as frowning and balding. This demand for more personalized, relatable portraiture helped shift artistic norms. Over time, even depictions of baby Jesus became softer, rounder, and more recognizably infantile.
Renaissance idealism also played a major role in how children were depicted. Artists began studying anatomy, observing real babies, and drawing inspiration from classical sculpture. The result was a new visual vocabulary: plump cheeks, curling tufts of hair, playful expressions. These weren’t just more realistic babies — they were idealized versions, blending the best features into adorable cherubic forms.
Alongside these artistic developments, a subtler change was happening in how society viewed children. Although we should be cautious not to project too much onto medieval families (there’s no evidence that parents loved their children any less), the Renaissance introduced a new philosophical current. Children began to be seen less as incomplete adults and more as innocent beings, untainted by sin or worldly knowledge. As adults’ attitudes toward childhood evolved, so too did their depictions of children. Cute babies weren’t just aesthetically pleasing — they reflected deeper cultural values around family, morality, and human development.
In short, those weird medieval babies weren’t just artistic oddities. They were visual expressions of a different worldview. So next time you see a medieval baby that looks like it’s about to pay taxes, don’t laugh too hard. That little face is telling a rich story about art, belief, and how we’ve come to see the smallest among us.
Credit: ullstein bild Dtl./ ullstein bild via Getty Images
Author Tony Dunnell
May 27, 2025
In the world of eyewear, few accessories have captured the imagination quite like the monocle. Widely regarded today as an eccentric throwback from an earlier age, the monocle began life as a fairly simple and imperfect device for correcting eyesight. But something odd happened during the 19th century: The unassuming corrective lens began taking on an entirely new significance as a powerful symbol of class, intellectual prowess, and cultural identity.
But why did this simple optical device evolve into a status symbol? And why did people start wearing monocles in the first place, when spectacles — those of the two-lens variety — had been around since the 13th century? Here’s a close look at why people used to wear monocles, and why the curious eyepiece ultimately went out of fashion.
Credit: Sepia Times/ Universal Images Group via Getty Images
The Origins of the Monocle
The origin of the monocle is somewhat blurry. It likely developed from the “quizzing glass,” which was a magnifying lens on a handle that was held up to the user’s eye to aid in reading or inspecting objects. The monocle, of course, did away with the handle altogether, and was instead held in place by the eye socket itself.
Monocles helped with reading small print and other tasks requiring near vision. They also had the benefit of allowing both hands to be used freely (unlike the quizzing glass) while also being easy to carry, slipping comfortably into a top pocket.
Monocles, however, have one obvious issue: They correct vision in only one eye. This might be fine for someone with anisometropia, in which only one eye needs correcting, but virtually all monocle users require optical correction in both eyes.
This issue was being discussed by medical practitioners as far back as the early 1800s — and their comments were often critical. An anonymous German treatise published in 1824 stated, “The monocle with which a single eye is used must be avoided because it disturbs the balance of binocular vision.” The same year, London optician William Kitchiner warned, “This pernicious plaything will most assuredly in a very few years bring on an imperfect vision in one or both eyes.”
By that time, eyeglasses with lenses for both eyes had already been around for several centuries; they were invented in Italy in the 13th century and became increasingly sophisticated over the years. So why were monocles so popular in the 19th and early 20th centuries? In a word: fashion.
One of the first well-known monocle adopters was the Prussian antiquary Philipp von Stosch, a highly influential art and antiquities dealer in the 18th century. (He also happened to be a spy for the British government.) Von Stosch was a notable figure in European high society, and is widely credited with having made the monocle a fashionable accessory.
In Britain, it’s possible that monocles became popular among theatergoers in the early 1800s — a theory presented by Pitt H. Herbert in his 1950 essay “An Eye on the Monocle” — and from there spread to the upper classes, primarily as a matter of style. The Austrian inventor J.F. Voigtlander, who was studying optics in London, introduced the monocle to Vienna in 1814, helping the new fashion spread to Russia and Germany.
By this point, monocles had been wholeheartedly adopted by the rich and powerful, particularly in England and Germany. (According to Herbert, monocles arrived in America in the 1880s, with the fad reaching its height around 1913, but they never became as popular in the U.S. as they were in Europe.) Style quickly overcame any practical purpose, and people began wearing monocles purely as a fashion accessory, with no actual need of a corrective lens — many monocle lenses were made of plain glass with no corrective properties at all.
By the 1830s, the eyewear had completed its strange transformation. Monocles were no longer primarily used as a visual aid, but had taken on a new life as a fashion accessory representing education, refinement, and social standing.
In Britain, the craze began to fade by the end of the 1830s. But monocles didn’t disappear entirely for at least another century; they were still worn in British high society, whether by members of Parliament or well-dressed dandies. By the beginning of the 20th century, the monocle had even become a cultural symbol. Political cartoonists and playwrights used the eyepiece as visual shorthand for the privileged classes — often as a symbol of pomposity and snobbery. But it wasn’t the satirizing of the monocle that led to its fall from grace — it was war.
The monocle was fashionable in Germany, where it was popular among the German aristocracy as well as military leadership. During both World War I and World War II, many notable German generals wore monocles, including World War I General Erich Ludendorff, and the Nazi field marshals Walter von Reichenau and Walter Model, to name but a few. It was this association that ultimately brought about the monocle’s decline, as the eyewear fell out of favor in much of Western Europe and the United States in the wake of these conflicts. Today, the monocle exists primarily as a historical curiosity, a nostalgic and eccentric symbol of pretension and intellect.
The Middle Ages weren’t just shaped by monarchs and wars — they were lived by everyday people whose names appear in the records they left behind. Parish registers, royal charters, tax rolls, and literature give us a glimpse into the history of common names in medieval England and other parts of Europe and what they meant to the people who carried them.
Some of the most valuable insights about what people were called, and why, come from medieval books created specifically to record names. One example is the local Liber Vitae (Latin for “Book of Life”), which listed individuals — often clergy or benefactors — remembered in the prayers of religious communities. These books were relatively rare and typically associated with major monastic centers in England, such as Durham and Winchester, where they served both spiritual and administrative purposes. The Durham Liber Vitae, which was updated over a span of 700 years, documents a wide variety of Anglo-Saxon and Norman names.
Other sources of medieval names, such as the 14th-century York Registers, document naming trends among the clergy and nobility. Together with legal records and monastic rolls, these texts reveal how names reflected faith, status, region, and tradition. The names below give us a look at how people in the Middle Ages expressed their religious beliefs, honored their ancestors, and signaled their social standing — all through the names they gave their children.
The spread of Christianity throughout medieval Europe had a tremendous influence on naming practices. Biblical names and those honoring saints were popular — and in many regions, they were even required for baptism. Here are some of the most common examples.
John
John, the Latin form of the Greek name Joannes, was one of the most widely used male names in medieval Europe. It originates from the Hebrew name Yochanan, meaning “God is gracious,” and gained prominence through early Christian texts and the popularity of biblical figures such as John the Baptist and John the Apostle. The name was embraced by kings, popes, clergy, and commoners, making it a universal choice across social classes. There were many variations of this popular medieval name, including Johan (Germanic and Scandinavian regions), Jean (French), Giovanni (Italian), and Juan (Spanish).
Thomas
Thomas, the Greek form of the Aramaic Ta’oma’, meaning “twin,” became a common name throughout medieval Europe due to its biblical roots — most notably St. Thomas the Apostle. Its popularity grew further after the 12th-century martyrdom of St. Thomas Becket, archbishop of Canterbury. The name was used widely by both clergy and commoners. Medieval variations included Tommaso (Italian), Tomás (Spanish), and Thomasse (French).
Margaret
Margaret was a popular name for girls across medieval Europe. It originates from the Greek margaritēs, meaning “pearl,” and entered Latin as Margareta through early Christian usage. The name was associated with purity and virtue, especially due to the widespread veneration of St. Margareta of Antioch and St. Margaret of Scotland. Some of the medieval variations of this beloved name included Marguerite (French), Margherita (Italian), Margarita (Spanish), Margarethe (German), and Margit (Hungarian).
Agnes
Agnes comes from the Greek hagnē, meaning “pure” or “chaste.” It was a classic name among Christian families, inspired by St. Agnes of Rome, a young martyr celebrated for her steadfast faith and innocence. Her story made the name especially popular among parents seeking a model of virtue for their daughters. Medieval variations included Ines (Spanish and Portuguese), Agnès (French), and Agnese (Italian).
Throughout the Middle Ages, warriors and kings provided naming inspiration, reflecting ideals of power, bravery, and nobility. These names were chosen not just for their meanings, but also to honor heroic figures and royal lineage, shaping the way individuals were perceived in society.
William
William rose to prominence after the Norman Conquest of 1066, thanks to William the Conqueror. It derives from the Germanic name Willahelm, meaning “resolute protector.” The name became a symbol of power and legitimacy throughout Norman-ruled lands. There were many medieval variations, including Guillaume (French), Wilhelm (German), Guilherme (Portuguese), and Guglielmo (Italian).
Richard
Richard, from the Latin Ricardus, was a strong and noble name meaning “brave ruler.” It originates from the Old High German Ricohard, combining ric (meaning “power, ruler”) and hard (“brave, hardy”). Its popularity soared due to figures such as Richard the Lionheart, whose legacy of leadership and valor gave the name an enduring appeal. Medieval variations included Richart and Rikard in the French and Germanic regions.
Henry
Henry, from the Latin Henricus, was a royal favorite throughout medieval Europe. It comes from the Old High German name Heimirich, meaning “home ruler.” A name popular among future kings and high-ranking nobles, it conveyed authority and stability. Some medieval variations included Heinrich (German), Henri (French), Enrique (Spanish), and Enrico (Italian).
Geoffrey
Geoffrey, from the Latin Galfridus, was a distinguished name of Norman origin. Its roots lie in the Germanic Gaufrid or Walfrid, meaning “God’s peace.” Though less common today, the name Geoffrey (and its variations) was widely used in medieval records and literary circles, largely due to the influence of the Welsh cleric Geoffrey of Monmouth, whose writings helped popularize the name and its legendary associations, particularly with the Arthurian tales. Medieval variations included Geoffroi (French), Goffredo (Italian), Gofraid (Old Irish/Scottish), and Jofré (Catalan).
Though women were often excluded from political power, many female names during the Middle Ages reflected strength and elegance. Several prominent women of the era, whose names were closely tied to royal courts and political influence, helped shape naming trends that persist to this day.
Mathilda
Mathilda (or Matilda) comes from the Latin Matildis, which traces back to the Germanic elements maht (“might”) and hild (“battle”), meaning “mighty in battle.” The name was widely used among medieval royalty and nobility, notably by Empress Matilda, daughter of Henry I of England and a central figure in the 12th century. It conveyed strength and high status, making it a favorite in aristocratic circles. Medieval variations included Matildis (Latin), Mahaut (Old French), and Mechthild (German).
Eleanor
Eleanor, derived from the earlier Alienor, rose to prominence in medieval Europe following the fame of Eleanor of Aquitaine, queen of both France and England in the 12th century. The name came to symbolize noble grace and royal authority. Its origin is likely linked to the Provençal name Aliénor, possibly a form of Helen, though its exact roots remain uncertain. Eleanor became widely used across France, England, and parts of the Iberian Peninsula. Medieval variations included Aliénor (French), Alianor (Old French), and Ellinor (Middle English).
Alice
Alice, from the Old French Aalis, traces its roots to the Old High German name Adalheidis, meaning “noble kind.” Brought to England by the Normans, Aelis was favored among the aristocracy and frequently appeared in courtly literature and historical records. Over time, it evolved into the Middle English name Alice, which became common in both noble and common households. Medieval variations included Aelis (Old French) and Alis (Middle English).
Beatrice
Beatrice, from the Latin Beatrix, meaning “she who blesses,” rose to prominence in medieval Italy. The name was popular among noble families and gained lasting literary significance through Beatrice Portinari, the muse of Dante Alighieri. Immortalized in La Vita Nuova and The Divine Comedy, she cemented the name’s association with virtue and idealized femininity. Medieval variations included Beatriz (Spanish and Portuguese) and Béatrice (French).
From our modern vantage point, the culinary options of bygone cultures are sometimes difficult to comprehend. It seems that hungry people gobbled down anything they could get their hands on, including dormice (rodents), beaver tails, and fish bladder jam.
But while some of the choices seem unusual in hindsight, we can at least grasp their nutritional value. Other foods, however, were just downright dangerous to the human digestive system, and certainly wouldn’t have been on the menu had the consumer been aware of the consequences. Here are five toxic foods that people unwittingly used to eat.
Offering a rich source of vitamins, protein, and fatty acids, seafood is generally considered among the healthiest cuisine to eat — unless, of course, the specimens being consumed contain sky-high concentrations of heavy metals. Such was the case with the Atlantic cod and harp seals that comprised the meals of Stone Age settlers in northern Norway’s Varanger Peninsula around 5000 to 1800 BCE.
According to a recent study, cod bones from the settlement contained levels of cadmium up to 22 times higher than contemporary recommended limits, while seal bones showed similar dangerously elevated levels of lead. While it might seem strange that wildlife came with the risk of carcinogens in an era well before industrialization, the study authors suggest this was the result of climate change. It’s possible the thaw from the last ice age (between about 120,000 and 11,500 years ago) produced rising sea levels that carried soil containing the potent minerals into the water.
It’s well known that the ancient Romans enjoyed their wine, but it’s possible a component of the winemaking process fueled ill health in a manner that went beyond the typical hangover. The Romans made a sweet, syrupy substance called defrutum, which was prepared by boiling unfermented grape juice. This syrup was used as a preservative for wine and fruit, as well as in sauces for dishes of pork, veal, and lentils, as described in the famed Roman cookbook De Re Coquinaria.
The problem was in the sweetener’s preparation: Ancient scholars including Cato and Pliny the Elder called for the syrup to be boiled in a lead-lined pot, inevitably resulting in the syrup’s absorption of lead. Although the hazards of lead poisoning were known to the Romans, it apparently never occurred to these great minds that they were endangering the public with their instructions.
Nowadays, a typical Easter meal might include a ham and a sampling of the chocolate left by the Easter Bunny, but for Christians in medieval England, the holiday was incomplete without the serving of the tansy. The dish was named for its primary ingredient, the yellow-flowered tansy plant, which was mixed with herbs and a hearty helping of eggs to produce what was essentially a large, sweet omelet.
Coming on the heels of Lent, the tansy not only provided a welcome change from the strict diet of lentils and fish consumed over the previous 40 days, but also was said to provide relief from the gas-inducing Lenten meals. Despite its purported medicinal qualities, the plant is actually mildly toxic: Its essential oil contains thujone, a compound dangerous to humans in high doses. Although the poison didn’t hinder the long-standing popularity of the tansy on dinner tables, people are generally dissuaded from eating the plant today.
You could highlight an array of foods in Victorian England that would fail to pass muster under any food safety laws, from the lead chromate found in mustard to the arsenic compounds used to color confectionery. However, given its ubiquity in households of the era, the most egregious example may well be bread.
Seeking to create thick loaves of an appealing white hue, Victorian bakers mixed in such ingredients as ground-up bones, chalk, and plaster of Paris. Another common additive was alum, an aluminum-based compound that inhibited digestion and contributed to the malnutrition rampant among the poorer population. Although the dangers of adulterated edibles were known among the more educated members of the public, there was little stopping the food producers and distributors who ignored these health risks in favor of profits.
Credit: Bildagentur-online/ Universal Images Group via Getty Images
Rhubarb Leaves
Known for its reddish stalk and tart flavor, rhubarb in the hands of a capable chef can be used to create delicious pies, sauces, and jams. That is, the stalks can be turned into such kitchen delights — the thick green leaves are chock-full of toxic oxalic acid and therefore not meant for ingestion. Unfortunately, this fact was not well known a century ago, as rhubarb leaves were recommended as a source of vegetation during the food shortages of World War I.
Consumed in small but regular doses, the leaves inhibit the beneficial effects of calcium and trigger the buildup of calcium oxalate, leading to kidney stones. While a human would normally have to eat something like 6 pounds of the stuff to experience the more acute effects (including vomiting, diarrhea, and kidney failure), there was at least one reported case of oxalic acid poisoning during the rhubarb leaf’s brief run as a lettuce substitute.