There was a time when seeing the doctor didn’t mean sitting in a crowded waiting room or logging in to a patient portal. Instead, the doctor came to you, carrying a black bag and bringing their expertise and equipment to your bedside.
Today, health care looks very different. We drive to medical campuses filled with imaging suites and labs, check in electronically, and have our patient notes transcribed by AI. The transformation has been so complete that it can be hard to imagine the house call was a central feature of American medicine little more than a century ago. So what changed?
From the earliest days of American medicine through the early 20th century, house calls were a routine part of medical care in the United States. Physicians regularly traveled to patients’ homes in cities and rural areas alike. In 1930, approximately 40% of physician visits were house calls, according to the New England Journal of Medicine.
Most doctors were general practitioners who worked with patients of all ages. They delivered babies, set fractures, drained infections, treated pneumonia and influenza, and managed chronic illnesses. Medications were often dispensed directly from the physician’s bag. Payment could be made in cash or, particularly in rural areas, in goods or services.
Doctors did maintain offices, but they were often modest — sometimes located in the physician’s home — and equipped with limited diagnostic tools. Hospitals existed, but they were typically reserved for surgery, serious trauma, or advanced illness. Much everyday medical care happened in the home.
The decline of house calls was gradual at first and then dramatic. By 1950, house calls accounted for roughly 10% of physician visits; by 1980, they made up less than 1%.
Several forces drove this change. First was the rapid expansion of medical technology in the 20th century. X-rays, laboratory diagnostics, safer anesthesia, blood banking, antibiotics such as penicillin, and eventually intensive care units transformed medicine. These advances required equipment, sterile environments, and trained support staff that could not be replicated in private homes. Hospitals became safer and more effective, particularly after improvements in antiseptic technique and infection control.
Second, medicine became increasingly specialized beginning in the late 19th and early 20th centuries. Rather than one physician handling nearly every complaint, patients began seeing cardiologists, obstetricians, surgeons, and other specialists who relied on centralized offices and hospital facilities. As care grew more complex, the home visit became less practical.
Alongside advances in medical technology, economics played a decisive role in the decline of house calls. Even before the mid-20th century, home visits were time-consuming. A physician might see only a handful of patients in an afternoon of travel, compared with many more in a centralized office setting. As automobile ownership expanded and suburbs spread outward after World War II, the distances between patients grew, making travel even less efficient. Office-based care allowed physicians to treat more people in less time.
World War II intensified these pressures. An estimated one-third of American physicians entered military service during the war, creating shortages in many civilian communities. With fewer doctors available, maximizing efficiency became essential. Centralized offices and hospital-based care enabled the remaining physicians to manage larger patient loads, making routine house calls even more impractical.
At the same time, the structure of payment was changing. The growth of private health insurance in the 1930s and 1940s — followed by the establishment of Medicare and Medicaid in 1965 — formalized billing systems. Office visits were easier to standardize, document, and reimburse consistently. Home visits required travel time and longer appointments, yet reimbursement did not always reflect those additional costs, making them less financially sustainable.
Meanwhile, group practices and multi-physician clinics became increasingly common in the postwar decades. Shared equipment, centralized staff, and predictable scheduling improved productivity and stabilized revenue for doctors.
Social change similarly played a role in the decline of house calls. As more families owned cars, traveling to a physician’s office became feasible for many folks who once relied on home visits. Urbanization, improved road systems, and expanded hospital networks also reduced geographic isolation.
Home births illustrate the broader trend. In 1900, nearly all U.S. births took place outside of hospitals, attended by either a doctor or midwife. That rate fell to 44% by 1940 and just 1% by 1969, reflecting growing confidence in hospital-based obstetric care, anesthesia, and neonatal medicine. As childbirth shifted to hospitals, one of the most common reasons for physician house calls largely disappeared.
Concerns about safety and liability also influenced practice patterns. Controlled clinical environments allowed for better infection control and standardized procedures. As medicine professionalized and regulatory standards expanded during the 20th century, office- and hospital-based care became the default model.
By the 1980s, traditional house calls had largely vanished — but they have not disappeared entirely. Geriatric and palliative care programs now provide in-home services for elderly or homebound patients. And research from the Department of Veterans Affairs and other programs has shown that home-based primary care for patients with complex chronic conditions can reduce hospitalizations and lower overall health care costs while maintaining high patient satisfaction. Telehealth has also created a new version of the house call. During the COVID-19 pandemic, virtual visits expanded dramatically after regulatory barriers were temporarily eased. While telemedicine has declined from its 2020 peak, it remains a routine part of care in many health systems.
What’s the Real Story of Isaac Newton and the Apple?
It’s one of the most iconic images in scientific history: Isaac Newton is sitting beneath an apple tree when a piece of falling fruit hits him on the head, sparking his revolutionary theory of universal gravitation. The tale has been told in countless textbooks and popular accounts, and has become a metaphor for “eureka” moments and the process of scientific discovery in general.
But did an apple actually fall on Newton’s head? Or is this account a fanciful fiction that attached itself to the story of Newton’s brilliant scientific and mathematical insights? Here, we take a closer look at the well-known tale of Newton and the falling apple, and the truth behind one of history’s greatest scientific discoveries.
You’re probably familiar with the basic tale of Newton and the apple tree. The story typically has a young Newton sitting beneath an apple tree in the gardens of Woolsthorpe Manor, his childhood home, which he happened to be visiting in 1666. Then, suddenly, an apple falls from the tree, hitting Newton square on the head and triggering his moment of inspiration. (Here the teller of the tale may insert a shout of “Aha!” or “Eureka!”) In this moment, Newton comes to the magnificent realization that the force that made the apple fall is the very same force that keeps the moon and planets in their orbits. And with this, his theory of universal gravitation is born.
This story wasn’t plucked out of thin air: The now-legendary moment was based on Newton’s own account, which he told to several acquaintances near the end of his life. Among the first to hear it was his niece Catherine Barton (later Catherine Conduitt). Barton then recounted the story to others, including French Enlightenment philosopher Voltaire, who briefly mentioned the incident in his 1727 Essay on Epic Poetry: “Sir Isaac Newton walking in his gardens, had the first thought of his system of gravitation, upon seeing an apple falling from a tree.”
The previous year, in 1726, Catherine Barton’s husband, John Conduitt, also referenced the incident in his notes, writing how Newton first came upon his “system of gravitation… by observing an apple fall from a tree.”
The most detailed contemporary account comes from William Stukeley, an archaeologist and one of Newton’s first biographers. In his Memoirs of Sir Isaac Newton’s Life (published in 1752), Stukeley related a conversation he had with Newton, a year before the scientist’s death in 1727: “We went into the garden, and drank thea under the shade of some apple trees, only he, and myself. Amidst other discourse, he told me, he was just in the same situation, as when formerly, the notion of gravitation came into his mind. ‘Why should that apple always descend perpendicularly to the ground,’ thought he to himself: occasion’d by the fall of an apple, as he sat in a contemplative mood: ‘Why should it not go sideways, or upwards? But constantly to the earth’s centre?’”
From these accounts, there’s strong evidence to suggest that Newton did indeed tell a number of acquaintances about how a falling apple helped form his theory of universal gravitation. But while the moment is often presented as a true epiphany, exactly how much an apple inspired Newton is open to debate, and we can’t know to what extent the falling fruit informed his scientific thinking.
Some academics — most notably John and Mary Gribbin in their book Out of the Shadow of a Giant: Hooke, Halley, and the Birth of Science — have argued that Newton made up the apple story altogether, possibly to support his claim that he had the idea of a universal theory of gravity before Robert Hooke, who famously accused Newton of plagiarism.
One part of the story that has no supporting evidence — despite being a popular aspect of the tale — is the apple actually falling right on Newton’s head. This seems to be nothing more than a later embellishment, yet one that has managed to become engrained in popular culture up until the present day.
Why Did Doctors Wear Beak Masks During the Bubonic Plague?
Few images in medical history are as striking (or as creepy) as those of plague doctors with their long, beaked masks. This peculiar costume, worn by physicians during outbreaks of bubonic plague in Europe, has become an enduring symbol of the disease. But why did doctors wear these strange masks, which surely must only have added to the fear felt by people in times of suffering? What purpose did the design serve? Here’s the reasoning behind the mask, which came about in an age when the true nature of disease transmission was still shrouded in mystery.
Contrary to common belief, the plague doctor costume was not a medieval-era invention. Despite its common association with the Black Death — the name given to the bubonic plague pandemic that devastated Europe in the mid-1300s — there is no evidence to suggest it was worn during the 14th-century epidemic or at any point in the Middle Ages. It emerged much later, in the 17th century, when plague outbreaks were still common in Europe.
We know that the striking attire was worn in 1619 by the French physician Charles Delorme during an outbreak of bubonic plague in Paris. Delorme, whom some historians credit with inventing the outfit, described the plague doctor costume in full in a mid-17th-century text, complete with leather hat, gloves, a waxed linen robe, boots, and a mask with glass eyes and a beak.
Plague doctors across Europe soon adopted the outfit; they also carried a stick with which to remove the clothes of the infected. The look was so widely recognized in Italy that it became commonplace in Italian commedia dell’arte — an early form of comedic theater — and carnival celebrations, and it remains a popular costume today.
By far the most distinctive, and some would say ominous, aspect of the plague doctor costume was the mask with its long, birdlike beak. This relates to a prevailing view in medical science during the Middle Ages and in the centuries that followed: that diseases were spread through “miasma,” or bad-smelling air, that caused an imbalance in a person’s “humors,” or bodily fluids. (The miasma theory was later discarded when the germ theory of disease was developed.)
The shape and function of the beak were directly tied to this theory of miasma. Plague doctors filled their long beaks with strong-smelling herbs and flowers, including lavender and mint, or sponges soaked with vinegar or camphor. Some also stuffed their beaks full of theriac, a compound of more than 55 herbs and other components, including viper flesh powder, cinnamon, myrrh, and honey. These aromatic substances, it was believed, would absorb the foul-smelling miasma, purifying the air as it traveled along the beak, and so protect the wearer from inhaling the harmful air.
The concept behind the plague doctor’s outfit wasn’t entirely misguided. Creating a barrier between the wearer and the patient — and potentially contaminated air — was logical. Modern personal protective equipment (PPE) and hazmat suits are based on the same idea. The plague doctor costume could have even offered some protection against droplets from coughing (in the case of pneumonic plague) or contamination through splattered blood (from bubonic plague).
But fundamental flaws existed in the design. Bubonic plague, caused by the bacterium Yersinia pestis, is transmitted through the bite of an infected flea. The plague doctors, believing miasma to be the cause, were not aware of this. Their outfits may have helped protect against flea bites to some extent, but they were not specifically designed for this task. As for the beak masks, they too would have offered some protection, despite the flawed understanding of how contagious diseases spread. The simple fact that plague doctors wore masks was a positive — but stuffing their beaks full of herbs and powdered viper flesh was of no great use, apart from making the air smell somewhat nicer while they treated their patients.
The ancient Greeks are widely regarded as the founders of modern medicine. Yet initially, they saw illness as a divine punishment and healing as a literal gift from the gods — beliefs not uncommon in the ancient world. By the fifth century BCE, however, the Greeks began testing and advancing medical theories based on actual scientific observations — cause and effect — rather than spiritual beliefs alone.
Three factors began to take prominence in ancient Greek medicine: diet, drugs, and surgery. Diet was particularly important and, when combined with medicine and surgery, created a holistic approach to health and healing. Still, this was more than 2,000 years ago, and the ancient Greeks never entirely separated the spiritual world from the physical. Modern medicine has come a very long way in two millennia, and today, some medical practices from ancient Greece seem strange at best and downright shocking at worst.
Tasting the Humors
The Greek physician Hippocrates was fundamental to the medical advancements of ancient Greece, and he is still revered for his ethical standards in medical practice. (Many doctors still take a modernized version of the Hippocratic oath.) Hippocrates was particularly taken by the idea that the human body contained four humors, or fluids: black bile, yellow or red bile, blood, and phlegm. In humoral theory, these fluids held the key to medical diagnosis. As such, Hippocrates routinely tasted his patients’ urine, pus, and earwax, and smelled and scrutinized their stools and vomit. You certainly can’t fault his dedication, even if such practices seem gruesome today.
The Wandering Womb
Greek physicians were obsessed with the womb, and believed it could explain the physical and mental differences between men and women. Specifically, they believed these differences could be put down to a “wandering womb” that moved about the female body, causing all kinds of adverse conditions; the physician Aretaeus of Cappadocia even considered the womb “an animal within an animal” that could move around of its own accord. Treatment involved repelling or enticing the womb using foul- or sweet-smelling substances, respectively, placed near the woman’s nose or groin. Womb fumigation was another option, in which the woman would sit over a heated pot containing scented vapors. It was believed that after a few days, this could draw the problematic womb to its correct position.
Some ancient Greek physicians, including Aëtius of Amida and the famed gynecologist Soranus of Ephesus, recommended the use of hare’s brains to help with the pain of teething, either added to food or rubbed directly into the gums. (If no hares were available, then lamb brain would do just as well.) Aëtius also advised that teething babies could wear bracelets or amulets containing colocynth, a wild vine that is both bitter and toxic. Alternatively, an amulet containing the tooth of a viper could be hung around the baby’s neck.
Modern music therapy began as an organized and respected profession in the 1950s, and is now accepted as a scientifically based method of emotional, cognitive, and physical healing. But the ancient Greeks also used song as a therapeutic aid. The philosopher Theophrastus saw music as a cure for many ailments, including fainting, panic attacks, sciatica, and epilepsy. Another Greek philosopher, Democritus, believed that viper bites could be cured by skillfully played flute music.
The Greeks used a wide range of narcotic drugs as cures or remedies, including mandrake, opium poppy, nightshade, henbane, and hellebore. The last, in particular, was used to treat numerous ailments, including gynecological disorders, vomiting, skin conditions, humoral imbalance, and lung problems. This was a tricky business, however, as hellebore is toxic and potentially fatal in high doses — in fact, poisoning by hellebore may have caused the death of Alexander the Great. Ancient Greek physicians, when recommending the use of hellebore, were particularly careful to warn of its potentially lethal nature if administered carelessly.
Bloodletting — the removal of blood for medical treatment — is considered one of the oldest medicinal practices. It is thought to have originated in ancient Egypt before spreading to Greece. The Greeks believed that many illnesses stemmed from an overabundance of blood (one of the four humors) in the body, a condition known as plethora (a word of Greek origin meaning “fullness”). In order to restore balance, excess blood was removed from the patient’s body through surgical incisions or the use of leeches.
The ancient Greeks proposed a number of contraceptive theories and methods, involving everything from cedar oil ointments to eating pomegranate seeds. Two of the strangest contraceptive suggestions came from Soranus of Ephesus, who advised that women jump backward seven times after intercourse. Alternatively, and equally bizarrely, he recommended that women wishing to prevent pregnancy drink the water used by blacksmiths to cool their metal.
Earth has experienced at least five significant ice ages in its history — periods in which colder global temperatures caused glaciers to expand across the planet’s surface. Homo sapiens, which emerged about 300,000 years ago in Africa, survived two such ice ages. The most recent, known as the Last Glacial Period, or simply the “last ice age,” occurred between 120,000 and 11,500 years ago. It reached peak conditions between 24,000 and 21,000 years ago, in a period known as the Last Glacial Maximum, when vast ice sheets covered North America and northern Europe.
At that point, Homo sapiens had already spread around the world. Many of our ancestors, therefore, found themselves in a survival situation during the frigid ice age, along with animals such as brown bears, caribou, and wolves — as well as large animals known as megafauna. These impressive creatures included woolly mammoths, mastodons, and saber-toothed cats — all of which went extinct during the last ice age.
How, then, did humans survive? It was no easy task, for sure, but our ancestors were highly adaptable. Here’s how humans not only managed to survive the last ice age, but also emerged as the most dominant species on Earth.
Contrary to the popular image of ice age humans — or “cavemen” — living in deep caves, our ancestors were more likely to have built sturdy rock shelters. While these shelters often made use of natural features, such as a depression in a cliff face, early humans would also have made extensive modifications to further weatherproof their shelters, such as draping large animal hides from overhangs to block out the bitter winds. With a warm fire blazing inside, these shelters provided ample protection from the cold. In the brief but slightly warmer summer months, when hunters moved out onto the open plains, they built dome-shaped huts or tents out of mammoth bones, which were then covered with animal skins.
Unsurprisingly, warm clothing was absolutely vital to survival in the ice age. While humans might have once worn rudimentary, loose-fitting animal hides, such clothing would not have been adequate in freezing temperatures. Thankfully, about 30,000 years ago, our ancestors developed what anthropologist Brian Fagan called the most important invention in human history: the needle. Carved out of ivory or bone, with tiny eyes bored through by a fine-pointed flint drill, these ice age needles would be instantly recognizable today. They allowed for the manufacture of tight-fitting clothing that was tailored to the individual and often sewn together in layers, providing effective protection from the cold.
Needles weren’t the only innovation that helped humans survive the ice age. As part of their adaptation, Homo sapiens improved upon existing tools, some of which had been used by the Neanderthals, while also pioneering new approaches to toolmaking and weaponry. One of the most important tools created during the ice age was the burin, a type of rock chisel used to cut grooves and notches into materials such as bones and antlers, allowing for the creation of intricate and lightweight spearheads and harpoon tips. Not only did this mark one of the first instances of detachable and interchangeable technology — known as compound tools — but it was also the first time that tools had been developed exclusively for making other tools.
Language and Art
Scientists suggest that Homo erectus, an extinct early human species, may have used a primitive form of conversation, or protolanguage, when it walked the Earth some 2 million years ago. Fast-forward to the last ice age, and members of Homo sapiens were most definitely talking among themselves. Language was arguably as important as anything else when it came to surviving the ice age. It allowed humans to share knowledge, whether regarding new technology, edible plants, or animal migrations. And through spoken language, as well as symbolic activities such as rituals, personal adornments, and art (the cave paintings at sites such as Lascaux in France, for example), our ancestors created a shared sense of social identity. This, in turn, allowed them to band together and forge connections beyond their immediate communities. By collaborating, early humans had a far greater chance of surviving the extremes of the ice age — and, ultimately, they came out stronger than ever before.
UFOs aren’t just a modern phenomenon: If we look back through history, we find that people have reported seeing unidentified flying objects since ancient times. In classical Greece and Rome, the philosopher Plutarch wrote of flaming spears and shields that moved in formation in the heavens; the historian Livy told of a “phantom navy” seen shining in the sky; and the writer Julius Obsequens told of golden spheres of fire that flew through the air. It’s only far more recently, however, that the idea of UFOs as an indication of extraterrestrial life has become established in the wider public consciousness — whether we believe in visitations from little green men or not.
Scientifically and statistically speaking, there’s a rational argument to be made for the existence of intelligent species in the universe apart from ourselves. When it comes to UFOs, however, the question is whether these species have actually crossed the vast expanse of space to visit Earth. This is where things get complicated. As the scientist Enrico Fermi posited in his famous Fermi paradox, if there are other civilizations spread throughout the galaxy, then why haven’t we heard from them? Scientists continue to tackle the question, with theories such as the “Great Filter” (that alien civilizations have existed but were wiped out) and the “zoo hypothesis” (that extraterrestrial life exists, but is choosing not to contact Earth). Then there are the ufologists and budding Fox Mulders out there who argue that the aliens are already here — that UFOs are spacecraft from other worlds, and the truth is out there for anyone willing to accept it.
The modern history of UFOs is, of course, full of tantalizing details that have convinced many to believe that actual UFOs of the extraterrestrial variety have visited Earth and continue to do so. According to recent polling, 42% of Americans believe in UFOs, and most of these Americans believe that unidentified flying objects are alien spacecraft visiting Earth from other planets or galaxies. Here are some of the most pivotal events in the history of UFOs — events that have helped convince many people that aliens are indeed among us.
On June 24, 1947, a private pilot named Kenneth Arnold was flying close to Mount Rainier in Washington state when he saw nine shiny, circular objects flying in formation. Each object was about 100 feet across, and each one flew at what he estimated as about 1,200 miles per hour (roughly twice as fast as any known aircraft at the time). The report was soon picked up by the Associated Press, which described Arnold’s strange sighting of “nine bright saucer-like objects.” The story then exploded across the United States, with the new term “flying saucers” on everyone’s lips. Just like that, the modern age of UFO sightings had begun.
Ten days before Kenneth Arnold saw his flying saucers, a rancher named W.W. “Mac” Brazel and his son were driving across their ranchland about 80 miles northwest of Roswell, New Mexico. They came across a patch of land in the desert strewn with rubber strips and metallic-looking, lightweight fabric. Baffled, they returned home. It wasn’t until weeks later on July 4 that they returned to collect the debris—unaware of the nationwide flying saucer fervor that had recently begun—which they then delivered to the local sheriff. From there, things escalated quickly, and the case passed up through the military ranks until it reached the commander of the Eighth Air Force in Fort Worth, Texas. Then, the Air Force made a curious decision. Rather than admit the true nature of the wreckage — it was a crashed Air Force balloon that was part of the secretive Project Mogul — they released an extraordinary press release stating they had recovered a “flying disk.”
Naturally, the story spread like wildfire. The Air Force soon tried to backtrack, stating that it was actually a weather balloon carrying a radar target, but the damage had been done. To this day, conspiracy theories permeate the so-called Roswell incident, which in some accounts has been embellished to include recovered gray aliens and extraterrestrial technology.
Following Kenneth Arnold’s flying saucer sighting, the Roswell incident, and the general furor surrounding UFOs that began in the late 1940s, the United States Air Force launched a systematic investigation into unidentified flying objects. Known as Project Blue Book, it was the first large-scale government investigation into UFOs. Formed in 1952, the project investigated 12,618 reported sightings that occurred between 1947 and the project’s termination in 1969. Most of the cases were found to be caused by weather balloons, swamp gases, meteorological events, or temperature inversions. But more than 700 Project Blue Book entries could not be explained by investigators. In the end, the project appeared to be a refreshingly open examination of UFO phenomena, as it was open to the public and not conducted in secret, yet it ultimately provided even more fodder for the ufologists.
On a September night in 1961, Betty and Barney Hill were driving along an empty country road in New Hampshire’s White Mountains. All was normal, apart from a strange light in the sky that appeared to be following them. When they stopped, the light — which Betty described as a “spinning disk” — came closer until it was right above them. They drove off in a panic, dazed and groggy from the experience, and when they arrived home they realized that the journey had taken two hours longer than it should have. Two years later, they finally talked publicly about the incident. As media interest grew, the Hills underwent hypnosis, revealing further details about their supposed abduction: a flying disk, gray beings, metal tables, and bodily probing. The story was turned into a bestselling book and later a movie (The Interrupted Journey and The UFO Incident, respectively), which helped establish the incident as the archetypal abduction case.
In 1980, United States Air Force personnel stationed at RAF Woodbridge in Suffolk, England, saw a series of strange lights in the nearby Rendlesham Forest. The first sightings occurred on the night of December 26. When Air Force security guards went to investigate, they saw what one of them described as a red, oval, sunlike object in a clearing in the woods, which then lifted up through the trees and shot back toward the coast. Over the next couple of days, servicemen went back into the forest to investigate further. They found impressions on the ground in a triangular shape as well as burned and broken trees. Then, in the early hours of December 28, things got even stranger. The deputy base commander described how a flashing light seemed to descend upon him and his colleagues, and how three bright objects hovered just above the horizon, occasionally emitting streams of light down to the ground. The Rendlesham Forest incident, often referred to as “Britain’s Roswell,” remains an extremely strange case — and one of the most famous UFO sightings in history.
In 2020, the Pentagon officially released three short UFO videos shot from U.S. Navy fighter jets showing “unidentified aerial phenomena” (“UAP” now being the government’s preferred term for UFOs). The videos had been released previously by a private company, but this was the first time the Pentagon officially acknowledged their existence. Naturally, these videos garnered a great deal of media and public attention, which in turn prompted the Pentagon to take the whole UFO/UAP situation more seriously. For the first time since Project Blue Book, a major investigation was launched, culminating in the unclassified “2022 Annual Report on Unidentified Aerial Phenomena.” The report covered 510 cataloged UAP sightings from branches of the United States military. Of those, 171 remained “uncharacterized and unattributed,” with some described as demonstrating “unusual flight characteristics or performance capabilities.” It’s certainly an intriguing turn of events, and people are once again looking to the skies in search of UFOs.
When blizzards come at full force, they’re certainly nothing to sneeze at. In the most extreme cases, visibility is reduced to just a few feet, wind blows faster than traffic on the highway, and snow is so deep it’s impossible to drive — or even walk. The term “blizzard” is often used to describe any snowstorm, but the National Weather Service officially defines a blizzard as having “winds in excess of 35 mph and visibilities of less than 1/4 mile for an extended period of time (at least 3 hours).” That, however, is at the mildest end of the spectrum. Blizzards can become far more intense, with waist-deep snow and whiteouts bringing entire cities — and sometimes entire regions — to a standstill. Here are five facts about some of the biggest blizzards in history.
The Blizzard of 1888 Led to the Creation of the New York Subway
On March 11, 1888, a massive blizzard struck the Atlantic coast of the United States, from the Chesapeake Bay up to Maine. As much as 55 inches of snow was dumped in some areas, and New York City ground to a halt. Many people had to seek refuge in hotel lobbies, where temporary beds were put up. (Mark Twain was among those stuck in his New York hotel for several days.) As many as 15,000 people were stranded on the city’s elevated trains, and the drifting snow and howling winds also downed telegraph lines and damaged water and gas mains. In the aftermath, the storm was a wake-up call to city planners across the nation. There was a shift toward burying infrastructure underground, including lines of communication and utilities. The blizzard also prompted New York City to begin planning its vast subway system to replace the exposed elevated lines.
The Great Appalachian Storm Created Some Strange Weather Anomalies
The Great Appalachian Storm of 1950 wasn’t notable solely for its intensity — it was also downright strange. The slow-moving system dumped heavy snow across much of the central Appalachians, with several locations receiving more than 50 inches. (Coburn Creek in West Virginia reported a whopping 62 inches.) The storm’s wild temperature gradients also created some true meteorological anomalies. In Pittsburgh, for example, heavy snow fell in 21-degree air, which rapidly plunged to 9 degrees as the winds shifted from northwesterly to southeasterly, bringing with them a sudden cold front. And while snow was falling in frigid Pittsburgh, in Buffalo, New York — only 200 miles to the north — temperatures were in the high 40s with no snow recorded at all.
The Chicago Blizzard of 1967 Left Some 20,000 Cars Abandoned in the Streets
On the morning of January 26, 1967, as people in Chicago were heading to work or school, a light snow began to fall. By noon, the city was covered in 8 inches of snow, shutting down O'Hare International Airport and forcing schools and businesses to close early. They didn’t close early enough, however, and people returning home were caught in the increasingly heavy snow. By the next morning, a record 23 inches of snow had blanketed the city, with drifts of up to 6 feet. In the streets of Chicago, some 20,000 cars and 1,100 CTA buses were left abandoned in the blizzard. (Some estimates are as high as 50,000 cars.) It took the city another two days to properly dig out from the snow.
The Iran Blizzard of 1972 Hit a Region the Size of Wisconsin
In 1972, a massive blizzard struck Iran. It followed one of the nation’s worst droughts, which had lasted for 1,460 days. When moisture finally began to gather, it fell not as rain but as snow. The resulting blizzard lasted for an entire week and resulted in snow at least 10 feet deep — and in some places a staggering 26 feet deep. A region about the size of Wisconsin, encompassing much of western Iran, was buried in snow for more than a week.
The 1993 “Storm of the Century” Stretched From Canada to Central America
On March 12, 1993, a gargantuan storm system began to form that, over the next few days, affected nearly half the U.S. population. The Storm of the Century, as it was called, was felt from Canada all the way to Central America. It was strongest along the eastern United States, which was subjected to three days of blizzard conditions and heavy snowfall (up to 55 inches in Tennessee), as well as rough seas, coastal flooding, tornadoes, and bitterly cold temperatures. By the time it had dissipated, the Great Blizzard of ’93 — as it was also known — had caused about $5.5 billion in damages (the equivalent of about $11.6 billion today), making it one of the most costly weather events of the 20th century.
The humble calendar is one of civilization’s oldest staples. The earliest means of measuring days and weeks dates back 10,000 years, and timekeeping techniques adopted by the ancient Babylonians, Egyptians, and Romans slowly evolved into the calendar we use today. Yet the emphasis here is on “slowly.” The evolution from charting moon phases to separating seasons to measuring fiscal years was one of controversy and chaos across centuries. Still, humans never stopped working to perfect how we mark the passage of time. Here’s a brief look at the fascinating history of calendars, just in time to start a new one.
The First Known Calendar Is From Prehistoric Scotland
In 2013, British archaeologists discovered what they consider the world’s oldest calendar, dating back to around 8000 BCE. The prehistoric calendar, located at Warren Field in Scotland, consists of 12 pits believed to have contained wooden posts representing months of the year. Positioned to chart lunar phases, the pits are aligned with the southeast horizon and were likely used by hunter-gatherer societies to track seasons. The site precedes Stonehenge by several thousand years.
Though the exact purpose of Stonehenge remains a mystery, archaeologists believe that in addition to being a burial site, the prehistoric monument may have been a solar calendar. Keeping track of months and years in ancient societies relied on charting the stars, sun, and lunar patterns (and the subsequent weather they’d bring). Stonehenge, built around 3000 BCE, consists of stones and archways arranged in a circular pattern, and experts believe time was measured by the way the sun, moon, and stars lined up with these markers. Archaeological evidence suggests lunar and solar eclipses were also observed, as were the winter and summer solstices.
The Babylonian Calendar Had a 12-Month Year and Seven-Day Week
The Babylonians, a civilization that began in the Fertile Crescent of ancient Mesopotamia (modern-day Iraq) around 4000 BCE, used lunar patterns to determine the length of the year. They divided the calendar into 12 months based on the cycle of the moon, starting with the first sighting of the crescent moon. This put the year at 354 days, slightly shorter than a solar year. (A lunar cycle lasts about 29.5 days, so a year of 12 lunar months comes up roughly 11 days short of the solar year.) To sync the lunar and solar calendars, the Babylonians adopted a system known as intercalation, which allowed them to add an extra month when necessary. They also divided each month into seven-day weeks, based on (and named for) the seven celestial bodies they were able to observe: the sun, the moon, and the five nearest planets.
Like many ancient agricultural societies, the ancient Egyptians measured time according to the planting and harvest seasons. The Nile river flooded each year between the months of what is now May and August, and this annual flooding was crucial to Egyptian society, as it determined the success of that year’s crops. As a result, the Egyptians divided the year into three seasons, each related to the river’s status and the agricultural phases associated with it: Akhet (translated as Flood or Inundation), Peret (Emergence or Going Forth), and Shemu (Low Water, Deficiency, or Harvest).
Our Modern Calendar Was (Partially) Created by Julius Caesar
In 46 BCE, Julius Caesar introduced the Julian calendar as a means of reforming the existing and flawed Roman timekeeping system. The Roman Republic’s calendar (which referred to the first day of each month as kalends, the origin of the word “calendar”) not only miscalculated the length of a solar year, but also could be manipulated to control political power, which created further inaccuracies. Caesar’s response was a new system based on calculations by the Greek astronomer Sosigenes of Alexandria, who determined each year consisted of 365.25 days. To account for that .25, the Julian calendar added an extra day every fourth February: leap year. Though the calendar was still imperfect, much of what Caesar put in place remains in use today — including, of course, the name of the seventh month, July.
The Gregorian Calendar Fixed Caesar’s Math
For all its improvements, the Julian calendar still miscalculated the length of a solar year by 11 minutes. That may not seem like much, but the discrepancy added up over time, and by 1582 CE, the months of the calendar and the seasons were misaligned. This concerned Pope Gregory XIII, who noticed that Easter was becoming increasingly out of sync with the spring equinox — a problem for the Catholic Church. In response, the pope launched the Gregorian calendar on October 4, 1582, which set the following day as October 15, jumping the calendar ahead 10 days. Chaos ensued: Catholic countries adopted the reformed calendar, but many Protestant nations waited more than a century to get on board. The British Empire, including its colonies in America, waited until 1752 to switch to the Gregorian calendar, and Russia didn’t adopt the new calendar until as late as 1918, after the Russian Revolution.
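A quick back-of-the-envelope calculation shows why an 11-minute annual error was enough to require a 10-day correction. One detail below is an assumption not stated above: the drift is counted from 325 CE, the year of the Council of Nicaea, when the church fixed the date of Easter relative to the spring equinox, which is the benchmark Gregory’s reform aimed to restore.

$$
\underbrace{(365.25 - 365.2422)\ \text{days}}_{\text{Julian year} \,-\, \text{solar year} \,\approx\, 11\ \text{minutes}} \times \underbrace{(1582 - 325)\ \text{years}}_{1257\ \text{years of drift}} \approx 9.8\ \text{days} \approx 10\ \text{days}
$$

Dropping 10 days in October 1582 therefore returned the spring equinox to roughly where it had stood in the fourth century.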
France Had a Calendar With 10-Day Weeks During the Revolution
In 1793, during the French Revolution, the revolutionary government attempted to create a new secular calendar for the French Republic, keen to do away with Christian associations as part of its break from the ruling elite. Known as the French Republican calendar, it set each month at 30 days, divided into three weeks consisting of 10 days each. The calendar never took off, however. It was abandoned for its Gregorian counterpart on January 1, 1806, after Napoleon Bonaparte became emperor.
7 Items You Would Find in a Doctor’s Office 100 Years Ago
In many historical contexts, 100 years isn’t a very long time. But when it comes to science, technology, and medicine — particularly in the last century — it’s a veritable eternity. The seeds of modern medicine were just being planted in the early 20th century: Penicillin was discovered in 1928, physicians were still identifying vitamins, and insulin was a new breakthrough.
The doctor’s role itself was different than it is today, as preventative care was not yet an established practice; there was no such thing as a routine visit to a doctor’s office 100 years ago. A visit to the doctor typically meant that you were ailing (though in some cases during the Prohibition era, it meant that you and your doctor had agreed on a way around the alcohol ban). Thanks to advances in technology, doctors’ offices in the 1920s were also stocked with very different items than we see today. These are a few things you likely would have found there a century ago.
A metallic disc attached to a headband is generally considered part of a classic doctor costume, but what is the genuine article, exactly? It’s called a head mirror, and your doctor 100 years ago would’ve been wearing one. It wasn’t just an emblem; it served a core function: providing illumination for examining the ear, nose, or throat. The patient would be seated next to a lamp pointed toward the doctor, and the head mirror would focus and reflect the light onto the intended target. Today, the easier-to-use penlight or fiber-optic headlamp has largely replaced the head mirror, though some ENT specialists argue that the head mirror’s lighter weight and cost-effectiveness mean it may still have a place in contemporary medicine.
Floor-Standing Spirometer
One hundred years ago, a spirometer was a large floor-standing unit made of metal, used to evaluate pulmonary function. The patient would breathe into a tube, and a dial on the top would indicate lung capacity and respiratory volume, allowing the doctor to diagnose pulmonary ailments. Today, spirometers are still very much in use, but they are much smaller and made of plastic. In fact, they’re so compact nowadays that patients can hold the entire unit themselves while they’re in use.
Electric Vaporizer
The electric vaporizer was similar to today’s at-home humidifiers, but it was more complex, and could be used to make vapor out of water or other liquid medication. In the doctor’s office, vaporizers were used to treat sinus or bronchial illnesses. Vaporizers were also used in hospital settings in order to administer anesthesia.
Considering that a doctor’s workspace is referred to as a “doctor’s office,” it follows that a classic wooden desk was a standard fixture there 100 years ago. There is something especially archaic-looking about a doctor seated at a wooden desk, though, since today we’re used to nonporous antiseptic surfaces in any space where medical exams or procedures take place. Indeed, it didn’t take long for the doctor’s office to shift in that direction: By the 1930s, most doctors’ offices contained furniture made of enamel-coated metal.
Sometimes referred to as vinegar of ipecac, syrup of ipecac was (in small doses) an early form of cough syrup used as an expectorant to treat respiratory illnesses. In larger doses, it was used as a poison control agent to induce vomiting, especially in pediatric medicine. Available by prescription only in the early 20th century, it was eventually approved by the FDA for over-the-counter sale and recommended as an essential item for households that included young children. In 2004, the FDA began discouraging use of syrup of ipecac due to its lack of efficacy as a treatment for poison ingestion.
The doctor’s bag (also referred to as a physician’s bag or a Gladstone bag) was an essential item in an era when house calls were still part of a general practitioner’s array of services. The bag was usually made of black leather and carried the most important and portable medical equipment: a stethoscope, thermometer, bandages, syringes, a plexor for testing reflexes, a sphygmomanometer to test blood pressure, and more.
The image of an early 20th-century doctor sharpening a knife may seem foreboding, even sinister. But in the doctor’s office setting, the sharpening stone was used to sharpen scissors or knives for cutting bandages, not for any sort of medical procedures. The surgical discipline, meanwhile, used a cold sterilization method that prevented scalpel blades from dulling. Rest assured: Since at least 1915, scalpels have had a two-piece design that enables the blade to be discarded and replaced with a new one after each use, so there’s no need for sharpening.
As tensions rose on Earth during the Cold War, the United States and Soviet Union also vied for celestial supremacy. The space race between the two superpowers began shortly after World War II, and captivated the public until tensions finally eased in the 1970s. With the help of top scientists and talented pilots, Americans, Soviets, and other nations sought to do the seemingly impossible by conquering the final frontier. These decades were marked by scientific achievements and setbacks that make this space-obsessed era one of the most fascinating periods in the 20th century. Here are six facts about the space race.
Fruit Flies Became the First Animal Sent Into Space in 1947
Long before humans reached the stars, fruit flies became the first living organisms to be intentionally blasted into space. Beginning in 1946, the U.S. military conducted a series of experiments at New Mexico’s White Sands Missile Range with future space flight in mind. Utilizing V-2 ballistic missiles — which had been seized from Germany by the U.S. after World War II — the government propelled biological samples such as corn and rye seeds as far as 80 miles into the sky — well beyond the 66-mile distance that NASA now considers the limits of outer space. On February 20, 1947, a capsule containing fruit flies was affixed to one of said missiles and launched to a height of 67 miles above the ground. The flies were chosen to test the effects of cosmic radiation on living beings, and were perfect candidates for a number of reasons, including their small size, minimal weight, and a genetic code analogous to that of humans, containing similar disease-causing genes. As the rocket began its descent, the capsule detached and drifted back down to Earth using a parachute, and the flies remained alive and unaffected.
In November 1969, just four months after Apollo 11 landed on the moon, the Apollo 12 mission took to the skies. But what was scheduled to be a standard launch experienced near-disaster just 36.5 seconds into the flight, as lightning struck the Saturn V rocket. The unexpected event disrupted the onboard control panels, causing astronaut Dick Gordon to confusedly exclaim, “What the hell was that?” before yet another bolt struck at the 52-second mark. With alarms blaring and equipment malfunctioning, the puzzled astronauts continued to troubleshoot the spacecraft while not fully understanding what had happened. Ultimately, the crew shifted the craft to an auxiliary power supply that allowed the mission to continue as planned. Around three minutes into the flight, astronaut Pete Conrad wondered aloud if they’d been struck by lightning, and by the 11-minute-and-34-second mark, the crew was successfully floating in space. With disaster averted, the Apollo 12 astronauts became the second group of individuals to walk on the moon.
Alan Shepard Played Golf on the Moon
While the harrowing stories of Apollo 12 and Apollo 13 are widely known, the Apollo 14 mission produced one of the more lighthearted moments of the space race. On February 6, 1971, during a live broadcast of the Apollo 14 moonwalk, astronaut Alan Shepard produced a retractable six-iron golf club and took four swings on the moon’s surface. Given his bulky spacesuit, Shepard couldn’t grip the club with both hands and swung it solely with his right, causing him to miss the golf ball and connect directly with the lunar surface on both of his first two swings. Shepard hit the ball with his third swing, though it only traveled 24 yards. With his fourth and final shot, Shepard made flush contact and claimed the ball traveled “miles and miles and miles”; in reality, it only reached a distance of about 40 yards, though it remained airborne for longer than it would have on Earth, given the moon’s weak gravity and lack of atmosphere. Back on Earth, Shepard donated the golf club in 1974 to the USGA Golf Museum in New Jersey, where it remains a popular artifact.
The First Joint U.S.-Soviet Space Mission Occurred in 1975
Despite a heated rivalry that lasted more than two decades, the United States and U.S.S.R. worked together on a joint space mission in 1975. The space race had been in full swing since at least 1955, when the two global powers announced their intention to launch satellites into orbit. The Soviets made history by sending the first man into space in 1961, and America landed the first man on the moon in 1969. After years of one-upmanship, tensions began easing in 1972 with the signing of a space cooperation agreement. On July 15, 1975, the nations jointly embarked on the Apollo-Soyuz mission, which served as a symbolic end to the decades-long space race. This mission included two separate spaceflights led by American astronaut Tom Stafford and Soviet cosmonaut Alexei Leonov, who later docked their crafts together in space and exchanged an international handshake. Talk of an international space station began in the 1980s, and assembly of the International Space Station, a partnership that eventually included Russia, began in 1998.
Nations Around the World Joined the Space Race
While the history of the space race largely focuses on the United States and Soviet Union, other nations joined the fray in the late 1970s. The first non-American and non-Soviet pilot to reach outer space was Czech cosmonaut Vladimír Remek, who studied flight in Moscow. On March 2, 1978, Remek took off aboard the Soviet Soyuz 28 spacecraft and headed for the Salyut 6 space station, where he and his co-pilot conducted research for eight days before returning to Earth. Upon his return, Remek was heralded as a hero by his native Czechoslovakia (now Czechia and Slovakia), paving the way for other nations to send humans into space shortly thereafter. Later that year, Polish pilot Mirosław Hermaszewski and East Germany’s Sigmund Jähn both boarded Soyuz missions of their own. In 1980, the Soyuz program also sent the first pilots from Latin America (Arnaldo Tamayo Méndez of Cuba) and Southeast Asia (Phạm Tuân of Vietnam) into outer space.
If you were alive at the time, it’s hard to forget where you were when humans first walked on the moon. While those vivid memories remain, it’s actually been more than 50 years since someone last set foot on the lunar surface. In 1969, Neil Armstrong became the first of the 12 people (all Americans) who have set foot on the moon. The final moon walk to date occurred just three years later as part of the Apollo 17 mission, with astronauts Eugene Cernan and Harrison H. Schmitt. They landed their spacecraft in Taurus-Littrow, a narrow lunar valley deeper than the Grand Canyon. The pair explored the region for seven hours a day over the course of three straight days, and even suffered a minor mishap in the process when Cernan accidentally knocked a fender off their lunar rover with a hammer. The astronauts also discovered orange soil as evidence of lunar volcanic activity, which proved to be one of the most important discoveries of any Apollo mission. Before leaving, they left behind a plaque that reads, “Here man completed his first explorations of the moon.” Nobody has returned to the moon since.