If you’ve ever wandered the aisles of an office supply store, you may have noticed something curious: For all the different pens, folders, and desk gadgets on display, paper doesn’t offer much in the way of variety. In the United States, the go-to sheet of printer paper is 8.5 inches wide and 11 inches long, and it has been for decades. So who made that call?
The answer goes back to the 1600s, when Dutch papermakers used wooden molds to form sheets of paper from big vats of watery pulp. The molds had to be big enough for the vatman — the worker handling the frame — to lift and shake comfortably. Through trial and error, papermakers settled on molds roughly 44 inches long, the average span of a worker’s outstretched arms. When that large sheet was quartered, the resulting pieces measured about 11 inches on their long side.
The origin of the width is less certain, but historians point to the molds’ original 17-inch dimension. Halved, that produced the familiar 8.5-inch width. In other words, the size of the modern office memo may be the legacy of how far a 17th-century worker could stretch their arms.
That still doesn’t explain why this size became the American standard. For that answer, we need to skip ahead to the 20th century, when typewriters, copiers, and printers made uniformity a necessity. A sheet measuring 8.5 by 11 inches accommodated a comfortable line length — 65 to 78 characters after accounting for margins. It also minimized trimming waste when paper was cut down from larger “parent” sheets, which were often 17 by 22 inches.
For a time, the U.S. government complicated matters. In 1921, the federal government adopted 8 by 10.5 inches as the standard size for official letterhead. Around the same time, a separate industry group — part of Herbert Hoover’s effort to reduce waste — settled on 17 by 22 inches as the size of the “parent” sheet. Quartered — halving each dimension — that sheet produced the commercial 8.5 by 11 size that everyone outside the government was already using.
The mismatch persisted for decades, with bureaucracy running on one standard and businesses on another. It finally ended in the early 1980s, when President Ronald Reagan mandated 8.5 by 11 inches for all federal forms, aligning Washington, D.C., with the rest of the country.
As for legal-size paper — standard paper’s elongated 8.5-by-14-inch sibling — that likely owes its extra 3 inches to lawyers who wanted more room for dense contracts. Today, it’s just as likely to appear in restaurants, where that bonus space comes in handy for sprawling menus.
So the real reason your printer uses 8.5-by-11-inch paper? A mix of ergonomics, industrial efficiency, and administrative tidiness — all rooted, improbably enough, in the reach of a 17th-century papermaker’s arms.
Automobile designers and engineers have often pushed boundaries to make faster, sleeker, and more attractive cars. But sometimes that effort results in vehicles so eccentric that the world looks on in bemusement.
These automotive oddities aren’t necessarily bad cars, but they certainly stand out for being well beyond the norm, whether it’s a vehicle so tiny it can fit through a doorway or a propeller-powered safety hazard. Here are seven of the most curious cars ever presented to an unsuspecting public.
In 1913, French biplane designer Marcel Leyat had what he believed was a brilliant idea: Why not put an airplane propeller on the front of a car? The result was the Leyat Helica — basically a wingless plane on wheels, with a massive wooden propeller mounted directly to the front. The first production model appeared in 1921, but despite some initial interest, only 30 were ever built.
Leyat’s car had a few issues, but one stood out: It was spectacularly unsafe. The lightweight vehicle had rear-wheel steering, minimal brakes, a top speed of 106 mph, and a giant spinning blade where most cars would have a grille. Thankfully for pedestrians, pigeons, and anything else that stood in the way of the propeller-driven death trap, the Leyat Helica never took off.
When the visionary architect and designer Buckminster Fuller — inventor of the geodesic dome — turned his attention to cars, he created something both wonderful and strange. Fuller envisioned his land-based prototype as eventually being able to travel in the air and underwater as well, so the streamlined car looked like a cross between a VW camper, a zeppelin, and a torpedo.
Unveiled in the 1930s, the three-wheeled, 20-foot-long omnidirectional vehicle could reach speeds of 120 mph, make a 180-degree turn within its own length, and carry 12 passengers. To many people, it seemed like the future had arrived. But Fuller’s car never went into mass production, partly due to one of the prototypes being involved in a fatal crash.
The Stout Scarab is credited as the world’s first production minivan. Designed by aviation engineer William Bushnell Stout, the streamlined, beetle-shaped vehicle (hence the name “Scarab”) was notable for, among other things, its massive interior space.
Inside the 16-foot-long vehicle, only the driver’s seat was fixed — all other seats could rotate 180 degrees to face one another. There was also a removable table for business meetings or card games, a dust filter to clean the air inside, interior lighting, and power door locks — all incredibly cutting-edge for the 1930s. Perhaps it was a leap too far. Only nine Scarabs were ever built, but the car went on to influence automotive design for decades.
The Peel P50 is one of the smallest cars ever made. At just 4.5 feet long and just over 3 feet tall and wide, it’s smaller than most motorcycles and can fit through a standard doorway. And considering it weighs just 130 pounds, two people could easily carry it to the repair shop if it ever breaks down.
The original, produced between 1962 and 1965, featured a 49cc engine producing 4.2 horsepower, powering a single rear wheel to a top speed of 40 mph. The Peel P50 has no reverse gear — if you need to back up, you simply get out and pull. New P50s are still available to buy — hand-built in the U.K. and shipped worldwide — and remain both utterly impractical and absolutely wonderful.
The Amphicar Model 770 was based on the Volkswagen Schwimmwagen, an amphibious vehicle used extensively by Nazi ground forces during World War II. That’s not the most auspicious of starts, but the Amphicar nonetheless went on to become the most successful civilian amphibious production car ever made, with 3,878 units sold between 1961 and 1968. U.S. President Lyndon B. Johnson famously owned one and enjoyed pranking guests by screaming, “The brakes have failed!” as he drove into the lake on his Texas ranch.
The problem with the Amphicar, and the reason it never sold more, was simple: It was neither a good car nor a good boat. On land, it had poor handling and minimal comfort, while on water it had a max speed of only 7 knots and very poor steering (not to mention problems with rust). Still, it was a whole lot of fun for anyone with the money to buy such an eccentric and impractical vehicle.
If you’ve always aspired to drive a large wedge of cheese, then the Sebring-Vanguard CitiCar is a dream come true. Inspired by golf carts and partly a response to the 1973 oil crisis, this tiny, wedge-shaped electric vehicle was powered by a 2.5-horsepower motor and six 6-volt batteries. It could reach a top speed of 28 mph and had a range of about 40 miles.
Selling for less than $3,000, it was cheaper than most compact cars, and by 1976, the Florida-based Sebring-Vanguard was positioned as America’s sixth-largest automaker. For a while, the CitiCar was the bestselling electric vehicle in the U.S., despite looking like a doorstop on wheels.
The Fiat Multipla is frequently mentioned as one of the ugliest cars ever produced. With its bulbous two-tiered layout and three pairs of buglike headlights, it looks like the offspring of a beluga whale and a cartoon insect. But the Multipla isn’t a bad car. It runs perfectly well, and inside its strangely wide body are two rows of three seats, making it a true multipurpose vehicle with plenty of space for luggage — all things that earned plaudits from the automotive media.
The problem, however, was its peculiar appearance. It sold 79,000 units across Europe in 1999, but sales dropped off quickly — the Fiat Multipla never even reached America, in large part because of its reputation as an eyesore. It was discontinued in 2010, but can still be seen on the road (and purchased secondhand), albeit increasingly rarely. It remains a cautionary tale of what happens when automotive design goes awry.
5 Scientific Discoveries Born From Self-Experimentation
Throughout history, some bold scientists have taken the ultimate research risk when it comes to proving the efficacy of their work: experimenting on themselves. Due to constraints of time, funding, or available alternatives, these brave — some might say reckless — individuals chose to become their own test subjects, exposing themselves to diseases, vaccines, invasive techniques, and new technologies in the name of scientific progress.
While most modern ethics committees would likely never approve such experiments, these acts of courage sometimes led to breakthroughs that have saved countless lives. Here are five major discoveries that came about when experts put their own bodies on the line for science.
In the 1950s, polio outbreaks ravaged the United States. Tens of thousands of cases a year left thousands of Americans — most of them children — paralyzed or dead. During the crisis, American virologist and medical researcher Jonas Salk developed a vaccine that he believed could prevent infection. In 1953, after successful tests on monkeys, Salk made the audacious decision to test the vaccine on himself — and his family. He boiled needles and syringes on his kitchen stove, then vaccinated himself, his wife, and their three young sons. Thankfully for all involved, the family developed antibodies against polio without any adverse effects.
It may seem reckless today, but Salk’s willingness to inject his own children was based on his complete confidence in the vaccine’s safety. His actions helped convince the medical establishment to support large-scale trials. By 1961, the vast majority of American schoolchildren had received the vaccine, all but ending the polio scourge. Salk famously, and altruistically, decided not to patent the vaccine, saying in a TV interview with Edward R. Murrow, “There is no patent. Could you patent the sun?”
In 1921, Evan O’Neill Kane, chief surgeon at the Kane Summit Hospital in Pennsylvania, was in the operating room waiting for his own appendectomy to begin. To the surprise of his staff, who were ready and waiting to operate, Kane announced that he would remove his appendix himself. Reluctant to go against the wishes of their boss, the staff obeyed and stood back.
This wasn’t some kind of bizarre whim on Kane’s part. He had performed more than 4,000 appendectomies in the past, using general anesthesia, which was standard practice at the time. But general anesthesia was considered dangerous for people with heart conditions and other serious ailments, complicating (or ruling out entirely) many basic surgeries for high-risk patients, including appendectomies.
Kane believed that general anesthesia wasn’t necessary in these circumstances, and that local anesthesia was the solution. To prove his point, he removed his own appendix using only local anesthetic. With his assistants standing by, he made an incision in his own abdomen, located his appendix, and removed it while fully conscious. While undoubtedly extreme, his self-appendectomy demonstrated that local anesthesia was viable for abdominal surgery, leading to a wider acceptance of local anesthesia and a reduction in surgical mortality rates.
Effects of Deceleration on the Body
When Air Force Colonel John Paul Stapp set out to discover the precise effects of deceleration on the human body, he went all in. With airplanes increasingly flying higher and faster after World War II, pilots were being placed under ever-greater stresses, and bailouts and crashes were becoming far more dangerous. To help counter this and protect pilots, Stapp and his team began strapping themselves into extremely fast rocket sleds in the name of science.
In one such test, carried out in December 1954, Stapp strapped himself into a sled called Sonic Wind No. 1. Powered by nine solid-fuel rockets, the sled hurtled along a custom-built track, and in just five seconds Stapp reached 632 miles per hour — faster than a .45-caliber bullet fired from a pistol. When the sled’s brakes engaged, Stapp reached a standstill in just 1.4 seconds, experiencing a deceleration force of 46.2 g (46.2 times the force of gravity) — momentarily making his body weigh more than 6,800 pounds. The effect cracked his ribs and burst all the blood vessels in his eyes, making him temporarily blind.
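A rough check of those figures, assuming purely for illustration that Stapp weighed about 150 pounds (the 46.2 g value was the measured peak, well above the average over the full stop):

$$a_{\text{avg}} = \frac{\Delta v}{\Delta t} = \frac{632 \text{ mph} \times 0.447 \ \tfrac{\text{m/s}}{\text{mph}}}{1.4 \text{ s}} \approx 202 \text{ m/s}^2 \approx 20.6\,g, \qquad W_{\text{peak}} \approx 46.2 \times 150 \text{ lb} \approx 6{,}900 \text{ lb}$$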
Stapp’s experiments provided invaluable data regarding human tolerance to extreme forces, and disproved the prevailing medical belief that pilots couldn’t survive forces above 18 g. As well as paving the way for improved safety features in airplanes, his work and advocacy led to the automobile industry adopting stronger safety belts and harnesses, saving countless lives.
In 1929, 25-year-old surgical trainee Werner Forssmann saw a picture in a book that showed a tube inserted into a horse’s heart through a vein. Forssmann believed that such a process could work just as well in humans, but his superiors dismissed the idea as far too dangerous, forbidding him to test the procedure on any patient. So Forssmann, assisted by his operating room nurse Gerda Ditzen, carried out the procedure on himself.
He anesthetized his arm, cut into his antecubital vein, and inserted a catheter. He then calmly walked to the X-ray department, where he advanced the catheter some 65 centimeters until it reached his right atrium, taking X-rays to document the whole process. As toe-curling as the whole thing may seem, Forssmann’s self-experiment paved the way for many heart studies and procedures to come — and earned him a share of the Nobel Prize in 1956.
For a long time, stomach ulcers were blamed on stress, poor diet, or eating too many spicy foods (or a combination of all three). The prevailing wisdom rejected the idea that bacteria could be the cause, as it was believed that bacteria couldn’t survive in the stomach’s acidic environment. Then, in the early 1980s, Australian physician Barry Marshall and pathologist Robin Warren discovered spiral bacteria (Helicobacter pylori) in the stomachs of patients with gastritis and ulcers. Yet the medical establishment dismissed their findings.
So, in 1984, Marshall took matters into his own hands — and into his own stomach. He drank a petri dish’s worth of H. pylori cultured from a sick patient. Three days later, he began experiencing nausea, vomiting, and halitosis. An endoscopy confirmed the bacteria had colonized his stomach and caused inflammation. Marshall then treated himself with antibiotics and soon recovered, proving conclusively that H. pylori can cause acute gastritis, which in turn can cause ulcers treatable with antibiotics. The discovery revolutionized treatment, and in 2005 — when the significance of their work was properly recognized — Marshall and Warren were jointly awarded the Nobel Prize.
From the wheel to the light bulb, innovation has played a central role in the story of human civilization. But history’s inventors also left behind a trail of misfires, failures, and downright disasters. Here are five ideas that promised to make life better, safer, or more efficient, but turned out to be spectacular flops.
Baby Cages
In the early 20th century, crowded cities such as New York and London grappled with widespread tuberculosis. At the time, one common treatment was fresh air, prescribed by figures including the influential pediatrician Luther Emmett Holt. In his 1894 book, The Care and Feeding of Children, Holt wrote that babies exposed to fresh air enjoyed better appetites, brighter cheeks, and improved health.
Enter the baby cage — a wire enclosure fastened to an open window, which allowed apartment dwellers to suspend infants several stories above city streets to “air them out.” The first U.S. patent was granted to Emma Read of Spokane, Washington, in 1922, though the idea had circulated earlier. The baby cage briefly caught on, notably among members of the Chelsea Baby Club in London. Even Eleanor Roosevelt used one for her infant daughter Anna, until a horrified neighbor threatened to call the New York Society for the Prevention of Cruelty to Children. Baby cages declined in the second half of the 20th century, largely due to safety concerns.
In the 1940s and ’50s, shoe stores across the U.S. offered customers a peek inside their shoes. The shoe-fitting fluoroscope was a wooden box that displayed real-time moving X-ray images of customers’ feet. Invented by Boston physician Jacob Lowe in 1919, the device was originally designed to diagnose foot problems in World War I veterans, and was repurposed for retail use after the war.
Though the X-ray technology was real, the device was essentially a marketing gimmick to help shoe sellers boost sales by lending the fitting process a scientific veneer. But that was the least of the invention’s issues — the fluoroscopes emitted dangerous doses of radiation that far exceeded the maximum safe daily dose. As a result of growing awareness around the dangers of radiation, shoe-fitting fluoroscopes were banned by the late 1950s.
Even the illustrious Thomas Edison had his flops. In 1890, the inventor unveiled the world’s first talking doll, a technological milestone that combined phonograph technology with the classic toy. The 4-pound, 22-inch doll had a porcelain head, wooden limbs, and a tin torso that housed a miniature phonograph. Turning a crank on the doll’s back played wax recordings of nursery rhymes such as “Mary Had a Little Lamb” and “Jack and Jill.”
Unfortunately, Edison’s dolls didn’t speak so much as screech. Due to the low recording volume, the women voicing the rhymes were required to scream into the recorder, resulting in creepy recordings that upset customers. The dolls were also plagued by durability issues and were prone to breaking. Parents complained, children were terrified, and many of the 2,650 dolls sold were returned. Within weeks, Edison withdrew them from the market, ruefully calling them his “little monsters.” Today, Edison’s “monsters” are considered rare collector’s items and have been displayed at the Smithsonian.
Science fiction pioneer Hugo Gernsback — founder of Amazing Stories magazine and namesake of the Hugo Awards — was also a prolific inventor. In 1925, he introduced one of his strangest creations: the Isolator, a helmet designed to eliminate distractions and maximize focus.
Made of wood and felt, the helmet completely enclosed the wearer’s head, leaving only narrow slits for vision. After realizing that people became drowsy inside the helmet after 15 minutes, partly due to oxygen deprivation, Gernsback added an oxygen tank connected to the helmet by a tube. He claimed the device was 75% efficient and predicted it would be a “great investment” for anyone in need of focus. Though the Isolator never caught on, its spirit lives on in noise-canceling headphones and focus apps.
Anti-Bandit Bag
In the 1950s and ’60s, inventors sought clever ways to foil thieves — and few were as dramatic as the Anti-Bandit Bag. One of the most famous versions was unveiled by French Canadian inventor John H.T. Rinfret in 1963. Rinfret, inspired by his own experience of being robbed, designed a bag with a spring-loaded handle that let the carrier scatter the bag’s contents across the ground in the event of a robbery.
Other anti-bandit bags spewed smoke and dye or triggered shrieking alarms when snatched. One model, aptly named the “Arrestor,” clamped onto the thief’s hand and blasted out three 12-foot telescoping metal rods, preventing the robber’s escape. None of these designs reached mass production, likely due to their unwieldiness, dubious efficacy, and potential to be triggered accidentally.
Today, being outdoors on a hot, sunny day usually means traveling with a few sun-blocking essentials: sunscreen, sunglasses, and a hat. Though our knowledge of sun damage is relatively recent — it wasn’t until the 1800s that scientists began to understand ultraviolet rays’ harmful potential — humans have always tried to avoid the unpleasant sting of too much sun. Yet the first commercial sunscreens didn’t arrive until the 20th century — before that, people had to find other ways to prevent getting a sunburn.
While it’s hard to pinpoint exactly when people first began actively protecting themselves from the sun, evidence suggests that even in prehistoric times, attempts were made to cover the skin both to stay warm in cold weather and to block the heat of the sun. People covered themselves with animal hides, plant fibers, and later, woven textiles.
By at least 3000 BCE, some societies started to rely on parasols and umbrellas not only as accessories but also for shade; in ancient Egypt, they were often made out of palm leaves or feathers. Egyptians also wore lightweight, loose-fitting linen garments and headdresses to shield themselves from the sun. In ancient Greece, people commonly wore wide-brimmed hats such as the petasos, protecting their faces and necks from direct sunlight.
Early humans also used primitive versions of sunscreen made from natural compounds. Red ochre, a type of claylike iron oxide, has been mixed with water and applied as a paste to the skin since the time of early Homo sapiens. This mixture was used for ceremonial reasons, but scientists believe it may also have served as a physical barrier against the sun.
Ancient Egyptians, meanwhile, used skin treatments made from ingredients such as rice bran (which absorbs UV light), jasmine (to help repair sun-damaged skin), and lupine (believed to lighten the complexion). Ancient Greece had its own approach: Olive oil was commonly applied to the skin between 800 and 500 BCE. While it offered limited protection, modern studies have found it has a natural sun protection factor (SPF) of about 8 — enough to slightly reduce burning, though far lower than today’s common SPF 30 (the minimum recommended by dermatologists).
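As a rough guide to what those numbers mean, using the common approximation that a sunscreen transmits about 1/SPF of burning UV:

$$\text{UV blocked} \approx 1 - \frac{1}{\text{SPF}}, \qquad \text{SPF 8: } 1 - \tfrac{1}{8} \approx 87.5\%, \qquad \text{SPF 30: } 1 - \tfrac{1}{30} \approx 96.7\%$$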
As with ochre, other pastes were made from a variety of natural compounds, including mud and clay. These were used not only as camouflage or ceremonial decoration, but also as protection from the sun. Zinc oxide was used in India as early as 500 BCE, and water reeds and spices were turned into sunscreen by the Sama-Bajau peoples of Southeast Asia around 840 CE. Indigenous peoples in the Americas, meanwhile, used sunflower oil, pine needles, western hemlock bark, and deer fat, while thanaka, a mixture made from ground bark and water, has been used in Myanmar (formerly Burma) for more than 2,000 years.
Over time, in some societies — including in ancient Egypt and later in Europe and parts of Asia — protecting the skin became a mark of social status. A pale complexion signaled that you could afford to avoid outdoor manual labor and instead spend your days indoors or in the shade.
In Egypt, this desired look was achieved through use of parasols and topical skin treatments that blocked the sun. In Europe in the 16th century — particularly in France and England — upper-class women wore striking-looking visard masks to prevent sunburn and preserve that pale complexion.
The visard mask was made of black velvet with a silk interior lining, and its only features were a slight protrusion for the nose and small holes for the eyes and mouth. The masks weren’t just eerie to look at, either — they were rather unsettling to wear. Most versions didn’t have straps and were instead held in place by a bead or button gripped between clenched teeth; the wearer couldn’t speak while the mask was on.
By the early 1700s, visard masks had spread beyond the aristocracy, and beyond their intended purpose of preventing sunburns. Women of various social classes wore them, including sex workers, who often used them to discreetly enter public spaces such as theaters. In 1704, Britain’s Queen Anne even banned visard masks from the theater, but they’d already lost their status among the elite and eventually faded out of use.
By the end of the 19th century, dermatologists confirmed that prolonged exposure to the sun’s UV rays could inflame or burn the skin, and scientists began experiments to develop effective and suitable topical sun protection. In 1878, Austrian physician Otto Veiel promoted tannins — natural compounds found in many plants — as viable sun protection, but they also discolored the skin.
In 1891, a German doctor experimented with what was likely the first true attempt at a chemical sunscreen, a quinine-based ointment to treat skin sensitivity to sunlight. And in the early 1900s, German physician Paul Unna came up with another sunscreen precursor, a paste made of natural ingredients such as chestnut extract. Ultimately, these early products didn’t apply well, either discoloring the skin or going on too thick, and so the experiments to find a better solution continued.
It wasn’t until the mid-1900s that sunscreens began to resemble what we use today. In 1942, the U.S. military tapped the American Medical Association to study products or substances that could help protect soldiers from getting sunburned during particularly hot World War II Pacific campaigns. The solution was a thick, red, veterinary petroleum salve, also known as “red vet pet.” It was waterproof, durable, nontoxic, inexpensive, and, most importantly, relatively effective.
Florida pharmacist Benjamin Green had served as an airman during the war, and in 1944, he began experimenting with ways to make the sticky substance more appealing. He added ingredients such as cocoa butter and coconut oil, creating a smoother, nicely scented lotion — the earliest version of what later became Coppertone. Throughout the 1950s and 1960s, sunscreen formulas continued to improve in texture and ingredients, offering broader protection against UVA and UVB rays, and by the 1970s and ’80s, sunscreen was widely marketed for sunburn protection and as a tanning aid.
Today, sun protection can be as subtle as a swipe of SPF lip balm or as advanced as UV-reflective clothing and tinted window film. The methods may have changed, but the instinct remains the same: When the sun beats down, we find ways to keep cool, stay covered, and avoid the burn.
The earliest cars were a far cry from the high-tech machines we drive today. Even outside of modern amenities such as backup cameras and Bluetooth connectivity, very basic features that we now take for granted didn’t exist — including a way to clear water off the windshield. In rain or snow, early drivers had to get hands-on just to see the road ahead.
In fact, the earliest cars didn’t even have windshields, let alone wipers, and people parked their cars in carriage houses or barns when not in use. In 1886, German engineer Carl Benz patented the first gas-powered vehicle, generally regarded as the world’s first automobile, and as the motor vehicle evolved, so did the need for better design and functionality. In 1908, Henry Ford’s introduction of the Model T marked a major shift, making cars more affordable and accessible for everyday use. Yet even the Tin Lizzie, as it was nicknamed, was not a fully enclosed vehicle.
The Model T was easy to operate and built to handle rough roads, and with a top speed of 40 miles per hour, it made travel far more efficient. But it still had its share of problems. In the winter, radiators, which were filled with water, could easily freeze and crack. In the spring, muddy roads in unpaved regions meant cars could easily get stuck. And no matter the season, there was nothing to shield drivers from wind, rain, dust, or debris. Before windshields were the norm, they were an optional add-on for the Model T and other cars, meaning many early motorists often faced the open road — quite literally — without protection.
Windshields didn’t become standard on most vehicles until around 1915, and while they handily did their job shielding drivers from hazards, they became somewhat of a hazard themselves. Visibility through the simple, straight pane of glass was limited in inclement weather. When rain, snow, or mud obscured a driver’s view, there was no built-in solution to clear it away. The only option? Pull over, get out, and manually wipe the glass. It wasn’t exactly convenient, but there weren’t a lot of options; one other trick drivers used was to rub a carrot, sliced onion, or even a pinch of tobacco across the glass to create a thin film believed to help repel water. Ultimately, though, stopping the car and getting out to wipe the windshield was the most reliable, if cumbersome, way to see.
Luckily, some enterprising minds were working on a better solution. In 1902, Alabama entrepreneur Mary Anderson was visiting New York City when she had an idea while riding the trolley. Trolley cars had windows and windshields long before autos, and on this particular ride she noticed her driver struggling to see through the sleet-slicked windshield. The driver tried a few things, including opening both windshield panes to peer between them, and getting out every few minutes to clear the windows. Inspired to solve this nuisance, Anderson sketched out an idea for a window-cleaning device: a hand-operated lever inside the vehicle that controlled a wiper on the outside.
Anderson received a patent for her invention in 1903, but cars weren’t yet very popular — there were just under 33,000 registered vehicles in the U.S. at the time — and they didn’t all have windshields, so interest was slim. The patent expired in 1920 without ever being commercialized, right around the time the automobile started to become a fixture of American life: Between 1919 and 1929, the number of passenger cars on U.S. roads skyrocketed from 6.5 million to 23 million.
Other similar patents followed, and by about 1916, some cars with windshields also came equipped with wipers, though they were often optional and not yet widely used. That same year, however, Buffalo, New York, engineer John Jepson patented a simple windshield cleaner, similar in essence to Anderson's early design but more complex. Jepson's device consisted of spring-loaded arms that applied consistent pressure to both the upper and lower panes of the horizontally split windshields common at the time.
The idea caught the attention of John Oishei, a local theater manager who vowed to improve driving visibility after hitting a bicyclist with his car while driving in the rain. He partnered with Jepson to market the device, branding it as the Rain-Rubber with the slogan, “Would YOU Drive Blindfolded?”
The Rain-Rubber quickly gained traction, and by the mid-1920s, windshield wipers had become standard on cars from several manufacturers, including Cadillac and Ford. While wiper-related patents have continued to flood in by the hundreds each decade ever since, at their core, windshield wipers remain true to the simple concept Anderson thought up on that New York City trolley back in 1902 — a blade wiping away the rain.
The Untold Stories of 5 Influential Black Inventors
For much of American history, Black inventors have faced significant barriers; in many cases, these innovators were unable to patent their inventions or saw their achievements credited to others. Until the abolition of slavery in 1865, the U.S. patent system was not even available to enslaved people, as they were not considered American citizens. And even after slavery was abolished, numerous barriers remained due to racial discrimination.
Take, for instance, the case of Ellen Eglin, an African American domestic servant and resident of Washington, D.C., who invented an improved clothes wringer. In 1888, she decided to sell her invention for $18 rather than file a potentially lucrative patent for it. In an interview with The Woman Inventor, she said, “You know I am Black and if it was known that a Negro woman patented the invention, white ladies would not buy the wringer.”
Despite these obstacles, many Black inventors have made remarkable contributions to science, technology, and everyday life, shaping our modern world in profound ways. Here are five Black inventors whose stories deserve more attention.
Garrett Morgan
Garrett Morgan helped pave the way for Black inventors, most notably with two inventions that saved countless lives. In 1911, after hearing of a factory fire that killed 146 garment workers, he set to work on a new type of safety hood for firefighters that would allow them to breathe more easily in smoke-filled environments. He patented the design in 1914, and the hood was soon bought by some 500 cities in the northern United States, as well as the U.S. Navy and U.S. Army for use in World War I.
In 1916, Morgan used the hood himself to help rescue workers in a collapsed tunnel under Lake Erie — although most of the credit was initially given to white men who entered the tunnel after Morgan. Later, the inventor witnessed a terrible collision between a horse-drawn buggy and a car at a busy Ohio intersection. The incident inspired him to create a manually operated T-shaped traffic signal with movable arms that signaled stop, go, and an all-directional halt — a precursor to modern traffic lights. He patented the design in 1923 and sold it to General Motors for $40,000, equivalent to more than $700,000 today.
In 1938, Frederick McKinley Jones began designing a portable air-cooling unit for trucks. His invention ultimately revolutionized the transportation industry, with mobile refrigeration allowing trucks and railroad cars to safely transport perishable goods over long distances. As well as fundamentally changing food distribution, the invention also proved vital during World War II, allowing for the transportation and preservation of blood, medicine, and food for military use.
In total, Jones earned more than 60 patents in his lifetime, and despite having minimal formal schooling, he became the first Black American to be awarded the National Medal of Technology.
Charles Richard Drew was a physician and surgeon who invented a method for the long-term storage of blood plasma in blood banks. During the early years of World War II, he organized and directed the blood-plasma programs of both the United States and Great Britain, saving thousands of lives. This led to a national blood-banking program led by Drew and the American Red Cross, and the introduction of bloodmobiles and blood drives as we know them today.
Drew also openly criticized policies that segregated the blood of African Americans from plasma-supply networks — an issue that saw him resign his position with the American Red Cross. It wasn’t until 1950 — the year of Drew’s death in a car accident — that the Red Cross stopped requiring the segregation of blood based on race. (Blood segregation remained in place in Southern states such as Arkansas and Louisiana until the late 1960s and early 1970s.)
In the 1960s, Marie Van Brittan Brown, a nurse living in Jamaica, Queens, began to feel increasingly unsafe when home alone due to skyrocketing crime rates in the area. So, with help from her husband Albert L. Brown, an electronics technician, she began inventing what became the first home security system with video surveillance.
Completed in 1966 and patented three years later, the security system consisted of four peepholes, a sliding camera, television monitors, and two-way microphones. It also incorporated a remote control that allowed for locking and unlocking the door at a safer distance, and an emergency button that would send an alarm directly to the police or security. While not manufactured on a large scale due to high production costs, the invention gained widespread recognition and formed the foundation of modern CCTV and home security systems.
Patricia Bath
Ophthalmologist Patricia Bath revolutionized cataract surgery when she invented laserphaco, a device and technique that could perform all the necessary steps of cataract removal. Overall, the laser device allowed for less painful and more precise treatment of cataracts, and could help restore the sight of individuals who had been blind for more than 30 years. The device is now used worldwide and has improved the vision of millions of people. When laserphaco was patented in 1988, Bath became the first Black female physician to receive a medical patent in the United States.
In 1906, Sears was a flourishing catalog company that had just launched a highly successful initial public offering. The company went public under the name Sears, Roebuck and Co. after completing the construction of an enormous new headquarters and distribution center in Chicago, which totaled 3 million square feet of floor space over 40 acres of land. Sears advertised the new complex as “the largest mercantile plant in the world,” and included illustrations of it on the backs of its catalogs. It was a heady time for the company, but not everything was running smoothly.
Though Sears was growing, its building supplies department was proving unprofitable, and a decision to close it loomed. Manager Frank W. Kushel was appointed to oversee the liquidation of the department, but he instead developed a way to sustain it: All the supplies needed to construct a home were bundled together with blueprints, and shipped directly from the factory. This eliminated the need to warehouse the materials, thus saving costs, while simultaneously creating a bigger-ticket product line. The Book of Modern Homes and Building Plans — the first catalog of Sears mail-order houses — was sent to prospective customers in 1908.
Sears was not the first company to sell kit houses — the Aladdin Company, Montgomery Ward, Lewis Homes, and others were also in the market around the same time — but Sears touted its status as one of the “largest commercial institutions in the world” with its massive distribution center, and promised to save customers between “$500 and $1,000 or more” in building costs, while guaranteeing the quality and reliability of materials. Balloon-style framing, with drywall instead of lath and plaster, reduced the carpenter hours needed to build a house, in turn lowering the total cost for the buyer. In the initial 1908 catalog, 22 home designs were offered, ranging in price from $650 to $2,500 (roughly $20,000 to $80,000 today) and in sizes from modest to grand.
Not surprisingly, delivery of materials was a complex operation. The average buyer didn’t have the space to store all the building pieces at once, so shipments were phased. The lumber and nails for the frame arrived first, in order to allow the roof and enclosure to be built, thus ensuring adequate shelter for the ensuing materials. When the customer was ready, they sent for the next shipment, which included millwork and inside finish. Hardware, paint, and any additional furnishings were the third and final shipment.
The majority of mail-order houses arrived by train; the buyers hauled the materials from the boxcar to their building site, unless they were well heeled enough to pay for the railroad to truck the supplies from the station. The first orders for homes were placed by customers around late 1908 or early 1909.
Sears moved aggressively to improve home offerings and stimulate sales. In 1909, it acquired a lumber mill in Mansfield, Louisiana. The following year, electric lights and gas (high-end amenities at the time) were included in home designs. The next two years saw the completion of an additional lumber mill in Cairo, Illinois, and the acquisition of a millwork plant in Norwood, Ohio.
The new facilities enabled the company to manufacture its entire line of homes using its own sources, which allowed for an expanded number of home designs. By 1912, the Modern Homes department reached an annual sales volume of $2,595,000, which equated to a profit of $176,000. This was enough to wipe out the previous losses from the department’s former building supplies incarnation. Kushel’s plan was a success.
In 1916, Sears introduced the feature that is most often associated with its mail-order homes: ready-cut lumber. Cut in the factory to fit, the lumber did not require any trimming before it was nailed together. This allowed for cost savings for the buyer as fewer carpentry hours were needed (a savings Sears estimated at 40%), and it was advantageous for the company as well: Sears was able to purchase lumber in more economical lengths, as well as use second-grade lumber and convert it to first-grade via trimming. The trimmed pieces were often repurposed for other materials — with the timber waste removed, freight costs were substantially reduced. To further encourage sales, Sears reintroduced its own mortgage program, which had originally been active from 1911 to 1913.
The end of World War I brought with it the end of wartime building restrictions in 1919, and a housing boom ensued, spurred on by the return of soldiers. Sears introduced three tiers of Modern Homes: Simplex Sectional (small two-room structures), Standard Built (less-insulated homes best suited for warmer climates), and Honor Bilt (high-end homes featuring amenities such as cypress siding, kitchens with white tile sinks and enameled cupboards, and “Air-Sealed Wall construction”). Sales from 1912 to 1920 totaled $29,160,000, at a profit of $3,377,000. During that same time frame, around $3.8 million in mortgage loans were written.
During the 1920s, Sears set its sights on maximizing the booming housing market. The company had opened its first sales office specifically for Modern Homes in Akron, Ohio, in 1919, and in 1920, it built a Philadelphia plant that became the base of Sears’ East Coast operations. In 1921, sales offices opened in Pittsburgh, Cleveland, Cincinnati, and Dayton, followed by sales offices in Chicago, Philadelphia, and Washington the following year.
The offices used the Sears mortgage program as a major selling tactic, steering buyers toward mortgages, and the company’s loan policy became more and more lenient in response to sales pressure. Accordingly, sales were booming, increasing from an average of 125 units shipped per month in 1920 to 326 units in May 1926 from the Cairo plant alone. But by 1926, mortgage loans represented 97% of sales.
The year 1929 was set to be a banner year for Sears Modern Homes. The department had sold 49,000 houses from its inception up to that point. It was averaging 250 units shipped per month just from its Cairo plant, and had a sales staff of 350 people in 48 different offices throughout the country. The $12,050,000 in sales for 1929 represented the highest total the department had ever had. But when the stock market crashed in October 1929, eventually plunging the United States into the Great Depression, Sears Modern Homes was one of the many businesses that saw the bottom fall out.
The decline was swift. The Sears annual stockholder report for 1932 plainly stated, “Since September 1931, [the Modern Homes department] has operated at a loss. Its sales declined over 40% in our fiscal year, with resulting losses of $81,154,984 during the year.” Two years later, the annual report relayed the dire state precipitating the end of the department: “About $11,000,000 in mortgages were liquidated during the year and the Modern Homes Department was discontinued.”
The department was reopened in 1935 under a different configuration, selling prefab houses by Chicago’s General Houses, Inc., and the Modern Homes catalog continued to be issued until 1940. Modern-day Sears house enthusiasts cite 1942 as the final year a Sears mail-order home was built, as orders from the final 1940 catalog continued to be filled in the year or so after the catalog was published. Unfortunately, Sears’ own records of home sales were destroyed, making a definitive end date impossible to confirm.
Though the 1929 stock market crash and Great Depression provide a tidy explanation for the beginning of the end of the Sears Modern Homes department, historians Boris Emmet and John E. Jeuck made a different assessment in their seminal 1950 book, Catalogues and Counters: A History of Sears, Roebuck & Company. The book points out the unsustainability of the business model. Higher sales targets for Modern Homes necessitated building (or acquiring) more manufacturing plants to handle the increased production. The costly production facilities required investments in shipping and transportation, too, along with an ever-larger and more complex service organization to support the business.
The push for sales meant relying on ever more lenient mortgages and approving otherwise undesirable credit risks, creating what Emmet and Jeuck called an “ever increasing and ever more unsound mortgage receivable structure.” Ultimately, the business was simply not scalable: As sales increased, profit margins declined. The peak profit margin for Modern Homes was 15% in 1919 and 1920. It dropped to an average of approximately 10% from 1921 to 1926, and fell under 5% by 1928. The narrowing margins left the business unable to weather a slowdown in sales, let alone a large economic crisis. As Emmet and Jeuck wrote, “At any time subsequent to 1926, the onset of a [sales decline] would probably have developed losses comparable to those which actually were incurred after 1929; the condition was inherent in the structure.”
How Did the Canary Come To Be Associated With Coal Mines?
Humans have been mining since prehistoric times, when flint was excavated for its use in tools and weapons. Since then, we have gone on to mine all manner of minerals, from copper and gold to the rare earth elements used to create the components in many of our modern devices.
Yet mining, and underground mining in particular, is extremely dangerous, with risks such as cave-ins, explosions, toxic air, and extreme temperatures. It may seem strange, then, that such a risky profession is associated with the small and sprightly songbird Serinus canaria, otherwise known as the canary.
Just how did this tiny, tuneful member of the finch family become connected with going deep down into the perilous dark of the world’s coal mines? Well, as it turns out, we owe quite a debt to this brave little bird.
The link between canaries and coal mines began with the British physiologist and philosopher John Scott Haldane (1860-1936), a pioneering specialist in the physiology of respiration. Haldane’s many contributions to the field include his investigations into decompression sickness, which helped to improve safety for undersea divers, and an early gas mask designed to protect soldiers against poison gas in World War I. And it was Haldane who first proposed an innovative safety measure for miners. Following his investigation into the cause of an 1896 explosion at Tylorstown Colliery in Wales, Haldane concluded that carbon monoxide buildup was to blame for the disaster. So, he suggested using mice or birds to monitor gas levels in the mines, as he knew that these animals were far more sensitive than humans to poison gases.
Heeding Haldane’s recommendation, British miners began taking canaries into the coal mines — carried in small metal or wooden cages — to detect the presence of odorless carbon monoxide. If the canary showed any signs of distress (or if it suddenly died), it was a clear sign to the miners that conditions were unsafe and the mine should be evacuated. The use of canaries in this way is perhaps the most common example of what is known as a sentinel species, a term applied to animals or plants that serve as harbingers of danger to human health or the wider environment.
It wasn’t long before the use of canaries in coal mines became commonplace not only in Britain but also in the United States and Canada. The birds became such a trusted part of the mining process — part of the team, you could say — that miners were known to treat their avian companions almost as pets. In some cases, special cages were designed for the canaries that could be used to resuscitate the poor little birds should they succumb to noxious gases. These metal-and-glass cages were fitted with a small oxygen tank. If the canary inside began to show signs of carbon monoxide poisoning, the cage door was closed and a valve opened to revive the bird with a fresh supply of oxygen.
Canaries were used in British coal mines for almost a century. Then, in 1986, hundreds of canaries breathed a collective sigh of relief when Britain announced that the use of birds in coal mines would stop, to be replaced with modern detectors, often known as “electronic noses.” But to this day, canaries retain their connection with coal mining, as much a symbol of the profession as coal-blackened faces, hard hats, and pickaxes. The canary’s role in the mines was so significant that it also became ingrained in the English language: The common idiom “a canary in the coal mine” is still used to refer to something that gives an early warning of danger.
5 Inventions You Didn’t Realize Came From Ancient Rome
In 500 BCE, Rome was nothing more than a minor city-state on the Italian Peninsula. But with its eyes set on expansion, Rome began to conquer its neighbors until it controlled all of Italy. It didn’t stop there. It became an empire in 27 BCE, and at its height — around 100 CE — the vast and immensely powerful Roman Empire stretched from Britain to Egypt.
Rome’s influence on the world was both widespread and long-lasting. The Romans were great innovators and inventors, sometimes appropriating and advancing aspects from other cultures, and other times inventing entirely new technologies and systems. These innovations covered a wide range of fields, including state institutions, cultural practices, and engineering techniques.
The Roman Empire eventually fell in 476 CE, but its legacy and influence carried on — all the way to the present day. Some of Rome’s most famous innovations, such as sanitation systems and road networks, are well known and still very much in evidence; in the United Kingdom, for example, many modern roads still follow the routes laid down by the Romans. Other Roman innovations, however, are more obscure. Here are five inventions that continue to shape our modern world, but that many people don’t realize originated in ancient Rome.
The First Bound Books
In the ancient world, the first written documents were typically recorded on clay or wax tablets, or on sheets or scrolls of papyrus. The Romans also used scrolls, but during the first and second centuries CE, a new form of storing and accessing information emerged: the codex-style book. These notebooks, known as pugillares membranei (roughly translating to “parchment notebooks”), were formed by stacking pages — typically made of vellum or papyrus — that were then joined along one set of edges, much like modern books. They were mainly used for personal writing, and represent the first true form of the bound book. The codices soon became popular throughout Western Europe and the Middle East, eventually superseding scrolls and tablets.
The Romans were great pioneers in the field of surgery. We know from archaeological evidence — including well-preserved artifacts found in the buried remains of Pompeii and Herculaneum — that the Romans used precision medical instruments including bone forceps, catheters, obstetrical hooks, scalpels, and surgical scissors. The level of technology found in some of these tools is not so far from their modern counterparts. The Roman version of the vaginal speculum, for example, did not change significantly until the 20th century. The Romans also had a medical military corps with specialized field surgeons who were tasked with keeping the legions fit, healthy, and alive.
Underfloor Heating
Heated floors might seem like a modern luxury, but the Romans started using them 2,000 years ago. Ancient Romans used an underfloor heating system known as a hypocaust, which drew in hot air from a wood-burning furnace outside the house and channeled it into a chamber below the floor. This not only warmed the floor itself but radiated heat throughout the home. Many examples of hypocausts can still be found in the foundations of villas and townhouses in Roman centers in Germany and England, where cold winters would have made the toasty floors especially inviting.
Markets have long been prevalent throughout the world as a gathering place for people to sell and buy all sorts of goods and produce. But the first permanent and covered shopping mall was likely built by the Romans: Trajan's Market (Mercatus Traiani), constructed between 100 and 110 CE. Made of red brick and concrete, it had six levels that housed around 150 different shops, as well as government offices and living accommodations. A street on the upper level, meanwhile, was named Via Biberatica after the Latin word for “drink,” suggesting that even the Romans needed to unwind after a hard day’s shopping.
Most of the world now uses the Gregorian calendar, first introduced in 1582 by Pope Gregory XIII. But credit for the modern calendar should really be given to the Romans. The Gregorian calendar is based on the much earlier Julian calendar (which itself owed a lot to the Egyptian solar calendar), introduced by Julius Caesar in 45 BCE. The calendars share many similarities, though one of the main differences is the treatment of those pesky leap years. The Gregorian calendar handles them more precisely, resulting in a discrepancy between the two calendars that is currently 13 days, and will become 14 days in 2100.
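That drift follows directly from the two leap-year rules. The sketch below is a minimal Python illustration (not from the article; the year-201 starting point reflects the convention that the proleptic Julian and Gregorian calendars coincide during the third century CE) showing how the current 13-day gap arises and why it grows to 14 days in 2100:

```python
def is_julian_leap(year: int) -> bool:
    # Julian rule: every fourth year is a leap year.
    return year % 4 == 0

def is_gregorian_leap(year: int) -> bool:
    # Gregorian rule: every fourth year, except century years not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def drift_days(up_to_year: int) -> int:
    # Each year that is a leap year under the Julian rule but not the Gregorian
    # rule pushes the two calendars one more day apart. The proleptic calendars
    # agree in the 200s CE, so extra Julian leap days are counted from 201 on.
    return sum(
        1
        for year in range(201, up_to_year + 1)
        if is_julian_leap(year) and not is_gregorian_leap(year)
    )

print(drift_days(2025))  # 13 -> the current 13-day discrepancy
print(drift_days(2100))  # 14 -> the gap grows by one day after February 2100
```

In other words, every century year not divisible by 400 (such as 1700, 1800, 1900, and next 2100) adds one more day to the gap between the two calendars.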