Since the fourth millennium BCE, when urban civilizations first appeared in ancient Mesopotamia, humans have strived to achieve proper dental hygiene. Yet the nylon-bristled toothbrush we use today didn’t come along until the 1930s. For the thousands of years in between, people relied on rudimentary tools that evolved with scientific knowledge and technological advancements over time. Some of the earliest toothbrush predecessors date as far back as 3500 BCE. Here’s a look at how people kept their teeth clean before the modern toothbrush.
Sometime around the year 3500 BCE, the ancient Babylonians (in the region of modern-day Iraq) created a tool known as a “chew stick.” This simple, handheld piece of wood is considered the earliest known direct predecessor to the toothbrushes we use today. Chew sticks were wooden twigs cut to approximately 5 or 6 inches long. One end of the stick was then softened in boiling liquid to help separate the fibers, creating an almost brushlike effect. Individuals would chew on these sticks to freshen their mouths, as the frayed fibers would slide between the teeth and help loosen debris. Many early Arab cultures used a specific shrub called Salvadora persica (also known as the “toothbrush tree”) to create their chew sticks, which they called miswak. The shrub was particularly aromatic and was thought to have a stronger mouth-freshening effect than other plants.
Around this same time, civilizations in Mesopotamia, Egypt, and elsewhere in the ancient world also used early versions of a toothpick to keep their teeth clean. These were often made of thin pieces of wood, though in later years, wealthy individuals began crafting toothpicks from brass and silver for added opulence and durability. In ancient Greece, toothpicks were known as karphos, roughly meaning “blade of straw,” suggesting the Greeks may have used coarse fibers such as straw in addition to wood.
Before handheld teeth-cleaning devices became widespread, many ancient civilizations relied on a dental cream made of various ingredients, including the ashes of ox hooves, myrrh, eggshells, and pumice. This cream was first developed in ancient Egypt sometime before 3000 BCE, and was mixed with water and then rubbed into the teeth to help remove debris. Around 1000 BCE, flavorful ingredients such as herbs and honey were added to the mix by the Persians. Another thousand years after that, the ancient Greeks and Romans improved the cream even further by adding abrasive elements such as crushed bone and oyster shell to really help get rid of stuck-on debris.
The world of dental hygiene was forever changed thanks to the ingenuity of ancient China. During the Tang dynasty (618 to 907 CE), coarse hair fibers from the back of a hog’s neck were attached to a handle made of bamboo or bone, creating an early handheld toothbrushing device. Centuries later in 1498, Emperor Hongzhi of the Ming dynasty formally patented this hog’s hair toothbrush. For the next several centuries, artisan toothbrush makers around the world drew inspiration from ancient China and continued to heavily rely on coarse boar’s hair. In fact, toothbrush bristles didn’t see another major change until the middle of the 20th century.
Boar’s hair toothbrushes didn’t really take off in Western Europe until the 17th century — before that, Europeans often relied on toothpicks, as well as rags rolled in salt or soot, as they believed the coarse substances helped remove debris from teeth. But around the mid-1600s, French dentists began pushing people to take better care of their teeth as they made scientific advances on the topic of dental hygiene. In 1649, an English politician named Ralph Verney, who was living in Paris at the time, was asked by a friend back in England to see if he could get his hands on some French-made “little brushes for making cleane of the teeth,” which were becoming more common throughout the region.
There was a bit of pushback in British society against using toothbrushes, however, as some people worried about the cost and hassle of replacing devices that wore down over time. Some promoted cheaper alternatives that could be easily crafted at home instead — the 1741 British text The Compleat Housewife advised wrapping a cloth around your finger and using that to clean your teeth, a method that had also been used in ancient Greece and Rome. But these detractors were in the minority, and the production of handheld tools began to expand. By 1780, Englishman William Addis had figured out how to improve upon the toothbrushes then available, which used locally sourced animal fibers. Addis began importing boar’s hair directly from Siberia and northern China, as the boars from those cold climates had coarser, more durable hair to withstand the elements, which was believed to make them more effective at cleaning teeth. Addis founded his namesake company — which is still in operation today — shortly thereafter and began successfully producing toothbrushes on a much larger scale.
As brushing one’s teeth became increasingly popular in Britain, that fervor spread to the British colonies in the Americas. Toothbrushes even caught the eye of future Presidents George Washington and Thomas Jefferson, the latter of whom instructed a London-based colleague to bring him “½ doz. Tooth brushes, the hair neither too strong nor too weak” as well as “½ doz. do. with the strongest hair, such as hog’s bristle.”
It wasn’t until the 19th century, however, that toothbrushes really became popular among everyday Americans, who previously relied on coarse cloths, toothpicks, and other rudimentary methods that were common back in Europe. This was in part due to the rise of a middle class with disposable income to spend on dental care products. The Industrial Revolution also increased commercial production of toothbrushes, which made the product available on a wider scale. On November 7, 1857, dental school graduate H.N. Wadsworth became the first American to successfully patent a toothbrush design, and large-scale production of toothbrushes began throughout the country by the mid-1880s. One of the earliest known American-made toothbrushes was produced by the Florence Manufacturing Company of Massachusetts. Known as the “Pro-phy-lac-tic,” the brush looked quite similar in design to many modern toothbrushes, though coarse animal hair was still used for the bristles.
The First Nylon Toothbrush
On February 24, 1938, the U.S.-based DuPont chemical company changed the world of dental care with the debut of a new toothbrush featuring delicate nylon bristles instead of coarse hog’s hair. It was the first nylon toothbrush in the world, fitting given that DuPont had invented nylon in the first place. The synthetic bristles were just as effective at cleaning teeth, if not more so, and made the brush more pleasant to use. The first commercial nylon-bristled toothbrush was called Dr. West’s Miracle Tuft, and DuPont maintained exclusive rights to nylon bristles until the 1940s. Once that exclusivity expired, other manufacturers began incorporating the material into their own designs, and nylon bristles eventually became the standard.
Since the middle of the 20th century, Las Vegas has been known as the capital of the American id. Gambling has long been at the center of its appeal, as nicknames such as “Sin City” and “Lost Wages” suggest. “What happens in Vegas stays in Vegas” is the city’s well-known slogan, while others have remarked, “Las Vegas is where losers come to win, and winners come to lose.”
Rising up from the Nevada desert, the city’s built environment is so extravagant that it’s difficult to imagine a time when its spectacle did not exist, fully formed. Let’s go back and trace the origins of this uniquely American city.
Even though Las Vegas occupies a unique place in American culture, its metropolitan origin was sparked by the same thing that gave rise to many other U.S. cities: the development of the railroad. The area that includes present-day Nevada became a United States territory with the signing of the Treaty of Guadalupe Hidalgo in 1848, which ended the U.S. war with Mexico. Despite its location in the basin of the Mojave Desert, the site of what is now Las Vegas was a sort of oasis — a valley that included a water source in the form of artesian springs.
The water source was the selling point for railroad magnate and U.S. Senator William Clark. In 1902, he bought 2,000 acres of land and water rights in order to create a waypoint for the San Pedro, Los Angeles & Salt Lake Railroad, which he incorporated to connect those cities. Construction of the line through Nevada began in 1904, and the following year, Clark auctioned off parcels of his land, which was located east of the railroad tracks.
Around the same time, civil engineer John T. McWilliams was attempting to build a township west of the railroad tracks. Though he was working with far less acreage than Clark — 80 acres to Clark’s 2,000 — the development provoked competition and intensified Clark’s efforts to build his township. Clark offered refunds on the $16 train fare to town in order to attract buyers. Newspaper advertisements promised, “Get into line early. Buy now, double your money in 60 days,” though accounts differ on which of the two men commissioned the ad.
Ultimately, McWilliams couldn’t really compete. After all, Clark owned the water rights and far more land, and he had a major stake in the railroad. On September 5, 1905, a fire almost completely consumed McWilliams’ townsite and ensured that the competition between the two was short-lived; development would be concentrated east of the railroad tracks, on Clark’s land. Clark formed the Las Vegas Land & Water Company with his partners, and vowed, “I will leave no stone unturned and spare myself no personal effort to do all that lies within my power to foster and encourage the growth and development of Las Vegas.”
Clark’s dramatic statement might sound like a natural lead-up to building the bombastic city we know today. But that’s not quite what happened. Over the next 25 years, Las Vegas settled into an existence as a quasi-company town, with the railroad and mining as its main industries and a population of about 2,300. Clark sold his share of the railroad to Union Pacific in 1921 and lived in retirement for four more years until his death at age 86.
The 1920s were a tumultuous decade for Las Vegas nearly from the outset. In 1921, Union Pacific cut 60 jobs in the wake of its acquisition of the railroad. President Warren G. Harding’s incoming administration also meant new appointments to the Railroad Labor Board, and the board approved a series of wage cuts for railroad workers. In the meantime, a post-World War I downturn in mining further impacted Las Vegas. Then, in what is largely viewed as retaliation for local workers’ participation in a nationwide railroad strike, Union Pacific moved its repair shops out of Las Vegas to Caliente, Nevada, costing hundreds of jobs.
With a dire economic outlook impacting the entire state, Nevada revisited the legalization of gambling, which had been legal in the state from 1869 until 1910. With more public support for relegalizing gambling than previous efforts had enjoyed, a bill to legalize “wide open” gambling passed in both the state Assembly and Senate, and on March 19, 1931, Governor Fred Balzar signed it into law. That same year, divorce laws were loosened to permit anyone with a six-week residency in the state to legally divorce. And just one year earlier, construction had begun on the Hoover Dam, bringing an influx of thousands of workers to the area, many of whom would take the short trip to Las Vegas to try their luck with the newly legalized games. With this confluence of events, the Las Vegas we know today began to take shape.
Organized Crime and the Strip
The decriminalization of gambling made Las Vegas an attractive destination to experienced gambling operators, some of whom were running criminal enterprises in other states. One such figure was the archetypal crooked cop Guy McAfee, a Los Angeles vice squad officer who fled to Las Vegas to escape prosecution for running gambling and prostitution rings — the exact vice he was supposed to be policing. Arriving in town in 1938, he bought the Pair-O-Dice Club on Highway 91 and renamed it the 91 Club, delaying its grand opening to 1939 in order to coincide with Ria Langham's six-week residency for divorcing Clark Gable.
McAfee was responsible for two enduring pieces of Las Vegas culture: He opened the Golden Nugget on Fremont Street, ushering in an era of grandiose casinos, and he is also credited with nicknaming Highway 91 “the Strip.” The Golden Nugget opened in 1946, about a year after the Nevada Legislature created the first casino license stipulating a 1% tax rate on gross gaming revenue in excess of $3,000.
The lucrative gaming industry began to attract heavier organized crime players beyond McAfee. Benjamin “Bugsy” Siegel arrived in Las Vegas intending to create a base of operations for the notorious Syndicate, which, at the time, was led by Meyer Lansky during a period when Salvatore “Lucky” Luciano was in prison. Using funds from the Syndicate, Siegel became the primary stakeholder in the construction of a casino on Highway 91 to rival the Golden Nugget. Siegel wanted it to depart from the Old West aesthetic of most casinos of the time and instead be patterned after the tropical resorts the Syndicate backed in Havana, Cuba. He dubbed it the Flamingo and hoped it would set a new standard for opulence in line with his own worldview. “Class, that’s the only thing that counts in life,” he once said. “Without class and style, a man’s a bum, he might as well be dead.”
Lavish attention to detail and poor business management contributed to enormous cost overruns, and bad luck compromised the Flamingo’s opening and its ability to quickly recoup costs. Perhaps because of the money, or perhaps for any of a number of other motives still debated to this day, Siegel was gunned down while reading the newspaper in a Beverly Hills mansion on June 21, 1947. The murder was a national sensation, covered in tabloids and TIME magazine alike. LIFE magazine ran a gruesomely iconic full-page photo of the crime scene in its article about the murder. The case, Crime Case #46176 in the Beverly Hills Police Department, is still open and unsolved.
Within minutes of Siegel’s murder, in a tellingly quick transition, other Syndicate bosses took over the Flamingo. The resort eventually became profitable — so much so that the Syndicate began building more casino-resorts on the Strip. Organized crime had taken hold in Las Vegas, and the era of the swanky, entertainment-oriented hotel-casino was born. The mob invested in more casinos; the Sands Hotel and Casino opened in 1952 and brought in the “Rat Pack” (Frank Sinatra, Dean Martin, Sammy Davis Jr., Joey Bishop, and Peter Lawford) for a high-profile residency. The Dunes, Riviera, and New Frontier opened in 1955; the Tropicana followed in 1957, and the Stardust opened a year later. Each had ties to organized crime syndicates from around the country.
Despite the sensational murder of Bugsy Siegel, the mob’s involvement in casinos, hotels, restaurants, and other Vegas businesses expanded, and more gangsters arrived in the city throughout the 1960s and ’70s. But in the late ’60s, billionaire Howard Hughes bought a series of mob-connected casinos — the Desert Inn, the Sands, the Castaways, the Frontier, the Silver Slipper, and the Landmark — shifting the balance of casino ownership in the city from mob-connected to corporate-owned. In 1969, the Nevada Legislature passed the Corporate Gaming Act, which made it practical for publicly traded corporations to own casinos, and in 1970, Congress passed the Racketeer Influenced and Corrupt Organizations Act (commonly known as RICO), which aided the U.S. Justice Department in cracking down on organized crime.
During the ’70s, high-profile car and restaurant bombings between rival gangs unsettled the city to the point of attracting the attention of the FBI. The Nevada Gaming Commission and the Nevada Gaming Control Board refocused on organized crime, and Governor Mike O’Callaghan made it a point of emphasis. A RICO case focused on mobster Anthony Spilotro and Frank “Lefty” Rosenthal, whose connections ran from Chicago mob families to others throughout the Midwest. By 1981, Spilotro’s operations had been broken up, and the mob was all but finished in Las Vegas.
The Rise of the Corporate Mega-Resort
Billionaire businessman Kirk Kerkorian bought the Flamingo in 1967, and in 1969, he opened the massive International Hotel. It was the largest hotel in the country, with 1,500 rooms and a 4,200-seat showroom. For its grand opening, he brought in Barbra Streisand, and then followed that by bringing in Elvis Presley for a famed residency — 837 consecutive sold-out performances over seven years — that set an enduring record. The same year, Kerkorian bought Hollywood’s venerable MGM Studio, and set out to build a themed resort in Las Vegas based on the production house.
With all of the buying and building, Kerkorian incurred enormous costs, so to help balance the ledger, he sold the Flamingo (and later the International Hotel as well) to the Hilton Hotel Corporation. The success of the Flamingo Hilton caught the attention of other major hotel corporations, such as Sheraton and Holiday Inn, and they too began opening casino-hotels in the city. In 1973, Kerkorian opened the original MGM Grand Hotel-Casino, which eclipsed the International Hotel in grandeur, boasting 2,100 rooms, eight restaurants, two showrooms, and the (at the time) world’s largest casino. It was the largest resort in the world, and Las Vegas’ first mega-resort.
During the rest of the ’70s and into the ’80s, development on the Strip stagnated. But Las Vegas itself was growing: From 1985 to 1995, the city’s population nearly doubled, increasing to around 368,360. In 1989, using junk-bond financing, developer Steve Wynn reinvigorated the Strip by building the most ostentatious mega-resort yet: the Mirage Resort and Casino. A 29-story Y-shaped tower with 3,044 rooms, a 1,500-seat showroom, and waterfalls, it also featured a simulated volcano that would “erupt” every 15 minutes after sundown. That same year, Kerkorian announced plans for a new MGM Grand, which was completed in 1993 and took the mantle as Las Vegas’ largest casino, with even more over-the-top touches, including a lion zoo and a heavyweight boxing arena.
An Entertainment Capital
The 1990s were a transitional era in Vegas, as many of the midcentury casino icons were razed in favor of new family-friendly mega-resorts, representing a commitment to broader entertainment tourism rather than gambling alone. The Sands was imploded and replaced by the Venetian; similarly, the Dunes was replaced by the Bellagio, and the Hacienda was replaced by Mandalay Bay Resort. In true Las Vegas fashion, each implosion was a spectator event. The Hacienda implosion was even scheduled for 9 p.m. on December 31, 1996, in order to coincide with the stroke of midnight on the East Coast. Most of the casino implosions were televised, and the videos can still be viewed on local TV news channel websites.
Today, Las Vegas continues to broaden its scope. Professional sports leagues have ended their historical aversion to placing teams in the city, as seen in the NHL awarding the city an expansion team, the Vegas Golden Knights, in 2017, the WNBA’s San Antonio Stars relocating to Las Vegas and becoming the Aces in 2018, and the NFL’s iconic Raiders franchise relocating to Las Vegas in 2020. Major League Baseball’s Athletics are likely to follow. Las Vegas is now also known for an excellent fine-dining scene, with a number of chefs named semifinalists for the 2024 James Beard Awards. And the only place in town the mob exists now is in a museum.
One hundred years is a long time in the life of a city. New technologies emerge and wane, people come and go, cultural factors ebb and flow. But not all cities change at the same rate; some stay comparatively similar to their older incarnations, while others become drastically different. Here’s a glimpse at what a few iconic state capitals looked like a century ago.
Atlanta, Georgia
Atlanta was named after the Western and Atlantic Railroad, for which it was a terminus. In the early 20th century, the city was well established as a major railway hub, and the downtown was built around its first train station. Hotels were concentrated in an area near the station (called, fittingly, Hotel Row) in order to serve train travelers, and by the 1920s, masonry high-rises created the city’s skyline.
Like many cities during this period, Atlanta was beginning to expand its roads in order to accommodate increasing numbers of cars. In the 1920s, the city built three major viaducts to allow traffic to bypass the high number of railroad crossings. The Central Avenue, Pryor Street, and Spring Street (later renamed Ted Turner Drive) viaducts not only improved vehicle safety, but also led to development outside the city’s downtown core.
Though Boston was established as a colonial port city as early as 1630, a wave of immigration between 1880 and 1921 fueled a population boom and a sense of transition similar to what many younger cities were facing at the time. The expanding population spurred a building boom, and changes wrought by the Industrial Revolution were at the forefront. The industrialization of nearby Springfield, Massachusetts, led to a high population of mechanics and engineers in that city, which became a hub for the nascent automotive industry. Rolls-Royce selected Springfield as the site of its U.S. factory, and many other early auto manufacturers were based in the area. In fact, Massachusetts claimed to have manufactured more cars at the beginning of the 20th century than Detroit, Michigan. Cars were particularly popular in Boston — more so than in many other cities — and 1 in 8 Bostonians were car owners by 1913. This led to the construction of a large number of buildings dedicated to automobiles, including garages, repair shops, car dealerships, and more.
In terms of architecture, the city’s affluent Beacon Hill neighborhood appears very similar today to how it looked in the 1920s, with well-preserved colonial-style and Victorian buildings. However, little remains of Boston’s once-abundant theater district, which reached a peak count of 40 theaters by 1935.
Nashville has a storied history as a center of American popular music, but that history was in its very infancy 100 years ago. The famous Grand Ole Opry didn’t begin until the middle of the 1920s, first broadcasting as “the WSM Barn Dance,” and at the time, it was hardly the institution it would become later. In those days, it was purely a radio show broadcast out of the WSM studio on the fifth floor of the National Life and Accident Insurance building, with only as many spectators as could fit in the limited confines of the station’s Studio A.
Unlike other major capitals, Nashville wasn’t a city of high-rises — the 12-story Stahlman Building was the tallest building from the time of its completion in 1908 until the L&C Tower was built in the 1950s — and many of the low-rise brick and masonry buildings from the last century are preserved today. This is particularly true along First Avenue, fronting the Cumberland River, and along Second Avenue, formerly known as Market Street.
Though Austin’s population began steadily growing around the end of the 19th century, in 1920 it was only the 10th-largest city in Texas, with a population just under 35,000. Its visual focal point was the Texas State Capitol Building (the UT Tower didn't exist yet), and the surrounding downtown consisted of low- and mid-rise buildings with brick or stone facades — an aesthetic that was more “Main Street” than “metropolis.” Cars weren’t quite as dominant in Austin as in larger cities of the time, and horse-drawn carriages were still seen on the streets.
Phoenix is another city that had a relatively small population in 1920 — just around 29,000 — but it was still the largest city in a state that had existed for only eight years. Because of this, Phoenix had the flashiness and bustle of an up-and-coming city, despite its small size. The city’s first skyscraper, the Heard Building, was even the site of a stunt climbing performance shortly after it was built. Nonetheless, the Heard Building’s seven stories might not have passed for a skyscraper in larger cities. The 10-story Luhrs Building surpassed it in height when it opened in 1924, and the 16-story Hotel Westward Ho became the city’s tallest building in 1928. It held that title for more than 30 years, as the vast availability of land surrounding Phoenix disincentivized vertical construction in favor of outward expansion.
Sacramento is often overshadowed by other iconic California cities, but 100 years ago it boasted a downtown of ornate classical architecture, was home to the largest manufacturing train yard in the western United States, and served as a major retail hub for the region. Vital downtown structures of the time — such as Sacramento City Hall, Memorial Auditorium, the California State Life Building, and the Federal Building — were all built during a construction boom that occurred between 1912 and 1932. But there isn’t much evidence of this architectural period today, as even some surviving buildings, such as Odd Fellows Hall, have been remodeled with simpler midcentury-style facades.
The use of quill pens dates back to the sixth century CE, when the feathers of large birds — primarily geese, turkeys, swans, and even crows — replaced the reed pens that had been used previously. Though it’s an obsolete writing utensil today, the quill pen remains a symbol of education, literature, and artistic expression. Many important historical documents were written using quill and ink, from the Magna Carta to the Declaration of Independence, and white quills are still laid out every day the U.S. Supreme Court is in session.
In pop culture, the Harry Potter series has helped generate interest in the old-fashioned writing instrument, and Taylor Swift, noting the influences of Charlotte Brontë and period films, has referred to some of her music as “Quill Pen Songs.” “If my lyrics sound like a letter written by Emily Dickinson’s great-grandmother while sewing a lace curtain, that’s me writing in the quill genre,” she explained at the Nashville Songwriter Awards in 2022.
So what is it actually like to write with the quill pens of yore? To answer that question, I turned to the internet for authentic supplies and expert advice, and set out scribbling. Here’s what I learned from the experience.
First, What Is a Quill?
A traditional quill pen consists of a feather that has been trimmed to around 9 inches long, had its shaft stripped of barbs, and had the inside and outside of the hollow barrel cleaned of membrane and wax. The quill is then dried, typically by curing it in sand, and the tip is shaped into a nib with a channel split (cut) to hold the ink.
The earliest fluid inks were carbon-based black inks that are believed to have originated in China around 2700 BCE. Iron gallotannate (iron gall) ink eventually replaced carbon and became the primary ink used with quill pens from the Middle Ages until the beginning of the 20th century. Iron gall ink is a permanent, deep purple-black or blue-black ink that darkens as it oxidizes, and is made from iron salts and gallotannic acids from organic sources (such as trees and vegetables). The Codex Sinaiticus, written in the fourth century CE and containing the earliest surviving manuscript of the Christian Bible, is one of the oldest known texts written with iron gall ink.
The Writing Technique Is Familiar — But With More Blotting
Though the quill pen was the primary writing implement for well over a thousand years, it’s an awkward tool for a modern writer to learn to use. Thankfully, hand-cut goose quills and iron gall ink are readily available online for the curious scribe to purchase.
I was eager to start writing once I acquired my pen and ink, but I hadn’t considered what type of paper to use. I felt a bit like Goldilocks while trying different types: Watercolor paper was too absorbent, printer paper and a coated writing tablet weren’t absorbent enough (though the pen glided across both beautifully), but a good-quality sketchbook offered just the right amount of absorbency.
On the first attempt, I dipped the pen in the jar of ink and then removed the excess liquid by rubbing the barrel of the feather along the rim of the ink jar. I found this didn’t remove enough ink to prevent drips, however, so I used a paper towel to blot the excess.
Once that was done, using the quill pen was no different from using my favorite metal-tipped ink pen. I held the quill the same way and applied about the same amount of pressure (at least initially) to the paper to write.
A Soft Backing Helps Prevent Breakage
Once I found the right paper to use with my quill pen, I practiced loading the tip with ink and writing the alphabet and short sentences. My initial attempts were challenging because using a quill pen requires frequently dipping the pen tip back into the jar for more ink, which affected the quality and consistency of my letters. In trying to finish a word or sentence before replenishing the ink, I would find myself pressing too hard on the quill tip, which quickly dulled the point and ultimately cracked my first pen.
With only one quill remaining, I sought the advice of more experienced quill pen users. They recommended using a felt cushion underneath the writing paper to preserve the quill’s point; I didn’t have any felt on hand, so I used an old linen napkin instead. I expected it to be more difficult to write on a soft backing rather than on a solid tabletop, but I was amazed at how much easier and smoother the writing was. And I didn’t crack my pen!
The Ink Needs Time to Dry, But Sand Can Speed Up the Process
I smeared the ink on many of my early efforts by trying to move or stack the paper too soon. Blotting paper was (and still is) a popular way of preventing ink from smearing, but my attempts to use a clean piece of paper on top of my iron gall ink still resulted in smudges.
However, I had good luck with a technique that predates blotting paper: sand. In this case, I used sterile terrarium sand from the craft store and sprinkled it over my still-wet ink. The sand absorbed the wet ink in a matter of minutes, and once I shook off the sand, my quill ink writing was dry and (relatively) smudge-free. (Which was more than I could say for my hands and shirtsleeves.)
Successfully writing with a quill pen took more practice and patience than I initially expected, especially considering I’ve been a lifelong fan of sending handwritten letters. But once I got the hang of it, there was something very soothing about the rhythm of the old-fashioned process.
It’s easy to take for granted today, but the emergence of broadcast radio was a seismic shift in early 20th-century culture. Born out of ship-to-shore wireless telegraph communication at the turn of the 20th century, broadcast radio represented an entirely new pastime by the time it began to mature in the 1920s. The golden age of radio was the period from the 1920s to the 1950s when the medium was at its absolute peak in both program variety and popularity. Radio grew massively during this era: In 1922, Variety reported that the number of radio sets in use had reached 1 million. By 1947, a C.E. Hooper survey estimated that 82% of Americans were radio listeners.
In addition to the music, news, and sports programming that present-day listeners are familiar with, radio during this period included scripted dramas, action-adventure series such as The Lone Ranger, science fiction shows such as Flash Gordon, soap operas, comedies, and live reads of movie scripts. Major film stars including Orson Welles got their start in radio (Welles became a household name in the wake of the infamous panic sparked by his 1938 broadcast of The War of the Worlds), and correspondents such as Edward R. Murrow established the standard for broadcast journalism. President Franklin D. Roosevelt used the medium to regularly give informal talks, referred to as fireside chats, to Americans listening at home. But radio was also largely influenced by advertisers, who sometimes wielded control of programming right down to casting and the actual name of the program, resulting in some awkward-sounding show titles, such as The Fleischmann’s Yeast Hour. The golden age of radio was a combination of highbrow and lowbrow content, offering both enduring cultural touchstones and popular ephemera — much like the television that eclipsed it. Read on for five more facts from this influential era.
The first known radio advertisement was a real-estate commercial for the Hawthorne Court Apartments in Jackson Heights, Queens, broadcast by New York station WEAF in August 1922. There’s a bit of disagreement over whether the duration of the ad was 10 minutes or 15 minutes, but fortunately for listeners, it wasn’t long before the ad format was pared down considerably. In 1926, when General Mills predecessor Washburn-Crosby was looking for a way to boost the languishing sales of Wheaties, it turned to its company-owned radio station in Minneapolis (WCCO) for what ended up being a much shorter form of commercial. WCCO head of publicity Earl Gammons wrote a song about the cereal called “Have You Tried Wheaties?” and Washburn-Crosby hired a barbershop quartet to sing it, thus creating the first radio jingle.
Due to limited recording capabilities during the first three years of the ad campaign, the Wheaties Quartet (as they were known) performed the jingle live at the station every time the commercial aired. The decidedly manual campaign worked: The Minneapolis-St. Paul area came to account for more than 60% of Wheaties’ total sales. When the ad campaign was expanded nationally, sales of Wheaties increased throughout the country, establishing the effectiveness of the jingle and the influence of advertising on the medium. By 1948, American advertisers were spending more than $100 million per year (around $1.2 billion today) on radio commercials.
The “Big Three” Networks Were Born in Radio
In 1926, RCA, the Radio Corporation of America, bought the radio station WEAF from AT&T and added the infrastructure to its New York and New Jersey station, WJZ. The combined assets established RCA’s broadcast network, dubbed the National Broadcasting Company, or NBC. On November 1 that same year, NBC officially became two networks: NBC Red (extending from WEAF) and NBC Blue (extending from WJZ). The upstart networks soon had a competitor. In 1927, talent agents Arthur Judson and George Coats, frustrated by their inability to land a contract to get their clients work with NBC, formed their own radio network, United Independent Broadcasters. The network quickly changed its name to Columbia Phonograph Broadcasting Company after a merger with Columbia Phonograph and Records. Unfortunately for Judson and Coats, they were no more effective as would-be radio network moguls than they were as radio talent agents: The network operated at a loss, and it wasn’t long before Judson sold it to a relative who had been an initial investor, William S. Paley. On January 29, 1929, Paley shortened the network’s name to the Columbia Broadcasting System, or CBS.
The same year, NBC established the country’s first coast-to-coast radio infrastructure, but in 1941, an antitrust investigation resulted in the FCC ordering the company to sell either the Red or the Blue network. Years of appeals followed, finally resulting in NBC electing to sell the Blue network to Life Savers candy magnate Edward J. Noble in 1943. Noble renamed it the American Broadcasting Company, and ABC was born.
A Ventriloquist Show Was One of Radio’s Biggest Hits
A form as visual and illusion-based as ventriloquism seems like a poor fit for an audio-only medium, but from 1937 to 1957, The Edgar Bergen and Charlie McCarthy Show was an American radio institution. It was the top-rated show for six years of its run, and in the top seven for all but its final five years. Ventriloquist Edgar Bergen started in vaudeville, and it was his guest appearance on Rudy Vallée’s Royal Gelatin Hour in 1936 that introduced him to the radio audience. The appeal of the show was Bergen’s vaudevillian skill at performing multiple comedic voices, and his quick and salacious wit as Charlie, roasting celebrity guests and using the dummy’s nonhuman innocuousness to get away with censorship-pushing double-entendres. Though the show included a live studio audience, Bergen all but dropped the traditional ventriloquism requirement of not moving his lips while voicing Charlie. As he reasoned, “I played on radio for so many years… it was ridiculous to sacrifice diction for 13 million people when there were only 300 watching in the audience.”
Inventor Edwin H. Armstrong earned prestige for creating the regenerative circuit in 1912, a modification to the vacuum tube that led to the dawn of modern radio. In the late 1920s, he set out to find a way to eliminate static from broadcasts, and received initial support in the endeavor from RCA President David Sarnoff. Sarnoff allowed Armstrong to use the RCA radio tower atop the Empire State Building to conduct experiments, and Armstrong agreed to give RCA first rights to the resulting product. When Armstrong demonstrated his static-free invention in 1935, what he unveiled was an entirely new broadcast technology using frequency modulation (FM) instead of the existing AM band.
Sarnoff, however, had wanted an improvement to AM, and saw FM as a threat to both RCA’s existing AM infrastructure and the emerging television technology RCA was investing in: He feared it would render AM equipment obsolete, and that FM radios would compromise the nascent market for television sets. Instead of embracing FM, RCA withdrew its support of Armstrong. With no support elsewhere in the broadcast industry, Armstrong set up his own fledgling FM station in hopes of promoting high-fidelity radio, but he spent years in court mired in a byzantine tangle of regulatory and patent battles. FM eventually caught on, of course, but not until after radio’s golden age had passed: The FCC didn’t approve a standard for FM stereo broadcasting until 1961.
The Last Shows of the Golden Age Ended in 1962
On September 30, 1962, the final two remaining scripted radio shows signed off for the last time on CBS. The detective series Yours Truly, Johnny Dollar ended a run that day that had begun in 1949, and the mystery-drama Suspense ended a 20-year run that had begun on June 17, 1942. Suspense was particularly venerable: A Peabody Award winner, it drew its scripts from classical literature, stage plays and screenplays, and entirely original material, and it attracted top guest stars such as Humphrey Bogart, Bette Davis, Cary Grant, Bela Lugosi, Rosalind Russell, and James Stewart. CBS even produced a television adaptation that began airing in 1949, but it was canceled in 1954, outlasted by the original version on the radio.
If we could travel back 100 years and land on a typical city street, we’d probably be mightily discombobulated. Some things would seem familiar: the buzz of the urban environment, people walking this way and that, and buildings with facades that could well still exist today. But looking around, we’d soon realize that we weren’t in Kansas anymore — or if we were, it would be Kansas City in the 1920s.
A century ago, America was going through a monumental change. For the first time in U.S. history, more people were living in urban areas than in rural areas. The cities were booming, and for many middle-class Americans, the 1920s were a decade of unprecedented prosperity. People were earning more and spending more, advertising had reached new levels of sophistication, and the automobile was changing the way we live.
So, before you step into that time machine, you’d better brace yourself. Here are seven things you’d find in a city street a century ago, back in the dizzying days of the Roaring ’20s.
Before the development of practical light bulbs, street lights typically used piped coal gas, oil, or kerosene as fuel. The first electric streetlights were installed in Paris in 1878, but these used unwieldy and harsh arc lamps. Then came inventors such as Joseph Swan in the U.K. and Thomas Edison in the U.S., both of whom patented revolutionary incandescent light bulbs in 1880. Incandescent street lamps became the norm in many cities throughout the world, and the 1920s saw a wave of patents filed for innovative new street lighting. These electric lights, however, were often placed where they were needed rather than lining a whole street. So, 100 years ago, a city street at night would not have been as brightly lit as it is today, and pedestrians would often find themselves walking from one pool of yellowish light to the next.
Public phones and phone booths began appearing in the U.S. not long after Alexander Graham Bell patented the first telephone in 1876. By the 1920s, wooden phone booths were a fairly common sight on many city streets, but the wooden construction meant they were hard to maintain, limiting their popularity. In some cities, you’d be more likely to come across a public telephone room, which contained multiple booths. Individual outdoor phone booths became truly commonplace in the 1950s, when glass and aluminum became the booth-building materials of choice. Today, of course, public phones are heading rapidly toward extinction, now that most everyone can carry a phone in their pocket.
The art deco style flourished in the United States during the 1920s, in the visual arts and architecture as well as in product design. Walking down a city street 100 years ago, you would have seen art deco everywhere, from the facades of grand buildings (a style soon epitomized by the Chrysler Building and the Empire State Building) to the window displays of department stores such as Macy’s and Saks. The style, characterized by bold geometric patterns, vibrant colors, and glamorous details, became synonymous with the opulence and extravagance that defined the Roaring ’20s.
Thankfully, modern child labor laws ensure that we don’t see children working in the streets anymore. But 100 years ago, it was a common sight. In 1920, about a million children aged 10 to 15 were working in America, out of a total population of about 12 million children in that age range. The most visible were those working in city streets, in jobs such as flower seller, shoe shine, and courier. Children carried messages — and sometimes money and sales slips — throughout the city, facilitating daily commerce for banks, factories, and offices. Even more notable were the “newsies,” young children (some as young as 5) who sold newspapers in the street. But by the end of the decade, a growing preference for home delivery and tougher child labor laws led to the decline of the “newsie” in urban America.
If we traveled back 100 years, one of the first things we might notice is the fashion of the day. Men would be walking the streets wearing three-piece suits, thin bow ties, wingtip shoes, and the then-ubiquitous fedora hat. Sportswear was also becoming acceptable menswear, thanks in large part to the growing popularity of golf, which brought longer “plus four” trousers and wide-legged oxford bag pants to the urban milieu. Women’s fashion, meanwhile, reflected the newfound freedoms of the day. The dresses of the 1920s were loose, straight, and slender, with shorter hemlines. This was typified by the flapper style of the Jazz Age, with dropped waistlines and calf-revealing dresses — clothing that was stylish but also allowed women to move. New hairstyles completed the look, with bobs and waves becoming the defining cuts of the ’20s.
Today, we don’t encounter many horses on our city streets, but go back 100 years and you’d still occasionally see horses pulling peddlers’ carts, milk wagons, coal wagons, and fire engines. The heyday of the horse, however, was coming to an end. In 1916, for instance, there were 46,662 horse-drawn vehicles in Chicago. By the end of the 1920s, this number had plummeted, and by 1940 there were fewer than 2,000 horse-drawn vehicles in the city. In New York City, meanwhile, the last horse-drawn fire engine was retired in 1922. The rise of the automobile had begun in earnest, bringing about a permanent change in the very nature of city streets.
Arguably no invention changed the everyday lives of Americans in the 20th century more than the automobile. Between 1900 and 1920, the number of cars increased dramatically, from 8,000 to 8 million. By 1929, there were more than 23 million automobiles on American roads. These early vehicles, of course, looked very different from the cars we drive today. A hundred years ago, the car of choice was the Model T Ford, which brought driving to the masses. By the early 1920s, more than half of the registered automobiles in the world were Model Ts. They were so popular that Henry Ford himself once quipped, “There’s no use trying to pass a Ford, because there’s always another one just ahead.” Pedestrians, however, found it hard to adapt to the new laws of the street. With the introduction of jaywalking laws — facilitated by automobile manufacturers themselves — the streets became a place for cars, rather than for pedestrians, horse-drawn carts, and children at play, as they once had been.
Queen Victoria ruled Britain from 1837 until her death in 1901. Her reign of 63 years and 216 days was longer than that of any of her predecessors, and was exceeded only by Elizabeth II’s time on the throne. This period, known as the Victorian era, saw the British Empire expand to become the first global industrial power.
Fueled by the Industrial Revolution that began the previous century — which reshaped almost every existing sector of human activity — the era saw many breakthroughs in the arts and sciences (perhaps most notably, Charles Darwin’s theory of evolution) as well as great social change and political reforms. And, as people from the countryside began to move to urban industrial hubs in search of work, there was a rise in both education and affluence, further driving the wave of ideas and innovation.
Victorian-era Brits were avid inventors, and many of the creations from this time had a major impact not only in Britain but across the globe. That’s not to say that all Victorian innovations were a hit. The hat cigar holder, ventilating top hat, anti-garroting cravat, reversible trousers, and “corset with expansible busts” all rank among the less successful ideas. These failures, however, were far outweighed by the era’s many influential developments, some of which laid the foundation for our modern age, and are still used every day. Here are some of the greatest innovations of the Victorian era, from the telephone to the electric light bulb.
Scottish-born inventor Alexander Graham Bell is considered the father of the telephone, but a degree of controversy remains over who exactly invented the world-changing device. The American electrical engineer Elisha Gray filed a patent on the exact same day as Bell, in Washington, D.C., on February 14, 1876. We’ll never quite know how things played out in the patent office, but Bell’s documents were filed first, and he was awarded the patent on March 7. A few days later, he made the first-ever telephone call. He called his assistant, Thomas Watson, with the now-famous words, “Mr. Watson, come here. I want you.”
Bell, who had lived in Boston since 1871, was keen to introduce his invention to Britain, where, as a young man, he had received an expansive Victorian education in Scotland and London, and where he first began his experiments in sound. In August 1877, he toured Britain with his wife Mabel (it was supposed to be their honeymoon), promoting his invention as he went. He even demonstrated the newly invented telephone to Queen Victoria herself, who was so impressed she asked to keep the temporary installation in place.
Adhesive Postage Stamp
In 1837, English inventor Rowland Hill submitted a number of reforms to the British government regarding the existing postal system. Among his ideas was the use of an adhesive postage stamp. At the time, the postal service was unwieldy and rates were high — they were based on distance and the number of sheets in a letter, and the recipient paid for the delivery. Hill proposed a low-cost stamp based on weight, with the cost prepaid. This resulted in the Penny Black, the world’s first postage stamp, issued in 1840, which cost a flat rate of one penny regardless of distance. The idea was simple, but revolutionary. His adhesive stamp and associated reforms were soon adopted by other countries, and ultimately paved the way for modern postal systems around the world.
Underground Railway
On January 10, 1863, the Metropolitan Railway opened to the public, becoming the world’s first underground railway. Now part of the extensive London Underground, the original line ran for 3.75 miles between Farringdon Street and Bishop’s Road, Paddington. Hailed by The Times as “the great engineering triumph of the day,” it consisted of open wooden carriages pulled by steam locomotives. Between 30,000 and 40,000 people attempted to travel on the railway’s opening day, causing chaos on the fledgling underground system. Despite the initial bedlam — and the sulfurous fumes released by the locomotives — the line was a huge success. In its first year, it carried 9.5 million passengers, showing cities around the world that underground railways were an excellent solution to growing congestion problems.
Basic public toilets were part of the sanitation system of ancient Rome, but it wasn’t until the Victorian era that they became widespread — and flushable. This was all thanks to George Jennings, an English sanitary engineer who introduced his flushable public toilets — which he called “monkey closets” — at the Great Exhibition of 1851, held at London’s Crystal Palace. Visitors could spend one penny to use the facilities, and records show that 675,000 pennies were spent (this led to the expression “to spend a penny,” which Brits still use today to refer to a trip to the loo). Public toilets were soon installed in various cities across Britain. Typically, however, these facilities were designed only for men. Because of this, women were far more restricted than men when it came to traveling, something referred to as the “urinary leash.” But by the end of the era — thanks to campaigning and women’s role in the ever-growing economy — women’s amenities began opening in major cities.
Decades before Thomas Edison patented his famous incandescent light bulb in 1879, British inventors had already been working on the problem. James Bowman Lindsay and Warren de la Rue both created early versions of the light bulb, in 1835 and 1840, respectively. Then, in 1841, Frederick de Moleyns was granted the first patent for an incandescent lamp, which used powdered charcoal heated between two platinum wires. Next came English physicist and chemist Joseph Swan, who produced a primitive electric light in 1860 and, 20 years later, a practical light bulb. Both Swan and Edison applied for patents for their incandescent lamps in 1880. Litigation ensued, but it was resolved when the two men formed a joint company in 1883. From there, there was no looking back. By 1893, even the chandeliers in Queen Victoria’s residence, Osborne House, had been wired for electricity.
Pneumatic Tires
Robert William Thomson was a Scottish engineer who invented the pneumatic tire at the age of 23. He was granted the patent in 1846, for a hollow leather tire that enclosed a rubberized fabric tube filled with air. A set of his “aerial wheels” ran successfully for 1,200 miles on a horse-drawn carriage. At the time, however, the rubber required to make the inner tubes was prohibitively expensive, and so Thomson returned to solid tires. For 43 years, air-filled tires were all but forgotten.
Then came a second Scotsman, John Boyd Dunlop, who developed his own version of the pneumatic tire while trying to make his son’s bicycle more comfortable to ride. Dunlop patented his pneumatic bicycle tire in 1888. A year later, the competitive cyclist Willie Hume became the first man to fit his bike with Dunlop’s pneumatic tires, and he started winning races across Britain. Dunlop later lost his main patent when Thomson’s original patent was rediscovered. While Thomson is rightfully considered the inventor of the pneumatic tire, it’s Dunlop’s name that remains synonymous with tires today.
Our planet is home to many talented engineers. Termites, for example, build complex structures that rise up to 10 feet in height, their “bricks” bonded by bio-cementation. Spiders, meanwhile, weave intricate webs, which, like suspension bridges, are capable of bearing heavy loads in even the stormiest weather. Then there are beavers and their well-engineered dams, bees and their cellular hives, and industrious ants whose largest recorded contiguous colony stretches a truly incredible 3,700 miles.
Humans, of course, are in a league of their own when it comes to construction. For millennia, we have been building structures of awesome size and complexity: roads and bridges, cathedrals and stadiums, tunnels and skyscrapers. Among the innumerable structures built by humankind, some stand out for their sheer size and magnificence. Here are six of the greatest engineering marvels in history.
The Great Wall of China is widely considered one of the greatest engineering feats of all time. Built in stages from the third century BCE to the 17th century CE, this series of walls and natural barriers stretches for around 13,000 miles. (Still, despite a persistent myth, it is not visible from the moon or space, at least not with the naked eye.) The Great Wall was originally the idea of Emperor Qin Shi Huang, the founder of the Qin dynasty and the first emperor of a unified China, who wished to protect the country from barbarian attacks from the north.
Under his orders, many older, unconnected state walls and fortifications were removed and a number of existing walls were joined into a single system stretching from the eastern Hebei province to Gansu province in the west. The wall itself was 15 to 30 feet high, topped with ramparts of at least 12 feet, with guard towers distributed at regular intervals. Much of the Great Wall that we now see was constructed during the powerful Ming dynasty, starting from around 1474. Today, the most famous and iconic section of the Great Wall of China is located 43 miles northwest of Beijing. This section was rebuilt in the late 1950s and now attracts thousands of tourists every day.
The awe-inspiring Hagia Sophia was built under Byzantine Emperor Justinian I between 532 and 537 CE — a remarkably quick construction considering the size of the project. With its vast central basilica, complex system of vaults and semi-domes, and high central dome spanning more than 101 feet in diameter, it was the world’s largest cathedral for nearly a thousand years (until the completion of Seville Cathedral in 1520). Considered one of the greatest examples of Byzantine architecture, Hagia Sophia represents the region’s shifting political and religious affiliations. Built as a Christian church, in later centuries it became a mosque, a museum, and a mosque once again. These changes are reflected in the building’s design, which features minarets and inscriptions from Islam as well as extravagant Christian mosaics. Hagia Sophia was the inspiration for many other Ottoman mosques, and has long been considered a unique architectural masterpiece.
Perched high above the Urubamba River valley in a narrow saddle between two mountain peaks, with tropical cloud forests tumbling down below, Machu Picchu is simply mind-blowing. It’s hard not to look around and think, “How the heck did they build this here?” The site is situated at an elevation of 7,710 feet, and contains about 200 structures spread across zones dedicated to religious, ceremonial, astronomical, and agricultural activities, all surrounded by incredible farming terraces, connected by some 3,000 stone steps. Many mysteries still surround the construction and precise purpose of Machu Picchu. Historians estimate that it was built around 1450 CE and abandoned around 100 years later with the arrival of the Spanish conquistadors. The conquistadors never found Machu Picchu, and it stood for centuries known only to the peasant farmers who lived in the immediate area. It wasn’t until 1911, when American academic and explorer Hiram Bingham rediscovered the site, that the remarkable citadel was brought to the world’s attention.
The idea of building a waterway across the isthmus of Panama to link the Atlantic and Pacific oceans dates back to at least the 1500s, when Spanish explorer Vasco Núñez de Balboa realized that the two oceans were separated by a 50-mile stretch of land. It wasn’t until 1880, however, that excavations for a canal first began, led by France. Unfortunately, the original nine-year project was devastated by malaria, yellow fever, and other tropical diseases, and ended with bankruptcy and the loss of some 20,000 lives. In 1904, the United States began the project anew. A major sanitation effort was put in place to help minimize disease, and the entire infrastructure was modernized.
Perhaps most importantly, the project’s chief engineer, John Stevens, convinced President Theodore Roosevelt that the concept of the waterway must be changed from a sea level canal to a lock canal. The original locks (the system has since been expanded), each measuring 110 feet wide by 1,000 feet long, made it possible to lift ships 85 feet above sea level to the artificial Gatún Lake, and then back down to the sea. This incredible feat of engineering radically altered maritime travel and trade. By using the canal to cut across the isthmus, ships could avoid the lengthy trip around Cape Horn in South America, shortening sea voyages by about 8,000 nautical miles.
After six years of construction, the Channel Tunnel opened in May 1994, becoming the only fixed link between the island of Great Britain and the European mainland. The idea of building a tunnel between England and France was not new. French engineer Albert Mathieu had proposed such a project in 1802, his design featuring an artificial island halfway across for changing horses. Back then, the idea simply wasn’t technologically feasible, but by the 1980s, the tunnel project was given the green light. Digging began on both sides of the Channel, and the initial connection was completed in 1991. Three years later, the 31.5-mile tunnel was opened. It actually consists of three tunnels: two for rail traffic and a central tunnel for services and security. The tunnel descends to a depth of 246 feet below the sea bed. It is the third-longest railway tunnel in the world, and with 23.5 miles running under the English Channel, the world’s longest undersea tunnel.
Built between 1998 and 2011, the International Space Station (ISS) is the largest single structure humans ever put into space. It also represents arguably the greatest multinational collaborative construction project in history, involving Europe, the United States, Russia, Canada, and Japan. Elements of the station were built in multiple countries beginning in the late 1980s. In 1998, Russia’s Zarya control module and the U.S.-built Unity connecting node were launched into orbit, where they were connected by American space shuttle astronauts. In 2000, following the installation of the Russian-built habitat module Zvezda, the ISS received its first resident crew. Since then, it has been continuously occupied. Altogether, the space station is larger than a football field and weighs almost 450 tons. Maintaining an orbit with an average altitude of 250 miles, it circles the Earth in about 93 minutes, completing 15.5 orbits per day. The ISS is a truly unique engineering masterpiece, and its sophisticated science laboratory allows us to carry out research that is simply not possible on Earth — paving the way for future missions to Mars and beyond.
In 1903, a Vermont doctor named Horatio Nelson Jackson drove from San Francisco to New York in a Winton touring car and became the first person to traverse the United States in an automobile. At the time, there were no more than 150 miles of paved road in the country, mostly concentrated within cities. Jackson’s route followed rivers, mountain passes, flatlands, and the Union Pacific Railroad, and the roads he did encounter between cities were, in his description, “a compound of ruts, bumps, and ‘thank you m’ams’ [sic].” The trip took 63 days, 12 hours, and 30 minutes, but it inspired auto companies and other early car adopters to arrange trips of their own, sparking demand for long-distance highways.
The first automobile highways weren’t construction projects, and were referred to as “auto trails.” They were essentially suggested routes made up of existing thoroughfares, conceived of by private associations and codified with names such as Lincoln Highway, Victory Highway, National Old Trails Road, and so on. The associations marked the trails with signs or logos, and promoted the improvement of the routes, sometimes collecting dues from towns and businesses. Eventually, the U.S. government grew wary of these private arrangements and proposed a nationally administered system of paved, numbered highways. The proposal was adopted on November 11, 1926.
The numbered highways were a marked improvement over the auto trails, but nearly 30 years after their adoption, Congress approved the Federal-Aid Highway Act of 1956, revolutionizing the highway system by authorizing the construction of 41,000 miles of interstate roads. The interstates repurposed existing numbered highways, connecting and extending them for greater efficiency, and these roads remain the backbone of long-distance auto travel to this day. Let’s look at when some of the country’s biggest and most vital interstates were built.
I-70 is arguably the oldest interstate in the U.S. Among the projects initiated by the Federal-Aid Highway Act of 1956, it was the first both by date of initial construction (August 13, 1956, in St. Charles County, Missouri) and initial paving (September 26, 1956, just west of Topeka, Kansas). The highway runs through 10 states as it spans the center of the country west from Baltimore, connecting Pittsburgh, Columbus, Indianapolis, St. Louis, Kansas City, and Denver. I-70 also includes one of the highest vehicular tunnels in the world, the Eisenhower-Johnson Memorial Tunnel near Denver, which sits at an average elevation of 11,112 feet. The most recently completed segment of the interstate is the 12.5-mile stretch through narrow Glenwood Canyon, finished in 1992.
Interstate 70 may have been the first of the federal interstates to begin construction, but Interstate 80 likely has the oldest antecedents, as it approximates the route of Nebraska’s Mormon Trail (aka the Great Platte River Road), dating back to the 1840s, as well as parts of the Lincoln Highway auto trail from the late 1910s to the mid-1920s. Its transcontinental span runs through 11 states. Construction of the modern-day I-80 began in Nebraska in 1957 and in Pennsylvania in 1958 (though the Delaware Water Gap Toll Bridge that later became part of I-80 had opened on December 16, 1953). A final 5-mile connecting segment was completed near Salt Lake City on August 17, 1986.
Interstate 90 is another federal interstate that traces its origins to an older auto trail: the Yellowstone Trail, established in 1912, which ran from Plymouth, Massachusetts, to Seattle, Washington. The first segment of newly constructed road for I-90 opened in Spokane, Washington, in November 1956. I-90 has the distinction of being the longest interstate, at 3,085 miles, and covers 13 states. The last link to its western terminus in Seattle was completed in 1993.
Route 66 was perhaps the most famous highway in the United States during the first half of the 20th century, inspiring a song and even a TV show. Interstate 40 is the longest of the five federal interstates that gradually replaced it, and it was I-40 bypassing Route 66’s final segment in 1984 that led to the iconic highway being decommissioned the following year. Construction of I-40 began in 1957 in North Carolina. Though the interstate stretches more than 2,500 miles between its eastern and western ends, its final segment was completed in 1990 in Wilmington, North Carolina — just 220 miles from its first segment’s completion in Kernersville, North Carolina.
Interstate 10 is the transcontinental highway with the southernmost span, running through eight states along the southern edge of the country. Similar to I-40, it served as a replacement for Route 66, primarily for the stretch between California and Arizona. Exact details about I-10’s first stretch of new construction are sparse, but it most likely took place in El Paso in 1960. The Papago Freeway Tunnel completed I-10’s final segment when it opened in August 1990.
Interstate 95’s 1,920-mile span from Houlton, Maine, to Miami, Florida, makes it the longest north-south interstate in the country. It crosses 15 states and Washington, D.C. (the most of any interstate), and it was also home to the nation’s first bus/carpool lanes, introduced in 1969. Since the route traverses more densely populated cities than any other interstate, its construction was often contentious, particularly in Philadelphia. The first new construction for I-95 began in the summer of 1956 in Richmond, Virginia, though the Connecticut Turnpike was the first stretch of I-95 to open. The final stretch of I-95, a long-unresolved gap at the Pennsylvania-New Jersey border, was finally completed in the summer of 2018. The event also marked a larger momentous occasion: the completion of the original federal interstate system first planned in 1956.
In many historical contexts, 100 years isn’t a very long time. But when it comes to science, technology, and medicine — particularly in the last century — it’s a veritable eternity. The seeds of modern medicine were just being planted in the early 20th century: Penicillin was discovered in 1928, physicians were still identifying vitamins, and insulin was a new breakthrough.
The doctor’s role itself was different than it is today, as preventative care was not yet an established practice; there was no such thing as a routine visit to a doctor’s office 100 years ago. A visit to the doctor typically meant that you were ailing (though in some cases during the Prohibition era, it meant that you and your doctor had agreed on a way around the alcohol ban). Thanks to advances in technology, doctors’ offices in the 1920s were also stocked with very different items than we see today. These are a few things you likely would have found there a century ago.
A metallic disc attached to a headband is generally considered part of a classic doctor costume, but what is the genuine article, exactly? It’s called a head mirror, and your doctor 100 years ago would’ve been wearing one. It wasn’t just an emblem; it served an essential function, providing illumination for examining the ear, nose, or throat. The patient would be seated next to a lamp that was pointed toward the doctor, and the head mirror would focus and reflect the light to the intended target. Today, the easier-to-use pen light and fiber optic headlamp have largely replaced the head mirror, though some ENT specialists argue that the head mirror’s lighter weight and lower cost mean it may still have a place in contemporary medicine.
One hundred years ago, a spirometer was a large floor-standing unit made of metal, used to evaluate pulmonary function. The patient would breathe into a tube, and a dial on the top would indicate lung capacity and respiratory volume, allowing the doctor to diagnose pulmonary ailments. Today, spirometers are still very much in use, but they are much smaller and made of plastic. In fact, they’re so compact nowadays that patients can hold the entire unit themselves while they’re in use.
The electric vaporizer was similar to today’s at-home humidifiers, but it was more complex, and it could turn water or liquid medications into vapor. In the doctor’s office, vaporizers were used to treat sinus or bronchial illnesses. They were also used in hospital settings to administer anesthesia.
Considering that a doctor’s workspace is referred to as a “doctor’s office,” it follows that a classic wooden desk was a standard fixture there 100 years ago. There is something especially archaic-looking about a doctor seated at a wooden desk, though, since today we’re used to nonporous antiseptic surfaces in any space where medical exams or procedures take place. Indeed, it didn’t take long for the doctor’s office to shift in that direction: By the 1930s, most doctor’s offices contained furniture made of enamel-coated metal.
Sometimes referred to as vinegar of ipecac, syrup of ipecac was (in small doses) an early form of cough syrup used as an expectorant to treat respiratory illnesses. In larger doses, it was used as a poison control agent to induce vomiting, especially in pediatric medicine. Available by prescription only in the early 20th century, it was eventually approved by the FDA for over-the-counter sale and recommended as an essential item for households that included young children. In 2004, the FDA began discouraging use of syrup of ipecac due to its lack of efficacy as a treatment for poison ingestion.
The doctor’s bag (also referred to as a physician’s bag or a Gladstone bag) was an essential item in an era when house calls were still part of a general practitioner’s array of services. The bag was usually made of black leather, and carried the most important and portable medical equipment: a stethoscope, thermometer, bandages, syringes, a plexor for testing reflexes, a sphygmomanometer for measuring blood pressure, and more.
The image of an early 20th-century doctor sharpening a knife may seem foreboding, even sinister. But in the doctor’s office setting, the sharpening stone was used to sharpen scissors or knives for cutting bandages, not for any sort of medical procedure. The surgical discipline, meanwhile, used a cold sterilization method that prevented scalpel blades from dulling. Rest assured: Since at least 1915, scalpels have featured a two-piece design that enables the blade to be discarded and replaced with a new one after each use, so there’s no need for sharpening.