One hundred years is a long time in the life of a city. New technologies emerge and wane, people come and go, cultural factors ebb and flow. But not all cities change at the same rate; some stay comparatively similar to their older incarnations, while others become drastically different. Here’s a glimpse at what a few iconic state capitals looked like a century ago.
Credit: Buyenlarge/ Archive Photos via Getty Images
Atlanta, Georgia
Atlanta was named after the Western and Atlantic Railroad, for which it was a terminus. In the early 20th century, the city was well established as a major railway hub, and the downtown was built around its first train station. Hotels were concentrated in an area near the station (called, fittingly, Hotel Row) in order to serve train travelers, and by the 1920s, masonry high-rises created the city’s skyline.
Like many cities during this period, Atlanta was beginning to expand its roads in order to accommodate increasing numbers of cars. In the 1920s, the city built three major viaducts to allow traffic to bypass the high number of railroad crossings. The Central Avenue, Pryor Street, and Spring Street (later renamed Ted Turner Drive) viaducts not only improved vehicle safety, but also led to development outside the city’s downtown core.
Though Boston was established as a colonial port city as early as 1630, a wave of immigration between 1880 and 1921 fueled a population boom and a sense of transition similar to what many younger cities were facing at the time. The expanding population spurred a building boom, and changes wrought by the Industrial Revolution were at the forefront. The industrialization of nearby Springfield, Massachusetts, led to a high population of mechanics and engineers in that city, and it became a hub for the nascent automotive industry. Rolls-Royce selected Springfield as the site of its U.S. factory, and many other early auto manufacturers were based in the area. In fact, Massachusetts claimed to have manufactured more cars at the beginning of the 20th century than Detroit, Michigan. Cars were particularly popular in Boston — more so than in many other cities — and 1 in 8 Bostonians owned a car by 1913. This led to the construction of a large number of buildings dedicated to automobiles, including garages, repair shops, car dealerships, and more.
In terms of architecture, the city’s affluent Beacon Hill neighborhood appears very similar today to how it looked in the 1920s, with well-preserved colonial-style and Victorian buildings. However, little remains of Boston’s once-abundant theater district, which reached a peak count of 40 theaters by 1935.
Nashville has a storied history as a center of American popular music, but that history was in its very infancy 100 years ago. The famous Grand Ole Opry didn’t begin until the middle of the 1920s, first broadcasting as “the WSM Barn Dance,” and at the time, it was hardly the institution it would become later. In those days, it was purely a radio show broadcast out of the WSM studio on the fifth floor of the National Life and Accident Insurance building, with only as many spectators as could fit in the limited confines of the station’s Studio A.
Unlike other major capitals, Nashville wasn’t a city of high-rises — the 12-story Stahlman Building was the tallest building from the time of its completion in 1908 until the L&C Tower was built in the 1950s — and many of the low-rise brick and masonry buildings from the last century are preserved today. This is particularly true along First Avenue, fronting the Cumberland River, and along Second Avenue, formerly known as Market Street.
Though Austin’s population began steadily growing around the end of the 19th century, in 1920 it was only the 10th-largest city in Texas, with a population just under 35,000. Its visual focal point was the Texas State Capitol Building (the UT Tower didn't exist yet), and the surrounding downtown consisted of low- and mid-rise buildings with brick or stone facades — an aesthetic that was more “Main Street” than “metropolis.” Cars weren’t quite as dominant in Austin as in larger cities of the time, and horse-drawn carriages were still seen on the streets.
Phoenix is another city that had a relatively small population in 1920 — just around 29,000 — but it was still the largest city in a state that had existed for only eight years. Because of this, Phoenix had the flashiness and bustle of an up-and-coming city, despite its small size. The city’s first skyscraper, the Heard Building, was even the site of a stunt climbing performance shortly after it was built. Nonetheless, at seven stories, the Heard Building might not have qualified as a skyscraper in larger cities. The 10-story Luhrs Building surpassed it in height when it opened in 1924, and the 16-story Hotel Westward Ho became the city’s tallest building in 1928. It held that title for more than 30 years, as the vast availability of land surrounding Phoenix disincentivized vertical construction in favor of outward expansion.
Sacramento is often overshadowed by other iconic California cities, but 100 years ago it boasted a downtown of ornate classical architecture, was home to the largest manufacturing train yard in the western United States, and served as a major retail hub for the region. Vital downtown structures of the time — such as Sacramento City Hall, Memorial Auditorium, the California State Life Building, and the Federal Building — were all built during a construction boom that occurred between 1912 and 1932. But there isn’t much evidence of this architectural period today, as even some surviving buildings, such as Odd Fellows Hall, have been remodeled with simpler midcentury-style facades.
The use of quill pens dates back to the sixth century CE, when the feathers of large birds — primarily geese, turkeys, swans, and even crows — replaced the reed pens that had been used previously. Though it’s an obsolete writing utensil today, the quill pen remains a symbol of education, literature, and artistic expression. Many important historical documents were written using quill and ink, from the Magna Carta to the Declaration of Independence, and white quills are still laid out every day the U.S. Supreme Court is in session.
In pop culture, the Harry Potter series has helped generate interest in the old-fashioned writing instrument, and Taylor Swift, noting the influences of Charlotte Brontë and period films, has referred to some of her music as “Quill Pen Songs.” “If my lyrics sound like a letter written by Emily Dickinson’s great-grandmother while sewing a lace curtain, that’s me writing in the quill genre,” she explained at the Nashville Songwriter Awards in 2022.
So what is it actually like to write with the quill pens of yore? To answer that question, I turned to the internet for authentic supplies and expert advice, and set out scribbling. Here’s what I learned from the experience.
Photo credit: Kristina Wright
First, What Is a Quill?
A traditional quill pen consists of a feather that has been trimmed to around 9 inches long, had its shaft stripped of barbs, and had the inside and outside of the hollow barrel cleaned of membrane and wax. The quill is then dried, typically by curing it in sand, and the tip is shaped into a nib with a channel split (cut) to hold the ink.
The earliest fluid inks were carbon-based black inks that are believed to have originated in China around 2700 BCE. Iron gallotannate (iron gall) ink eventually replaced carbon and became the primary ink used with quill pens from the Middle Ages until the beginning of the 20th century. Iron gall ink is a permanent, deep purple-black or blue-black ink that darkens as it oxidizes, and is made from iron salts and gallotannic acids from organic sources (such as trees and vegetables). The Codex Sinaiticus, written in the fourth century CE and containing the earliest surviving manuscript of the Christian Bible, is one of the oldest known texts written with iron gall ink.
The Writing Technique Is Familiar — But With More Blotting
Though the quill pen was the primary writing implement for more than 1,500 years, it’s an awkward tool for a modern writer to learn to use. Thankfully, hand-cut goose quills and iron gall ink are readily available online for the curious scribe to purchase.
I was eager to start writing once I acquired my pen and ink, but I hadn’t considered what type of paper to use. I felt a bit like Goldilocks while trying different types: Watercolor paper was too absorbent, printer paper and a coated writing tablet weren’t absorbent enough (though the pen glided across both beautifully), but a good-quality sketchbook offered just the right amount of absorbency.
On the first attempt, I dipped the pen in the jar of ink and then removed the excess liquid by rubbing the barrel of the feather along the rim of the ink jar. I found this didn’t remove enough ink to prevent drips, however, so I used a paper towel to blot the excess.
Once that was done, using the quill pen was no different from using my favorite metal-tipped ink pen. I held the quill the same way and applied about the same amount of pressure (at least initially) to the paper to write.
Photo credit: Kristina Wright
A Soft Backing Helps Prevent Breakage
Once I found the right paper to use with my quill pen, I practiced loading the tip with ink and writing the alphabet and short sentences. My initial attempts were challenging because using a quill pen requires frequently dipping the tip back into the jar for more ink, which affected the quality and consistency of my letters. In trying to finish a word or sentence before replenishing the ink, I found myself pressing too hard on the quill tip, which quickly dulled the point and eventually cracked my first pen.
With only one quill remaining, I sought the advice of more experienced quill pen users. They recommended using a felt cushion underneath the writing paper to preserve the quill’s point; I didn’t have any felt on hand, so I used an old linen napkin instead. I expected it to be more difficult to write on a soft backing rather than a solid tabletop, but I was amazed at how much easier and smoother the writing became. And I didn’t crack my pen!
The Ink Needs Time to Dry, But Sand Can Speed Up the Process
I smeared the ink on many of my early efforts by trying to move or stack the paper too soon. Blotting paper was (and still is) a popular way of preventing ink from smearing, but my attempts to use a clean piece of paper on top of my iron gall ink still resulted in smudges.
However, I had good luck with a technique that predates blotting paper: sand. In this case, I used sterile terrarium sand from the craft store and sprinkled it over my still-wet ink. The sand absorbed the wet ink in a matter of minutes, and once I shook it off, my quill ink writing was dry and (relatively) smudge-free. (Which was more than I could say for my hands and shirtsleeves.)
Photo credit: Kristina Wright
Successfully writing with a quill pen took more practice and patience than I initially expected, especially considering I’ve been a lifelong fan of sending handwritten letters. But once I got the hang of it, there was something very soothing about the rhythm of the old-fashioned process.
It’s easy to take for granted today, but the emergence of broadcast radio was a seismic shift in early 20th-century culture. Born out of ship-to-shore wireless telegraph communication at the turn of the 20th century, broadcast radio represented an entirely new pastime by the time it began to mature in the 1920s. The golden age of radio was the period from the 1920s to the 1950s when the medium was at its absolute peak in both program variety and popularity. Radio grew massively during this era: In 1922, Variety reported that the number of radio sets in use had reached 1 million. By 1947, a C.E. Hooper survey estimated that 82% of Americans were radio listeners.
In addition to the music, news, and sports programming that present-day listeners are familiar with, radio during this period included scripted dramas, action-adventure series such as The Lone Ranger, science fiction shows such as Flash Gordon, soap operas, comedies, and live reads of movie scripts. Major film stars including Orson Welles got their start in radio (Welles became a household name in the wake of the infamous panic sparked by his 1938 broadcast of The War of the Worlds), and correspondents such as Edward R. Murrow established the standard for broadcast journalism. President Franklin D. Roosevelt used the medium to regularly give informal talks, referred to as fireside chats, to Americans listening at home. But radio was also largely influenced by advertisers, who sometimes wielded control of programming right down to casting and the actual name of the program, resulting in some awkward-sounding show titles, such as The Fleischmann’s Yeast Hour. The golden age of radio was a combination of highbrow and lowbrow content, offering both enduring cultural touchstones and popular ephemera — much like the television that eclipsed it. Read on for five more facts from this influential era.
The first known radio advertisement was a real-estate commercial for the Hawthorne Court Apartments in Jackson Heights, Queens, broadcast by New York station WEAF in August 1922. There’s a bit of disagreement over whether the duration of the ad was 10 minutes or 15 minutes, but fortunately for listeners, it wasn’t long before the ad format was pared down considerably. In 1926, when General Mills predecessor Washburn-Crosby was looking for a way to boost the languishing sales of Wheaties, it turned to its company-owned radio station in Minneapolis (WCCO) for what ended up being a much shorter form of commercial. WCCO head of publicity Earl Gammons wrote a song about the cereal called “Have You Tried Wheaties?” and Washburn-Crosby hired a barbershop quartet to sing it, thus creating the first radio jingle.
Due to limited recording capabilities during the first three years of the ad campaign, the Wheaties Quartet (as they were known) performed the jingle live at the station every time the commercial aired. The decidedly manual campaign worked, with the Minneapolis-St. Paul area coming to account for more than 60% of Wheaties’ total sales. When the ad campaign was expanded nationally, sales of Wheaties increased throughout the country, establishing the effectiveness of the jingle and the influence of advertising on the medium. By 1948, American advertisers were spending more than $100 million per year (around $1.2 billion today) on radio commercials.
Photo credit: Graphic House/ Archive Photos via Getty Images
The “Big Three” Networks Were Born in Radio
In 1926, RCA, the Radio Corporation of America, bought the radio station WEAF from AT&T and combined its infrastructure with that of WJZ, its existing New York and New Jersey station. The combined assets established RCA’s broadcast network, dubbed the National Broadcasting Company, or NBC. On November 1 that same year, NBC officially became two networks: NBC Red (extending from WEAF) and NBC Blue (extending from WJZ). The upstart networks soon had a competitor. In 1927, talent agents Arthur Judson and George Coats, frustrated by their inability to land a contract with NBC to get their clients work, formed their own radio network, United Independent Broadcasters. The network quickly changed its name to Columbia Phonograph Broadcasting Company after a merger with Columbia Phonograph and Records. Unfortunately for Judson and Coats, they were no more effective as would-be radio network moguls than they had been as radio talent agents: The network operated at a loss, and it wasn’t long before Judson sold it to a relative who had been an initial investor, William S. Paley. On January 29, 1929, Paley shortened the network’s name to the Columbia Broadcasting System, or CBS.
The same year, NBC established the country’s first coast-to-coast radio infrastructure, but antitrust scrutiny later resulted in the FCC ordering the company to sell either the Red or the Blue network. Years of appeals followed, finally resulting in NBC electing to sell the Blue network to Life Savers Candy Company owner Edward J. Noble. Noble renamed it the American Broadcasting Company, and ABC was born.
Photo credit: Hulton Archive/ Hulton Archive via Getty Images
A Ventriloquist Show Was One of Radio’s Biggest Hits
A form as visual and illusion-based as ventriloquism seems like a poor fit for an audio-only medium, but from 1937 to 1957, The Edgar Bergen and Charlie McCarthy Show was an American radio institution. It was the top-rated show for six years of its run, and in the top seven for all but its final five years. Ventriloquist Edgar Bergen started in vaudeville, and it was his guest appearance on Rudy Vallée’s Royal Gelatin Hour in 1936 that introduced him to the radio audience. The appeal of the show was Bergen’s vaudevillian skill at performing multiple comedic voices, and his quick and salacious wit as Charlie, roasting celebrity guests and using the dummy’s nonhuman innocuousness to get away with censorship-pushing double entendres. Though the show included a live studio audience, Bergen all but dropped the traditional ventriloquism requirement of not moving his lips while voicing Charlie. As he reasoned, “I played on radio for so many years… it was ridiculous to sacrifice diction for 13 million people when there were only 300 watching in the audience.”
Inventor Edwin H. Armstrong earned prestige for creating the regenerative circuit in 1912, a modification to the vacuum tube that led to the dawn of modern radio. In the late 1920s, he set out to find a way to eliminate static from broadcasts, and received initial support in the endeavor from RCA President David Sarnoff. Sarnoff allowed Armstrong to use the RCA radio tower atop the Empire State Building to conduct experiments, and Armstrong agreed to give RCA first rights to the resulting product. When Armstrong demonstrated his static-free invention in 1935, what he unveiled was an entirely new broadcast technology using frequency modulation (FM) instead of the existing AM band.
Sarnoff, however, had wanted an improvement to AM, and saw FM as a threat to both RCA’s existing AM infrastructure and the emerging television technology RCA was investing in: He feared it would render AM equipment obsolete, and that FM radios would compromise the nascent market for television sets. Instead of embracing FM, RCA withdrew its support of Armstrong. With no support elsewhere in the broadcast industry, Armstrong set up his own fledgling FM station in hopes of promoting high fidelity radio, but he spent years in court mired in a byzantine tangle of regulatory and patent battles. FM eventually caught on, of course, but not until after radio’s golden age had passed: The FCC finally authorized an FM broadcasting standard in 1961.
Photo credit: Gene Lester/ Archive Photos via Getty Images
The Last Shows of the Golden Age Ended in 1962
On September 30, 1962, the final two remaining scripted radio shows signed off for the last time on CBS. The detective series Yours Truly, Johnny Dollar ended a run that day that began in 1949, and mystery-drama Suspense ended a 20-year run that had begun on June 17, 1942. As evidenced by its longevity, Suspense was particularly venerable; it was a Peabody Award winner whose scripts drew from classical literature, stage plays and screenplays, and entirely original material. Suspense attracted top guest stars such as Humphrey Bogart, Bette Davis, Cary Grant, Bela Lugosi, Rosalind Russell, and James Stewart. CBS even produced a television adaptation that began airing in 1949, but it was canceled in 1954, outlasted by the original version on the radio.
If we could travel back 100 years and land on a typical city street, we’d probably be mightily discombobulated. Some things would seem familiar: the buzz of the urban environment, people walking this way and that, and buildings with facades that could well still exist today. But looking around, we’d soon realize that we weren’t in Kansas anymore — or if we were, it would be Kansas City in the 1920s.
A century ago, America was going through a monumental change. For the first time in U.S. history, more people were living in urban areas than in rural areas. The cities were booming, and for many middle-class Americans, the 1920s were a decade of unprecedented prosperity. People were earning more and spending more, advertising had reached new levels of sophistication, and the automobile was changing the way we live.
So, before you step into that time machine, you’d better brace yourself. Here are seven things you’d find in a city street a century ago, back in the dizzying days of the Roaring ’20s.
Before the development of practical light bulbs, street lights typically used piped coal gas, oil, or kerosene as fuel. The first electric streetlights were installed in Paris in 1878, but these used unwieldy and harsh arc lamps. Then came inventors such as Joseph Swan in the U.K. and Thomas Edison in the U.S., both of whom patented revolutionary incandescent light bulbs in 1880. Incandescent street lamps became the norm in many cities throughout the world, and the 1920s saw a wave of patents filed for innovative new street lighting. These electric lights, however, were often placed where they were needed rather than lining a whole street. So, 100 years ago, a city street at night would not have been as brightly lit as it is today, and pedestrians would often find themselves walking from one pool of yellowish light to the next.
Public phones and phone booths began appearing in the U.S. not long after Alexander Graham Bell patented the first telephone in 1876. By the 1920s, wooden phone booths were a fairly common sight on many city streets, but the wooden construction meant they were hard to maintain, limiting their popularity. In some cities, you’d be more likely to come across a public telephone room, which contained multiple booths. Individual outdoor phone booths became truly commonplace in the 1950s, when glass and aluminum became the booth-building materials of choice. Today, of course, public phones are heading rapidly toward extinction, now that most everyone can carry a phone in their pocket.
The art deco style flourished in the United States during the 1920s, in the visual arts and architecture as well as product design. Walking down a city street 100 years ago, you would have seen art deco everywhere, from the facades of grand buildings to the window displays of newly emerging department stores such as Macy’s and Saks. (The style’s most famous showpieces, the Chrysler Building and the Empire State Building, would follow at the start of the 1930s.) Characterized by bold geometric patterns, vibrant colors, and glamorous details, art deco became synonymous with the opulence and extravagance that defined the Roaring ’20s.
Thankfully, modern child labor laws ensure that we don’t see children working in the streets anymore. But 100 years ago, it was a common sight. In 1920, about a million children aged 10 to 15 were working in America, out of a total population of about 12 million children in that age range. The most visible were those working in city streets, in jobs such as flower seller, shoe shine, and courier. Children carried messages — and sometimes money and sales slips — throughout the city, facilitating daily commerce for banks, factories, and offices. Even more notable were the “newsies,” young children (some as young as 5) who sold newspapers in the street. But by the end of the decade, a growing preference for home delivery and tougher child labor laws led to the decline of the “newsie” in urban America.
If we traveled back 100 years, one of the first things we might notice is the fashion of the day. Men would be walking the streets wearing three-piece suits, thin bow ties, wingtip shoes, and the then-ubiquitous fedora hat. Sportswear was also becoming acceptable menswear, thanks in large part to the growing popularity of golf, which brought longer “plus four” trousers and wide-legged oxford bag pants to the urban milieu. Women’s fashion, meanwhile, reflected the newfound freedoms of the day. The dresses of the 1920s were loose, straight, and slender, with shorter hemlines. This was typified by the flapper style of the Jazz Age, with dropped waistlines and calf-revealing dresses — clothing that was stylish but also allowed women to move. New hairstyles completed the look, with bobs and waves becoming the defining cuts of the ’20s.
Today, we don’t encounter many horses on our city streets, but go back 100 years and you’d still occasionally see horse-drawn wagons at work: peddlers’ carts, milk wagons, coal wagons, and even fire wagons. The heyday of horses, however, was coming to an end. In 1916, for instance, there were 46,662 horse-drawn vehicles in Chicago. By the end of the 1920s, this number had plummeted, and by 1940 there were fewer than 2,000 horse-drawn vehicles in the city. In New York City, meanwhile, the last horse-drawn fire engine was retired in 1922. The rise of the automobile had begun in earnest, bringing about a permanent change in the very nature of city streets.
Arguably no invention changed the everyday lives of Americans in the 20th century more than the automobile. Between 1900 and 1920, the number of cars increased dramatically, from 8,000 to 8 million. By 1929, there were more than 23 million automobiles on American roads. These early vehicles, of course, looked very different from the cars we drive today. A hundred years ago, the car of choice was the Model T Ford, which brought driving to the masses. By the early 1920s, more than half of the registered automobiles in the world were Model Ts. They were so popular that Henry Ford himself once quipped, “There’s no use trying to pass a Ford, because there’s always another one just ahead.” Pedestrians, however, found it hard to adapt to the new laws of the street. With the introduction of jaywalking laws — facilitated by automobile manufacturers themselves — the streets became a place for cars, rather than for pedestrians, horse-drawn carts, and children at play, as they once had been.
Queen Victoria ruled Britain from 1837 until her death in 1901. Her reign of 63 years and 216 days was longer than that of any of her predecessors, and was exceeded only by Elizabeth II’s time on the throne. This period, known as the Victorian era, saw the British Empire expand to become the first global industrial power.
Fueled by the industrial revolution that began the previous century — which reshaped almost every existing sector of human activity — the era saw many breakthroughs in the arts and sciences (perhaps most notably, Charles Darwin’s theory of evolution) as well as great social change and political reforms. And, as people from the countryside began to move to urban industrial hubs in search of work, there was a rise in both education and affluence, further driving the wave of ideas and innovation.
Victorian-era Brits were avid inventors, and many of the creations from this time had a major impact not only in Britain but across the globe. That’s not to say that all Victorian innovations were a hit. The hat cigar holder, ventilating top hat, anti-garroting cravat, reversible trousers, and “corset with expansible busts” all rank among the less successful ideas. These failures, however, were far outweighed by the era’s many influential developments, some of which laid the foundation for our modern age, and are still used every day. Here are some of the greatest innovations of the Victorian era, from the telephone to the electric light bulb.
Scottish-born inventor Alexander Graham Bell is considered the father of the telephone, but a degree of controversy remains over who exactly invented the world-changing device. The American electrical engineer Elisha Gray filed a patent on the exact same day as Bell, in Washington, D.C., on February 14, 1876. We’ll never quite know how things played out in the patent office, but Bell’s documents were filed first, and he was awarded the patent on March 7. A few days later, he made the first-ever telephone call. He called his assistant, Thomas Watson, with the now-famous words, “Mr. Watson, come here. I want you.”
Bell, who had lived in Boston since 1871, was keen to introduce his invention to Britain, where, as a young man, he had received an expansive Victorian education in Scotland and London, and where he first began his experiments in sound. In August 1877, he toured Britain with his wife Mabel (it was supposed to be their honeymoon), promoting his invention as he went. He even demonstrated the newly invented telephone to Queen Victoria herself, who was so impressed she asked to keep the temporary installation in place.
Photo credit: Miles Willis/ Getty Images Entertainment via Getty Images
Adhesive Postage Stamp
In 1837, English inventor Rowland Hill submitted a number of reforms to the British government regarding the existing postal system. Among his ideas was the use of an adhesive postage stamp. At the time, the postal service was unwieldy and rates were high — they were based on distance and the number of sheets in a letter, and the recipient paid for the delivery. Hill proposed a low-cost stamp based on weight, with the cost prepaid. This resulted in the Penny Black, the world’s first postage stamp, which cost a flat rate of one penny, regardless of distance. The idea was simple, but revolutionary. His adhesive stamp and associated reforms were soon adopted by other countries, and ultimately paved the way for modern postal systems around the world.
Photo credit: Print Collector/ Hulton Archive via Getty Images
Underground Railway
On January 10, 1863, the Metropolitan Railway opened to the public, becoming the world's first underground railway. Now part of the extensive London Underground, the original line ran for 3.75 miles between Farringdon Street and Bishop’s Road, Paddington. Hailed by The Times as “the great engineering triumph of the day,” it consisted of open wooden carriages pulled by steam locomotives. Between 30,000 and 40,000 people attempted to travel on the railway’s opening day, causing chaos on the fledgling underground system. Despite the initial bedlam — and the sulfurous fumes released by the locomotives — the line was a huge success. In its first year, it carried 9.5 million passengers, showing cities around the world that underground railways were an excellent solution to growing congestion problems.
Basic public toilets were part of the sanitation system of ancient Rome, but it wasn’t until the Victorian era that they became widespread — and flushable. This was all thanks to George Jennings, an English sanitary engineer who introduced his flushable public toilets — which he called “monkey closets” — at the Great Exhibition of 1851, held at London’s Crystal Palace. Visitors could spend one penny to use the facilities, and records show that 675,000 pennies were spent (this led to the expression “to spend a penny,” which Brits still use today to refer to a trip to the loo). Public toilets were soon installed in various cities across Britain. Typically, however, these facilities were designed only for men. Because of this, women were far more restricted than men when it came to traveling, something referred to as the “urinary leash.” But by the end of the era — thanks to campaigning and women’s role in the ever-growing economy — women’s amenities began opening in major cities.
Decades before Thomas Edison patented the first incandescent light bulb in 1879, British inventors had already been working on the problem. James Bowman Lindsay and Warren de la Rue both created early versions of the light bulb, in 1835 and 1840, respectively. Then, in 1841, Frederick de Moleyns was granted the first patent for an incandescent lamp, which used powdered charcoal heated between two platinum wires. Then came English physicist and chemist Joseph Swan, who produced a primitive electric light in 1860 and, 20 years later, a practical light bulb. Both Swan and Edison applied for patents for their incandescent lamps in 1880. Litigation ensued, but was resolved when the two men formed a joint company in 1883. From there, there was no looking back. By 1893, even the chandeliers in Queen Victoria’s residence, Osborne House, had been wired for electricity.
Photo credit: Three Lions/ Hulton Archive via Getty Images
Pneumatic Tires
Robert William Thomson was a Scottish engineer who invented the pneumatic tire at the age of 23. He was granted the patent in 1846, for a hollow leather tire that enclosed a rubberized fabric tube filled with air. A set of his “aerial wheels” ran successfully for 1,200 miles on a horse-drawn carriage. At the time, however, the rubber required to make the inner tubes was prohibitively expensive, and so Thomson returned to solid tires. For 43 years, air-filled tires were all but forgotten.
Then came a second Scotsman, John Boyd Dunlop, who developed his own version of the pneumatic tire while trying to make his son’s bicycle more comfortable to ride. Dunlop patented his pneumatic bicycle tire in 1888. A year later, the competitive cyclist Willie Hume became the first man to fit his bike with Dunlop’s pneumatic tires, and he started winning races across Britain. Dunlop later lost his main patent when Thomson’s original patent was rediscovered. While Thomson is rightfully considered the inventor of the pneumatic tire, it’s Dunlop’s name that remains synonymous with tires today.
Our planet is home to many talented engineers. Termites, for example, build complex structures that rise up to 10 feet in height, their “bricks” bonded by bio-cementation. Spiders, meanwhile, weave intricate webs, which, like suspension bridges, are capable of bearing heavy loads in even the stormiest weather. Then there are beavers and their well-engineered dams, bees and their cellular hives, and industrious ants whose largest recorded contiguous colony stretches a truly incredible 3,700 miles.
Humans, of course, are in a league of their own when it comes to construction. For millennia, we have been building structures of awesome size and complexity: roads and bridges, cathedrals and stadiums, tunnels and skyscrapers. Among the innumerable structures built by humankind, some stand out for their sheer size and magnificence. Here are six of the greatest engineering marvels in history.
The Great Wall of China is widely considered one of the greatest engineering feats of all time. Built continuously from the third century BCE to the 17th century CE, this series of walls and natural barriers stretches for around 13,000 miles. (Still, despite a persistent myth, it is not visible from the moon or space, at least not with the naked eye.) The Great Wall was originally the idea of Emperor Qin Shi Huang, the founder of the Qin dynasty and the first emperor of a unified China, who wished to protect the country from barbarian attacks from the north.
Under his orders, many older, unconnected state walls and fortifications were removed and a number of existing walls were joined into a single system stretching from the eastern Hebei province to Gansu province in the west. The wall itself was 15 to 30 feet high, topped with ramparts of at least 12 feet, with guard towers distributed at regular intervals. Much of the Great Wall that we now see was constructed during the powerful Ming dynasty, starting from around 1474. Today, the most famous and iconic section of the Great Wall of China is located 43 miles northwest of Beijing. This section was rebuilt in the late 1950s and now attracts thousands of tourists every day.
The awe-inspiring Hagia Sophia was built under Byzantine Emperor Justinian I between 532 and 537 CE — a remarkably quick construction considering the size of the project. With its vast central basilica, complex system of vaults and semi-domes, and high central dome spanning more than 101 feet in diameter, it was the world’s largest cathedral for nearly a thousand years (until the completion of Seville Cathedral in 1520). Considered one of the greatest examples of Byzantine architecture, Hagia Sophia represents the region’s shifting political and religious affiliations. Built as a Christian church, in later centuries it became a mosque, a museum, and a mosque once again. These changes are reflected in the building’s design, which features minarets and inscriptions from Islam as well as extravagant Christian mosaics. Hagia Sophia was the inspiration for many other Ottoman mosques, and has long been considered a unique architectural masterpiece.
Perched high above the Urubamba River valley in a narrow saddle between two mountain peaks, with tropical cloud forests tumbling down below, Machu Picchu is simply mind-blowing. It’s hard not to look around and think, “How the heck did they build this here?” The site is situated at an elevation of 7,710 feet, and contains about 200 structures spread across zones dedicated to religious, ceremonial, astronomical, and agricultural activities, all surrounded by incredible farming terraces, connected by some 3,000 stone steps. Many mysteries still surround the construction and precise purpose of Machu Picchu. Historians estimate that it was built around 1450 CE and abandoned around 100 years later with the arrival of the Spanish conquistadors. The conquistadors never found Machu Picchu, and it stood for centuries known only to the peasant farmers who lived in the immediate area. It wasn’t until 1911, when American academic and explorer Hiram Bingham rediscovered the site, that the remarkable citadel was brought to the world’s attention.
The idea of building a waterway across the isthmus of Panama to link the Atlantic and Pacific oceans dates back to at least the 1500s, when Spanish explorer Vasco Núñez de Balboa realized that the two oceans were separated by a 50-mile stretch of land. It wasn’t until 1880, however, that excavations for a canal first began, led by France. Unfortunately, the original nine-year project was devastated by malaria, yellow fever, and other tropical diseases, and ended with bankruptcy and the loss of some 20,000 lives. In 1904, the United States began the project anew. A major sanitation effort was put in place to help minimize disease, and the entire infrastructure was modernized.
Perhaps most importantly, the project’s chief engineer, John Stevens, convinced President Theodore Roosevelt that the concept of the waterway must be changed from a sea level canal to a lock canal. The original locks (the system has since been expanded), each measuring 110 feet wide by 1,000 feet long, made it possible to lift ships 85 feet above sea level to the artificial Gatún Lake, and then back down to the sea. This incredible feat of engineering radically altered maritime travel and trade. By using the canal to cut across the isthmus, ships could avoid the lengthy trip around Cape Horn in South America, shortening sea voyages by about 8,000 nautical miles.
After six years of construction, the Channel Tunnel opened in May 1994, becoming the only fixed link between the island of Great Britain and the European mainland. The idea of building a tunnel between England and France was not new. French engineer Albert Mathieu had proposed such a project in 1802, his design featuring an artificial island halfway across for changing horses. Back then, the idea simply wasn’t technologically feasible, but by the 1980s, the tunnel project was given the green light. Digging began on both sides of the Channel, and the initial connection was completed in 1991. Three years later, the 31.5-mile tunnel was opened. It actually consists of three tunnels: two for rail traffic and a central tunnel for services and security. The tunnel descends to a depth of 246 feet below the sea bed. It is the third-longest railway tunnel in the world, and with 23.5 miles running under the English Channel, the world's longest undersea tunnel.
Built between 1998 and 2011, the International Space Station (ISS) is the largest single structure humans have ever put into space. It also represents arguably the greatest multinational collaborative construction project in history, involving Europe, the United States, Russia, Canada, and Japan. Elements of the station were built in multiple countries beginning in the late 1980s. In 1998, Russia’s Zarya control module and the U.S.-built Unity connecting node were launched into orbit, where they were connected by American space shuttle astronauts. In 2000, following the installation of the Russian-built habitat module Zvezda, the ISS received its first resident crew. Since then, it has been continuously occupied. Altogether, the space station is larger than a football field and weighs almost 450 tons. Maintaining an orbit with an average altitude of 250 miles, it circles the Earth in about 93 minutes, completing 15.5 orbits per day. The ISS is a truly unique engineering masterpiece, and its sophisticated science laboratory allows us to carry out research that is simply not possible on Earth — paving the way for future missions to Mars and beyond.
In 1903, a Vermont doctor named Horatio Nelson Jackson drove from San Francisco to New York in a Winton touring car and became the first person to traverse the United States in an automobile. At the time, there were no more than 150 miles of paved road in the country, mostly concentrated within cities. The path that Jackson traveled was along rivers, mountain passes, flatlands, and the Union Pacific Railroad, and what roads he did encounter between cities were, in his description, “a compound of ruts, bumps, and ‘thank you m’ams’ [sic].” The trip took 63 days, 12 hours, and 30 minutes, but it inspired auto companies and other early car adopters to arrange trips of their own, sparking demand for long-distance highways.
The first automobile highways weren’t construction projects, and were referred to as “auto trails.” They were essentially suggested routes made up of existing thoroughfares, conceived of by private associations and codified with names such as Lincoln Highway, Victory Highway, National Old Trails Road, and so on. The associations marked the trails with signs or logos, and promoted the improvement of the routes, sometimes collecting dues from towns and businesses. Eventually, the U.S. government grew wary of the proceedings, and proposed the construction of a paved and nationalized numbered highway system. The proposal was adopted on November 11, 1926.
The numbered highways were a marked improvement over the auto trails, but nearly 30 years after their adoption, Congress approved the Federal-Aid Highway Act of 1956, revolutionizing the highway system by building 41,000 miles of interstate roads. The interstates repurposed existing numbered highways, connecting and extending them for greater efficiency, and they remain the primary routes for long-distance auto travel to this day. Let’s look at when some of the country’s biggest and most vital interstates were built.
I-70 is arguably the oldest interstate in the U.S. Among the interstate projects initiated by the Federal-Aid Highway Act of 1956, I-70 was the first both by date of initial construction (August 13, 1956, in St. Charles County, Missouri) and initial paving (September 26, 1956, just west of Topeka, Kansas). The highway runs through 10 states as it spans the center of the country west from Baltimore, connecting Pittsburgh, Columbus, Indianapolis, St. Louis, Kansas City, and Denver. I-70 also includes the highest car tunnel in the world, the Eisenhower-Johnson Memorial Tunnel near Denver, which has an average elevation of 11,112 feet. The most recently completed segment of the interstate is the 12.5-mile stretch through narrow Glenwood Canyon, finished in 1992.
Interstate 70 may be the first of the federal interstates to begin construction, but Interstate 80 likely has the oldest antecedents, as it approximates the route of Nebraska’s Mormon Trail (aka Great Platte River Road), dating back to the 1840s, and also parts of the Lincoln Highway auto trail from the late-1910s to mid-1920s. Its transcontinental span runs through 11 states. Construction of the modern-day I-80 began in Nebraska in 1957 and in Pennsylvania in 1958 (though the Delaware Water Gap Toll Bridge that later became part of I-80 was opened on December 16, 1953). A final 5-mile connecting segment was completed near Salt Lake City on August 17, 1986.
Interstate 90 is another federal interstate that traces its origins to an older antecedent auto trail: the Yellowstone Trail, founded in 1912, which ran from Plymouth, Massachusetts, to Seattle, Washington. The first segment of newly constructed road for I-90 was opened in Spokane, Washington, in November 1956. I-90 has the distinction of being the longest interstate, at 3,085 miles, and covers 13 states. The last link to its western terminus in Seattle was completed in 1993.
Route 66 was perhaps the most famous highway in the United States during the first half of the 20th century, inspiring a song and even a TV show. Interstate 40 is the longest of the five federal interstates that gradually replaced it, and it was I-40 bypassing Route 66’s final segment in 1984 that led to the iconic highway being decommissioned the following year. Construction of I-40 began in 1957 in North Carolina. Though the interstate stretches more than 2,500 miles between its eastern and western ends, its final segment was completed in 1990 in Wilmington, North Carolina — just 220 miles from its first segment’s completion in Kernersville, North Carolina.
Interstate 10 is the transcontinental highway with the southernmost span, running through the eight states along the southern edge of the continental U.S. Similar to I-40, it served as a replacement for Route 66, primarily for the stretch between California and Arizona. Exact details about the first stretch of new construction on I-10 are sparse, but it most likely took place in El Paso in 1960. The Papago Freeway Tunnel completed I-10’s final segment when it opened in August 1990.
Interstate 95’s 1,920-mile span from Houlton, Maine, to Miami, Florida, makes it the longest north-south oriented interstate in the country. It crosses 15 states and Washington, D.C. (the most of any interstate), and it was also home to the nation’s first bus/carpool lanes, introduced in 1969. Since the route traverses more densely populated cities than any other interstate, its construction was often contentious, particularly in Philadelphia. The first new construction for I-95 began in the summer of 1956 in Richmond, Virginia, though the Connecticut Turnpike was the first stretch of I-95 to open. The final stretch of I-95, a long-unresolved gap on the Pennsylvania-New Jersey border, was finally completed in the summer of 2018. The event also marked a larger momentous occasion: the completion of the original federal interstate system first planned in 1956.
In many historical contexts, 100 years isn’t a very long time. But when it comes to science, technology, and medicine — particularly in the last century — it’s a veritable eternity. The seeds of modern medicine were just being planted in the early 20th century: Penicillin was discovered in 1928, physicians were still identifying vitamins, and insulin was a new breakthrough.
The doctor’s role itself was different than it is today, as preventative care was not yet an established practice; there was no such thing as a routine visit to a doctor’s office 100 years ago. A visit to the doctor typically meant that you were ailing (though in some cases during the Prohibition era, it meant that you and your doctor had agreed on a way around the alcohol ban). Thanks to advances in technology, doctors’ offices in the 1920s were also stocked with very different items than we see today. These are a few things you likely would have found there a century ago.
A metallic disc attached to a headband is generally considered part of a classic doctor costume, but what is the genuine article, exactly? It’s called a head mirror, and your doctor 100 years ago would’ve been wearing one. It wasn’t just an emblem; it served a core function, providing illumination for examining the ear, nose, or throat. The patient would be seated next to a lamp pointed toward the doctor, and the head mirror would focus and reflect the light onto the intended target. Today, the easier-to-use penlight and fiber-optic headlamp have largely replaced the head mirror, though some ENT specialists argue that the head mirror’s lighter weight and cost-effectiveness mean it may still have a place in contemporary medicine.
Photo credit: Marka/ Universal Images Group via Getty Images
Floor-Standing Spirometer
One hundred years ago, a spirometer was a large floor-standing unit made of metal, used to evaluate pulmonary function. The patient would breathe into a tube, and a dial on the top would indicate lung capacity and respiratory volume, allowing the doctor to diagnose pulmonary ailments. Today, spirometers are still very much in use, but they are much smaller and made of plastic. In fact, they’re so compact nowadays that patients can hold the entire unit themselves while they’re in use.
Photo credit: FPG/ Archive Photos via Getty Images
Electric Vaporizer
The electric vaporizer was similar to today’s at-home humidifiers, but it was more complex, and could be used to make vapor out of water or other liquid medication. In the doctor’s office, vaporizers were used to treat sinus or bronchial illnesses. Vaporizers were also used in hospital settings in order to administer anesthesia.
Considering that a doctor’s workspace is referred to as a “doctor’s office,” it follows that a classic wooden desk was generally present in one 100 years ago. There is something especially archaic-looking about a doctor seated at a wooden desk, though, since today we’re used to nonporous antiseptic surfaces in any space where medical exams or procedures take place. Indeed, it didn’t take long for the doctor’s office to shift in that direction: By the 1930s, most doctor’s offices contained furniture that was made of enamel-coated metal.
Sometimes referred to as vinegar of ipecac, syrup of ipecac was (in small doses) an early form of cough syrup used as an expectorant to treat respiratory illnesses. In larger doses, it was used as a poison control agent to induce vomiting, especially in pediatric medicine. Available by prescription only in the early 20th century, it was eventually approved by the FDA for over-the-counter sale and recommended as an essential item for households that included young children. In 2004, the FDA began discouraging use of syrup of ipecac due to its lack of efficacy as a treatment for poison ingestion.
The doctor’s bag (also referred to as a physician’s bag or a Gladstone bag) was an essential item in an era when house calls were still part of a general practitioner’s array of services. The bag was usually made of black leather and carried the most important and portable medical equipment: a stethoscope, a thermometer, bandages, syringes, a plexor for testing reflexes, a sphygmomanometer for measuring blood pressure, and more.
The image of an early 20th-century doctor sharpening a knife may seem foreboding, even sinister. But in the doctor’s office setting, the sharpening stone was used to sharpen scissors or knives for cutting bandages, not for any sort of medical procedure. Surgeons, meanwhile, used a cold sterilization method that prevented scalpel blades from dulling. Rest assured: Since at least 1915, scalpels have had a two-piece design that enables the blade to be discarded and replaced with a new one after each use, so there’s no need for sharpening.
First developed in the late 1820s, photography combined art and science into one medium capable of capturing an image in the moment. The innovation transformed recorded history into something that could be documented in pictures as well as text. As the technology advanced, the medium exploded in popularity, making it possible for families to create snapshots of memories for future generations to appreciate. These early photographic portraits transport us back in time, painting a picture of a different way of life: Families were larger, clothes were bulkier, and postures were noticeably stiff and formal. But perhaps the most conspicuous difference of all is that no one ever seemed to smile.
The somber expressions preserved in early photographs might lead us to assume that past generations led austere and joyless lives. However, the lack of joviality in these snapshots can be attributed to several other factors. Here’s the truth behind those stern expressions in old photos.
In the earliest days of photography, the lengthy exposure periods made it impractical to photograph people. For instance, French inventor Nicéphore Niépce’s 1826 “View from the Window at Le Gras,” credited as the oldest surviving photograph, required an exposure time of eight hours. It was more than a decade before Louis Daguerre’s 1839 invention of the daguerreotype made portrait photography practical. But even then, it was a relatively slow and meticulous process that required the subject to remain still for as long as 20 minutes.
By the early 1840s, photographic technology had advanced further, and the daguerreotype images that once required a 20-minute exposure needed only 20 seconds to process. Still, even modern photo subjects understand the difficulty of maintaining an open-mouthed smile for any amount of time. It only takes a few moments for a candid smile to turn into something more like an awkward grimace. And anyone who has dealt with a restless child can attest that remaining motionless for more than a few seconds is a formidable challenge. To minimize movement and guarantee a sharp image, children were sometimes put into restraints for the length of a photo shoot.
Additionally, until the 20th century, the expense of photographic equipment and the toxic and dangerous chemicals needed to process film meant that most photographs were taken by professional photographers working out of studios or traveling with their equipment. A photography session was a time-consuming and pricey undertaking; it cost the average person as much as three or more months’ salary, and a person might only be photographed a few times in their life. The requirement for stillness, combined with the novelty and cost of posing for a professional photographer, created an atmosphere where it was simply easier to maintain a neutral or serious expression. But even once the technology existed to capture more relaxed expressions, it was a long time before smiling in photos became the norm.
Though technological limitations are frequently cited as the reason for the solemn expressions in old photographs, they weren’t the only reason our ancestors so often appeared stern in front of the camera. One notable feature shared by painted portraits from the 17th and 18th centuries and early photographs from the 19th century is the presence of stoic, enigmatic expressions on the subjects’ faces. As portrait artist Miss La Creevy observes in Charles Dickens’ novel Nicholas Nickleby, only two types of expressions existed in portraiture: “the serious and the smirk.”
Before photography, a painted portrait was the only way to preserve someone’s image for posterity. Having your portrait painted was an activity associated with wealth and social status, and accordingly, the art form had its own rules and expectations. This formal portraiture proved to be a big influence on early photographers, who featured their subjects in ways that represented their social status, occupation, or other interests. The social mores associated with painted portraits carried over into photographic portraiture, and smiling was discouraged.
Photo credit: Heritage Images/ Hulton Archive via Getty Images
Social Etiquette Frowned Upon Smiling
Some historians believe that advancements and accessibility in dental care may have contributed to more smiles eventually being captured on film. Other experts disagree, noting that for centuries, a lack of dental care was the norm and thus wasn’t considered to detract from a person’s physical appeal. Still, smiling for a photograph wasn’t commonplace in the early days of photography. In fact, instead of the modern directive to “say cheese!” to produce a wide, toothy grin, some photographers in Victorian-era England asked people to say “prunes,” forcing them to tighten their lips for a more socially acceptable expression based on the beauty standards and etiquette of the time.
In an era when open-mouthed grins were considered unacceptable and a smile was believed to signify that someone was poor, drunk, lewd, or otherwise corrupt, it was rare for a subject to choose to smile in a portrait — and even less likely that a photographer would encourage it. That all changed, however, with Kodak’s democratization of photography at the turn of the 20th century.
As photography became more accessible in the late 19th century, a wider variety of people took and sat for photographs, and what was acceptable in portrait photography became less rigid. In 1888, Kodak founder George Eastman started a photographic revolution that put cameras in the hands of amateur photographers and gave them an instruction manual on how to take good photos. In 1900, the Kodak Brownie camera was marketed for children and sold for just $1, creating a photography craze that appealed to adults as well.
By the 1920s, roughly a century after the first photographs were captured, more relaxed postures and a greater variety of expressions, including closed- and open-mouthed smiles, were common in both amateur and professional photography. With the advent of color photography, the popularity of candid photos, and the rise of affordable personal cameras, capturing an array of expressions — including moments of genuine joy — became the gold standard.
There’s nothing more frustrating than working your socks off only to see someone else get all the credit for your efforts. Spare a thought, then, for the minds behind some of history’s most significant innovations, who, despite months, years, or in some cases lifetimes of work, saw someone else’s name ignominiously attached to their invention.
Sometimes inventions are miscredited in the public consciousness simply because a more famous name becomes associated with the creation. For example, Thomas Edison and Henry Ford — two of modern history’s most well-known innovators — are often credited with things they didn’t actually invent, through no fault of their own. Then there are the more insidious misattributions. In some instances, an idea has been copied or outright stolen, robbing the true inventor of their glory; in others, a more senior or prominent member of a team is given credit despite not coming up with the original idea. See, for example, the Matilda effect, in which notable discoveries made by women have often been misattributed to the men they worked with.
Here are some notable inventions in history that are frequently credited to the wrong person, from the flush toilet to the iPod.
No name in the history of toilets is more famous than that of plumber Thomas Crapper, partly because his name appeared on the once-ubiquitous Crapper brand of toilets, and partly because Crapper is a humorously appropriate name for a toilet (the slang word “crap” predates him). Crapper, however, did not invent the flushing device with which he is so closely associated. He did patent the U-bend and the floating ballcock — key components of the modern toilet — in the late 1880s, but he never held a patent for the flush toilet itself. Much earlier, in 1596, John Harington, an English courtier and godson of Queen Elizabeth I, described what can be considered the first flush toilet, which involved a 2-foot-deep bowl and a massive 7.5 gallons of water per flush. (Only two working models were made: one in Harington’s own home and one in Queen Elizabeth’s palace.) The first patent for a flushable toilet was granted to Scottish inventor Alexander Cumming in 1775.
The Italian polymath Galileo Galilei is often credited with inventing the telescope, and it’s easy to see why. He gave birth to modern astronomy with his telescope-assisted discoveries about our moon, the moons of Jupiter, and other celestial bodies. Galileo made his first telescope in 1609 after hearing about the “perspective glasses” being made in the Netherlands. But the first person to apply for a patent for a telescope was Dutch eyeglass-maker Hans Lippershey, in 1608, a year before Galileo built his. Lippershey’s telescope could magnify objects only three times, but it was nonetheless a landmark in the history of optics. (By comparison, by the end of 1609, Galileo had developed a telescope that magnified objects 20 times.) Whether Lippershey should be credited as the inventor of the telescope remains an open debate, as it is entirely possible that others created similar devices before he filed his patent.
Thomas Edison is often — and incorrectly — given all the credit for inventing the lightbulb. But the lightbulb was actually the result of a process that began before Edison was even born. In 1802, English chemist Humphry Davy used a voltaic pile (invented by Alessandro Volta, after whom the volt is named) to create the first “electric arc lamp” between charcoal electrodes. His rudimentary lamp was too bright and burned out too quickly, but it was nonetheless an important breakthrough. Other scientists worked to refine the lightbulb, but problems with filaments and batteries made these early bulbs impractical for everyday use. In 1860, English physicist Joseph Swan developed a primitive electric light that utilized a filament of carbonized paper in an evacuated glass bulb. The lack of a good vacuum and an adequate electric source ultimately made it inefficient, but it did pave the way for later innovations, including those by Edison. Edison purchased some of his predecessors’ patents, improved upon them, and came up with his own lightbulb, which, while not the first overall, was the first to be commercially viable.
Photo credit: Culture Club/ Hulton Archive via Getty Images
The Automobile
One commonly held misconception is that Henry Ford invented the automobile. In reality, the development of the automobile can be traced back to Nicolas-Joseph Cugnot, a French military engineer who, in 1769, built a steam-powered tricycle for hauling artillery. Because it was steam-powered, however, Cugnot’s vehicle isn’t universally accepted as the first true automobile. That distinction often goes instead to vehicles made by two Germans, Karl Friedrich Benz and Gottlieb Daimler, who — working entirely separately, in two different German cities — developed their own gasoline-powered automobiles in 1886. Benz actually drove his three-wheeled vehicle in 1885, and it is regarded as the first practical modern automobile and the first commercially available car in history. As for Henry Ford, his name is forever remembered in auto history for the Model T, which he mass-produced using an innovative moving assembly line, making automobiles available to middle-class Americans.
Photo credit: Maurice Ambler/ Picture Post via Getty Images
Monopoly
Since the 1930s, it’s been common knowledge that Charles Darrow invented Monopoly, an idea that both he and the game’s manufacturer, Parker Brothers, freely propagated (it was printed in the instructions for decades). But it’s not quite true. Darrow got the idea for the game — which made him a millionaire — from a left-wing feminist named Elizabeth Magie. Magie created and patented an early version of Monopoly, called The Landlord’s Game, in 1903, about three decades before Darrow. Darrow learned about the game from a couple who had played it in Atlantic City (which is where many of the game’s street names come from) and made a few changes of his own. Magie’s original game included a wealth tax and public utilities, and it was designed as a protest against the big monopolists of her era. It had two sets of rules: one that allowed players to create monopolies and crush their opponents, and an anti-monopolist version that rewarded all players when wealth was created (the latter demonstrating what Magie believed to be a morally superior path). It’s only in recent years that Magie has started to receive the credit for inventing one of the world’s most popular and iconic board games.
Photo credit: Justin Sullivan/ Getty Images News via Getty Images
The iPod
Portable digital audio players have existed since the mid-1990s, but it was Apple’s iPod that revolutionized the industry upon its release in 2001. Yet it wasn’t the engineers at Apple who invented the iPod — not entirely, at least. British inventor Kane Kramer actually developed the technology behind the iPod as far back as 1979. His credit card-sized music player, which looked very similar to the iPod, could store only 3.5 minutes of music, but he was sure the storage capacity would increase over time. Unfortunately for Kramer, internal problems at his company ultimately led to his patent lapsing, at which point the technology became public. Apple later acknowledged Kramer’s involvement in inventing the technology behind the iPod.