Ready to take a trip down memory lane? Before tablets, touchscreens, and Wi-Fi, toys were all about tactile fun, imagination, and the joy of hands-on play. Today, retro toys carry a special charm, reminding us of simpler times when even a bouncy spring or simple building blocks could offer hours of entertainment. In a world where tech toys are constantly evolving, these classics have stayed true to their roots — some have barely changed from their original designs, while others have adapted for new audiences in surprising ways.
Whether you’re looking to reconnect with your childhood favorites or introduce a new generation to the magic of these timeless playthings, these retro toys will bring a touch of nostalgia.
With the tagline “Knock his block off!” and a comic book-worthy illustration on the box, Rock ’Em Sock ’Em Robots quickly captivated audiences when they were introduced in 1965 by toy designer Marvin Glass. This two-person game featuring boxing robots Red Rocker and Blue Bomber inspired many a playful boxing match in the decades that followed, and is still capturing imaginations today. A live-action movie starring Vin Diesel is rumored to be in the works, but until these toy robots hit the big screen, you can find them at Walmart for $21.92.
Long before virtual reality headsets, the View-Master offered a window to the world, displaying stereoscopic 3D images of popular travel destinations. The original version, known as Sawyer’s View-Master, was introduced at the 1939 New York World’s Fair and was intended to replace the traditional postcard. The stereoscope quickly became popular with adults and children alike. You can find a contemporary version of this classic toy at Target for $9.99. For a more personalized experience, you can customize your own reel viewer disk at Uncommon Goods — it’s $34.95 for one reel viewer and a reel redemption code, and $16.95 for each additional reel.
Invented by mechanical engineer Richard James at a Philadelphia shipyard in 1943, the Slinky was an accidental discovery that came out of the development of a line of sensitive springs designed to stabilize fragile equipment on ships. Two years later, James demonstrated the toy — named by his wife, Betty James — at Gimbels department store in Philadelphia and sold his entire stock of 400 toys within 90 minutes. Today, the Slinky comes in a variety of colorful, plastic styles, but the original Slinky is still the most popular version. You can buy it on Amazon for $3.59.
Created by two employees at Marvin Glass and Associates and sold by Ideal in 1963, Mouse Trap was one of the first mass-produced 3D board games. Unlike board games in which players simply race or strategize toward an endpoint, Mouse Trap blends competition with collaboration in a Rube Goldberg-style design: Players work together to build an intricate contraption in which each piece sets off a chain reaction, ultimately dropping a tiny plastic cage to “trap” an opponent’s mouse-shaped game piece. Originally designed as a fun way to teach engineering concepts, Mouse Trap combines the thrill of strategy with the satisfaction of building. This family favorite has been updated and is available from major retailers and on Amazon for $21.28.
Invented by George Lerner and launched by Hasbro in 1952, Mr. Potato Head was originally sold as a kit of 30 plastic parts meant to be inserted into real potatoes. It became the first toy ever advertised on television, earning the company more than $4 million in its first few months and spurring other toy companies to start their own television marketing campaigns. In 1953, Mrs. Potato Head was introduced, and by 1964 the toy had evolved to include a plastic potato body with holes for facial features and limbs. Available at most toy retailers, Mr. Potato Head and Mrs. Potato Head, along with a Potato Head family-of-three kit, are also available on Amazon for $5.00 to $19.99.
Invented in Paris by electrician André Cassagnes in the late 1950s, the Etch A Sketch relies on a unique internal mechanism: An electrostatic charge holds aluminum powder to the underside of the glass screen, and a stylus, guided by a system of pulleys, scrapes the powder away to leave dark lines. Discovered by the Ohio Art Company at the 1959 Nuremberg Toy Fair, the mechanical drawing toy was launched during the 1960 holiday season and quickly became a symbol of mess-free creative play. Two large knobs control the vertical and horizontal movements of the internal stylus, allowing users to draw images as simple or as elaborate as they like. This red-framed classic continues to offer a screen-free way to entertain and challenge the imagination. Available at most toy retailers, including Amazon for $23.99, the Etch A Sketch also comes in a mini version.
The building kits known as Lincoln Logs were first conceived in 1916 by John Lloyd Wright, the son of architect Frank Lloyd Wright. Inspired by his father’s earthquake-proof design for the Imperial Hotel in Tokyo, John began marketing his toy cabin construction kit in 1918 and received a patent two years later. He named the product Lincoln Logs for Abraham Lincoln’s boyhood log cabin in Kentucky. This classic American toy is still widely available from toy retailers such as Fat Brain Toys, which offers the 268-piece Lincoln Logs Classic Farmhouse for $129.95. A smaller 111-piece set is also available on Amazon for $45.07.
One of America’s oldest board games, The Game of Life — or Life, as it’s typically known — started its own life in 1860 as The Checkered Game of Life. The first board game invented by Milton Bradley, it originally included squares for disgrace, intemperance, poverty, and ruin as it guided players on a morality journey where success or failure was based on decisions made along the way. To celebrate its centennial in 1960, the Milton Bradley Company released an updated version of the game, changing the name to The Game of Life and shifting the focus from moral lessons to a modern life journey through experiences such as college, marriage, career choices, and family. The most recent updates to The Game of Life include the ability to adopt pets and an impressive 31 career options, and the game rules encourage players to “choose a path for a life of action, adventure, and unexpected surprises.” You can spin the wheel on the contemporary version of The Game of Life for $21.99 at Amazon.
When we take a look back through history, we find that many items we now consider commonplace were once rare, exotic, and incredibly valuable. These precious commodities were often out of reach for the majority of people, reserved for royalty and wealthy citizens.
The journey of these items from scarcity to ubiquity tells a fascinating story of human progress, a tale of technological advancements and shifting global economics. Centuries of exploration, agricultural developments, industrial innovations, and the opening of new trade routes transformed our material world. What was once worth its weight in gold may now be found in every household, often used — or even discarded — without a second thought.
Here are some now-common items that have undergone this remarkable transition, from spices that once financed entire cities to beverages that sparked riots and wars.
Salt and Pepper
Salt and pepper were often known as “white gold” and “black gold,” respectively, by merchants of the ancient world. Salt was once essential not just for flavoring food but also for preserving it, making it crucial for survival and expansion. It was transported along the ancient salt routes to markets across Europe, making some citizens, cities, and regions extremely wealthy. The city of Salzburg in Austria, for example, whose name literally means “Salt Castle,” amassed great wealth by trading salt. Pepper, meanwhile, was once so rare and desirable that it was literally worth its weight in gold and was sometimes used as currency. The desire for salt and pepper, along with other spices, was so high that it helped drive European global exploration in the 15th and 16th centuries.
Aluminum is the most abundant metal found in the Earth’s crust. Today, it is inexpensive and ubiquitous, used in everything from soda cans to aircraft. But before the development of aluminum electrolysis in the late 19th century, it was extremely difficult to extract and refine, making it more precious than gold and silver. In the 1860s, aluminum was so rare that Napoleon III reserved a set of aluminum cutlery for his most honored guests, while those of lesser status had to make do with utensils made of gold. And in 1884, the United States capped the Washington Monument with a 6-pound pyramid of aluminum as a display of its industrial prowess.
There’s arguably too much sugar floating around in our modern world, and its overconsumption is the cause of many health issues. But back in medieval Europe, sugar was considered a fine spice and was often kept under lock and key along with other precious items. In Britain around 1300, 1 kilogram (about 2.2 pounds) of sugar cost the equivalent of around £350 (roughly $457) in today’s money. Four centuries later, sugar was still considered a luxury item. By 1750, there were 120 sugar refineries operating in Britain, but their combined output was only 30,000 tons per year, ensuring that vast profits were to be made by those in control of sugar production. It wasn’t until the 19th century, with the cultivation of sugar beets and the industrialization of production, that sugar became widely available and affordable to the general population.
Before the invention of artificial refrigeration, ice was a luxury item, particularly in warm climates. From the days of ancient Rome until the late 1800s, ice was harvested from mountains or frozen lakes and rivers and stored in covered pits or purpose-built storage rooms for use in the summer, as a means of keeping food cool or simply for putting in drinks. By the 19th century, ice harvesting had become a major industry. The most notable figure in this burgeoning trade was Frederic Tudor, an American merchant known as the “Ice King.” He made a fortune shipping ice from New England to the Caribbean, South America, Europe, and even as far as India. However, the ice trade eventually collapsed in the early 20th century with the advent of artificial refrigeration and ice-making machines, which made ice a common household item.
Tea
Tea, now the world’s most consumed beverage after water, was once a luxury item in the Western world. Originating in China, where it had been enjoyed for thousands of years, it began to be imported into Europe in the early 1600s. By the 1660s, it had reached Britain, where it became a highly fashionable luxury good consumed by those who could afford it. Tea was initially so expensive in Europe that it was kept in locked chests known as tea caddies. The tea trade eventually became a significant factor in global economics and politics. The British East India Company’s monopoly on tea imports to Britain led to widespread smuggling and ultimately played a role in the American Revolution — most memorably in the Boston Tea Party. The British obsession with tea also led to a growing trade imbalance with China (then the primary producer of tea), which eventually sparked open conflict between the two nations. After taxes on tea were slashed at the end of the 18th century and plantations were established in India and Ceylon (now Sri Lanka) in the 19th, tea became more affordable and widely consumed.
Chocolate, now available in countless forms and at varying price points, was once a luxury reserved for the wealthy. We can thank the Maya and Aztecs, both of whom saw it as a gift from the gods and valued cacao beans as highly as any other product. When Europeans encountered chocolate in the Americas, they wasted little time introducing it to Europe. By the end of the 16th century, it had become the drink of the European aristocracies, enjoyed only by the nobility and wealthy merchants. Then, in 1828, the cocoa press was invented, revolutionizing chocolate production and making it available to the masses.
In 1493, Christopher Columbus had his first encounter with a pineapple on the island of Guadeloupe in the West Indies. Like explorers who came after him, he was mightily impressed by the strange, sweet fruit. By the 17th and 18th centuries, pineapples had become an exotic luxury in Europe and North America, available to only the wealthiest consumers. In the American colonies in the 1700s, a single pineapple imported from the Caribbean islands could cost as much as $8,000 in today’s money. And in mid-17th-century Britain, an affluent aristocrat could expect to spend £60 for one pineapple — equivalent to about $14,400 today. Pineapples were such a symbol of wealth and status in Britain that they became a common motif in architecture and design — they can still be seen adorning the rooftops, railings, and doors of many prestigious buildings in London. It wasn’t until the development of steamships and the canning industry in the 19th century that most of the public could even think of tasting a pineapple.
Last names, also known as surnames, can be more than just family identifiers — they can be gateways to understanding more about our ancestral history, cultural heritage, and even ancient migration patterns. The practice of using last names began as populations expanded and it became necessary to distinguish individuals with the same first names. The origins of these surnames are often tied to geographical regions, occupations, or even personal traits — think “Hill,” “Baker,” or “Armstrong.” In the United States, where the population is a diverse mix of cultures, surnames also carry with them the marks of migration, colonization, and assimilation.
Whether a last name suggests our ethnic heritage, an occupational trade, a geographical region, or the influences of colonization and religion, the identifiers we carry with us can reveal intriguing stories about our past and connect us to a broader story of human movement and settlement. With around 31 million surnames in the world, here are just a few ways that our last names tell us who we are.
The Viking Age marks the period of time when seafaring Norse people raided and colonized their way through Northern Europe, from the end of the eighth century CE until the Norman Conquest in 1066. The influence of the Vikings can still be seen in the surnames of people with Scandinavian, English, Irish, and Scottish ancestry. Names ending in “-son” or “-sen,” such as Davidson or Andersen, are likely to have Viking roots in Scandinavian or Norse heritage, derived from the practice of using “son of” to identify a man’s father. For example, Andersen means “son of Anders,” a popular Scandinavian first name. Other surnames of Old Norse descent include Carlson, Ericsson, Rogerson, Gundersen, Olsen, and Iverson.
Viking migrations, raids, and settlements spread Viking naming conventions as well as the Old Norse language across regions that are now part of modern-day England, Ireland, and Scotland. Regions such as Yorkshire in northern England and parts of Ireland were significantly influenced by Viking settlers, a fact still visible in the surnames common in these areas, such as Holmes, a Viking word meaning “a small island”; McAuliff, meaning “son of Olaf”; and Higgins, which comes from an Irish word that means “Viking.”
Similarly, Doyle, from the Irish Ó Dubhghaill, means a “descendant of Dubhghaill,” coming from the Old Gaelic dubh, meaning “dark” or “black,” and ghaill, meaning “foreigner” or “stranger,” which was how the first Vikings in Ireland were described. Other Viking names with the same meaning include the Irish surname McDowell and the Scottish surname McDougall, both of which are anglicized forms of Mac Dubhghaill, meaning “son of Dubhghaill.”
Surnames Can Tell Us About Our Ancestors’ Occupations
For many people in the United States, surnames hold clues to their ancestors’ European origins as well as their occupations. Names such as Baker, Miller, Sawyer, and Smith each tell a story about the jobs that were once an important part of a community. The last name Baker refers, of course, to someone whose occupation was baking, while the name Miller, from the Middle English “mille,” references a person who owned or worked in a grain mill. The name Sawyer comes from the Old English “sagu,” referring to one who saws wood and signifying a family whose profession was in lumber or woodworking. Similarly, the name Smith is derived from the Old English word “smitan,” meaning “to smite” or “strike,” indicating someone who worked with metal, such as a blacksmith or goldsmith. Other common examples include Taylor, Fisher, Wright (as in wheelwright), Cooper (barrel maker), Hunter, Thatcher, Brewer, Cook, Glover, Shepherd, and Gardner.
Occupational names are common in languages other than English, too, such as Geiger, a German surname meaning “fiddle player”; Notaro, an Italian occupational name meaning “clerk”; Fisker, a Danish surname meaning “fisherman”; Patel, an Indian surname meaning “landowner”; and Favre, a French surname meaning “blacksmith.” The surnames Müller (German), Mulder (Dutch), Melnik (Russian), and Molinero (Spanish) are all equivalent to the English last name Miller.
As societies evolved, these occupational surnames stuck, even as descendants of those original bakers or smiths moved into different lines of work. Their names remain a connection to their ancestors and the roles they played in their communities.
Surnames Can Reflect Our Ethnic Heritage
Beyond occupations, surnames can also be tied to geography. In many cultures, last names were derived from a family’s place of origin. For instance, English surnames such as Hill, from the Old English “hyll,” and Wood, from the Old English “wudu,” refer to natural landmarks, while the Italian surname Carrara is derived from the Latin word for “quarry” and comes from the name of a city in Tuscany known for its marble quarries. In Spanish-speaking cultures, common names include Fernández, meaning “son of Fernando,” and Lopez, meaning “son of Lope,” emphasizing a patronymic naming tradition similar to that of the Vikings.
Some other common Spanish names meaning “son of” include Perez (“son of Pedro”), Alvarez (“son of Alvaro”), Sanchez (“son of Sancho”), and Diaz (“son of Diego”), while Spanish geographical names include Rivera, from the Spanish word ribera, meaning “bank” or “shore,” and Vargas, from the Spanish and Portuguese word varga, meaning “flooded field,” “pastureland,” or “slope.”
In Asia, surnames carry rich historical significance as well. The most common surname in China, as well as the world, is Wang, meaning “king” or “monarch,” while Li was the surname of Chinese emperors of the Tang dynasty. The most common surname in Japan is Satō and, like other popular surnames such as Itō and Katō, it is linked to the powerful Fujiwara clan that controlled the Japanese government for four centuries.
African surnames originated through a mix of Indigenous traditions and external influences, including colonization, religion, and migration. In West Africa, surnames often indicate the ethnic group or clan a person belongs to, such as Diop, a surname that comes from the Diop clan of the Wolof people of Senegal. The influence of Islam and the Arabic language is reflected in the surnames of North Africa. The most common surname in Egypt, for instance, is Mohamed, which is associated with the Islamic prophet, Muhammad, while the Sudanese surname Elmalik means “the king” or “the owner” in Arabic.
There Are More Than 6 Million Different Surnames in the U.S.
According to the U.S. Census Bureau, about 6.3 million different surnames were reported in the 2010 census. Centuries of immigration and forced migration from Europe, Asia, Africa, and Latin America, as well as the existing Indigenous populations, contributed to making the United States home to one of the most diverse arrays of surnames in the world. As people from different countries settled in America, they brought their surnames with them, often adapting, abbreviating, or anglicizing them over time to make them easier for English speakers to pronounce or to avoid discrimination. While many of us today may be far removed from the origins of our surnames, they still connect us to our past.
The number 13 has long been considered unlucky in many Western cultures. Even today — in a world far less superstitious than it was in the past — a surprising number of people have a genuine, deep-rooted fear of the number 13, known as triskaidekaphobia. For this reason, many hotels don’t list a 13th floor (Otis Elevators reports 85% of its elevator panels omit the number), and many airlines skip row 13. And the more specific yet directly connected fear of Friday the 13th, known as paraskevidekatriaphobia, results in financial losses in excess of $800 million annually in the United States as significant numbers of people avoid traveling, getting married, or even working on the unlucky day.
But why is 13 considered such a harbinger of misfortune? What has led to this particular number being associated with bad luck? While historians and academics aren’t entirely sure of the exact origins of the superstition, there are a handful of historical, religious, and mythological matters that may have combined to create the very real fear surrounding the number 13.
The Code of Hammurabi was one of the earliest and most comprehensive legal codes to be proclaimed and written down. It dates back to the Babylonian King Hammurabi, who reigned from 1792 to 1750 BCE. Carved onto a massive stone pillar, the code set out some 282 rules, including fines and punishments for various misdeeds, but the 13th rule was notably missing. The artifact is often cited as one of the earliest recorded instances of 13 being perceived as unlucky and therefore omitted. Some scholars argue, however, that it was simply a clerical error. Either way, it may well have contributed to the long-standing negative associations surrounding the number 13.
The idea of 13 being unlucky may have originated with, or at least have been bolstered by, a story in Norse mythology involving the trickster god Loki. In this particular myth, 12 gods are having a dinner party at Valhalla when a 13th — and uninvited — guest arrives. It is the mischievous Loki, who sets about contriving a situation in which Hoder, the blind god of darkness, fatally shoots Balder the Beautiful, the god of joy and gladness, with an arrow. It’s possible that this ill-fated myth helped cement the number’s connection to chaos and misfortune in Nordic cultures, and in Western civilization more widely.
The Last Supper
Christianity has also helped fuel the superstition surrounding the number 13. In the New Testament — as in Norse mythology — there is a fateful gathering centered around a meal, in this case the Last Supper. At the dinner, Jesus Christ gathers with his Twelve Apostles — making 13 attendees in total. Judas Iscariot, the apostle who betrayed Jesus, is often considered to have been the 13th guest to sit down at the Last Supper, which might have contributed to the number’s negative connotation. This, in turn, may have led to the notion of Friday the 13th being a day of misfortune or malevolence, as the Last Supper (with its 13 attendees) was on a Thursday, and the next day was Friday, the day of the crucifixion.
It’s also possible that 13 gained a bad reputation because of the squeaky-clean nature of the number 12. In Christian numerology, 12 symbolizes God’s power and authority and carries a notion of completeness (a concept also found in pre-Christian societies). Its neighboring numeral may have suffered as a result, being seen as conflicting with this sense of goodness and perfection, further adding to the potent and enduring idea that the number 13 is unlucky.
In 1832, Yale University students William Huntington Russell and Alphonso Taft co-founded “The Order of the Skull and Bones,” a secret society that has gone on to become one of the most elite organizations of its kind in the United States. For almost two centuries, Skull and Bones has been a subject of much fascination, speculation, and suspicion. Its members have included some of the most influential and powerful figures in American history — including three U.S. presidents — and its secrecy has fueled numerous conspiracy theories and rumors about the society’s true nature and purpose.
Over the years, several strange secrets about Skull and Bones have been revealed. According to some accounts, new members are — or once were — made to lie naked in a stone coffin while describing their most intimate secrets and experiences. And the society’s headquarters — a stark, windowless brownstone building in New Haven, Connecticut, called “The Tomb” — is rumored to house a number of macabre artifacts, including the skulls of the Apache warrior Geronimo and the Mexican revolutionary Pancho Villa. Perhaps of greater import to the Bonesmen and Boneswomen, as initiates are known (women were granted membership in 1992), is the promise that all members are guaranteed lifelong financial stability — in exchange, of course, for their absolute loyalty and secrecy.
Despite this secretive nature, many prominent individuals have been identified as members of Skull and Bones. (Up until 1971, the society published an annual membership register.) Here are six of the most influential known members of the secret society.
William Howard Taft, 27th President of the United States
William Howard Taft served as president of the United States from 1909 to 1913, and later as chief justice of the United States — he is the only person to have held both positions. The young Taft was initiated as a Bonesman in 1878, which was no surprise as his father, Alphonso Taft, was the society’s co-founder. It’s hard to say how much bearing Skull and Bones had on Taft’s career, but he rose rapidly after Yale and was appointed a judge while still in his 20s. He became a federal circuit judge at 34, and in 1900 was sent to the Philippines by President William McKinley to serve as chief civil administrator — a political position that set him on the path to the White House.
Walter Camp, often referred to as the “father of American football,” was initiated into Skull and Bones in 1880. He was a college football player and coach at Yale, during which time he played a pivotal role in shaping the rules and strategies of the game. Camp’s changes included the introduction of the quarterback role, reducing the team size to 11 from 15, and replacing the traditional scrum of British rugby with the scrimmage. He served on Yale’s athletic committee for nearly 50 years, influencing not just football but collegiate athletics as a whole, and is widely considered his generation’s most influential champion of athletic sports.
Henry Luce was a hugely influential publisher who founded Time magazine, Life magazine, Fortune, and Sports Illustrated. Before he became one of the most powerful figures in the history of American journalism, Luce was a member of Skull and Bones. He was initiated in 1920 alongside his best friend Briton Hadden, with whom he co-founded Time in 1923. Hadden died six years after Time was first published, leaving Luce in sole control of the magazine. As with all new members of Skull and Bones, Luce was assigned a secret name — in his case, “Baal.” Many Bonesmen were given names from literature, myth, or religion, such as Hamlet, Uncle Remus, Sancho Panza, Thor, and Odin.
Skull and Bones has had its fair share of scientifically minded members, including climatologist William Welch Kellogg and physicist John B. Goodenough (recipient of the 2019 Nobel Prize for chemistry). Then there was Lyman Spitzer, a theoretical physicist and astronomer who joined Skull and Bones in 1935. Spitzer made significant contributions to several fields of astrophysics, including research into star formation and plasma physics. He was also the first person to propose the idea for a space-based observatory, which he detailed in his 1946 paper “Astronomical Advantages of an Extra-Terrestrial Observatory.” His idea later became reality in 1977, when NASA, along with the European Space Agency, took the concept and began developing what became the Hubble Space Telescope.
George H.W. Bush, 41st President of the United States
George H.W. Bush was a notable student during his time at Yale. He was accepted into Phi Beta Kappa, a prestigious academic honor society; he captained the Yale baseball team that played in the first two College World Series; and he was a member of the Yale cheerleading squad. He was a worthy candidate for Skull and Bones, which he was initiated into in 1948. Later, of course, Bush became the second Bonesman to occupy the Oval Office, when he was sworn in as the 41st president of the United States in 1989. Bush wasn’t the first of his family to join Skull and Bones, though. His father, U.S. Senator Prescott Bush, was initiated in 1917, while his uncle George Herbert Walker Jr. joined a decade later. Bush’s son, George W. Bush (the 43rd president of the United States), continued the family tradition when he too was initiated in 1968.
On the other end of the political spectrum, plenty of Bonesmen have gone on to become members of the Democratic Party, the most famous of whom is John Kerry, initiated in 1966. Prior to serving as secretary of state under Barack Obama, Kerry was the Democratic nominee for president in the 2004 election — and his opponent was none other than fellow Bonesman George W. Bush. When Kerry was asked what he could say about the significance of both him and Bush being Skull and Bones members, he simply and dutifully replied, “Not much, because it’s a secret.”
Covered bridges are an idyllic symbol of rural America. These charming, often hand-built structures have been romanticized in popular culture for years, from Thomas Kinkade’s painting “The Old Covered Bridge” to the novel (and film adaptation) The Bridges of Madison County. Though the age of concrete and steel has made them all but obsolete, these old wooden bridges continue to be beloved landmarks, their distinct roofs making them easily recognizable even today. But what exactly led to their proliferation in decades past?
A covered bridge is exactly what its name suggests: a bridge with a roof and enclosed sides, typically constructed from wood. The reason for the covering is quite simple. While there are some theories — most likely with some truth to them — that the roofs were added to keep animals calm above rushing water, or to provide shelter for travelers, the real purpose was much more practical. Wooden bridges, which were common in the U.S. and Europe in the 18th and 19th centuries due to the abundance of timber, deteriorated quickly when exposed to the elements. Rain, snow, and sunlight caused the wood to rot or warp, compromising the materials’ integrity and reducing the lifespan of the bridge. Covering the structure protected the wooden framework and deck. By keeping the timber dry, the bridge’s life could be extended by decades. Uncovered wooden bridges might last just 10 to 20 years, whereas some of America’s original covered bridges, such as the Hyde Hall Bridge in New York’s Glimmerglass State Park, remain intact almost 200 years after being built.
Simply having a roof doesn’t necessarily make a structure a true covered bridge, though. Underneath every authentic covered bridge is its truss system, a network of beams, often in the shape of triangles, that distributes the weight of the bridge and the load it carries on its deck. The trusses, though rugged in appearance, require precision, and building one often took a whole village — quite literally. Dozens, if not hundreds, of skilled workers from the community were involved: sawyers to prepare the rough-cut logs, timber framers to properly place the beams, and stonemasons to build the abutments, to name a few.
While the bridge coverings were primarily a form of protection, they also became symbols of, and important to, the communities that built them. They served as gathering places and even inspired local lore — such as the tradition of couples sharing a covert kiss under the roof, inspiring the name “kissing bridges.”
Covered bridges began appearing in the United States in the early 1800s; one of the earliest and most famous examples was Philadelphia’s Permanent Bridge, built by architect Timothy Palmer over the Schuylkill River in 1805. By the mid-19th century, covered bridges were a common sight in the American countryside; estimates suggest that as many as 10,000 were built by the peak of their popularity in the 1870s. Though they’ve become emblematic of bucolic Americana, they weren’t unique to the U.S. In ancient China, for instance, covered bridges — known as corridor bridges — served as multifunctional structures, housing community events and shops, or providing a place to rest. Similarly, covered bridges in Switzerland, such as the artwork-adorned Kapellbrücke (or “Chapel Bridge”) in Lucerne, have been around for centuries and remain admired for their intricate designs and historical significance.
While covered bridges were once a common sight across the American landscape, fewer than 1,000 remain today. Despite the protection and reinforcement a covering offered, it wasn’t always enough in the face of floods, fires, or neglect over time. Remaining structures in states such as Pennsylvania, Vermont, and Indiana continue to be preserved and restored, connecting travelers not only to the other side of the river, but to the past.
In weddings around the world, exchanging rings is a crucial part of the ceremony, a moment in which a couple’s promises are sealed with a tangible token. This simple piece of jewelry does a lot of heavy lifting: It acts as a symbol of love, unity, and eternity, while also making our relationship status clear to the world. Various cultures have contributed to the history of the wedding ring, from its ancient beginnings to the relatively recent advent of the double-ring exchanges popular today. But when and how exactly did this time-honored tradition begin?
It’s believed the ancient Romans were the first people to use wedding rings in a way resembling the modern custom, although exchanging rings as symbols of eternity or affection dates back even earlier to ancient Egypt and Greece. Roman weddings were not like the elaborate, picturesque affairs of today, however; marriages were often less about romance and more about family alliances and property. After a marriage contract was signed and a feast was had, there was a procession to the couple’s new home, where the bride was carried over the threshold. It was then that the groom presented the bride with a ring — not just as a gesture of affection, but as a public acknowledgment of their bond and a sign that she was now a part of his household. Romans first used copper and iron for the bands, but they began to favor gold after around the third century CE. In wealthier households, brides often had both: one ring, usually made of iron, to wear at home, and another fancier gold ring to present to the public.
The wedding ring was worn on the fourth finger of the left hand, a custom based on the belief that a vein — known as the vena amoris, or “vein of love” — connected this finger directly to the heart. This tradition may have originated in ancient Egypt, where rings were seen as symbols of eternity; the ring’s circular shape, with no beginning and no end, made it a powerful representation of infinity. While the vena amoris has since been proved anatomically incorrect, the symbolic ring placement on the left hand’s fourth finger remains customary. Though the Romans were the first to formalize the use of rings in a wedding ceremony, it’s believed they took a cue from the ancient Greek and Egyptian cultures. After Alexander the Great of Macedonia conquered Egypt in 332 BCE, the Greeks adopted the custom of giving rings as a sign of love — these tokens often featured motifs of Eros, the Greek god of love, known as Cupid in the Roman pantheon.
By the medieval period in Europe, the Christian church introduced more structured wedding rituals, including the presentation of a wedding band as part of a sacred union performed by a priest. But rings were still just for the bride. Interestingly, men did, for a time, wear engagement rings, long before they started wearing wedding rings. Gimmel rings, popular during the Renaissance era, consisted of two interlocking bands that were separated and worn individually during an engagement period, and then put back together to be worn as one band by the bride after marriage. But the one-sided wedding ring exchange persisted for centuries, all the way until the Second World War.
In the 1940s, as family values and stability were emphasized in the uncertainty of World War II, marriage rates soared in the United States. Jewelers jumped on the chance to promote men’s wedding rings — and it worked. During World War II, many deployed men wore wedding rings as a comforting reminder of their wives and families back home. By the late 1940s, about 80% of U.S. couples gave rings to each other during their wedding ceremonies, compared to just 15% at the end of the Great Depression. Social norms began to change, too, and as marriage became increasingly viewed as a partnership of equals versus an exchange of property, double-ring ceremonies became the norm, and remain so today.
Over the past hundred years, baby-naming trends have largely been shaped by family traditions and popular culture. Classic names such as Mary, John, Betty, and James often appear repeatedly in family trees, passed down out of respect for previous generations and a desire to keep family legacies alive. By the latter half of the 20th century, parents found baby name inspiration in popular culture, including films, theater, and music. The name Jennifer, for instance, began its climb in the U.S. thanks to the George Bernard Shaw play The Doctor’s Dilemma, which debuted on Broadway in 1927. Today, Olivia and Liam are the reigning favorites, and it’s likely only a matter of time before names that are already in the top 10 — such as Mia, Mateo, Evelyn, and Elijah — claim the No. 1 spots.

Here is a fascinating look at the most popular girls’ and boys’ names of the last century, based on data collected by the U.S. Social Security Administration from Social Security card applications.
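If you’d like to check or extend these rankings yourself, the Social Security Administration publishes the underlying counts as a free download: the “National data” file at ssa.gov/oact/babynames unzips into one yobYYYY.txt file per year, each line formatted as “name,sex,count.” Below is a minimal Python sketch of how the yearly No. 1 names can be derived from those files; the local “names” directory is simply an assumed location for the unzipped data.

```python
# Minimal sketch: find each year's most popular girls' and boys' names
# in the SSA's national baby-names files (one yob<YEAR>.txt per year,
# with lines like "name,sex,count"). Assumes the zip is extracted to ./names.
import csv

def top_names(year, data_dir="names"):
    best = {"F": ("", 0), "M": ("", 0)}  # best-so-far (name, count) per sex
    with open(f"{data_dir}/yob{year}.txt", newline="") as f:
        for name, sex, count in csv.reader(f):
            if int(count) > best[sex][1]:
                best[sex] = (name, int(count))
    return best["F"][0], best["M"][0]

for year in range(1924, 1930):
    girl, boy = top_names(year)
    print(f"{year}: {girl}, {boy}")  # e.g., "1924: Mary, Robert"
```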
1920s

1924: Mary, Robert
1925: Mary, Robert
1926: Mary, Robert
1927: Mary, Robert
1928: Mary, Robert
1929: Mary, Robert
The “Roaring ’20s” brought new cultural, economic, and sexual freedoms for women, but the most popular female names of the Greatest Generation — those born between 1901 and 1927 — didn’t reflect this newfound sense of liberation. Mary remained the most popular girls’ name from 1924 to 1929, just as it had since 1900. A biblical name that appears in both the Old and New Testaments, Mary is the anglicized form of Maria and originated from the Hebrew Miryam. In 1924, the name Robert, favored by European royalty and nobility in the Middle Ages, replaced John, another common biblical name, as the most popular boys’ name, ending John’s decades-long place at the top of the list.
1930s

1930: Mary, Robert
1931: Mary, Robert
1932: Mary, Robert
1933: Mary, Robert
1934: Mary, Robert
1935: Mary, Robert
1936: Mary, Robert
1937: Mary, Robert
1938: Mary, Robert
1939: Mary, Robert
The end of the 1920s represented the beginning of the Silent Generation, which continued until 1945. For children born in the 1930s, it was an era marked by scarcity, sacrifice, and traditional values. This generation is also known as the Traditionalist Generation, and the most popular baby names of the 1930s reflect that characteristic: As in the second half of the 1920s, Mary and Robert remained the most popular female and male names, respectively, throughout the decade.
1940s
1940: Mary, James
1941: Mary, James
1942: Mary, James
1943: Mary, James
1944: Mary, James
1945: Mary, James
1946: Mary, James
1947: Linda, James
1948: Linda, James
1949: Linda, James
With the end of the Silent Generation in 1945, a new era of baby boomers — so named for the “baby boom” that happened after World War II — was born. After decades as the No. 1 female name, Mary lost that designation when Linda claimed the top spot in 1947, boosted by the popularity of Jack Lawrence’s 1946 song “Linda.” For boys’ names, Robert’s 16-year reign at the top came to an end at the beginning of the decade, when James — a staple on the top 100 most popular names thanks to both its biblical and royal associations — took the No. 1 spot in 1940 and stayed there until 1952.
1950s
1950: Linda, James
1951: Linda, James
1952: Linda, James
1953: Mary, Robert
1954: Mary, Michael
1955: Mary, Michael
1956: Mary, Michael
1957: Mary, Michael
1958: Mary, Michael
1959: Mary, Michael
Linda continued to be the top female name at the start of the 1950s but swapped places with Mary (again) in 1953, when Mary reclaimed the No. 1 spot and stayed there for the rest of the decade. Meanwhile, three different boys’ names held the No. 1 spot during the 1950s. James remained at the top for three years before Robert returned as the No. 1 name for just one year in 1953. In 1954, Robert slipped to No. 2 behind Michael — a biblical name historically associated with emperors, kings, and saints — which remained the most popular male name for the rest of the decade.
1960s

1960: Mary, David
1961: Mary, Michael
1962: Lisa, Michael
1963: Lisa, Michael
1964: Lisa, Michael
1965: Lisa, Michael
1966: Lisa, Michael
1967: Lisa, Michael
1968: Lisa, Michael
1969: Lisa, Michael
In 1960 and 1961, Mary was once again the most popular female name in the nation. In 1961, Lisa appeared on the list of the top five most popular names for the first time, coming in second behind Mary. The following year, Lisa surpassed Mary as the No. 1 female name, and stayed in the top spot for the rest of the decade. David, which first appeared in the top five most popular male names in 1948 and continually ranked between No. 2 and No. 5 for the next 12 years, finally took the top spot in 1960. Its reign was short-lived, though: David was bumped in 1961 by perennial favorite Michael, making 1960 the only year David has ever held the No. 1 spot, despite being in the top five 39 times over the past century.
1970s

1970: Jennifer, Michael
1971: Jennifer, Michael
1972: Jennifer, Michael
1973: Jennifer, Michael
1974: Jennifer, Michael
1975: Jennifer, Michael
1976: Jennifer, Michael
1977: Jennifer, Michael
1978: Jennifer, Michael
1979: Jennifer, Michael
Generation X, which includes people born between 1965 and 1980, introduced a generation of “latchkey kids,” children of dual-income or divorced parents whose free-range childhoods led them to be characterized as independent, resourceful, and cynical. Jennifer, which first appeared in the top five most popular female names in 1968, took the top spot in 1970 and stayed there throughout the 1970s and into the 1980s. Michael, which had dominated the top of the list since the mid-1950s, continued as No. 1 throughout the decade. Three names — James, Christopher, and Jason — vied for the No. 2 spot over the course of the decade, but neither Christopher nor Jason ever reached No. 1, unlike James, which had previously held that spot for 13 years.
1980s

1980: Jennifer, Michael
1981: Jennifer, Michael
1982: Jennifer, Michael
1983: Jennifer, Michael
1984: Jennifer, Michael
1985: Jessica, Michael
1986: Jessica, Michael
1987: Jessica, Michael
1988: Jessica, Michael
1989: Jessica, Michael
The year 1981 marked the start of Generation Y, better known as millennials. This generation grew up during a time of major technological advancements, including personal computers and internet access. During the early 1980s, Jennifer continued to hold the top spot as the most popular female name, but 1985 saw another three-syllable “J” name take the lead: Jessica. This name may have appealed to parents who wanted a trendier “J” name than Jennifer, but the first recorded instance of Jessica actually dates back to William Shakespeare’s play The Merchant of Venice. Meanwhile, Michael enjoyed another decade at No. 1, while Christopher remained No. 2 throughout the 1980s and into the 1990s, never quite surpassing the popularity of Michael.
1990s

1990: Jessica, Michael
1991: Ashley, Michael
1992: Ashley, Michael
1993: Jessica, Michael
1994: Jessica, Michael
1995: Jessica, Michael
1996: Emily, Michael
1997: Emily, Michael
1998: Emily, Michael
1999: Emily, Jacob
In 1993, the World Wide Web was released for free public use, introducing a whole new way of learning and communicating. The millennial generation came to a close in the mid-1990s, and 1997 is considered the beginning of Generation Z, also called “zoomers.” Jessica bounced between No. 1 and No. 2 for most of the decade before disappearing from the top five in 1998. Ashley, a name more commonly given to boys prior to the 1960s, held the lead in 1991 and 1992, but slipped to No. 2, No. 3, and No. 5 for the rest of the decade. Emily, another appealing “-ly” name, appeared in the top five in 1993 and held the lead for 13 years, starting in 1996. And after holding the No. 1 spot as the most popular male name for 44 of the previous 45 years, Michael was dethroned in 1999 by Jacob, a name that first appeared in the top five in 1995.
2000s
2000: Emily, Jacob
2001: Emily, Jacob
2002: Emily, Jacob
2003: Emily, Jacob
2004: Emily, Jacob
2005: Emily, Jacob
2006: Emily, Jacob
2007: Emily, Jacob
2008: Emily, Jacob
2009: Isabella, Jacob
The year 2001 marked the beginning of the third millennium CE, but the occasion didn’t have an impact on the most popular baby names for the youngest Gen Zers. Though computers and the internet changed the way millennials lived and worked, Generation Z is considered the first true digital native generation, as this cohort has never known a time when the internet wasn’t as close as their smartphone. Still, for older Gen X and millennial parents, sticking with tradition was the name of the name game in this booming digital era. Emily held the No. 1 spot for the entire decade until Isabella, which first appeared in the top five in 2006, took the lead in 2009. And Jacob, a biblical name with a more contemporary feel, replaced Michael after a long reign and remained No. 1 throughout this decade and into the next.
2010s

2010: Isabella, Jacob
2011: Sophia, Jacob
2012: Sophia, Jacob
2013: Sophia, Noah
2014: Emma, Noah
2015: Emma, Noah
2016: Emma, Noah
2017: Emma, Liam
2018: Emma, Liam
2019: Olivia, Liam
Those born in the 2010s are shaping up to be known as Generation Alpha. The oldest members are just entering their teen years and the qualities that will define their cohort are still being formed. That fact might be reflected in the number of names — four for girls and three for boys — that held the No. 1 spot over the decade. This decade saw a trend toward vintage and vintage-inspired names. Isabella began the decade at No. 1, just as it had ended the previous one, but was displaced in 2011 by Sophia, which stayed at the top for three years before being replaced by Emma. Emma then stayed at No. 1 for five years, before Olivia claimed the top spot to finish the decade. Continuing its popularity from the previous decade, Jacob held the top spot as the most popular male name for three years before being unseated by Noah. Four years later, in 2017, Liam — a derivative of William that first appeared on the top five list in 2013 as the third most popular male name — replaced Noah as the most popular name for the rest of the decade.
2020s

We’re only a few years into this century’s ’20s, but the most popular baby names reflect how trends have shifted since the start of the new millennium. So far in this decade, Olivia and Liam are holding steady at No. 1, but old-fashioned names are making a notable comeback. Henry, which ranked as the 18th most popular male name in 1924, climbed to No. 8 in 2023, and Evelyn, which was the 12th most popular female name in 1924, was No. 9 in 2023. On the other hand, Robert and Mary, the two most popular names 100 years ago, now rank 89th and 135th in popularity, showing how much trends have changed in the last hundred years.
In the mid-1820s, French inventor Joseph Nicéphore Niépce stood at an upstairs window of his estate in Saint-Loup-de-Varennes in Burgundy, France. In his hand he held a primitive camera. After at least eight hours of exposure, Niépce created the world’s first — or at least oldest surviving — photograph, known as “View From the Window at Le Gras.” In that moment, an entirely new medium was born.
Photography rapidly went from one first to another. In 1838, Louis Daguerre, inventor of the daguerreotype, shot the first photo to include people. In 1840, English-born American scientist John W. Draper took the first photo of the moon. And in 1861, Scottish mathematical physicist James Clerk Maxwell produced the earliest color photograph. Advances continued apace, until another landmark — the first cellphone photo, in 1997 — launched an era in which cameras became ubiquitous, and the age of the selfie was born.
Throughout the 20th century, photographers captured images that ran the gamut of human experience. Here are some of the most iconic photos from each decade of the last century, from Kitty Hawk, North Carolina, to the far-flung reaches of the universe.
On the morning of December 17, 1903, on the sand dunes 4 miles south of the fishing village of Kitty Hawk, North Carolina, one of the most pivotal moments in human history was captured on camera. Once developed, the photo showed the moment that aviation pioneer Orville Wright took to the air in the world’s first successful airplane.
In 1908, the U.S. National Child Labor Committee hired sociologist and photographer Lewis Hine to photograph children at work in mills, mines, and factories. His photos of the “breaker boys” — children usually between the ages of 8 and 12 who worked in coal mines — helped build public support for legislation barring child labor.
“Le Violon d'Ingres” is a black-and-white photograph created by the American visual artist Man Ray. It features the model Kiki de Montparnasse, with two f-holes painted on her back to make her body resemble a violin. One of the most iconic images of the surrealist movement, it set a record in 2022 when it sold for $12.4 million, becoming the most expensive photo of all time.
The 1930s were a decade of many iconic photos, from the Hindenburg zeppelin bursting into flames to a classic photo purporting to show the Loch Ness monster. Arguably the most famous, however, was a black-and-white portrait of 11 ironworkers eating lunch while sitting precariously on a steel beam 850 feet in the air, taken during the construction of Rockefeller Center in New York City.
“Raising the Flag on Iwo Jima,” 1945
World War II heralded a new era of war photography, much of it more intimate and at times more harrowing than anything previously seen. One of the most recognizable images was taken by Joe Rosenthal of the Associated Press on February 23, 1945. It shows six United States Marines raising the American flag atop Mount Suribachi during the Battle of Iwo Jima. Three of the Marines were killed in action during the battle, but their image remains a timeless portrayal of troops at war.
In 1957, photographer Will Counts took a series of photos during the Central High School desegregation crisis in Little Rock, Arkansas. One of his photos, sometimes known as “The Scream Image,” shows Black student Elizabeth Eckford being harassed by white students in front of the school. Instantly iconic, the image became a key symbol of the Civil Rights Movement.
“Earthrise,” 1968
On Christmas Eve in 1968, the crew of Apollo 8 saw a spectacular sight as they orbited the moon: the Earth appearing above the lunar horizon. Astronaut Bill Anders took a photo, which became known as “Earthrise.” Not only was it the first color photograph of Earth taken from the moon, but it also inspired a generation to think more seriously about our planet. The image is now widely credited with advancing the global environmental movement.
The 1970s were characterized by political upheaval on a global scale, with frequent coups, domestic conflicts, and civil wars. In the U.S., the decade was marked by the Vietnam War, President Richard Nixon’s term, and the Watergate scandal. This iconic image shows Nixon waving goodbye from the steps of his helicopter as he leaves the White House after his farewell address on August 9, 1974.
From a sixth-floor balcony of the Beijing Hotel, Jeff Widener of the Associated Press took one of the most recognizable photos of all time. Below him, in Tiananmen Square, a lone man bravely stood in front of a column of tanks, repeatedly changing his position in order to obstruct their attempted advance. Widener’s photo is a classic image of resistance, though the lone protester has never been identified.
“Pillars of Creation,” 1995
Photography is literally light-years ahead of where it was when Joseph Nicéphore Niépce took the first photo in history from his upstairs window. “The Pillars of Creation” — arguably the Hubble Space Telescope’s most iconic image — shows a section of the Eagle Nebula, a star-forming region located some 7,000 light-years from Earth. It features three huge, fingerlike columns formed of interstellar gas and dust, where the process of creating new stars takes place, all captured in a truly awe-inspiring photograph.
The first inhabitants of what is now the United States appeared around 15,000 to 20,000 years ago — a blip in time compared with humanity’s earliest settlements elsewhere in the world. Initially, population growth was slow due to the continent’s geographic isolation; significant increases began only after Europeans made their way to the Americas throughout the 16th and 17th centuries. By the 20th century, the U.S. population was experiencing rapid expansion — a trend that has slowed in recent years. Here’s a look at America’s changing population through history, from early prehistoric arrivals to the slowing growth we’re seeing today.
The North American continent was inhabited by prehistoric humans, although they arrived much later than humans in other parts of the world. While early human species have been around for millions of years, the first people didn’t make their way to North America until sometime between 20,000 BCE and 13,000 BCE. It’s believed they traveled via the Bering Land Bridge from modern-day Siberia to Alaska, although exactly when and how they first arrived is still a matter of debate. The number of people who were around in this era is debated as well, and while estimates vary, it’s believed some 230,000 people were living in America by 10,000 BCE.
By 1 CE, an estimated 640,000 people were living in what is now the United States. Indigenous peoples developed agricultural practices that helped to define their communities, especially along the Mississippi watershed. By 1100 CE, a settlement known as Cahokia, located across the Mississippi River from modern-day St. Louis, was home to about 20,000 people. The population continued to grow throughout the land, and by 1400, there were an estimated 1.74 million people in the modern-day U.S.
When Christopher Columbus landed in the Caribbean in 1492, that number had increased by nearly 150,000 people. But in the years that followed, as other European explorers began to map and claim parts of North America, waves of disease, displacement, and conflict had a major effect on the population. Within 100 years of the first European landing in the New World, an estimated 85% to 90% of the Americas’ Indigenous population was wiped out. In the modern-day United States alone, the population dropped almost 60% between the years 1500 and 1600, from 1.89 million to 779,000.
By the early 1600s, Spanish and English explorers had established permanent settlements in what is now St. Augustine, Florida, and Jamestown, Virginia, and by the mid-1600s, the Pilgrims had established themselves in modern-day Massachusetts. By 1700, the population of the colonies had grown to an estimated 250,000. Official population estimates at the time did not include Indigenous peoples, an omission that wasn’t corrected until the late 19th century; scholars who later attempted to include Indigenous populations in the count put it closer to 900,000. In 1776, when the U.S. gained independence from England, the known population had surged to approximately 2.5 million — but the new country was about to undergo even more transformation.
The period from the American Revolution through the end of World War II saw explosive population growth. In 1790, the first official U.S. census counted 3.9 million Americans living in the country. This era coincided with the start of the Industrial Revolution, a transformative time in the Western world. Technological innovation not only marked a societal shift from agrarian to industrial economies, but also spurred rapid urbanization. The era saw improvements in working conditions, sanitation, and medical care, too, making life expectancy longer than ever before. Ten years after the first U.S. census, in 1800, the population had shot up by almost 1.4 million people to reach 5.3 million. That surged to 23 million by 1850, fueled largely by a wave of European immigration to the U.S. By 1900, the country was home to 76 million people, a number that also reflected the country’s Indigenous residents. America’s population continued to grow during the early 20th century, reaching about 148 million by 1945.
The post-World War II era saw a dramatic increase in birth rates — a trend that famously became known as the baby boom. The sharp rise was due to a number of factors, key among them being economic prosperity, soldiers returning from war, and a cultural emphasis on family life. By the time the boom tapered off in 1964, some 76 million babies had been born, and these new citizens made up nearly 40% of the country’s population of almost 197 million.
Today
In 2024, the U.S. Census Bureau counted America’s population at 337 million people. And while that’s an all-time high, the rate of growth has been slowing down. The baby boom was followed by a period of lower birth rates, which remain on a downward slope. Meanwhile, an aging population means death rates are projected to meet or exceed births.
The postwar period also saw changes in U.S. immigration policy, including the Immigration and Nationality Act of 1965, which did away with previous immigration quotas, opening the doors to more new Americans. Immigration has been the main driver of the country’s population growth since 1970, and that trend is expected to continue. In 2023, the Census Bureau projected that sustaining diverse immigration will help to balance the effects of an aging population. Still, despite a projected global population increase of nearly 2 billion people in the next 30 years, the U.S. population is expected to peak at around 370 million by 2080, then decline slightly to 366 million by 2100.