By Timothy Ott
September 4, 2025
Well before there was a need to enter individual units of an apartment building or secure clothes inside a gym locker, people sought to keep thieving hands away from their stores of grain or precious jewels. As a result, locks and keys have been around for a large chunk of recorded history.
While both have undergone numerous alterations in accordance with ever-updating technologies, the story of keys is perhaps more personal to the human experience as the portable component that has accompanied us on our journeys over the years. Here’s a look at how these pronged keepsakes have changed since they first surfaced in the ancient world.
According to Eric Monk’s Keys: Their History and Collection, the oldest known lock is a wooden specimen unearthed from the ruins of the Palace of Sargon in Dur-Sharrukin (modern-day Khorsabad), near the Assyrian capital of Nineveh. Because a similar version is depicted in frescoes at the Karnak temple complex in Egypt, this style of “Egyptian lock” is believed to be roughly 4,000 years old.
This early form of security entailed sliding a wooden board through a slot across a door, with movable pins above the slot dropping through corresponding holes in the board to keep it bolted in place. The key for this type of lock was another long piece of wood, sometimes measuring more than 2 feet, with pegs on the end to push the pins back through the holes and allow the board to be released from the bolting position. While these locks were originally fitted to the outside of a door, a hole cut next to the lock enabled a person to reach through and operate the lock from the inside.
The ancient Greeks, meanwhile, developed a different system in which the lock was mounted on the inside of the door. Unlocking from the outside again involved reaching through a hole, with a sickle-shaped key used to turn the bolt. As with the Egyptian models, these keys were large and were often carried by being slung over the shoulder.
A noticeable difference in methodology arrived with the Romans, who began developing metal locks and bronze keys. The Romans employed "wards," which were obstacles built within the locks that could only be bypassed by keys of specific shapes. This led to the development of teeth and cut-out shapes within the "bit," the end of a key that engages the bolt.
The Romans also popularized the use of small keys that fastened to a ring and could easily be carried in the palm of the hand. These keys were typically used for boxes that stored valuables within a home and as such were used to signify the wearer's affluence.
Key ornamentation progressed to a remarkable degree through the long period of the Middle Ages, which eventually saw bronze keys phased out in favor of iron. While some medieval keys were small, others surpassed 9 inches in length. This served the same purpose as the Roman ring keys, to project the wealth and prestige of the owners of such magnificent instruments, with the added benefit that these plus-size keys were less likely to be lost than tinier versions.
Early medieval bits featured a cutout cross or another simple cleft to bypass a built-in ward, but were eventually fashioned with teeth and other irregular shapes. The handle, or "bow," also underwent a series of stylistic changes, from the loops and ovals of the eighth century to the kidney shape, often intricately designed, of the 15th century. By this time, many keys were also embedded with a "collar" to stop them from being pressed too far into holes.
By the 17th century, bows were sometimes designed with animal motifs, initials, or a coat of arms. Ceremonial keys also soon came into vogue, including the gilded "chamberlain's keys" that once served a functional purpose in European royal courts but evolved into ornaments for special occasions.
As mentioned in Keys: Their History and Collection, the attention to craftsmanship failed to solve an enduring problem: Most locks were capable of being picked by a person with the dexterity and patience to make the effort. This sparked a focus on stronger security measures in the second half of the 18th century, and the revival of ancient Egyptian methodology with English locksmith Robert Barron's introduction of a twin-tumbler lock in 1778.
Six years later, English inventor Joseph Bramah unveiled an even stronger lock that featured spring-loaded sliders spaced around a pin. A small, cylindrical key embedded with notches corresponding to the height and depth of each slider engaged the pin, enabling the key to turn.
While inventors on both sides of the Atlantic competed to produce the strongest locks over the first half of the 19th century, their creations were too expensive for the general public to afford. That meant old-fashioned (and easily pickable) warded locks remained commonplace around homes, to be locked and unlocked by hefty iron keys.
However, the industry changed for good after the invention of the Yale lock. Patented by Linus Yale in 1844 and perfected by Linus Yale Jr. over the next two decades, this lock required the insertion of a flat, serrated key that raised a series of spring-loaded tumblers to varying heights.
The Yale products became prototypes of the types of keys that remain widely in use today, and with the invention of specialized machinery by the end of the 19th century, the manufacture of locks and keys moved into an era of mass production that made these items more affordable.
Nowadays, keys for homes and valuables are typically made from materials such as nickel silver, brass, and steel. But the development of key cards in the 1970s marked a turn away from metal, and innovations in biometrics have hinted at a future in which fingerprints and facial recognition technology become the norm for entry.
Ultimately, physical keys may wind up being a relic of the past, much the same way as those clunky old door openers from Egypt and Greece are to us today.
Parades are a curious phenomenon: We gather en masse to march through a space with music, costumes, and elaborate displays. But even at their most odd, these festive demonstrations also feel universal — most likely because they represent a tradition that spans continents and centuries.
Before there were parades, there were processions. These were ceremonial or ritualistic events that anthropologists believe predated parades as a way to connect with community — and with the divine. Their roots stretch back to ancient civilizations: In Mesopotamia, as early as 2900 BCE, priests and citizens marched with statues of deities such as the Babylonian god Marduk during the multiday Akitu celebrations for the new year.
The Panathenaic festival, held in Athens to honor the goddess Athena, similarly featured a grand procession to the Acropolis, where offerings and sacrifices were made. These events were spiritual and social, but they were not yet parades as we know them. As author Doug Matthews explains in his book Why We Love Parades: Their History and Enduring Appeal, there is an important difference: A procession has deeper significance behind the occasion, while a parade, though often organized around an occasion, is primarily about the content of the event for the enjoyment of spectators.
The word “parade” itself comes from a mid-17th-century French term meaning “pompous show.” That’s a fitting definition, given that if one ancient civilization can be credited with originating the modern-day parade, it’s the Romans. Their notorious penchant for spectacle meant processions became grandiose displays of power known as triumphs.
The Roman triumph celebrated military victories; generals were paraded through the streets of Rome in horse-drawn chariots accompanied by soldiers and captives, sometimes acting out battle scenes. While the parades were as much about propaganda as spectacle, they were also celebratory; the streets were filled with music, waving flags, cheering crowds, and a theatricality that set the template for parades to come.
During the Middle Ages, processions for Christian holy days were prominent and began to incorporate theatrical elements including pageantry and dramatized biblical scenes set against movable scenery. This period also saw the emergence of pre-Lent Carnival parades; communal joy was a central fixture, as were the masks, costumes, music, and dancing that laid the groundwork for the diverse and exuberant Carnival parades around the world today.
By the 18th century, parades had taken on an even more festive form and a now-familiar symbol had become more common: the float. In the U.K., the annual Lord Mayor’s Show became an elaborate parade through the streets of London; some believe the term “float” to have originated here, describing the decorated floating barges that were towed along canals.
Parades began making their way to America in this century, too: In the 1760s, with the American colonies still under British rule, Irish soldiers stationed in New York City with the British army began the tradition of marching in St. Patrick’s Day parades. And in 1788, a series of marches through the streets of New York, Philadelphia, and other cities took place to celebrate the newly ratified U.S. Constitution.
Soon, parades became a staple of American public life. Fourth of July celebrations, observances of George Washington’s birthday, and local milestones around the country — especially in its burgeoning urban centers — brought communities together in a unifying expression of American culture and civic pride. By the turn of the 20th century, parades had become both institutionalized and commercialized. Local governments formalized annual parades for national holidays, while at the same time, businesses recognized parades as powerful marketing opportunities. Nowhere was this better embodied than the Macy’s Thanksgiving Day Parade, launched in 1924 as a Christmas parade with elaborate floats and live animals that were meant to advertise the New York City store.
Today, some of the biggest parades in the world are not only joyful spectacles, but also platforms for social movements. Since the first Pride marches took place in 1970, they’ve gone on to become massive global celebrations of the LGBTQ+ community, underlining how parades remain a vital piece of cultural expression, tradition, identity, and joy.
As the subject of numerous carols, a featured attraction for both cavernous department stores and cozy mom-and-pop businesses, and an object of purchase for some 35 million to 50 million American consumers every year, Christmas trees are undoubtedly a focal point of annual Yuletide celebrations. Older than the carols in which they’re celebrated but not nearly as old as the pagan origins of the holiday they support, Christmas trees have been a familiar, comforting sight since childhood for countless people dating back many generations.
But these evergreen conifers didn’t simply emerge as part of holiday celebrations like a fully assembled toy right out of the box. Here’s a look back at how Christmas trees became part of, and then inextricable from, the end-of-year festivities in which they have a starring role.
The Christmas tree tradition has its roots in the long, multicultural history of evergreen plants being used to mark the arrival of the winter solstice. From the palm branches that featured in celebrations of the sun god Ra in ancient Egypt to the wreaths that were incorporated into the Roman festivities of Saturnalia, evergreens symbolized the rebirth of life during the cold, dark winter months.
According to Judith Flanders’ Christmas, A Biography, the emergence of trees to commemorate the season was in part inspired by the “paradise plays” that were popular in Europe during the Middle Ages. Staged in observance of the Christian feast day of Adam and Eve on December 24, the plays typically featured an evergreen fir, festooned with apples, as the stand-in for the symbolic tree of life.
Meanwhile, the proliferation of certain legends in 15th-century Germany further strengthened the association of trees with the Christian celebrations of the winter holiday. Among the most popular was the story of St. Boniface, who supposedly chopped down an oak that was towering above the remains of human sacrifice meant for the pagan god Thor, and replaced it with a fir tree that symbolized the eternal truth of Christ.
As described in Christmas, A Biography, the first documented Christmas tree dates back to at least 1419, when a guild in Freiburg, Germany, decorated an outdoor tree with apples, gingerbread, wafers, and tinsel.
By the 1530s, the first known market for Christmas trees had surfaced in Strasbourg, then part of southwest Germany. With demand apparently threatening the health of local forests, ordinances enacted in the Alsace region first limited households to one small tree at Christmas, before the city of Freiburg attempted to ban the practice altogether. Nevertheless, the popularity of Christmas trees continued to grow in Germany, as illustrated by the appearance of another guild-sponsored tree in the northern city of Bremen in 1570.
Although popular legend holds that the 16th-century reformer Martin Luther was the first to decorate an indoor tree with candles after being inspired by the nighttime stars, researchers nowadays largely believe that story to be apocryphal. Instead, the first decorated indoor tree, its branches filled with wafers, paper roses, apples, and sugar ornaments, was documented in Strasbourg in 1605.
The Christmas tree tradition took on different forms as it spread across the country: Some celebrants followed the custom of impaling an apple on the sharpened tip; others suspended the trees upside-down from the ceiling; and still others erected a wooden pyramid as a symbolic tree.
Distinct names for these seasonal markers also emerged among the differing religious communities that were doing the celebrating. Protestants referred to their tree as Weihnachtsbaum, Tannenbaum, or Lutherbäume, while Catholics had names such as Christbaum, Lichterbaum, and Lebensbaum.
Thanks in part to the marriage of German and English royals, the Christmas tree tradition began spreading to the British Isles by the late 18th century. Charlotte of Mecklenburg-Strelitz, the wife of England’s King George III, is credited with introducing the ritual to her adopted home country by having a decorated tree at Windsor Castle in 1800.
Around this same time, the Christmas tree was beginning to find a home in the United States. While some of the nation's earliest tree activity was recorded in Georgia, it was the German-immigrant-populated regions of Pennsylvania that served as the epicenter of this growing custom.
Beyond becoming a seasonal fixture in individual households, trees served as a drawing point for communal markets and fundraisers, such as the one advertised for the 1830 Christmas bazaar of the Dorcas Society of York, Pennsylvania. By the 1840s, records noted the migration of holiday trees to the Midwest and Texas.
An 1848 Illustrated London News engraving, featuring Queen Victoria and the German-born Prince Albert celebrating Christmas with their children in front of a large tree, is believed to have boosted the popularity of the Christmas tree among the queen's subjects. Two years later, a nearly identical illustration appeared in the popular American magazine Godey’s Lady’s Book, albeit with the ostentatious display of presents and royal wealth tamped down from the original.
As told in Phillip V. Snyder's The Christmas Tree Book, the idea for a tree market was first carried out in the U.S. by upstate New Yorker Mark Carr, who quickly sold out his stock of young firs and spruces from a small strip of sidewalk in Lower Manhattan's Washington Market in 1851. Carr returned with another supply the following year, the brisk sales soon instigating a legion of copycats. By 1880, some 200,000 trees were annually being shipped to merchants at the Washington Market.
As demand began to outweigh supplies and conservation concerns emerged by the turn of the 20th century, a new commercial development helped keep the Christmas tree industry in good health. In 1901, W.V. McGalliard planted 25,000 Norway spruce on his south New Jersey lot to start the nation's first Christmas tree farm. Within six to seven years, the first crop was ready for customers to select on site, with other batches shipped to the nearby state capital of Trenton.
Meanwhile, the first instances of the mammoth tree as a public spectacle were beginning to appear. One of the earliest examples was the 45-foot hemlock that stole the show at the 1884 World's Industrial and Cotton Centennial Exposition in New Orleans, Louisiana.
The cities of Boston, Massachusetts, and Hartford, Connecticut, narrowly beat out the Big Apple by a half-hour for the co-honors of lighting the nation's first public tree on December 24, 1912. Eleven years later, President Calvin Coolidge lit the first National Christmas Tree on the Ellipse, just south of the White House. And in 1931, the workers building Rockefeller Center in New York City pooled their earnings to buy the first Christmas tree to go up at the location, although the annual tree-lighting ceremony didn’t begin until 1933.
Artificial Christmas trees have been available in the United States for nearly as long as the first real trees sold at the Washington Market. In 1882, a New York man named August Wengenroth received a patent for his tree with detachable wire branches. Around that time, Germans began producing trees made from dyed green goose feathers.
By the 1950s, industrial brush-making companies such as American Brush Machinery began adapting their products to the artificial tree industry. That decade also saw the rise of aluminum trees, a popular if somewhat dangerous product due to their incompatibility with electric lights, until the denigration of the aluminum variety in 1965's A Charlie Brown Christmas special hastened the end of that fad.
With the addition of the compound polyvinyl chloride (PVC), sturdy plastic brush trees were able to dominate the artificial market in the wake of aluminum's demise. Nowadays, the ease of maintaining an artificial tree outweighs the nostalgic yearning for a traditional live specimen for many celebrants: According to a fall 2023 survey conducted by the American Christmas Tree Association, a whopping 77% of Christmas tree customers planned to display an artificial version in their home that holiday season.
Theodore Roosevelt is known as the first conservationist President, having established national parks, wildlife refuges, and national forests during his time in the White House. It seems fitting, then, that one of the world’s most recognizable animal figures — the beloved teddy bear — was inspired by and named after the 26th U.S. President.
In November 1902, Roosevelt joined Mississippi Governor Andrew H. Longino on a hunting trip in Mississippi. On the second day of the trip, Roosevelt’s aides — including guide Holt Collier, a skilled hunter in his own right — captured a bear, tied it to a tree, and presented it to the President, who was eager to start the trip off strong with a catch. Roosevelt, however, refused to shoot the restrained bear. He may have been an avid hunter, but he found it unsportsmanlike to harm a defenseless animal.
The hunting incident attracted attention in the press. Washington Post cartoonist Clifford Berryman depicted Roosevelt refusing to shoot a small, tied bear in “Drawing the Line in Mississippi,” a cartoon that doubled as a commentary on the President’s handling of a state border dispute. The cute bear cub character became popular with Americans, and in the ensuing years, Berryman continued to use the bear as a symbol for President Roosevelt, who was commonly known as “Teddy,” short for Theodore.
Berryman’s cartoon, published on November 16, was particularly inspiring to Morris and Rose Michtom, owners of a Brooklyn toy and candy store. Morris, a Roosevelt supporter, created a stuffed bear and named it after the President. Before making more, he reportedly wrote to Roosevelt to ask his permission to name the toy “Teddy’s Bear.” Roosevelt agreed, but is said to have expressed skepticism. “I don’t think my name is likely to be worth much in the bear business,” Roosevelt wrote, according to the Michtoms, “but you’re welcome to use it.” The Michtoms’ first teddy bear stood about 2.5 feet tall, had button eyes, and was made of a golden-honey plush fabric. By 1903, the Michtoms founded the Ideal Toy Company in order to produce and sell the teddy bears that their customers loved.
At around the same time, the German toy company Steiff introduced a stuffed bear of its own. Designed by Richard Steiff, the nephew of company founder Margarete Steiff, the bear first appeared in 1902, reportedly inspired by animal sketches Richard made as a child. The bear was made of soft mohair and had a movable head and limbs. After the toy appeared at the 1903 Leipzig Toy Fair, a U.S. buyer ordered 3,000 of the stuffed bears, kicking off the company’s success overseas. By 1906, the Steiff bear was also known as the “teddy bear” (though exactly how this happened remains something of a mystery), and in 1907 alone the company produced 974,000 teddy bears.
It wasn’t long before the teddy bear’s popularity extended beyond toy stores. In 1907, composer John Walter Bratton wrote “The Teddy Bear Two-Step,” which later became known as “The Teddy Bears’ Picnic.” That same year, the toy’s popularity sparked minor controversy when a Michigan minister suggested the stuffed animals were a “menace” to the country and would take away young girls’ desire to nurture human babies.
Roosevelt, for his part, was supportive of the eponymous stuffed toy, and in 1904, he even used a teddy bear as a symbol in his campaign for reelection, printing the teddy’s likeness on buttons and displaying the Michtoms’ creations at the White House. Even after his presidency came to an end, Roosevelt’s passion for wildlife and the outdoors endured. He went on numerous expeditions, some of which, such as his 1913 trek through the Amazon’s River of Doubt, were more treacherous than others.
In 1963, the Ideal Toy Company reached out to Roosevelt’s family to offer them one of the original teddy bears. It made its way into the hands of one of Theodore’s grandsons, Kermit Roosevelt, and in 1964, the Roosevelt family donated it to the Smithsonian National Museum of Natural History.
For many of us, sending and receiving mail is a routine part of our daily lives. But this seemingly mundane task has quite an interesting history. Postal systems have existed for nearly as long as humans have communicated through writing. Egypt holds the distinction of pioneering the earliest documented state-sponsored postal service, which dates all the way back to 2400 BCE; the oldest known postal document dates to 255 BCE. Initially used by pharaohs, emperors, and kings to disseminate information across their domain, postal systems eventually broadened their scope to transmit messages among religious and educational institutions.
Relay stations were established along messenger routes to expedite the delivery of information across vast distances. As these systems evolved to become more efficient and inclusive, the opportunity to send messages via formal postal services was eventually made available to private individuals.
Since the earliest days of royal postal services, mail has been delivered via nearly all possible means: It’s been carried by couriers on foot, horse and wagon, mule, bicycle, train, steamboat, plane, motorcycle, and even dog sled. Here are more fascinating facts about the history of mail and how it has evolved over the centuries to keep us all connected.
In the 1800s, mass migration westward via the Oregon Trail, the arrival of Mormon immigrants in Utah, and the California gold rush all played a role in the need for swift and reliable mail service beyond the Rocky Mountains. The Leavenworth and Pike’s Peak Express Company, which eventually became the parent company of the Pony Express, galloped in to fulfill this need in 1859.
Covering more than 1,900 miles in just 10 days, the Pony Express ran between St. Joseph, Missouri, and Sacramento, California. With horse-changing stations posted at 10- to 15-mile intervals along the route, each rider was able to cover an average of 75 to 100 miles before passing the reins to the next.
However, it wasn’t long before the completion of the transcontinental telegraph system brought an end to the Pony Express. Although the equestrian delivery service looms large as an enduring symbol of the rugged American Old West, it really only ran from April 1860 to October 1861.
Headquartered in Bern, Switzerland, the General Postal Union was established when 22 countries signed the Treaty of Bern on October 9, 1874. The organization was formed with the intent of unifying the multitude of international postal services into a single postal territory and establishing regulations for international mail exchanges.
In 1878, the group’s name was changed to the Universal Postal Union to reflect its fast-growing global membership. Today, the UPU has expanded to 192 member countries and not only sets the guidelines for international mail exchanges, but also serves to advise, mediate, and act as a liaison in postal matters, making recommendations for growth and providing technical assistance as needed.
Starting January 1, 1913, people were able to send parcels that weighed more than 4 pounds via the U.S. Post Office’s parcel post service. Prior to that, Americans had to rely on private, unregulated express companies to transport their packages.
The service was used primarily by rural farm families to ship their goods, and around 300 million parcels were mailed during the first six months of operations, spurring box manufacturers to design boxes capable of carrying the wide variety of items consumers wanted to ship. Some of these first packages included items such as apples and eggs, but other resourceful customers took advantage by “mailing” their children, because postage for having a child chaperoned to their destination by a postal worker was cheaper than a train ticket. However, only a few children were “mailed” via the postal service before the postmaster established regulations prohibiting the practice.
Each Digit of Your ZIP Code Has a Meaning
The 1963 introduction of Zone Improvement Plan (ZIP) codes in the United States aimed to streamline the mail sorting process for faster delivery. Two-digit numerical codes originated during World War II, when post offices faced staffing shortages; the first digit denoted the city, while the second denoted the state.
Those two-digit codes evolved into the five-digit ZIP codes we know today. The first digit represents a broad geographical area, from 0 in the Northeast to 9 in the West; the following two digits designate the code of a central post office facility in that region; and the last two digits represent small post offices or postal zones.
In 1983, the ZIP+4 code was introduced to further increase the precision of mail sorting by adding a hyphen and four more numbers to the five-digit ZIP code. These additional numbers serve to identify specific delivery routes that can represent houses on one side of a street or even a particular building that receives a lot of mail.
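For readers who want to see that breakdown made concrete, here is a minimal sketch of how a ZIP+4 code splits into the parts described above. It is purely illustrative: the field names are informal labels rather than official USPS terminology, and the sample code in the last line is made up for the example.

```python
# Illustrative only: split a ZIP+4 code into the parts described above.
# Field names are informal labels, not official USPS terminology.

def parse_zip(code: str) -> dict:
    zip5, _, plus4 = code.partition("-")
    if len(zip5) != 5 or not zip5.isdigit():
        raise ValueError(f"not a five-digit ZIP code: {code!r}")
    parts = {
        "broad_area": zip5[0],             # 0 (Northeast) through 9 (West)
        "central_facility": zip5[1:3],     # central post office facility in that region
        "local_office_or_zone": zip5[3:],  # small post office or postal zone
    }
    if plus4:
        if len(plus4) != 4 or not plus4.isdigit():
            raise ValueError(f"not a valid ZIP+4 suffix: {plus4!r}")
        # ZIP+4: a specific delivery route, such as one side of a street
        # or a single building that receives a lot of mail
        parts["delivery_route"] = plus4
    return parts

print(parse_zip("12345-6789"))  # made-up code, just to show the split
```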
The prevalence of letter writing began to fade in the 19th and 20th centuries as the telegraph and telephone became widely available. Over the past three decades, the convenience of email, texting, and other forms of digital communication has caused another steep drop in the sending and receiving of “snail mail.” A 2021 survey found that 37% of Americans hadn’t written a personal letter for more than five years, while 15% of adults reported that they had never written one.
But the sending and receiving of handwritten messages continues to endure on a smaller scale; in fact, the U.S. Postal Service noted an uptick in handwritten notes and cards during the pandemic as people sought to connect during an uncertain and isolating time. In addition, volunteer organizations have used postcard-writing programs to help grassroots groups foster community and encourage people to vote.
Since the middle of the 20th century, Las Vegas has been known as the capital of the American id. Gambling has long been at the center of its appeal, as nicknames such as “Sin City” and “Lost Wages” suggest. “What happens in Vegas stays in Vegas” is the city’s well-known slogan, while others have remarked, “Las Vegas is where losers come to win, and winners come to lose.”
Rising up from the Nevada desert, the city’s built environment is so extravagant that it’s difficult to imagine a time when its spectacle did not exist, fully formed. Let’s go back and trace the origins of this uniquely American city.
Even though Las Vegas occupies a unique place in American culture, its metropolitan origin was sparked by the same thing that gave rise to many other U.S. cities: the development of the railroad. The area that includes present-day Nevada became a United States territory with the signing of the Treaty of Guadalupe Hidalgo in 1848, which ended the U.S. war with Mexico. Despite its location in the basin of the Mojave Desert, the site of what is now Las Vegas was a sort of oasis — a valley that included a water source in the form of artesian springs.
The water source was the selling point for railroad magnate and U.S. Senator William Clark. In 1902, he bought 2,000 acres of land and water rights in order to create a waypoint for the San Pedro, Los Angeles & Salt Lake Railroad he incorporated to connect those cities. The railroad line through Nevada began construction in 1904, and the following year, Clark auctioned off parcels of his land, which was located east of the railroad tracks.
Around the same time, civil engineer John T. McWilliams was attempting to build a township west of the railroad tracks. Though he was working with far less acreage than Clark — 80 acres to Clark’s 2,000 — the development provoked competition and intensified Clark’s efforts to build his township. Clark offered refunds on the $16 train fare to town in order to attract buyers. Newspaper advertisements promised, “Get into line early. Buy now, double your money in 60 days,” though accounts differ on which of the two commissioned that ad.
Ultimately, McWilliams couldn’t really compete. After all, Clark owned the water rights and far more land, and he had a major stake in the railroad. On September 5, 1905, a fire almost completely consumed McWilliams’ townsite, and ensured that the competition between the two was short-lived; development would be concentrated east of the railroad tracks. Clark formed the Las Vegas Land & Water Company with his partners, and vowed, “I will leave no stone unturned and spare myself no personal effort to do all that lies within my power to foster and encourage the growth and development of Las Vegas.”
Clark’s dramatic statement might sound like a natural lead-up to building the bombastic city we know today. But that’s not quite what happened. Over the next 25 years, Las Vegas settled into an existence as a quasi company town, with railroad and mining as the main industries and a population of about 2,300. Clark sold his share of the railroad to Union Pacific in 1921, living in retirement for four more years until his death at age 86.
The 1920s were a tumultuous decade for Las Vegas nearly from the outset. In 1921, Union Pacific cut 60 jobs in the wake of its acquisition of the railroad. President Warren G. Harding’s incoming administration also meant new appointments to the Railroad Labor Board, and the board approved a series of wage cuts for railroad workers. In the meantime, a post-World War I downturn in mining further impacted Las Vegas. Then, in what is largely viewed as a retaliatory move, Union Pacific moved its repair shops out of Las Vegas and to Caliente, Nevada, costing hundreds of jobs.
With a dire economic outlook impacting the entire state, Nevada revisited the legalization of gambling, which had been legal in the state from 1869 until 1910. With greater public support for relegalizing gambling than previous efforts had, a bill to legalize “wide open” gambling passed in both the state Assembly and Senate, and on March 19, 1931, Governor Fred Balzar signed it into law. That same year, divorce laws were loosened to permit anyone with a six-week residency in the state to legally divorce. And just one year earlier, construction had begun on the Hoover Dam, bringing an influx of thousands of workers to the area, many of whom would take the short trip to Las Vegas to try their luck with the newly legalized games. With this confluence of events, the Las Vegas we know today began to take shape.
Organized Crime and the Strip
The decriminalization of gambling made Las Vegas an attractive destination to experienced gambling operators, some of whom were running criminal enterprises in other states. One such figure was the archetypal crooked cop Guy McAfee, a Los Angeles vice squad officer who fled to Las Vegas to escape prosecution for running gambling and prostitution rings — the exact vice he was supposed to be policing. Arriving in town in 1938, he bought the Pair-O-Dice Club on Highway 91 and renamed it the 91 Club, delaying its grand opening to 1939 in order to coincide with Ria Langham's six-week residency for divorcing Clark Gable.
McAfee was responsible for two enduring pieces of Las Vegas culture: He opened the Golden Nugget on Fremont Street, ushering in an era of grandiose casinos, and he is also credited with nicknaming Highway 91 “the Strip.” The Golden Nugget opened in 1946, about a year after the Nevada Legislature created the first casino license stipulating a 1% tax rate on gross gaming revenue in excess of $3,000.
The lucrative gaming industry began to attract heavier organized crime players beyond McAfee. Benjamin “Bugsy” Siegel arrived in Las Vegas intending to create a base of operations for the notorious Syndicate, which, at the time, was led by Meyer Lansky during a period when Salvatore “Lucky” Luciano was in prison. Using funds from the Syndicate, Siegel became the primary stakeholder in the construction of a casino on Highway 91 to rival the Golden Nugget. Siegel wanted it to depart from the Old West aesthetic of most casinos of the time, and instead be patterned after the tropical resorts the Syndicate backed in Havana, Cuba. He dubbed it the Flamingo, and hoped to set a new standard for opulence in line with his own worldview. “Class, that’s the only thing that counts in life,” he once said. “Without class and style, a man’s a bum, he might as well be dead.”
Lavish attention to detail and poor business management contributed to enormous cost overruns, and bad luck compromised the Flamingo’s opening and its ability to quickly recoup costs. Maybe because of the money, maybe for a number of other possible motives that are debated to this day, Bugsy Siegel was gunned down while reading the newspaper in a Beverly Hills mansion on June 21, 1947. The murder was a national sensation, covered in tabloids and TIME magazine alike. LIFE magazine ran a gruesomely iconic full-page photo of the crime scene in its article about the murder. The case, Crime Case #46176 in the Beverly Hills Police Department, is still open and unsolved.
Tellingly, other Syndicate bosses took over the Flamingo within minutes of Siegel’s murder. The resort eventually became profitable — so much so that the Syndicate began building more casino-resorts on the Strip. Organized crime had taken hold in Las Vegas, and the era of the swanky, entertainment-oriented hotel-casino was born. The mob invested in more casinos; the Sands Hotel and Casino opened in 1952 and brought in the “Rat Pack” (Frank Sinatra, Dean Martin, Sammy Davis Jr., Joey Bishop, and Peter Lawford) for a high-profile residency. The Dunes, Riviera, and New Frontier opened in 1955; the Tropicana followed in 1957, and the Stardust opened a year later. Each had ties with organized crime syndicates from around the country.
Despite the sensational murder of Bugsy Siegel, the mob’s involvement in casinos, hotels, restaurants, and other Vegas businesses expanded, and more gangsters arrived in the city throughout the 1960s and ’70s. But in the late ’60s, billionaire Howard Hughes bought a series of mob-connected casinos — the Desert Inn, the Sands, Castaways, Frontier, the Silver Slipper, and Landmark — that shifted the balance of casino ownership in the city from mob-connected to corporate-owned. In 1969, the Nevada Legislature promoted corporate ownership of casinos, and in 1970, Congress passed the Racketeer Influenced and Corrupt Organizations Act (commonly known as RICO), which aided the U.S. Justice Department in cracking down on organized crime.
During the ’70s, high-profile car and restaurant bombings between rival gangs unsettled the city to the point of attracting the attention of the FBI. The Nevada Gaming Commission and the Nevada Gaming Control Board refocused on organized crime, and Governor Mike O’Callaghan made it a point of emphasis. A RICO case focused on mobster Anthony Spilotro and Frank “Lefty” Rosenthal, whose connections ran from Chicago mob families to others throughout the Midwest. By 1981, Spilotro’s operations had been broken up, and the mob was all but finished in Las Vegas.
The Rise of the Corporate Mega-Resort
Billionaire businessman Kirk Kerkorian bought the Flamingo in 1967, and in 1969, he opened the massive International Hotel. It was the largest hotel in the country, with 1,500 rooms and a 4,200-seat showroom. For its grand opening, he brought in Barbra Streisand, and then followed that by bringing in Elvis Presley for a famed residency — 837 consecutive sold-out performances over seven years — that set an enduring record. The same year, Kerkorian bought Hollywood’s venerable MGM Studio, and set out to build a themed resort in Las Vegas based on the production house.
With all of the buying and building, Kerkorian incurred enormous costs, so to help balance the ledger, he sold the Flamingo (and later the International Hotel as well) to the Hilton Hotel Corporation. The success of the Flamingo Hilton caught the attention of other major hotel corporations, such as Sheraton and Holiday Inn, and they too began opening casino-hotels in the city. In 1973, Kerkorian opened the MGM Hotel-Casino, which eclipsed the International Hotel in grandeur, boasting 2,100 rooms, eight restaurants, two showrooms, and the (at the time) world’s largest casino. It was the largest resort in the world, and Las Vegas’ first mega-resort.
During the rest of the ’70s and into the ’80s, development on the Strip stagnated. But Las Vegas itself was growing: From 1985 to 1995, the city’s population nearly doubled, increasing to around 368,360. Using junk bonds in 1989, developer Steve Wynn reinvigorated the Strip by building the most ostentatious mega-resort yet: the Mirage Resort and Casino. A 29-story Y-shaped tower with 3,044 rooms, a 1,500-seat showroom, and waterfalls, it also had a simulated volcano that would “erupt” every 15 minutes after sundown. That same year, Kerkorian announced plans for a new MGM Grand, which was completed in 1993 and took the mantle as Las Vegas’ largest casino, with even more over-the-top touches including a lion zoo and heavyweight boxing arena.
An Entertainment Capital
The 1990s were a transitional era in Vegas, as many of the midcentury casino icons were razed in favor of constructing new family-friendly mega-resorts, representing a commitment to broader entertainment tourism rather than gambling alone. The Sands was imploded and replaced by the Venetian; similarly, the Dunes was replaced by the Bellagio, and the Hacienda was replaced by Mandalay Bay Resort. In true Las Vegas fashion, each implosion was a spectator event. The Hacienda implosion was even scheduled for 9 p.m. on December 31, 1996, in order to coincide with the new year on the East Coast. Most of the casino implosions were televised, and the videos can still be viewed on local TV news channel websites.
Today, Las Vegas continues to broaden its scope. Professional sports leagues have ended their historical aversion to placing teams in the city, as seen by the NHL awarding the city the expansion Vegas Golden Knights in 2017, the WNBA’s San Antonio Stars relocating to Las Vegas and becoming the Aces in 2018, and the NFL’s iconic Raiders franchise relocating to Las Vegas in 2020. Major League Baseball’s Athletics are likely to follow. Las Vegas is now known as a city with an excellent fine-dining scene, with a number of chefs named semifinalists for the 2024 James Beard Awards. And the only place in town the mob exists now is in a museum.
In 1962, Lawrence Herbert founded Pantone to solve a problem he noticed while working at a commercial printing company: There was no standard language to describe different shades of color. The printer he worked for specialized in color charts for the cosmetics and fashion industries, but there was no easy way to match the specific hues that designers needed. For instance, the printer created color swatches for customers to use to match their skin tones with pantyhose, yet ink manufacturers defined shades such as beige and cream differently. Recognizing the need for a universal language of color, Herbert set out to create a graphic standards system that could be used for color matching worldwide.
Herbert drew on his chemistry background to hand-mix his own combinations of color tones, developing a series of shades that were each given a unique name — descriptors such as “Greenery” or “Tangerine Tango” — and a number (15-4020, 19-1664, and so on). The result was the Pantone Matching System, which was presented as a book of swatches that fanned out to showcase a rainbow of standardized colors. A name and number combination would consistently yield the same results because each color tone contained an exact ink formula. By the 1970s, Pantone had sold more than 100,000 swatch books and expanded into the industrial, plastics, and fashion markets. The Pantone process was digitized in the 1980s, and the Pantone Color Institute was founded in 1986 to educate designers about color, the way it’s described, and, in more recent years, the psychology that helps determine the Pantone Color of the Year.
In 1999, the Pantone Color Institute chose Cerulean as the Color of the Year. Meant to represent the new millennium, Cerulean was inspired by the blue sky and the energy gleaned from spending an afternoon outside. It also represented the limitlessness of a new century — a clean, bright shade that was meant to inspire, but familiar enough not to overwhelm. The Color of the Year project was part of the institute’s larger goal of linking color and culture. Though it was only intended as a one-off event to mark the millennium, the concept took off, and in 2023, the Pantone Color of the Year celebrated its 25th anniversary.
Choosing the Pantone Color of the Year is a long and thought-out process. Each year, a team of global color experts is tasked with applying the Pantone Color Institute’s lessons on color psychology to draw parallels between upcoming trends in entertainment, fashion, design, and travel, and the feelings evoked by particular hues. Appointees spend the year in close communication to select a color that embodies the year’s cultural or political landscape.
True Red was chosen in 2002 to represent the devastation of the 9/11 attacks, while 2008’s Blue Iris was meant to inspire calm amid the turmoil brought on by economic recession. In 2016, Pantone announced two colors: Rose Quartz and Serenity, a dusty pink and gentle blue thought to bring on peace, tranquility, and a reprieve from the pandemonium of modern life. The institute again chose two colors in 2021, amid the global pandemic: Illuminating and Ultimate Gray represented sunny optimism reinforced by resilience and stability.
In 2024, Pantone picked “Peach Fuzz” as the Color of the Year to represent themes of nurturing, connection, and the enrichment of the body, mind, and soul, the latest of the 25 colors the institute has chosen to reflect the last quarter century.
The humble calendar is one of civilization’s oldest staples. The earliest means of measuring days and weeks dates back 10,000 years, and timekeeping techniques adopted by the ancient Babylonians, Egyptians, and Romans slowly evolved into the calendar we use today. Yet the emphasis here is on “slowly.” The evolution from charting moon phases to separating seasons to measuring fiscal years was one of controversy and chaos across centuries. Still, humans never stopped working to perfect how we mark the passage of time. Here’s a brief look at the fascinating history of calendars, just in time to start a new one.
The First Known Calendar Is From Prehistoric Scotland
In 2013, British archaeologists discovered what they consider the world’s oldest calendar, dating back to around 8000 BCE. The prehistoric calendar, located at Warren Field in Scotland, consists of 12 pits believed to have contained wooden posts representing months of the year. Positioned to chart lunar phases, the pits are aligned with the southeast horizon and were likely used by hunter-gatherer societies to track seasons. The site precedes Stonehenge by several thousand years.
Though the exact purpose of Stonehenge remains a mystery, archaeologists believe that in addition to being a burial site, the prehistoric monument may have been a solar calendar. Keeping track of months and years in ancient societies relied on charting the stars, sun, and lunar patterns (and the subsequent weather they’d bring). Stonehenge, built around 3000 BCE, consists of stones and archways arranged in a circular pattern, and experts believe time was measured by the way the sun, moon, and stars lined up with these markers. Archaeological evidence suggests lunar and solar eclipses were also observed, as were the winter and summer solstices.
The Babylonian Calendar Had a 12-Month Year and Seven-Day Week
The Babylonians, a civilization that began in the Fertile Crescent of ancient Mesopotamia (modern-day Iraq) around 4000 BCE, used lunar patterns to determine the length of the year. They divided the calendar into 12 months based on the cycle of the moon, starting with the first sighting of the crescent moon. This put the year at 354 days, slightly shorter than a solar year. (A lunar month is about 29.5 days long, so 12 of them fall roughly 11 days short of a solar year.) To sync the lunar and solar calendars, the Babylonians adopted a system known as intercalation, which allowed them to add an extra month when necessary. They also divided each month into seven-day weeks, based on (and named for) the seven celestial bodies they were able to observe: the sun, the moon, and the five nearest planets.
Like many ancient agricultural societies, the ancient Egyptians measured time according to the planting and harvest seasons. The Nile river flooded each year between the months of what is now May and August, and this annual flooding was crucial to Egyptian society, as it determined the success of that year’s crops. As a result, the Egyptians divided the year into three seasons, each related to the river’s status and the agricultural phases associated with it: Akhet (translated as Flood or Inundation), Peret (Emergence or Going Forth), and Shemu (Low Water, Deficiency, or Harvest).
Our Modern Calendar Was (Partially) Created by Julius Caesar
In 46 BCE, Julius Caesar introduced the Julian calendar as a means of reforming the existing and flawed Roman timekeeping system. The Roman Republic’s calendar (which referred to the first day of each month as kalends, the origin of the word “calendar”) not only miscalculated the length of a solar year, but also could be manipulated to control political power, which created further inaccuracies. Caesar’s response was a new system based on calculations by the Greek astronomer Sosigenes of Alexandria, who determined each year consisted of 365.25 days. To account for that .25, the Julian calendar added an extra day every fourth February: leap year. Though the calendar was still imperfect, much of what Caesar put in place remains in use today — including, of course, the name of the seventh month, July.
The Gregorian Calendar Fixed Caesar’s Math
For all its improvements, the Julian calendar still miscalculated the length of a solar year by 11 minutes. That may not seem like much, but the discrepancy added up over time, and by 1582 CE, the months of the calendar and the seasons were misaligned. This concerned Pope Gregory XIII, who noticed that Easter was becoming increasingly out of sync with the spring equinox — a problem for the Catholic Church. In response, the pope launched the Gregorian calendar on October 4, 1582, which set the following day as October 15, jumping the calendar ahead 10 days. Chaos ensued: Catholic countries adopted the reformed calendar, but many Protestant nations waited more than a century to get on board. The British Empire, including its colonies in America, waited until 1752 to switch to the Gregorian calendar, and Russia didn’t adopt the new calendar until as late as 1918, after the Russian Revolution.
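As a rough check on those numbers, the arithmetic lines up: the reform aimed to restore the equinox to where it had stood at the time of the Council of Nicaea in 325 CE (a benchmark the article itself doesn't spell out), and an 11-minute annual error accumulated over the intervening centuries comes to roughly the 10 days that were skipped:

```latex
% Rough drift estimate, assuming the 325 CE Council of Nicaea as the benchmark year.
\[
1582 - 325 = 1257 \text{ years}, \qquad
1257 \times 11 \text{ min} \approx 13{,}800 \text{ min} \approx 230 \text{ hours} \approx 9.6 \text{ days}
\]
```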
France Had a Calendar With 10-Day Weeks During the Revolution
In 1793, during the French Revolution, the revolutionary government attempted to create a new secular calendar for the French Republic, keen to do away with Christian associations as part of its break from the ruling elite. Known as the French Republican calendar, it set each month at 30 days, divided into three weeks consisting of 10 days each. The calendar never took off, however. It was abandoned for its Gregorian counterpart on January 1, 1806, after Napoleon Bonaparte became emperor.
The concept of miniature dwellings traces back to ancient civilizations, when Egyptians placed small clay replicas of their houses and belongings in and around burials. These models were intended to provide the comforts of home to the deceased in the afterlife. Although the tiny dwellings we know as dollhouses today are quite different from these ancient versions, their history also includes purposes other than play. Over the last 500 years, dollhouses have evolved from elaborate displays for adults, to useful household teaching tools, to enduring objects of imagination and aspiration for children.
The earliest known dollhouses were made in the 16th century, primarily in Germany, and later in Holland and England. Known as a “dockenhaus” (miniature house), “cabinet house,” or “baby house” (because of the size, not the intended audience), these handcrafted items were not initially made for children to play with — they served as display cases for wealthy adults to fill with miniature furniture, fabrics, and artwork that reflected their own taste and lifestyle.
One of the earliest recorded examples of a dollhouse is the Munich Baby House. Commissioned by Albert V, the Duke of Bavaria, in the 1550s, the piece was made by skilled artisans in the shape of a royal residence (instead of a wooden cabinet like the dominant style that soon followed). Though the Munich Baby House was lost in a fire in the 1600s, Albert V had the object detailed in an inventory of his household goods. Historians believe that the Munich Baby House was likely made for the duke’s entertainment, but some suggest it may have been built as a gift for his daughter, which would make it an early example of a dollhouse for children.
Throughout the 17th century, dollhouses remained elaborate and whimsical showcases of a family's wealth; they were most often constructed in cabinets with doors that opened and closed like a china cabinet. In the early 1600s, however, baby houses also took on a more practical purpose. Nuremberg houses and Nuremberg kitchens, named for their primary place of manufacture in Germany, emerged as inspirational and educational tools to motivate and teach young women how to decorate and care for a household.
These types of baby houses were less ornamental than their predecessors, typically made entirely out of metal and sometimes consisting of just a kitchen. But they were no less meticulous, featuring tiny handcrafted brooms, kettles, copper cooking pots, and mini masonry hearths at the heart of the kitchen. Though they’re thought of as being instructional in nature, Nuremberg houses were also used for play, primarily by girls, and often during the Christmas holidays. This trend played another important role in the evolution of dollhouses: A well-preserved Nuremberg house from 1673 is another early example of a dollhouse built to look like a family home.
The Victorian Dollhouse
By the 18th century, baby houses had become popular in England. These structures commonly had detailed and realistic facades, including doors and windows, informing what we commonly think of as the classic Victorian dollhouse. They were often modeled after the owner’s home, and although they still functioned as displays of opulence, caring for them and decorating them also became a beloved hobby for women.
Until the mid-18th century, dollhouses were unique, one-of-a-kind creations. But with the innovations of the Industrial Revolution, the once custom-made dollhouses became much easier to produce en masse. This dovetailed with changing ideas of childhood in the early 19th century, when kids were no longer expected to weather the hardships of adulthood. Around this time, dollhouses started to be used more frequently as toys.
One of the first mass-produced dollhouses available in the United States came courtesy of the Bliss Manufacturing Company. Bliss began producing dollhouses in the 1890s, even as the U.S. also imported the tiny dwellings and miniature furnishings from Germany. Despite becoming more mainstream in the early 1900s, dollhouses didn’t become affordable for most families until after World War II. The postwar economic boom, along with increased manufacturing materials and abilities, made the toys fixtures in playrooms throughout America. Mattel released Barbie’s first Dreamhouse in 1962, just three years after the Barbie doll’s debut, and by the 1970s, other major toy companies including Fisher-Price, Playmobil, and Tomy were producing popular miniature toy houses as well.
In recent decades, dollhouses have evolved into collector's items and continue to be a passionate hobbyist pursuit. Over the past few years, social media communities have developed around a renewed love of miniature dwellings. Today, the toy also reflects a cultural shift — many people consider the meticulous structures a form of escapism and a source of solace and joy in a world with issues that often feel beyond individual control. Dollhouses continue to be cherished toys that exist at the intersection of craftsmanship, culture, and creativity.
From its very first episode in 1969, Sesame Street captivated the imaginations of America’s youth, using research-based programming to reinvent children’s television. Created by Joan Ganz Cooney and Lloyd Morrisett in the late 1960s, the show aimed to not only entertain, but educate — and it did just that. It’s been called the “largest and least-costly [early childhood] intervention that’s ever been implemented” in the United States.
Through its diverse characters and cast members, the show reflected the real world, and its fast-paced storytelling, repetition, and humor helped impart valuable life lessons. Sesame Street quickly became more than just another TV show: It’s been a trusted companion for generations of families. Read on to learn more about the history of the show that, through its commitment to inclusivity and social change, has left a profound mark on society — and made Big Bird a star.
The seed that grew into Sesame Street was planted at a fateful Manhattan dinner party hosted by Joan Ganz Cooney, a producer with a background in education. At the time, Cooney was working for WNET/Channel 13, where she produced public affairs programming, including an Emmy Award-winning documentary about poverty in America. The guest list at the dinner party included Lloyd Morrisett, vice president of the nonprofit Carnegie Corporation. As the conversation turned to television, Morrisett shared that his young daughter was so mesmerized by TV that she would sit and stare at nothing but the test pattern. Morrisett, who was also a psychologist, wondered whether the medium could be used to teach children.
Inspired by the conversation, Cooney went on a three-month trip around the country to interview educators, psychologists, television producers, and more. The result was a study called “The Potential Uses of Television in Preschool Education.” It proposed a new kind of children’s television program — Cooney envisioned a fast-paced format similar to a sketch comedy show. She wanted to foster a strong connection between the show’s characters and the audience. And most of all, she wanted it to teach the young minds that would be watching, especially kids from lower-income and marginalized communities who often slipped through the cracks.
The yet-unnamed show went into development at the newly formed Children’s Television Workshop (now known as the Sesame Workshop). Morrisett helped raise the funds to make it happen, and in 1968, Cooney hired Jon Stone from the children’s show Captain Kangaroo to produce and direct the project. That summer, Stone brought a former colleague, a puppeteer named Jim Henson, to one of Cooney’s workshops. Together, Stone and Henson produced a pitch reel for the show featuring some of Henson’s Muppets, including Kermit the Frog and Rowlf the Dog. “Hey, Rowlf, why don’t you call your show ‘Sesame Street’?” Kermit says in the reel. “You know, like ‘Open Sesame’? It kind of gives the idea of a street where neat stuff happens.”
“Sesame Street” Is Born
Sesame Street debuted on November 10, 1969, and its brownstone-lined city street set and Henson’s colorful puppets became an immediate fixture in American homes. Kids and parents loved it; critics largely did too, even though there were some questions about whether a show tailored for short attention spans might lead to a generation that lacked focus. (That issue is up for debate, but many studies over the years have found that watching the show helped prepare children for school.) Additionally, despite Sesame Street’s legacy as a diverse and inclusive show, it initially faced criticism for its lack of representation of Latino people in its first couple of seasons. The show listened to its audience, and in its third season, added the characters Luis (Emilio Delgado) and Maria (Sonia Manzano), who became beloved cast members for decades. The show continues to embrace diversity and inclusion to this day.
The biggest stars of Sesame Street were not the human cast members, however, but Henson’s beloved Muppets. The distinctive puppets had personalities and backstories all their own, and were created with specific educational goals in mind. Big Bird, for instance, is a 6-year-old preschooler, and was designed to help children develop reasoning skills. Bert and Ernie represent cooperation, and Cookie Monster, well, Cookie Monster just likes cookies — a relatable motive for any preschooler. New puppets have been added throughout the years — the Count arrived in season 4 to teach math skills, while the empathetic Elmo became a mainstay in 1980 — but most of the core characters remain fixtures on the show after more than 50 years.
During the show’s development, psychologists advised not to have the Muppet characters interact with the human cast; they believed mixing fantasy and reality would cause confusion. But test screenings showed that scenes featuring only human cast members scored low. In a last-minute attempt to improve the show before it debuted, the team went against the professional advice and remade the puppets so they could walk and talk with the human cast. Months before Sesame Street went on air, the show created what author Malcolm Gladwell has called “the essence of Sesame Street — the artful blend of fluffy monsters and earnest adults.”
Since its debut in 1969, Sesame Street has embraced music as a powerful tool for both education and entertainment. The show’s musical segments are not mere jingles; many have become iconic songs and cultural touchstones. “Rubber Duckie,” “C Is for Cookie,” and “Sunny Day” (the show’s theme song, also known as “Can You Tell Me How to Get to Sesame Street?”) have transcended generations. Musical guest appearances by stars from Stevie Wonder to Johnny Cash, Bruno Mars to the Chicks, and Ed Sheeran to Destiny’s Child, have enriched the show’s musical repertoire and helped it appeal to a broader audience.
Lasting Impact
Now in its 54th season, Sesame Street has become part of the American cultural fabric. As of 2023, it has won more than 216 Emmy Awards, 11 Grammy Awards, and two Peabody Awards. It is broadcast in around 150 countries, and boasts more than 30 international versions. Even as one of the longest-running TV shows in U.S. history, the pioneering series remains iterative and collaborative, reflecting changes in the world not just in its story lines and characters, but also in the show’s format. In recent years, the show has introduced a character with autism, as well as one experiencing housing and food insecurity; it now also integrates modern technology, including a smartphone character named Smartie. Through all of the changes, Sesame Street continues to create an engaging and playful imaginative environment while staying true to its educational mission.