Theodore Roosevelt is known as the first conservationist President, having established national parks, wildlife refuges, and national forests during his time in the White House. It seems fitting, then, that one of the world’s most recognizable animal figures — the beloved teddy bear — was inspired by and named after the 26th U.S. President.
In November 1902, Roosevelt joined Mississippi Governor Andrew H. Longino on a hunting trip in Mississippi. On the second day of the trip, Roosevelt’s aides — including guide Holt Collier, a skilled hunter in his own right — captured a bear, tied it to a tree, and presented it to the President, who was eager to start the trip off strong with a catch. Roosevelt, however, refused to shoot the restrained bear. He may have been an avid hunter, but he found it unsportsmanlike to harm a defenseless animal.
The hunting incident attracted attention in the press. Washington Post cartoonist Clifford Berryman depicted Roosevelt refusing to shoot a small, tied bear in “Drawing the Line in Mississippi,” a cartoon that doubled as a commentary on the President’s handling of a state border dispute. The cute bear cub character became popular with Americans, and in the ensuing years, Berryman continued to use the bear as a symbol for President Roosevelt, who was commonly known as “Teddy,” short for Theodore.
Berryman’s cartoon, published on November 16, was particularly inspiring to Morris and Rose Michtom, owners of a Brooklyn toy and candy store. Morris, a Roosevelt supporter, created a stuffed bear and named it after the President. Before making more, he reportedly wrote to Roosevelt to ask his permission to name the toy “Teddy’s Bear.” Roosevelt agreed, but is said to have expressed skepticism. “I don’t think my name is likely to be worth much in the bear business,” Roosevelt wrote, according to the Michtoms, “but you’re welcome to use it.” The Michtoms’ first teddy bear stood about 2.5 feet tall, had button eyes, and was made of a golden-honey plush fabric. By 1903, the Michtoms founded the Ideal Toy Company in order to produce and sell the teddy bears that their customers loved.
At around the same time, the German toy company Steiff introduced a stuffed bear of its own. Designed by Richard Steiff, the nephew of company founder Margarete Steiff, the bear first appeared in 1902, reportedly inspired by animal sketches Richard made as a child. The bear was made of soft mohair and had a movable head and limbs. After the toy appeared at the 1903 Leipzig Toy Fair, a U.S. buyer ordered 3,000 of the stuffed bears, kicking off the company’s success overseas. By 1906, the Steiff bear was also known as the “teddy bear” (though exactly how this happened remains something of a mystery), and in 1907 alone the company produced 974,000 teddy bears.
It wasn’t long before the teddy bear’s popularity extended beyond toy stores. In 1907, composer John Walter Bratton wrote “The Teddy Bear Two-Step,” which later became known as “The Teddy Bears’ Picnic.” That same year, the toy’s popularity sparked minor controversy when a Michigan minister suggested the stuffed animals were a “menace” to the country and would take away young girls’ desire to nurture human babies.
Roosevelt, for his part, was supportive of the eponymous stuffed toy, and in 1904, he even used a teddy bear as a symbol in his campaign for reelection, printing the teddy’s likeness on buttons and displaying the Michtoms’ creations at the White House. Even after his presidency came to an end, Roosevelt’s passion for wildlife and the outdoors endured. He went on numerous expeditions, some of which, such as his 1913 trek through the Amazon’s River of Doubt, were more treacherous than others.
In 1963, the Ideal Toy Company reached out to Roosevelt’s family to offer them one of the original teddy bears. It made its way into the hands of one of Theodore’s grandsons, Kermit Roosevelt, and in 1964, the Roosevelt family donated it to the Smithsonian National Museum of Natural History.
The 1960s were some of the most significant years in American history. The decade saw the Civil Rights Movement and a rising counterculture that reimagined the shape of the American social fabric. Pop music exploded like never before with the British Invasion led by the Beatles and Rolling Stones, but the ’60s were also an intense era of war and political violence.
The decade’s most monumental moments tend to be widely covered, and the sheer number of historic events during this time almost creates the impression that every moment was imbued with turbulence. But while the tumult of the decade played out on the evening news in homes across America, many people were still living normal everyday lives — albeit lives that looked quite different from our modern lifestyle. The following numbers offer a snapshot of day-to-day life in 1960s America.
42% of Adults Were Smokers
Smoking was still widespread in the middle of the 20th century. The smoking rate in the U.S. reached a peak of 47% of adults (including 50% of doctors!) by the end of 1952. Though cigarette sales declined somewhat in 1953 and 1954 amid growing health concerns, the introduction of the filtered cigarette created a rebound. Through the early years of the 1960s, the smoking rate held steady at 42% of adults. On January 11, 1964, Surgeon General Luther L. Terry published the first report of the Surgeon General’s Advisory Committee on Smoking and Health, a landmark event that brought the link between smoking and disease front and center in the American consciousness. Smoking has been on an overall downward trend ever since: As of 2021, just 11.5% of adults smoked.
In 1966, the national average for the price of a men’s haircut was $1.95 ($19.03 in today’s currency). For women, it was $2.16 ($20.79 today) — unless an extravagant “permanent wave” was desired, which cost an average of $12.15 ($118.57 today). The permanent wave (or “perm”) was a multi-step process to make long-lasting curls, which required additional materials and could take six to eight hours to complete, hence the premium cost. Chicago was the most expensive city for a men’s haircut; the average price there was $2.48 ($24.20 today), while Dallas was the least expensive at $1.79 ($17.47 today). But interestingly, Chicago was the cheapest city for women’s haircuts — $2.08 ($20.30 today) for a conventional cut, and $11.27 ($109.98 today) for the permanent wave. The most expensive city for women was Washington, D.C., at $3.31 and $18.19 ($32.30 and $177.51, respectively).
At the beginning of the 1960s, marriage was still a fairly unquestioned rite of passage into adulthood. The median age for brides in 1960 was 20.1, while the median age for grooms was 24.2, and the percentage of adults who were married was a large majority: 72% in 1960. But the decade brought about sweeping social changes in attitudes toward divorce, sexuality, and parenthood, creating a downward trend in marriage that persisted into the 21st century. Data collected in 2023 shows that the current median age at first marriage is 28 for women and 30 for men, and 53% of American adults are married.
A single dollar bill had a lot of buying power throughout most of the 20th century. The national average price for most grocery staples in the ’60s was less than a buck: A 5-pound bag of flour was 61 cents; a dozen eggs cost 66 cents; a pound of ground beef (which was broadly referred to as “hamburger” even when not formed into a patty) was 55 cents; and a box of generic corn flakes was 32 cents. In today’s dollars, these prices equate to $5.95, $6.44, $5.37, and $3.12, respectively. With the notable exception of eggs (which have infamously inflated in cost since 2020), these equivalent prices are right in line with what we’d expect to see at a grocery store today.
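The “today’s dollars” figures used throughout this article come from a simple inflation adjustment: multiply the 1960s price by the cumulative change in consumer prices since then. As a hedged illustration (not part of the original article), the sketch below uses a multiplier of about 9.76, which is the factor implied by the grocery figures quoted above; the exact number depends on which CPI years are compared.

```python
# Minimal sketch of the inflation adjustment behind the "today's dollars" figures.
# The 9.76 multiplier is an assumption inferred from the grocery prices quoted above;
# official CPI tables would give slightly different values depending on the years compared.
INFLATION_MULTIPLIER = 9.76  # approx. mid-1960s dollars -> 2024 dollars

grocery_prices_1960s = {
    "5-pound bag of flour": 0.61,
    "dozen eggs": 0.66,
    "pound of ground beef": 0.55,
    "box of corn flakes": 0.32,
}

for item, price_then in grocery_prices_1960s.items():
    price_now = price_then * INFLATION_MULTIPLIER
    print(f"{item}: ${price_then:.2f} then, roughly ${price_now:.2f} today")
```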
A Three-Minute Phone Call to Someone Across the Country Cost $2
Though many aspects of daily life are more expensive today than they were in the past, phone service is one item that’s actually more affordable today than it was in the 1960s. During most of the landline era, phone calls to different regions incurred long-distance charges, based on the duration and distance of the call. In 1960, the cost for a three-minute call from New York to San Francisco was $2.25; it dropped to $1.75 by the end of 1967. With inflation, the $2 average for that three-minute call would be the equivalent of $19.89 today. A lengthier conversation could easily incur enough long-distance charges to surpass the cost of an entire month of cellphone service today.
For most of the 20th century, the typewriter was the quintessential office item. In 1946, leading manufacturer IBM set out to improve the typewriter design that had been standard since the late 19th century. IBM engineer Horace “Bud” Beattie developed a mushroom-shaped type element to replace the basket of individual typebars that manual typewriters were equipped with; it solved the problem of typebars jamming if keys were pressed in too rapid succession. Beattie and a team of engineers refined the “mushroom printer” to a spherical shape about the size of a golf ball, which allowed for a pivoting motion that made the page more stable and less prone to small shifts that could result in unwanted slanted text.
In 1954, the team at IBM developed a prototype of the new design. The type sphere was designed to be easily replaceable, allowing for switching out typefaces, thus giving the machine its name: Selectric. The Selectric was capable of printing 186 words per minute and accommodating keystrokes as quick as 20 milliseconds apart with no risk of jamming. It included ergonomic keys, and was available in eight color combinations. It took seven years from the completion of the prototype for the product to go to market, but when the Selectric went on sale on July 31, 1961, the buzz around it was instant. First-year sales hit 80,000, topping projections by 400%. For the rest of the decade and beyond, it became the new standard in offices, comprising 75% of all typewriters sold, and eventually a 94% market share for electric typewriters.
The British nobility is divided into tiers or ranks, known as the peerage. The roots of this hierarchical system date back around a thousand years; it began to gain a defined structure (as with many things in British history) after William I conquered England in 1066.
The peerage has five ranks: baron, viscount, earl, marquess, and duke, in ascending order. And within each tier, superiority is given to the holder of the oldest peerage. So, for example, the Duke of Devonshire is more senior than the Duke of Marlborough because the former title was created in 1694, eight years before the latter. While many titles are hereditary, it’s important to note that fewer than 90 peerages can be inherited by a female heir (in most cases the title would become extinct if there was no male heir). It’s a subject understandably under scrutiny by activists and some members of Parliament. Peerages are awarded through legal documents known as letters patent, which officially bestow a title in the name of the monarch.
Here is an introduction to the five tiers of peerage, from the lowest rank of baron to the highest title of duke.
The word “baron” — which possibly came from an Old German word meaning “man” — first appeared in English texts in the 13th century. It became part of the peerage system in 1387, when Richard II created the first formal baron by making John Beauchamp de Holt the Baron of Kidderminster. Further barons were appointed, all of whom were expected, when summoned, to attend council or Parliament. In Scotland, barons are known as Lords of Parliament. If a woman holds the rank, or is the wife of a baron, she is called a baroness. Currently, there are 426 hereditary barons and Lords of Parliament and nine hereditary baronesses and Ladies of Parliament in the United Kingdom — making it the most populated of all five peerage ranks.
Viscount is the fourth rank of the British peerage system. The word comes from the Old French “visconte,” meaning the deputy or lieutenant of a count. (Despite having numerous counties, the United Kingdom has no counts. Historians disagree on why this is. Some have argued that the word “count” was rejected because it sounded too similar to a rather vulgar word in the English language, while others simply say it never gained traction because the older “earl” was already in use.) The rank of viscount was introduced in England in 1440, when King Henry VI gave John, Lord Beaumont the title of Viscount Beaumont, giving him precedence over all barons. Today, there are 115 viscounts in the British peerage. The oldest existing title — and therefore the highest ranking — is that of Viscount Hereford, created in 1550. A woman holding the rank or the wife of a viscount is known as a viscountess. Viscounts and viscountesses are formally addressed as “lord” or “lady,” respectively.
The rank of earl is the oldest of all the titles in the English peerage. The word has its origins in the Scandinavian “jarl,” which became “eorl” in the Anglo-Saxon tongue. It first appeared in England during the reign of King Canute (or Cnut), who ruled from 1016 to 1035. It was the highest title available to the British nobility for some three centuries, until the creation of the duchy of Cornwall and with it, the title of duke. There is no female equivalent to the title of earl (such as “earless,” which could strangely imply someone with no ears), so female earls are known as countesses. Currently, there are 191 earls and only four countesses in their own right (versus the wives of earls) in Britain.
The title of marquess comes from the French “marquis,” meaning “march,” in reference to the marches (borders) between Wales, England, and Scotland. The earls and barons guarding these marches were known as marquesses, initially without any implication that they were superior in any way to their peers of similar rank. The title was formalized in 1385 when King Richard II made Robert de Vere, the ninth Earl of Oxford, the Marquess of Dublin. The title took precedence over that of earl, which caused great controversy at the time, resulting in the marquessate being revoked in 1386. It wasn’t until 1443, when Edmund Beaufort was given the title of Marquess of Dorset, that the rank secured a permanent place in the peerage. There are only 34 marquesses in Britain today, the premier — or highest ranking — being the Marquess of Winchester, created in 1551. (Marquesses that were created earlier either became extinct or were raised to dukedoms.) The only woman ever appointed as a marquess in her own right was Anne Boleyn, who was made Marchioness of Pembroke just before her marriage to Henry VIII.
Duke is the highest rank of the British peerage system. It is the ultimate tier of the nobility, surpassed only by princes and kings. Princes, however, can also be dukes — and traditionally they are given a dukedom when they come of age or are married. The first British duke was created in 1337 when King Edward III gave his son, known as Edward the Black Prince, the title of Duke of Cornwall. Today, of course, Prince William and his wife Catherine are officially the Prince and Princess of Wales as well as the Duke and Duchess of Cambridge. (Prince Harry and Meghan, meanwhile, retain their titles of Duke and Duchess of Sussex.) At present there are 24 dukes, not including the royals. Understandably, it’s particularly difficult to become a duke or duchess. The last dukedom — the Duke of Westminster — was created by Queen Victoria in 1874, and is the most recent dukedom conferred on someone not related to the British royal family.
The myth of the Wild West is one of the most persistent and influential myths in American culture. From quick-draw gun duels and cowboy hats to notorious outlaws such as Jesse James and Billy the Kid, the Old West is full of legends and lore, many of them popularized by dime novels and, later, Western movies. Sorting truth from fiction can be a tricky process when it comes to the American frontier. Here are six facts about some of the most infamous outlaws from the Wild West.
Henry McCarty, alias William H. Bonney, and best known as Billy the Kid, was only 21 years old when he was killed by Sheriff Pat Garrett in 1881. McCarty packed a lot into his short and violent life. He was orphaned at 15, committed his first crime shortly after, joined a band of rustlers, and quickly became involved in the brutal Lincoln County War between rival factions, which featured famous names from the Old West, such as Sheriff William J. Brady and John Chisum. Despite his early demise, Billy the Kid became one of the most notorious gunfighters of the American West. According to his own count, he killed 21 men, although the actual number is probably fewer than 10.
Jesse James Was Shot in the Chest Twice Before He Was Even 18
Jesse James was an outlaw, guerrilla fighter, and leader of the famous James-Younger Gang that pulled off a series of daring and often vicious robberies of banks, trains, and stagecoaches. He was a notorious celebrity during his lifetime, and became one of the most legendary figures of the Wild West. Things could have been a lot different, however, as James was lucky to have made it out of his teens alive. Before he was even 18, he had been shot twice in the chest. The first shooting happened in 1864 during the American Civil War, when James was fighting alongside a group of pro-Confederate guerrillas known as Quantrill's Raiders. Despite the chest wound, James was ready to ride again in a matter of weeks — but he carried the bullet in his body for the rest of his life. The following year, James was once again shot in the chest, this time while trying to surrender to a Union cavalry patrol. It was a serious wound, but he was slowly nursed back to health by his cousin Zee Mimms, whom he married nine years later.
Butch Cassidy Had a Strong Aversion to Killing
Most people have an aversion to killing, of course, but for the leader of an outlaw gang in the Wild West, it was a rare trait. For all his notoriety, it’s possible that Butch Cassidy never killed a soul. He was known as something of a gentleman who abstained from actually using his gun, and he tried to keep his gang, the Wild Bunch, from unnecessary violence during robberies. That said, gang members such as Kid Curry, George Curry, and Will Carver certainly did claim their fair share of victims. Butch, however, tried his best to keep people safe. According to Richard Patterson, author of Butch Cassidy: A Biography, “The closest Butch ever came to harming a robbery victim was when he used explosives to force his way into an express car.”
The Sundance Kid Took His Name From the Town Where He Was Arrested
Harry Alonzo Longabaugh was arrested only once during his lifetime, but the arrest gave rise to one of the most famous names in the history of the Wild West: the Sundance Kid. Longabaugh left home when he was just 15 years old. A couple of years later, in 1887, he stole a gun, a horse, and a saddle from a cowboy while traveling across Wyoming. He was captured and arrested in Miles City, Montana, and sentenced to 18 months in the Sundance, Wyoming, jail. While incarcerated, he adopted the nickname “the Sundance Kid.” Upon his release in 1889, he tried to make an honest life for himself as a cowboy, but it didn’t go as planned. He soon became an integral part of the Wild Bunch, and was known as the gang’s fastest gunslinger — although it’s possible he never killed anyone during the gang’s various heists and robberies.
Buffalo Bill Was a Vocal Supporter of Women’s Rights
William “Buffalo Bill” Cody was one of the most colorful figures of the Old West. A buffalo hunter, U.S. Army scout, Pony Express rider, showman, actor, and all-around celebrity, he helped create the myth of the American West with his Wild West show, which made him one of the world’s first global celebrities. Having spent years in the company of famed frontier women such as Annie Oakley and Calamity Jane, Buffalo Bill became a strong supporter of the rights of women. When asked by a reporter if he supported women’s suffrage, his response was unequivocal: “I do. Set that down in great big black type that Buffalo Bill favors woman suffrage… If a woman can do the same work that a man can do and do it just as well, she should have the same pay.” It was a bold response for its day, especially coming from a hardened frontiersman.
It’s certainly true that Belle Starr associated with a rogues’ gallery of Wild West figures, including Frank James and Jesse James. It’s also true that she was a crack shot, rode sidesaddle while dressed in a black velvet riding habit, carried two pistols, and had cartridge belts strung across her hips. It is highly unlikely, however, that Starr was the criminal mastermind of a gang that preyed on travelers, ranchers, and cowboys, as the myth surrounding her might suggest. It was only a few months after her death in 1889 that a purported biography, Bella Starr, the Bandit Queen; or, The Female Jesse James, was published by the king of dime novels, Richard K. Fox. Supposedly written by Starr, it was most likely a fabrication, and it helped cement the myth surrounding this Old West outlaw.
5 Common Items From Colonial America You’ve Never Heard Of
Life in colonial America was undeniably challenging, and early settlers had to be resilient and resourceful in order to survive. Many of the items that colonists used in day-to-day life were either brought from Europe or based on tools they had used in their old lives. While some remnants of the colonial era, such as spinning wheels and quill pens, remain a part of our collective memory, many lesser-known items have faded into obscurity or been replaced by modern innovations. Here are five once-common objects you may not have heard of before, each of which served an important role in sustaining family life and building communities in colonial America.
A simple, durable tablet used as a primer for children’s studies, the hornbook originated in England around 1450 and was a staple of early childhood education in colonial America. Hornbooks were crafted by affixing a single page of parchment or paper onto a paddle-shaped wooden board and covering it with a translucent protective sheet made from an animal’s horn. This was created by soaking the horn in cold water to separate the parts, then heating and pressing the needed part into a thin, clear layer. A fundamental lesson was printed on the paper, such as the alphabet in lowercase and capital letters, simple vowel-consonant combinations, Roman numerals, and religious texts. Hornbooks remained popular well into the era of mass-printed books because they were both sturdy and functional.
Dating back to ancient Greece, the salt cellar (or salt-box) was a practical and decorative piece of tableware that colonists brought with them to the Americas. Traditionally, a salt cellar was not only useful for storing salt, but could also signify the hierarchy of those who were seated at the table. Those who were seated “above the salt,” typically closer to the host at the head of the table, were considered esteemed guests, while individuals of less prominence, including children, were seated “below the salt,” toward the middle or opposite end of the table from the host.
Sugar was considered a luxury in colonial America. The crop was generally harvested by enslaved people on sugar cane plantations in the Caribbean, then shipped to refineries where it was processed and shaped into cones that were wrapped in paper and distributed throughout the colonies. Sugar nippers were specialized steel tools that were used to cut chunks off a sugar “loaf,” which could weigh as much as 10 pounds. The smaller pieces were then crushed into granulated sugar using a mortar and pestle.
Shaped like a frying pan with a long handle and hung by the fireplace when not in use, a warming pan was typically made of wrought iron and fitted with a decorative brass or copper lid. Warming pans were popular throughout Europe by the 17th century, and were indispensable household items for settlers in the northern American colonies during the harsh winter months. Before retiring for the evening, a family would fill the warming pan with hot coals or embers, place it between the layers of bedding, and gently move it around to warm the sheets. The pans themselves were considered valuable family heirlooms that were handed down from generation to generation.
The flail is an ancient hand tool that colonial farmers used for threshing grain, such as wheat, barley, and oats. Commonly used until the widespread availability of mechanical threshers in the mid-19th century, the flail consisted of two wooden sticks of different lengths joined by a short, flexible thong made of rope, leather, or chain. The longer rod, called a handstaff or helve, provided leverage, while the shorter rod, called the beater, was used to strike the grain over and over, separating the edible part of the grain from the husks or chaff. Using a flail, one person could thresh around seven bushels of wheat, 15 bushels of barley, or 18 bushels of oats in one day.
Family dinner has been a mainstay of U.S. households since the mid-19th century, when men increasingly began to work and eat lunch — once considered the main meal of the day — outside the home. By the 1920s, the food rationing of World War I was a thing of the past, and the “Roaring ’20s” brought economic prosperity for many Americans.
When families sat down for dinner in this era, they could expect a menu typically consisting of a meat, a starch, and a side dish. The 1920s also saw an increase in the availability and variety of foods, including canned fruits, as well as innovations such as iceboxes and, later, refrigerators, which began to make their way into family homes over the course of the decade.
All of these factors played a part in what was served for dinner. From hearty mains to unique salads and decadent desserts, here’s a peek into dining rooms across America in the 1920s.
F. Scott Fitzgerald’s The Great Gatsby focused on the wealthy elite of Jazz Age New York, describing buffet tables overflowing with hors d’oeuvres and spiced baked hams. But meats weren’t just for the rich, and in the 1920s, a baked ham or other large cut of meat was a common sight at family meal time, especially during holidays or as the centerpiece of a Sunday dinner.
A popular glazed ham recipe involved studding the outside with cloves, canned pineapple rings, and maraschino cherries. With the introduction of Wonder Bread and the proliferation of sliced bread in the same decade, leftover ham sandwiches were also a lunchbox fixture.
The origins of this recipe, like those of many classic food and drink concoctions, are unclear, but its regular appearance on dinner tables in the early 20th century is an undisputed fact. At its most basic, the dish consists of cubed chicken and mushrooms in a creamy white sauce, garnished with pimentos and served over toast, pasta, or flaky puff pastry pieces.
The rich sauce, made with cream, butter, and flour, was also often seasoned with sherry or other spirits for added flavor. Over time, ingredients such as green peppers or peas also began appearing in recipes. The beloved comfort food grew even more popular in later years, reaching its peak during the 1950s. While it fell out of favor in subsequent decades, it can often still be found as a canned or frozen entrée in grocery stores.
Hors d'oeuvres enjoyed immense popularity in the 1920s. The Prohibition era fostered more informal social gatherings and cocktail parties at home, where finger foods were often served. Stuffed celery was a regular choice not only for entertaining, but also as a side at the dinner table.
The recipe began appearing in cookbooks and women’s magazines in the early 1900s and gained popularity throughout the ’20s and ’30s. The basic premise remained simple: chilled celery stalks filled with cream cheese, which was mixed with toppings such as tuna, lobster, crab, or, more commonly, olives and pimentos. Olives and pimentos were also used in cheese and olive salad, which was served on leaves of lettuce.
Originally created in 1893 at New York’s prestigious Waldorf Hotel by the executive chef Edouard Beauchamp and maître d’hôtel Oscar Tschirky, this timeless salad was a staple of family dinner menus in the early 20th century, when sweet salads became a favorite.
The first recipe was published in 1896 in Tschirky’s own The Cook Book, by “Oscar” of the Waldorf. This original incarnation called only for diced apples and celery to be “dressed with a good mayonnaise.” Over time, the salad evolved; throughout the 1920s, recipes also began calling for walnuts and seedless red grapes. As recently as 2017, the Waldorf — short for the Waldorf-Astoria — still featured an elevated version of the salad on the menu, made with Granny Smith and Fuji apples, halved grapes, and candied walnuts.
Thanks to advancements in canning and a boom in the Hawaiian pineapple industry, canned pineapple enjoyed newfound popularity in the U.S. in the 1920s. The sweet golden rings were used not just as a garnish for baked hams, but in baked goods, too — most notably, in the pineapple upside-down cake.
The all-American dessert was an adaptation of already-popular skillet cakes, previously made with fruits such as apples or cherries. It gained a new audience after Dole — then known as the Hawaiian Pineapple Company — held a pineapple recipe contest in 1925.
Dole received 2,500 recipes for pineapple upside-down cake alone, and the published recipe, which appeared in a Dole cookbook as well as in magazines, helped it find a whole new audience. The syrupy-sweet tropical dessert has remained a family dinner favorite for decades, through the 1960s and beyond.
This no-bake dessert was named for the then-ubiquitous icebox, a precursor to the electric refrigerator that held a large ice block and was often made of wood and lined with tin. Iceboxes helped preserve fresh food in a way that canning or drying could not, and in the case of the icebox cake, the chilling also helped soften the dessert to decadent perfection.
To start, crisp wafers or cookies were layered in a dish with whipped cream. After being left to sit in the icebox for a few hours or even overnight, the cookies softened and melded with the creamy filling. While the simplicity of the dessert was part of its appeal, other ingredients such as custard, fruit, gelatin, or chocolate were sometimes used as well.
With all due respect to Hollywood’s golden age, you could make a convincing argument that the 1970s were the best decade in cinematic history. As the New Hollywood era reached its peak and visionary directors were given previously unseen control over their productions, creativity flourished in Tinseltown like never before. It came to a (perhaps inevitable) end in the early ’80s after a string of high-profile box-office failures, but even the movies considered responsible for ending New Hollywood (such as William Friedkin’s Sorcerer and Michael Cimino’s Heaven’s Gate) have since been reassessed as severely underappreciated in their own time.
Though there are hundreds of movies from the ’70s well worth your time — including such classics as The Godfather, Jaws, and Star Wars that you’ve likely already seen — these seven films are a great place to start exploring the decade further.
There are innovators, and then there’s Barbara Loden. The actress-turned-filmmaker wrote, directed, produced, and starred in the semi-autobiographical film Wanda, a landmark of the then-nascent independent film movement. The movie centers on an aimless housewife who joins up with a bank robber after leaving her husband in Pennsylvania’s coal country. Made for just $100,000, it won an award at the Venice Film Festival in Italy for Best Foreign Film and paved the way for countless female filmmakers to follow. Sadly, it was the only feature Loden would direct. She was diagnosed with breast cancer in 1978 — by which point she’d also helmed two short films and a number of off-Broadway plays — and died in 1980 at the age of 48. Her legacy has only grown with time, as has Wanda’s.
Stanley Kubrick’s 2001: A Space Odyssey is often (and rightly) cited as one of the greatest films ever made, but not everyone enjoyed going beyond the infinite when the film first debuted. One notable dissenter was Russian auteur Andrei Tarkovsky, who considered 2001 “phony on many points” and “a lifeless schema with only pretensions to truth.” Seeking to bring more emotional depth to science fiction, a genre he feared was becoming cold and lifeless, Tarkovsky decided to put his convictions to the test by adapting Stanisław Lem’s novel Solaris.
The result is a mind-altering exploration of a semi-sentient planet that creates hallucinations and/or physical manifestations of our deepest desires and fears, depending on which interpretation you choose to believe. For the protagonist Kris, a psychologist who’s been sent light-years from Earth to a space station orbiting Solaris, that comes in the form of his long-dead wife Hari. He’s haunted by her, which is to say he’s haunted by his own decisions — a problem no less urgent than the matter of how he’s going to get home.
John Cassavetes and Gena Rowlands, a married couple who collaborated on numerous films together, essentially invented American independent filmmaking in the late 1950s. Their shared filmography includes 10 movies, none quite as moving as A Woman Under the Influence. Rowlands delivered one of the most devastating performances in cinema history as Mabel, a housewife on the verge of a nervous breakdown. Her increasingly erratic behavior — strange tics, private and public outbursts — can only be partially explained by her reliance on alcohol, and would appear to speak to a deeper affliction.
Rowlands embodied her character in a way few performers ever have, and she earned the first of her two Oscar nominations for the performance — the second was for 1980’s Gloria, also directed by Cassavetes. Their pairing ended with Cassavetes’ untimely death in 1989, but Rowlands continued working for decades — including with the couple’s son Nick Cassavetes, who directed her in The Notebook.
Jeanne Dielman, 23 quai du Commerce, 1080 Bruxelles (1975)
Recently named the greatest film of all time by the British Film Institute’s once-per-decade poll of critics, filmmakers, and academics, Chantal Akerman’s Jeanne Dielman serves as a before-and-after moment in the history of film. With much of its 201-minute running time consisting of its title character (a phenomenal Delphine Seyrig) performing her daily domestic tasks in real time, the movie is sure to alienate some potential viewers. Those who can get on its wavelength, however, will find it a uniquely immersive experience that has never been and likely never will be replicated.
Jeanne, a widowed mother who earns extra money by sleeping with a different male client shortly before her son returns home from school each afternoon, performs most of her mundane tasks with a silence that speaks to their mundanity — and, by extension, that of her entire existence. But tensions come to a boil just as surely as the water she cooks her potatoes in, and the film’s shocking climax is impossible to see as anything but inevitable in hindsight.
Robert Altman once said that the idea for his strangest film came to him in a dream while his wife was in the hospital, and it shows. And while the movie’s title is accurate, it’s also something of a misdirection: 3 Women is at its heart a cinematic dyad, with two women named Mildred (one played by Shelley Duvall, who goes by Millie, and the other, nicknamed Pinky, played by Sissy Spacek) coming into each other’s lives in the California desert with increasingly bizarre consequences. It’s not unlike Ingmar Bergman’s Persona or David Lynch’s Mulholland Drive in the way the personalities of the two female leads refract through a shared lens, until they seem to merge into something darker than the sum of their parts.
Terrence Malick burst onto the scene with 1973’s Badlands, a lovers-on-the-lam drama that helped launch the careers of Martin Sheen and Sissy Spacek in addition to his own. It was one of the most exciting debut films in years, and Malick followed it up with the even more accomplished Days of Heaven. Not that it was easy: Malick, whose loose filmmaking style entails shooting hours of footage and settling on a narrative in the editing room, had cinematographers Néstor Almendros and Haskell Wexler shoot almost entirely during magic hour, the brief period around sunset when the sky is at its most painterly, with a warm glow of light. The result, accompanied by legendary composer Ennio Morricone’s evocative score, is, well, magical. Set in 1916, it follows wayward couple Bill and Abby (Richard Gere and Brooke Adams, respectively), who flee Chicago for the Texas panhandle, where they pretend to be siblings in order to trick a wealthy, ailing farmer (Sam Shepard) into falling in love with Abby so they can inherit his money.
Also along for this ill-fated journey is Bill’s actual kid sister Linda (Linda Manz), who provides the film’s lyrical, evocative narration: “The sun looks ghostly when there’s a mist on a river and everything’s quiet,” she’ll say in one scene, and in another, “You're only on this Earth once. And I — to my opinion, as long as you’re around, you should have it nice.” Days of Heaven premiered at the Cannes Film Festival, where Malick won the prize for Best Director; it went on to be nominated for four Academy Awards, with Almendros winning Best Cinematography. It’s still considered one of the most beautiful films ever made, and few who’ve seen it would disagree.
In space no one can hear you scream, but here on Earth, your neighbors will certainly hear you the first time you watch Alien. Ridley Scott’s first masterpiece remains the measuring stick for sci-fi horror 45 years later, and arguably the only movie to match it arrived just a few years afterward: John Carpenter’s The Thing. Before it was a massive franchise consisting of sequels, prequels, comic books, and toys, Alien was a fairly straightforward story of a cosmic entity coming aboard a spaceship and picking off its inhabitants one by one. And while it’s obvious to anyone watching today that Sigourney Weaver’s Ripley is the heroine all along, it wasn’t at the time — Weaver was a relative unknown then, whereas Tom Skerritt, in the role of Captain Dallas, was the character audiences expected to save the day. In addition to the xenomorph itself — an enduringly horrifying creature designed by Swiss artist H.R. Giger — Alien also introduced moviegoers to Jonesy, a fan-favorite feline who in some ways is the franchise’s ultimate survivor.
When we think of U.S. presidents through history, we don’t tend to picture their physical frame so much as recall a collection of historical facts and anecdotes. If you imagine George Washington, for example, is a mental image of his presence in a room the first thing that comes to mind? Or do you recall a story about a cherry tree, or the crossing of the Delaware? With a few exceptions here and there, the physicality of presidents has been largely obscured by history. Can you name the tallest president? The shortest? What about the second-tallest or second-shortest? A full list of each president’s height follows, spanning a full foot, from 5 feet, 4 inches to 6 feet, 4 inches.
Over 6 Feet Tall
The tallest president in U.S. history was Abraham Lincoln, who stood at 6 feet, 4 inches — and that’s without his signature stovepipe hat. It’s a height that still sounds fairly tall today, but it was extraordinarily tall for the time; the average height for an American male during Lincoln’s presidency was 5 feet, 7 inches, making him 9 inches taller than average. Lincoln’s equivalent height today would be 6 feet, 7 inches — a half-inch taller than the average NBA player.
Given his distinct physical presence, it perhaps comes as no surprise that Lincoln’s appearance was frequently commented upon in his day. The New York Herald once wrote, “Lincoln is the leanest, lankiest, most ungainly mass of legs, arms, and hatchet-face ever strung upon a single frame.” Another reporter wrote of his “shambling gait” in London’s The Times, and described him as “a tall, lank, lean man, considerably over six feet in height, with stooping shoulders, long pendulous arms, terminating in hands of extraordinary dimensions, which, however, were far exceeded in proportion by his feet.” Here are the 18 other presidents who stood over 6 feet, if not quite as noticeably as Uncle Abe.
– Abraham Lincoln: 6 feet, 4 inches (193 cm)
– Lyndon B. Johnson: 6 feet, 3.5 inches (192 cm)
– Donald J. Trump: 6 feet, 3 inches (191 cm)
– Thomas Jefferson: 6 feet, 2.5 inches (189 cm)
– Chester A. Arthur: 6 feet, 2 inches (188 cm)
– Bill Clinton: 6 feet, 2 inches (188 cm)
– George H. W. Bush: 6 feet, 2 inches (188 cm)
– Franklin D. Roosevelt: 6 feet, 2 inches (188 cm)
– George Washington: 6 feet, 2 inches (188 cm)
– Andrew Jackson: 6 feet, 1 inch (185 cm)
– John F. Kennedy: 6 feet, 1 inch (185 cm)
– Barack Obama: 6 feet, 1 inch (185 cm)
– Ronald Reagan: 6 feet, 1 inch (185 cm)
– James Buchanan: 6 feet, 1 inch (185 cm)
– Gerald R. Ford: 6 feet, 1 inch (185 cm)
– James A. Garfield: 6 feet, 1 inch (185 cm)
– Warren G. Harding: 6 feet, 1 inch (185 cm)
– James Monroe: 6 feet, 1 inch (185 cm)
– John Tyler: 6 feet, 1 inch (185 cm)
Over 5 Feet, 10 Inches Tall
The average height of all 46 U.S. presidents is 5 feet, 11 inches, and it has been decades since the United States elected a president below that height (in part, notably, because all U.S. presidents have been male). Jimmy Carter was the last one, at 5 feet, 9.5 inches — still roughly an inch taller than the average American male at the time. According to the data, the United States almost never elects a president who is shorter than the average U.S. citizen of the time. The last time Americans voted in a shorter-than-average president was when Benjamin Harrison emerged victorious in the election of 1888, though at only 1.5 centimeters below the average of the time, he wouldn’t have been noticeably shorter. Based on this fact, it does seem that Americans prefer their presidents to be somewhat tall — though, considering the lack of mention in exit polls over the years, that may be a subconscious preference rather than an actual requirement. Here are the presidents who fell right around average height for a commander in chief, between 5 feet, 10 inches and 5 feet, 11 inches tall.
Under 5 Feet, 10 Inches Tall
On the other end of the height spectrum, the shortest U.S. president was James Madison, who at 5 feet, 4 inches holds that record by 2 inches — the next-shortest presidents were Martin Van Buren and Benjamin Harrison, both at 5 feet, 6 inches. The average height in Madison’s time was actually slightly taller than in Lincoln’s time: 172 centimeters to 170 centimeters, or just a bit shy of 5 feet, 8 inches. Despite Madison being the only president on record who was shorter than his First Lady (Dolley Madison was around 5 feet, 7 inches tall), his relatively diminutive physical stature didn’t cause nearly the same level of commentary as Lincoln’s lanky height (or if it did, that commentary is lost to history). To conclude our list, here are the 14 presidents who stood under 5 feet, 10 inches tall.
- Jimmy Carter: 5 feet, 9.5 inches (177 cm)
- Millard Fillmore: 5 feet, 9 inches (175 cm)
- Harry S. Truman: 5 feet, 9 inches (175 cm)
- Rutherford B. Hayes: 5 feet, 8.5 inches (174 cm)
- Ulysses S. Grant: 5 feet, 8 inches (173 cm)
- William Henry Harrison: 5 feet, 8 inches (173 cm)
- James K. Polk: 5 feet, 8 inches (173 cm)
- Zachary Taylor: 5 feet, 8 inches (173 cm)
- John Quincy Adams: 5 feet, 7.5 inches (171 cm)
- John Adams: 5 feet, 7 inches (170 cm)
- William McKinley: 5 feet, 7 inches (170 cm)
- Benjamin Harrison: 5 feet, 6 inches (168 cm)
- Martin Van Buren: 5 feet, 6 inches (168 cm)
- James Madison: 5 feet, 4 inches (163 cm)
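The centimeter figures in the lists above follow from the standard conversion of 1 inch to 2.54 centimeters, rounded to the nearest whole centimeter. As a hedged illustration (the conversion itself is not spelled out in the original article), here is a minimal sketch:

```python
# Minimal sketch: feet-and-inches to centimeters, matching the parenthetical figures above.
# Assumes the standard 1 inch = 2.54 cm and rounds half up to a whole centimeter.
import math

def height_to_cm(feet: int, inches: float) -> int:
    """Convert a height given in feet and inches to whole centimeters (rounding half up)."""
    total_inches = feet * 12 + inches
    return math.floor(total_inches * 2.54 + 0.5)

# Spot checks against the tallest and shortest entries:
print(height_to_cm(6, 4))  # 193 (Abraham Lincoln)
print(height_to_cm(5, 4))  # 163 (James Madison)
```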
Most of us are familiar with Benjamin Franklin’s scientific inventions, and his role as one of the United States’ foremost Founding Fathers. But his ingenuity extended far beyond his most defining accomplishments; Franklin, it seems, was a visionary without limits. For instance, did you know his rustic clothing inspired European copycats? Or that he was instrumental in understanding the Gulf Stream? These lesser-known facets of Franklin’s legacy underscore the breadth of his intellect and the enduring impact of his innovations. Here are five fascinating ways Franklin’s forward-thinking approach made him one of the most fascinating figures in American history.
In late 1776, early in the Revolutionary War, Franklin sailed from Philadelphia to France on a diplomatic mission. Although he was ultimately there to secure French support for American independence, he also became somewhat of a style icon. Unlike many of his contemporaries who favored the popular powdered wig of the time, Franklin wore his natural hair unstyled. His clothing was similarly unfussy: Plain suits, a walking stick, and fur hats were his signature sartorial items.
This deliberately simple and very Americana choice of attire, coupled with Franklin’s global reputation, endeared him to the French, and even influenced French fashion. Some women began wearing wigs made to look like his fur cap, a style known as the “coiffure a la Franklin,” and his image appeared in portraits and on medallions and other jewelry. In 1779, Franklin wrote to his daughter about just how popular he had become. “The numbers sold are incredible,” he wrote. “The pictures, busts, and prints, (of which copies upon copies are spread every where) have made your father’s face as well known as that of the moon.”
He Started America’s First Volunteer Fire Department
Many rural communities rely on volunteer fire departments, an institution that owes its American origins to Franklin’s civic-mindedness. In 1736, he founded the first volunteer fire department of its kind in the U.S., the Union Fire Company in Philadelphia. Unlike Boston’s Mutual Fire Societies, Franklin’s department, sometimes called the “Bucket Brigade,” served the entire community and not just members. The brigade had approximately 30 volunteers to start, fulfilling various roles such as water management, property protection, and training. When attending a fire, each member brought leather buckets to carry water and linen sacks to attempt to save belongings.
Volunteer fire departments aren’t the only enduring and important public service attributed to Franklin, either. In 1731, he established the Library Company of Philadelphia, the first successful public lending library in America, and in 1751, he played a pivotal role in the establishment of Pennsylvania Hospital, one of the first hospitals in the country.
He Invented an Instrument That Was Beloved by Beethoven
Franklin was quite musical, too. Not only did he play instruments such as the guitar and the harp, but in the 1760s, he even invented an instrument of his own. The “glass armonica” consisted of glass bowls of varying sizes, nested concentrically (which eliminated the need for the water used to tune ordinary musical glasses) and mounted on a horizontal spindle. The spindle was spun by a foot pedal, and the glass bowls were played by rubbing one’s fingers along their edges. It was meant to produce tones similar to “singing” glasses, something Franklin had seen while living in England.
Following its first public performance in 1762, the armonica became a hit. Marie Antoinette took lessons, Thomas Jefferson was a fan, and Ludwig van Beethoven and Wolfgang Amadeus Mozart both composed music for the novel instrument. Despite its initial popularity, the armonica fell out of favor by the 1820s, due in part to its purported negative effects on mental health — attributed at first to the instrument’s ethereal tones, but later thought to be due to lead poisoning from the paint applied to the bowls. Today, the armonica is used by some niche musicians, a second life that would surely please Franklin, who said the instrument had brought him “the most personal satisfaction.”
His First Pseudonym Was “Mrs. Silence Dogood”
Franklin was a prolific writer, famously contributing words to the Declaration of Independence and the United States Constitution. But his first prose was published when he was just a teenager — and under the pen name "Mrs. Silence Dogood." He used the name to get published in his brother’s newspaper, The New-England Courant, without his brother’s knowledge. Disguising himself as a middle-aged widow, Franklin penned a series of witty and satirical essays that quickly gained popularity with readers.
He penned 14 essays under the pseudonym, just one of many he would use in his life. Some of the others were Polly Baker, a character Franklin used to point out double standards between men and women under the law; the Busy-Body, an American Weekly Mercury character who dabbled in lighthearted societal gossip and relationship talk; and perhaps Franklin’s most famous pseudonym, Richard Saunders, of Poor Richard's Almanack, which was first published in 1732 and lasted for 26 annual editions.
The Gulf Stream is a major ocean current that brings warm water from the Gulf of Mexico into the Atlantic Ocean, significantly influencing weather patterns. Spanish explorer Juan Ponce de León first observed the current in 1513, but it wasn’t until the late 18th century that Franklin became the first to chart out the path of the Gulf Stream on a map.
While serving in London as deputy postmaster general for the American colonies, Franklin noticed a difference in sailing times between westbound and eastbound ships. He consulted his cousin Timothy Folger, a Nantucket whaler with deep knowledge of the area, who provided his insights into the powerful current. Together, Franklin and Folger charted the waters, and published their findings on a map in 1768, the first known physical depiction of what they termed the “Gulph Stream.” The map was distributed to Franklin’s mail ships, and the knowledge served as the basis of future Gulf study by the United States Coast Survey.
It’s often said that people during the Middle Ages, a period that lasted from roughly the end of the fifth century through the 15th century, drank beer instead of water because the drinking water at the time was dirty and unsafe. This raises the question: Were people in medieval times always drunk? While it’s true that beer was free-flowing in the Middle Ages, a lack of clean drinking water is one of the most common misconceptions about the time period. We took a look at the history to get to the truth behind the myth.
Despite the myth that’s been perpetuated in the centuries since, there was plenty of clean water during the Middle Ages, and people rarely relied on alcoholic beverages as a substitute. That isn’t to say people steered clear of the stuff — boozy beverages were widely enjoyed by everyone from members of the working class to those in high society. But it’s not actually true that unclean water led to the widespread consumption of ale as an alternative.
People in medieval times understood the health benefits of drinking water, even if the underlying science was not yet known. This was based in part on the early medical findings of ancient Greek physician Hippocrates, who recommended boiled and strained water as an important ingredient for overall health. There was, of course, medical misinformation as well, including some 15th-century texts that encouraged pregnant women to drink wine instead of cold water for the health of the baby. But generally, fresh water was understood to be good for you.
Indeed, fresh, running water was so coveted that many medieval villages were built along rivers and streams so that residents could have access to a constant supply of water for drinking, cleaning, farming, and other daily chores that required clean water. Many people also collected rainwater in barrels, which was safe to drink at the time given the lack of air pollution. Freshwater wells were quite common, too, and were built to ensure the purity of the water. People in the Middle Ages were aware that the best water was clear, cold, and odorless, and they often lined their wells with wood to ensure that the water wouldn’t get contaminated with murky mud. Some people also understood that if water looked or smelled impure, boiling it could remove impurities and make it safer to drink.
Clean water wasn’t always as easy to come by in densely populated urban environments that were further removed from natural sources. There was also a high risk of contamination due to runoff from local businesses such as tanneries and butchers. With these concerns in mind, many cities instituted ordinances and imposed fines on anyone who polluted the local water supply.
Over time, cities began constructing intricate networks of pipes to bring fresh water into the city center from nearby water sources. In 1237, the city of London was given the authority to build a system of underground pipes connected with springs in nearby Tyburn (a manor located in Middlesex county) in order to supply residents with a fresh supply of water. Known as the “Great Conduit,” it was among the first major plumbing projects of its kind. The pipes were made of wood and lead, and safely transported water into London for people to enjoy at no charge. The system was expanded throughout the 14th and 15th centuries before being damaged beyond repair by the Great Fire of London in 1666.
Those who were wealthy enough could actually pay to have private pipes installed that brought water directly to their homes. These were known as “quills,” and anyone who illegally tapped into the pipes was subject to a sizable fine. Many wealthy Londoners also paid water delivery people, known as “cobs,” to bring them 3-gallon tubs of water each day. However, these perks were difficult for the majority of people to afford, and most folks relied on the communal water supply instead.
Given the plentiful supply of clean drinking water available during the Middle Ages, it’s natural to wonder how the myth of beer-guzzling medieval folk formed to begin with. One theory is that it stemmed from written works that depicted excessive drinking as a popular pastime — William Shakespeare’s plays, for instance, are full of inebriated characters, none so famous as Sir John Falstaff. But the myth was also likely perpetuated by the fact that people often drank beer along with water, as the former was a staple in many medieval diets.
Ales were frequently consumed by farmers and other workers as a type of energy drink. The ales brewed at the time often had a lower alcoholic content than modern-day beers, with an average ABV as low as 1%, versus an average around 5% today. (There were stronger beers, too, but they were harder to come by.) These low-alcohol ales still contained plentiful amounts of calories, though, especially compared to water. That’s why at the end of a long shift, it was common for workers to enjoy several glasses of ale not only to quench their thirst but also to provide them with caloric fuel to recoup the energy they lost that day.
The upper classes, meanwhile, often turned to fancy wines and liquor as their beverages of choice, as they believed water to be the drink of the common folk. While everyone could easily acquire water for free, only those of a higher status could afford the most expensive and decadent wines. Members of high society who chose to drink water often added ingredients that only the wealthy could afford, such as ice, honey, fruit, and spices. Nevertheless, it’s likely that people in medieval times weren’t inebriated any more frequently than people in any other era of history, and when they were, it wasn’t for a lack of safe drinking water.