Photo credit: Heritage Images/ Hulton Fine Art Collection via Getty Images
By Tony Dunnell
April 2, 2026
Much of human history has been defined by the actions of around 50 to 70 empires that once ruled large swathes of people across vast chunks of the globe. Each of these empires, whether large or small, for ill or for good, has influenced world history. It’s hard to say which has had the greatest impact — such a judgment is, after all, subjective and difficult to measure — but some have undeniably and irrevocably shaped the course of human history. Here are six such empires, from the mighty Persians to the globe-spanning British.
Persian Empire
Beginning around 550 BCE, Cyrus II of Persia — later known as Cyrus the Great — conquered a number of neighboring kingdoms, including Media and Babylon, and brought them together under his control. In so doing, he founded the first Persian Empire, also known as the Achaemenid Empire. Centered in modern-day Iran, it became one of the largest empires in history, stretching from Egypt and the Balkans to parts of Afghanistan and Pakistan. For more than two centuries, the empire was a global center of culture, religion, science, arts, and technology. But then came the Persian ruler Xerxes, whose failed invasion of Greece in 480 BCE brought about a period of decline. Weakened, the Persian Empire eventually fell in 330 BCE at the hands of the invading armies of Alexander the Great of Macedonia.
Photo credit: Print Collector/ Hulton Archive via Getty Images
Roman Empire
Following a period of unrest and civil wars — including the assassination of Julius Caesar — the Roman Republic came to an end, and in 27 BCE Augustus became the first emperor of the new Roman Empire. At its height in 117 CE, Rome controlled territory stretching from Western Europe to the Middle East, and was the most powerful political and military entity the world had yet seen. The impact of the Roman Empire on the modern world is hard to overstate. Our art, architecture, laws, technology, and engineering — even the very words we speak — have all been heavily influenced by the ancient Romans. But even an empire as mighty as Rome was destined to fall. A series of Gothic invasions heralded a general decline, and in 476 CE, the Western Roman Empire fell. The Eastern Roman Empire — also known as the Byzantine Empire — survived until 1453, but the glory days of the Roman Empire had reached their end.
Photo credit: Photo 12/ Universal Images Group via Getty Images
Han Dynasty
Founded in 206 BCE by a commoner named Liu Bang, the Han dynasty was the second great imperial dynasty of China. It spanned more than four centuries and is considered a golden age in Chinese history. Despite much political turbulence, the dynasty helped cement Confucianism as the state ideology and opened up a world-changing trade route with Europe: the Silk Road. The Han dynasty is also known for its many innovations that shaped the world as we know it today. Developments in everything from record-keeping to agriculture and health care had a global impact, while inventions such as the rudder, the blast furnace, the wheelbarrow, suspension bridges, and paper forever changed the way we live.
Photo credit: Heritage Images/ Hulton Fine Art Collection via Getty Images
Mongol Empire
At the height of its powers, the Mongol Empire covered around 9 million square miles, making it the largest contiguous land empire the world has ever seen. The empire was founded by Genghis Khan, a former tribal leader, in 1206. Genghis’ early victories gave him control of the whole of what is now Mongolia. He and his fearsome armies then engaged in a period of aggressive expansion that conquered most of Eurasia, leaving a trail of ruin in its wake. But the Mongol Empire was far more complex than its notorious hordes would suggest. Under Genghis and his successors, the Mongols reformed their laws, created a military-feudal form of government, and enhanced trade (including along the Silk Road) throughout the conquered territories. Their armies, meanwhile, were quick to adopt advanced technologies of the time, such as powerful siege weapons and possibly gunpowder, while perfecting their mounted hit-and-run tactics. The Mongols were also innovators who, through their expansion, helped introduce military technology to new lands, including their famed composite bow and stirrups.
Photo credit: Heritage Images/ Hulton Fine Art Collection via Getty Images
Ottoman Empire
From humble beginnings as a provincial principality in Anatolia (part of modern-day Turkey), the Ottoman Empire rose to become one of the most powerful and long-lasting empires in history, spanning an incredible six centuries from the early 1300s to the aftermath of World War I. The Islamic superpower ruled large swathes of the Middle East, Eastern Europe, and North Africa, and reached the height of its powers under the appropriately named Suleiman the Magnificent. Suleiman, who ruled the empire from 1520 to 1566, brought about a golden age of geographic expansion, trade, economic growth, and huge cultural and artistic developments, while forging an empire that embraced ethnic diversity and religious tolerance.
Photo credit: Heritage Images/ Hulton Archive via Getty Images
British Empire
The British Empire remains the largest empire the world has ever seen. Beginning with overseas colonies in the Americas in the 16th century, British expansion then accelerated in the 18th century, particularly in Asia. With the aid of the London-based East India Company, the empire established trading posts around the world, which in turn developed into a worldwide system of dependencies, including colonies and protectorates. At its height in the early 20th century, the British Empire covered around 25% of the world’s land surface, including large parts of North America, Australia, Africa, and Asia. In 1913, it ruled over some 412 million inhabitants in its entirety — about 23% of the world’s population at the time. Such a vast territory was unsustainable, however, and, as more and more nations fought for their independence, the empire began to crumble. But the influence of the British Empire upon the world was massive — and remains a hugely controversial subject. Once a source of pride in Britain, the nation’s imperial past is now more often seen as a dark and often brutal period of colonialism. Since the decline of the empire, more than 60 countries have gained their independence from the United Kingdom.
In the early 19th century, as the United States pushed steadily south and west, the nation’s map was far less settled than it is today. Borders shifted, treaties were creatively interpreted, and distant empires claimed lands they could barely govern.
On the American frontier, settlers often lived in a gray zone — technically under a nation’s flag, but more or less governing themselves. Out of that uncertainty emerged one of the strangest chapters in U.S. history: a republic that survived for just two and a half months.
The Republic of West Florida’s brief existence began in the fall of 1810, when Anglo-American and British settlers in Spanish territory staged a revolt and declared independence from Spain — only to see their fledgling nation absorbed, almost immediately, by the United States. Here’s a look at the strange story of West Florida.
Credit: Sepia Times/ Universal Images Group via Getty Images
It Wasn’t the Florida We Know
To understand West Florida, you have to set aside modern geography. This was not the western half of today’s Sunshine State. In fact, none of the short-lived republic’s territory lies in modern Florida. West Florida was a narrow strip of Gulf Coast land stretching from the Mississippi River east to the Perdido River — territory that now falls mostly within Louisiana, Mississippi, and Alabama.
The name Florida dates back to Spanish exploration in the 1500s, when Juan Ponce de Leon named the peninsula La Florida — referring to Pascua Florida, the Spanish Easter season during which he first sighted the land. By the 18th century, the Perdido River served as the boundary between West Florida and East Florida, with the latter encompassing most of the peninsula that later became the modern state of Florida.
The region’s political identity shifted repeatedly during the 18th century. Originally claimed by Spain, it passed to British control in 1763 after the French and Indian War. Britain divided its holdings into East Florida and West Florida, establishing administrative borders that outlived British rule. When Spain regained the territory after the American Revolution, it kept those British-era boundaries and governed East and West Florida as separate colonies.
By 1810, both East and West Florida remained Spanish colonies, but in West Florida the empire’s grip was weak. Spain was strained by wars in Europe and independence movements across Latin America. Out on the Gulf Coast, Spanish officials were underfunded and undermanned — and settlers in the region, whose cultural and economic ties leaned toward the United States rather than distant Spain, were ready for change. East Florida, which was centered on St. Augustine and more dependent on Spanish military support, remained comparatively loyal to Spain.
The population of West Florida was diverse. English-speaking settlers — many from the American South — dominated politically and economically. They lived alongside Spanish and French officials, enslaved people, free people of color, and Indigenous communities. Trade flowed north toward American ports rather than south toward Spanish ones, and cultural ties to the United States grew stronger each year.
Spanish governance frustrated many settlers. Land policies were slow and inconsistent, legal systems unfamiliar, and trade regulations restrictive. Spain also lacked the ability to protect the region from lawlessness or external threats.
Complicating matters further was the Louisiana Purchase of 1803. The United States assumed that the western portion of West Florida had been included in the deal, but Spain rejected this interpretation outright. American officials argued that Louisiana’s eastern boundary reached the Perdido River, while Spain insisted it stopped at the Mississippi River. The resulting stalemate left West Florida in political limbo, governed by Spain but increasingly claimed by the United States.
By 1810, patience had run out. A group of local settlers, many of them prominent landowners, decided to take matters into their own hands. On September 23, they launched a brief uprising, capturing Fort San Carlos in Baton Rouge from Spanish forces. The takeover was swift and largely bloodless, reflecting how little resistance Spanish authorities could muster.
The Republic of West Florida declared independence a few days later. Although the uprising took place in Baton Rouge, the new republic established its capital in the nearby town of St. Francisville, reflecting the rural character of its leadership and support base. The fledgling nation came complete with a constitution, elected leaders, and a blue flag bearing a single white star. The new government insisted it was orderly, lawful, and deserving of international recognition.
Photo credit: Image courtesy of the Library of Congress
74 Days of Nationhood
For 74 days — from September 23 to December 6, 1810 — West Florida operated as a self-declared sovereign state. Its leaders spoke earnestly of self-government and local rights, borrowing language from the American Revolution. But the reality was more precarious.
The republic had few resources and no powerful allies to defend it. On October 27, 1810, President James Madison issued a proclamation declaring that West Florida already belonged to the United States as part of the Louisiana Purchase, and he ordered U.S. forces to take possession. The announcement asserted American authority at once, but troops did not enter the republic’s capital at St. Francisville until early December. For several uneasy weeks, West Florida existed in political limbo.
Governor Fulwar Skipwith, speaking for the fledgling government of West Florida, formally protested the U.S. takeover. Armed resistance was briefly discussed, but without outside support it was certain to fail. In the end, the leaders chose not to fight. On December 6, 1810, American troops entered St. Francisville, and by December 10 the republic had disappeared into U.S. control.
Unlike other independence movements within the United States, West Florida left little lasting mark on the national memory. The land was eventually divided among several states, leaving no single region to preserve a distinct identity. In Louisiana, however, the former West Florida territory became known as the Florida Parishes, a regional label that still echoes the short-lived republic today.
Spain treated the episode as a minor inconvenience rather than a major diplomatic crisis. For the United States, annexation was promoted as a straightforward correction of territorial boundaries based on its contested interpretation of the Louisiana Purchase. While the United States annexed and divided much of West Florida among Louisiana, Mississippi, and Alabama soon after 1810, Spain did not formally cede East and West Florida until the Adams-Onís Treaty, which was signed on February 22, 1819, and took effect in 1821, ending Spain’s legal claims to the territory and transferring the Floridas to U.S. control.
Credit: James Aylott/ Hulton Archive via Getty Images
By Timothy Ott
November 4, 2025
Area 51 does not exist. That is, it does not exist under that specific designation, as it’s formally known as Groom Lake and Homey Airport, part of the Nevada Test and Training Range. And even that information only became public knowledge after nearly 60 years of government denial of the clandestine military activity taking place at the base, which is hidden some 85 miles north of Las Vegas in the Mojave Desert.
According to official records, Area 51 has served as the base of operations for the development of aircraft and other technology designed to enhance the capabilities of the U.S. military. And yet, the place has also carved out a distinct identity in popular culture as a hotbed for alien research and communications, as evidenced by the 26% of Americans who believe that crashed alien spaceships are housed there.
The long-standing tradition of secrecy only bolsters the conspiracy theories surrounding the purposes of Area 51. But while it can be difficult to separate truth from myth, enough details have been unearthed to allow inquiring minds to put together some basic facts about this mysterious site.
Photo credit: Image courtesy of the Laughlin Heritage Foundation/ CIA
Area 51 Began as a Training Ground for a New Spy Plane
The story of Area 51 began in 1955 with the CIA’s attempts to find a training ground for the Lockheed Corporation’s U-2 spy plane. A suitable spot was found at Groom Lake, a salt flat located near the northeast corner of the Atomic Energy Commission’s (AEC) Nevada Proving Ground. Upon being incorporated into AEC territory, the site became known by its map designation of Area 51.
In its infancy, Area 51 was a sparsely populated facility consisting of a 5,000-foot-long runway, three aircraft hangars, a few administration buildings, and trailers for employee housing. The modest accommodations led to the base being sarcastically known by such nicknames as “Paradise Ranch” and “Dreamland.”
With the development of the high-altitude, supersonic A-12 aircraft in the early 1960s, Area 51 underwent major renovations to expand its infrastructure and tighten security. This included the addition of an 8,500-foot-long concrete runway with a 6,000-foot extension onto the lake bed, along with the construction of three new hangars. Additionally, a 60-foot-tall pylon was installed to test the radar-deflecting abilities of a mounted A-12 prototype.
In the late 1960s, the base became a site to analyze a Soviet MiG-21 fighter jet that had wound up in Israeli possession. By evaluating the MiG-21’s capabilities and training American pilots to counter it, the U.S. military was able to overcome the problems posed by these speedy and highly maneuverable jets during the Vietnam War.
After Area 51 oversight passed from the CIA to the Air Force in 1978, engineers continued to develop stealth technology for the next generation of U.S. fighter jets. The fruit of their labor was the revolutionary F-117A Nighthawk, which proved nearly undetectable to radar upon achieving operational capability in 1983.
Credit: James Aylott/ Hulton Archive via Getty Images
A Man Claimed To Have Seen Alien Ships There
For the first three decades of its existence, Area 51 drew little attention save for the occasional local report of unusual lights and aircraft spotted in its vicinity. But everything changed in 1989, when a man named Bob Lazar gave a series of interviews to KLAS-TV that introduced this secretive facility to the rest of the world.
Claiming to be a physicist who previously worked in the S-4 section of the base, Lazar insisted that there were nine alien spaceships being stored at the facility as scientists attempted to reverse-engineer these technological oddities. He also described the existence of “element 115,” an element that at the time had never been synthesized on Earth, which he claimed was used to power the ships’ antigravity propulsion systems.
Lazar's claims were largely discredited (although he still has his believers) after reports surfaced of him seemingly lying about his employment and education history. Nevertheless, the association between Area 51 and extraterrestrial phenomena only grew from there, eventually becoming so widespread that the subject was a plot point for fictional fare such as the 1996 big-screen blockbuster Independence Day and the popular sci-fi TV show The X-Files.
Although the U.S. government continued to officially deny the existence of Area 51, those efforts were undercut by lawsuits from former employees alleging health problems stemming from exposure to hazardous waste at the site. In 1995, President Bill Clinton issued a presidential determination that precluded the release of classified information regarding “the United States Air Force’s operating location near Groom Lake, Nevada,” an action he renewed annually through his remaining years in office and one that was continued by his successor, George W. Bush.
Photo credit: Image courtesy of the National Security Archives
The Base Was Finally Acknowledged With the Declassification of CIA Documents
In August 2013, following a Freedom of Information Act request filed in 2005, the government finally acknowledged the existence of Area 51 by declassifying a series of CIA documents that detailed the history of the U-2 and A-12 programs developed at the Nevada base. During the annual Kennedy Center Honors that December, Barack Obama became the first U.S. president to use the term "Area 51" in a public setting.
The base returned to headlines in summer 2019, when college student Matty Roberts launched the "Storm Area 51, They Can't Stop All of Us" Facebook event. Although some 2 million people confirmed they would attend, prompting local authorities to plan extra security measures, only a few thousand participants arrived at the nearby town of Rachel for the three-day event in September.
Although its cloak of invisibility has been lifted, Area 51 continues to function in much the same manner as before, with only a privileged few employees privy to the technological breakthroughs underway within its heavily guarded perimeter. And while reports of government investigations of unidentified aircraft occasionally push into the crowded news cycle, it may be decades — if ever — before any legitimate connections between this secretive facility and little green men from other worlds are revealed.
Why Are Pennsylvania, Massachusetts, Virginia, and Kentucky Called Commonwealths?
The United States is made up of 50 states, yet four of them — Pennsylvania, Massachusetts, Virginia, and Kentucky — are officially referred to as commonwealths. Their names are technically styled as the Commonwealth of Virginia, the Commonwealth of Massachusetts, etc. The term “commonwealth” is frequently associated with Great Britain and with the ongoing alignment of many countries that were formerly part of the British Empire. But in the U.S., which was also once under British rule, the term has a somewhat different meaning.
A commonwealth is defined as a political entity founded for the good of the people. The word dates back to the mid-15th century and was given weight by political philosophers such as John Locke and Thomas Hobbes throughout the 17th century. The term was also popularized during this period due to the execution of England’s King Charles I in 1649. After the king’s death, the country was declared a republic, and was known as the Commonwealth of England until the monarchy was restored in 1660.
This concept of a state existing for the benefit of its citizens — not for any one individual, such as a monarch — is the idea behind the use of the word in the United States. Its use dates back to colonial times and the revolutionary ideals of governance and political rhetoric that were paramount to the cause of American independence.
Virginia — the first British colony established in America — adopted the commonwealth designation for a short time during England’s Interregnum (“between reigns”) period and brought it back when the state adopted its own constitution in 1776. According to the Hornbook of Virginia History, the term was chosen “most likely to emphasize that Virginia’s new government was based upon the sovereignty of the people united for the common good.” Pennsylvania followed suit, officially affirming the authority of its citizens with a commonwealth designation in its 1776 constitution. The name fit well with Pennsylvania founder William Penn’s long-standing ethos of democratic governance and equality.
Massachusetts also sought a clear break from monarchical rule. When the state’s constitution was drafted in 1780, it adopted “the Commonwealth of Massachusetts” as the state’s official name and used the term to describe the state’s government. The Massachusetts Constitution was primarily authored by founding father John Adams, who was heavily influenced by Enlightenment ideals promoting the idea that governments exist to advance collective well-being.
Unlike the other three commonwealths, Kentucky was not one of the original 13 colonies — it was part of Virginia until it split off and became its own state in 1792. It wasn’t until 1891 and the fourth version of the state’s constitution that Kentucky styled itself a commonwealth. The change was likely influenced by its historical ties to Virginia as well as the impact of the Pennsylvania Constitution, which was seen as one of the most democratic of its time.
Calling a state a commonwealth made no functional or legal difference, however — the distinction was purely symbolic. As the Library of Congress puts it, Pennsylvania, Kentucky, Massachusetts, and Virginia are commonwealths “because their constitutional drafters declared they were.” The term has been applied to two U.S. territories, too: Puerto Rico and the Northern Mariana Islands are both classified as commonwealths; they’re not U.S. states, but not independent countries, either.
Beyond U.S. borders, many people still think of the term in tandem with the British monarchy. In fact, the Commonwealth of Nations, often simply referred to as “the Commonwealth,” is indeed made up of 56 countries that, for the most part, were once under British rule and today share a mutual interest in democratic values and economic cooperation. Member countries such as Canada, Australia, and India remain independent nations, but maintain symbolic ties to Britain, with King Charles III serving as the organization’s symbolic head. Unique as they may be, the American commonwealths were established based on these same values of democracy and common welfare.
Credit: PhotoQuest/ Archive Photos via Getty Images
By Tony Dunnell
June 24, 2024
In February 1929, the Italian dictator Benito Mussolini signed the Lateran Treaty, which took effect on June 7 of that year and established the independent state of Vatican City. With this act, the Holy See — the government of the Catholic Church, led by the pope — finally had an official home. The Vatican had existed as a place since the days of the ancient Roman Republic and had long served as a seat of the Catholic Church, but it was only in 1929 that its geographic and political boundaries were defined.
With the creation of Vatican City, a true geographical oddity was born. The most famous fact about the Vatican is likely its status as the world’s smallest fully independent nation-state. No country in the world comes close to matching the Vatican’s minuscule population, which stands at fewer than 800 people, nor its tiny size, with an area of just 121 acres (49 hectares) — about one-seventh the size of New York City’s Central Park. Yet within this small space sit some of the world’s most spectacular religious and cultural sites, including St. Peter’s Basilica, the Sistine Chapel, the Vatican Apostolic Library, and the extensive Vatican Museums. Here are some more fascinating facts about the Vatican, from the elite soldiers who guard the pope to a papal telescope in an unlikely locale.
The Vatican Is Protected by One of the World’s Oldest Military Units
The Pontifical Swiss Guard has protected the pope since 1506. Consisting of between 110 and 125 soldiers, it is often considered one of the smallest armies in the world. It is also one of the oldest military units in continuous operation, originating with the Swiss mercenaries recruited by the popes during the Italian Wars (1494 to 1559). Today, members of the Swiss Guard are some of the most famous and recognizable residents of the Vatican. In their distinct dress uniforms of blue, red, orange, and yellow, and often wielding halberds, they are an impressive sight. But they are not simply ceremonial. The Swiss Guard is an elite military corps, and competition for inclusion among its ranks is fierce. New recruits must be unmarried Roman Catholic males with Swiss citizenship, between 19 and 30 years old — and they must be both capable and willing to protect the pope with their lives.
The Pope Has a Hidden Escape Route Through the Vatican
The Vatican is connected to the Castel Sant’Angelo in Parco Adriano, Rome, by what looks like a walled fortification. Inside the wall, however, is an elevated passageway that stretches for about half a mile. Known as the Passetto di Borgo, it has served as the pope’s hidden escape route for hundreds of years (the current structure dates back to 1277). On at least two occasions, it has helped save the leader’s life. In 1494, Pope Alexander VI used the Passetto to escape to safety during the invasion of Charles VIII of France. Not long after, in 1527, Pope Clement VII fled through the passageway during the Sack of Rome, when forces of the Holy Roman Emperor Charles V rampaged through the city. The Swiss Guard fought bravely — and was ultimately massacred — while buying enough time for Pope Clement to escape to the safety of Castel Sant’Angelo.
Credit: Culture Club/ Hulton Fine Art Collection via Getty Images
The Vatican Museums Stretch for More Than 4 Miles
The Vatican is home to an immense museum complex called the Musei Vaticani. Founded in the early 16th century by Pope Julius II, the public museums have amassed a huge array of artifacts over the centuries, collected by subsequent popes. Taken in its entirety, the complex consists of 26 museums whose combined halls and galleries stretch for around 4.3 miles and contain some 70,000 exhibits. Arguably the most famous of its treasures is the Sistine Chapel, with its ceiling and altar wall decorated by Michelangelo. The chapel is the last room visited on a tour of the museum complex.
Credit: Print Collector/ Hulton Fine Art Collection via Getty Images
The Vatican Has a Zero Birth Rate
It’s hard to imagine a country having a birth rate of zero, but then the Vatican isn’t a regular country. First, there are very few women in the Vatican. Figures released in 2011 revealed that there were only 32 female citizens among the Vatican’s population, compared to 540 men (and one of the women was a nun). Further contributing to the zero birth rate is the fact that the Vatican has no hospital, meaning that births take place outside the city-state. Gaining citizenship, therefore, is not dependent upon being born in the Vatican, but is instead granted by the papal powers that be.
The Vatican Owns an Observatory in Tucson, Arizona
Religion and astronomy haven’t always been easy bedfellows (just ask Galileo), but the Vatican is nonetheless home to one of the oldest astronomical institutes in the world. Pope Leo XIII formally founded the Specola Vaticana (Vatican Observatory) in 1891, then located on a hillside behind the dome of St. Peter’s Basilica. But as Rome grew, increased smog and sky glow forced the Vatican to move its observatory just outside the city. By 1961, however, light pollution again hindered the functionality of the observatory. The Vatican therefore took a major step and opened a second research center in Tucson, Arizona. To this day, the Vatican Advanced Technology Telescope remains operational atop Mount Graham outside Tucson.
Credit: Print Collector/ Hulton Archive via Getty Images
By Mark DeJoy
May 22, 2024
The worldwide timekeeping convention, formally known as Coordinated Universal Time (UTC), is often still colloquially referred to by its historical name, Greenwich Mean Time. But how did Greenwich, a borough in southeast London, become the reference point for timekeeping all around the world? To answer that, we have to go back to the 17th century, during Europe’s age of exploration.
In 1674, King Charles II of England assembled a Royal Commission to study the possibility of creating a more precise measure of longitude in order to improve ship navigation. The greater oceanic distances being traveled by trade ships meant that any inaccuracies were magnified, causing shipwrecks and other maritime disasters; an improved reference for longitude would enable better course-plotting. The commission concluded that accurately determining star positions (as reference points) would be an essential part of the calculation, and recommended establishing an astronomical observatory. In response, Charles II appointed astronomer John Flamsteed as Britain’s first Astronomer Royal in 1675. Meanwhile, Royal Commission architect and astronomer Christopher Wren chose the ruins of Greenwich Castle as the site for the observatory. This was due to its location on high ground in a royal park, as well as the presence of the castle’s foundation, which could be repurposed for the observatory. Construction was completed in about a year, and Flamsteed began his first observations in 1676.
Flamsteed charted stars from Greenwich Observatory until his death in 1719. The catalog of nearly 3,000 stars he observed was published posthumously, first as the three-volume Historia Coelestis Britannica and then as the Atlas Coelestis, the largest and most accurate star atlas of its time. Flamsteed’s work was expanded on by Astronomer Royal Nevil Maskelyne, who published the Nautical Almanac and Astronomical Ephemeris in 1766. These works contained the astronomical data needed to calculate longitude, with the help of celestial navigation tools such as the sextant and chronometer.
Using the sextant, which measures the angular distance between objects (such as the horizon and the sun), a navigator could determine the exact moment the sun reached its highest point: local noon. The chronometer, a mechanical timepiece set to the time at the Greenwich Observatory, was then read at precisely that moment. The difference between Greenwich time on the chronometer and local noon translated to longitude: An hour of time difference was equal to 15 degrees of longitude (since the Earth rotates 360 degrees in 24 hours, or 15 degrees per hour). The lines of longitude were designated as east or west depending on whether the time at Greenwich was earlier or later than local time, respectively. This was the origin of Greenwich as a reference for timekeeping — at least, for British navigators.
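To make that arithmetic concrete, here is a minimal sketch of the calculation in Python; the function name and the decimal-hours convention are illustrative assumptions, not a historical formula.

```python
def longitude_from_chronometer(greenwich_hours: float) -> float:
    """Estimate longitude from the chronometer's Greenwich reading at local noon.

    greenwich_hours: the time at Greenwich, in decimal hours, shown on the
    chronometer at the moment the sextant puts the sun at its highest point.
    Returns degrees of longitude: positive is east of Greenwich, negative is west.
    """
    LOCAL_NOON = 12.0        # the observed solar event, by definition
    DEGREES_PER_HOUR = 15.0  # Earth rotates 360 degrees in 24 hours

    # Greenwich later than local noon means the ship is west; earlier means east.
    return (LOCAL_NOON - greenwich_hours) * DEGREES_PER_HOUR


# A ship observes local noon while the chronometer reads 15:30 at Greenwich:
print(longitude_from_chronometer(15.5))  # -52.5, i.e., 52.5 degrees west
```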
Unlike latitude, for which the equator is Earth’s natural midpoint, the pole-to-pole vertical meridians have no such natural reference point. As a result, the location of 0 degrees longitude — known as the prime meridian — is arbitrary, and before a universal standard was established, many countries considered the longitudinal line running through their most significant port city as the prime meridian. France set the longitude at Paris as the prime meridian; in Spain, it was at Cádiz; in Italy, it was at Naples. In the 19th century, there were no fewer than 11 different prime meridians chosen by each of the world’s biggest shipping nations. Since time and longitude are so integral to each other, this also meant that there was no standardized timekeeping between nations.
The lack of an international time standard mattered little in the slow-moving era of seafaring, but the development of fast-moving transcontinental railroads in the mid-19th century changed that. A transcontinental train would cross several different, irregular time standards in relatively short order, especially in Europe, meaning the crew had to adjust the clocks at each national border crossing. The longer the trip, the more of these irregular time changes had to be made, increasing the chance of scheduling errors and accidents. A solution was critical.
In 1884, U.S. President Chester A. Arthur convened the International Meridian Conference. Delegates from 21 nations met in Washington, D.C., to determine a globally recognized location for the prime meridian — and with it, a worldwide standard for time zones. Delegates from France argued that the meridian should not be located in either Europe or the U.S., so that it would have “a character of neutrality.” But support for the meridian to be located in Greenwich, England, was overwhelming, because a vast majority of the world’s shipping was already dependent on nautical maps that used Greenwich as 0 degrees longitude. The vote for Greenwich was nearly unanimous; only San Domingo voted against, with France abstaining.
A time zone standard was set with a one-hour time change every 15 degrees of longitude, with Greenwich Mean Time as the reference point. Under the standard, every 15 degrees east of the prime meridian puts local time an hour ahead of the time at Greenwich, and every 15 degrees west puts it an hour behind. However, though the decision was made, it wasn’t actually binding. It took more than 20 years for Greece, Holland, Portugal, Russia, and Turkey to implement the decision on the location of the prime meridian and the corresponding global time zone standard. France, proud of its own astronomical heritage, continued to use the Paris Observatory as the prime meridian, and did not adjust its time relative to Greenwich Mean Time until 1911.
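As a rough illustration of that standard, the nominal offset for any longitude falls out of simple division. The sketch below (Python, purely illustrative) ignores the political boundaries that real time zones follow:

```python
def nominal_utc_offset(longitude_degrees: float) -> int:
    """Nominal time zone offset, in hours, for a given longitude.

    Positive longitudes (east of Greenwich) are ahead of Greenwich time;
    negative longitudes (west) are behind. Real zones deviate from this
    ideal to follow national and regional borders.
    """
    return round(longitude_degrees / 15.0)


print(nominal_utc_offset(-74.0))  # New York: -5
print(nominal_utc_offset(139.7))  # Tokyo: +9
print(nominal_utc_offset(2.35))   # Paris: 0, inside Greenwich's 15-degree band
```

The Paris result hints at why France’s holdout mattered so little in practice: the city sits barely 2 degrees east of the Greenwich meridian, a difference of only about nine minutes of clock time.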
In 1972, Coordinated Universal Time (UTC) replaced Greenwich Mean Time (GMT) as the global standard for timekeeping. The reference point for UTC remains 0 degrees longitude, but thanks to technological advances such as GPS, a more precise location for 0 degrees longitude was determined and agreed upon in 1984. The current prime meridian is known as the International Reference Meridian. It’s located only about 334 feet east of the previous prime meridian — still in Greenwich, England.
Since the middle of the 20th century, Las Vegas has been known as the capital of the American id. Gambling has long been at the center of its appeal, as nicknames such as “Sin City” and “Lost Wages” suggest. “What happens in Vegas stays in Vegas” is the city’s well-known slogan, while others have remarked, “Las Vegas is where losers come to win, and winners come to lose.”
Rising up from the Nevada desert, the city’s built environment is so extravagant that it’s difficult to imagine a time when its spectacle did not exist, fully formed. Let’s go back and trace the origins of this uniquely American city.
Even though Las Vegas occupies a unique place in American culture, its metropolitan origin was sparked by the same thing that gave rise to many other U.S. cities: the development of the railroad. The area that includes present-day Nevada became a United States territory with the signing of the Treaty of Guadalupe Hidalgo in 1848, which ended the U.S. war with Mexico. Despite its location in the basin of the Mojave Desert, the site of what is now Las Vegas was a sort of oasis — a valley that included a water source in the form of artesian springs.
The water source was the selling point for railroad magnate and U.S. Senator William Clark. In 1902, he bought 2,000 acres of land and water rights in order to create a waypoint for the San Pedro, Los Angeles & Salt Lake Railroad, which he incorporated to connect those cities. Construction of the railroad line through Nevada began in 1904, and the following year, Clark auctioned off parcels of his land, which was located east of the railroad tracks.
Around the same time, civil engineer John T. McWilliams was attempting to build a township west of the railroad tracks. Though he was working with far less acreage than Clark — 80 acres to Clark’s 2,000 — the development provoked competition and intensified Clark’s efforts to build his township. Clark offered refunds on the $16 train fare to town in order to attract buyers. Newspaper advertisements promised, “Get into line early. Buy now, double your money in 60 days,” though accounts differ on which of the two men commissioned the ad.
Ultimately, McWilliams couldn’t really compete. After all, Clark owned the water rights and far more land, and he had a major stake in the railroad. On September 5, 1905, a fire almost completely consumed McWilliams’ townsite, ensuring that the competition between the two was short-lived; development would be concentrated east of the railroad tracks. Clark formed the Las Vegas Land & Water Company with his partners, and vowed, “I will leave no stone unturned and spare myself no personal effort to do all that lies within my power to foster and encourage the growth and development of Las Vegas.”
Clark’s dramatic statement might sound like a natural lead-up to building the bombastic city we know today. But that’s not quite what happened. Over the next 25 years, Las Vegas settled into an existence as a quasi company town, with railroad and mining as the main industries and a population of about 2,300. Clark sold his share of the railroad to Union Pacific in 1921, living in retirement for four more years until his death at age 86.
The 1920s were a tumultuous decade for Las Vegas nearly from the outset. In 1921, Union Pacific cut 60 jobs in the wake of its acquisition of the railroad. President Warren G. Harding’s incoming administration also meant new appointments to the Railroad Labor Board, and the board approved a series of wage cuts for railroad workers. In the meantime, a post-World War I downturn in mining further impacted Las Vegas. Then, in what is largely viewed as a retaliatory move, Union Pacific moved its repair shops out of Las Vegas and to Caliente, Nevada, costing hundreds of jobs.
With a dire economic outlook impacting the entire state, Nevada revisited the legalization of gambling, which had been legal in the state from 1869 until 1910. With greater public support for relegalizing gambling than previous efforts had enjoyed, a bill to legalize “wide open” gambling passed both the state Assembly and Senate, and on March 19, 1931, Governor Fred Balzar signed it into law. That same year, divorce laws were loosened to permit anyone with a six-week residency in the state to legally divorce. And just one year earlier, construction had begun on the Hoover Dam, bringing an influx of thousands of workers to the area, many of whom would take the short trip to Las Vegas to try their luck with the newly legalized games. With this confluence of events, the Las Vegas we know today began to take shape.
Credit: Gene Lester/ Hulton Archive via Getty Images
Organized Crime and the Strip
The decriminalization of gambling made Las Vegas an attractive destination for experienced gambling operators, some of whom were running criminal enterprises in other states. One such figure was the archetypal crooked cop Guy McAfee, a Los Angeles vice squad officer who fled to Las Vegas to escape prosecution for running gambling and prostitution rings — the exact vices he was supposed to be policing. Arriving in town in 1938, he bought the Pair-O-Dice Club on Highway 91 and renamed it the 91 Club, delaying its grand opening to 1939 in order to coincide with Ria Langham’s six-week residency for divorcing Clark Gable.
McAfee was responsible for two enduring pieces of Las Vegas culture: He opened the Golden Nugget on Fremont Street, ushering in an era of grandiose casinos, and he is also credited with nicknaming Highway 91 “the Strip.” The Golden Nugget opened in 1946, about a year after the Nevada Legislature created the first casino license stipulating a 1% tax rate on gross gaming revenue in excess of $3,000.
The lucrative gaming industry began to attract heavier organized crime players beyond McAfee. Benjamin “Bugsy” Siegel arrived in Las Vegas intending to create a base of operations for the notorious Syndicate, which at the time was led by Meyer Lansky while Salvatore “Lucky” Luciano was in prison. Using funds from the Syndicate, Siegel became the primary stakeholder in the construction of a casino on Highway 91 to rival the Golden Nugget. Siegel wanted it to depart from the Old West aesthetic of most casinos of the time, and instead be patterned after the tropical resorts the Syndicate backed in Havana, Cuba. He dubbed it the Flamingo, and hoped to set a new standard for opulence in line with his own worldview. “Class, that’s the only thing that counts in life,” he once said. “Without class and style, a man’s a bum, he might as well be dead.”
Lavish attention to detail and poor business management contributed to enormous cost overruns, and bad luck compromised the Flamingo’s opening and its ability to quickly recoup costs. Perhaps because of the money, perhaps for any of the other possible motives still debated to this day, Bugsy Siegel was gunned down while reading the newspaper in a Beverly Hills mansion on June 21, 1947. The murder was a national sensation, covered in tabloids and TIME magazine alike. LIFE magazine ran a gruesomely iconic full-page photo of the crime scene in its article about the murder. The case, Crime Case #46176 in the Beverly Hills Police Department, remains open and unsolved.
Tellingly, within minutes of Siegel’s murder, other Syndicate bosses took over the Flamingo. The resort eventually became profitable — so much so that the Syndicate began building more casino-resorts on the Strip. Organized crime had taken hold in Las Vegas, and the era of the swanky, entertainment-oriented hotel-casino was born. The mob invested in more casinos; the Sands Hotel and Casino opened in 1952 and brought in the “Rat Pack” (Frank Sinatra, Dean Martin, Sammy Davis Jr., Joey Bishop, and Peter Lawford) for a high-profile residency. The Dunes, Riviera, and New Frontier opened in 1955; the Tropicana followed in 1957, and the Stardust opened a year later. Each had ties to organized crime syndicates from around the country.
Despite the sensational murder of Bugsy Siegel, the mob’s involvement in casinos, hotels, restaurants, and other Vegas businesses expanded, and more gangsters arrived in the city throughout the 1960s and ’70s. But in the late ’60s, billionaire Howard Hughes bought a series of mob-connected casinos — the Desert Inn, the Sands, Castaways, Frontier, the Silver Slipper, and Landmark — that shifted the balance of casino ownership in the city from mob-connected to corporate-owned. In 1969, the Nevada Legislature promoted corporate ownership of casinos, and in 1970, Congress passed the Racketeer Influenced and Corrupt Organizations Act (commonly known as RICO), which aided the U.S. Justice Department in cracking down on organized crime.
During the ’70s, high-profile car and restaurant bombings between rival gangs unsettled the city to the point of attracting the attention of the FBI. The Nevada Gaming Commission and the Nevada Gaming Control Board refocused on organized crime, and Governor Mike O’Callaghan made it a point of emphasis. A RICO case targeted mobster Anthony Spilotro and Frank “Lefty” Rosenthal, whose connections ran from Chicago mob families to others throughout the Midwest. By 1981, Spilotro’s operations had been broken up, and the mob was all but finished in Las Vegas.
Credit: Frank Edwards/ Archive Photos via Getty Images
The Rise of the Corporate Mega-Resort
Billionaire businessman Kirk Kerkorian bought the Flamingo in 1967, and in 1969, he opened the massive International Hotel. It was the largest hotel in the country, with 1,500 rooms and a 4,200-seat showroom. For its grand opening, he brought in Barbra Streisand, and then followed that by bringing in Elvis Presley for a famed residency — 837 consecutive sold-out performances over seven years — that set an enduring record. The same year, Kerkorian bought Hollywood’s venerable MGM Studio, and set out to build a themed resort in Las Vegas based on the production house.
With all of the buying and building, Kerkorian incurred enormous costs, so to help balance the ledger, he sold the Flamingo (and later the International Hotel as well) to the Hilton Hotel Corporation. The success of the Flamingo Hilton caught the attention of other major hotel corporations, such as Sheraton and Holiday Inn, and they too began opening casino-hotels in the city. In 1973, Kerkorian opened the original MGM Grand Hotel-Casino, which eclipsed the International Hotel in grandeur, boasting 2,100 rooms, eight restaurants, two showrooms, and the (at the time) world’s largest casino. It was the largest resort in the world, and Las Vegas’ first mega-resort.
During the rest of the ’70s and into the ’80s, development on the Strip stagnated. But Las Vegas itself was growing: From 1985 to 1995, the city’s population nearly doubled, increasing to around 368,360. Using junk bonds in 1989, developer Steve Wynn reinvigorated the Strip by building the most ostentatious mega-resort yet: the Mirage Resort and Casino. A 29-story Y-shaped tower with 3,044 rooms, a 1,500-seat showroom, and waterfalls, it also had a simulated volcano that would “erupt” every 15 minutes after sundown. That same year, Kerkorian announced plans for a new MGM Grand, which opened in 1993 and took the mantle as Las Vegas’ largest casino, with even more over-the-top touches, including a lion zoo and a heavyweight boxing arena.
Credit: George Rose/ Getty Images News via Getty Images
An Entertainment Capital
The 1990s were a transitional era in Vegas, as many of the midcentury casino icons were razed in favor of new family-friendly mega-resorts, representing a commitment to broader entertainment tourism rather than gambling alone. The Sands was imploded and replaced by the Venetian; similarly, the Dunes was replaced by the Bellagio, and the Hacienda was replaced by Mandalay Bay Resort. In true Las Vegas fashion, each implosion was a spectator event. The Hacienda implosion was even scheduled for 9 p.m. on December 31, 1996, in order to coincide with the new year on the East Coast. Most of the casino implosions were televised, and the videos can still be viewed on local TV news channel websites.
Today, Las Vegas continues to broaden its scope. Professional sports leagues have ended their historical aversion to placing teams in the city, as seen in the NHL awarding the expansion Vegas Golden Knights in 2017, the WNBA’s San Antonio Stars relocating to Las Vegas and becoming the Aces in 2018, and the NFL’s iconic Raiders franchise relocating to Las Vegas in 2020. Major League Baseball’s Athletics are likely to follow. Las Vegas is now also known as a city with an excellent fine-dining scene, with a number of chefs named semifinalists for the 2024 James Beard Awards. And the only place in town the mob exists now is in a museum.
What 6 Major State Capitals Looked Like 100 Years Ago
One hundred years is a long time in the life of a city. New technologies emerge and wane, people come and go, cultural factors ebb and flow. But not all cities change at the same rate; some stay comparatively similar to their older incarnations, while others become drastically different. Here’s a glimpse at what a few iconic state capitals looked like a century ago.
Credit: Buyenlarge/ Archive Photos via Getty Images
Atlanta, Georgia
Atlanta was named after the Western and Atlantic Railroad, for which it was a terminus. In the early 20th century, the city was well established as a major railway hub, and the downtown was built around its first train station. Hotels were concentrated in an area near the station (called, fittingly, Hotel Row) in order to serve train travelers, and by the 1920s, masonry high-rises created the city’s skyline.
Like many cities during this period, Atlanta was beginning to expand its roads in order to accommodate increasing numbers of cars. In the 1920s, the city built three major viaducts to allow traffic to bypass the high number of railroad crossings. The Central Avenue, Pryor Street, and Spring Street (later renamed Ted Turner Drive) viaducts not only improved vehicle safety, but also led to development outside the city’s downtown core.
Boston, Massachusetts
Though Boston was established as a colonial port city as early as 1630, a wave of immigration between 1880 and 1921 fueled a population boom and a sense of transition similar to what many younger cities were facing at the time. The expanding population drove a building boom, and changes wrought by the Industrial Revolution were at the forefront. The industrialization of nearby Springfield, Massachusetts, led to a high population of mechanics and engineers in that city, and it became a hub for the nascent automotive industry. Rolls-Royce selected Springfield as the site of its U.S. factory, and many other early auto manufacturers were based in the area. In fact, Massachusetts claimed to have manufactured more cars at the beginning of the 20th century than Detroit, Michigan. Cars were particularly popular in Boston — more so than in many other cities — and 1 in 8 Bostonians owned a car by 1913. This led to the construction of a large number of buildings dedicated to automobiles, including garages, repair shops, car dealerships, and more.
In terms of architecture, the city’s affluent Beacon Hill neighborhood appears very similar today to how it looked in the 1920s, with well-preserved colonial-style and Victorian buildings. However, little remains of Boston’s once-abundant theater district, which reached a peak count of 40 theaters by 1935.
Nashville, Tennessee
Nashville has a storied history as a center of American popular music, but that history was in its very infancy 100 years ago. The famous Grand Ole Opry didn’t begin until the mid-1920s, first broadcasting as “the WSM Barn Dance,” and at the time, it was hardly the institution it would later become. In those days, it was purely a radio show broadcast from the WSM studio on the fifth floor of the National Life and Accident Insurance building, with only as many spectators as could fit in the limited confines of the station’s Studio A.
Unlike other major capitals, Nashville wasn’t a city of high-rises — the 12-story Stahlman Building was the tallest building from its completion in 1908 until the L&C Tower was built in the 1950s — and many of the low-rise brick and masonry buildings from the last century are preserved today. This is particularly true along the First Avenue front of the Cumberland River, and along Second Avenue, formerly known as Market Street.
Austin, Texas
Though Austin’s population began steadily growing around the end of the 19th century, in 1920 it was only the 10th-largest city in Texas, with a population just under 35,000. Its visual focal point was the Texas State Capitol Building (the UT Tower didn’t exist yet), and the surrounding downtown consisted of low- and mid-rise buildings with brick or stone facades — an aesthetic that was more “Main Street” than “metropolis.” Cars weren’t quite as dominant in Austin as in larger cities of the time, and horse-drawn carriages were still seen on the streets.
Phoenix, Arizona
Phoenix is another city that had a relatively small population in 1920 — just around 29,000 — but it was still the largest city in a state that had existed for only eight years. Because of this, Phoenix had the flashiness and bustle of an up-and-coming city, despite its small size. The city’s first skyscraper, the Heard Building, was even the site of a stunt climbing performance shortly after it was built. At seven stories, however, it might not have qualified as a skyscraper in larger cities. The 10-story Luhrs Building surpassed it in height when it opened in 1924, and the 16-story Hotel Westward Ho became the city’s tallest building in 1928. It held that title for more than 30 years, as the vast availability of land surrounding Phoenix disincentivized vertical construction in favor of outward expansion.
Sacramento, California
Sacramento is often overshadowed by other iconic California cities, but 100 years ago it boasted a downtown of ornate classical architecture, was home to the largest manufacturing train yard in the western United States, and served as a major retail hub for the region. Vital downtown structures of the time — such as Sacramento City Hall, Memorial Auditorium, the California State Life Building, and the Federal Building — were all built during a construction boom that occurred between 1912 and 1932. But there isn’t much evidence of this architectural period today, as even some surviving buildings, such as Odd Fellows Hall, have been remodeled with simpler midcentury-style facades.
In 1903, a Vermont doctor named Horatio Nelson Jackson drove from San Francisco to New York in a Winton touring car and became the first person to traverse the United States in an automobile. At the time, there were no more than 150 miles of paved road in the country, mostly concentrated within cities. The path that Jackson traveled was along rivers, mountain passes, flatlands, and the Union Pacific Railroad, and what roads he did encounter between cities were, in his description, “a compound of ruts, bumps, and ‘thank you m’ams’ [sic].” The trip took 63 days, 12 hours, and 30 minutes, but it inspired auto companies and other early car adopters to arrange trips of their own, sparking demand for long-distance highways.
The first automobile highways weren’t construction projects at all; known as “auto trails,” they were essentially suggested routes made up of existing thoroughfares, conceived by private associations and codified with names such as Lincoln Highway, Victory Highway, National Old Trails Road, and so on. The associations marked the trails with signs or logos and promoted the improvement of the routes, sometimes collecting dues from towns and businesses. Eventually, the U.S. government grew wary of the patchwork system and proposed the construction of a paved, nationalized, numbered highway system. The proposal was adopted on November 11, 1926.
The numbered highways were a marked improvement over the auto trails, but nearly 30 years after their adoption, Congress approved the Federal-Aid Highway Act of 1956, revolutionizing the highway system by building 41,000 miles of interstate roads. The interstates repurposed existing numbered highways, connecting and extending them for greater efficiency, and to this day they remain the country’s main arteries for long-distance auto travel. Let’s look at when some of the country’s biggest and most vital interstates were built.
I-70 is arguably the oldest interstate in the U.S. Among the interstate projects initiated by the Federal-Aid Highway Act of 1956, I-70 was the first both by date of initial construction (August 13, 1956, in St. Charles County, Missouri) and initial paving (September 26, 1956, just west of Topeka, Kansas). The highway runs through 10 states as it spans the center of the country west from Baltimore, connecting Pittsburgh, Columbus, Indianapolis, St. Louis, Kansas City, and Denver. I-70 also includes the highest vehicular tunnel in the world, the Eisenhower-Johnson Memorial Tunnel near Denver, at an elevation of about 11,112 feet. The most recently completed segment of the highway is the 12.5-mile stretch through the narrow Glenwood Canyon, finished in 1992.
Interstate 70 may have been the first of the federal interstates to begin construction, but Interstate 80 likely has the oldest antecedents: it approximates the route of Nebraska’s Mormon Trail (aka the Great Platte River Road), dating back to the 1840s, as well as parts of the Lincoln Highway auto trail of the late 1910s to mid-1920s. Its transcontinental span runs through 11 states. Construction of the modern-day I-80 began in Nebraska in 1957 and in Pennsylvania in 1958 (though the Delaware Water Gap Toll Bridge that later became part of I-80 opened on December 16, 1953). A final 5-mile connecting segment was completed near Salt Lake City on August 17, 1986.
Interstate 90 is another federal interstate that traces its origins to an antecedent auto trail: the Yellowstone Trail, founded in 1912, which ran from Plymouth, Massachusetts, to Seattle, Washington. The first segment of newly constructed road for I-90 opened in Spokane, Washington, in November 1956. I-90 has the distinction of being the longest interstate, at 3,085 miles, crossing 13 states. The last link to its western terminus in Seattle was completed in 1993.
Route 66 was perhaps the most famous highway in the United States during the first half of the 20th century, inspiring a song and even a TV show. Interstate 40 is the longest of the five federal interstates that gradually replaced it, and it was I-40’s bypassing of Route 66’s final segment in 1984 that led to the iconic highway being decommissioned the following year. Construction of I-40 began in 1957 in North Carolina. Though the interstate stretches more than 2,500 miles between its eastern and western ends, its final segment was completed in 1990 in Wilmington, North Carolina, just 220 miles from the site of its first completed segment in Kernersville, North Carolina.
Interstate 10 is the transcontinental highway with the southernmost span, running through eight states along the southern edge of the country. Similar to I-40, it served as a replacement for Route 66, primarily along the stretch between California and Arizona. Exact details about I-10’s first stretch of new construction are sparse, but it most likely took place in El Paso in 1960. The Papago Freeway Tunnel completed I-10’s final segment when it opened in August 1990.
Interstate 95’s 1,920-mile span from Houlton, Maine, to Miami, Florida, makes it the longest north-south interstate in the country. It crosses 15 states and Washington, D.C. (the most of any interstate), and it also established the country’s first bus/carpool lanes in 1969. Since the route traverses more densely populated cities than any other interstate, its construction was often contentious, particularly in Philadelphia. The first new construction for I-95 began in the summer of 1956 in Richmond, Virginia, though the Connecticut Turnpike was the first stretch of I-95 to open. The final stretch, a long-unresolved gap at the Pennsylvania-New Jersey border, was completed in the summer of 2018, an event that also marked a larger milestone: the completion of the original federal interstate system first planned in 1956.
While some modern countries are little more than a decade old, others boast a rich history dating back thousands of years. Long before nations such as Iran and Egypt became the independent states we know them as today, early governments were formed by ancient civilizations in those regions, laying the foundation for thousands of years of expansion and development.
It can be a challenge to determine the exact age of any given country, but based on the current archaeological data, there are several nations in the Middle East and Asia that consistently rank among the oldest in human history. Here are five facts about some of the world’s oldest countries.
The First Architect Known by Name Lived in Ancient Egypt
Though the Great Pyramids of Giza are the most famous ancient Egyptian landmarks, the region is home to an even older structure. The Pyramid of Djoser, built in the mid-27th century BCE, predates the Great Pyramids by roughly a century and was designed by a man named Imhotep, who is considered one of human civilization’s first architects. Imhotep not only conceived of this groundbreaking pyramidal structure, but is also credited with the first use of columns and with revolutionizing the use of stone in building construction. He also made vast contributions to the world of medicine, writing texts that describe the early diagnosis and treatment of many ailments. In 525 BCE, centuries after his death, Imhotep was even elevated to the status of a full deity, venerated as the Egyptian god of science, medicine, and architecture.
Two Vietnamese Sisters Led a Successful Revolt Against China
According to Vietnamese legend, the origins of Vietnam date back to around the year 2879 BCE, which marked the beginning of the Hồng Bàng dynasty, the first recorded dynasty in the nation’s history. For millennia, the Vietnamese people ruled over their own territory, until China’s Han dynasty invaded in 111 BCE. After a century of Chinese control, two women rose up against the invaders, earning the status of national heroes in the process. The Trưng sisters, Trưng Trắc and Trưng Nhị, mobilized locals to avenge the death of Trưng Trắc’s husband, who had been executed by Chinese forces without trial. This newly formed army consisted of around 80,000 soldiers and 36 female generals. The forces rebelled against the Chinese in the year 39 CE, successfully driving the invaders out of the country. Though the sisters’ reign over the region was brief, as China recaptured the territory in 43 CE, the legend of their exploits and tragic fate only grew from there. Temples were dedicated in their honor throughout Vietnam, and people prayed to them for rain in times of drought. They remain important figures in Vietnamese history two millennia later.
Photo credit: Print Collector/ Hulton Archive via Getty Images
Armenia Was the First Country to Adopt Christianity as an Official Religion
Though modern-day Armenia did not achieve lasting independence until 1991, the country’s origins date back to around the year 2492 BCE, according to Armenian mythology. In that year, an ancient Armenian warrior known as Hayk is believed to have defeated invading forces led by a Mesopotamian ruler called Bel, bringing the lands that now encompass modern-day Armenia under Hayk’s dominion. Many centuries later, Armenia made history by becoming the first country to adopt Christianity as its official state religion. Around the year 300 CE, a missionary known as St. Gregory the Illuminator converted King Tiridates III of Armenia to Christianity, and the faith was made the official state religion in 301 CE. The newly formed Armenian Apostolic Church supplanted the pagan beliefs that once prevailed throughout the region, and it eventually became the country’s national church.
The First Recorded War Took Place in Present-Day Iran
The ancient kingdom of Elam, located in the southern region of modern-day Iran, contained settlements dating as far back as 7200 BCE. Millennia later, around the year 3200 BCE, the Proto-Elamite period began, marking the start of organized civilization throughout the region. Few specifics are known about these early societies, but the region is known to have been the site of the first recorded war in human history. Around 2700 BCE, the Sumerian king Enmebaragesi led an attack against the Elamites, ultimately emerging victorious. Though there may have been conflicts before this, the battle, for which details are sparse, marks the earliest recorded account of a long-distance military campaign between opposing independent states.
The Oldest Surviving Anatomical Atlas Originated in China
In 1973, a seminal 2,200-year-old atlas of human anatomy, written on silk and dating to the Han dynasty (206 BCE to 220 CE), was discovered in south-central China, making it the oldest surviving anatomical atlas ever found. Known as the Mawangdui medical manuscripts, these texts describe various “meridians” found throughout the human body, a term used to refer to blood vessels and other internal structures. The Mawangdui texts also predate many other ancient Chinese texts related to acupuncture, suggesting that these early anatomical findings may have heavily influenced the practice of acupuncture in the region. The texts were uncovered in the tomb of a Han dynasty aristocrat named Xin Zhui (also known as Lady Dai), who was buried alongside copies of them in 168 BCE.