Why the Quran Was a Bestseller Among Christians in 18th Century America


Islam has existed in North America for hundreds of years, ever since enslaved people captured in Africa brought their religion over. In the 1700s, an English translation of the Quran (or Koran) actually became a bestseller among Protestants in England and its American colonies. One of its readers was Thomas Jefferson.

Jefferson’s personal copy of the Quran drew attention in early 2019 when Rashida Tlaib, one of the first two Muslim women elected to Congress, announced she’d use it during her swearing-in ceremony (she later decided to use her own). It’s not the first time a member of Congress has been sworn in with the centuries-old Quran—Keith Ellison, the first Muslim Congressman, did so in 2007—yet its use highlights the long and complicated history of Islam in the U.S.

“The Quran gained a popular readership among Protestants both in England and in North America largely out of curiosity,” says Denise A. Spellberg, a history professor at the University of Texas at Austin and author of Thomas Jefferson’s Qur'an: Islam and the Founders. “But also because people thought of the book as a book of law and a way to understand Muslims with whom they were interacting already pretty consistently, in the Ottoman Empire and in North Africa.”

When Jefferson bought his Quran as a law student in 1765, it was probably because of his interest in understanding Ottoman law. It may have also influenced his original intention for the Virginia Statute of Religious Freedom to protect the right to worship for “the Jew and the Gentile, the Christian and Mahometan, the Hindoo, and infidel of every denomination,” as he wrote in his autobiography.

This professed religious tolerance was probably mostly theoretical for Jefferson. At the time, he and many other people of European descent likely weren’t aware of how far Islam extended into parts of Africa not controlled by the Ottoman Empire, which means that, ironically, they might not have realized that many enslaved people in North America held the very faith they were studying.

Jefferson’s Quran was a 1734 translation by a British lawyer named George Sale. It was the first direct translation of the Quran from Arabic to English (the only other English version was a translation of a French translation published in 1649), and would remain the definitive English translation of the Quran into the late 1800s. In his introduction, Sale wrote that the purpose of the book was to help Protestants understand the Quran so that they could argue against it.

“Whatever use an impartial version of the Korân may be of in other respects,” he wrote, “it is absolutely necessary to undeceive those who, from the ignorant or unfair translations which have appeared, have entertained too favorable an opinion of the original, and also to enable us effectually to expose the imposture.”

Yet although Sale’s translation was theoretically a tool for missionary conversion, that wasn’t what English-speakers in Britain and North America used it for in Jefferson’s day. Protestants didn’t start traveling to Africa and the Middle East with the explicit purpose of converting Muslims until the late 19th century, Spellberg says.

“It’s true that George Sale, who did the first translation directly from Arabic to English, was sponsored by an Anglican missionary society,” she says. But its appeal went beyond its value as a missionary tool. Christians in the 18th century understood the value of learning about Islam. “The version that Thomas Jefferson bought was really a bestseller”—even with Sale’s 200-page introduction.

Given its history, Tlaib and Ellison’s choice to use Jefferson’s Quran in their private swearing-in ceremonies carries a particular significance. “By using Jefferson’s Quran, they’re affirming the fact that Islam has a long history in the United States, and is in fact an American religion,” Spellberg says.


Religion of black Americans

Religion of black Americans refers to the religious and spiritual practices of African Americans. Historians generally agree that the religious life of black Americans "forms the foundation of their community life." [1] Before 1775 there was scattered evidence of organized religion among black people in the Thirteen Colonies. The Methodist and Baptist churches became much more active in the 1780s. Their growth was quite rapid for the next 150 years, until their membership included the majority of black Americans.

After Emancipation in 1863, Freedmen organized their own churches, chiefly Baptist, followed by Methodists. Other Protestant denominations, and the Catholic Church, played smaller roles. The Wesleyan-Holiness movement, which emerged within Methodism in the 19th century, was important, as was Holiness Pentecostalism in the 20th century, and later the Jehovah's Witnesses. The Nation of Islam and el-Hajj Malik el-Shabazz (also known as Malcolm X) added a Muslim factor in the 20th century. Powerful pastors often played prominent roles in politics, often through their leadership in the American civil rights movement, as typified by Martin Luther King Jr., Jesse Jackson and Al Sharpton.


Why Thomas Jefferson Owned a Qur’an

Two hundred and three years ago this month, President James Madison approved the act of Congress purchasing Thomas Jefferson’s private library. Intended to restock the Library of Congress after its previous holdings were destroyed by British arson during the War of 1812, the transfer of books from Monticello to Washington also highlights a forgotten aspect of religious diversity in early America.

Among the 6,487 books that soon traveled north, Jefferson’s 1734 edition of the Qur’an is perhaps the most surprising.

Historians have attributed the third president’s ownership of the Muslim holy book to his curiosity about a variety of religious perspectives. It’s appropriate to view it that way. Jefferson bought this book while he was a young man studying law, and he may have read it in part to better understand Islam’s influence on some of the world’s legal systems.

But that obscures a crucial fact: To many living in Jefferson’s young nation, this book meant much more. Some scholars estimate 20 percent of the enslaved men and women brought to the Americas were Muslims. While today these American followers of the Prophet Muhammad have been largely forgotten, the presence of Islam in the United States was not unknown among the nation’s citizens in the 18th and 19th centuries. Often practiced in secret, reluctantly abandoned, or blended with other traditions, these early expressions of the faith ultimately did not survive slavery. But the mere existence of Islam in the early republic is evidence that religious diversity in this country has a deeper and more complex history than many now know.

Not long before Jefferson's Qur'an rolled north with the rest of his library in 1815, another American attempted to write his own Islamic sacred text, albeit in a form that could not be so easily transported or understood. He wrote his in Arabic on a jail cell wall.

Slave traders captured Omar ibn Said in what is now Senegal and brought him to Charleston, South Carolina, in 1807. He was sold to a man that Said would describe as cruel and a kafir, or infidel. A devout Muslim when he arrived in the United States, Said strived during his enslavement first to maintain his faith, and then to transform it. His story has earned a place in history—as well as in the “Religion in Early America” exhibition, currently on view at the National Museum of American History, and on the Smithsonian Institution’s latest Sidedoor podcast.

Following an attempt to escape from slavery in 1810, Omar ibn Said was arrested in Fayetteville, North Carolina.

Slave traders captured Omar ibn Said in what is now Senegal and brought him to Charleston, South Carolina, in 1807. (Beinecke Rare Book & Manuscript Library, Yale University / Wikimedia)

While locked in his jail cell, Said became a figure of curiosity, first for his quiet and some said mysterious demeanor, then for the strange way in which he prayed, and finally for the graffiti he began to inscribe on the walls of his cell—Arabic script, most likely verses from the Quran. “The walls of his cell,” it was later reported, “were covered in strange characters, traced in charcoal or chalk, which no scholar in Fayetteville could decipher.”

Omar ibn Said soon became the property of a prominent local political family, which encouraged him to convert to Christianity and persuaded him to write an account of his life.

Through the decades that followed, this family publicized his conversion, placing articles about him in newspapers and broadsides around the United States.

In 1825, a Philadelphia paper recounted the story of his jail time, and how he had been brought to his new faith. In 1837 an article in the Boston Reporter hailed him as a “Convert from Mohammedanism” and devoted two columns to his Christian virtues. In 1854, a reporter wrote that he had “thrown aside the blood-stained Koran and now worships at the feet of the Prince of Peace.” Though they still held Said in slavery, his owners claimed (without apparent irony) that he wore “no bonds but those of gratitude and affection.”

Yet Omar ibn Said had his own story to tell. Like his jail cell graffiti, his account of his experiences was written in Arabic. Those taking credit for his conversion were unable to read of his true convictions. If they had, they would have seen that his adoption of Christianity, while apparently sincere, was also a practical measure.

Before all the things he valued in life had been taken from him, Said revealed in his writings, he had prayed as a Muslim; now he would say the Lord’s Prayer. But he also peppered his text with prophetic declarations of divine wrath directed at the country that deprived him of his freedom.

“O people of America, O people of North Carolina,” he wrote. “Do you have a good generation that fears Allah? Are you confident that He who is in heaven will not cause the earth to cave in beneath you, so that it will shake to pieces and overwhelm you?”

Even after his conversion to Christianity, Islam continued to shape his response to enslavement. And in this he was not alone: Plantation owners often made it a point to add Muslims to their labor force, relying on their experience with the cultivation of indigo and rice. Muslim names and religious titles appear in slave inventories and death records.

After an escape attempt, Job ben Solomon was jailed; a local judge wrote: "his Notions of God, Providence, and a future State, were in the main very just and reasonable.” (Wikimedia Commons / Christie's)

All of this was common knowledge at the time. Every so often in the 18th and 19th century press, other enslaved Muslims became celebrities of a sort—most often because they were discovered to have levels of erudition well beyond those who claimed to own them.

The earliest example of this was Job ben Solomon, who was enslaved in Maryland in the 1730s. Like Omar ibn Said, after an escape attempt he was jailed, and a local judge became so taken with him that he wrote a book about their encounter. As the judge wrote, “He shewed upon all Occasions a singular Veneration for the Name of God, and never pronounced the Word Allah without a peculiar Accent, and a remarkable Pause: And indeed his Notions of God, Providence, and a future State, were in the main very just and reasonable.”

The most famous of the enslaved Muslims who found their way into the early American press was a man named Abdul-Rahman Ibrahim.

Known as the Moorish prince, he came from an important family in his homeland of Timbuktu, in today’s Mali. His plight drew wide attention in the 1820s, with newspaper stories written around the country. Decades after his enslavement, several well-placed supporters, including Secretary of State Henry Clay, and through him President John Quincy Adams, helped to win his freedom and his relocation to Liberia. Before his departure, he offered a critique of religion in a country that had enslaved him for 40 years. As one newspaper account noted, he had read the Bible and admired its precepts but added, “His principal objections are that Christians do not follow them.”

Even counting their population conservatively, the number of enslaved men and women with a connection to Islam when they arrived in colonial America and the young United States was likely in the tens of thousands. Proof that some of them struggled to preserve remnants of their traditions can be seen in the words of those most intent on seeing them fail in this endeavor.

In 1842, Charles Colcock Jones, author of The Religious Instruction of the Negroes in the United States, complained that “Mohammedan Africans” had found ways to “accommodate” Islam to the new beliefs imposed upon them. “God, say they, is Allah, and Jesus Christ is Mohammed. The religion is the same, but different countries have different names.”

We can see the same kind of religious syncretism in the writings left behind by Omar ibn Said. In addition to his autobiographical account, he composed an Arabic translation of the 23rd Psalm, to which he appended the first words of the Qur’an: "In the name of God, the Most Gracious, the Most Merciful."

Missionaries like Jones considered such blendings of sacred texts evidence that enslaved Muslims like Said did not have much fidelity to their own religious traditions. But in fact, it proves the opposite. They understood that faith was important enough that they should look for it everywhere. Even in a nation where only non-Muslims like Thomas Jefferson were able to own a Qur'an. 

If there were any Muslims at Monticello when his library began its journey to Washington, in theory Jefferson would not have objected to their faith. As he wrote in surviving fragments of his autobiography, he intended his “Virginia Statute of Religious Freedom” to protect “the Jew and the Gentile, the Christian and Mahometan, the Hindoo, and infidel of every denomination.”

Yet such religious differences for Jefferson were largely hypothetical. For all this theoretical support for religious freedom, he never mentioned the fact that actual followers of Islam already lived in the nation he helped to create. Nor did he ever express curiosity about whether any of the more than 600 enslaved people he owned during his lifetime might have understood his Qur’an better than he did.

About Peter Manseau

Peter Manseau is the Lilly Endowment Curator of American Religious History at the National Museum of American History.


Our Founding Fathers included Islam

By Denise Spellberg
Published October 5, 2013 6:00PM (EDT)


[He] sais “neither Pagan nor Mahamedan [Muslim] nor Jew ought to be excluded from the civil rights of the Commonwealth because of his religion.” — Thomas Jefferson, quoting John Locke, 1776

At a time when most Americans were uninformed, misinformed, or simply afraid of Islam, Thomas Jefferson imagined Muslims as future citizens of his new nation. His engagement with the faith began with the purchase of a Qur’an eleven years before he wrote the Declaration of Independence. Jefferson’s Qur’an survives still in the Library of Congress, serving as a symbol of his and early America’s complex relationship with Islam and its adherents. That relationship remains of signal importance to this day.

That he owned a Qur’an reveals Jefferson’s interest in the Islamic religion, but it does not explain his support for the rights of Muslims. Jefferson first read about Muslim “civil rights” in the work of one of his intellectual heroes: the seventeenth-century English philosopher John Locke. Locke had advocated the toleration of Muslims—and Jews—following in the footsteps of a few others in Europe who had considered the matter for more than a century before him. Jefferson’s ideas about Muslim rights must be understood within this older context, a complex set of transatlantic ideas that would continue to evolve most markedly from the sixteenth through the nineteenth centuries.

Amid the interdenominational Christian violence in Europe, some Christians, beginning in the sixteenth century, chose Muslims as the test case for the demarcation of the theoretical boundaries of their toleration for all believers. Because of these European precedents, Muslims also became a part of American debates about religion and the limits of citizenship. As they set about creating a new government in the United States, the American Founders, Protestants all, frequently referred to the adherents of Islam as they contemplated the proper scope of religious freedom and individual rights among the nation’s present and potential inhabitants. The founding generation debated whether the United States should be exclusively Protestant or a religiously plural polity. And if the latter, whether political equality—the full rights of citizenship, including access to the highest office—should extend to non-Protestants. The mention, then, of Muslims as potential citizens of the United States forced the Protestant majority to imagine the parameters of their new society beyond toleration. It obliged them to interrogate the nature of religious freedom: the issue of a “religious test” in the Constitution, like the ones that would exist at the state level into the nineteenth century; the question of “an establishment of religion,” potentially of Protestant Christianity; and the meaning and extent of a separation of religion from government.

Resistance to the idea of Muslim citizenship was predictable in the eighteenth century. Americans had inherited from Europe almost a millennium of negative distortions of the faith’s theological and political character. Given the dominance and popularity of these anti-Islamic representations, it was startling that a few notable Americans not only refused to exclude Muslims, but even imagined a day when they would be citizens of the United States, with full and equal rights. This surprising, uniquely American egalitarian defense of Muslim rights was the logical extension of European precedents already mentioned. Still, on both sides of the Atlantic, such ideas were marginal at best. How, then, did the idea of the Muslim as a citizen with rights survive despite powerful opposition from the outset? And what is the fate of that ideal in the twenty-first century?

This book provides a new history of the founding era, one that explains how and why Thomas Jefferson and a handful of others adopted and then moved beyond European ideas about the toleration of Muslims. It should be said at the outset that these exceptional men were not motivated by any inherent appreciation for Islam as a religion. Muslims, for most American Protestants, remained beyond the outer limit of those possessing acceptable beliefs, but they nevertheless became emblems of two competing conceptions of the nation’s identity: one essentially preserving the Protestant status quo, and the other fully realizing the pluralism implied in the Revolutionary rhetoric of inalienable and universal rights. Thus while some fought to exclude a group whose inclusion they feared would ultimately portend the undoing of the nation’s Protestant character, a pivotal minority, also Protestant, perceiving the ultimate benefit and justice of a religiously plural America, set about defending the rights of future Muslim citizens.

They did so, however, not for the sake of actual Muslims, because none were known at the time to live in America. Instead, Jefferson and others defended Muslim rights for the sake of “imagined Muslims,” the promotion of whose theoretical citizenship would prove the true universality of American rights. Indeed, this defense of imagined Muslims would also create political room to consider the rights of other despised minorities whose numbers in America, though small, were quite real, namely Jews and Catholics. Although it was Muslims who embodied the ideal of inclusion, Jews and Catholics were often linked to them in early American debates, as Jefferson and others fought for the rights of all non-Protestants.

In 1783, the year of the nation’s official independence from Great Britain, George Washington wrote to recent Irish Catholic immigrants in New York City. The American Catholic minority of roughly twenty-five thousand then had few legal protections in any state and, because of their faith, no right to hold political office in New York. Washington insisted that “the bosom of America” was “open to receive . . . the oppressed and the persecuted of all Nations and Religions whom we shall welcome to a participation of all our rights and privileges.” He would also write similar missives to Jewish communities, whose total population numbered only about two thousand at this time.

One year later, in 1784, Washington theoretically enfolded Muslims into his private world at Mount Vernon. In a letter to a friend seeking a carpenter and bricklayer to help at his Virginia home, he explained that the workers’ beliefs—or lack thereof—mattered not at all: “If they are good workmen, they may be of Asia, Africa, or Europe. They may be Mahometans [Muslims], Jews or Christian of an[y] Sect, or they may be Atheists.” Clearly, Muslims were part of Washington’s understanding of religious pluralism—at least in theory. But he would not have actually expected any Muslim applicants.

Although we have since learned that there were in fact Muslims resident in eighteenth-century America, this book demonstrates that the Founders and their generational peers never knew it. Thus their Muslim constituency remained an imagined, future one. But the fact that both Washington and Jefferson attached to it such symbolic significance is not accidental. Both men were heir to the same pair of opposing European traditions.

The first, which predominated, depicted Islam as the antithesis of the “true faith” of Protestant Christianity, as well as the source of tyrannical governments abroad. To tolerate Muslims—to accept them as part of a majority Protestant Christian society—was to welcome people who professed a faith most eighteenth-century Europeans and Americans believed false, foreign, and threatening. Catholics would be similarly characterized in American Protestant founding discourse. Indeed, their faith, like Islam, would be deemed a source of tyranny and thus antithetical to American ideas of liberty.

In order to counter such fears, Jefferson and other supporters of non-Protestant citizenship drew upon a second, less popular but crucial stream of European thought, one that posited the toleration of Muslims as well as Jews and Catholics. Those few Europeans, both Catholic and Protestant, who first espoused such ideas in the sixteenth century often died for them. In the seventeenth century, those who advocated universal religious toleration frequently suffered death or imprisonment, banishment or exile, the elites and common folk alike. The ranks of these so-called heretics in Europe included Catholic and Protestant peasants, Protestant scholars of religion and political theory, and fervid Protestant dissenters, such as the first English Baptists—but no people of political power or prominence. Despite not being organized, this minority consistently opposed their coreligionists by defending theoretical Muslims from persecution in Christian-majority states.

As a member of the eighteenth-century Anglican establishment and a prominent political leader in Virginia, Jefferson represented a different sort of proponent for ideas that had long been the hallmark of dissident victims of persecution and exile. Because of his elite status, his own endorsement of Muslim citizenship demanded serious consideration in Virginia—and the new nation. Together with a handful of like-minded American Protestants, he advanced a new, previously unthinkable national blueprint. Thus did ideas long on the fringe of European thought flow into the mainstream of American political discourse at its inception.

Not that these ideas found universal welcome. Even a man of Jefferson’s national reputation would be attacked by his political opponents for his insistence that the rights of all believers should be protected from government interference and persecution. But he drew support from a broad range of constituencies, including Anglicans (or Episcopalians), as well as dissenting Presbyterians and Baptists, who suffered persecution perpetrated by fellow Protestants. No denomination had a unanimously positive view of non-Protestants as full American citizens, yet support for Muslim rights was expressed by some members of each.

What the supporters of Muslim rights were proposing was extraordinary even at a purely theoretical level in the eighteenth century. American citizenship—which had embraced only free, white, male Protestants—was in effect to be abstracted from religion. Race and gender would continue as barriers, but not so faith. Legislation in Virginia would be just the beginning, the First Amendment far from the end of the story; in fact, Jefferson, Washington, and James Madison would work toward this ideal of separation throughout their entire political lives, ultimately leaving it to others to carry on and finish the job. This book documents, for the first time, how Jefferson and others, despite their negative, often incorrect understandings of Islam, pursued that ideal by advocating the rights of Muslims and all non-Protestants.

A decade before George Washington signaled openness to Muslim laborers in 1784, he had listed two slave women from West Africa among his taxable property. “Fatimer” and “Little Fatimer” were a mother and daughter—both indubitably named after the Prophet Muhammad’s daughter Fatima (d. 632). Washington advocated Muslim rights, never realizing that as a slaveholder he was denying Muslims in his own midst any rights at all, including the right to practice their faith. This tragic irony may well have also recurred on the plantations of Jefferson and Madison, although proof of their slaves’ religion remains less than definitive. Nevertheless, having been seized and transported from West Africa, the first American Muslims may have numbered in the tens of thousands, a population certainly greater than the resident Jews and possibly even the Catholics. Although some have speculated that a few former Muslim slaves may have served in the Continental Army, there is little direct evidence any practiced Islam and none that these individuals were known to the Founders. In any case, they had no influence on later political debates about Muslim citizenship.

The insuperable facts of race and slavery rendered invisible the very believers whose freedoms men like Jefferson, Washington, and Madison defended, and whose ancestors had resided in America since the seventeenth century, as long as Protestants had. Indeed, when the Founders imagined future Muslim citizens, they presumably imagined them as white, because by the 1790s “full American citizenship could be claimed by any free, white immigrant, regardless of ethnicity or religious beliefs.”

The two actual Muslims Jefferson would wittingly meet during his lifetime were not black West African slaves but North African ambassadors of Turkish descent. They may have appeared to him to have more melanin than he did, but he never commented on their complexions or race. (Other observers either failed to mention it or simply affirmed that the ambassador in question was not black.) But then Jefferson was interested in neither diplomat for reasons of religion or race; he engaged them because of their political power. (They were, of course, also free.)

But even earlier in his political life—as an ambassador, secretary of state, and vice president—Jefferson had never perceived a predominantly religious dimension to the conflict with North African Muslim powers, whose pirates threatened American shipping in the Mediterranean and eastern Atlantic. As this book demonstrates, Jefferson as president would insist to the rulers of Tripoli and Tunis that his nation harbored no anti-Islamic bias, even going so far as to express the extraordinary claim of believing in the same God as those men.

The equality of believers that Jefferson sought at home was the same one he professed abroad, in both contexts attempting to divorce religion from politics, or so it seemed. In fact, Jefferson’s limited but unique appreciation for Islam appears as a minor but active element in his presidential foreign policy with North Africa—and his most personal Deist and Unitarian beliefs. The two were quite possibly entwined, with their source Jefferson’s unsophisticated yet effective understanding of the Qur’an he owned.

Still, as a man of his time, Jefferson was not immune to negative feelings about Islam. He would even use some of the most popular anti-Islamic images inherited from Europe to drive his early political arguments about the separation of religion from government in Virginia. Yet ultimately Jefferson and others not as well known were still able to divorce the idea of Muslim citizenship from their dislike of Islam, as they forged an “imagined political community,” inclusive beyond all precedent.

The clash between principle and prejudice that Jefferson himself overcame in the eighteenth and nineteenth centuries remains a test for the nation in the twenty-first. Since the late nineteenth century, the United States has in fact become home to a diverse and dynamic American Muslim citizenry, but this population has never been fully welcomed. Whereas in Jefferson’s time organized prejudice against Muslims was exercised against an exclusively foreign and imaginary nonresident population, today political attacks target real, resident American Muslim citizens. Particularly in the wake of 9/11 and the so-called War on Terror, a public discourse of anti-Muslim bigotry has arisen to justify depriving American Muslim citizens of the full and equal exercise of their civil rights.

For example, recent anti-Islamic slurs used to deny the legitimacy of a presidential candidacy contained eerie echoes of founding precedents. The legal possibility of a Muslim president was first discussed with vitriol during debates involving America’s Founders. Thomas Jefferson would be the first in the history of American politics to suffer the false charge of being a Muslim, an accusation considered the ultimate Protestant slur in the eighteenth century. That a presidential candidate in the twenty-first century should have been subject to much the same false attack, still presumed as politically damning to any real American Muslim candidate’s potential for elected office, demonstrates the importance of examining how the multiple images of Islam and Muslims first entered American consciousness and how the rights of Muslims first came to be accepted as national ideals. Ultimately, the status of Muslim citizenship in America today cannot be properly appreciated without establishing the historical context of its eighteenth-century origins.

Muslim American rights became a theoretical reality early on, but as a practical one they have been much slower to evolve. In fact, they are being tested daily. Recently, John Esposito, a distinguished historian of Islam in contemporary America, observed, “Muslims are led to wonder: What are the limits of this Western pluralism?” Thomas Jefferson’s Qur’an documents the origins of such pluralism in the United States in order to illuminate where, when, and how Muslims were first included in American ideals.

Until now, most historians have proposed that Muslims represented nothing more than the incarnated antithesis of American values. These same voices also insist that Protestant Americans always and uniformly defined both the religion of Islam and its practitioners as inherently un-American. Indeed, most historians posit that the emergence of the United States as an ideological and political phenomenon occurred in opposition to eighteenth-century concepts about Islam as a false religion and source of despotic government. There is certainly evidence for these assumptions in early American religious polemic, domestic politics, foreign policy, and literary sources. There are, however, also considerable observations about Islam and Muslims that cast both in a more affirmative light, including key references to Muslims as future American citizens in important founding debates about rights. These sources show that American Protestants did not monolithically view Islam as “a thoroughly foreign religion.”

This book documents the counterassertion that Muslims, far from being definitively un-American, were deeply embedded in the concept of citizenship in the United States since the country’s inception, even if these inclusive ideas were not then accepted by the majority of Americans. While focusing on Jefferson’s views of Islam, Muslims, and the Islamic world, it also analyzes the perspectives of John Adams and James Madison. Nor is it limited to these key Founders. The cast of those who took part in the contest concerning the rights of Muslims, imagined and real, is not confined to famous political elites but includes Presbyterian and Baptist protestors against Virginia’s religious establishment; the Anglican lawyers James Iredell and Samuel Johnston in North Carolina, who argued for the rights of Muslims in their state’s constitutional ratifying convention; and John Leland, an evangelical Baptist preacher and ally of Jefferson and Madison in Virginia, who agitated in Connecticut and Massachusetts in support of Muslim equality, the Constitution, the First Amendment, and the end of established religion at the state level.

The lives of two American Muslim slaves of West African origin, Ibrahima Abd al-Rahman and Omar ibn Said, also intersect this narrative. Both were literate in Arabic, the latter writing his autobiography in that language. They remind us of the presence of tens of thousands of Muslim slaves who had no rights, no voice, and no hope of American citizenship in the midst of these early discussions about religious and political equality for future, free practitioners of Islam.

Imagined Muslims, along with real Jews and Catholics, were the consummate outsiders in much of America’s political discourse at the founding. Jews and Catholics would struggle into the twentieth century to gain in practice the equal rights assured them in theory, although even this process would not entirely eradicate prejudice against either group. Nevertheless, from among the original triad of religious outsiders in the United States, only Muslims remain the objects of a substantial civic discourse of derision and marginalization, still being perceived in many quarters as not fully American. This book writes Muslims back into our founding narrative in the hope of clarifying the importance of critical historical precedents at a time when the idea of the Muslim as citizen is, once more, hotly contested.

Excerpted from "Thomas Jefferson's Qur'an" by Denise A. Spellberg. Copyright © 2013 by Denise A. Spellberg. Excerpted by permission of Knopf, a division of Random House LLC. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.



Universality

In philosophy, universality is the notion that universal facts can be discovered and is therefore understood as being in opposition to relativism. [6]

In certain religions, universalism is the quality ascribed to an entity whose existence is consistent throughout the universe.

Moral universalism

Moral universalism (also called moral objectivism or universal morality) is the meta-ethical position that some system of ethics applies universally. That system is inclusive of all individuals, [7] regardless of culture, race, sex, religion, nationality, sexual orientation, or any other distinguishing feature. [8] Moral universalism is opposed to moral nihilism and moral relativism. However, not all forms of moral universalism are absolutist, nor do they necessarily value monism. Many forms of universalism, such as utilitarianism, are non-absolutist. Other forms such as those theorized by Isaiah Berlin, may value pluralist ideals.

Baháʼí Faith

In the teachings of the Baháʼí Faith, a single God has sent all the historic founders of the world religions in a process of progressive revelation. As a result, the major world religions are seen as divine in origin and are continuous in their purpose. In this view, there is unity among the founders of world religions, but each revelation brings a more advanced set of teachings in human history and none are syncretic. [9]

Within this universal view, the unity of humanity is one of the central teachings of the Baháʼí Faith. [10] The Baháʼí teachings state that since all humans have been created in the image of God, God does not make any distinction between people with regard to race, colour or religion. [11] : 138 Thus, because all humans have been created equal, they all require equal opportunities and treatment. [10] Hence the Baháʼí view promotes the unity of humanity, and that people's vision should be world-embracing and that people should love the whole world rather than just their nation. [11] : 138

The teaching, however, does not equate unity with uniformity; instead, the Baháʼí writings advocate the principle of unity in diversity, where the variety in the human race is valued. [11] : 139 Operating on a worldwide basis, this cooperative view of the peoples and nations of the planet culminates in a vision of the practicality of the progression in world affairs towards, and the inevitability of, world peace. [12]

Buddhism

The idea of Universal Salvation is key to the Mahayana school of Buddhism. [13] All practitioners of this school of Buddhism aspire to become fully enlightened, so as to save other beings. There are many such vows or sentiments that people on this path focus on, the most famous being "Beings are numberless. I vow to save them all."

Adherents to Pure Land Buddhism point to Amitabha Buddha as a Universal Savior. Before becoming a Buddha, Amitabha vowed that he would save all beings.

Christianity

The fundamental idea of Christian universalism is universal reconciliation – that all humans will eventually be saved. They will eventually enter God's kingdom in Heaven, through the grace and works of the Lord Jesus Christ. [14] Christian universalism teaches that an eternal Hell does not exist, and that it was not what Jesus had taught. Universalists point to historical evidence showing that some early fathers of the church were universalists, and attribute the origin of the idea of hell as eternal to mistranslation. [15]

Universalists cite numerous Biblical passages which reference the salvation of all beings. [16] In addition, they argue that an eternal hell is both unjust, and against the nature and attributes of a loving God. [17]

The remaining beliefs of Christian universalism are generally compatible with the fundamentals of Christianity:

  • God is the loving Parent of all peoples, see Love of God.
  • Jesus Christ reveals the nature and character of God, and is the spiritual leader of humankind.
  • Humankind is created with an immortal soul, which death can not end—or a mortal soul that shall be resurrected and preserved by God. A soul which God will not wholly destroy. [18]
  • Sin has negative consequences for the sinner either in this life or the afterlife. All of God's punishments for sin are corrective and remedial. None of such punishments will last forever, or result in the permanent destruction of a soul. Some Christian Universalists believe in the idea of a Purgatorial Hell, or a temporary place of purification that some must undergo before their entrance into Heaven. [19]

In 1899 the Universalist General Convention, later called the Universalist Church of America, adopted the Five Principles: the belief in God, Jesus Christ, the immortality of the human soul, the reality of sin and universal reconciliation. [20]

History

Universalist writers such as George T. Knight have claimed that Universalism was a widely held view among theologians in Early Christianity. [21] These included such important figures as the Alexandrian scholar Origen and Clement of Alexandria, a Christian theologian. [21] Origen and Clement both included the existence of a non-eternal Hell in their teachings. Hell was remedial, in that it was a place one went to purge one's sins before entering into Heaven. [22]

The first undisputed documentations of Christian Universalist ideas occurred in 17th-century England and 18th-century Europe as well as in colonial America. Between 1648 and 1697, the English activist Gerrard Winstanley, writer Richard Coppin, and dissenter Jane Leade each taught that God would grant all human beings salvation. The same teachings were later spread throughout 18th-century France and America by George de Benneville. People who taught this doctrine in America would later become known as the Universalist Church of America. [23]

The Greek term apocatastasis came to be related by some to the beliefs of Christian universalism, but central to the doctrine was the restitution, or restoration of all sinful beings to God, and to His state of blessedness. In early Patristics, usage of the term is distinct.

Universalist theology

Universalist theology is grounded in history, scripture and assumptions about the nature of God. Thomas Whittemore wrote the book "100 Scriptural Proofs that Jesus Christ Will Save All Mankind" [24] quoting both Old and New Testament verses which support the Universalist viewpoint.

Some Bible verses he cites, and that are cited by other Christian Universalists, are:

  1. John 17:2
    • "since thou hast given him power over all flesh, to give eternal life to all whom thou hast given him." (RSV)
  2. 1 Corinthians 15:22 [25]
    • "For as in Adam all die, so also in Christ shall all be made alive." (ESV)
  3. 2 Peter 3:9
    • "The Lord is not slow to fulfill his promise as some count slowness, but is patient toward you, not wishing that any should perish, but that all should reach repentance." (ESV)
  4. 1 Timothy 2:3–6 [25]
    • "This is good, and pleases God our Savior, who wants all men to be saved and to come to a knowledge of the truth. For there is one God and one mediator between God and men, the man Christ Jesus, who gave himself as a ransom for ALL men—the testimony given in its proper time." (NIV)
  5. 1 John 2:2
    • "He is the atoning sacrifice for our sins, and not only for ours but also for the sins of the whole world." (NIV)
  6. 1 Timothy 4:10 [25]
    • "For to this end we toil and strive, because we have our hope set on the living God, who is the Savior of all people, especially of those who believe." (ESV)
  7. Romans 5:18
    • "Then as one man's trespass led to condemnation for all men, so one man's act of righteousness leads to acquittal and life for all men." (RSV)
  8. Romans 11:32 [25]
    • "For God has bound all men over to disobedience so that he may have mercy on them all." (NIV)

Mistranslations

Christian universalists point towards the mistranslations of the Greek word αιών (Lit. aion), as giving rise to the idea of Eternal Hell, and the idea that some people will not be saved. [15] [26] [27]

This Greek word is the origin of the modern English word aeon, which refers to a period of time or an epoch.

The 19th century theologian Marvin Vincent wrote about the word aion, and the supposed connotations of "eternal" or "temporal":

"Aion, transliterated aeon, is a period of longer or shorter duration, having a beginning and an end, and complete in itself. [...] Neither the noun nor the adjective, in themselves, carry the sense of endless or everlasting." [28]

Dr. Ken Vincent writes that "When it (aion) was translated into the Latin Vulgate, 'aion' became 'aeternam', which means 'eternal'." [15]

Catholicism

The Catholic church believes that God judges everyone based only on their moral acts, [30] that no one should be subject to human misery, [31] that everyone is equal in dignity yet distinct in individuality before God, [32] that no one should be discriminated against because of their sin or concupiscence, [33] and that apart from coercion [34] God exhausts every means to save mankind from evil: original holiness being intended for everyone, [35] the irrevocable Old Testament covenants, [36] [37] each religion being a share in the truth, [38] elements of sanctification in non-Catholic Christian communities, [38] the good people of every religion and nation, [39] everyone being called to baptism and confession, [40] [41] and Purgatory, suffrages, and indulgences for the dead. [42] [41] The church believes that everyone is predestined to Heaven, [43] that no one is predestined to Hell, [42] that everyone is redeemed by Christ's Passion, [44] that no one is excluded from the church except by sin, [41] and that everyone can either love God by loving others unto going to Heaven or reject God by sin unto going to Hell. [45] [46] The church believes that God's predestination takes everything into account, [44] and that his providence brings out of evil a greater good, [34] as evidenced, the church believes, by the Passion of Christ being all at once predestined by God, [44] foretold in Scripture, [44] necessitated by original sin, [47] authored by everyone who sins, [44] caused by Christ's executioners, [44] and freely planned and undergone by Christ. [44] The church believes that everyone who goes to Heaven joins the church, [42] [48] and that from the beginning God intended Israel to be the beginning of the church, [39] wherein God would unite all persons to each other and to God. [49] The church believes that Heaven and Hell are eternal. [42]

The Latin book Cur Deus Homo explains that God gives a soul and a guardian angel to every human being, but that he cannot simply grant the forgiveness of sins and eternal salvation in Paradise to anyone, even the baptized. In this sense, St Anselm of Canterbury defended the existence of Purgatory, a place to which all souls with one or more sins still to be expiated are destined for a limited period of time. Their time of expiation can be shortened by alternative forms of expiation, such as rituals (the Suffrage Mass) and works of mercy that living believers dedicate to them. The debt of pain may be paid by different creatures, but it cannot simply be remitted. St Anselm argued that if God could forgive human sins without any form of sacrifice, then the crucifixion of Jesus Christ would not have been necessary for the eternal salvation of humankind, and God would not be perfect.

Hinduism

Author David Frawley says that Hinduism has a "background universalism" and its teachings contain a "universal relevance." [50] Hinduism is also naturally religiously pluralistic. [51] A well-known Rig Vedic hymn says: "Truth is One, though the sages know it variously." [52] Similarly, in the Bhagavad Gītā (4:11), God, manifesting as an incarnation, states: "As people approach me, so I receive them. All paths lead to me." [53] The Hindu religion has no theological difficulties in accepting degrees of truth in other religions. Hinduism emphasizes that everyone actually worships the same God, whether one knows it or not. [54]

While Hinduism has an openness and tolerance towards other religions, it also has a wide range of diversity within it. [55] There are considered to be six orthodox Hindu schools of philosophy/theology, [56] as well as multiple unorthodox or "heterodox" traditions called darshanas. [57]

Hindu universalism

Hindu universalism, also called Neo-Vedanta [58] and neo-Hinduism, [59] is a modern interpretation of Hinduism which developed in response to western colonialism and orientalism. It denotes the ideology that all religions are true and therefore worthy of toleration and respect. [60]

It is a modern interpretation that aims to present Hinduism as a "homogenized ideal of Hinduism" [61] with Advaita Vedanta as its central doctrine. [62] For example, it presents that:

. . . an imagined "integral unity" that was probably little more than an "imagined" view of the religious life that pertained only to a cultural elite and that empirically speaking had very little reality "on the ground," as it were, throughout the centuries of cultural development in the South Asian region. [63]

Hinduism embraces universalism by conceiving the whole world as a single family that deifies the one truth, and therefore it accepts all forms of beliefs and dismisses labels of distinct religions which would imply a division of identity. [64] [65] [66]

This modernised re-interpretation has become a broad current in Indian culture, [62] [67] extending far beyond the Dashanami Sampradaya, the Advaita Vedanta Sampradaya founded by Adi Shankara. An early exponent of Hindu Universalism was Ram Mohan Roy, who established the Brahmo Samaj. [68] Hindu Universalism was popularised in the 20th century in both India and the west by Vivekananda [69] [62] and Sarvepalli Radhakrishnan. [62] Veneration for all other religions was articulated by Gandhi:

After long study and experience, I have come to the conclusion that [1] all religions are true [2] all religions have some error in them [3] all religions are almost as dear to me as my own Hinduism, in as much as all human beings should be as dear to one as one's own close relatives. My own veneration for other faiths is the same as that for my own faith; therefore no thought of conversion is possible. [70]

Western orientalists played an important role in this popularisation, regarding Vedanta to be the "central theology of Hinduism". [62] Oriental scholarship portrayed Hinduism as a "single world religion", [62] and denigrated the heterogeneity of Hindu beliefs and practices as 'distortions' of the basic teachings of Vedanta. [71]

Islam

Islam recognizes to a certain extent the validity of the Abrahamic religions, the Quran identifying Jews, Christians, and "Sabi'un" (usually taken as a reference to the Mandaeans) as "people of the Book" (ahl al-kitab). Later Islamic theologians expanded this definition to include Zoroastrians, and later even Hindus, as the early Islamic empire brought many people professing these religions under its dominion, but the Qur'an explicitly identifies only Jews, Christians, and Sabians as People of the Book. [72] [73] [74] The relation between Islam and universalism has assumed crucial importance in the context of political Islam or Islamism, particularly in reference to Sayyid Qutb, a leading member of the Muslim Brotherhood movement, and one of the key contemporary philosophers of Islam. [75]

There are several views within Islam with respect to Universalism. According to the most inclusive teachings, common among the liberal Muslim movements, all monotheistic religions or people of the book have a chance of salvation. For example, Surah 2:62 states:

The [Muslim] believers, the Jews, the Christians, and the Sabians — all those who believe in God and the Last Day and do good — will have their rewards with their Lord. No fear for them, nor will they grieve. Quran 2:62 (Translated by Muhammad Abdel-Haleem)

However, the most exclusive teachings disagree. For example, the Salafi refer to Surah 9:5:

When the [four] forbidden months are over, wherever you encounter the idolaters, kill them, seize them, besiege them, wait for them at every lookout post but if they turn [to God], maintain the prayer, and pay the prescribed alms, let them go on their way, for God is most forgiving and merciful. Quran 9:5 (Translated by Muhammad Abdel-Haleem)

The interpretation of all of these passages is hotly contested amongst various schools of thought, traditionalist and reform-minded, and branches of Islam, from the reforming Quranism and Ahmadiyya to the ultra-traditionalist Salafi, as is the doctrine of abrogation (naskh), which is used to determine which verses take precedence, based on reconstructed chronology, with later verses superseding earlier ones. The traditional chronology places Surah 9 as the last or second-to-last surah revealed; thus, in traditional exegesis, it gains a large power of abrogation, and verses 9:5, 29, 73 are held to have abrogated 2:256. [76] The ahadith also play a major role in this, and different schools of thought assign different weightings and rulings of authenticity to different hadith, with the four schools of Sunni thought accepting the Six Authentic Collections, generally along with the Muwatta Imam Malik. Depending on the level of acceptance or rejection of certain traditions, the interpretation of the Qur'an can vary immensely, from the Qur'anists, who reject the ahadith, to the Salafi, or ahl al-hadith, who hold the entirety of the traditional collections in great reverence.

Traditional Islam [76] [77] views the world as bipartite, consisting of the House of Islam, that is, where people live under the Sharia [77] and the House of War, that is, where the people do not live under Sharia, which must be proselytized [77] [78] [79] using whatever resources available, including, in some traditionalist and conservative interpretations, [80] the use of violence, as holy struggle in the path of God, [74] [80] [81] to either convert its inhabitants to Islam, or to rule them under the Shariah (cf. dhimmi). [82] [83]

Judaism

Judaism teaches that God chose the Jewish people to be in a unique covenant with God, and one of their beliefs is that Jewish people were charged by the Torah with a specific mission—to be a light unto the nations, and to exemplify the covenant with God as described in the Torah to other nations. This view does not preclude a belief that God also has a relationship with other peoples—rather, Judaism holds that God had entered into a covenant with all humanity as Noachides, and that Jews and non-Jews alike have a relationship with God, as well as being universal in the sense that it is open to all mankind. [84]

Modern Jews such as Emmanuel Levinas advocate a universalist mindset that is performed through particularist behavior. [85] An online organization, the Jewish Spiritual Leaders Institute, founded and led by Steven Blane, who calls himself an "American Jewish Universalist Rabbi", believes in a more inclusive version of Jewish Universalism, stating that "God equally chose all nations to be lights unto the world, and we have much to learn and share with each other. We can only accomplish Tikkun Olam by our unconditional acceptance of each other's peaceful doctrines." [86]

Manichaeism

Manichaeism, like Christian Gnosticism and Zurvanism, was inherently universalist. [87]

Sikhism

In Sikhism, all the religions of the world are compared to rivers flowing into a single ocean. Although the Sikh gurus did not agree with the practices of fasting, idolatry and pilgrimage during their times, they stressed that all religions should be tolerated and considered on equal footing. The Sikh scripture, the Guru Granth Sahib, contains the writings of not just the Sikh gurus themselves, but the writings of several Hindu and Muslim saints, known as the Bhagats.

The very first word of the Sikh scripture is "Ik", followed by "Oh-ang-kar". This literally means that there is only one God, and that one is wholesome, inclusive of the whole universe. It further goes on to state that all of creation, and all energy, is part of this primordial being. As such, it is described in scripture over and over again that all that occurs is part of the divine will, and as such has to be accepted. It occurs for a reason, even if it's beyond the grasp of one person to understand.

Although Sikhism does not teach that men are created as an image of God, it states that the essence of the One is to be found throughout all of its creation. As was said by Yogi Bhajan, the man who is credited with having brought Sikhism to the West:

"If you can't see God in all, you can't see God at all". (Sri Singh Sahib, Yogi Bhajan) [88]

The first Sikh Guru, Guru Nanak, said himself:

By this, Guru Nanak meant that there is no distinction between religions in God's eyes, whether polytheist, monotheist, pantheist, or even atheist; all that one needs to gain salvation is purity of heart, tolerance of all beings, compassion and kindness. Unlike many of the major world religions, Sikhism does not have missionaries; instead it believes men have the freedom to find their own path to salvation.

Unitarian Universalism

Unitarian Universalism (UU) is a theologically liberal religion characterized by a "free and responsible search for truth and meaning". [91] Unitarian Universalists do not share a creed; rather, they are unified by their shared search for spiritual growth and by the understanding that an individual's theology is a result of that search and not a result of obedience to an authoritarian requirement. Unitarian Universalists draw from all major world religions [92] and many different theological sources, and have a wide range of beliefs and practices.

While having its origins in Christianity, UU is no longer a Christian church. As of 2006, fewer than about 20% of Unitarian Universalists identified themselves as Christian. [93] Contemporary Unitarian Universalism espouses a pluralist approach to religious belief, whereby members may describe themselves as humanist, agnostic, deist, atheist, pagan, Christian, monotheist, pantheist, polytheist, or assume no label at all.

The Unitarian Universalist Association (UUA) was formed in 1961, a consolidation of the American Unitarian Association, established in 1825, and the Universalist Church of America, [94] established in 1866. It is headquartered in Boston, and mainly serves churches in the United States. The Canadian Unitarian Council became an independent body in 2002. [95]

Zoroastrianism

Some varieties of Zoroastrianism (such as Zurvanism) are universalistic in application to all races, but not necessarily universalist in the sense of universal salvation. [96] [ failed verification ]

In his book The Miracle of Theism: Arguments for and against the Existence of God, the Australian philosopher J. L. Mackie noted that whilst in the past a miracle performed by Jesus had served as proof to Christians that he was the 'one true God', and that a miracle performed by another religion's deity had served as a (contradictory) proof to its own adherents, the universalist approach resulted in any such miracle being accepted as a validation of all religions, a situation that he characterised as "Miracle-workers of the world, unite!" [97]


The practice of infanticide has taken many forms over time. Child sacrifice to supernatural figures or forces, such as that believed to have been practiced in ancient Carthage, may be only the most notorious example in the ancient world.

A frequent method of infanticide in ancient Europe and Asia was simply to abandon the infant, leaving it to die by exposure (i.e., hypothermia, hunger, thirst, or animal attack). [4] [5]

On at least one island in Oceania, infanticide was carried out until the 20th century by suffocating the infant, [6] while in pre-Columbian Mesoamerica and in the Inca Empire it was carried out by sacrifice (see below).

Paleolithic and Neolithic

Many Neolithic groups routinely resorted to infanticide in order to control their numbers so that their lands could support them. Joseph Birdsell believed that infanticide rates in prehistoric times were between 15% and 50% of the total number of births, [7] while Laila Williamson estimated a lower rate ranging from 15% to 20%. [1] : 66 Both anthropologists believed that these high rates of infanticide persisted until the development of agriculture during the Neolithic Revolution. [8] : 19 Comparative anthropologists have calculated that 50% of female newborn babies were killed by their parents during the Paleolithic era. [9] Based on traumatized infant hominid skulls (e.g. the Taung child skull), Raymond A. Dart proposed that cannibalism had been practiced. [10] Children were not necessarily actively killed; neglect and intentional malnourishment may also have occurred, as proposed by Vicente Lull as an explanation for an apparent surplus of men and the below-average height of women in prehistoric Menorca. [11]

In ancient history

In the New World

Archaeologists have uncovered physical evidence of child sacrifice at several locations. [8] : 16–22 Some of the best attested examples are the diverse rites which were part of the religious practices in Mesoamerica and the Inca Empire. [12] [13] [14]

In the Old World

Three thousand bones of young children, with evidence of sacrificial rituals, have been found in Sardinia. The Pelasgians offered a sacrifice of every tenth child during difficult times. Syrians sacrificed children to Jupiter and Juno. Many remains of children have been found in the Gezer excavations with signs of sacrifice. Child skeletons with the marks of sacrifice have also been found in Egypt, dating to 950–720 BCE. [ citation needed ] In Carthage "[child] sacrifice in the ancient world reached its infamous zenith". [ attribution needed ] [8] : 324 Besides the Carthaginians, other Phoenicians, as well as the Canaanites, Moabites and Sepharvites, offered their first-born as a sacrifice to their gods.

Ancient Egypt

In Egyptian households, at all social levels, children of both sexes were valued, and there is no evidence of infanticide. [15] The religion of the ancient Egyptians forbade infanticide, and during the Greco-Roman period Egyptians rescued abandoned babies from manure heaps, a common method of infanticide among Greeks and Romans; they were allowed either to adopt the children as foundlings or to raise them as slaves, often giving them names incorporating "copro-" to memorialize their rescue. [16] Strabo considered it a peculiarity of the Egyptians that every child must be reared. [17] Diodorus indicates infanticide was a punishable offence. [18] Egypt was heavily dependent on the annual flooding of the Nile to irrigate the land, and in years of low inundation severe famine could occur, with breakdowns in social order resulting, notably between 930–1070 CE and 1180–1350 CE. Instances of cannibalism are recorded during these periods, but it is unknown whether this happened during the pharaonic era of Ancient Egypt. [19] Beatrix Midant-Reynes describes human sacrifice as having occurred at Abydos in the early dynastic period (c. 3150–2850 BCE), [20] while Jan Assmann asserts there is no clear evidence of human sacrifice ever happening in Ancient Egypt. [21]

Carthage

According to Shelby Brown, Carthaginians, descendants of the Phoenicians, sacrificed infants to their gods. [22] Charred bones of hundreds of infants have been found in Carthaginian archaeological sites. One such area harbored as many as 20,000 burial urns. [22] Skeptics suggest that the bodies of children found in Carthaginian and Phoenician cemeteries were merely the cremated remains of children that died naturally. [23]

Plutarch (c. 46–120 CE) mentions the practice, as do Tertullian, Orosius, Diodorus Siculus and Philo. The Hebrew Bible also mentions what appears to be child sacrifice practiced at a place called the Tophet (from the Hebrew taph or toph, to burn) by the Canaanites. Writing in the 3rd century BCE, Kleitarchos, one of the historians of Alexander the Great, described how infants were rolled into the flaming pit. Diodorus Siculus wrote that babies were roasted to death inside the burning pit of the god Baal Hamon, a bronze statue. [24] [25]

Greece and Rome

The historical Greeks considered the practice of adult and child sacrifice barbarous; [26] however, the exposure of newborns was widely practiced in ancient Greece. [27] [28] [29] It was advocated by Aristotle in the case of congenital deformity: "As to the exposure of children, let there be a law that no deformed child shall live." [30] In Greece, the decision to expose a child was typically the father's, although in Sparta the decision was made by a group of elders. [31] Exposure was the preferred method of disposal, as that act in itself was not considered to be murder; moreover, the exposed child technically had a chance of being rescued by the gods or by passersby. [32] This very situation was a recurring motif in Greek mythology. [33] To notify the neighbors of the birth of a child, a woolen strip was hung over the front door to indicate a female baby, and an olive branch to indicate that a boy had been born. Families did not always keep their new child. After a woman had a baby, she would show it to her husband. If the husband accepted it, it would live; if he refused it, it would die. Babies would often be rejected if they were illegitimate, unhealthy or deformed, the wrong sex, or too great a burden on the family. These babies would not be directly killed, but put in a clay pot or jar and deserted outside the front door or on the roadway. In ancient Greek religion, this practice took the responsibility away from the parents because the child would die of natural causes, for example hunger, asphyxiation or exposure to the elements.

The practice was prevalent in ancient Rome as well. Philo was the first philosopher to speak out against it. [34] A letter from a Roman citizen to his sister, or from a husband to his pregnant wife, [35] dating from 1 BCE, demonstrates the casual nature with which infanticide was often viewed:

"I am still in Alexandria. . I beg and plead with you to take care of our little child, and as soon as we receive wages, I will send them to you. In the meantime, if (good fortune to you!) you give birth, if it is a boy, let it live if it is a girl, expose it.", [36] [37] "If you give birth to a boy, keep it. If it is a girl, expose it. Try not to worry. I'll send the money as soon as we get paid." [38]

In some periods of Roman history it was traditional for a newborn to be brought to the pater familias, the family patriarch, who would then decide whether the child was to be kept and raised, or left to die by exposure. [39] The Twelve Tables of Roman law obliged him to put to death a child that was visibly deformed. The concurrent practices of slavery and infanticide contributed to the "background noise" of the crises during the Republic. [39]

Infanticide became a capital offense in Roman law in 374, but offenders were rarely if ever prosecuted. [40]

According to mythology, Romulus and Remus, twin infant sons of the war god Mars, survived near-infanticide after being tossed into the Tiber River. According to the myth, they were raised by a she-wolf and later founded the city of Rome.

Middle Ages

Whereas theologians and clerics preached sparing infants' lives, newborn abandonment continued, as registered both in the literary record and in legal documents. [5] : 16 According to William Lecky, exposure in the early Middle Ages, as distinct from other forms of infanticide, "was practiced on a gigantic scale with absolute impunity, noticed by writers with most frigid indifference and, at least in the case of destitute parents, considered a very venial offence". [41] : 355–56 The first foundling house in Europe was established in Milan in 787 on account of the high number of infanticides and out-of-wedlock births. The Hospital of the Holy Spirit in Rome was founded by Pope Innocent III because women were throwing their infants into the Tiber river. [42]

Unlike other European regions, in the Middle Ages the German mother had the right to expose the newborn. [43]

In the High Middle Ages, abandoning unwanted children finally eclipsed infanticide. [ citation needed ] Unwanted children were left at the door of a church or abbey, and the clergy were assumed to take care of their upbringing. This practice also gave rise to the first orphanages.

However, very high sex ratios were common in even late medieval Europe, which may indicate sex-selective infanticide. [44]

Judaism

Judaism has prohibited infanticide since at least the early Common Era. Roman historians wrote about the ideas and customs of other peoples, which often diverged from their own. Tacitus recorded that the Jews "take thought to increase their numbers, for they regard it as a crime to kill any late-born children". [45] Josephus, whose works give an important insight into 1st-century Judaism, wrote that God "forbids women to cause abortion of what is begotten, or to destroy it afterward". [46]

Pagan European tribes

In his book Germania, Tacitus wrote in 98 CE that the ancient Germanic tribes enforced a similar prohibition. He found such mores remarkable and commented: "[The Germani] hold it shameful to kill any unwanted child." It has become clear over the millennia, though, that Tacitus' description was inaccurate; the consensus of modern scholarship differs significantly. John Boswell believed that in ancient Germanic tribes unwanted children were exposed, usually in the forest. [47] : 218 "It was the custom of the [Teutonic] pagans, that if they wanted to kill a son or daughter, they would be killed before they had been given any food." [47] : 211 Usually children born out of wedlock were disposed of that way.

In his highly influential Pre-historic Times, John Lubbock described burnt bones indicating the practice of child sacrifice in pagan Britain. [48]

The last canto, Marjatan poika (Son of Marjatta), of Finnish national epic Kalevala describes assumed infanticide. Väinämöinen orders the infant bastard son of Marjatta to be drowned in a marsh.

The Íslendingabók, the main source for the early history of Iceland, recounts that on the Conversion of Iceland to Christianity in 1000 it was provided – in order to make the transition more palatable to Pagans – that "the old laws allowing exposure of newborn children will remain in force". However, this provision – like other concessions made at the time to the Pagans – was abolished some years later.

Christianity

Christianity explicitly rejects infanticide. The Teachings of the Apostles or Didache said "thou shalt not kill a child by abortion, neither shalt thou slay it when born". [49] The Epistle of Barnabas stated an identical command, both thus conflating abortion and infanticide. [50] Apologists Tertullian, Athenagoras, Minucius Felix, Justin Martyr and Lactantius also maintained that exposing a baby to death was a wicked act. [4] In 318, Constantine I considered infanticide a crime, and in 374, Valentinian I mandated the rearing of all children (exposing babies, especially girls, was still common). The Council of Constantinople declared that infanticide was homicide, and in 589, the Third Council of Toledo took measures against the custom of parents killing their own children. [40]

Arabia

Some Muslim sources allege that pre-Islamic Arabian society practiced infanticide as a form of "post-partum birth control". [51] The word waʾd was used to describe the practice. [52] These sources state that infanticide was practiced either out of destitution (thus practiced on males and females alike), or as "disappointment and fear of social disgrace felt by a father upon the birth of a daughter". [51]

Some authors believe that there is little evidence that infanticide was prevalent in pre-Islamic Arabia or early Muslim history, except for the case of the Tamim tribe, who practiced it during severe famine according to Islamic sources. [53] Others state that "female infanticide was common all over Arabia during this period of time" (pre-Islamic Arabia), especially by burying alive a female newborn. [8] : 59 [54] A tablet discovered in Yemen, forbidding the people of a certain town from engaging in the practice, is the only written reference to infanticide within the peninsula in pre-Islamic times. [55]

Islam

Infanticide is explicitly prohibited by the Qur'an. [56] "And do not kill your children for fear of poverty; We give them sustenance and yourselves too; surely to kill them is a great wrong." [57] Together with polytheism and homicide, infanticide is regarded as a grave sin (see 6:151 and 60:12). [51] Infanticide is also implicitly denounced in the story of Pharaoh's slaughter of the male children of Israelites (see 2:49, 7:127, 7:141, 14:6, 28:4, 40:25). [51]

Ukraine and Russia

Infanticide may have been practiced as human sacrifice as part of the pagan cult of Perun. Ibn Fadlan describes sacrificial practices at the time of his trip to Kievan Rus' (present-day Ukraine) in 921–922, and describes an incident of a woman voluntarily sacrificing her life as part of a funeral rite for a prominent leader, but makes no mention of infanticide. The Primary Chronicle, one of the most important literary sources before the 12th century, indicates that human sacrifice to idols may have been introduced by Vladimir the Great in 980. The same Vladimir the Great formally converted Kievan Rus' to Christianity just eight years later, but pagan cults continued to be practiced clandestinely in remote areas as late as the 13th century.

American explorer George Kennan noted that among the Koryaks, a Mongoloid people of north-eastern Siberia, infanticide was still common in the nineteenth century. One of a pair of twins was always sacrificed. [58]

Great Britain

Infanticide (as a crime) gained both popular and bureaucratic significance in Victorian Britain. By the mid-19th century, in the context of criminal lunacy and the insanity defence, killing one's own child(ren) attracted ferocious debate, as the role of women in society was defined by motherhood, and it was thought that any woman who murdered her own child was by definition insane and could not be held responsible for her actions. Several cases were subsequently highlighted during the Royal Commission on Capital Punishment 1864–66, as a particular felony where an effective avoidance of the death penalty had informally begun.

The New Poor Law Act of 1834 ended parish relief for unmarried mothers and allowed fathers of illegitimate children to avoid paying for "child support". [60] Unmarried mothers then received little assistance, and the poor were left with the options of entering the workhouse, prostitution, infanticide or abortion. By the middle of the century infanticide was common for social reasons, such as illegitimacy, and the introduction of child life insurance additionally encouraged some women to kill their children for gain. Examples are Mary Ann Cotton, who murdered many of her 15 children as well as three husbands; Margaret Waters, the 'Brixton Baby Farmer', a professional baby-farmer who was found guilty of infanticide in 1870; Jessie King, hanged in 1889; Amelia Dyer, the 'Angel Maker', who murdered over 400 babies in her care; and Ada Chard-Williams, a baby farmer who was later hanged at Newgate prison.

The Times reported that 67 infants were murdered in London in 1861 and 150 more recorded as "found dead", many of which were found on the streets. Another 250 were suffocated, half of them not recorded as accidental deaths. The report noted that "infancy in London has to creep into life in the midst of foes." [61]

Recording a birth as a still-birth was another way of concealing infanticide, because still-births did not need to be registered until 1926 and they did not need to be buried in public cemeteries. [62] In 1895 The Sun (London) published an article, "Massacre of the Innocents", highlighting the dangers of baby-farming and the recording of stillbirths, and quoting Braxton-Hicks, the London Coroner, on lying-in houses: "I have not the slightest doubt that a large amount of crime is covered by the expression 'still-birth'. There are a large number of cases of what are called newly-born children, which are found all over England, more especially in London and large towns, abandoned in streets, rivers, on commons, and so on." He continued: "a great deal of that crime is due to what are called lying-in houses, which are not registered, or under any supervision of that sort, where the people who act as midwives constantly, as soon as the child is born, either drop it into a pail of water or smother it with a damp cloth. It is a very common thing, also, to find that they bash their heads on the floor and break their skulls." [63]

The last British woman to be executed for infanticide of her own child was Rebecca Smith, who was hanged in Wiltshire in 1849.

The Infant Life Protection Act of 1897 required local authorities to be notified within 48 hours of changes in custody or the death of children under seven years. Under the Children's Act of 1908 "no infant could be kept in a home that was so unfit and so overcrowded as to endanger its health, and no infant could be kept by an unfit nurse who threatened, by neglect or abuse, its proper care, and maintenance."

Asia

China

Short of execution, the harshest penalties were imposed on practitioners of infanticide by the legal codes of the Qin dynasty and Han dynasty of ancient China. [65]

The Venetian explorer Marco Polo claimed to have seen newborns exposed in Manzi. [66] Chinese society practiced sex-selective infanticide. The philosopher Han Fei Tzu, a member of the ruling aristocracy of the 3rd century BCE, who developed a school of law, wrote: "As to children, a father and mother when they produce a boy congratulate one another, but when they produce a girl they put it to death." [67] Among the Hakka people, and in Yunnan, Anhui, Sichuan, Jiangxi and Fujian, a method of killing the baby was to put her into a bucket of cold water, which was called "baby water". [68]

Infanticide was reported as early as the 3rd century BCE, and, by the time of the Song dynasty (960–1279 CE), it was widespread in some provinces. Belief in transmigration allowed poor residents of the country to kill their newborn children if they felt unable to care for them, hoping that they would be reborn in better circumstances. Furthermore, some Chinese did not consider newborn children fully "human" and saw "life" beginning at some point after the sixth month after birth. [69]

Contemporary writers from the Song dynasty note that, in Hubei and Fujian provinces, residents would keep only three sons and two daughters (among poor farmers, two sons and one daughter), and kill all babies beyond that number at birth. [70] Initially the sex of the child was only one factor to consider. By the time of the Ming dynasty (1368–1644), however, male infanticide was becoming increasingly uncommon. The prevalence of female infanticide remained high much longer. The magnitude of this practice is subject to some dispute; however, one commonly quoted estimate is that, by late Qing, between one-fifth and one-quarter of all newborn girls, across the entire social spectrum, were victims of infanticide. If one includes excess mortality among female children under 10 (ascribed to gender-differential neglect), the share of victims rises to one-third. [71] [72] [73]

Scottish physician John Dudgeon, who worked in Peking, China, during the early 20th century said that, "Infanticide does not prevail to the extent so generally believed among us, and in the north, it does not exist at all." [74]

Sex-selective abortion, sex identification for non-medical purposes, [75] [76] abandonment, and infanticide are illegal in present-day Mainland China. Nevertheless, the US State Department [77] and the human rights organization Amnesty International [78] have both declared that Mainland China's family planning programs, called the one-child policy (which has since changed to a two-child policy [79]), contribute to infanticide. [80] [81] [82] The sex gap between males and females aged 0–19 years old was estimated to be 25 million in 2010 by the United Nations Population Fund. [83] In some cases, in order to avoid Mainland China's family planning programs, parents do not report a child (in most cases a girl) to the government when it is born, so that the child has no official identity and the parents can keep having children until they are satisfied, without fines or punishment. In 2017, the government announced that all children without an identity could legally obtain one through the family register. [84]

Japan

Since the feudal Edo era in Japan, the common term for infanticide was "mabiki" (間引き), which means to pull plants from an overcrowded garden. A typical method in Japan was smothering the baby's mouth and nose with wet paper. [85] It became common as a method of population control. Farmers would often kill their second or third sons. Daughters were usually spared, as they could be married off, sold off as servants or prostitutes, or sent off to become geishas. [86] Mabiki persisted in the 19th century and early 20th century. [87] Bearing twins was perceived as barbarous and unlucky, and efforts were made to hide or kill one or both twins. [88]

India

Infanticide of newborn girls was systematic among the feudatory Rajputs of South Asia during the Middle Ages, particularly for illegitimate female children. According to Firishta, as soon as an illegitimate female child was born she was held "in one hand, and a knife in the other, that any person who wanted a wife might take her now, otherwise she was immediately put to death". [91] The practice of female infanticide was also common among the Kutch, Kehtri, Nagar, Bengal, Miazed, Kalowries and Sindh communities. [92]

It was not uncommon for parents to throw a child to the sharks in the Ganges River as a sacrificial offering. The East India Company administration was unable to outlaw the custom until the beginning of the 19th century. [93] : 78

According to social activists, female infanticide has remained a problem in India into the 21st century, with both NGOs and the government conducting awareness campaigns to combat it. [94] In India, female infants are killed more often than male infants, a pattern of sex-selective infanticide. [95]

Africa

In some African societies some neonates were killed because of beliefs in evil omens or because they were considered unlucky. Twins were usually put to death in Arebo; by the Nama people of South West Africa; in the Lake Victoria Nyanza region; by the Tswana in Portuguese East Africa; in some parts of Igboland, Nigeria, where twins were sometimes abandoned in a forest at birth (as depicted in Things Fall Apart) and oftentimes one twin was killed or hidden by midwives of wealthier mothers; and by the !Kung people of the Kalahari Desert. [8] : 160–61 The Kikuyu, Kenya's most populous ethnic group, practiced ritual killing of twins. [96]

Infanticide is rooted in old traditions and beliefs prevailing across the region. A survey conducted by Disability Rights International found that 45% of the women it interviewed in Kenya had been pressured to kill their children born with disabilities; the pressure was reported to be much higher in rural areas. [97]

Australia

Literature suggests infanticide may have occurred reasonably commonly among Indigenous Australians, in all areas of Australia prior to European settlement. Infanticide may have continued to occur quite often up until the 1960s. An 1866 issue of The Australian News for Home Readers informed readers that "the crime of infanticide is so prevalent amongst the natives that it is rare to see an infant". [98]

Author Susanna de Vries in 2007 told a newspaper that her accounts of Aboriginal violence, including infanticide, were censored by publishers in the 1980s and 1990s. She told reporters that the censorship "stemmed from guilt over the stolen children question". [99] Keith Windschuttle weighed in on the conversation, saying this type of censorship started in the 1970s. [99] In the same article Louis Nowra suggested that infanticide in customary Aboriginal law may have arisen because it was difficult to keep an abundant number of Aboriginal children alive; these were life-and-death decisions that modern-day Australians no longer have to face. [99]

South Australia and Victoria

According to William D. Rubinstein, "Nineteenth-century European observers of Aboriginal life in South Australia and Victoria reported that about 30% of Aboriginal infants were killed at birth." [100]

James Dawson wrote a passage about infanticide among Indigenous people in the western district of Victoria, which stated that "Twins are as common among them as among Europeans but as food is occasionally very scarce, and a large family troublesome to move about, it is lawful and customary to destroy the weakest twin child, irrespective of sex. It is usual also to destroy those which are malformed." [101]

He also wrote: "When a woman has children too rapidly for the convenience and necessities of the parents, she makes up her mind to let one be killed, and consults with her husband which it is to be. As the strength of a tribe depends more on males than females, the girls are generally sacrificed. The child is put to death and buried, or burned without ceremony; not, however, by its father or mother, but by relatives. No one wears mourning for it. Sickly children are never killed on account of their bad health, and are allowed to die naturally." [101]

Western Australia

In 1937, a reverend in the Kimberley offered a "baby bonus" to Aboriginal families as a deterrent against infanticide and to increase the birthrate of the local Indigenous population. [102]

Australian Capital Territory

A Canberran journalist in 1927 wrote of the "cheapness of life" to the Aboriginal people local to the Canberra area 100 years before. "If drought or bush fires had devastated the country and curtailed food supplies, babies got a short shift. Ailing babies, too would not be kept" he wrote. [103]

New South Wales

A bishop wrote in 1928 that it was common for Aboriginal Australians to restrict the size of their tribal groups, including by infanticide, so that the food resources of the tribal area may be sufficient for them. [104]

Northern Territory

Annette Hamilton, a professor of anthropology at Macquarie University who carried out research in the Aboriginal community of Maningrida in Arnhem Land during the 1960s wrote that prior to that time part-European babies born to Aboriginal mothers had not been allowed to live, and that 'mixed-unions are frowned on by men and women alike as a matter of principle'. [105]

North America

Inuit

There is no agreement about the actual estimates of the frequency of newborn female infanticide in the Inuit population. Carmel Schrire mentions diverse studies ranging from 15–50% to 80%. [106]

Polar Inuit (Inughuit) killed the child by throwing him or her into the sea. [107] There is even a legend in Inuit mythology, "The Unwanted Child", where a mother throws her child into the fjord.

The Yukon and the Mahlemuit tribes of Alaska exposed the female newborns by first stuffing their mouths with grass before leaving them to die. [108] In Arctic Canada the Inuit exposed their babies on the ice and left them to die. [41] : 354

Female Inuit infanticide disappeared in the 1930s and 1940s after contact with the Western cultures from the South. [109]

Canada

The Handbook of North American Indians reports infanticide among the Dene Natives and those of the Mackenzie Mountains. [110] [111]

Native Americans

Among the Eastern Shoshone there was a scarcity of Indian women as a result of female infanticide. [112] For the Maidu Native Americans, twins were so dangerous that they not only killed them, but the mother as well. [113] In the region known today as southern Texas, the Mariame Indians practiced infanticide of females on a large scale, so wives had to be obtained from neighboring groups. [114]

Mexico

Bernal Díaz recounted that, after landing on the Veracruz coast, the Spaniards came across a temple dedicated to Tezcatlipoca. "That day they had sacrificed two boys, cutting open their chests and offering their blood and hearts to that accursed idol". [115] In The Conquest of New Spain Díaz describes more child sacrifices in the towns the Spaniards passed before reaching the large Aztec city Tenochtitlan.

South America

Although academic data on infanticide among the indigenous peoples of South America are not as abundant as those for North America, the estimates seem to be similar.

Brazil

The Tapirapé indigenous people of Brazil allowed no more than three children per woman, and no more than two of the same sex. If the rule was broken infanticide was practiced. [116] The Bororo killed all the newborns that did not appear healthy enough. Infanticide is also documented in the case of the Korubo people in the Amazon. [117]

The Yanomami men killed children while raiding enemy villages. [118] Helena Valero, a Brazilian woman kidnapped by Yanomami warriors in the 1930s, witnessed a Karawetari raid on her tribe:

"They killed so many. I was weeping for fear and for pity but there was nothing I could do. They snatched the children from their mothers to kill them, while the others held the mothers tightly by the arms and wrists as they stood up in a line. All the women wept. . The men began to kill the children little ones, bigger ones, they killed many of them.”. [118]

Peru, Paraguay and Bolivia

While qhapaq hucha was practiced in the large cities of Peru, child sacrifice in the pre-Columbian tribes of the region is less documented. However, even today studies of the Aymara Indians reveal high incidences of mortality among newborns, especially female deaths, suggesting infanticide. [119] The Abipones, a small tribe of Guaycuruan stock numbering about 5,000 by the end of the 18th century in Paraguay, practiced systematic infanticide, with never more than two children being reared in one family. The Machigenga killed their disabled children. Infanticide among the Chaco in Paraguay was estimated to affect as many as 50% of all newborns in that tribe, who were usually buried. [120] The infanticidal custom had such roots among the Ayoreo in Bolivia and Paraguay that it persisted until the late 20th century. [121]

Infanticide has become less common in the Western world. The frequency has been estimated to be 1 in approximately 3000 to 5000 children of all ages [122] and 2.1 per 100,000 newborns per year. [123] It is thought that infanticide today continues at a much higher rate in areas of extremely high poverty and overpopulation, such as parts of China and India. [124] Female infants, then and even now, are particularly vulnerable, a factor in sex-selective infanticide. Recent estimates suggest that over 100 million girls and women are 'missing' in Asia. [125]

Benin

Although it is illegal, parents in Benin, West Africa, secretly continue infanticidal customs. [126]

North Korea

According to "The Hidden Gulag" published by the Committee for Human Rights in North Korea, Mainland China returns all illegal immigrants from North Korea which usually imprisons them in a short term facility. Korean women who are suspected of being impregnated by Chinese fathers are subjected to forced abortions babies born alive are killed, sometimes by exposure or being buried alive. [127]

Mainland China

There have been some accusations that infanticide occurs in Mainland China due to the one-child policy. [128] In the 1990s, a certain stretch of the Yangtze River was known to be a common site of infanticide by drowning, until government projects made access to it more difficult. Recent studies suggest that over 40 million girls and women are missing in Mainland China (Klasen and Wink 2002). [129]

India

The practice has continued in some rural areas of India. [130] [131] Infanticide is illegal in India, yet the country still has the highest infanticide rate in the world. [132]

According to a recent report by the United Nations Children's Fund (UNICEF) up to 50 million girls and women are missing in India's population as a result of systematic sex discrimination and sex selective abortions. [133]

Pakistan

Killings of newborn babies have been on the rise in Pakistan, corresponding to an increase in poverty across the country. [134] More than 1,000 infants, mostly girls, were killed or abandoned to die in Pakistan in 2009 according to a Pakistani charity organization. [135]

The Edhi Foundation found 1,210 dead babies in 2010. Many more are abandoned and left at the doorsteps of mosques. As a result, Edhi centers feature signs reading "Do not murder, lay them here." Though female infanticide is punishable by life in prison, such crimes are rarely prosecuted. [134]

Oceania

In November 2008 it was reported that, in the Agibu and Amosa villages of the Gimi region of Eastern Highlands province of Papua New Guinea, where tribal fighting had been going on since 1986 (many of the clashes arising over claims of sorcery), women had agreed to kill all newborn male babies: if they stopped producing males, their tribes' stock of boys would decline and there would be no men to fight in the future. It is not known how many male babies were killed by being smothered, but it had reportedly happened to all males born over a 10-year period, and probably was still happening.

England and Wales

In England and Wales there were typically 30 to 50 homicides per million children less than 1 year old between 1982 and 1996. [136] The younger the infant, the higher the risk. [136] The rate for children 1 to 5 years was around 10 per million children. [136] The homicide rate of infants less than 1 year is significantly higher than for the general population. [136]

In English law infanticide is established as a distinct offence by the Infanticide Acts. Defined as the killing of a child under 12 months of age by its mother, the effect of the Acts is to establish a partial defence to charges of murder. [137]

United States

In the United States the infanticide rate during the first hour of life outside the womb dropped from 1.41 per 100,000 during 1963 to 1972 to 0.44 per 100,000 for 1974 to 1983; the rates during the first month after birth also declined, whereas those for older infants rose during this time. [138] The legalization of abortion, which was completed in 1973, was the most important factor in the decline in neonatal mortality during the period from 1964 to 1977, according to a study by economists associated with the National Bureau of Economic Research. [138] [139]

While legislation regarding infanticide in the majority of Western countries focuses on rehabilitation, believing that treatment and education will prevent repeated offences, the United States remains focused on delivering punishment. One justification for punishment is the difficulty of implementing rehabilitation services: with an overcrowded prison system, the United States cannot provide the necessary treatment and services. [140]

Canada

In Canada 114 cases of infanticide by a parent were reported during 1964–1968. [141] There is ongoing debate in the Canadian legal and political fields about whether section 237 of the Criminal Code, which creates the specific offence and partial defence of infanticide in Canadian law, should be amended or abolished altogether. [142]

Spain

In Spain, the far-right political party Vox has claimed that female perpetrators of infanticide outnumber male perpetrators of femicide. [143] However, neither the Spanish National Statistics Institute nor the Ministry of the Interior keeps data on the gender of perpetrators, and victims of femicide consistently outnumber victims of infanticide. [144] From 2013 to March 2018, 28 infanticide cases, perpetrated by 22 mothers and three stepmothers, were reported in Spain. [145] Historically, the most famous Spanish infanticide case was the murder of Bernardo González Parra in 1910, perpetrated by Francisco Leona Romero, Julio Hernández Rodríguez, Francisco Ortega el Moruno and Agustina Rodríguez. [146] [147]

There are various reasons for infanticide. Neonaticide typically has different patterns and causes than the killing of older infants. Traditional neonaticide is often related to economic necessity – the inability to provide for the infant.

In the United Kingdom and the United States, older infants are typically killed for reasons related to child abuse, domestic violence or mental illness. [136] For infants older than one day, younger infants are more at risk, and boys are more at risk than girls. [136] Risk factors for the parent include a family history of violence, violence in a current relationship, a history of abuse or neglect of children, and personality disorder and/or depression. [136]

Religious

In the late 17th and early 18th centuries, "loopholes" were invented by Protestants who wanted to avoid the damnation promised by most Christian doctrine as a penalty of suicide. One famous example of someone who wished to end her life but avoid an eternity in hell was Christina Johansdotter (died 1740). She was a Swedish murderer who killed a child in Stockholm with the sole purpose of being executed. She is an example of those who sought suicide through execution by committing a murder. It was a common act, frequently targeting young children or infants, as they were believed to be free from sin and thus to go "straight to heaven". [148]

On the contrary, most mainstream denominations view the murder of an innocent as condemned in the Fifth Commandment. The Roman Catholic Congregation for the Doctrine of the Faith, in Donum Vitæ, is instructive: "Human life is sacred because from its beginning it involves the creative action of God and it remains forever in a special relationship with the Creator, who is its sole end. God alone is the Lord of life from its beginning until its end: no one can under any circumstance claim for himself the right directly to destroy an innocent human being." [149]

In 1888, Lieut. F. Elton reported that Ugi beach people in the Solomon Islands killed their infants at birth by burying them, and women were also said to practice abortion. They reported that it was too much trouble to raise a child, and instead preferred to buy one from the bush people. [150]

Economic

Many historians believe the reason to be primarily economic, with more children born than the family is prepared to support. In societies that are patrilineal and patrilocal, the family may choose to allow more sons to live and kill some daughters, as the former will support their birth family until they die, whereas the latter will leave economically and geographically to join their husband's family, possibly only after the payment of a burdensome dowry price. Thus the decision to bring up a boy is more economically rewarding to the parents. [8] : 362–68 However, this does not explain why infanticide would occur equally among rich and poor, nor why it would be as frequent during decadent periods of the Roman Empire as during earlier, less affluent, periods. [8] : 28–34, 187–92

Before the appearance of effective contraception, infanticide was a common occurrence in ancient brothels. Unlike usual infanticide – where historically girls have been more likely to be killed – prostitutes in certain areas preferred to kill their male offspring. [151]

UK 18th and 19th century

Instances of infanticide in Britain in the 18th and 19th centuries are often attributed to the economic position of the women, with juries committing "pious perjury" in many subsequent murder cases. The difficulties faced in the 18th century by women who attempted to keep their children can be seen as a reason for juries to show compassion. If a woman chose to keep her child, society was not set up to ease the pressure placed upon her, legally, socially or economically. [152]

In mid-18th century Britain there was assistance available for women who were not able to raise their children. The Foundling Hospital opened in 1756 and was able to take in some of the illegitimate children. However, the conditions within the hospital caused Parliament to withdraw funding, leaving the governors to rely on their own incomes. [153] This resulted in a stringent entrance policy, with the committee requiring that the hospital:

Will not receive a child that is more than a year old, nor the child of a domestic servant, nor any child whose father can be compelled to maintain it. [154]

Once a mother had admitted her child to the hospital, the hospital did all it could to ensure that the parent and child were not re-united. [154]

MacFarlane argues in Illegitimacy and Illegitimates in Britain (1980) that English society greatly concerned itself with the burden that a bastard child placed upon its community, and went to some lengths to ensure that the father of the child was identified in order to maintain the child's well-being. [155] Assistance could be gained through maintenance payments from the father; however, this was capped "at a miserable 2 s and 6 d a week". [156] If the father fell behind with the payments he could only be asked "to pay a maximum of 13 weeks arrears". [156]

Despite the accusations of some that women were getting a free hand-out, there is evidence that many women were far from receiving adequate assistance from their parish. "Within Leeds in 1822 ... relief was limited to 1 s per week". [157] Sheffield required women to enter the workhouse, whereas Halifax gave no relief to the women who required it. The prospect of entering the workhouse was certainly something to be avoided. Lionel Rose quotes Dr Joseph Rogers in Massacre of the Innocents (1986). Rogers, who was employed by a London workhouse in 1856, stated that conditions in the nursery were "wretchedly damp and miserable ... [and] ... overcrowded with young mothers and their infants". [158]

The loss of social standing for a servant girl was a particular problem in respect of producing a bastard child, as she relied upon a good character reference in order to maintain her job and, more importantly, to get a new or better job. In a large number of trials for the crime of infanticide, it was the servant girl who stood accused. [159] The disadvantage of being a servant girl was that she had to live up to the social standards of her superiors or risk dismissal and no references. In other professions, such as factory work, the relationship between employer and employee was much more anonymous, and the mother would be better able to make other provisions, such as employing a minder. [160] The result of the lack of basic social care in Britain in the 18th and 19th centuries was the numerous accounts in court records of women, particularly servant girls, standing trial for the murder of their children. [161]

There may have been no specific offence of infanticide in England before about 1623 because infanticide was a matter for the ecclesiastical courts, possibly because infant mortality from natural causes was high (about 15%, or one in six). [162]

Thereafter the suppression of bastard children by "lewd" mothers was a crime incurring a presumption of guilt. [163]

The Infanticide Acts are several laws. That of 1922 made the killing of an infant child by its mother during the early months of life a lesser crime than murder. The Acts of 1938 and 1939 abolished the earlier Act, but introduced the idea that postpartum depression was legally to be regarded as a form of diminished responsibility.

Population control

Marvin Harris estimated that among Paleolithic hunters 23–50% of newborn children were killed. He argued that the goal was to preserve the 0.001% population growth of that time. [164] : 15 He also wrote that female infanticide may be a form of population control. [164] : 5 Population control is achieved not only by limiting the number of potential mothers; increased fighting among men for access to relatively scarce wives would also lead to a decline in population. For example, on the Melanesian island of Tikopia infanticide was used to keep a stable population in line with its resource base. [6] Research by Marvin Harris and William Divale supports this argument; it has been cited as an example of environmental determinism. [165]

Psychological

Evolutionary psychology

Evolutionary psychology has proposed several theories for different forms of infanticide. Infanticide by stepfathers, as well as child abuse in general by stepfathers, has been explained by noting that spending resources on children who are not genetically related reduces reproductive success (see the Cinderella effect and Infanticide (zoology)). Infanticide is one of the few forms of violence more often committed by women than men. Cross-cultural research has found that it is more likely to occur when the child has deformities or illnesses, as well as when resources are lacking due to factors such as poverty, other children requiring resources, and no male support. Such a child may have a low chance of reproductive success, in which case spending resources on it would decrease the mother's inclusive fitness, in particular since women generally have a greater parental investment than men. [166]

"Early infanticidal childrearing" Edit

A minority of academics subscribe to an alternate school of thought, considering the practice as "early infanticidal childrearing". [167] : 246–47 They attribute parental infanticidal wishes to massive projection or displacement of the parents' unconscious onto the child, because of intergenerational, ancestral abuse by their own parents. [168] Clearly, an infanticidal parent may have multiple motivations, conflicts, emotions, and thoughts about their baby and their relationship with their baby, which are often colored by their individual psychology, current relational context and attachment history, and, perhaps most saliently, their psychopathology [169] (see also the Psychiatric section below). Almeida, Merminod, and Schechter suggest that parents with fantasies, projections, and delusions involving infanticide need to be taken seriously and assessed carefully, whenever possible, by an interdisciplinary team that includes infant mental health specialists or mental health practitioners who have experience in working with parents, children, and families.

Wider effects

In addition to debates over the morality of infanticide itself, there is some debate over the effects of infanticide on surviving children, and the effects of childrearing in societies that also sanction infanticide. Some argue that the practice of infanticide in any widespread form causes enormous psychological damage in children. [167] : 261–62 Conversely, studying societies that practice infanticide, Géza Róheim reported that even infanticidal mothers in New Guinea who had eaten a child did not affect the personality development of the surviving children; in his view, "these are good mothers who eat their own children". [170] Harris and Divale's work on the relationship between female infanticide and warfare suggests that there are, however, extensive negative effects.

Psychiatric

Postpartum psychosis is also a causative factor of infanticide. Stuart S. Asch, MD, a Professor of Psychiatry at Cornell University, established the connections between some cases of infanticide and post-partum depression. [171] [172] The books From Cradle to Grave [173] and The Death of Innocents [174] describe selected cases of maternal infanticide and the investigative research of Professor Asch working in concert with the New York City Medical Examiner's Office. Stanley Hopwood wrote that childbirth and lactation entail severe stress on the female sex, and that under certain circumstances attempts at infanticide and suicide are common. [175] A study published in the American Journal of Psychiatry revealed that 44% of filicidal fathers had a diagnosis of psychosis. [176] In addition to postpartum psychosis, dissociative psychopathology and sociopathy have also been found to be associated with neonaticide in some cases. [177]

In addition, severe postpartum depression can lead to infanticide. [178]

Sex selection

Sex selection may be one of the contributing factors of infanticide. In the absence of sex-selective abortion, sex-selective infanticide can be deduced from very skewed birth statistics. The biologically normal sex ratio for humans at birth is approximately 105 males per 100 females; normal ratios hardly range beyond 102–108. [179] When a society has an infant male-to-female ratio which is significantly higher or lower than the biological norm, and biased data can be ruled out, sex selection can usually be inferred. [180]
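The inference described above is, at bottom, a statistical comparison of observed birth counts against an expected baseline. The following Python sketch is illustrative only and not drawn from the cited sources: the function name and the example counts are invented for the demonstration, and the 105:100 baseline is simply the norm quoted in the paragraph above. It reports the observed males-per-100-females ratio and a rough z-score for the deviation of the male birth proportion from the expected one, using a normal approximation to the binomial (adequate only for large samples).

```python
import math

def sex_ratio_check(male_births: int, female_births: int,
                    expected_ratio: float = 105 / 100) -> dict:
    """Compare an observed birth sex ratio against an assumed biological norm.

    expected_ratio is the assumed 'normal' males-per-female figure
    (roughly 105 per 100, i.e. an expected male birth proportion of ~0.512).
    Returns the observed males-per-100-females ratio and a z-score for the
    deviation of the observed male proportion from the expected proportion.
    """
    n = male_births + female_births
    p_expected = expected_ratio / (1 + expected_ratio)   # ~0.512
    p_observed = male_births / n
    se = math.sqrt(p_expected * (1 - p_expected) / n)    # binomial standard error
    z = (p_observed - p_expected) / se
    return {
        "males_per_100_females": 100 * male_births / female_births,
        "z_score": z,
    }

# Hypothetical example: 10,000 recorded births with a heavily skewed ratio.
print(sex_ratio_check(male_births=5455, female_births=4545))
```

On the illustrative counts above the function reports a ratio of about 120 males per 100 females and a z-score near 7, far outside what the normal 102–108 range would produce by chance in a sample of that size.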

Australia

In New South Wales, infanticide is defined in Section 22A(1) of the Crimes Act 1900 (NSW) as follows: [181]

Where a woman by any willful act or omission causes the death of her child, being a child under the age of twelve months, but at the time of the act or omission the balance of her mind was disturbed by reason of her not having fully recovered from the effect of giving birth to the child or by reason of the effect of lactation consequent upon the birth of the child, then, notwithstanding that the circumstances were such that but for this section the offense would have amounted to murder, she shall be guilty of infanticide, and may for such offense be dealt with and punished as if she had been guilty of the offense of manslaughter of such child.

Because infanticide is punishable as manslaughter, as per s 24, [182] the maximum penalty for this offence is 25 years' imprisonment.

In Victoria, infanticide is defined by Section 6 of the Crimes Act of 1958 with a maximum penalty of five years. [183]

Canada

In Canada, a mother commits infanticide, a lesser offense than homicide, if she killed her child while "not fully recovered from the effects of giving birth to the child and by reason thereof or of the effect of lactation consequent on the birth of the child her mind is then disturbed". [184]

England and Wales

In England and Wales, the Infanticide Act 1938 defines the offence of infanticide as an act which would otherwise amount to murder, committed by a mother against her own child under 12 months of age while the balance of her mind was disturbed by the effects of childbirth or lactation. Where a mother who has killed such an infant has been charged with murder rather than infanticide, s.1(3) of the Act confirms that a jury has the power to find alternative verdicts of manslaughter or guilty but insane.

The Netherlands

Infanticide is illegal in the Netherlands, although the maximum sentence is lower than for homicide. The Groningen Protocol regulates euthanasia for infants who are believed to "suffer hopelessly and unbearably" under strict conditions. [ citation needed ]

Romania

Article 200 of the Penal Code of Romania stipulates that the killing of a newborn during the first 24 hours, by a mother who is in a state of mental distress, shall be punished with imprisonment of one to five years. [185] The previous Romanian Penal Code also defined infanticide (pruncucidere) as a distinct criminal offence, providing for punishment of two to seven years' imprisonment [186] and recognizing the fact that a mother's judgment may be impaired immediately after birth, but it did not define the term "infant", which led to debates regarding the precise moment when infanticide becomes homicide. This issue was resolved by the new Penal Code, which came into force in 2014.

United States

State Legislation

In 2009, Texas state representative Jessica Farrar proposed legislation that would define infanticide as a distinct and lesser crime than homicide. [187] Under the terms of the proposed legislation, if jurors concluded that a mother's "judgment was impaired as a result of the effects of giving birth or the effects of lactation following the birth", they would be allowed to convict her of the crime of infanticide, rather than murder. [188] The maximum penalty for infanticide would be two years in prison. [188] Farrar's introduction of this bill prompted liberal bioethics scholar Jacob M. Appel to call her "the bravest politician in America". [188]

Federal Legislation

The MOTHERS Act (Moms Opportunity To access Health, Education, Research and Support), precipitated by the death of a Chicago woman with postpartum psychosis, was introduced in 2009. The act was ultimately incorporated into the Patient Protection and Affordable Care Act, which passed in 2010. The act requires screening for postpartum mood disorders at any point in the adult lifespan and expands research on postpartum depression. Provisions of the act also authorize grants to support clinical services for women who have, or are at risk for, postpartum psychosis. [189]

Sex education and birth control

Since infanticide, especially neonaticide, is often a response to an unwanted birth, [136] preventing unwanted pregnancies through improved sex education and increased contraceptive access is advocated as a way of preventing infanticide. [190] Increased use of contraceptives and access to safe legal abortions [8] [138] : 122–23 have greatly reduced neonaticide in many developed nations. Some say that where abortion is illegal, as in Pakistan, infanticide would decline if safer legal abortions were available. [134]

Psychiatric intervention

Cases of infanticide have also garnered increasing attention from advocates for the mentally ill and from organizations dedicated to postpartum disorders. Following the trial of Andrea Yates, a mother from the United States who drew national attention for drowning her five children, representatives of organizations such as Postpartum Support International and the Marcé Society for Treatment and Prevention of Postpartum Disorders began requesting clearer diagnostic criteria for postpartum disorders and improved treatment guidelines. Although accounts of postpartum psychosis date back more than 2,000 years, perinatal mental illness is still largely under-diagnosed, despite postpartum psychosis affecting 1 to 2 per 1,000 women. [191] [192] As clinical research continues to demonstrate the large role of rapid neurochemical fluctuation in postpartum psychosis, prevention of infanticide points increasingly toward psychiatric intervention.

Screening for psychiatric disorders or risk factors, and providing treatment or assistance to those at risk, may help prevent infanticide. [193] Current diagnostic considerations include symptoms, psychological history, thoughts of self-harm or of harming one's children, physical and neurological examination, laboratory testing, substance abuse, and brain imaging. Because psychotic symptoms may fluctuate, it is important that diagnostic assessments cover a wide range of factors.

While studies on the treatment of postpartum psychosis are scarce, a number of case and cohort studies have found evidence for the effectiveness of lithium monotherapy in both the acute and maintenance treatment of postpartum psychosis, with the majority of patients achieving complete remission. Adjunctive treatments include electroconvulsive therapy, antipsychotic medication, and benzodiazepines. Electroconvulsive therapy, in particular, is the primary treatment for patients with catatonia, severe agitation, or difficulty eating or drinking. Antidepressants should be avoided throughout the acute treatment of postpartum psychosis because of the risk of worsening mood instability. [194]

Though screening and treatment may help prevent infanticide, in the developed world a significant proportion of detected neonaticides occur among young women who deny their pregnancy and avoid outside contact, many of whom may have little contact with health care services. [136]

Safe surrender

In some areas, baby hatches or safe surrender sites, safe places for a mother to anonymously leave an infant, are offered, in part to reduce the rate of infanticide. In other places, such as the United States, safe-haven laws allow mothers to anonymously give infants to designated officials; such sites are frequently located at hospitals, police stations, and fire stations. Additionally, some European countries have laws providing for anonymous birth and confidential birth, which allow mothers to give up an infant after birth. In an anonymous birth, the mother does not attach her name to the birth certificate. In a confidential birth, the mother registers her name and information, but the document containing her name is sealed until the child comes of age. Such babies are typically put up for adoption or cared for in orphanages. [195]

Employment

Granting women employment raises their status and autonomy. Gainful employment can raise the perceived worth of women, which can lead to more women receiving an education and to a decline in female infanticide. As a result, the infant mortality rate decreases and economic development increases. [196]

In other animals

Infanticide has been observed in many other species of the animal kingdom since it was first seriously studied by Yukimaru Sugiyama. [197] These range from microscopic rotifers and insects to fish, amphibians, birds, and mammals, including primates such as chacma baboons. [198]

According to studies of primates carried out by Kyoto University, including certain types of gorillas and chimpanzees, several conditions favor the tendency of males in some species to kill offspring: nocturnal life, the absence of nest construction, marked sexual dimorphism in which the male is much larger than the female, mating confined to a specific season, and a long lactation period without resumption of estrus in the female.


Employment and Economic Traditions

The profile of the Pakistani American today is dramatically different from that of the earliest Muslim immigrants from the Indian subcontinent, who came to the United States as manual and agricultural workers with few skills and little or no education.

Many Pakistani American males who entered the United States after 1965 were highly educated, urban, and sophisticated, and soon found employment in professions such as law, medicine, and academia. In the post-1965 wave of immigration, many Pakistanis also came to America as students who earned graduate degrees that enabled them to pursue successful careers in a variety of fields. Some members of the community immigrated with specific educational backgrounds, in fields such as law, but failed to find positions in those fields because their qualifications and experience did not transfer readily to the American context. They have either retrained in other professions or settled for positions intended for people with lesser educational qualifications. This is the price that some of these immigrants have paid to settle in the United States.

Most of the community today lives a comfortable, middle-class and upper-middle-class existence, although there might be some incidence of poverty among newer uneducated immigrants. These immigrants tend to take low-paying jobs involving manual or unskilled labor and tend to live in big cities where such jobs are readily available. Many Pakistani Americans also own their own businesses, including restaurants, groceries, clothing and appliance stores, newspaper booths, and travel agencies. It is common to include members of the extended and immediate family in the business.

Pakistani Americans tend to follow the residence pattern set by other Americans, in that they move to more affluent suburbs as their prosperity increases. Members of the community believe in the symbolic importance of owning homes; accordingly, Pakistani Americans tend to save and make other monetary sacrifices early on in order to purchase their own homes as soon as possible.

Members of the family and the larger community tend to take care of each other and to assist in times of economic need. Hence, it is more common to turn to a community member for economic assistance than to a government agency. Relatively few members of the community are therefore on welfare or public assistance.


CHURCH AND CHRISTIANITY IN OTHER EUROPEAN COLONIES

Both Portugal and France brought missionaries to the Americas to evangelize the native populations. Moreover, both countries established Catholicism as the official state religion in the American colonies. Beyond this, there were significant differences in Portuguese and French policies towards the native peoples.

The Portuguese introduced commercial plantation agriculture into Brazil and, in the early stages of economic development, relied heavily on Indian slave laborers. The colonists of São Paulo engaged heavily in the trade in Indian slaves, and in the late sixteenth and early seventeenth centuries Paulistas (colonists from São Paulo), also known as bandeirantes, ranged through the interior of South America enslaving Indians. In the 1630s the Paulistas attacked the Jesuit missions in the Río de la Plata region.

African slaves gradually replaced Indian slaves on the plantations. Jesuit missionaries came to Brazil and organized communities of natives called aldeias that were in some ways similar to Spanish frontier missions. However, the aldeias were generally located close to Portuguese settlements and served as labor reserves for the settlers.

The French in Canada, on the other hand, sought profit from the fur trade, and they relied on Indians for trade. Agriculture was developed at only a subsistence level and did not rely on Indian labor. Jesuits and other missionaries established missions for natives in Canada, the Great Lakes region, also known as the Terre Haut, and Louisiana. The Jesuit missions among the Hurons in the 1620s to late 1640s were the most successful, and the Black Robes, as native peoples called the Jesuits, converted about a third of the total Huron population. Sainte Marie des Hurons, located in Ontario, Canada, is a reconstruction of one of the missions. However, conflict between the Huron and the Iroquois led to the destruction of the Jesuit missions.

The state religion of England in the seventeenth century was the Church of England, and by law all residents of England were required to adhere to the doctrine of the church contained in the Book of Common Prayer, which was a compromise between Catholicism and the beliefs of the different Protestant sects. The colonies in North America offered "dissenters" (groups that rejected the doctrine of the Church of England) an opportunity to practice their beliefs free of persecution.

The Calvinists, commonly known as the Puritans, were one group that migrated to North America to practice their religious beliefs without interference. They created a theocracy that endured for some fifty years. The Catholic nobleman Lord Baltimore (Cecil Calvert, ca. 1605–1675) established Maryland in the 1630s as a haven for persecuted Catholics. William Penn (1644–1718), whose father had been an admiral and had connections at court, established Pennsylvania in 1682 for members of the Society of Friends, also known as Quakers, a radical Protestant sect founded by George Fox (1624–1691). Pennsylvania during the colonial period was a haven for persecuted religious minorities. The Amish, a German-speaking Anabaptist group, were one such community that migrated to Pennsylvania to escape persecution in Europe.

Unlike the Spanish, the English did not initiate a systematic campaign to evangelize the native peoples they encountered in North America, and they generally viewed the natives as an obstacle to creating European communities in America. One exception was the effort by Puritan John Eliot (1604–1690) to establish what he called "praying towns" in New England. Eliot first preached to the Nipmuc Indians in 1646 at the site of modern Newton, Massachusetts. In 1650 Eliot organized the first praying town at Natick, also in Massachusetts. By 1675, there were fourteen praying towns, eleven in Massachusetts and three in Connecticut, mostly among the Nipmuc. Eliot also translated the Bible into the native language and published the translation between 1661 and 1663. The outbreak of the conflict between the English and native peoples known as King Philip's War (1675–1677) led to the collapse of the praying towns.

Protestant missions to native peoples continued in the eighteenth, nineteenth, and even into the twentieth centuries. In the second half of the nineteenth and the twentieth centuries, the missions often operated on reservations created by the United States government. Protestant missionaries often ran the schools for native children that attempted to obliterate most aspects of their native culture, which identified the missions with the assimilationist policies of the Bureau of Indian Affairs.

Why did Catholic missions achieve a higher degree of success than did Protestant missions? Three possible explanations have been suggested. The first has to do with the very nature of colonization by the Spanish, French, and English. The Spanish developed a colonial system based on their contacts with advanced sedentary native societies in central Mexico and the Andean region. Their colonial system relied on the exploitation of the native populations, and, as noted above, they gained legitimacy for their conquests from the papal donation that required the evangelization of the native peoples. This, taken with the experience of the reconquista, the drive towards orthodoxy within Iberia in the fifteenth century, and the longstanding crusader ethic, gave rise to the impulse to bring the true faith to the native peoples.

The vision of Europe's Hapsburg monarchs in the sixteenth century only reinforced these tendencies. The Hapsburgs viewed themselves as the defenders of the true faith, and led crusades against the Turkish threat in the Mediterranean world and the growing number of Protestants in central Europe.

Government support for missionaries and for the evangelization of native peoples in the French and English colonies of North America was quite different from that of the Spanish. The French established settlements in the Saint Lawrence River valley, but also engaged in trade with native groups for furs. The French also believed their faith to be superior and to be the only true faith, and felt a responsibility to take that faith to the native peoples. At the same time, the presence of missionaries, particularly Jesuits among the Huron, also facilitated the fur trade.

The English colonies were different from the French and Spanish. The English came to America to firmly implant Europe there. They came to establish towns and farms, and arrived in large numbers and wanted the land that was occupied by the natives. Whereas the Spanish and French had reasons to establish relations with native peoples, the English did not. The American natives occupied lands the English wanted, and the native inhabitants were generally viewed as a threat to the English settlements. Thus the colonial governments did not support missions in the same way that the Spanish and French did.

The nexus of relations between the English and native peoples can be seen in the example of the New England Puritan colonies, as well as early Virginia. The Puritans believed that God had given them the land in New England to exploit, and Puritan leaders were inclined to push native communities aside. The relationship was often violent, as evidenced by the Pequot War of 1636–1637 and King Philip's War. The latter conflict was a desperate attempt by native peoples to preserve their society and culture in the face of aggressive English occupation and the creation of new communities that forced natives off their lands.

In Virginia, the colonization of Jamestown and other new communities was met by resistance from native groups almost from the beginning, resulting in two major conflicts in the 1620s and again in the 1640s. These conflicts, and the general attitude of the English towards native peoples, did not create a climate conducive to the launching of missionary campaigns. Moreover, the English colonists developed generally autonomous local governments that tended to be unsympathetic to evangelization of native peoples.

A second factor was theological. Catholicism was and is a religion with mass appeal, because it offers salvation to those who repent. Moreover, doctrine dictates the baptism of children as soon as possible after birth, because of the belief that children who die unbaptized will be consigned to limbo. Furthermore, a degree of syncretism occurred in Catholic missions established in native communities in central Mexico, the Andean region, and on the fringes of Spanish territory, such as the north Mexican frontier. Syncretism, such as the association by native peoples of old gods with Catholic saints, was a key factor in what the missionaries believed to be the conversion of native peoples to the true faith.

The sixteenth-century Protestant Reformation, on the other hand, introduced new beliefs that did not lend themselves to the conversion of native peoples with cultures that did not have a foundation in Christianity. The Anabaptists, for example, rejected the baptism of newborn children, and instead believed that the acceptance of God's covenant should be a decision made when people could fully understand the decision being made. The Calvinist belief in predestination, the idea that God had already chosen those who would gain salvation and those who would not, also did not lend itself to mass conversion.

Moreover, the seventeenth-century Puritan theocracy in New England, which afforded full church membership only to the "elect" (those who could show that they had God's grace and would gain salvation), was a cause of friction between native peoples in the region and the colonists. The Puritan leadership expected native peoples to live by an alien set of moral and social rules, even if the natives had chosen not to embrace the new faith. This policy contributed to the outbreak of King Philip's War, and it certainly did not make the new religion attractive to native peoples. Puritan leaders did not tolerate any deviation from their teachings, and they did not tolerate the syncretism that facilitated "conversion" in Spanish America.

Finally, demographic patterns undermined evangelization, particularly in Protestant English colonies. In the centuries following the first European incursions into the Americas, native populations declined in numbers because of disease and other factors. Mortality rates were particularly high among children, the segment of the native population in which missionaries placed their greatest hopes for indoctrination.

In the California missions, for example, the Franciscans continued to relocate pagans to the missions while indoctrinating the children and adults already living there. This meant that there were always large numbers of pagans interacting with converts who had already been exposed to varying levels of Catholic indoctrination. These conditions created a climate conducive to the covert survival of traditional religious beliefs. Moreover, infant and child mortality rates were high, and most children died before reaching their tenth birthday. This limited the missionaries' ability to create a core of indoctrinated children in the mission populations.

The United States today is a Christian country because of the imprint of European colonists and their descendants and not because of the conversion of native peoples to the new religion. The trajectory of Spanish colonization established a strong Catholic tradition in much of Latin America.


We're at the end of white Christian America. What will that mean?

America is a Christian nation: this much has always been a political axiom, especially for conservatives. Even someone as godless and immoral as the 45th president feels the need to pay lip service to the idea. On the Christian Broadcasting Network last year, he summarized his own theological position with the phrase: “God is the ultimate.”

And in the conservative mind, American Christianity has long been hitched to whiteness. The right learned, over the second half of the 20th century, to talk about this connection using abstractions like “Judeo-Christian values”, alongside coded racial talk, to let voters know which side they were on.

But change is afoot, and US demographics are morphing with potentially far-reaching consequences. Last week, in a report entitled America’s Changing Religious Identity, the nonpartisan research organization Public Religion Research Institute (PRRI) concluded that white Christians were now a minority in the US population.

Soon, white people as a whole will be, too.

The survey is no ordinary one. It was based on a huge sample of 101,000 Americans from all 50 states, and concluded that just 43% of the population were white Christians. To put that in perspective, in 1976, eight in 10 Americans were identified as such, and a full 55% were white Protestants. Even as recently as 1996, white Christians were two-thirds of the population.


White Christianity was always rooted in the nation’s history, demographics and culture. Among North America’s earliest and most revered white settlers were Puritan Protestants.

As well as expecting the return of Christ, they sought to mould a pious community which embodied their goals of moral and ecclesiastical purity. They also nurtured a lurid demonology, and hunted and executed supposed witches in their midst. These tendencies – to millennialism, theocracy and scapegoating – have frequently recurred in America’s white Christian culture.

Successive waves of religious revival, beginning in the 18th century, shaped the nation’s politics and its sense of itself. In the 1730s, the preacher Jonathan Edwards sought not only the personal conversion of his listeners, but to bring about Christ’s reign on Earth through an increased influence in the colonies.

As the religious scholar Dale T Irvin writes: “By the time of the American revolution, Edwards’s followers had begun to secularize this vision of a righteous nation that was charged with a redemptive mission in the world”.

This faith informed the 19th-century doctrine of manifest destiny, which held that the spread of white settlement over the entire continent was not only inevitable, but just. The dispossession of native peoples, and the nation’s eventual dominance of the hemisphere, was carried out under an imprimatur with Christian roots.

In the late 20th century, another religious revival fed directly into the successes of conservative politics. Preachers like Billy Graham and Jimmy Swaggart – in spectacular revival meetings and increasingly on television – attracted millions of white converts to churches which emphasized literalist interpretations of the Bible, strict moral teachings and apocalyptic expectations.

In the south, the explosion of evangelical churches coincided with a wave of racial reaction in the wake of the civil rights movement. After being a Democratic stronghold, the south became solidly Republican beginning in the early 1970s. The Republican “southern strategy” used race as a wedge issue to attract white votes in the wake of the civil rights movement, but it also proffered a socially conservative message that gelled with the values of the emerging Christian Right.

In succeeding decades, Republicans have used this mix to help elect presidents, put a lock on Congress, and extend their dominance over the majority of the nation’s statehouses. Leaders of the Christian right became figures of national influence, and especially in the Bush years, public policy was directed to benefit them.


The author of The End of White Christian America, Robert P Jones, says it is “remarkable how fast” the trend is moving. In 2008, white Christians were still 50% of the population, so that “there’s been an 11-point shift since Barack Obama’s election”.

According to Jones, there are two big reasons for this shift.

One is “the disaffiliation of young people in particular from Christian churches”. That is, the young are proportionally less Christian than older generations; if the trend continues, the Christian share of the population will keep shrinking.

While two-thirds of seniors are white Christians, only around a quarter of people 18-29 are. To varying degrees, this has affected almost every Christian denomination – and nearly four in 10 young Americans have no religious affiliation at all.

The “youngest” faiths in America – those with the largest proportion of young adherents – are non-Christian: Islam, Buddhism and Hinduism. This reflects the second big driver of white Christian decline: both America and its family of faiths are becoming less white.

The big picture is the steady erosion of America’s white majority. Due mostly to Asian and Hispanic immigration, and the consolidation of already established immigrant populations, white people will be a minority by 2042. This will be true of under-18s as soon as 2023. According to Pew’s projections, in the century between 1965 and 2065, white people will have gone from 85% of the population to 46%.

Perhaps inevitably, this is being reflected in a more diverse religious landscape.


When the first Muslims came to the land that would become the United States is unclear. Many historians claim that the earliest Muslims came from the Senegambian region of Africa in the early 14th century. It is believed they were Moors, expelled from Spain, who made their way to the Caribbean and possibly to the Gulf of Mexico.

When Columbus made his journey to the New World, it is said he took with him a book written by Portuguese Muslims who had navigated their way there in the 12th century.

Others claim there were Muslims, most notably a man named Istafan, who accompanied the Spanish to the New World in the early 16th century as a guide in their conquest of what would become Arizona and New Mexico.

What is clear is the makeup of the first real wave of Muslims in the United States: enslaved Africans, of whom 10 to 15 percent were said to be Muslim. Maintaining their religion was difficult, and many were forcibly converted to Christianity. Any effort to practice Islam or to keep traditional clothing and names alive had to be made in secret. One enclave of African Americans on the Georgia coast managed to maintain its faith until the early part of the 20th century.

Between 1878 and 1924, Muslim immigrants from the Middle East, particularly from Syria and Lebanon, arrived in large numbers, with many settling in Ohio, Michigan, Iowa, and even the Dakotas. Like most other migrants they were seeking greater economic opportunity than in their homelands and often worked as manual laborers. One of the first big employers of Muslims and blacks was the Ford Motor Company; they were often the only people willing to work in the hot, difficult conditions of the factories.

At the same time, the Great Migration of blacks to the North helped encourage a revival of Islam among African Americans and the growth of the African-American Muslim Nationalist Movement that still exists to this day. The hope remains to restore the culture and faith that were destroyed during the era of slavery.

During the 1930s and 1940s, Arab immigrants began to establish communities and build mosques. African-American Muslims had already built their own mosques, and by 1952 there were more than 1,000 mosques in North America.

After 30 years of excluding most immigrants, the United States flung open its doors again in 1952, and an entirely new group of Muslims came from places such as Palestine (many had come in 1948 after the establishment of Israel), Iraq, and Egypt. The 1960s saw waves of Southeast Asian Muslims also making their way to America. Muslims also came from Africa, Asia, and even Latin America.

The estimated number of Muslims in this country varies depending on the source. The American Muslim Council claims 5 million, while the non-partisan Center for Immigration Studies believes the figure is closer to 3 to 4 million followers of Islam. The American Religious Identification Survey by the City University of New York, completed in 2001, put the number of Muslims at 1,104,000.

Over the years, the Nation of Islam gained public prominence due to famous members like Malcolm X and Muhammad Ali. Today, there are more than 1,500 Islamic centers and mosques around the country.

Figures vary, but experts estimate that between four and seven million Americans are Muslim.

Islam is expected to soon be the second largest religion in America. Since the attacks of 9/11, prejudice against Muslims has risen sharply.

Many Muslims have responded by becoming more active in the American political process, striving to educate their neighbors about their religion and history.

