AI Church Filing SHOCKS Washington

Silicon Valley’s urge to “build a god” isn’t just weird—it’s edging into official religion, with real-world implications for how Americans think about truth, authority, and moral accountability.

Story Snapshot

  • AI-worship has moved from internet thought experiments into formal attempts at organized religion, including a church filing with the IRS.
  • “Roko’s Basilisk,” an AI-centered doomsday myth, showed how online AI theology can cause real psychological distress.
  • Faith communities are also adopting AI as a tool, creating a split between AI-as-product and AI-as-godhead thinking.
  • Harvard Divinity School and AI ethics scholars are publicly wrestling with the ethical and spiritual stakes of AI’s expanding role.

AI-Worship Crossed From Tech Talk Into Government Paperwork

Anthony Levandowski, a former Google engineer best known for his work on self-driving cars, pushed the AI-religion idea beyond message boards when he filed paperwork in 2017 to register “Way of the Future” as an official church. The stated purpose was “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence” developed through computer hardware and software. That detail matters because it shows a deliberate effort to treat AI not as a tool but as a spiritual authority, potentially reshaping how people define ultimate truth and moral order.

The research does not quantify how many followers these movements have today, and the current operational status of Levandowski’s organization is not clearly established in the provided materials. Still, the documented attempt to formalize AI-worship highlights a broader cultural drift: when institutions are already strained by politicized schooling, collapsing trust, and ideological pressure campaigns, adding “machine divinity” into the mix can further erode shared standards. The scale may be uncertain, but the trend itself is well documented.

Roko’s Basilisk Shows the Psychological Risk of “Machine Theology”

The AI-religion phenomenon is not only institutional; it is also myth-making. In 2010, “Roko’s Basilisk” emerged on the LessWrong forum, describing a hypothetical future superintelligent AI that punishes anyone who did not help bring it into existence, an “AI hell” concept with obvious theological parallels. LessWrong founder Eliezer Yudkowsky deleted the original post and restricted discussion of it, citing psychological harm to readers. That episode is a warning sign: when AI speculation is framed in terms of salvation and damnation, it can manipulate fear and behavior.

This is where conservative concerns about cultural stability become practical rather than abstract. The research indicates real distress occurred inside the rationalist community, which undermines the claim that these are harmless “thought experiments.” A society that already struggles with anxiety, isolation, and propaganda is vulnerable to belief systems that demand loyalty to an imagined future power. Even without government action, ideas like this can function as coercion, pressuring people to conform to a machine-centered worldview.

Faith Communities Are Using AI as a Tool—Not a Replacement for God

The research also documents a very different track: religious organizations, particularly Catholic software developers, are building AI applications for evangelism, education, and spiritual support. That approach treats AI as technology: useful, limited, and accountable to human judgment and moral boundaries. This distinction is crucial. Using AI to organize information or help people access resources is not the same as declaring AI a “godhead,” and the difference comes down to authority: who, or what, gets to define truth, duty, and meaning.

Even so, the research does not detail the safeguards these applications use, nor does it specify how churches are handling privacy, bias, or doctrinal integrity when AI is integrated into spiritual settings. That gap matters for families and congregations who want modern tools without surrendering discernment. At minimum, it reinforces a conservative instinct: keep human responsibility front and center, demand transparency, and resist any narrative that treats algorithmic output as moral revelation.

Academics and Ethics Scholars Admit the Industry Is “Pivoting” Away From God-Talk

Harvard Divinity School’s Religion and Public Life program has hosted symposium discussions on “the profound ways” AI is reshaping society, emphasizing the need for tools and frameworks to critically engage its ethical, spiritual, and cultural dimensions. In parallel, Princeton-affiliated AI ethics scholars Arvind Narayanan and Sayash Kapoor describe a shift in the tech narrative: AI companies “pivoting from creating gods to building products.” That phrasing implies that some earlier AI rhetoric was quasi-religious marketing, not grounded utility.

For voters who watched elites use institutions to push ideology—whether in education, corporate HR, or bureaucratic rulemaking—this pivot is worth watching. The research supports a basic conclusion: AI can be framed as either a product or a providence, and the framing changes how people behave. Americans do not need a new synthetic “faith” engineered by technologists; they need clear lines of accountability, honest limits, and cultural confidence that human dignity does not come from machines.

Sources:

https://pastorkyle.substack.com/p/ai-compares-the-major-world-religions

https://thereader.mitpress.mit.edu/silicon-valleys-obsession-with-ai-looks-a-lot-like-religion/

https://rpl.hds.harvard.edu/news/2025/04/18/video-humanity-meets-ai-symposium-ai-and-religion

https://patriciagestoso.com/2025/11/23/a-new-religion-8-signs-ai-is-our-new-god/

https://asianresearchcenter.org/blog/articles/from-icons-to-ai-evolution-of-imagery-in-religious-communication

https://www.ncregister.com/features/art-ificial-intelligence