Faith Leaders Unite Against AI Chatbots
The push came with coffee and sheet cake after Sunday services: a group of Grand Forks clergy and lay leaders said they are urging a U.S. House panel on artificial intelligence to establish guardrails for AI chatbots, citing risks to privacy, pastoral care, and civic trust, according to a joint statement organizers shared with Grand Forks Local. Their letter asks the bipartisan House task force studying AI to prioritize protections for children, transparency labels on synthetic content, and clear avenues to report harm.
Framed as a moral appeal rather than a technical brief, the statement grounds its urgency in widely accepted principles of human dignity and truth-telling. It echoes calls from national faith documents such as the Vatican’s 2024 message on artificial intelligence and peace, which warns of the “grave risk of discrimination, intrusion, and social fragmentation” without safeguards, according to the Holy See’s text. Local clergy say the aim is not to ban tools outright but to set baseline standards so “convenience doesn’t outrun conscience,” pointing to the volume of AI-enabled scams and voice-clone robocalls that have already reached parishioners, as documented by the Federal Communications Commission’s consumer alerts on AI-generated calls.
The letter is addressed to House leaders overseeing AI policy and references ongoing federal efforts, including the White House’s 2023 executive order directing agencies to develop safety tests and privacy guidance for AI systems, according to the administration’s fact sheet. Organizers say they want Congress to convert those executive actions into law and to clarify how chatbots should disclose AI use, store user data, and avoid impersonation.
Understanding the Core Concerns
Faith leaders outline three core risks: deceptive content that erodes trust in institutions, privacy violations through data-intensive chat apps, and the blurring of lines between spiritual counsel and automated advice, according to the coalition’s statement. They cite examples of voice-clone fraud targeting seniors and political deepfakes that can mislead voters, issues federal regulators are already flagging. The FCC has warned that AI-generated voice cloning used in unsolicited robocalls is illegal under the Telephone Consumer Protection Act and has moved to expand enforcement, according to the agency’s public notice on AI voice cloning.
Several traditions ground their critique in existing ethical frameworks. Catholic and interfaith signatories point to the “Rome Call” principles—transparency, inclusion, responsibility, impartiality, reliability, and security—as a baseline for any deployment of AI, according to the Rome Call for AI Ethics initiative. Protestant and Jewish leaders in the group emphasize duties to protect the vulnerable and to prevent false witness, arguing that chatbot disclosures and age-appropriate design are concrete expressions of those duties.
At a practical level, the statement distinguishes among today’s tools. Chatbots such as ChatGPT can draft emails, summarize documents, and simulate conversation, powered by large language models trained on vast datasets, according to the National Institute of Standards and Technology’s AI Risk Management Framework. Those same capacities enable realistic impersonation and persuasive disinformation when misused, which is why the group urges transparency labels, audit trails for high-risk uses, and clear redress when systems cause harm.
Impact on Community and Society
In daily life, AI chatbots are increasingly embedded in customer service portals, health intake forms, and classroom study aids; national usage data from Pew Research Center show roughly one in five U.S. adults had used ChatGPT by mid-2023. College towns like Grand Forks, where universities and startups pilot new tools, are part of that trend. For small businesses on DeMers Avenue and Gateway Drive, the appeal is speed: chat assistants can answer routine questions and draft marketing copy, reducing costs. For residents, the downside shows up in inboxes and on phones, in unsolicited pitches, cloned voices, and hard-to-verify claims, problems federal agencies say are rising with generative AI.
Local educators note that AI’s benefits in tutoring and accessibility must be balanced with academic integrity and data privacy, themes the University of North Dakota has highlighted as it expands teaching and research on autonomy and human-centered technology, according to UND’s public communications. Healthcare and social-service providers also see promise in translation and triage, but they face new duties to validate outputs, protect sensitive records, and maintain the human relationships at the core of care, consistent with NIST’s risk framework guidance on governance and monitoring.
Local Impact: Grand Forks
Congregations report increased questions from families about AI tools used in schoolwork and social media, and they are seeking plain-language guidance, according to the organizers’ statement.
Small businesses and nonprofits say they want clarity on when to disclose chatbot use to customers and donors, aligning with transparency practices recommended by NIST.
Military families and students in a high-mobility community worry about identity theft and impersonation scams tied to AI voice cloning, a trend the FCC and state attorneys general have warned about.
Legislative and Political Landscape
Congress has not enacted a comprehensive AI law, but the House formed a bipartisan task force to develop recommendations and has held hearings on transparency, child safety, and election integrity, according to national news reporting and committee schedules. The White House’s 2023 executive order instructs agencies to set testing, watermarking, and privacy guidance for advanced AI models, while the Federal Trade Commission has warned companies to keep their “AI claims in check” and avoid unfair or deceptive practices, according to FTC business guidance. The FCC has stated that AI voice-clone robocalls violate the TCPA, giving state and federal enforcers a clearer path to act against bad actors, per the agency’s notices.
Chatbots sit at the intersection of these efforts. Legislation on labeling synthetic content, authenticating voices, and safeguarding minors’ data could move through the House Energy and Commerce or Judiciary panels, informed by the House AI task force’s work, according to committee jurisdictions. Nationally, regulators are also weighing disclosures for political ads that use AI; the Federal Election Commission has sought public comment on whether to treat deceptive AI content in campaign materials as a prohibited “fraudulent misrepresentation,” according to the FEC’s rulemaking docket.
North Dakota officials typically coordinate consumer protection on robocalls and fraud through the Attorney General’s office, which encourages residents to report scams and identity theft, according to the agency’s consumer resources. Local leaders say that federal standards on chatbot transparency and age-appropriate design would give state and local enforcers clearer tools to act.
What’s Next for Faith Leaders and Policymakers?
The Grand Forks coalition says it will brief congregations on AI basics, host community Q&A sessions with technologists, and share model “digital dignity” guidelines that parishes, synagogues, and nonprofits can adopt, according to the organizers’ statement. They also plan to request meetings with the North Dakota congressional delegation to discuss transparency labeling, stronger privacy protections for teens, and disclosures when digital assistants are used in public services.
On the policymaking front, House leaders continue to solicit testimony from technologists, civil-society groups, and industry on chatbot safeguards, while agencies implement the 2023 executive order’s directives on safety testing and government procurement, according to the White House and committee calendars. Advocates in Grand Forks say pairing federal rules with local digital-literacy efforts can reduce harm without stifling innovation that benefits classrooms, clinics, and small firms.
Call to Action and Continued Discussion
Faith leaders are inviting residents to participate in upcoming forums on AI ethics and safety, with dates to be posted through congregational newsletters and community calendars, according to the coalition’s statement. Residents can track federal developments, submit comments to regulators, and contact North Dakota’s congressional offices to share views on chatbot transparency and youth protections.
If you’ve experienced an AI-related scam or deceptive call, report it to the North Dakota Attorney General’s Consumer Protection division and the FCC, which aggregate complaints to help prioritize enforcement. For students and families, UND’s news office and campus units will post guidance on classroom use of AI tools and digital safety as policies evolve.
Helpful links:
White House Executive Order on AI — what federal agencies are doing: whitehouse.gov
NIST AI Risk Management Framework — voluntary guidance for organizations: nist.gov
FCC AI Voice Cloning and Robocalls — why some calls are illegal and how to report: fcc.gov
FTC: Keep your AI claims in check — marketing and fairness basics: ftc.gov
FEC request for comment on AI in political ads — rulemaking overview: fec.gov
ND Attorney General Consumer Resources — file a complaint or get help: attorneygeneral.nd.gov
City of Grand Forks alerts and meetings: grandforksgov.com
UND news and events on technology and research: und.edu/news
What to Watch
House committees are expected to continue AI hearings as the task force refines recommendations that could shape disclosure and child-safety requirements for chatbots in the months ahead. Federal agencies will roll out additional guidance tied to the 2023 AI executive order, including safety testing and labeling practices. Locally, watch for interfaith forums and UND-hosted panels that translate national rules into practical steps for families, students, and small businesses.