Today, we’re going to look at a provision in the OBBBA that has recently stirred a lot of controversy. Marjorie Taylor Greene railed against this provision, which imposes a 10-year federal ban on states and local governments from enacting or enforcing any laws regulating artificial intelligence (AI) systems, models, or automated decision-making tools involved in interstate commerce. She said she would not have voted for the bill if she had known that this provision was in it.
Of course, this means she voted for the bill without reading it or knowing what was in it, so that’s on her. It also highlights a point I complain about frequently: megabills are a way for politicians to slip through all kinds of pork and unpopular provisions. Each provision should be its own individual bill that politicians vote on, which would create greater transparency in the entire process. But that’s not what this article is about. I’m here to talk about the AI provision, so let’s do that.
What the Provision Actually Does
The AI provision in the OBBBA doesn’t just nibble around the edges of state authority; it takes a sledgehammer to it. Under this provision, for the next ten years, states and local governments are flat-out banned from passing or enforcing any laws or regulations that touch artificial intelligence systems involved in interstate commerce. Now, that phrase—interstate commerce—might sound narrow, but in today’s world, it covers practically everything short of your grandma’s knitting club.
Whether it’s a major corporation using AI to filter job applicants, hospitals using machine learning to diagnose patients, or police departments rolling out facial recognition and predictive crime models, if the tool is connected to the internet, crosses a state line, or deals with a national company, it’s hands-off for the states. Washington, D.C. just put up a giant “Do Not Touch” sign on the entire AI landscape and nailed it down with federal law for a full decade.
Let’s put this in plain English. Your town wants to pass a law banning facial recognition tech in public schools or by local cops? Nope. Want to regulate AI-driven landlord software that might be discriminating against renters in your state? Sorry. Want to require companies to disclose when a robot, not a person, made a decision about your healthcare, job application, or bank loan? You’re out of luck. Uncle Sam says those decisions belong to the feds now, and the states can go pound sand.
This is a sweeping federal power grab, cloaked in the language of “innovation” and “efficiency.” But what it really does is strip away the right of local communities to protect their people from technological overreach, bias, and abuse. It says that if some tech lobbyist in Washington thinks a system is good for the nation, your state doesn’t get a say.
This isn’t about protecting innovation; it’s about protecting powerful interests. Big Tech firms don’t want fifty different sets of rules. They want one soft-touch federal framework they can influence with lobbyists and campaign checks. Meanwhile, everyday Americans are left defenseless against systems they neither control nor understand.
In short, this provision tells states, “Sit down, shut up, and let the machines run the show.” That’s not just bad policy; that’s the opposite of self-governance. And it’s certainly not in line with the biblical model of leaders serving and protecting their communities.
The Case for Control
Now, to be fair, there are some arguments being tossed around in defense of this provision. The main one? Consistency. The tech giants and their cheerleaders in D.C. argue that if every state has its own patchwork of AI laws, innovation will grind to a halt. Imagine a software company having to navigate 50 different legal codes just to release one update. They say that’s a recipe for chaos.
And sure, there’s a kernel of truth in that. Technology, especially AI, doesn’t stop at state lines. If your data is zipping through servers in Ohio, bouncing off cloud platforms in Texas, and running analytics in California, it’s pretty hard to call that “local.” So, the argument goes: better to have one national rulebook than a bureaucratic swamp of conflicting regulations.
Then there’s the economic angle. Proponents claim that if we slow down AI development here in the U.S., China will outpace us and dominate the global tech future. That’s become the rallying cry: “We can’t let China win.” So, to win, we’ve got to cut red tape, let the geniuses build, and stay out of their way.
And finally, libertarian-minded folks might look at this and think, “Hey, fewer laws, more freedom. Isn’t that what we want?” They’re not wrong to want less government interference, but the devil, as always, is in the details. And in this case, the devil might be in the algorithm.
Why This Provision Is a Bad Deal for America
Now let’s talk about the real-world concerns, the ones the Beltway crowd would rather you ignore.
First, this is a direct hit on federalism. The Tenth Amendment wasn’t a suggestion; it was a design feature. It’s there to make sure that the states can serve as laboratories of democracy, responding to their citizens’ unique needs. When the federal government grabs power like this, it doesn’t just stifle local governance, it undermines the very structure of our republic.
Second, there’s the issue of accountability. AI systems are being used to make decisions that were once made by humans, real people with names, responsibilities, and the ability to be questioned. When an algorithm decides who gets a job, a loan, or even medical care, and the state can’t step in to question that process or provide guardrails, we’re headed for a world with a whole lot of injustice and very little recourse.
The Bible reminds us that “a false balance is abomination to the Lord: but a just weight is his delight” (Proverbs 11:1). If AI systems are being used to weigh the worth of human beings, but no one is allowed to check whether those weights are just, that’s not freedom; that’s tyranny in a hoodie and sneakers.
Third, Big Tech is not a neutral actor. These are the same folks who’ve proven they can’t be trusted with your data, your privacy, or your children’s attention spans. They’ve shown time and again that they care more about profits and power than people. And now we’re giving them a decade-long holiday from meaningful oversight? That’s not deregulation; that’s abdication.
Making AI Serve People—Not Rule Them
This provision is not only misguided but dangerous. It concentrates too much power in the hands of too few people, removes vital checks and balances, and turns local government into little more than a spectator while unelected tech elites shape the moral fabric of our future.
God did not create us to be ruled by faceless algorithms or unreachable bureaucrats. He created us to live in communities, accountable to one another, guided by truth, justice, and wisdom. As Christians, we are called to defend the vulnerable and speak truth to power, even if that power comes wrapped in silicon and startup buzzwords.
We can and should pursue innovation. But that pursuit must be tempered by morality, transparency, and the right of each community to govern itself. If we surrender that right for the illusion of convenience and progress, we won’t just lose control of our technology; we’ll lose part of our humanity.
It’s time for the Senate to fix this mess. This is one swamp creature that needs to be dragged out into the daylight. They need to fix or remove this provision, preserve local authority, and make AI serve the people, not rule them.
Discover more from The Independent Christian Conservative