Galson Insights: AI, Cyber, and Emerging Tech Trends

What No One Is Saying About the Big New AI Law

Written by Christopher Richardson | Jun 24, 2025 9:01:52 PM

What is the new U.S. AI bill? 

A provision in a new federal bill would put a 10-year pause on state-level AI regulation in the U.S. The goal, according to lawmakers, is to give companies the freedom to innovate without being bogged down by red tape. On the surface, that sounds great.

But as Galson researchers focused on risk, policy, and technology leadership, we believe there’s more to the story, and it directly impacts how your business adopts AI responsibly. 

Why is the AI regulation pause a risk to businesses? 

Many are calling this bill a “win” for innovation. But here’s the truth: 

  • Without oversight, AI systems can introduce serious risk, especially in industries like healthcare, finance, HR, and government contracting. 
  • Bias and privacy issues go unchecked, leaving your business open to lawsuits, customer backlash, or compliance blowback down the line. 
  • The biggest tech firms benefit most. They have the power and lawyers to navigate the gray areas. Smaller and mid-sized companies? You're on your own. 

This isn’t just about what the bill says; it’s about what it doesn’t do. No AI regulation doesn’t mean no risk.

What are the hidden concerns behind the bill? 

Several lawmakers bought stock in major AI firms, like Palantir, shortly before or after supporting this bill. That doesn’t prove wrongdoing, but it raises a question we believe every business leader should ask: 

Are public policies being shaped for innovation, or for investor return? 

When policies protect market power but don’t address ethical and operational risk, responsible companies are left holding the bag.

What should business leaders do in response? 

  1. Strengthen your internal AI oversight now

Even without new laws, your company still needs a system to monitor: 

  • Bias in AI decisions (especially for hiring, customer targeting, credit scoring, etc.) 
  • Data collection and privacy risks 
  • System reliability and explainability 

Build or partner for an AI audit process. Now, not later. 
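To make the audit idea concrete, here is a minimal sketch of one such check: an adverse-impact ratio on AI-assisted screening decisions, written in plain Python. The record format, the group labels, and the 0.8 threshold (the common "four-fifths rule" heuristic) are illustrative assumptions, not a prescribed framework.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Selection rate per group from (group, selected) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def adverse_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest.

    Values below ~0.8 (the "four-fifths rule" heuristic) are a common
    signal that a decision process deserves closer human review.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical records: (group label, did the AI screen advance the candidate?)
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
ratio = adverse_impact_ratio(records)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: selection rates differ substantially by group.")
```

The same pattern scales from a spreadsheet to a production pipeline: log the decisions, compute a simple fairness or reliability metric on a schedule, and route anomalies to a human.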

  2. Use privacy regulations as your baseline

Leverage existing laws like: 

  • GDPR (EU) 
  • CCPA/CPRA (California) 
  • HIPAA or GLBA (for health and financial data) 

Just because AI is unregulated doesn’t mean data use is. 

  3. Demand clarity from vendors and partners

Ask clear questions: 

  • How was the AI trained? 
  • Can you explain the outcome it produced? 
  • What safeguards are in place? 

Vendors who can’t answer these are a risk to your reputation and bottom line. 

Why does this matter for the future of your business? 

AI isn’t a siloed tool. It will touch every part of your company. Marketing. HR. Finance. Operations. If you don’t have a clear approach to AI governance, you’re not just lagging; you’re exposed.

At Galson, we help leaders cut through the noise. Our stance is simple: 

  • Innovation without accountability is not innovation. 
  • Responsible businesses shouldn’t wait for regulation to do the right thing. 
  • The winners will be the ones who prepare now. 

TL;DR: Executive Summary 

  • A new U.S. bill would pause state-level AI regulation for 10 years.
  • This benefits large tech companies but creates risk for everyone else. 
  • Without oversight, your business could face bias, legal exposure, or lost customer trust. 
  • Start managing AI risk today using internal audits, data privacy compliance, and better vendor accountability. 

Next Steps: How Galson Research Can Help 

Galson Research helps leaders: 

  • Evaluate AI tools and vendors for risk and trustworthiness 
  • Pressure-test internal systems using our AI Risk Audit Framework
  • Develop governance plans to prepare for future AI regulation 

Let’s make tech make sense before it costs you. 

Originally authored by Susanna Cox. Adapted for Galson Research by our editorial team.