Regulatory Compliance for SaaS and AI: How HIPAA, NYDFS, and TRAIGA Are Forcing a Security Reset
Aug 12, 2025
Each update introduces new obligations centered on SaaS security, including identity, inventory, purpose, and oversight. Here, we break down the major changes and what they mean for regulatory compliance.
SaaS and AI adoption is outpacing most security teams, and regulators are now racing to close the gap.
Over the past year, a wave of updates from HIPAA, the New York Department of Financial Services (NYDFS) Cybersecurity Regulation, and the proposed Texas Responsible Artificial Intelligence Governance Act (TRAIGA) has sent a clear message: it’s time to take control of how SaaS and generative AI apps are discovered, governed, and used.
The regulatory compliance updates aren’t just about risk (although there is plenty there); they’re also about restoring oversight. Most organizations lost visibility when users started choosing their own apps, connecting AI tools to company data, and working beyond the bounds of security’s reach. Now, the burden is on CISOs, SecOps, and governance teams to fix that—and quickly.
If the past decade was the era of “move fast and break things” (thank you, Mark Zuckerberg), these new controls are about understanding what broke and who’s responsible. Regulations now demand comprehensive asset inventories, app usage justifications, consistent enforcement of MFA, and formal impact assessments for high-risk SaaS and AI tools. These aren’t small tweaks. They reflect a broader shift toward identity-aware governance in a SaaS environment that security teams didn’t design.
Why Are These Regulatory Updates Happening Now?
Regulators don’t usually move fast. When they do, it’s because something fundamental has shifted, and that’s exactly what’s happening now. How organizations adopt software, manage identities, and interact with AI has changed faster than the frameworks designed to oversee them. These updates aren’t arbitrary; they’re an attempt to realign policy with shifts in digital environments and the risks that stem from them.
SaaS and GenAI: The New Governance Headache
Every new SaaS app or AI tool spun up by a business unit or individual user quietly reshapes your risk surface. More often than not, no one’s watching, no one’s approving, and no one’s prepared.
Grip’s 2025 SaaS Security Risks Report found that organizations use up to 8x more SaaS apps than they’ve sanctioned. Consider the shadow tenants, unmonitored browser extensions, shadow SaaS, and shadow AI tools likely connected to your sensitive systems. These tools don’t just introduce risk; they also disrupt governance. You can’t enforce controls or compliance policies on assets you don’t know about.
Case in point: In 2023, Microsoft’s red team accidentally exposed 38TB of internal data through a misconfigured shared access signature (SAS) token in an AI training environment. What made it especially serious wasn’t just the technical misstep. It was the absence of any governance guardrails that might have caught it: no policies for token use, no oversight of data exposure, and no visibility into how the AI environment was managed. That’s the part regulators are now aiming to fix.
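To make the failure mode concrete, here’s a minimal sketch using the azure-storage-blob Python SDK. The account and container names are hypothetical, and the “risky” token is an illustration of the pattern, not a reconstruction of Microsoft’s actual configuration; the point is how little separates a bounded token from a standing liability.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Hypothetical names; in practice, pull the key from a secrets manager.
ACCOUNT_NAME = "aitrainingdata"
CONTAINER = "model-checkpoints"
ACCOUNT_KEY = "<storage-account-key>"

# Risky pattern: broad permissions and a far-future expiry on a URL that
# ends up shared widely. A token like this is effectively standing access.
risky_sas = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, write=True, list=True),
    expiry=datetime(2051, 1, 1, tzinfo=timezone.utc),  # decades away
)

# Safer pattern: read-only and short-lived, so exposure stays bounded
# even if the URL leaks.
scoped_sas = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
```

Governance is what catches the difference: a policy that caps SAS lifetimes, plus an owner who reviews issued tokens, might have flagged the first pattern long before 38TB was at stake.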
Attackers Love Unsanctioned Apps
If there’s one thing attackers love more than an insecure app, it’s an unsanctioned one. Once they compromise an identity, they don’t care whether the app it accesses was approved by IT—if it holds sensitive data, it’s a target. Credentials, tokens, browser extensions, and orphaned access have all been exploited in recent breaches, especially across the sprawling, unmonitored SaaS ecosystem.
93% of organizations experienced two or more identity-related breaches in the past year.
According to CyberArk’s 2024 Identity Security Threat Landscape Report, 93% of organizations experienced two or more identity-related breaches in the past year. Many of those incidents may have started with poor password practices, but the bigger issue is that IT never had the chance to enforce stronger controls in the first place. When users adopt SaaS or AI tools without involving security, there’s often no MFA, no SSO, and no visibility. The legacy technology in place at most companies isn’t designed for this environment, and regulators are stepping in to force a reset.
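A first pass at finding those gaps doesn’t require new tooling. Here’s a minimal sketch, assuming a hypothetical identity-provider export with user, app, and mfa_enrolled columns:

```python
import csv

def find_mfa_gaps(export_path: str) -> list[tuple[str, str]]:
    """Flag (user, app) pairs where MFA isn't enforced.

    Assumes a hypothetical IdP export with columns:
    user, app, sso_enabled, mfa_enrolled.
    """
    gaps = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["mfa_enrolled"].strip().lower() != "true":
                gaps.append((row["user"], row["app"]))
    return gaps

for user, app in find_mfa_gaps("idp_export.csv"):
    print(f"No MFA: {user} -> {app}")
```

The catch is that an IdP export only covers apps routed through the IdP. Tools adopted outside security’s reach never appear in it, which is exactly the visibility gap regulators are targeting.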
Hackers and Regulators Agree: It’s Not Just What, It’s Why
While risk is driving urgency, governance is what these updates are really about. TRAIGA, for instance, doesn’t just ask whether an AI tool is “high risk.” It starts by asking a more basic question: Why are you using it at all? The framework requires organizations to document the tool’s intended purpose, how decisions are being made around adoption, and who’s responsible for oversight. Only after that comes the second step: evaluating the risk it poses to the organization based on access, usage patterns, and potential downstream impact.
It’s governance first, then risk prioritization—and that’s a significant shift.
HIPAA’s updates do the same. They strengthen MFA and encryption requirements but also insist on risk-based access policies and updated inventories—two governance functions that often lag behind technical security in maturity. In other words, regulators aren’t just asking whether you’re protecting data. They want to know whether you understand which tools have access to it, why they’re being used, and who’s responsible for keeping them in check.
Regulatory Compliance Changes: A Focus on SaaS and GenAI
The specifics vary by framework, but the direction is consistent: SaaS and AI are being brought under formal regulatory control, not just in Washington but at the state level, too.
HIPAA is expanding its Security Rule to require tighter access controls and software oversight. NYDFS has finalized a major update to its Part 500 Cybersecurity Regulation, with additional requirements taking effect through 2025. And proposed state legislation like TRAIGA sets a precedent: the regulation of AI tools won’t stop at the federal level.
NIST’s guidance remains an important foundation: It introduced many of the core identity and access principles now being codified into law. However, regulations like HIPAA, NYDFS, and TRAIGA bring those principles into active enforcement.
Despite their different scopes, the updates converge on the same SaaS security obligations: identity, inventory, purpose, and oversight.
Below, we break down the major changes and what they mean for regulatory compliance.
HIPAA 2025: Zeroing in on SaaS
The 2025 HIPAA Security Rule updates introduce stricter authentication and usage-based controls in direct response to the rise of SaaS-based health tech and AI-powered diagnostics.
Key elements:
Mandatory MFA for cloud-based systems accessing ePHI
Risk-based access policies that factor in app sensitivity and user role
Ongoing inventories of all assets, including SaaS and AI apps
Assessment of AI tools used in treatment or administrative workflows to ensure they meet the same privacy and security standards as traditional systems
Removal of extraneous software
For healthcare organizations and business associates, this signals a shift away from static controls toward more dynamic, usage-aware governance.
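To illustrate what “usage-aware” means in practice, here’s a minimal sketch of a risk-based access decision. The roles, sensitivity tiers, and rules are hypothetical; in a real program they’d come out of your documented risk analysis.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str        # e.g., "clinician", "billing", "contractor"
    app_sensitivity: str  # "low", "medium", or "high" (high = touches ePHI)
    mfa_verified: bool

def evaluate_access(req: AccessRequest) -> str:
    """Return "deny", "step_up" (force MFA now), or "allow"."""
    if req.app_sensitivity == "high":
        # Contractors don't reach ePHI-adjacent apps directly.
        if req.user_role == "contractor":
            return "deny"
        # Everyone else needs verified MFA for high-sensitivity apps.
        if not req.mfa_verified:
            return "step_up"
    return "allow"

print(evaluate_access(AccessRequest("clinician", "high", False)))  # step_up
```

The point isn’t the specific rules; it’s that the decision factors in both the app and the user, which static allow-lists can’t do.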
NYDFS Part 500: Enforcement-Backed Oversight for Financial Services
The New York Department of Financial Services amended its Cybersecurity Regulation in late 2023; several major provisions, including those affecting SaaS and cloud security, go into effect in 2025.
Requirements taking effect November 1, 2025:
Mandatory MFA for all privileged and remote access to systems, including third-party and SaaS tools that handle non-public information
Comprehensive asset inventories, including cloud-based and third-party applications, with documentation of ownership, review frequency, and update history
Stricter access controls, including justification for privileged accounts and removal of unnecessary access
Monitoring and vulnerability scanning for all systems in scope
Although the regulation doesn’t explicitly name “shadow SaaS,” the inventory and access requirements effectively require that all tools—whether sanctioned or not—are documented and managed.
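As a rough illustration, an inventory record that satisfies these requirements has to carry more than an app name. The sketch below is hypothetical, not a prescribed NYDFS schema, but it captures the fields the regulation calls out: ownership, review cadence, and update history.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AssetRecord:
    name: str               # e.g., "Salesforce" or an unsanctioned AI notetaker
    owner: str              # the accountable business or IT owner
    sanctioned: bool        # shadow tools get a record too
    handles_npi: bool       # touches non-public information (in scope)
    review_frequency_days: int
    last_reviewed: date
    update_history: list[str] = field(default_factory=list)

    def review_overdue(self, today: date) -> bool:
        """Flag assets whose documented review cadence has lapsed."""
        return (today - self.last_reviewed).days > self.review_frequency_days
```

A record like this only works if something keeps it current, which is why discovery and inventory are inseparable under the amended rule.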
For financial services firms, FinTechs, and insurers, NYDFS now represents one of the most prescriptive and enforceable cybersecurity regulations in the U.S.
TRAIGA: AI Governance Takes Center Stage
TRAIGA is a recognition that AI systems now make or influence decisions about people, data, and systems in ways that require formal accountability. While TRAIGA is still working its way through the Texas Legislature, Colorado has already enacted similar legislation, making it the first U.S. state to codify AI governance into law. Colorado’s requirements closely mirror TRAIGA’s proposed framework, a sign that a common standard is taking shape state by state, with California and Utah expected to follow soon.
What TRAIGA introduces:
AI impact assessments: Organizations must identify which systems are “high risk” and explain how decisions are made, reviewed, and governed.
Purpose justification: Businesses must document why a tool is in use, how it aligns with intended outcomes, and whether safer alternatives exist.
Ongoing monitoring: Risk isn’t static, especially with AI models that evolve over time. TRAIGA requires visibility into how tools perform in practice, not just on paper.
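Taken together, these three requirements map naturally onto one assessment record per AI system. Here’s a hypothetical sketch, where the field names and the quarterly review cadence are illustrative rather than anything TRAIGA prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIImpactAssessment:
    system_name: str
    intended_purpose: str              # why the tool is in use at all
    high_risk: bool                    # influences decisions about people?
    oversight_owner: str               # who is accountable for governance
    safer_alternatives_reviewed: bool  # purpose justification, documented
    last_monitoring_review: date       # models drift; reviews must recur

def needs_reassessment(a: AIImpactAssessment, today: date,
                       max_age_days: int = 90) -> bool:
    """In this sketch, high-risk systems are re-reviewed at least quarterly."""
    return a.high_risk and (today - a.last_monitoring_review).days > max_age_days
```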
For organizations, AI governance is no longer optional or confined to data scientists. It’s becoming a shared responsibility across compliance, legal, and security teams.
The Pattern: From Security to Accountability
The trend in HIPAA, NYDFS, and TRAIGA updates is the same: security controls are now the baseline, not the endgame. Regulators want to know who made the decision to adopt a tool, how it was evaluated, whether it was necessary, and what risks it introduces, not just whether it was patched or encrypted.
These changes aren't cosmetic. They demand better coordination across IT, security, privacy, and compliance because no single team owns SaaS or AI anymore.
What the Regulatory Updates Mean for Security Teams
These regulatory updates don’t just reflect a changing risk surface. They reflect a shift in responsibility. It’s no longer enough to secure systems; you have to explain them. Justify their use. Track who’s using what, and why. That means inventory, identity mapping, and access governance aren’t just technical goals anymore. They’re compliance requirements.
With SaaS and AI tools, that’s especially hard to do manually. What’s in use changes constantly. Controls get bypassed. Justifications get lost. Grip closes that gap by automatically discovering SaaS and AI apps across the organization, scoring their risk, and surfacing missing security controls, like absent MFA, users who bypass it, or abandoned accounts still connected to core systems. Grip enables you to monitor usage over time, capture business justifications, and get remediation guidance when something falls out of policy or below your risk tolerance. The same foundations that reduce SaaS and AI risk also prepare you for regulatory compliance and audits. You don’t need two separate approaches. You just need one that works.
Want help turning regulatory requirements into action? Download the SaaS & AI Security Compliance Cheat Sheet for a breakdown of what each framework expects and how to meet those expectations without slowing down your team.