South Carolina-based Sandhills Medical Foundation disclosed a ransomware attack affecting nearly 170,000 patients, while dental software provider Practice by Numbers fixed a bug that exposed private health records across thousands of practices. The incidents highlight growing cybersecurity vulnerabilities as healthcare organizations increasingly adopt AI-powered patient management systems.
According to Sandhills Medical’s security notice, the organization discovered the ransomware attack on May 8, 2025, but did not disclose the breach publicly until nearly a year later. The Inc Ransom group claimed responsibility and made stolen files available for download on its leak site in June 2025.
Scope of Healthcare Data Compromised
The Sandhills Medical breach exposed a wide range of patient data, including names, dates of birth, Social Security numbers, driver’s license numbers, government-issued identification, passports, financial information, and personal health information. SecurityWeek reported that the Maine Attorney General’s Office confirmed 169,875 individuals were affected.
Separately, Practice by Numbers, which provides patient management software to more than 5,000 dental practices nationwide, patched a security flaw that allowed portal users to access other patients’ medical documents. According to TechCrunch’s investigation, the flaw was a classic insecure direct object reference (IDOR): because the system assigned sequential numbers to patient files, an attacker could retrieve other patients’ documents simply by changing the document number in the web address.
The dental software bug exposed patient medical histories, photo identification, and personal information across thousands of practices before being fixed following media disclosure.
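IDOR flaws of this kind are typically closed with two complementary defenses: unguessable, non-sequential document identifiers, and a server-side ownership check on every request. The sketch below is illustrative only, with hypothetical function names and an in-memory store — not Practice by Numbers’ actual code:

```python
import uuid

# Hypothetical in-memory document store keyed by random, non-guessable IDs.
# Two defenses shown: (1) non-sequential identifiers, so attackers cannot
# enumerate documents by incrementing a number, and (2) a server-side
# ownership check, so even a leaked ID is useless to another user.
DOCUMENTS = {}

def store_document(owner_id: str, contents: str) -> str:
    """Save a document under a random UUID and return that ID."""
    doc_id = str(uuid.uuid4())  # non-sequential, unlike the flawed system
    DOCUMENTS[doc_id] = {"owner": owner_id, "contents": contents}
    return doc_id

def fetch_document(requesting_user: str, doc_id: str) -> str:
    """Return a document only if the requesting user owns it."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != requesting_user:
        # Identical response for "missing" and "forbidden" avoids
        # leaking which document IDs exist.
        raise PermissionError("document not found")
    return doc["contents"]
```

In a real portal the ownership check would consult the authenticated session rather than a caller-supplied user ID; the essential point is that authorization is enforced on the server for every request, not inferred from the URL.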
AI Integration Amplifies Security Risks
Healthcare organizations are rapidly deploying AI systems for diagnosis, patient management, and clinical decision support, but these integrations often lack robust security frameworks. The Practice by Numbers incident demonstrates how seemingly simple software flaws can expose vast patient databases when amplified across distributed healthcare networks.
NVIDIA’s recent analysis of AI agent deployments shows healthcare as a primary adoption sector, with organizations implementing locally hosted AI assistants for patient data analysis. However, the security implications of these distributed AI systems remain largely unaddressed by current healthcare IT frameworks.
Rising Ransomware Targeting Healthcare AI
The Inc Ransom group’s attack on Sandhills Medical follows a pattern of cybercriminals specifically targeting healthcare organizations with AI-enhanced patient management systems. Healthcare data commands premium prices on dark web markets due to its comprehensive nature and long-term value for identity theft.
Ransomware groups increasingly target the AI infrastructure that healthcare organizations use for predictive analytics, clinical decision support, and patient monitoring systems. When these AI systems are compromised, attackers gain access to not just current patient data, but also the algorithmic insights derived from that data.
FDA Oversight Gaps in AI Healthcare Security
While the FDA has approved numerous AI diagnostic tools for clinical use, current regulatory frameworks don’t adequately address the cybersecurity requirements for AI-powered healthcare systems. The agency’s focus on clinical efficacy and safety has left significant gaps in data protection standards for AI implementations.
Healthcare organizations deploying AI systems often rely on third-party vendors like Practice by Numbers, creating complex supply chain vulnerabilities. When these vendors experience security breaches, the impact cascades across thousands of healthcare providers and millions of patients.
Clinical Trial Data at Risk
The security vulnerabilities extend beyond patient management to clinical research. AI systems used in drug discovery and clinical trials contain highly sensitive intellectual property and patient data that, when compromised, can impact both individual privacy and pharmaceutical innovation.
Recent security assessments show that clinical trial management systems using AI for patient recruitment and data analysis often lack the security controls applied to traditional electronic health records.
Hospital AI Deployment Security Standards
Major hospital systems are implementing AI across diagnostic imaging, patient monitoring, and clinical workflows, but security standards haven’t kept pace with deployment speed. The distributed nature of modern healthcare AI — from bedside monitoring to cloud-based diagnostic analysis — creates multiple attack vectors.
Healthcare organizations need comprehensive security frameworks that address both traditional IT infrastructure and AI-specific vulnerabilities. This includes secure model deployment, encrypted data pipelines, and audit trails for AI decision-making processes.
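An audit trail for AI decision-making can be as simple as an append-only log recording the model version, a hash of the input, and the output for every inference. A minimal sketch, with hypothetical names and no specific vendor’s API assumed:

```python
import hashlib
import json
import time

# In production: append-only storage with restricted write access,
# not a module-level list.
AUDIT_LOG = []

def log_ai_decision(model_version: str, patient_input: dict, output: str) -> dict:
    """Record an AI decision without storing raw patient data in the log.

    Only a SHA-256 hash of the input is kept, so the trail can later prove
    what the model saw without duplicating protected health information.
    """
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(patient_input, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    AUDIT_LOG.append(entry)
    return entry
```

Hashing rather than storing the input keeps the audit trail itself from becoming another repository of sensitive data worth stealing.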
Patient Portal Vulnerabilities
The Practice by Numbers incident highlights a critical weakness in patient portal implementations. As healthcare organizations digitize patient access through AI-enhanced portals, basic safeguards such as per-request authorization checks and strict data isolation between accounts become critical.
Many healthcare AI systems rely on web-based interfaces that, when improperly secured, can expose entire patient databases through simple URL manipulation or session hijacking.
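Session hijacking, in particular, becomes far harder when tokens are long, random, and resolved server-side. A minimal sketch assuming a hypothetical in-memory session store (a real deployment would add expiry and secure-cookie flags):

```python
import secrets

# Server-side session store; in production this would be backed by a
# database or cache with expiry, not a module-level dict.
SESSIONS = {}

def create_session(user_id: str) -> str:
    """Issue a session token with 256 bits of randomness.

    Unlike a predictable or sequential token, this cannot be guessed or
    enumerated, closing off the simplest session-hijacking route.
    """
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = user_id
    return token

def user_for_token(token: str):
    """Resolve a presented token to a user, or None if invalid."""
    return SESSIONS.get(token)
```

Because the token carries no meaning itself, the server remains the single authority on who is logged in, and a manipulated URL or forged token resolves to nothing.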
What This Means
The healthcare sector’s rapid AI adoption is outpacing cybersecurity implementation, creating systemic vulnerabilities that threat actors are actively exploiting. Organizations must balance AI innovation with robust security controls, particularly as patient data becomes increasingly valuable to cybercriminals.
Healthcare AI security requires a fundamental shift from traditional IT security models to frameworks that address the unique risks of distributed AI systems, third-party AI vendors, and the massive data aggregation that modern healthcare AI enables. Regulatory bodies like the FDA need to expand oversight beyond clinical efficacy to include comprehensive cybersecurity requirements for AI-powered healthcare systems.
The incidents at Sandhills Medical and Practice by Numbers demonstrate that healthcare AI security failures can expose hundreds of thousands of patients simultaneously, making robust security controls not just a technical requirement but a patient safety imperative.
FAQ
How many patients were affected by recent healthcare AI security breaches?
Nearly 170,000 patients were affected by the Sandhills Medical ransomware attack, while Practice by Numbers’ vulnerability potentially exposed patients across over 5,000 dental practices nationwide before being patched.
What types of patient data were compromised in these breaches?
The breaches exposed comprehensive patient information including Social Security numbers, medical histories, financial data, government-issued identification, and personal health information. The dental software bug also exposed patient photos and identification documents.
How can healthcare organizations protect AI systems from cyberattacks?
Healthcare organizations should implement comprehensive security frameworks that include encrypted data pipelines, proper access controls, regular security audits, and vendor risk assessments. They should also ensure AI systems have audit trails and secure model deployment practices.
Related news
- Technology’s role in making Indian healthcare truly patient-centric – Healthcare Asia Magazine
Sources
- Sandhills Medical Says Ransomware Breach Affects 170,000 – SecurityWeek






