EU Healthcare IoT AI Compliance
A practical, acronym-filled guide for development teams to get and stay compliant.
If you’re on a European team building a health or wellbeing product that connects to the internet, collects biometric data or uses AI to personalise user experience, you’re almost certainly operating under multiple EU regulatory frameworks at once. The challenge for developers is that most guidance for digital health app compliance treats the AI Act, GDPR and the Medical Device Regulation as separate topics. In practice they overlap, and the interactions between them create real challenges for healthcare IoT compliance.
We mapped out how all three frameworks apply to healthcare IoT products and what they require from your team, including how to approach health data compliance as a continuous, ground-up process.
Why Healthcare IoT Overlaps Three EU Regulations
The EU’s regulatory landscape for digital health is the result of several major regulations developed at different times, each with a distinct scope, that now converge on the same category of product. These frameworks don’t replace each other; they stack. A health monitoring app that uses AI to analyse wearable data, for example, could be subject to all three simultaneously.
GDPR governs how personal data is collected, processed and stored, and health data sits at the top of its sensitivity hierarchy. The EU AI Act regulates AI systems according to the risk they pose to individuals, with healthcare applications falling almost universally into the highest permitted risk category. Medical Device Regulation (MDR) then applies specifically to software that performs a medical function related to diagnosing, treating or monitoring patients.
What the EU AI Act Actually Means for Your Product
The EU AI Act entered into force in August 2024 and is being implemented in phases. For most healthcare IoT builders, the deadline that matters most is August 2026, when obligations for high-risk AI systems become broadly applicable. Products currently in development need to be designed with these requirements in mind.
For high-risk systems, the AI Act imposes a substantial set of obligations. Your team will need to produce and maintain technical documentation covering the system’s design, data requirements and validation methodology. You’ll need to implement human oversight mechanisms: the system must be designed so that a human can intervene to override its output or shut it down. Logging and traceability are mandatory throughout the system’s operational life, and you’ll need to demonstrate that the system meets defined standards for accuracy, robustness and cybersecurity.
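To make the logging and traceability requirement concrete, here is a minimal sketch of an append-only inference audit trail, assuming a JSON-lines log file. All names here (`log_inference`, `input_ref`, `operator_id`) are illustrative, not prescribed by the AI Act.

```python
import json
import time
import uuid

def log_inference(log_path, model_version, input_ref, output, operator_id=None):
    """Append one audit record per inference to a JSON-lines file.

    input_ref should be a reference (e.g. a pseudonymised record ID) rather
    than the raw health data, so the log supports traceability without
    duplicating special-category data.
    """
    record = {
        "event_id": str(uuid.uuid4()),   # unique ID for later traceability
        "timestamp": time.time(),        # when the inference happened
        "model_version": model_version,  # which model produced the result
        "input_ref": input_ref,          # pointer to the input, not the data itself
        "output": output,                # the system's decision or score
        "operator_id": operator_id,      # who, if anyone, reviewed or overrode it
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]
```

Recording the model version alongside every output is what makes it possible to reconstruct, years later, why the system produced a given result.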
Key AI Act Dates for Healthcare IoT Teams
- February 2025: prohibited AI practices and AI literacy obligations already in effect
- August 2026: high-risk AI system obligations broadly applicable
- August 2027: full scope of the AI Act applies
How the AI Act Classifies Healthcare AI
Healthcare AI can be classified as high-risk under the AI Act through two routes: Annex III lists specific health-related use cases, such as emergency triage and eligibility for essential healthcare services, and under Article 6(1) any AI system that is a safety component of an MDR-regulated medical device subject to third-party conformity assessment is also high-risk. Either route applies regardless of company size or funding stage, and irrespective of whether your product is a consumer app or a clinical tool. The full compliance burden takes effect from the moment you place a high-risk AI system on the market or put it into service in the EU.
In June 2025, the Medical Device Coordination Group published MDCG 2025-6, a guidance document specifically addressing how the AI Act and MDR interact. It introduced the concept of “Medical Device Artificial Intelligence” (MDAI) to describe products subject to both regimes. If your product has any diagnostic, monitoring or clinical decision-support function, review this guidance before you finalise your healthcare IoT compliance approach.
GDPR in a Healthcare IoT Context
Most development teams building digital products have some familiarity with the EU’s General Data Protection Regulation (GDPR). The challenge is that a standard health app GDPR posture (a cookie consent banner, a privacy policy, a data processing agreement with your cloud provider) doesn’t come close to covering the IoT GDPR requirements that apply when you’re collecting health data continuously from connected devices.
IoT devices generate data passively. If you’re building connected health infrastructure, a wearable constantly streams data in the background, across multiple systems, with the user having limited visibility into what’s being collected or where it’s going. That creates a set of specific IoT healthcare data privacy challenges your architecture needs to address.
The controller versus processor distinction matters here. If you’re building a SaaS platform that processes health data on behalf of a client organisation, your obligations differ from those of a direct-to-consumer app. Getting this classification wrong has downstream liability consequences for data processing agreements as well as your users’ rights.
Cross-border data transfers add further considerations. If your infrastructure spans EU and non-EU regions, appropriate transfer mechanisms under Chapter V of GDPR will be required.
Health Data, Consent and Privacy by Design
Health data is classified as GDPR special category data under Article 9, meaning it can only be processed under a narrow set of conditions; for consumer health apps, the relevant condition is usually explicit consent. Explicit consent is held to stricter requirements than consent for ordinary personal data and must be genuinely withdrawable.
This obligation extends to inferred health data. If your device or algorithm can derive health-related information from raw inputs like heart rate variability, sleep patterns or activity levels, that data may be treated as health data even if it wasn’t explicitly collected as such.
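One way to make consent explicit, granular and withdrawable is to model it per processing purpose. The sketch below assumes a per-purpose scheme with purposes like "heart_rate_analysis"; the `ConsentRecord` fields and `may_process` helper are hypothetical, not a prescribed GDPR schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                        # one specific processing purpose
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Withdrawal must be as easy as granting (GDPR Article 7(3)); the
        # record is kept, not deleted, for accountability.
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_active(self) -> bool:
        return self.withdrawn_at is None

def may_process(records: list, user_id: str, purpose: str) -> bool:
    """Processing is allowed only with an active consent matching this purpose."""
    return any(
        r.user_id == user_id and r.purpose == purpose and r.is_active()
        for r in records
    )
```

Gating every processing path through a check like `may_process` means withdrawal takes effect immediately, rather than depending on a manual cleanup job.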
Building Compliance In, Not On Top
Article 25 of GDPR requires privacy by design and by default. In healthcare IoT, this means compliance posture is determined at the architecture stage. The decisions that matter most include how much data is collected at the sensor level, what payload granularity is transmitted to your backend, how long data is retained and who has access to it.
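As a sketch of what minimisation can mean at the payload level, assuming a wearable that samples heart rate locally: the device transmits only the aggregates the feature actually needs, never the raw stream. The function and field names (`build_payload`, `hr_mean`) are illustrative.

```python
from statistics import mean

def build_payload(device_id: str, hr_samples: list, window_s: int = 300):
    """Reduce a window of raw heart-rate samples to the minimum the backend needs."""
    if not hr_samples:
        return None
    return {
        "device_id": device_id,       # pseudonymous device reference
        "window_seconds": window_s,   # aggregation window, not per-sample timestamps
        "hr_mean": round(mean(hr_samples), 1),
        "hr_min": min(hr_samples),
        "hr_max": max(hr_samples),
        # Deliberately omitted: the raw sample stream, location, exact timestamps.
    }
```

The point of doing this on the device is architectural: data that never leaves the sensor never enters your retention, access-control or breach-notification scope.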
Pseudonymisation, or separating personal identifiers from health-related data, is one of the most effective technical measures available at this stage. Combined with data minimisation at the point of collection and role-based access controls throughout your system, it reduces your exposure in the event of a breach and demonstrates proactive compliance to regulators.
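A minimal sketch of that separation, with in-memory dicts standing in for what would be two separately access-controlled stores in production; all names are illustrative.

```python
import secrets

# Identity store: pseudonym -> real identifier. Access tightly restricted.
identity_store = {}
# Health store: pseudonym -> measurements. No direct identifiers ever stored here;
# analytics and ML pipelines only see this side.
health_store = {}

def pseudonymise(real_id: str) -> str:
    """Issue a random pseudonym and register it in both stores."""
    pseudonym = secrets.token_hex(16)  # random, not derived from the identifier
    identity_store[pseudonym] = real_id
    health_store[pseudonym] = []
    return pseudonym

def record_measurement(pseudonym: str, measurement: dict) -> None:
    health_store[pseudonym].append(measurement)
```

A breach of the health store alone then exposes measurements without names, which is precisely the risk reduction Article 25 is asking you to engineer in.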
Where MDR Fits In and When It Doesn't Apply to You
Medical device software regulations in the EU apply to software that constitutes or incorporates a medical device, specifically software intended to be used for a medical purpose such as diagnosis, prevention, monitoring or treatment of a disease or condition. The technical term is Software as a Medical Device (SaMD), governed by the EU Medical Device Regulation software framework under MDR 2017/745.
A general wellness app or a fitness coaching platform typically falls outside MDR scope. The regulation is concerned with clinical function, not health adjacency. If your product makes recommendations based on user preferences rather than clinical data, or tracks activity without making health claims, you’re likely outside the MDR perimeter.
Consider a symptom-checker that suggests a user consult a doctor, or a sleep analysis tool that flags potential disorders: these are the kinds of products where SaMD classification is genuinely ambiguous. This grey area is where many health and wellbeing startups actually sit.
If you’re building in this territory, MDCG 2019-11 is the primary guidance document for determining whether your software qualifies as a medical device. It’s detailed, but working through it early is considerably less costly than discovering your product needs MDR compliance after it’s already on the market.
Compliance Checklist by Build Stage
Healthcare IoT compliance involves a set of decisions and actions distributed across the product lifecycle. The checklist below maps the most critical obligations from all three frameworks to three project stages most startup teams will recognise:
Discovery and Design
- Determine whether your product is likely to trigger MDR. If uncertain, work through MDCG 2019-11 before committing to an architecture.
- Classify your AI system under the AI Act risk framework. If high-risk, plan for human intervention and logging requirements.
- Define your GDPR role (controller or processor) and document the legal basis for processing health data.
- Apply data minimisation principles at the sensor and payload design level; decisions made here are expensive to reverse later.
- Begin building your Article 25 privacy by design posture: pseudonymisation strategy, access controls, data retention policy.
MVP
- Consent mechanisms must meet Article 9 requirements: explicit, granular and withdrawable.
- Implement logging and audit trail capabilities required for AI Act high-risk compliance.
- Complete a Data Protection Impact Assessment (DPIA) under GDPR Article 35, mandatory for high-risk IoT processing of health data.
- If MDR applies, begin building your technical file and engage a notified body early.
Scaling
- Establish post-market monitoring processes for your AI system as required under the AI Act.
- Review cross-border data transfer mechanisms if expanding into or operating across non-EU infrastructure.
- Fold AI Act obligations into your quality management system: if you’re already ISO 13485 certified, these can likely be integrated rather than run in parallel.
- Revisit DPIA and data governance documentation as processing activities evolve.
The teams that handle EU healthcare IoT compliance best are those that treat it as an engineering discipline rather than a legal formality, prioritising secure healthcare software development from the earliest stages of product design.
Let’s Talk About Your Compliance Strategy
At DO OK, we build compliance considerations into the development process from day one. We can help you understand where your product sits across these regulatory frameworks and what that means for your architecture. Get in touch today and start a conversation about how we can help you build compliant from the ground up.
Want to know more about how we work? Learn about the DO OK approach to secure software development and product discovery workshops on our blog.