AI GOVERNANCE POLICY
Effective Date: 3 January 2026
1. Purpose
This AI Governance Policy establishes the principles, roles, and controls governing the design, development, deployment, and use of Artificial Intelligence (AI) systems within QUASR+, a SaaS platform for incident reporting, risk management, and patient safety improvement across global healthcare markets.
The objective is to ensure that AI capabilities within QUASR+ support patient safety, quality improvement, and regulatory compliance, while avoiding harm, bias, misuse, or inappropriate automation of clinical or safety-critical decisions.
2. Scope
This Policy applies to:
- All AI and machine-learning features embedded within the QUASR+ platform
- AI systems used for safety event analysis, risk identification, trend detection, and workflow support
- All employees, contractors, vendors, and partners involved in AI-related activities
- All data used to train, validate, test, or operate AI systems
This includes, but is not limited to:
- Incident classification and categorization models
- Risk scoring and prioritization algorithms
- Trend detection and predictive safety analytics
- Natural language processing (NLP) for free-text incident narratives
- Generative AI used for summaries, insights, or safety recommendations
3. Definitions
- AI System: Software that uses machine learning, statistical models, or logic-based techniques to generate outputs such as predictions, recommendations, classifications, or content.
- High-Risk AI: AI systems that may impact patient safety, clinical outcomes, regulatory compliance, or protected health information (PHI).
- Human-in-the-Loop (HITL): A process where qualified humans oversee, validate, or approve AI outputs before action is taken.
4. Governance Structure
4.1 AI Governance Committee
QUASR+ shall maintain an AI Governance Committee responsible for oversight of all AI capabilities within the platform. The Committee shall include:
- Executive sponsor
- Patient safety or clinical governance representative
- Product or AI/engineering lead
- Information security lead
- Privacy and data protection officer
- Legal or regulatory affairs representative
Responsibilities:
- Approve AI-enabled product features and use cases
- Classify AI systems by risk level
- Review and approve high-risk AI features prior to release
- Oversee compliance with global healthcare and data protection regulations
- Review AI-related incidents, complaints, or safety concerns
5. AI Risk Classification
All AI systems within QUASR+ must be classified prior to development or deployment:
| Risk Level | Description | Examples |
|---|---|---|
| Low | No direct impact on patient safety or decision-making | Reporting dashboards, descriptive analytics |
| Medium | Supports safety workflows or prioritization | Incident categorization, trend analysis |
| High | Could influence safety actions or organizational responses | Predictive risk alerts, automated safety recommendations |
High-risk AI systems require enhanced validation, documented safeguards, and formal approval by the AI Governance Committee.
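The classification requirement above could be supported by a simple feature registry in engineering tooling. The following is a minimal sketch, assuming a Python codebase; the `RiskLevel`, `AIFeature`, and `release_allowed` names are illustrative assumptions, not part of QUASR+ itself, and the actual gating workflow remains as defined by the AI Governance Committee.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"        # e.g. reporting dashboards, descriptive analytics
    MEDIUM = "medium"  # e.g. incident categorization, trend analysis
    HIGH = "high"      # e.g. predictive risk alerts, automated recommendations


@dataclass
class AIFeature:
    name: str
    risk_level: RiskLevel
    committee_approved: bool = False
    validation_evidence: list[str] = field(default_factory=list)


def release_allowed(feature: AIFeature) -> bool:
    """High-risk features require documented validation and Committee approval."""
    if feature.risk_level is RiskLevel.HIGH:
        return feature.committee_approved and bool(feature.validation_evidence)
    return True


# A predictive risk alert feature cannot ship until it is approved and validated.
alert = AIFeature("predictive_risk_alerts", RiskLevel.HIGH)
assert not release_allowed(alert)

alert.validation_evidence.append("validation_report_2026_q1.pdf")
alert.committee_approved = True
assert release_allowed(alert)
```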
6. Data Governance & Privacy
- AI systems must comply with applicable global healthcare and data protection laws (e.g. HIPAA, GDPR, PDPA, local health data regulations).
- QUASR+ shall process patient, staff, and incident data strictly for defined safety and quality purposes.
- Data used for AI training and operation must adhere to:
  - Data minimization and purpose limitation principles
  - Role-based access controls
  - Appropriate retention and deletion requirements
- De-identification, pseudonymization, or aggregation shall be applied wherever feasible, particularly for model training.
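As one illustration of the pseudonymization principle above, the sketch below replaces direct identifiers in an incident record with keyed hashes before the record is exported for model training. It is a minimal example under stated assumptions, not QUASR+ production code: the field names and the `pseudonymize` helper are hypothetical, the key would in practice come from a managed secret store, and free-text narratives would still require separate NLP-based de-identification.

```python
import hashlib
import hmac

# Illustrative key only; in practice this would come from a managed secret store.
PSEUDONYM_KEY = b"replace-with-managed-secret"

DIRECT_IDENTIFIER_FIELDS = {"patient_id", "reporter_name", "mrn"}


def pseudonymize(record: dict) -> dict:
    """Return a copy of an incident record with direct identifiers replaced
    by keyed hashes; other fields pass through unchanged."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIER_FIELDS:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            out[key] = digest.hexdigest()[:16]  # stable pseudonym, not reversible without the key
        else:
            out[key] = value
    return out


incident = {
    "patient_id": "P-104592",
    "reporter_name": "J. Tan",
    "mrn": "884-22-109",
    "category": "medication_error",
    # Free-text narratives may still contain identifiers and would need
    # separate NLP-based de-identification before training use.
    "narrative": "Wrong dose administered; corrected within 10 minutes.",
}
print(pseudonymize(incident))
```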
7. Human Oversight & Patient Safety Controls
- AI systems within QUASR+ are designed to support, not replace, human judgment in patient safety and quality governance.
- AI outputs must not be treated as definitive conclusions or root cause determinations.
- Medium- and high-risk AI outputs must be subject to human review by qualified users (e.g. quality managers, safety officers, risk managers, clinicians); an illustrative review-gate sketch appears at the end of this section.
- Clear user guidance shall be provided on:
  - Appropriate interpretation of AI outputs
  - Known limitations and confidence boundaries
  - Prohibited or unintended uses
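A minimal sketch of the human-in-the-loop control follows, assuming a Python service layer: low-risk outputs may be applied directly, while medium- and high-risk outputs are held until a user holding a qualified reviewer role confirms them. The `AIOutput`, `REVIEWER_ROLES`, and `apply_output` names are hypothetical and only illustrate the control, not the platform's actual workflow engine.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIOutput:
    feature: str
    risk_level: RiskLevel
    suggestion: str     # e.g. a proposed incident category
    confidence: float   # surfaced to the reviewer, never used to bypass review


REVIEWER_ROLES = {"quality_manager", "safety_officer", "risk_manager", "clinician"}


def apply_output(output: AIOutput, reviewer_role: str = "") -> str:
    """Apply low-risk outputs directly; hold medium/high-risk outputs until a
    qualified reviewer confirms them."""
    if output.risk_level is RiskLevel.LOW:
        return f"applied: {output.suggestion}"
    if reviewer_role in REVIEWER_ROLES:
        return f"applied after review by {reviewer_role}: {output.suggestion}"
    return "queued for human review"


suggestion = AIOutput("incident_categorization", RiskLevel.MEDIUM,
                      "Category: medication error", confidence=0.82)
print(apply_output(suggestion))                     # queued for human review
print(apply_output(suggestion, "quality_manager"))  # applied after review
```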
8. Transparency & Explainability
- Users must be informed when AI is used within the QUASR+ platform.
9. Security Controls
- AI systems must comply with the Company’s information security policies.
- Controls must address:
  - Model theft or inversion risks
  - Prompt injection and data leakage (for generative AI); an illustrative guardrail sketch appears at the end of this section
  - Secure APIs and access controls
- Regular security testing shall be conducted.
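By way of illustration only, the sketch below shows two basic guardrails for a generative summarization feature: screening incoming free text for likely prompt-injection phrases, and redacting identifier-like strings from generated output. The patterns and helper names are assumptions made for the example; real controls would be layered with API-level authentication, access controls, and the security testing required above.

```python
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (the )?system prompt",
    r"disregard .* (policy|instructions)",
]

IDENTIFIER_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-like numbers
    r"\bMRN[:\s]*\d+\b",       # medical record numbers
]


def screen_input(narrative: str) -> str:
    """Reject free text containing likely prompt-injection phrases before it
    is sent to a generative model."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, narrative, flags=re.IGNORECASE):
            raise ValueError("possible prompt injection; route to manual handling")
    return narrative


def redact_output(summary: str) -> str:
    """Redact identifier-like strings from generated summaries to limit leakage."""
    for pattern in IDENTIFIER_PATTERNS:
        summary = re.sub(pattern, "[REDACTED]", summary, flags=re.IGNORECASE)
    return summary


print(screen_input("Patient fall in ward 3; no injury reported."))
print(redact_output("Summary for MRN: 884221 - medication delay, resolved."))
```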
10. Third-Party & Vendor AI
- Third-party AI providers must undergo due diligence, including:
  - Security and privacy assessments
  - Regulatory compliance review
  - Contractual safeguards on data usage
- Vendors must not use Company or customer data for training without explicit authorization.
11. Monitoring & Lifecycle Management
- AI systems must be continuously monitored (see the illustrative drift check at the end of this section) for:
  - Performance degradation
  - Bias drift
  - Safety incidents
- Material changes to models require re-validation and approval.
- AI systems must have defined retirement or decommissioning plans.
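One common way to operationalize the monitoring requirement above is a periodic comparison of a model's recent score distribution against its validation baseline. The sketch below uses the Population Stability Index (PSI) with a conventional alert threshold of about 0.25; the function and example scores are illustrative assumptions rather than QUASR+ monitoring code, and it presumes scores scaled to the range 0 to 1.

```python
import math

# Population Stability Index (PSI) between a model's baseline score distribution
# and recent production scores; values above ~0.25 are commonly treated as
# significant drift. Assumes scores are scaled to the range [0, 1].

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    def proportions(scores: list[float]) -> list[float]:
        counts = [0] * bins
        for s in scores:
            counts[min(int(s * bins), bins - 1)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(scores), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))


baseline_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
recent_scores = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95, 0.95]

value = psi(baseline_scores, recent_scores)
print(f"PSI = {value:.2f}")
if value > 0.25:
    print("Significant drift: trigger re-validation and Committee review")
```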
12. AI Incident & Safety Issue Management
- AI-related issues (e.g. misleading insights, incorrect categorization, unintended risk signals) must be reported through defined internal processes.
- Issues shall be assessed for patient safety, compliance, and reputational impact.
- Corrective actions, including model adjustment or feature suspension, must be documented and tracked.
13. Training & Awareness
- Employees involved in AI activities must receive periodic training on:
  - Responsible AI principles
  - Data privacy and security
  - Clinical safety considerations
14. Compliance & Audit
- Compliance with this Policy is mandatory.
- The Company reserves the right to audit AI systems and processes.
- Non-compliance may result in disciplinary action or termination of contracts.
15. Policy Review
This Policy shall be reviewed at least annually or upon:
- Significant regulatory changes
- Introduction of new high-risk AI systems
- Material AI incidents
Approved by: Tan Hak Yek
Effective Date: 3 January 2026
Next Review Date: 3 January 2027