Keeping patients safe with responsible AI
Transparent safeguards, clinician oversight, and privacy-first design for post-discharge care coordination
Our Safety Promise
Shifa AI is designed to support, never replace, clinical judgment
Supports clinical judgment
Shifa AI augments care teams by flagging potential concerns, but all clinical decisions remain with licensed healthcare providers.
Clinician-in-the-loop by design
Every alert requires human review and action. No automated clinical decisions are made without provider oversight.
Built for coordination, not diagnosis
Our platform focuses on post-discharge follow-up and care coordination, not medical diagnosis or treatment recommendations.
Conservative approach
When in doubt, we escalate. Better to over-flag than miss a potential concern that could impact patient safety.
Pre-HIPAA, Pilot-Ready
Transparent about our current compliance status and pilot approach
No PHI during pilots
We do not process protected health information (PHI) during our pilot phase. All data is de-identified or simulated.
Technical controls in place
Encryption in transit, access controls, and audit logging are already implemented to prepare for future HIPAA compliance.
HIPAA path planned
We have a clear roadmap to full HIPAA compliance for enterprise deployments, including Business Associate Agreements (BAAs) and enhanced security controls.
Clinician-in-the-Loop Safeguards
Multiple layers of human oversight and control
Conservative triage approach
When the system is uncertain about risk level, it defaults to escalating to human review rather than making assumptions.
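To make this concrete, here is a minimal TypeScript sketch of a default-to-escalate policy. The names (`RiskLevel`, `decide`) and the confidence threshold are illustrative assumptions, not our production code.

```typescript
// Hypothetical sketch of conservative triage: when confidence is low or the
// risk level is unknown, the message is routed to human review by default.

type RiskLevel = "low" | "moderate" | "high" | "unknown";

interface TriageResult {
  risk: RiskLevel;
  confidence: number; // 0..1, produced by upstream rules/scoring
}

interface TriageDecision {
  escalateToClinician: boolean;
  reason: string;
}

const CONFIDENCE_THRESHOLD = 0.8; // assumed value for illustration

function decide(result: TriageResult): TriageDecision {
  // Default posture: escalate unless we are confident the risk is low.
  if (result.risk === "unknown" || result.confidence < CONFIDENCE_THRESHOLD) {
    return { escalateToClinician: true, reason: "Uncertain assessment; defaulting to human review" };
  }
  if (result.risk !== "low") {
    return { escalateToClinician: true, reason: `Risk level "${result.risk}" requires clinician review` };
  }
  return { escalateToClinician: false, reason: "Confidently low risk; routine follow-up queue" };
}

// Example: an ambiguous message is escalated rather than silently dismissed.
console.log(decide({ risk: "unknown", confidence: 0.95 }));
console.log(decide({ risk: "low", confidence: 0.6 }));
```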
Manual override capabilities
Clinicians can override any system recommendation, reassess risk levels, and resolve alerts based on their clinical judgment.
Time-bound alert SLAs
All alerts have defined response timeframes, and audit trails track every action taken by care team members.
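As an illustration only, the sketch below shows how severity-based response windows and an append-only audit trail might be modeled; the specific windows and names are assumptions, not our actual SLAs.

```typescript
// Hypothetical sketch of time-bound alert SLAs: each severity maps to a
// response window, and every care-team action is appended to an audit trail.

type Severity = "routine" | "elevated" | "urgent";

// Assumed response windows, in minutes, for illustration only.
const SLA_MINUTES: Record<Severity, number> = {
  urgent: 30,
  elevated: 240,
  routine: 1440,
};

interface AuditEntry {
  alertId: string;
  actor: string;  // care-team member who acted
  action: string; // e.g. "acknowledged", "resolved", "overridden"
  at: Date;
}

const auditTrail: AuditEntry[] = [];

function dueBy(createdAt: Date, severity: Severity): Date {
  return new Date(createdAt.getTime() + SLA_MINUTES[severity] * 60_000);
}

function recordAction(alertId: string, actor: string, action: string): void {
  auditTrail.push({ alertId, actor, action, at: new Date() });
}

// Example: an urgent alert created now must be acted on within 30 minutes.
const created = new Date();
console.log("Respond by:", dueBy(created, "urgent").toISOString());
recordAction("alert-123", "rn.jordan", "acknowledged");
console.log(auditTrail);
```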
Transparency & Limits
Clear about what our platform does and doesn't do
No diagnostic claims
Shifa AI is not a medical device and does not provide diagnostic information. It flags potential concerns for human review.
Explainable alerts
When an alert is triggered, we show exactly why (specific keywords, patterns, or risk factors) to help clinicians understand the reasoning.
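For illustration, a hypothetical alert payload might carry its triggers alongside the summary; the field names and example content below are assumptions, not the real schema.

```typescript
// Hypothetical sketch of an "explainable alert": the alert carries the
// specific triggers (keywords, patterns, risk factors) that fired it.

interface AlertTrigger {
  kind: "keyword" | "pattern" | "risk_factor";
  detail: string;          // what matched
  sourceExcerpt?: string;  // where it matched in the patient message
}

interface ExplainableAlert {
  alertId: string;
  patientRef: string;       // de-identified reference during pilots
  summary: string;
  triggers: AlertTrigger[]; // shown to the clinician alongside the alert
}

// Example alert a clinician might see, with its reasoning attached.
const alert: ExplainableAlert = {
  alertId: "alert-456",
  patientRef: "pilot-0042",
  summary: "Possible wound-care concern in post-discharge check-in",
  triggers: [
    { kind: "keyword", detail: "redness", sourceExcerpt: "some redness around the incision" },
    { kind: "risk_factor", detail: "recent surgical discharge (day 4)" },
  ],
};

console.log(JSON.stringify(alert, null, 2));
```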
Human interpretation required
Patient messages may be incomplete or ambiguous. Clinicians must interpret and act on the information based on their expertise.
Bias & Fairness
Committed to equitable AI that serves all patients
Rules-first approach
We start with explicit clinical rules and guidelines. AI adjudication is feature-flagged and subject to ongoing audit and review.
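A minimal sketch of what a rules-first pipeline with a feature-flagged AI pass can look like is shown below; the rule content, flag name, and fallback behavior are illustrative assumptions rather than our implementation.

```typescript
// Hypothetical sketch of a rules-first pipeline: explicit clinical rules run
// first, and AI adjudication only runs behind a feature flag, with its output
// flagged for human review rather than acted on automatically.

interface Assessment {
  escalate: boolean;
  source: "rule" | "ai";
  rationale: string;
}

type Rule = (message: string) => Assessment | null;

// Illustrative rule; real rule content would come from clinical guidelines.
const rules: Rule[] = [
  (m) => m.toLowerCase().includes("chest pain")
    ? { escalate: true, source: "rule", rationale: "Keyword: chest pain" }
    : null,
];

const FEATURE_FLAGS = { aiAdjudication: false }; // off by default

function assess(message: string): Assessment {
  for (const rule of rules) {
    const hit = rule(message);
    if (hit) return hit;
  }
  if (FEATURE_FLAGS.aiAdjudication) {
    // Placeholder for a flagged, audited AI pass; never the sole decision-maker.
    return { escalate: true, source: "ai", rationale: "AI adjudication flagged for human review" };
  }
  // No rule matched and the AI pass is off: default to human review.
  return { escalate: true, source: "rule", rationale: "No explicit rule matched; defaulting to review" };
}

console.log(assess("I have been having chest pain since yesterday"));
```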
Diverse clinical input
Our development process includes ongoing review with diverse clinicians to identify and address potential bias in our algorithms.
Documented failure modes
We maintain detailed records of system limitations and failure modes to continuously improve and reduce disparities in care.
Feedback loops
Continuous feedback from care teams helps us identify and reduce false negatives and ensure equitable care across all patient populations.
Important: Medical Emergencies
If you are experiencing a medical emergency, call 911 or your local emergency number immediately.
Shifa AI is designed for routine post-discharge follow-up and care coordination, not emergency triage.
The platform must not be relied on in urgent medical situations; any urgent concern requires immediate human review and escalation to emergency services.
Data Privacy & Governance
How we protect data during our pilot phase
De-identified data only
Pilot data contains no names, MRNs, phone numbers, or other direct identifiers. All personal information is removed or simulated.
Logging with redaction
All system logs are automatically redacted to remove potential identifiers, and access to logs follows the principle of least privilege.
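For illustration, a redact-before-log step could look like the following sketch; the patterns shown are examples only and narrower than real redaction would need to be.

```typescript
// Hypothetical sketch of redaction-before-logging: text is scrubbed of obvious
// identifiers (phone numbers, emails, MRN-like strings) before it is written
// to any log sink.

const REDACTION_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "[REDACTED_PHONE]"],
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED_EMAIL]"],
  [/\bMRN[:\s]*\d+\b/gi, "[REDACTED_MRN]"],
];

function redact(text: string): string {
  return REDACTION_PATTERNS.reduce((out, [pattern, mask]) => out.replace(pattern, mask), text);
}

function logEvent(event: string, detail: string): void {
  // Only the redacted form ever reaches the log sink.
  console.log(JSON.stringify({ event, detail: redact(detail), at: new Date().toISOString() }));
}

// Example: identifiers are masked before the line is written.
logEvent("message_received", "Call me at 555-123-4567, MRN: 0012345");
```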
Secure key handling
Encryption keys are managed securely with proper access controls and rotation policies in place.
Data retention policy
Pilot artifacts are retained only as long as necessary for evaluation and are securely deleted according to our retention schedule.
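As a simple illustration, a retention check might look like the sketch below; the 90-day window is an assumed value, not our actual schedule.

```typescript
// Hypothetical sketch of a pilot retention check: artifacts older than their
// retention window are flagged for secure deletion.

const RETENTION_DAYS = 90; // assumed window for illustration only

interface PilotArtifact {
  id: string;
  createdAt: Date;
}

function isExpired(artifact: PilotArtifact, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - artifact.createdAt.getTime()) / 86_400_000;
  return ageDays > RETENTION_DAYS;
}

// Example: an artifact created 120 days ago would be scheduled for deletion.
const old = { id: "pilot-export-01", createdAt: new Date(Date.now() - 120 * 86_400_000) };
console.log(isExpired(old)); // true
```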
Roadmap to Compliance
Our path to full HIPAA compliance and enterprise readiness
HIPAA Compliance (Planned)
- Business Associate Agreements (BAA) with cloud providers
- Encryption at rest with keys managed through a Key Management Service (KMS)
- Fine-grained Role-Based Access Control (RBAC)
- Comprehensive audit log export capabilities
- Periodic security risk assessments
Future Considerations
- Optional FDA Software as a Medical Device (SaMD) evaluation if the product evolves toward clinical decision support
- Third-party security review before processing PHI
Contact & Reporting
Questions, concerns, or issues? We're here to help
Safety & Privacy Questions
For questions about our safety practices, privacy policies, or compliance roadmap:
Report an Issue
If you encounter a safety concern or technical issue:
- Email us at safety@shifa-ai.com
- Include as much detail as possible about the issue
- We respond to safety reports within 24 hours
- Technical issues are addressed within 2 business days
Ready to pilot Shifa AI?
Join forward-thinking healthcare organizations in testing our safe, transparent approach to post-discharge care coordination.
Join the Pilot
Compliance Notice
Shifa AI is in pre-HIPAA pilot development and does not process protected health information (PHI). Pilots use de-identified or simulated data. Shifa AI supports care coordination and does not diagnose or treat conditions.