Data Use & Privacy Overview
Last updated: November 26, 2025
| Snapshot | Details |
|---|---|
| Default posture | PHI-Safe Mode with HIPAA-eligible infrastructure and no general model training on PHI |
| Data minimization | Clear split between stored data vs. short-lived / in-memory processing |
| Human access | Limited to support, security, and abuse review under strict controls |
If you have any questions or feedback, email us at privacy@optimiq.us or hi@optimiq.us.
For full details on how we collect, use, disclose, and protect personal data (including PHI), please see our:
- Terms of Service
- Privacy Policy
- Business Associate Agreement (BAA) or Data Processing Addendum (where applicable)
This page is a plain-English summary of how OptimIQ handles your data across our products (Risk Adjustment & Value-Based Care, Claims Intelligence, Post-Acute Optimization, OptimIQ Health, and related AI agents).
1. High-level summary
Across all OptimIQ products:
- PHI and claims/clinical data are treated as healthcare data first, AI data second. We process PHI only to deliver the services your organization has contracted for, under a BAA where required.
- We do not use PHI or customer-identifiable data to train general-purpose models shared across customers, unless you have explicitly and contractually opted into a specific joint R&D or de-identified training program.
- We may use de-identified and aggregated data (for example, statistics about feature usage or error rates) to improve performance, reliability, capacity planning, and safety, as permitted by our contracts and applicable law.
- Third-party AI and infrastructure providers (for example, Google Cloud / Vertex AI) are configured with HIPAA-eligible / “no training” settings where available for PHI workloads, and are covered by appropriate BAAs or data processing agreements.
2. Privacy modes & settings in OptimIQ
Your organization’s admin can choose how strictly your data is used for analytics and model improvement. For production healthcare workloads, we recommend PHI-Safe Mode.
A. PHI-Safe Mode (default for healthcare customers)
This is the default data posture when OptimIQ is handling PHI and claims/clinical data under a BAA:
- **No model training on PHI or customer-identifiable content**
  - We do not use PHI, claims, or clinical notes to train shared foundation models that power other customers’ environments.
  - We may still use de-identified, aggregate metrics (for example, “% of gaps closed per month”) to improve the platform, but not in a way that identifies your organization or individuals.
- **Zero training at third-party model providers for PHI workloads**
  - When we invoke cloud AI services (for example, HIPAA-eligible Vertex AI endpoints), we configure them with no-training / zero-data-retention options where the provider supports this under their healthcare/enterprise scope.
  - These providers may maintain limited logs for abuse detection and service reliability, but not for training general-purpose models, consistent with their enterprise documentation and our agreements.
- **Limited operational logging**
  - We log standard telemetry (timestamps, internal IDs, error traces, and minimal contextual metadata) to run, secure, and troubleshoot the Service.
  - Access to logs that may include PHI (for example, in an error trace) is tightly restricted to authorized personnel and governed by our security policies and your BAA.
- **Encryption and access controls**
  - Data in transit is protected with TLS 1.2+; data at rest is encrypted with industry-standard algorithms (for example, AES-256).
  - Access to PHI is role-based and audited, aligned with HIPAA technical safeguard expectations.
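As a concrete illustration of the "TLS 1.2+ in transit" posture, here is a minimal sketch using Python's standard `ssl` module. It shows a client-side context that refuses protocol versions below TLS 1.2; this is illustrative only — real deployments typically enforce minimum protocol and cipher policy at the load balancer or service mesh, not per client.

```python
import ssl

def phi_safe_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.2.

    A minimal sketch of the "TLS 1.2+ in transit" posture described above;
    in production this policy usually lives at the edge (load balancer /
    service mesh) rather than in application code.
    """
    ctx = ssl.create_default_context()            # certificate verification on by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 / 1.1
    return ctx
```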
B. De-identified / Synthetic Data Mode (optional, opt-in)
Some customers choose to share de-identified or synthetic datasets with OptimIQ for joint research, benchmarking, or co-developed models.
In this optional mode (which must be explicitly agreed in writing):
- We work only with datasets that your organization has de-identified under HIPAA (for example, expert determination or safe harbor) or synthetic data that does not relate to real individuals.
- These datasets may be used to train or fine-tune OptimIQ models that can be deployed back into your environment and, where agreed, generalized to other customers.
- The details (scope, ownership of resulting models, retention, and evaluation) are governed by a separate addendum to your agreement.
If you don’t explicitly opt into this, we treat your data under the stricter PHI-Safe Mode.
C. “Bring Your Own Model / Key” Mode (optional)
Some customers integrate their own AI endpoints (for example, a Vertex AI project under their GCP account, or another HIPAA-eligible deployment they control):
- **Requests still transit OptimIQ’s backend.**
  - We handle prompt construction, retrieval, and policy checks on our side before forwarding requests to your chosen model endpoint.
- **Your provider’s data-use terms apply.**
  - Prompts and Outputs sent to your configured endpoint are subject to that provider’s privacy policy, data retention practices, and BAA/DPA with your organization.
  - We strongly recommend configuring those endpoints with zero-training / no-data-retention settings where available, especially for PHI.
- **OptimIQ adds no training use of its own.**
  - Beyond any explicit de-identified R&D arrangement you have with us, we do not layer additional training use on top of whatever your chosen provider does.
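The Bring Your Own Model flow above can be sketched as a small pipeline: the prompt is policy-checked on the backend, then handed off to the customer-configured endpoint. Everything here — the `ModelEndpoint` shape, the `redact_policy` rule, and the injected `send` transport — is a hypothetical illustration, not OptimIQ's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelEndpoint:
    """A customer-controlled endpoint (hypothetical shape, not a real OptimIQ API)."""
    url: str
    tenant_id: str

def redact_policy(prompt: str) -> str:
    # Placeholder policy check: a real implementation would run PHI/PII
    # scanners and org-specific rules before anything leaves the backend.
    if "ssn:" in prompt.lower():
        raise ValueError("policy check failed: restricted content in prompt")
    return prompt

def forward_request(prompt: str, endpoint: ModelEndpoint,
                    send: Callable[[str, str], str]) -> str:
    """Sketch of the BYOM flow: policy-check on the OptimIQ side,
    then forward to the customer's endpoint via an injected transport."""
    checked = redact_policy(prompt)
    return send(endpoint.url, checked)
```

Injecting the transport keeps the policy layer independent of which HIPAA-eligible provider the customer has configured.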
3. What we store vs. what is temporary
To keep performance strong while preserving privacy, different types of data are handled differently.
A. Data we may store
Depending on your configuration and the products you use, we may store:
- Claims and clinical data imported into the platform (for example, eligibility files, 837/835 transactions, C-CDAs, vitals, and post-acute assessments).
- AI agent history and worklists needed to maintain task state, auditability, attribution, and gap tracking.
- Embeddings and metadata for search and analytics:
  - When you connect data sources (EHR extracts, PDFs, policy documents), we compute vector embeddings over that content for semantic search and AI retrieval.
  - We may store embeddings and lightweight metadata (such as IDs, timestamps, document types, and hashes) to power fast search and context retrieval.
- Configuration and usage logs, including org settings, feature flags, role assignments, and high-level usage statistics.
All persistent PHI is stored encrypted and governed by your agreement and BAA.
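To make the "embeddings plus lightweight metadata" idea concrete, here is a minimal sketch of the kind of record described above. The field names and the `embed` callable are illustrative assumptions, not OptimIQ's actual schema; the point is that what gets persisted alongside the vector is IDs, timestamps, types, and a content hash rather than raw text.

```python
import hashlib
from datetime import datetime, timezone

def embedding_record(doc_id: str, doc_type: str, text: str, embed) -> dict:
    """Sketch of a stored embedding record (illustrative schema).

    `embed` stands in for whatever model produces the vector; the
    content hash lets the platform detect changed documents without
    re-reading them.
    """
    return {
        "id": doc_id,
        "type": doc_type,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "content_hash": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "vector": embed(text),
    }
```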
B. Data that is short-lived or in-memory
To reduce latency and bandwidth, we may temporarily cache:
- Prompt payloads and Outputs in memory while a request is executing,
- Recently accessed records (for example, the last few claims or encounters for the same patient) in short-lived caches,
- Intermediate files used during ingestion, transformation, or export.
These caches:
- Are time-limited (short TTLs) and purged regularly,
- Are not used to train shared models in PHI-Safe Mode, and
- Are scoped to your tenant and environment.
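The cache properties listed above — short TTLs, regular purging, and tenant scoping — can be sketched in a few lines. This is a generic illustration, not OptimIQ's cache implementation; the clock is injected so expiry behavior is deterministic and testable.

```python
import time

class TenantTTLCache:
    """Minimal sketch of a tenant-scoped, time-limited cache.

    Entries expire after `ttl` seconds and keys are namespaced per
    tenant, so one tenant can never read another's cached records.
    """
    def __init__(self, ttl: float, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._store = {}

    def put(self, tenant: str, key: str, value):
        self._store[(tenant, key)] = (value, self.clock() + self.ttl)

    def get(self, tenant: str, key: str):
        entry = self._store.get((tenant, key))
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[(tenant, key)]  # purge expired entry on read
            return None
        return value
```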
4. When humans at OptimIQ can see your data
We aim to minimize human access to PHI and customer data, but there are situations where limited, auditable access is necessary:
- **Support & troubleshooting**
  - If you open a ticket and explicitly share examples or authorize support access, a small number of trained engineers or support staff may view specific records or logs to diagnose issues.
- **Security & abuse investigations**
  - If our systems detect suspicious or abusive activity (for example, attempts to exfiltrate data, attack other systems, or misuse PHI), our security and compliance teams may review relevant logs and payloads.
- **Compliance & audit**
  - We may need to access limited data to satisfy HIPAA, SOC 2–style, or other audit and regulatory obligations, under strict access controls.
All such access is:
- Role-based and least-privilege,
- Logged and auditable, and
- Governed by internal policies and your BAA.
We do not use this type of access to quietly build training datasets from your PHI.
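The "role-based, logged, and auditable" pattern above can be sketched as a single gate that records every access attempt, allowed or denied. Role names and record fields here are illustrative assumptions, not OptimIQ's internal access model.

```python
from datetime import datetime, timezone

# Illustrative role list; a real system would pull this from an IAM policy.
ALLOWED_ROLES = {"support_engineer", "security_analyst"}

def access_record(user: str, role: str, record_id: str, reason: str,
                  audit_log: list) -> bool:
    """Least-privilege gate with an audit trail (sketch).

    Every attempt is appended to the log before the decision is
    returned, so denied attempts are just as visible as granted ones.
    """
    allowed = role in ALLOWED_ROLES
    audit_log.append({
        "user": user,
        "role": role,
        "record_id": record_id,
        "reason": reason,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed
```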
5. What we never do
Across all OptimIQ products and modes, we do not:
- Sell PHI or personal data.
- Use PHI for third-party advertising, ad networks, or cross-context behavioral advertising.
- Train public, consumer, or cross-customer models on your PHI or data that identifies your organization, unless you have explicitly agreed to a de-identified or synthetic data program.
- Share your claims/clinical data with payers, vendors, or other customers except as directed by you or as required by law.
- Build a “shadow model” of your patient population for resale or external profiling.
6. Relationship to our Privacy Policy, Terms & BAA
This Data Use & Privacy Overview is meant to be practical and easy to understand. It does not replace:
- Our Privacy Policy – which explains legal bases, data subject rights, and jurisdiction-specific details;
- Our Terms of Service – which govern your use of the platform, including limitations of liability and acceptable use; and
- Any Business Associate Agreement (BAA) or Data Processing Addendum – which controls how we handle PHI as your business associate / processor under HIPAA and other laws.
If there is any conflict, the BAA and your signed agreement with OptimIQ control for PHI and other regulated data.