DIS Security & Trust
Last updated: 17 November 2025
Our core principle:
When you connect DIS, you’re giving us access to the heart of your engineering organization: source code, technical docs, issues, and conversations. Everything about our architecture and processes is built around protecting that trust.
Dagg Intelligence Systems (“DIS”) provides an agent-native software development control plane with a knowledge machine that turns your internal knowledge into actionable, executable insights. DIS is designed primarily for small and mid-sized software development teams and projects.
We do not recommend DIS for safety-critical environments such as life-support systems, rail signaling, or similar high-sensitivity use cases.
1. Architecture & Data Flows
DIS connects to the tools your engineering team already uses:
- Source control: GitHub
- Knowledge & docs: Notion, Google Docs
- Issue tracking & planning: Linear
- Communication: Slack
Planned integrations include Confluence, Jira, Trello, Perforce, and other enterprise platforms.
From these systems we ingest:
- Customer account data: workspace/project admins and members, emails, and identifiers.
- Customer content: code, documents, comments, tickets, and selected messages to power the knowledge graph and agent workflows.
- Metadata & usage analytics: events and metrics about how the product is used (never to train third-party models).
We do not intentionally process special categories of personal data (such as health data, biometrics, or data about children). DIS is focused on software development artifacts.
Data storage & retention
- Customer content and metadata are stored persistently to support the knowledge machine and historical context.
- Customers can delete an entire project; project deletion triggers hard deletion of related content from our primary data stores.
- Within a project, items may initially be soft-deleted (for safety and recovery) but are permanently removed when the project itself is deleted.
- After project deletion, remaining traces are limited to encrypted backups until those backups expire.
- Logs are retained for 30 days and are filtered to avoid sensitive content where possible.
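The deletion lifecycle above (soft delete within a project, hard delete on project deletion) can be sketched as follows. This is an illustrative model, not DIS's actual implementation; the class and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    """Hypothetical sketch of the soft-delete / hard-delete lifecycle."""
    items: dict = field(default_factory=dict)       # item_id -> content
    soft_deleted: set = field(default_factory=set)  # recoverable until project deletion

    def soft_delete_item(self, item_id: str) -> None:
        # Within a project, items are first soft-deleted so they can be recovered.
        self.soft_deleted.add(item_id)

    def restore_item(self, item_id: str) -> None:
        self.soft_deleted.discard(item_id)

    def visible_items(self) -> dict:
        return {k: v for k, v in self.items.items() if k not in self.soft_deleted}

    def hard_delete(self) -> None:
        # Project deletion removes all content, including soft-deleted items,
        # from primary stores; only encrypted backups remain until they expire.
        self.items.clear()
        self.soft_deleted.clear()
```

The key property is that soft deletion is recoverable while the project exists, but project deletion is terminal for the primary stores.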
2. Hosting, Regions & Data Residency
- DIS is hosted on Google Cloud Platform (GCP).
- We currently operate in an EU region and, for the scope of this document, only support EU residency.
- Customer data (databases and backups) is stored in EU-based infrastructure.
- Primary data stores:
- PostgreSQL for structured data, including vector embeddings.
- Neo4j Aura for graph data.
Current deployment model:
- Multi-tenant deployment on GCP, with multiple customer tenants per deployment environment.
- Data isolation is enforced through logical separation: tenant identifiers, row-level security policies, and application-level access controls ensure that customer data remains isolated and inaccessible to other customers.
- In PostgreSQL, tenant isolation is enforced via tenant IDs and query-level filtering.
- In Neo4j Aura, we use tenant-scoped group IDs combined with access controls to ensure logical separation between customers.
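The query-level tenant filtering described above can be illustrated with a minimal sketch. This uses SQLite purely as an in-memory stand-in for PostgreSQL, and the table and column names are hypothetical; the production system additionally layers row-level security policies on top of this pattern.

```python
import sqlite3

# In-memory stand-in (sqlite3) for the shared multi-tenant database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (tenant_id TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO documents VALUES (?, ?)",
    [("tenant_a", "design doc"), ("tenant_b", "roadmap")],
)

def documents_for(tenant_id: str) -> list:
    # Every query is scoped by tenant_id, so one tenant's rows are never
    # visible to another tenant even though they share a database.
    rows = conn.execute(
        "SELECT title FROM documents WHERE tenant_id = ?", (tenant_id,)
    )
    return [title for (title,) in rows]
```

In PostgreSQL itself, a row-level security policy (e.g. `CREATE POLICY ... USING (tenant_id = ...)`) enforces the same invariant in the database engine, so the isolation holds even if an application query omits the filter.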
3. Identity, Access & Permissions
Authentication
- DIS uses Auth0 for authentication and identity.
- We support SSO via identity providers such as Google (including Gmail and Google Workspace accounts).
- We do not currently allow username/password logins, so we do not store customer passwords in DIS.
- Where supported by Auth0, we can enable multi-factor authentication (MFA) via the identity provider.
Authorization & roles
Access within a DIS project is controlled with simple, explicit roles:
- Owner – Full control over the project. Can delete or archive the project, and manage integrations, settings, and membership.
- Admin – Read access to all project data. Can manage integrations (enable/disable connectors) and invite members.
- Member – Read access to project data. Cannot modify integrations or delete/archive the project.
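The role model above amounts to a small permission matrix. The sketch below is illustrative only; the action names are hypothetical, not the real DIS API.

```python
# Hypothetical permission matrix for the three project roles described above.
PERMISSIONS = {
    "owner":  {"read", "manage_integrations", "invite_members",
               "delete_project", "archive_project"},
    "admin":  {"read", "manage_integrations", "invite_members"},
    "member": {"read"},
}

def can(role: str, action: str) -> bool:
    # Unknown roles get no permissions (deny by default).
    return action in PERMISSIONS.get(role, set())
```

Deny-by-default means any role or action not explicitly listed is refused, which keeps the model explicit as new actions are added.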
4. Application Security & SDLC
Security is integrated into our development lifecycle:
- Code review: All changes are reviewed before deployment.
- Static analysis (SAST): Automated checks to catch common security issues.
- Dependency & container scanning (SCA): Scans for known vulnerabilities in dependencies.
- Secret scanning: Detection and prevention of hard-coded secrets in source code.
- AI-assisted review: Internal AI tools augment code review and surface potential issues.
Change management
- We operate staging and production environments.
- Changes are tested in staging before deployment to production.
- Dependencies and base images are updated on a daily/weekly cadence, depending on criticality.
Penetration testing
- We have not yet completed a third-party penetration test.
- Independent external testing is on our near-term roadmap, and results will inform further hardening.
5. Data Protection: Encryption, Backups, Deletion
Encryption in transit
- All communication between clients and our services is encrypted using TLS.
- Service-to-service communication within our infrastructure is also encrypted.
Encryption at rest
- All primary data stores (PostgreSQL, Neo4j Aura) and backups are encrypted at rest using GCP-supported mechanisms.
- Sensitive items such as access tokens are additionally encrypted at the application level before storage.
- Encryption keys are managed via GCP’s key management. Access to encryption keys is restricted to the CTO and enforced via least-privilege IAM controls.
- A formal key rotation policy is on our roadmap.
Backups & restoration
- We perform daily backups of critical data stores.
- Backups are stored in GCP, encrypted and access-controlled.
- Backup retention currently ranges between 7 and 30 days, depending on the system.
- We regularly test backup restoration and recovery procedures to validate that we can restore data if needed.
Data deletion
- Customers can request project deletion, which triggers hard deletion of that project’s data from active stores, typically within 24 hours.
- Backups expire automatically at the end of their retention period.
- We do not retain customer content for longer than needed to provide the service and backups.
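Combining the deletion and retention figures above gives a simple upper bound on how long any trace of a deleted project can persist: hard deletion from active stores happens within ~24 hours, so the last remaining copy lives in the newest backup taken before deletion, which expires with its retention window. A back-of-the-envelope sketch, using the 7–30 day retention range stated above:

```python
from datetime import date, timedelta

def last_trace_gone_by(deletion_date: date, retention_days: int = 30) -> date:
    # The newest backup taken just before deletion is retained for
    # `retention_days`; once it expires, no copy of the data remains.
    return deletion_date + timedelta(days=retention_days)
```

With the maximum 30-day retention, a project deleted on a given date has no remaining backup traces roughly 30 days later.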
6. Network & Infrastructure Security
- All resources are deployed inside GCP VPCs with private subnets for internal services and firewall rules limiting inbound and outbound traffic.
- The only externally exposed components are:
- The DIS web application/API.
- An MCP server used for agent integrations, protected by authentication and network controls.
- We use GCP’s WAF, rate limiting, and DDoS protections in front of public endpoints.
- Secrets such as DB passwords and API keys are stored in GCP Secret Manager for production and NordPass for development-related secrets.
- Access to production infrastructure and secrets is restricted to the CTO, following least-privilege principles.
7. Internal Access & Employee Security
- Access to production data and systems is limited to a small set of engineering roles on a least-privilege, need-to-know basis.
- Access is logged and periodically reviewed.
- Environment separation:
- Dev: Local developer environments, isolated from production.
- Staging: Separate GCP deployment used for testing and validation.
- Production: Live customer environment, separate deployment but same overall billing account.
Device & workstation security
- Disk encryption is required for devices accessing production systems.
- Screen lock timeout is set to 30 seconds.
- Onboarding and offboarding procedures are being formalized; at present these are handled centrally by the CTO to ensure timely revocation of access when someone leaves.
8. Logging, Monitoring & Incident Handling
- We log authentication events, admin actions, system errors, and selected data access and runtime process events (without logging all raw content).
- Logs are centralized using GCP logging and monitoring, retained for 30 days, and access-controlled and filtered to reduce exposure of sensitive data.
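Filtering sensitive content out of logs typically works by redacting known-sensitive patterns before a line is written. The sketch below shows the general technique; the patterns are illustrative examples, not DIS's actual redaction rules.

```python
import re

# Example redaction rules: bearer tokens and email addresses.
REDACTIONS = [
    (re.compile(r"(?i)bearer\s+[a-z0-9._-]+"), "Bearer [REDACTED]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
]

def redact(line: str) -> str:
    # Apply each rule in turn so a log line never reaches storage
    # with a matching secret or identifier intact.
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line
```

Running redaction at the point of emission, rather than after ingestion, means sensitive values never land in the centralized log store at all.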
Incident response
- We are formalizing a written incident response plan.
- Security-relevant alerts are handled directly by the core engineering leadership.
- In the event of a security incident that materially affects customer data, we commit to notifying affected customers without undue delay, once we have enough information to provide a meaningful update.
9. Third-Party Services & Subprocessors
Key third-party services currently used include:
- Google Cloud Platform (GCP) – infrastructure, networking, storage, databases, logging, and monitoring.
- Auth0 – authentication and identity management.
- Langfuse – LLM observability, used to trace and improve model inference.
- AI/LLM providers – see AI section below.
We do not yet publish a public subprocessor list. Where third-party vendors handle customer data, we treat them as subprocessors and apply internal review and data protection agreements as we scale.
10. Privacy, GDPR & Data Subject Rights
- For most customers, DIS acts as a data processor: you, the customer, remain the controller of your data.
- Customer data is stored in the EU (GCP EU region).
- We currently focus on supporting deletion rights: customers can delete projects, triggering data deletion and backup expiry as described above.
- For full details on data collection, usage, retention, and your rights, please see our Privacy Policy.
Items in progress
- A standard Data Processing Addendum (DPA) that can be incorporated into customer contracts.
- Additional tooling to support broader data subject rights (access/export, rectification, objection, portability) as the product and customer base grow.
Cookies & tracking
- We use only strictly necessary cookies required to operate the service.
- We do not use cookies for third-party marketing.
11. AI & Model Security
DIS uses AI to power agents and the knowledge machine. Today, this includes:
- Large language models from providers such as OpenAI, Anthropic, xAI (Grok), and potentially others over time.
- Vector embeddings generated via OpenAI and stored in PostgreSQL.
Key principles
- Tenant isolation:
- Vector data is stored in shared PostgreSQL databases with logical separation enforced through tenant identifiers and application-level access controls.
- Neo4j Aura uses tenant-scoped group IDs to ensure logical separation between customers.
- No training on your data:
- We have a strict policy that customer data is not used to train or fine-tune DIS models.
- Where possible, we opt out of third-party model providers using prompts or outputs for their own training.
Prompt & completion logging
- We log prompts/outputs for up to 30 days to support debugging, monitoring, and abuse detection.
- These logs are not used for model training and are subject to the same access controls and deletion practices as other logs.
12. Business Continuity & Disaster Recovery
- We maintain daily encrypted backups of critical systems with 7–30 days of retention.
- We test backup restoration to validate that we can recover from data loss scenarios.
- A documented disaster recovery (DR) plan, including explicit RTO/RPO targets, is on our roadmap; current recovery capabilities are constrained primarily by backup schedules and infrastructure provisioning times.
13. Customer Responsibilities
Security is a shared responsibility. DIS is responsible for securing the platform and infrastructure, but customers are responsible for:
- Managing user accounts, roles, and permissions within DIS.
- Configuring and securing their identity provider (SSO, MFA policies, etc.).
- Securing their own endpoints and devices.
- Managing access and security settings for integrated systems (GitHub, Notion, Linear, Slack, Google Docs, etc.).
14. Roadmap & Continuous Improvement
We treat security and privacy as ongoing work. Near-term priorities include:
- Formalizing a documented incident response plan.
- A disaster recovery plan with explicit RTO/RPO.
- A consistent onboarding/offboarding process as the team scales.
- Running independent third-party penetration tests and addressing findings.
- Publishing a Data Processing Addendum (DPA) together with a subprocessor list.
- Expanding support for data subject rights and admin-level security controls.
Additional Resources:
For more information about our policies and legal terms, please review:
- Privacy Policy – How we collect, use, and protect your data
- GDPR Data Policy – Our GDPR compliance and data subject rights
- Data Processing Addendum – Our data processing terms and obligations
- Terms of Use – Legal terms governing your use of DIS