1. Purpose
This policy establishes the principles and requirements for the responsible, ethical, and secure development, deployment, and use of artificial intelligence (AI) at Dualboot Partners. It covers both the creation of proprietary AI systems (“building AI”) and the integration or use of third-party AI solutions (“building with AI”). The goal is to ensure that all AI activities align with company values, legal obligations, and industry best practices, while managing risks and protecting stakeholders.
2. Scope
This policy applies to:
- All employees, contractors, and third parties involved in AI-related activities.
- All projects, products, and services that involve the development, deployment, or use of AI, whether proprietary or third-party.
- All data, infrastructure, and processes supporting AI systems.
This includes, but is not limited to, software development, data science, product management, IT, and business operations.
3. Definitions
- Artificial Intelligence (AI): Systems or technologies that perform tasks typically requiring human intelligence, such as learning, reasoning, problem-solving, perception, or language understanding.
- Building AI: The process of designing, developing, training, testing, and deploying proprietary AI models, algorithms, or systems.
- Building with AI: The process of integrating, configuring, or using third-party AI tools, APIs, or platforms within company products, services, or internal processes.
- AI System: Any software, hardware, or combination thereof that uses AI techniques to perform tasks.
- Third-Party AI: AI solutions, models, or services developed and maintained by external vendors or providers.
4. Roles & Responsibilities
- AI Governance Lead: Accountable for overseeing AI policy implementation, monitoring compliance, and serving as the point of contact for AI-related issues.
- Developers/Engineers: Responsible for following this policy in all AI development and integration activities, including documentation and risk management.
- Data Owners: Ensure that data used in AI projects is accurate, secure, and compliant with privacy and data protection requirements.
- Product Managers/Project Leads: Ensure that AI use cases are reviewed for compliance, risk, and ethical considerations.
- All Staff: Required to report any concerns, incidents, or suspected violations of this policy.
5. Acceptable Use
- AI must only be used for purposes that are legal, ethical, and aligned with company values.
- Prohibited uses include, but are not limited to: unlawful discrimination, privacy violations, unapproved automated decision-making, manipulation, or any use that could harm individuals or groups.
- All AI systems must be approved by the AI Governance Lead before deployment.
- Human oversight must be maintained for all critical AI-driven decisions, especially those affecting individuals’ rights or well-being.
6. Data Management
- Data used for AI must be collected, processed, and stored in compliance with applicable laws and company policies.
- Data must be accurate, relevant, and limited to what is necessary for the intended AI purpose.
- Sensitive or personal data must be anonymized or pseudonymized where feasible (see the illustrative sketch after this list).
- Data quality and integrity must be maintained throughout the AI lifecycle.
- Data used in third-party AI tools must be subject to the same standards as internal data.
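For illustration only, the sketch below shows one way direct identifiers might be pseudonymized and minimized before data enters an AI pipeline. The field names and key handling are hypothetical; teams must follow the approach approved by the relevant Data Owner and the AI Governance Lead.

```python
# Illustrative sketch: pseudonymizing direct identifiers before AI use.
# Field names and the secret-handling approach are hypothetical examples.
import hashlib
import hmac

# In practice the key would come from an approved secret store, never source code.
PSEUDONYM_KEY = b"replace-with-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_record(record: dict) -> dict:
    """Keep only what the AI purpose needs; pseudonymize what must be retained."""
    return {
        "customer_token": pseudonymize(record["email"]),  # pseudonymized identifier
        "tenure_months": record["tenure_months"],          # relevant, non-identifying feature
        # Fields such as name or address are intentionally dropped (data minimization).
    }

if __name__ == "__main__":
    sample = {"email": "jane.doe@example.com", "tenure_months": 18, "name": "Jane Doe"}
    print(prepare_record(sample))
```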
7. Model Development & Deployment (Building AI)
- AI models must be designed to minimize bias, support explainability, and allow for human oversight.
- All models must undergo rigorous validation and testing, including checks for accuracy, fairness, and security vulnerabilities (see the illustrative fairness check after this list).
- Documentation must include model purpose, data sources, training methods, validation results, and known limitations.
- Models must be monitored post-deployment for performance, bias, and unintended consequences, with mechanisms for rollback or remediation if issues are detected.
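As an illustrative, non-prescriptive example of the kind of fairness check that might run during validation, the sketch below compares selection rates across groups against a four-fifths heuristic. The threshold, group labels, and sample data are hypothetical; each project must define and document its own validated fairness criteria.

```python
# Illustrative sketch: a simple pre-deployment fairness check on model outputs.
# The 0.8 threshold (a common "four-fifths" heuristic), group labels, and sample
# data are hypothetical; projects must define their own validated criteria.
from collections import defaultdict

def selection_rates(predictions: list[int], groups: list[str]) -> dict[str, float]:
    """Share of positive predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    highest = max(rates.values())
    return min(rates.values()) / highest if highest else 1.0

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    rates = selection_rates(preds, groups)
    ratio = disparate_impact_ratio(rates)
    print(f"Selection rates: {rates}, ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Fairness check failed: investigate and remediate before deployment.")
    else:
        print("Fairness check passed.")
```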
8. Third-Party AI Integration (Building with AI)
- Only third-party AI tools and services that have been vetted and approved by the AI Governance Lead may be used.
- Third-party providers must demonstrate compliance with security, privacy, and regulatory requirements.
- Contracts with third-party AI vendors must address data ownership, security, privacy, support, and incident response.
- Regular reviews of third-party AI tools must be conducted to ensure ongoing compliance and performance.
9. Risk Management
- All AI projects must undergo a risk assessment to identify and address potential risks, including bias, security, privacy, explainability, and operational impact.
- Risk mitigation strategies must be documented and implemented before deployment.
- Ongoing risk monitoring is required, with periodic reviews and updates as needed.
10. Compliance & Legal
- All AI activities must comply with applicable laws, regulations, and industry standards (e.g., GDPR, CCPA, the EU AI Act, the NIST AI Risk Management Framework, and ISO/IEC 42001).
- Compliance activities, including assessments and audits, must be documented and retained for review.
- Legal counsel must be consulted for any AI use cases with significant legal, regulatory, or ethical implications.
11. Incident Response
- All AI-related incidents, including data breaches, model failures, or ethical concerns, must be reported immediately to the AI Governance Lead.
- Incident response procedures must include investigation, containment, remediation, and notification of affected parties as required.
- Lessons learned from incidents must be documented and used to improve AI practices and controls.
12. Training & Awareness
- All staff involved in AI activities must receive regular training on responsible AI development and use, including legal, ethical, and security considerations.
- Training must cover this policy, relevant laws and regulations, and best practices for AI risk management.
- Awareness campaigns should be conducted to promote a culture of responsible AI use across the organization.
13. Review & Updates
- This policy must be reviewed at least annually, or whenever significant changes occur in AI technology, regulation, or company operations.
- Reviews must consider feedback from stakeholders, audit findings, and changes in the external environment.
- All updates and approvals must be documented in the version history.