Whistleblowing has been in the news recently, with Boeing's issues exposed by an internal whistleblower. Management typically has policies in place for human whistleblowers but has not yet begun to think about AI being used by whistleblowers. AI is becoming a major business risk at every level and should be part of organizational strategic planning. This post outlines what such a policy should contain.

A key management tool for ensuring AI does not negatively impact your organization has just been released: ISO 42001, a guide for managing the use of AI both inside and outside the organization. It is risk-based and works hand in hand with ISO 27001 and ISO 9001.

Considering the evolving regulatory landscape surrounding artificial intelligence (AI), including the EU AI Act and potential future directives from bodies like the U.S. Department of Health and Human Services (HHS), establishing a whistleblower policy has emerged as a proactive measure for organizations. While current laws may not explicitly mandate such policies, the need to comply with emerging regulations underscores the importance of instituting robust internal frameworks. Effectively, these policies serve as a preemptive strategy to ensure adherence to evolving standards and mitigate potential risks associated with AI usage.

Central to the justification for implementing a whistleblower policy is the need for comprehensive insight into all facets of AI use within your organization. Because self-reporting is often the primary means of identifying AI usage, gaps in awareness can arise, particularly around third-party applications or unauthorized AI deployments. Tools such as the Truyo AI Governance Platform can help by scanning for and identifying AI usage, addressing the oversight challenge and strengthening compliance efforts. By deploying a discovery tool like Truyo and fostering a culture of transparency and accountability, organizations can navigate the complexities of AI governance with greater confidence and resilience.

What Should Be in Your AI Whistleblower Policy

  1. Purpose and Scope:
    • Encourages reporting of misconduct related to AI systems.
    • Covers all operations and locations of the organization.
    • Defines potential acts warranting reporting.
  2. Protection for Whistleblowers:
    • Ensures protection from retaliation, discrimination, or harassment.
    • Maintains confidentiality of reporters’ identities.
  3. Reporting Mechanism:
    • Provides multiple channels for reporting concerns.
    • Promises confidentiality and professionalism in handling reports.
  4. Investigation:
    • Conducts initial assessment and impartial investigation.
    • Informs reporters of investigation progress and outcome.
  5. Corrective Action:
    • Takes appropriate disciplinary or remedial action upon findings.
    • Implements measures to prevent future occurrences.
  6. Policy Review and Updates:
    • Conducts annual reviews for effectiveness and compliance.
    • Reserves the right to amend policy as necessary.
  7. Responsibility:
    • Expects compliance from all employees, contractors, and partners.
    • Assigns responsibility to a Compliance Officer for oversight.

A comprehensive whistleblower policy aims to foster a culture of accountability and ethical AI use within the organization, aligning with the principles outlined in the EU AI Act and other upcoming regulations. In addition to the important components above, employees should be informed of their right to report, how to submit a report, and what the reporting process entails.
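To make the reporting, investigation, and corrective-action stages above more concrete, here is a minimal, hypothetical sketch in Python of how a compliance team might track an AI-related report from intake through closure. The class, field, and status names are illustrative assumptions for this post, not part of ISO 42001, the EU AI Act, or any vendor tool.

```python
# Hypothetical sketch: a minimal intake record for AI-related whistleblower
# reports, following the policy stages outlined above (reporting mechanism,
# investigation, corrective action). All names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional
import uuid


class ReportStatus(Enum):
    RECEIVED = "received"             # report logged through a reporting channel
    UNDER_ASSESSMENT = "assessment"   # initial assessment of the concern
    INVESTIGATING = "investigating"   # impartial investigation in progress
    CORRECTIVE_ACTION = "corrective"  # remedial or disciplinary action underway
    CLOSED = "closed"                 # outcome communicated to the reporter


@dataclass
class AIWhistleblowerReport:
    """One reported concern about an AI system, tracked through the policy stages."""
    description: str                  # the misconduct or concern being reported
    channel: str                      # reporting channel used (hotline, email, portal)
    anonymous: bool = True            # identity withheld to protect the reporter
    report_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: ReportStatus = ReportStatus.RECEIVED
    outcome: Optional[str] = None     # summary shared with the reporter at closure

    def advance(self, new_status: ReportStatus, note: Optional[str] = None) -> None:
        """Move the report to the next stage and record an outcome note if given."""
        self.status = new_status
        if note:
            self.outcome = note


# Example usage: a concern about an unauthorized third-party AI deployment.
report = AIWhistleblowerReport(
    description="Unapproved third-party AI tool processing customer data",
    channel="compliance-portal",
)
report.advance(ReportStatus.UNDER_ASSESSMENT)
report.advance(ReportStatus.INVESTIGATING)
report.advance(ReportStatus.CORRECTIVE_ACTION, note="Tool decommissioned; vendor review added")
report.advance(ReportStatus.CLOSED)
print(report.report_id, report.status.value, report.outcome)
```

A record like this, however it is implemented, helps demonstrate that each report was handled confidentially, assessed, investigated, and closed with a documented outcome, which is the evidence an annual policy review or an ISO 42001 audit would look for.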

About Ale Johnson

Ale Johnson is the Marketing Manager at Truyo.

Our PECB-published course is designed to assist you in this endeavor. Contact me at training@cpisys for more information and get started on your AI Management System implementation journey!

ISO-IEC-42001-LI