Responsible Use of AI Systems Procedure Template | ISO 42001 AIMS

by Poorva Dange

Responsible use of AI systems is essential both for building stakeholder trust and for regulatory compliance. This procedure helps users understand the core elements that must be in place before and during AI system use, and it specifies the main elements of responsible AI usage alongside the duties of AI end-users and the company.


Responsibilities of AI Users

1. Understand the AI System

  • Users must be aware of both the capabilities and the limitations of the AI systems they operate.
     
  • Understanding an AI system's operational range allows users to apply it effectively and prevents misuse.

  • Users should know how the AI system generates its outputs, through transparent and explainable methods, so they can make well-informed decisions.

2. Use AI Systems Only for Their Intended Purpose

  • Users must only operate AI systems that the company has authorized, and only for their approved functional purpose.

  • Authorization matters because unauthorized or improper use of AI systems can produce decisions that breach regulatory requirements and the company's AI principles.

  • Restricting systems to suitable purposes supports customer safety, inclusive growth, and societal and environmental well-being.

3. Be Cautious about Data Privacy

  • All users must comply with applicable national privacy regulations and with the security protocols the company enforces to protect sensitive information. Company policy explicitly prohibits the unauthorized disclosure of personal or other confidential data through AI systems.

  • Maintaining data privacy and protection is fundamental to the company: it preserves trust and reputation and satisfies the requirements of regulations such as HIPAA, the GDPR, and the EU AI Act.

  • The input of personal data into AI chatbot systems must be limited to authorized users who genuinely need to provide it.
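
To make the restriction concrete, here is a minimal sketch of redacting input before it reaches an AI chatbot. The `redact_pii` helper and the regex patterns are illustrative assumptions, not a complete detector; a production deployment would use a vetted PII-detection library and policy review.

```python
import re

# Illustrative PII patterns -- not exhaustive; a real deployment would use
# a vetted PII-detection library rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII value with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567."
safe_prompt = redact_pii(prompt)  # redacted copy is what the chatbot sees
```

Redacting before submission, rather than relying on the AI vendor to discard data, keeps the control on the company's side of the boundary.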

4. Verify Outputs

  • Users must carefully verify AI system outputs before finalizing any decisions that depend on them.

  • Verification is required because AI systems sometimes generate incorrect or biased outputs, which a reliable output assessment process can catch.

  • Performing output verification builds trust in AI systems by confirming that they produce consistent and reproducible results.
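
The verification step above can be sketched as an explicit rule check on AI output before it is acted on. `ai_extract_invoice` is a hypothetical stand-in for a real model call, and the field names and rules are illustrative assumptions:

```python
def ai_extract_invoice(document: str) -> dict:
    # Placeholder for a real AI extraction call (hypothetical helper).
    return {"invoice_id": "INV-1042", "total": 199.99, "currency": "EUR"}

REQUIRED_FIELDS = {"invoice_id", "total", "currency"}
ALLOWED_CURRENCIES = {"EUR", "USD", "GBP"}

def verify_output(record: dict) -> list:
    """Return a list of rule violations; an empty list means the output passed."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("currency") not in ALLOWED_CURRENCIES:
        problems.append("unsupported currency")
    total = record.get("total")
    if not isinstance(total, (int, float)) or total <= 0:
        problems.append("total must be a positive number")
    return problems

record = ai_extract_invoice("scanned invoice text ...")
violations = verify_output(record)
approved = not violations  # only proceed when verification passes
```

Encoding the acceptance rules explicitly also gives auditors a record of what "verified" meant for each decision.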

5. Report Issues

  • Users must report AI system problems through the officially designated channels and in accordance with the company's AI Systems Incident and Concern Management Procedure.

  • Issues should be reported promptly so that problems can be resolved quickly and AI systems can be updated in line with the AI principles.

  • The issue-reporting mechanism improves AI system performance by maintaining human oversight and accountability.
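
As an illustration, a report might be captured as a structured record before it is sent through the designated channel. The field names below are assumptions for the sketch, not the required schema of the Incident and Concern Management Procedure:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    # Illustrative fields; the real schema comes from the company's
    # AI Systems Incident and Concern Management Procedure.
    system: str
    reporter: str
    severity: str      # e.g. "low", "medium", "high"
    description: str
    reported_at: str

def build_report(system: str, reporter: str, severity: str, description: str) -> str:
    """Serialize an incident report as JSON for the designated channel."""
    report = AIIncidentReport(
        system=system,
        reporter=reporter,
        severity=severity,
        description=description,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(report))

payload = build_report("hr-chatbot", "j.smith", "high",
                       "Chatbot returned another employee's personal data.")
```

A structured record makes reports searchable and lets the AIMS track recurring problems across systems.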

6. Engage in Continuous Learning

  • Using AI systems effectively requires users to stay informed about best practices, ethical standards, and technological advances related to these systems.

  • Regular learning about emerging AI technology helps users adopt appropriate operating practices.

  • The AI principles serve as the foundation for maintaining responsible AI system use.

7. Avoid Misuse

  • AI system users must not apply these systems to illicit or harmful ends, including manipulation and deception.

  • AI misuse can lead to legal violations, harm to personal safety and social structures, and damage to the company's reputation.

  • For example, sharing AI-generated deep-fake videos can distort public decision-making and cause societal harm.

  • Avoiding misuse upholds the principles of safety, inclusive growth, and societal and environmental well-being: it protects individuals from harm and supports reliable, safe, and inclusive AI systems.

8. Do not Ignore Bias

  • Users are obligated to watch for bias in system outputs and to report any bias they detect through the proper channels.

  • This safeguard is needed because AI systems carry an inherent risk of producing biased outputs despite existing safeguards.

  • For example, if an AI recruitment system displays bias toward specific groups, users should report this defective behavior while using the system.
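
One way users or reviewers might screen a recruitment system for the disparity described above is the conventional "four-fifths rule" on selection rates. The groups and counts below are illustrative assumptions:

```python
def selection_rates(decisions: dict) -> dict:
    """decisions maps group -> (selected, total applicants)."""
    return {group: selected / total
            for group, (selected, total) in decisions.items()}

def adverse_impact_ratio(decisions: dict) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are a conventional signal to investigate and report."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative counts: group_a selected 30/100, group_b selected 12/100.
decisions = {"group_a": (30, 100), "group_b": (12, 100)}
ratio = adverse_impact_ratio(decisions)
flag_for_report = ratio < 0.8  # escalate through the bias-reporting channel
```

A failed screen like this does not prove bias on its own, but it gives the user concrete evidence to attach to a report.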

9. Do not Share Your Credentials

  • Users must not share their access credentials, including usernames and passwords, with others, and must not allow other users to authenticate with their AI system accounts.

  • This is necessary because sharing credentials creates risks of improper AI system use.

  • For example, a customer support employee who pastes system access credentials into an AI chatbot creates a vulnerability that hackers could exploit to gain system entry.

  • Security, privacy, and robustness form the foundation of AI system protection; unauthorized people must be prevented from accessing these systems.
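
A minimal sketch of the underlying practice: each user's credentials are loaded from their own environment at runtime rather than hardcoded or passed around. The variable name `AI_SYSTEM_API_KEY` is an assumption for illustration:

```python
import os

def load_api_key(var_name: str = "AI_SYSTEM_API_KEY") -> str:
    """Fetch the caller's own credential from the environment.

    The variable name is illustrative; never hardcode or share the value.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; request your own credential "
            "through the approved provisioning process."
        )
    return key

# Simulate a per-user environment for this sketch only.
os.environ["AI_SYSTEM_API_KEY"] = "example-key-for-local-testing"
key = load_api_key()
```

Per-user credentials also keep audit logs attributable, which supports the accountability principle elsewhere in this procedure.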

10. Avoid Overreliance

  • Users should not rely entirely on AI system outputs for decisions that demand critical human judgment.

  • This is needed because AI systems exist to support human decision-making, not to take over human responsibilities.

  • For example, in healthcare an AI system may recommend options for patient care, but the physician must make the conclusive choice.

  • The principle of human oversight requires that humans remain in control of AI systems, enabling both accountability and responsible decision-making.
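
The human-oversight pattern above can be sketched as a simple decision gate in which high-stakes recommendations are never auto-applied. The categories and status labels are illustrative assumptions:

```python
# Illustrative set of decision categories that always require a human.
HIGH_STAKES = {"diagnosis", "termination", "loan_denial"}

def decide(recommendation: dict, human_approval=None) -> str:
    """Auto-apply only low-stakes recommendations; otherwise defer to a human.

    human_approval: None while awaiting review, then True/False once a
    person has decided.
    """
    if recommendation["category"] in HIGH_STAKES:
        if human_approval is None:
            return "pending_human_review"
        return "approved" if human_approval else "rejected"
    return "auto_applied"

status = decide({"category": "diagnosis"})                     # waits for a physician
final = decide({"category": "diagnosis"}, human_approval=True) # physician decided
routine = decide({"category": "spellcheck"})                   # low stakes
```

The gate makes the accountability chain explicit: every high-stakes outcome traces back to a named human decision, not to the model.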

LLMs (Large Language Models) and Generative AI Usage

Adopting these systems raises several concerns that all users must understand.

1. Exercise Caution with Misinformation, Inaccuracies, and Biased Content

  • LLMs can produce text that appears genuine but contains misinformation, inaccurate data, or biased content.
  • Using distorted, prejudiced, or incorrect information in decision-making leads to non-compliance with the AI principles.
  • Users must verify information from AI-generated articles against trusted sources, because such articles often include false assertions.
  • Distributing accurate and unbiased content builds audience trust and maintains fairness in communication.

2. Check for Copyright and Intellectual Property Violations

  • Generative AI can produce material that reproduces copyrighted content or intellectual property without proper attribution.
  • Users are required to verify that AI-generated material does not infringe any copyright.
  • For example, using images from AI system output poses copyright risks, so copyright verification is essential before use.
  • The principles of accountability and human agency and oversight require users to maintain control over generated results, since users bear responsibility for everything they produce.

3. Always Be Careful about Data Privacy Risks

  • Users must protect sensitive or personal data before sharing it with LLMs, because improper sharing can cause privacy violations.
  • This requirement helps prevent LLMs from misusing important or personal input data.
  • Sensitive and personal information may be exposed to privacy violations when users operate generative AI tools without privacy protections.
  • Data security awareness helps AI system users understand what input data is valid to share.
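
A minimal pre-submission check along these lines might block prompts that appear to contain secrets or personal identifiers before they reach an LLM. The patterns are illustrative examples, not a complete detector:

```python
import re

# Illustrative blocklist; a real deployment would use a maintained
# secret-scanning and PII-detection toolchain.
BLOCKLIST = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),              # key material
    re.compile(r"\b(?:password|passwd|secret)\s*[:=]", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                           # SSN-like number
]

def safe_to_submit(prompt: str) -> bool:
    """Return False when the prompt matches any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCKLIST)

ok = safe_to_submit("Summarize this meeting transcript for me.")
blocked = safe_to_submit("My password: hunter2, please check it")
```

Blocking at submission time complements the redaction guidance earlier in the procedure: redaction cleans what must be sent, while a gate stops what should never be sent at all.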

Conclusion

A Procedure for Responsible Use of AI Systems gives staff and stakeholders direction on incorporating AI technologies ethically and correctly. The procedure protects against AI misuse through transparent operations, honors privacy rights, and strengthens the organization's record of trustworthy AI in line with ISO 42001 requirements.