Do you need an AI Policy?

02 December 2024

Since generative AI programmes rose to prominence in 2022, they have been adopted, used and misused by people and businesses across the world. It may be tempting to use AI to make some processes more efficient, or to take care of tedious admin work. However, this new technology comes with many risks if you choose to embrace it. It is important to make clear to staff how and when AI may be used within your organisation, and a comprehensive AI policy can lay out your expectations for its use.

What are the uses and risks of AI?

AI programmes like ChatGPT are machine learning systems that scrape huge portions of the internet and use that information to generate responses to whatever question or prompt you put in. However, this "scraping" is indiscriminate, and can just as easily pick up bad data as good data. Bad data can include anything from misinformation to offensive phrases and attitudes. AI programmes have also been known to completely make things up ("hallucinate"). It is therefore advisable to verify any data, rules, facts etc. that might be quoted in AI-generated documents. When producing written work with AI, it is also advisable to rewrite the content in your own words. This allows you to avoid accidental plagiarism from the AI's scraped content, remove any mistakes and tailor the writing to your brand's tone of voice.

AI can also be used to draft generic emails, find relevant content in large documents and digest large amounts of raw data instantly, among other things. However, generative AI systems continue to learn from what we ask them, which means that any personal or sensitive information can be taken by the AI and regurgitated elsewhere without your knowledge. This poses an enormous data breach risk and, where client and employee data are concerned, it is very important to ensure that any use of AI complies with GDPR rules. If using AI for any admin tasks, do not include sensitive information, and anonymise or redact information where needed.

Some employers have taken to using AI in the recruitment process in order to scan through large volumes of applicants and whittle them down to the best candidates. This needs to be implemented responsibly, as historic bias has been known to bleed into AI decision making. A system trained on the hiring decisions made by hundreds of employers can absorb the unconscious bias of those employers, exacerbating unfair hiring practices. Worse still, many AI programmes are "black box" systems, meaning not even the AI developers can see exactly why the AI made the decision it did. This lack of transparency means that if a candidate appeals a decision, the employer will not be able to explain why they were turned down.

AI is only as good as the information it can access or is provided with. We therefore recommend using AI to complement human decision making, rather than outright replacing it. Human judgement will help the business avoid potentially damaging and unjustifiable decisions. Employers have also found uses for AI in complementing employee onboarding processes, aiding performance management, conducting conduct and productivity analysis, managing remote workers, and working alongside employees, for example by automating repetitive or dangerous tasks to enhance efficiency and safety.

However, if your organisation uses AI, you should weigh up the benefits of its use against the potential legal risks as well as some other negatives. AI tools lack any element of human judgement or morality, and over-reliance on AI can erode the personal nature of the employer/employee or business/client relationship. This can, for example, damage the relationship between an employee and their line manager, which in turn can lead to potential employment disputes.

What should I include in my policy?

Creating an AI policy is considered best practice for any organisation engaging with artificial intelligence technologies: it ensures ethical practices and compliance with regulations, and fosters trust among teams. Including the elements below helps organisations navigate the complex ethical, legal and social implications of AI technologies, promoting responsible innovation and building trust among users and the wider community.

Here is a list of just some of the elements that an organisation should include in their AI policy:

  1. Purpose and Scope
    • Clearly define the purpose of the AI policy and its scope within the organisation.
  2. Ethical Principles
    • Outline the core ethical principles guiding AI use within the organisation, such as fairness, transparency, accountability, and respect for user privacy.
  3. Terminology
    • Explain/define key terminology used in the policy.
  4. AI use in the workplace
    • Explain which AI systems are permitted within the organisation and for what uses.
    • Provide guidelines for use.
    • Include a statement that any breach of the policy may have serious disciplinary consequences, and cross-refer to the disciplinary policy.
    • Include information outlining who is responsible for meeting the costs of any use of AI.
  5. Monitoring
    • Outline your right to monitor the use of AI in the organisation to mitigate any risks.
  6. Privacy and Data Governance
    • Include policies for handling and protecting data used by AI systems, complying with data protection laws like GDPR. This section should detail how data is collected, stored, used, and shared, and why this is important/potential consequences etc.
  7. Compliance with Laws and Regulations
    • Ensure the AI policy follows all relevant laws, regulations, and standards. This includes adhering to industry-specific regulations and any future laws regarding AI ethics and governance.
    • Explain the legal risks associated with AI-generated content.
  8. Training and Support
    • Include clear guidance on AI training and technical support and ensure you put this into practice as an organisation.
  9. Accountability and Oversight
    • Establish clear accountability structures and oversight mechanisms for AI projects.
    • Explain the record keeping required to ensure compliant use of authorised AI applications and to support their effective use by the workforce.

Contact Us

At Slater Heelis, our Employment team are experienced in drafting and reviewing HR policies surrounding new and developing technologies. As the landscape around AI continues to grow, we are committed to staying up to date on the latest trends and technologies so that we can support our clients with up-to-date policy documentation. The above list is provided as a general example of the types of things we would advise are included in an AI policy; it is not a substitute for the bespoke advice and policy drafting that we recommend an organisation obtains before making use of AI technologies. A carefully drafted AI policy cannot remove all of the potential pitfalls, but it can limit some of the risks involved, leaving you free to utilise this revolutionary technology within your organisation. If you would like to talk to our employment and HR specialists, fill out our contact form or give us a call on 0330 111 3131.
