Summary advice

When using any newly available technology in your role, it is important to consider the risks and benefits.

The available Large Language Models (LLMs) and Generative Artificial Intelligence (Generative AI) tools have not been validated as safe and effective for use in healthcare; nor have the risks and benefits been adequately evaluated in the Aotearoa New Zealand health context.

Health New Zealand | Te Whatu Ora DOES NOT ENDORSE the use of LLMs or Generative AI tools where non-public information is used to train the model or is provided to the model as context (for example, within prompts).

The Health NZ National Artificial Intelligence (AI) and Algorithm Expert Advisory Group (NAIAEAG) advises that Health NZ employees and contractors:

  • Must NOT:
    • Enter any personal, confidential or sensitive patient or organisational data into LLMs or Generative AI tools
    • Use these LLM or Generative AI tools for clinical decision-making, personalised patient-related documentation, or personalised advice to patients

If Generative AI tools or LLMs are used for any other purpose, Te Whatu Ora employees and contractors are fully responsible for checking the accuracy of the information generated.

This technology is rapidly advancing and changing – safe and appropriate uses that help our staff and the population of Aotearoa NZ may well be developed in the near future.

However, the NAIAEAG currently advises a precautionary approach as outlined above, due to risks around breach of privacy, inaccuracy of output, bias, lack of transparency and data sovereignty. We will continue to investigate these risks and the use of these tools.

We recommend that any staff members with ideas or plans for potential use cases register them with NAIAEAG for advice on the appropriate process and to help us advance our understanding of safe, effective and appropriate use.

What do we mean by LLMs and Generative AI Models?

Generative AI is a type of artificial intelligence technology powered by very large machine learning models (algorithms) that are pre-trained on vast amounts of data to produce various types of content, including text, imagery, audio, computer code and synthetic data.

LLMs are a type of Generative AI specifically designed to generate text-based content. LLM deep learning algorithms are pre-trained on massive data sets of written human language and other textual data, enabling them to summarise, generate and predict new text-based content.

There are broadly two ways to access Generative AI and LLM tools: publicly via the internet (free or paid), and privately via closed-network authorised access. The primary difference between the two is the degree of control over, and privacy of, the data used to train and operate the model.

Well-known examples of public Generative AI tools are ChatGPT, a user interface for GPT models built by OpenAI, and Bard, built by Google. Other popular public models include Dolly from Databricks, LLaMA from Meta and Med-PaLM 2 from Google (a healthcare-specific model). Health NZ does not have any private LLMs available at this time.

What are the potential benefits and risks?

Currently these Generative AI tools appear to be good at summarising information and describing it in language that makes sense to people; however, that information may not be correct or based on evidence. Their core strength is predicting what is likely to be said next, which can lead to “hallucination”, or more accurately confabulation, where plausible-sounding but false content is produced.

The potential uses of these tools for healthcare have not yet been adequately evaluated for risks or unintended consequences. The technology is also rapidly advancing and changing. Therefore the NAIAEAG is advising a precautionary approach.

These risks do not necessarily preclude the use of Generative AI in healthcare in the future, but at present the risks have not been adequately assessed, mitigated, or weighed against the potential benefits.

Some of the known potential risks include:

Privacy breach

Generative AI tools require a prompt to undertake their activities. A prompt may be a few words or a large amount of data, and may include information (including health information) that can be used to reconstruct who it is about. Using identifiable information in this way will almost certainly be a breach of privacy if the individual has not authorised the use of that information, or if the use is not ‘directly related’ to the reason for its initial collection.

Other considerations with these types of tools are that the provider of the model may continue to use the entered data to train its Generative AI, and that such use is not consistent with aspects of the NZ Privacy Act and the Health Information Privacy Code, such as the right to access and correct personal information held by an agency and the requirements for the storage and security of personal information.

For further information, refer to the Privacy Commissioner’s guide on generative AI:  https://www.privacy.org.nz/publications/guidance-resources/generative-artificial-intelligence/

Inaccurate information

Generative AI tools can generate false and inaccurate responses that appear to be credible or from a medical or scientific source. They have been known to invent information and sources (sometimes referred to as ‘confabulation’). Generative AI tools can also produce different answers to the same prompt on each request, so re-running even the identical prompt is not an effective way to check answers for accuracy and veracity.

Inequities and Bias

Generative AI is likely to have been trained on data with underlying biases because it draws from current information sources that, for example, under-represent or misrepresent populations such as Māori, women, older people, disabled people and people who are gender diverse. Reliance on Generative AI trained on these existing biased data sources may reinforce social and health inequities if such tools are used in health.

Lack of transparency

The processes by which Generative AI models produce outputs are not transparent to users; they operate like a black box. This lack of transparency makes robust assessment of validity and ethical considerations very difficult.

Data sovereignty

The Generative AI models currently available in public forums are unlikely to respect or appropriately address Māori Data Sovereignty. This raises concerns about the potential misuse of Māori data in AI applications.

Intellectual Property

The content generated may infringe the Intellectual Property rights of others, such as copyright and trademarks. Issues of Intellectual Property ownership and liability are complex and remain unresolved.

Whom can I talk to about this?

The NAIAEAG will continue to update advice on these models/tools as we are able. We recommend that any plans for potential use are registered with our group for advice on appropriate process and support. We also encourage anyone with relevant knowledge or ideas for potentially safe and appropriate uses of LLMs/generative AI to provide input to our investigations.

Please contact NAIAEAG via our online form: https://forms.office.com/r/5cemkdANdU

Downloadable resources

Here are posters and tiles to help you spread the word about how Health New Zealand | Te Whatu Ora uses AI in healthcare.