Summary
Health New Zealand’s National Artificial Intelligence and Algorithm Expert Advisory Group (NAIAEAG) has been evaluating the use of generative artificial intelligence (AI) tools and Large Language Models (LLMs), and advises a precautionary approach to their use within Health New Zealand, especially given the risks of privacy breaches, inaccurate output, bias, lack of transparency and threats to data sovereignty.
Although generative AI tools and commercial LLMs are increasingly available, there is limited data on the utility, validity and safety of these technologies, and their risks and benefits have not been adequately evaluated in the New Zealand health context.
Health New Zealand does not endorse the use of unapproved LLMs or Generative AI tools where sensitive patient or organisational information is involved.
NAIAEAG advises that Health NZ employees and contractors must not:
- Enter any personal, confidential or sensitive patient or organisational information into unapproved LLMs or Generative AI tools
- Use LLMs or generative AI tools for any clinical decisions, or for personalised advice to patients
Where Generative AI tools or LLMs are used in other contexts, Health New Zealand employees and contractors are fully responsible for checking the information generated and for including an acknowledgement that Generative AI tools were used in creating the content.
The technology is rapidly advancing and NAIAEAG requires any staff members with ideas or plans for potential use of generative AI or LLMs to contact NAIAEAG for advice about an appropriate process to follow. This will also help NAIAEAG advance its understanding of safe, effective and appropriate use.
National Artificial Intelligence and Algorithm Expert Advisory Group
Learn more about the NAIAEAG including membership, Terms of Reference and contact details for queries or proposals.
Artificial Intelligence and Algorithm Expert Advisory Group – Health New Zealand | Te Whatu Ora
Definitions: Generative AI and Large Language Models
Generative AI is a type of artificial intelligence technology powered by deep learning algorithms that are pre-trained on vast amounts of data to produce various types of content, including text, imagery, audio and videos.
LLMs are a type of Generative AI specifically designed to generate text-based content. They are pre-trained on very large data sets of written human language and textual data, and are used to summarise, predict and generate new text-based content.
There are broadly two ways to access Generative AI and LLM tools: publicly via the internet (free or paid), and privately via closed network authorised access. The primary difference between the two is in the control and privacy of the data processed by the AI.
Well-known examples of public Generative AI tools are ChatGPT, a user interface for GPT models built by OpenAI, and Bard, built by Google. Other popular public models include Dolly from Databricks, LLaMA from Meta and Med-PaLM 2 from Google (a healthcare-specific model). Health New Zealand does not have any private LLMs available at this time.
Benefits of AI
Generative AI tools appear to be good at summarising information and describing it in language that makes sense to people. Their major strength is in predicting the next most likely word. These tools can generate convincing text but are also prone to generating inaccurate or misleading information.
Risks of AI
Limited data exists about the utility, validity and safety of using these technologies. The potential use of these tools in healthcare has not yet been adequately evaluated for risks or unintended consequences. Therefore, the NAIAEAG advises a precautionary approach.
Privacy breach
Generative AI tools require a prompt to undertake their activities. A prompt could be a few words or a large amount of data, and could include information (including health information) that can be used to reconstruct who it is about. Using identifiable information in this way will almost certainly be a breach of privacy if the individual has not authorised the use of that information or the use is not ‘directly related’ to the reason it was originally collected.
Other considerations with these types of tools are that the provider of the model may continue to use the data to train its Generative AI, and that such use is not consistent with aspects of the New Zealand Privacy Act and the Health Information Privacy Code, such as the right to access and correct personal information held by an agency and the requirements for storage and security of personal information.
For further information, refer to the Privacy Commissioner's guide on generative AI (external link)
Inaccurate information
Generative AI tools can generate false and inaccurate responses that appear to be credible or to come from a medical or scientific source. They have been known to invent information and sources (sometimes referred to as ‘confabulation’ or ‘hallucinations’). Generative AI tools can also produce different answers to the same prompt on each request, which means that re-submitting the identical prompt is not an effective way to check answers for accuracy and veracity.
Inequities and bias
Generative AI is likely to have been trained on data with underlying biases because it draws from current information sources that, for example, under-represent or misrepresent minority populations such as Māori, women, older people, disabled people and people who are gender-diverse. Generative AI trained on these biased data sources may reinforce social and health inequities if it is used in health.
Lack of transparency
The processes by which Generative AI models produce outputs are not transparent to users. This lack of transparency makes robust assessment of validity and ethical considerations very difficult.
Data sovereignty
Generative AI models available in public forums are unlikely to respect or appropriately address Māori Data Sovereignty. Data entered into LLMs available in public forums are typically retained by the LLM provider. This raises concerns about the potential misuse of Māori data in AI applications.
Supported languages
Generative AI models have largely been developed with a focus on major global languages, leaving Te Reo Māori and other minority languages spoken within Aotearoa New Zealand under-represented. The lack of support for these languages can lead to misunderstanding, inaccuracies, and miscommunication.
Intellectual Property
The content generated may infringe the Intellectual Property rights of others, such as copyright and trademarks. Issues of Intellectual Property ownership and liability are complex and remain unresolved.
NAIAEAG Endorsed Tools
Health New Zealand is evaluating a number of LLMs for use by staff, but no tools have yet been endorsed for use.
A limited, small-scale test of an AI scribe tool (Tuhi), developed by Health New Zealand, is under way.