Using artificial intelligence (AI) in general practice has potential benefits for delivering care, but its use must align with regulatory requirements and good clinical governance so that it is implemented safely and compliantly.
CQC is committed to encouraging the use of innovative technology in health and care where it benefits people who use services. We work with our partner organisations in the AI and Digital Regulations Service for health and social care to ensure this happens.
This guidance applies to IT-based and digital solutions used in a practice, as well as to AI tools.
What is AI?
In this guidance, we define AI broadly as: ‘Technologies that simulate human intelligence to perform complex tasks by learning from data on how to complete those tasks.’
AI-powered tools are designed to learn from data and generate dynamic responses based on different patterns, contexts and a variety of inputs. Their outputs may differ even for similar inputs, and may adapt over time or depend on the data they were trained on.
Rules-based automation tools differ from AI: they follow fixed logic using pre-defined algorithms or conditions. They produce the same outcome for a given input every time, with no capacity to learn or adapt unless manually updated. The simplified sketch after this paragraph illustrates the distinction.
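To make the distinction concrete, here is a minimal, purely illustrative sketch (not taken from this guidance or from any real product): the first function applies a hand-written rule, while the second derives its decision threshold from labelled example data, so its behaviour depends on what it was 'trained' on. All names, scores and thresholds are hypothetical.

```python
# Illustrative only: contrasts rules-based automation with a (very crude)
# data-driven decision rule. Not a real triage algorithm.
from statistics import mean


def rules_based_urgency(age: int, symptom_score: int) -> str:
    """Rules-based automation: fixed logic, same output for the same input."""
    if symptom_score >= 8 or (age >= 75 and symptom_score >= 5):
        return "urgent"
    return "routine"


def fit_threshold(examples: list[tuple[int, str]]) -> float:
    """'Learns' a cut-off from labelled examples of (symptom_score, outcome).

    A real AI tool would use far richer models and data; the point is only
    that the decision rule is derived from data rather than written by hand,
    so different training data produces different behaviour.
    """
    urgent = [score for score, label in examples if label == "urgent"]
    routine = [score for score, label in examples if label == "routine"]
    return (mean(urgent) + mean(routine)) / 2


def learned_urgency(symptom_score: int, threshold: float) -> str:
    return "urgent" if symptom_score >= threshold else "routine"


if __name__ == "__main__":
    # The rules-based check always behaves the same way.
    print(rules_based_urgency(age=80, symptom_score=6))           # urgent

    # The data-driven check depends on the examples it was fitted to.
    examples = [(9, "urgent"), (8, "urgent"), (3, "routine"), (2, "routine")]
    threshold = fit_threshold(examples)                           # 5.5 here
    print(learned_urgency(symptom_score=6, threshold=threshold))  # urgent
```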
Common AI products
In general practice, AI is starting to be used in a variety of ways, for example to manage administrative tasks, improve the efficiency of practice systems and support clinical decision making. The following are some common types of AI tools. As the technology advances, new tools and uses will emerge.
Ambient voice technology (AVT)
These AI scribe devices transcribe speech from patient consultations directly into structured medical documentation, such as patient notes, summaries and letters. See guidance from NHS England: Adopting ambient scribing products in health and care settings (NHS Transformation Directorate).
AI triage tools
Automated triage systems assess the nature and urgency of a patient’s symptoms using a structured online questionnaire, before directing them to the most appropriate care pathway. For example, a GP appointment, pharmacy, emergency department/A&E, self-care, or physiotherapy.
AI results processing
This can include sorting through investigation results, such as blood tests, and notifying clinicians of abnormal findings.
AI clinical documentation tools
These can be used for creating and processing medical documents and letters, such as AI-generated discharge summaries.
AI diagnosis and treatment planning tools
Some uses include:
- in a clinical decision support system (CDSS), to assist clinicians in formulating differential diagnoses based on a patient's symptoms, suggesting next appropriate investigations and treatment options
- helping to interpret medical imaging, for example by generating preliminary reports for X-rays
- identifying potentially serious skin conditions from lesion images; these flagged cases can then be prioritised for further clinical assessment or referral.
AI chatbots and generative AI
AI chatbots that use large language models (LLMs) can be used as search engines to answer complex queries, as well as for:
- writing case notes and reflections for portfolios, drafting responses to patient queries and generating letters
- supporting clinicians when making a diagnosis and planning treatment, although this is not recommended
- helping patients to list their symptoms and find out about the condition they may have and any self-care options.
AI predictive modelling
AI can be used to:
- predict missed appointments and help to schedule appointments
- forecast levels of staffing needed and assist with staff rostering
- assess patient risk, such as predicting a risk of certain cancers.
Potential benefits of AI
Enhanced clinical efficiency and workflow
AI tools can allow more meaningful contact time with patients. By flagging urgent issues early, AI could enable clinicians to prioritise higher-risk patients, provide more timely interventions, and reduce delays in care pathways. This has the potential to improve patient care.
Freeing up clinicians’ time
AI tools can automate time-consuming administrative and clinical tasks, reducing the workload on staff and allowing them to focus on patient-facing activities. Tasks can also be completed faster, improving productivity.
Improving access and equity
AI-driven triage and ‘digital front doors’ aim to enable patients to access help regardless of whether telephone lines are available. Intelligent prioritisation could help ensure patients are seen based on clinical need and directed onto the appropriate pathway.
Cost savings
Improving operational and clinical efficiency may reduce costs.
Data-driven insights
AI can analyse trends across patient populations, helping practices to proactively manage chronic conditions, identify at-risk individuals, and improve population health outcomes.
Procurement and implementation
Compliance with the NHS Digital clinical risk management standards
When adopting an eligible digital technology on behalf of the NHS, providers must first meet the safety standards set by NHS Digital.
NHS Digital has set 2 clinical risk management standards:
- the DCB0129 applies to developers of the digital technology, to help them to show evidence of the clinical safety of their technology
- the DCB0160 applies to adopters of the technology, to assure them that the technology is safe to use in health and social care.
These standards require both developers and adopters to carry out a risk assessment on the digital technology.
For adopters (for example, the digital lead of a GP practice), the DCB0160 standard supports the safe deployment and use of digital technologies by requiring providers to:
- think about how they will deploy and use the digital technology
- make sure the developer is compliant with DCB0129
- carry out a clinical risk assessment
- provide evidence of effective risk management.
NHS Digital has also produced the Digital Technology Assessment Criteria (DTAC) to support providers when procuring any new digital technologies. The DTAC is designed to help healthcare organisations to assess suppliers at the point of procurement or as part of a due diligence process, to make sure digital technologies meet NHS Digital’s minimum baseline standards.
MHRA classification
The Medicines and Healthcare products Regulatory Agency (MHRA) is the UK government body responsible for ensuring that medicines, healthcare products and medical devices are safe, effective and of high quality. This includes software and AI used as a medical device; see guidance from MHRA: Software and artificial intelligence (AI) as a medical device (Medicines & Healthcare products Regulatory Agency).
It regulates and approves medical devices by assessing their risk, issuing UKCA or CE markings, and ensuring compliance with relevant safety standards.
MHRA also oversees post-market surveillance, including monitoring adverse incidents through the Yellow Card scheme, to protect patients and maintain public confidence in medical technologies and software products.
If you use an AI tool in your practice that influences clinical decision-making, it is likely to qualify as a medical device under MHRA regulations. The classification of the device is based on its intended use and should be provided in the instructions for use (IFU) of any product.
The software should only be used within the constraints described there; otherwise, adopters take on significantly more risk. The class correlates with the potential level of risk to patients from the device:
- Class I – lowest potential risk to patients, self-certified by the developer
- Class IIa – needs formal MHRA assessment from this class onwards
- Class IIb
- Class III – highest potential risk to patients
You should ask AI companies to provide evidence of:
- MHRA registration
- UKCA or CE marking
- device classification and clinical safety/regulatory documentation.
By law, the manufacturer must know and disclose this information if the product is regulated. There is also an online database where you can search registered medical devices – the public access registration database (PARD). If unsure about a product, please contact MHRA directly.
Governance and oversight
To comply with the DCB0160 standard, adopters of new digital technologies are required to nominate a clinical safety officer (CSO). The CSO must:
- be a senior clinician
- have a current registration with a professional body such as the General Medical Council or Nursing and Midwifery Council
- have sufficient training in digital clinical safety and clinical risk management to a practitioner level. See: Digital Clinical Safety training (NHS England Digital).
If your practice does not have the expertise within the organisation, it may be appropriate to ask for advice from the commissioning organisation, primary care network or a third-party provider of this service.
The CSO will be responsible for the process as defined in the standard. This may include undertaking or overseeing the clinical risk management activities, such as:
- evaluating the evidence that clinical risks have been mitigated or accepted
- ensuring that the risk management processes are documented and recorded appropriately
- reviewing or developing key documents such as the clinical safety case report, hazard log and clinical risk management plan
- organising hazard workshops.
What we will look at when we assess
We may not look at every quality statement and associated regulation under a key question when we assess your practice. But our assessments will focus on your systems and processes to ensure the safe and compliant use of AI. We will look for evidence of this in the following areas:
- Procurement and governance: Any AI tools must have been procured in line with relevant evidence and regulatory standards (DCB0160, DTAC and MHRA registration, if applicable). We will check that they are being used appropriately and safely by reviewing your clinical governance arrangements.
If an AI tool is supplied from an NHS procurement list, it is reasonable for the practice to assume that appropriate developer standards have been met. However, you still need to ensure that it is deployed in line with its intended purpose and to review the evidence of regulatory standards.
- Risk assessment: Your practice should have a hazard log and risk assessments completed in relation to AI tools.
- Responsibilities: There should be a responsible CSO and digital lead for AI technologies and related clinical governance. They should have completed relevant training.
- Human oversight: To ensure safe and effective working, implementation and outcomes, there should be appropriate human oversight, monitoring and evaluation of AI outputs and processes. You need to demonstrate that AI is being used as a support tool, not a replacement for human oversight. We will check that this is happening through your audits, a significant incident log or other quality improvement activity.
- Learning from errors: If something goes wrong, we will check for established systems to report and investigate occurrences. This includes reporting to software developers as well as through the MHRA Yellow Card scheme. Any lessons learned should be shared internally, for example in staff meetings and with the CSO, and shared externally with digital leads in the integrated care system or through the Learn From Patient Safety Events (LFPSE) service if patient safety is at risk.
- Data protection: All practices should be able to demonstrate how third-party vendors have provided assurances about how data is shared, stored and used. For example, through:
- complying with the UK General Data Protection Regulation (GDPR), including a Record of Processing Activities (ROPA)
- Data Protection Impact Assessments (DPIAs)
- cybersecurity arrangements
- the NHS Data Security and Protection Toolkit (DSPT).
- Consent: The type of consent required (implied or explicit) will be determined by the type of AI technology and its intended use.
However, as AI technologies are new, you do need to tell people that you are using them. This is about being transparent and allowing people to object, rather than asking for explicit consent.
For example, you do not need to ask for explicit consent from patients or people using your service before using an AI scribe to perform tasks that deliver individual care. In these cases, it is appropriate to rely on implied consent under the common law duty of confidentiality. But, as AI scribes are new, you do need to tell patients they’re being used.
See: Artificial Intelligence (NHS Transformation Directorate)
Refer to the ICO guidance: How do we ensure lawfulness in AI? (ICO)
- Staff training: Staff should have received appropriate training to be competent to use AI tools.
- Equity in access: People should not be digitally excluded. It is essential that practices offer a non-digital route to access care alongside digital routes. You should take practical steps to reduce digital exclusion by looking at digital skills, connectivity and accessibility among the patient population.
See more: Inclusive digital healthcare: a framework for NHS action on digital inclusion (NHS England)
- Managing bias: AI tools may be biased in relation to certain population groups, depending on the quality of the data used to develop them. You should seek assurance that this risk has been mitigated.
Further information
- Principles for artificial intelligence (AI) and its application in healthcare (BMA’s report on principles of AI)
- Using AI safely and responsibly in primary care (The MDU)
- Evidence Standards Framework for Digital Health Technologies (NICE)
- AI in healthcare: navigating the noise (NHS Confederation)
We cannot verify the accuracy of these external publications. Guidance and resources will change as the technology evolves, so you should check the original organisations’ websites for updates.