Introduction

Artificial Intelligence (AI) technology continues to develop at a rapid pace, and its use is becoming increasingly widespread in both our homes and places of work, including social care services. However, when introducing AI solutions in social care, it is important to also consider key issues such as transparency, consent, co-production, reliability, data protection and confidentiality.

This briefing highlights some of the ways AI is currently being used in the social care sector and looks at the issues which should be considered before deciding to use AI technology in the workplace.

What is Artificial Intelligence?

Artificial Intelligence (AI) refers to technology which is created to learn and solve problems in ways which used to require human intelligence. To do this, AI systems are trained on large amounts of data, which enables them to learn to identify patterns. This means they can then carry out many different tasks, including having ‘conversations’ that sound human-like.

Generative AI is a term used for technology which can create new, original content such as text, images, audio, and video, by learning patterns from large datasets.

AI chatbots are a type of generative AI that can create text in a way that mimics human conversation. This means that in response to prompts (or questions) the chatbot can create text replies on a specific topic or offer suggestions and advice.

(See also AI MythBusters – Digital Care Hub)

Potential Uses and Opportunities

Against a backdrop of rising demand for services and increasingly challenging budgets, local authority social care departments and care providers have begun to explore whether AI has the potential to help improve efficiency, reduce costs, enhance decision-making and provide better services. Some current examples are shared below. (Please note these are just a ‘snapshot’, as the pace and range of solutions being developed means it is not possible to cover everything here.)

Support with administrative tasks

Using AI to complete routine tasks offers practitioners the potential to reduce the time they have to spend on administration, meaning they can focus more on direct work with carers, adults with care and support needs, children and families.

Transcribing notes from assessments and meetings

AI can be used to record and then summarise assessments and other meetings. This is one of the main ways in which AI has been used so far in social care, and the area for which there is currently the most evidence on the benefits and challenges of its use.

Typically, this approach involves a social care practitioner using a specially designed AI tool to record a conversation or meeting on their smartphone. At the outset it is important that everyone present understands that the conversation is being recorded and gives their consent to this.

Assessments and meeting summaries can be generated very quickly after the recording has ended. However, it is important that staff members then thoroughly check the transcripts, to ensure they are accurate and appropriately record what the practitioner observed during the meeting. A worker’s professional judgement will always be an important part of analysing interactions with people and ensuring recommendations from meetings meet the person’s stated needs and preferred outcomes.

The main benefits of using these types of AI include being able to greatly reduce the time taken to write up assessments and then identify appropriate support – which can potentially reduce waiting times. Some workers also reported that by not having to focus on writing notes they were better able to observe people’s body language, surroundings and other non-verbal cues and could get to know the person better.

Not having a backlog of notes to write up can help practitioners better manage their workload, reduce stress and free up time for more direct work with adults and children. However, this form of AI is not suitable for use with everyone; for example, it may not work well for people with limited verbal communication, unclear speech or cognitive impairments.

Also, worker contributions will still have an important part to play, particularly in ensuring the person’s voice is central to the recording, and that case notes and assessments tell that individual person’s story. Everyone who uses social care services has a legal right to access their case records, so it is important that these records help them understand and make sense of decisions which have affected their lives.

AI technology can also be used to record and write up notes from supervision sessions, and potentially other meetings, though some research has found that transcripts from meetings involving a larger number of participants were not as accurate.

More information: AI tool improves direct work in adult social care despite accuracy concerns, practitioners report (Community Care). Findings from a pilot using ‘Magic Notes’ at Kingston Council can be found on the LGA website.

At the ‘front door’

AI chatbots can be used on the telephone and online to quickly answer queries and signpost people to appropriate resources. Where a chatbot cannot answer the person’s question, or the person would prefer to speak to a human, the call is transferred to a staff member.

Derby City Council use a digital assistant called Darcie for their local authority ‘front door’ (telephone and online). About half of the phone calls received are transferred to staff for follow-up in person. More information on the experience of introducing Darcie and its impact can be found on the Derby City Council website.

Benefits include people being able to access support 24 hours a day at a time that suits them, faster response times and less time spent on hold.

Matching need to available services

As support services in local areas can vary and change, it can be challenging for staff to keep up to date with what is currently available, causing delay and frustration if people are referred for services which are not suitable or no longer exist. Local authorities are exploring AI tools that can take information about a person’s needs and preferences and provide a list of suggested local services.

Resource allocation / scheduling

AI ‘scheduling’ tools have the potential to help organisations use staff time more effectively. By analysing factors such as location, urgency of need and availability of resources, these tools can help optimise schedules for home care visits so that carers spend less time on the road and more time supporting the people who need them.

Supporting unpaid carers

Chatbots which can carry out carer’s assessments and provide support for carers have also been developed. The benefits of using AI in this way include the carer being able to complete an assessment at their convenience, reducing waiting times for assessments and enabling rapid signposting to appropriate support.

AI tools can also support carers with administrative tasks such as managing medication reminders and making routine or follow-up appointments for the person they care for.

Supporting care providers

AI solutions and assistive technology (also called technology enabled care) can also help care providers deliver safer and more effective care to people with care and support needs. Examples include lights which help to prevent falls, sensors which can monitor a person’s movements at home and highlight changes in their routine, and tools which can assess levels of pain in non-verbal adults.

Data analysis

By quickly analysing large amounts of information, AI has the potential to help identify and predict trends and patterns in need, which can support the commissioning of relevant services and optimise the use of resources.

Risks and Considerations

AI is a tool that carries with it risks, as well as opportunities. Within social care and safeguarding in particular, human connection will always be important, and AI can only be part of the solution, not the whole solution.

The Oxford statement on the responsible use of generative AI in Adult Social Care states:

“Without careful oversight and transparency when this technology is being used, these risks could have a direct impact on peoples’ human rights and core issues such as safeguarding, data privacy, data security, equality, choice and control, and the quality of care”.

Reliability / accuracy of information

AI models can generate ‘hallucinations’, which is the term used when a system produces highly plausible but incorrect results.

In particular, free and easily available AI tools (such as ChatGPT and Gemini) should not be relied on without additional checks, as they can:

  • produce apparently credible responses, which may turn out to be entirely wrong;
  • quote sources that do not exist;
  • quote text from a genuine source that does not actually appear in that source.

This is because:

  • public AI chatbots do not provide answers from authoritative / respected databases. They generate new text using an algorithm (or process) based on the prompts they receive and the data they have been trained upon. This means the output which AI chatbots generate is what the model predicts to be the most likely combination of words (based on the documents and data that it holds as source information). This is not necessarily the most accurate answer;
  • as with any other information generally available on the internet, AI tools may be useful to easily find material you would recognise as correct but have not got to hand, but they are a poor way of conducting research to find new information you cannot verify. They may be best seen as a starting point for further research, rather than providing immediately correct facts;
  • the quality of any answers you receive will depend on how you engage with the relevant AI tool, including the prompts you enter and the quality of the underlying datasets. These may include misinformation (whether deliberate or not), selective data, or data that is not up to date.

Bias

AI tools generate responses based on the dataset they are trained upon. This means that any human biases or errors present in that data can be embedded or made worse by generative AI, including, for example, racial and gender stereotypes. BASW’s Statement on Social Work and Generative Artificial Intelligence (2025) states that:

‘the equalities impact of the deployment of any generative AI applications requires serious consideration by any local authorities and other organisations employing them’.

Data protection, confidentiality and consent

Issues of confidentiality and data protection are of crucial importance in the field of safeguarding and social care, given the sensitive and personal nature of the data being collected. Clear safeguards must therefore be in place to ensure AI tools used to process personal data comply with the UK General Data Protection Regulation (UK GDPR). Organisations should conduct data protection impact assessments before introducing AI products into their services.

In particular, personal or confidential information should not be shared with freely available AI chatbots because they could store and analyse that information to learn and improve their responses. This means that, in effect, any information shared with a public AI chatbot should be viewed as having been published to the world at large.

Ethics and potential breaches of professional obligations

BASW guidance advises that social workers who choose to use generative AI to create content should do so consciously and remember that they remain accountable for any decisions and recommendations made. Generated content needs to be checked and revised where necessary, and all actions and decisions should be justifiable and defensible.

As social work is a relationship-based and human-centred profession, it is important that services are transparent and that people receiving support know when this is being provided by AI and what this means for them and their data.

Think Local Act Personal have published guidance on Principles and priorities for the responsible use of Generative AI in care and support. They suggest the following overarching principles:

  1. Generative AI tools must be designed and used to deliver better social care, which gives people more choice and control over their lives.
  2. Where Generative AI tools are used in social care, their design and use must be aligned with the core principles of equity, dignity, justice and safety.
  3. Generative AI tools created for use in social care must be co-produced with people who will use them – people who draw on care and support, people who provide unpaid care for family and friends and care workers.
  4. Where Generative AI tools are used in adult social care, their design and use must be aligned with the legal frameworks governing social care, including the Care Act 2014 and the Mental Capacity Act 2005.
  5. There must be clear mechanisms for ensuring that these principles are adhered to, and for holding people to account when they are not.

Potential criminal offences and / or civil liability

The High Court has recently considered cases where unchecked AI-generated content led to inaccurate material being included in court submissions. Deliberately placing false material before the court could potentially constitute criminal offences such as perverting the course of justice (which carries a maximum sentence of life imprisonment) or contempt of court (Ayinde v Haringey [2025] EWHC 1383 (Admin)).

Rolling out the use of AI – What Would Help?

  • Need for guidance and regulation – BASW have called for UK governments to regulate the use of generative AI in public services, and to issue clear guidance on the appropriate and responsible use of generative AI in the workplace.
  • When new AI tools are being developed, co-production with social work staff and people who use services will help to ensure that any AI tools introduced support ethical, personalised care that enables people to live their lives their way.
  • Training in the use of AI will be key to building confidence among current staff and should also form part of social work education provided by colleges and universities. When any AI tools are introduced, local authorities should ensure staff are offered training on their use, including guidance on effective prompts and how to explain AI to people who use services.
  • Sharing of learning and experience from different local authorities and care providers will be key to an effective roll-out of AI tools and systems which can make a real difference to the delivery of social care services (see, for example, the LGA Artificial Intelligence Hub).
  • Evaluations of AI tools should be able to evidence benefits for people using services, not just organisations.

Conclusion

‘It’s crucial to emphasise that AI should not replace human connection in social care. The most effective AI solutions augment the capabilities of care professionals, enabling them to provide more compassionate, personalised and effective care’ (Skills for Care). 

AI can reduce the time social care practitioners spend on administrative tasks, improving response times and freeing up staff to focus on face-to-face work. However, human oversight is, and will remain, crucial, in particular to avoid:

  • unreliable, inaccurate and potentially biased results;
  • potential breaches of confidentiality and data protection legislation.

And to ensure that:

  • care and support is person-centred, relationship-based and provided with empathy and dignity; and
  • safety, confidence and public trust in the social work profession is not eroded.

Further Reading

Artificial Intelligence Hub (Local Government Association) – including artificial intelligence study bank and advice on buying AI responsibly.

Socitm – applying AI responsibly, including case studies and example terms of reference for an AI ethics board.

Generative AI & Social Work Practice Guidance – Initial guidance for practice and ethics (BASW) 

Artificial intelligence in social work (Social Work England)

Principles and priorities for the responsible use of Generative AI in care and support (Think Local Act Personal)

ADASS West Midlands – AI and Adult Social Care webinars (2025)