Artificial Intelligence (AI) technology continues to develop at a rapid pace, and its use is becoming increasingly widespread in both our homes and places of work, including social care services. However, when introducing AI solutions in social care, it is important to also consider key issues such as transparency, consent, co-production, reliability, data protection and confidentiality.
This briefing highlights some of the ways AI is currently being used in the social care sector, as well as looking at the issues which should be considered before making a decision to use AI technology in the workplace.
Artificial Intelligence (AI) refers to technology which is created to learn and solve problems in ways which used to require human intelligence. To do this, AI systems are trained using large amounts of data and information which enables them to learn how to identify patterns. This means they can then carry out many different tasks, including having ‘conversations’ that sound human-like.
Generative AI is a term used for technology which can create new, original content such as text, images, audio, and video, by learning patterns from large datasets.
AI chatbots are a type of generative AI that can create text in a way that mimics human conversation. This means that in response to prompts (or questions) the chatbot can create text replies on a specific topic or offer suggestions and advice.
(See also AI MythBusters – Digital Care Hub)
Against a backdrop of rising demand for services, increasing complexity of need and challenging budgets, local authority social care departments and care providers have begun to explore whether there is potential for AI to help to improve efficiency, reduce costs, enhance decision-making and provide better services. Some current examples are shared below. (Please note these are just a ‘snapshot’, as the pace and range of solutions being developed means it is not possible to cover everything here.)
Using AI to complete routine tasks offers practitioners the potential to reduce the time they have to spend on administration, meaning they can focus more on direct work with carers, adults with care and support needs, children and families.
AI can be used to record and then summarise assessments and other meetings. This is one of the main ways in which AI has been used so far in social care, and the one for which there is currently the most data on the benefits and challenges of its use.
Typically, this approach involves a social care practitioner using a specially designed AI tool to record a conversation or meeting on their smartphone. At the outset it is important that everyone present understands that the conversation is being recorded and gives their consent to this.
Assessments and meeting summaries can be generated very quickly after the recording has ended. However, it is important that staff members then thoroughly check the transcripts, to ensure they are accurate and appropriately record what the practitioner observed during the meeting. A worker’s professional judgement will always be an important part of analysing interactions with people and ensuring recommendations from meetings meet the person’s stated needs and preferred outcomes.
The main benefits of using these types of AI include being able to greatly reduce the time taken to write up assessments and then identify appropriate support – which can potentially reduce waiting times. Some workers also reported that by not having to focus on writing notes they were better able to observe people’s body language, surroundings and other non-verbal cues and could get to know the person better.
Not having a backlog of notes to write up can help practitioners better manage their workload, reduce stress and free up time for more direct work with adults and children. However, this form of AI is not suitable for everyone, such as people with limited verbal communication skills, unclear speech or cognitive impairments.
Also, worker contributions will still have an important part to play, particularly in ensuring the person’s voice is central to the recording, and that case notes and assessments tell that individual person’s story. Everyone who uses social care services has a legal right to access their case records, so it is important that those records help them understand and make sense of decisions which have affected their lives.
AI technology can also be used to record and write up notes from supervision sessions, and potentially other meetings, though some research has found that transcripts from meetings involving a larger number of participants were not as accurate.
More information: AI tool improves direct work in adult social care despite accuracy concerns, practitioners report (Community Care). Findings from a pilot using ‘Magic Notes’ in Kingston Council can be found on the LGA website.
AI chatbots can be used on the telephone and online to quickly answer queries and signpost people to appropriate resources. Where a chatbot cannot help answer the person’s question or if they would prefer to speak to a human, then calls are transferred to a staff member.
Derby City Council use a digital assistant called Darcie for their local authority ‘front door’ (telephone and online). About half of the phone calls received are transferred to staff for follow-up in person. More information on the experience of introducing Darcie and its impact can be found on the Derby City Council website.
Benefits include people being able to access support 24 hours a day, at a time to suit them, a faster response time and less time spent on hold.
As support services in local areas can vary and change, it can be challenging for staff to keep up to date with what is currently available, causing delay and frustration if people are referred for services which are not suitable or no longer exist. Local authorities are exploring AI tools that can take information about a person’s needs and preferences and provide a list of suggested local services.
AI ‘scheduling’ tools have the potential to help organisations use staff time more effectively. Analysing factors such as location, urgency of need and availability of resources, these tools can help optimise schedules for home care visits so that carers spend less time on the road and more time supporting the people who need them.
Chatbots which can carry out carer’s assessments and provide support for carers have also been developed. The benefits of using AI in this way include the carer being able to complete an assessment at their convenience, reducing waiting times for assessments and enabling rapid signposting to appropriate support.
AI tools can also support carers with admin tasks such as managing medication reminders and making routine or follow-up appointments for the person they care for.
AI solutions and assistive technology (also called technology enabled care) can also be used by care providers to provide safer and more effective care to people with care and support needs. Examples include lights to help prevent falls, and sensors which can monitor a person’s movements at home and highlight if there is a change in their routine. Tools which can assess levels of pain in non-verbal adults have also been developed.
By being able to quickly analyse a lot of information, AI has the potential to help identify and predict trends and patterns in need which can support the commissioning of relevant services and optimise the use of resources.
AI is a tool that carries with it risks, as well as opportunities. Within social care and safeguarding in particular, human connection will always be important, and AI can only be part of the solution, not the whole solution.
The Oxford statement on the responsible use of generative AI in Adult Social Care states:
“Without careful oversight and transparency when this technology is being used, these risks could have a direct impact on people’s human rights and core issues such as safeguarding, data privacy, data security, equality, choice and control, and the quality of care”.
AI models can generate ‘hallucinations’, which is the term used when a system produces highly plausible but incorrect results.
In particular, free and easily available AI tools (such as ChatGPT and Gemini) should not be relied on without additional checks.
AI tools generate responses based on the dataset they are trained upon. This means that any human biases or errors present in that data can be embedded or made worse by generative AI, including, for example, racial and gender stereotypes. BASW’s Statement on Social Work and Generative Artificial Intelligence (2025) states that:
‘the equalities impact of the deployment of any generative AI applications requires serious consideration by any local authorities and other organisations employing them’.
Issues of confidentiality and data protection are of crucial importance in the field of safeguarding and social care, given the sensitive and personal nature of the data being collected. Clear safeguards must therefore be in place to ensure AI tools used to process personal data comply with the UK General Data Protection Regulation (UK GDPR). Organisations should conduct data protection impact assessments before introducing AI products into their services.
In particular, personal or confidential information should not be shared with freely available AI chatbots because they could store and analyse that information to learn and improve their responses. This means that, in effect, any information shared with a public AI chatbot should be viewed as having been published to the world at large.
BASW guidance advises that social workers who choose to use generative AI to generate content should do so consciously and remember that they remain accountable for any decisions and recommendations made. Generated content needs to be checked, revised where necessary, and all actions and decisions should be justifiable and defensible.
As social work is a relationship-based and human-centred profession, it is important that services are transparent and that people receiving support know when this is being provided by AI and what this means for them and their data.
Think Local, Act Personal have published guidance on Principles and priorities for the responsible use of Generative AI in care and support, which sets out a number of overarching principles for organisations to follow.
The High Court has recently considered cases where unchecked AI-generated content led to inaccurate material being submitted in court proceedings. Deliberately placing false material before the court could potentially constitute criminal offences such as perverting the course of justice (which carries a maximum sentence of life imprisonment) or contempt of court (Ayinde v Haringey [2025] EWHC 1383 (Admin)).
‘It’s crucial to emphasise that AI should not replace human connection in social care. The most effective AI solutions augment the capabilities of care professionals, enabling them to provide more compassionate, personalised and effective care’ (Skills for Care).
AI can reduce the time social care practitioners spend on administrative tasks, improving response times and freeing up staff to focus on face-to-face work. However, human oversight is, and will remain, crucial.
Artificial Intelligence Hub (Local Government Association) – including an artificial intelligence study bank and advice on buying AI responsibly.
Socitm – applying AI responsibly; including case studies and example terms of reference for an AI ethics board.
Generative AI & Social Work Practice Guidance – Initial guidance for practice and ethics (BASW)
Artificial intelligence in social work (Social Work England)
ADASS West Midlands – AI and Adult Social Care webinars (2025)