Is AI ready for care? Let’s make sure social care is ready for AI

Digital Care Hub article on AI in Social Care. Written 10th April 2025.


AI tools like ChatGPT and other generative systems are already showing up in social care. Care workers are using them to draft care plans, reduce admin, and even support people emotionally through AI-powered chatbots. But while the tech is moving fast, our thinking about how we use it responsibly hasn't quite caught up. For example, it's tempting to use AI to develop a care plan – but doing so is very high risk and not recommended: it means putting sensitive data into a potentially open environment, and there's a strong chance the result won't be adequately tailored and personalised for the individual.

Because of this, individuals and organisations across the care community came together over the last year to create guidance and a Call to Action – a shared set of principles, expectations and next steps for how AI can (and should) be used in social care. This work was convened by the Institute for Ethics in AI, Digital Care Hub and Daniel Casson of Casson Consulting. It culminated in the first ever AI in Social Care Summit at the end of March, and it's generating incredible discussions – just check out social posts using #AIinSocialCare.

We’re asking everyone involved in social care – whether you draw on care, support someone, work in care, develop tech, fund services or shape policy – to get involved, endorse the Call to Action, and help shape what happens next.

How people are already using AI in care

We've heard care workers and providers talk about how they're using generative AI to write up notes – especially helpful for staff whose first language isn't English. People are using tools like ChatGPT to handle letters, emails and paperwork. Others are using it to come up with creative activity ideas, or to prep for conversations with professionals by checking symptoms or conditions. Some are exploring wellbeing support through chatbots too.

These tools can be empowering – but they're also raising valid concerns. People worry about whether the outputs are accurate, whether they're breaching data protection rules, and whether they're making the right judgement call when using AI to shape someone's support.

Why it’s time to act

Generative AI can produce care plans that sound convincing but include totally made-up or misleading suggestions – known as “hallucinations”. It doesn’t understand context in the way a human does. It may repeat biases from its training data. And without strong data protection controls, it could lead to privacy risks or information being mishandled. That’s a big issue in a sector like social care where relationships, trust and rights matter deeply.

What do we mean by ‘responsible use’?

Here’s the simple definition we co-produced:

“The responsible use of (generative) AI in social care means that the use of AI systems supports – and does not undermine, harm or unfairly breach – fundamental values of care, including human rights, independence, choice and control, dignity, equality and wellbeing.”

So what are we asking people to do?

1. If you're part of the care community – read and use the guidance.

We’ve put together practical “I” and “We” statements to help people think about how AI is being used. If you’re someone who draws on care, a care worker or an unpaid carer, use the “I” statements to check whether AI tools are being used in a way that feels safe, supportive and respectful. If you’re a provider, supplier or developer, use the “We” statements to reflect on whether you’re doing things ethically, transparently and with care values in mind.

2. Let’s keep working together.

One of the biggest lessons from this project is that we’re stronger when we work across roles and listen to each other. Let’s keep that going. Whether you’re developing tech, providing care, commissioning services or drawing on support, there’s value in sharing what we’re learning and shaping the future of AI in care together.

3. Governments and regulators – we need clear rules.

Right now, there’s a gap when it comes to regulation of AI in social care. We’re calling on government to work with regulators to close that gap. That means having enforceable guidelines, better accountability, and making sure all of this aligns with the Care Act, mental capacity law and existing rights-based frameworks. There needs to be a designated body responsible for leading this.

4. Policymakers – invest in the right infrastructure.

We need systems and funding that support inclusive innovation. That means helping small care providers and local authorities access ethical tech, supporting co-production at every stage, and making sure testing and development happens in ways that are grounded in real life care settings – not just labs.

5. Funders and system leaders – let’s rethink how innovation works.

Too often, the rewards of care technology go to big companies, not to the communities whose data helped shape them. We’re calling for new business models that give more power and value back to people and places. Let’s back innovation that’s ethical, inclusive and supports growth across the whole care ecosystem.

6. And to the Department of Health and Social Care – your upcoming tech standards need to be rooted in care values.

We know DHSC is working on new national standards for care tech. This is your opportunity to embed ethics, human rights, the wellbeing principle, and real-life learning from people in the sector. Please make sure those standards reflect what matters most in care – not just what’s technically possible.

Tech suppliers – sign the pledge

We’ve also launched a Pledge for Tech Suppliers. It’s a public commitment to co-production, transparency and ethical design. If you're developing or supplying AI tools for social care, sign the pledge. If you’re commissioning or buying tech – ask your suppliers if they have.

You can find it all here: www.digitalcarehub.co.uk/AIinSocialCare

What’s next? The AI in Social Care Alliance

We’re launching a new Alliance to keep this work going. It’ll be a space for shared learning, guidance, and sector-led leadership on AI in care. Everyone is welcome – whether you’re just starting to explore AI or you’re already rolling out new tools.

What you can do now

Head to www.digitalcarehub.co.uk/AIinSocialCare to:

  • Read the guidance

  • Endorse the Call to Action

  • Sign or share the Tech Suppliers' Pledge

  • Register your interest in the Alliance

And get involved in the conversation under #AIinSocialCare.

Thanks to everyone who helped make this happen, especially: Caroline Emmer De Albuquerque Green PhD, Daniel Casson, Ian McCreath FRSA, Think Local Act Personal (TLAP), Donald Macaskill, Karolina Gerlich FRSA, The Care Workers' Charity, Scottish Care, National Care Forum and techUK.

Let’s build a future where AI strengthens care – not just systems. Let’s do it together.