Artificial intelligence (AI) is no longer a distant innovation—it’s already beginning to shape how we deliver, manage, and personalize care. From back-office automation to smart monitoring systems, AI offers new ways to support staff, improve resident outcomes, and extend capacity. But how do we adopt these tools responsibly, without losing the human relationships and values at the heart of social care?
That was the focus of a recent global roundtable hosted by Scottish Care, in partnership with the Global Ageing Network, the National Care Forum, and the Ontario Long Term Care Association. Panelists and participants explored one key question: How do we maximize the benefits of AI while protecting privacy, choice, and the rights of those we serve?
Defining ‘Responsible AI’ in Social Care
One resource discussed during the event was a new White Paper from Oxford University’s Institute for Ethics in AI, The Responsible Use of Generative AI in Social Care. Donald Macaskill, CEO of Scottish Care, was among the wide group of stakeholders who contributed to the White Paper. He describes it as an articulation of the boundaries that need to be considered in the use of AI. For example, it emphasizes identifying the tasks AI can perform without jeopardizing the human connection at the heart of social care. “At the heart of the shared understanding of the ‘responsible use of generative AI’ should be a definition of ‘care’ that recognises the central role of human rights and trusting human relationships between people drawing on care and people providing care, as well as relationships between other groups in social care, including family carers, social workers, commissioners, regulators and inspectors of services. Its use should centre on values underlying high quality care, such as autonomy, person-centredness, and wellbeing.”
Data Governance Matters: Building a Foundation of Trust
Roxana Sultan, Chief Data Officer and Vice President of Health at the Vector Institute in Canada, discussed the opportunity that AI in health and long-term care presents to unlock rich data that can improve efficiency and inform clinical practice. Because of that potential, systems and policies around data governance are essential. She referenced the Five Safes Framework adopted by the UK as the underpinning of data governance:
- Safe data: data is treated to protect any confidentiality concerns.
- Safe projects: research projects are approved by data owners for the public good.
- Safe people: researchers are trained and authorised to use data safely.
- Safe settings: a SecureLab environment prevents unauthorised use.
- Safe outputs: screened and approved outputs that are non-disclosive.
Only once an AI tool is shown to be safe by these measures is it appropriate to adopt it into workflows and infrastructure.
Empowering People Through AI Literacy
From the provider perspective, Travis Gleinig, Chief Innovation Officer for United Methodist Homes in New Jersey, emphasized that AI and other technology tools need to be viewed through a human lens, focused on how they can empower residents, clients, and staff. To be effective, organizations must build muscle around AI literacy through training. Training helps people use AI effectively, evaluate its safety and trustworthiness, and understand its limitations and potential biases. And because AI is evolving so rapidly, this cannot be done just once: training and capacity building must be ongoing. Travis reminded us that AI is a tool, not a decision-maker, so it is important to manage expectations about what it can and cannot do well. As with the introduction of any technology, building trust with users requires clearly communicating its value, whether to an individual or an organization.
Katie Smith Sloan concludes that a critical element of an ethical framework is ensuring human rights are protected, and that human-centered design is key to doing so. The more we involve consumers, caregivers, families, and staff in collaborative ideation, design, and implementation, the more we ensure rights are protected and the results are valuable. With any technology, the human needs to be at the center. As the global conversation about AI becomes more robust, an ethical lens, such as that developed by the team at Oxford University, becomes even more essential to realizing the benefits of AI while mitigating its risks.

Katie Smith Sloan | Executive Director | Global Ageing Network