When did AI really start to be something that was on the agenda for you professionally and/or personally?
I think you probably do consume AI in a traditional sense, where AI is not just generative AI: it's optical character recognition (OCR), it's robotic process automation (RPA), it's behavioural science. You start to consume it without knowing you're consuming it.
So the experience I had was really in a professional sense as opposed to a personal one. We had started to use OCR technology to look at invoices coming in and match them to purchase orders. We'd been using it for six years or so without knowing that it actually sits under the banner of AI.
Obviously, when AI started being talked about more over the last two to three years, you realise you've been using it for quite some time.
We then moved into using RPA, and there was a particular use case for which we were recognised at the Housing Technology Awards. We, as a sector, needed to process around 10,000 Universal Credit claims, which had previously been done manually. We created a UC bot using RPA technology on the Microsoft Azure platform, which automated that entire process. It avoided about £800,000 in overtime costs and ran with 97% accuracy.
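Purely as an illustration of the pattern at work here (the real UC bot was built with RPA tooling on Microsoft Azure; the field names and validation rules below are hypothetical), the core of this kind of automation is a loop that auto-processes clean claims and routes exceptions to a human:

```python
# Hypothetical sketch of the claim-processing pattern described above.
# The real UC bot was built with RPA tooling on Microsoft Azure; the
# field names and validation rules here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    tenant_ref: str
    amount: float

def is_valid(claim: Claim) -> bool:
    """Stand-in for the business rules the bot would check on each claim."""
    return bool(claim.tenant_ref) and 0 < claim.amount < 2000

def post_to_rent_account(claim: Claim) -> None:
    """Placeholder for the back-office update the automation performs."""
    print(f"Posted claim {claim.claim_id} to account {claim.tenant_ref}")

def process_claims(claims: list[Claim]) -> list[Claim]:
    """Auto-process clean claims; return the exceptions for human review."""
    exceptions = []
    for claim in claims:
        if is_valid(claim):
            post_to_rent_account(claim)
        else:
            exceptions.append(claim)
    return exceptions

if __name__ == "__main__":
    review_queue = process_claims([
        Claim("UC-001", "T-123", 540.0),
        Claim("UC-002", "", 540.0),  # missing tenant ref, goes to a human
    ])
    print(f"{len(review_queue)} claim(s) need manual review")
```

The point of the pattern is the controlled boundary: routine claims are automated, while anything the rules can't vouch for still gets human eyes.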
We were the first to do it in our sector, and we've since collaborated and shared our learnings with another housing association. As for the really exciting stuff, generative AI and behavioural science, our experience has been focused on Caseload Manager with Voicescape, using its predictive arrears capability. It's a business-driven process rather than me driving it, which is fantastic.
Those are the three areas that I would say have really taken off at Thirteen Group over the last five years.
It seems like AI is already quite important at Thirteen Group. Do you still think you're at the early stages of embracing its full potential? Is this going to grow, and will there be more applications where you use AI over the next few years?
I think there are a lot of foundational elements that we need to get right before we can move on to the even more exciting stuff. With true generative AI (ChatGPT, Copilot and so on), we need to get the basics right around gaining data confidence. We need to really understand and put in place a data governance framework and strong data ownership.
At a recent AI event I attended, the attendees agreed that when you combine bad data with AI, you get bad results, and that can do more damage than good. We need to get to the point where the business understands that if we get the basics right around data confidence and governance, then the outcomes can be almost limitless.
As a sector overall, do you think AI has been utilised as much as it can be? If not, what's holding it back?
I think we're using it as much as we're comfortable using it.
So, when we talk about RPA, OCR technology and Caseload Manager, there's a lot of work and effort that needs to go into sanitising that data and building that data confidence. But we need to do that in a broader sense, not just across our customer data but also our property data and colleague data, if we want to go even further.
I think the reason we can't yet let generative AI loose with the likes of Copilot and GPT-4 is purely those foundations that we need to sort first. We've done basic things, like creating an app where you put in a job title and it generates a job spec and role profile, and it's scarily good. So we're using it in a really controlled way, where it can't do too much damage. We have milestones over the next couple of years to move further towards generative AI, but we need to get the foundations right first.
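As a purely illustrative sketch of how such an app might be wired up, assuming the OpenAI Python client (the model name, prompt wording and function below are assumptions, not Thirteen Group's actual implementation):

```python
# Illustrative only: a minimal job-spec generator in the spirit of the
# app described above. Assumes the OpenAI Python client; the model name,
# prompts and function are assumptions, not Thirteen Group's actual build.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_job_spec(job_title: str) -> str:
    """Ask the model for a draft job spec; a human reviews before use."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You write draft job specifications for a UK housing association."},
            {"role": "user",
             "content": f"Write a draft job spec and role profile for: {job_title}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_job_spec("Neighbourhood Housing Officer"))
```

Constraining the tool to a single, low-stakes task like this, with a human reviewing every output, is what keeps the use controlled.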
As a sector, we're using it as much as we can, but until we collaborate and come together to agree what good looks like from a data confidence point of view, I don't think we can really let loose with it yet. I'm not particularly risk-averse, but I still don't have the confidence in our data yet to say: "yeah, just press the button and go ahead and do your work now." The sector isn't there yet.
So, you talk about the collaboration that's needed; who leads on that?
I think collaboration across the social housing sector is probably the best I've seen in the ten years I've worked in the sector, but it's still siloed. We’ll see certain housing associations really nail data governance and data confidence and they're willing to share that. But that's the easy part.
You can share a document you’ve created at one organisation outlining what good looks like within that organisation, but how do you make that fit another organisation? You can't necessarily take one approach and carbon copy it at another organisation, because every housing association has a different culture with different operating geographies and different challenges.
So, although we are collaborating more and more and sharing knowledge, the challenge is how do you apply that to your operating context within your housing association. There's still a lot more work to do.
There's been talk of a national database of customer and property details that we all contribute to, but who drives and owns that? Is it a task for the regulator? I'm not sure.
With your specific job role, involving IT, cyber and data security, would you say AI is a central component of your role today, and how much more central will it be in the future?
We've seen some really exciting things with Copilot for cybersecurity, for example combating threats more proactively and really reducing the stress on the team to manually intercept attacks. That's the bit where I'm really excited from an IT security standpoint - seeing how that can reduce a lot of the manual intervention that the team has to do at the moment.
What are the main risks involved with AI in social housing and how can we overcome those?
We have to be really careful around what data we're using, how we're using it and what buy-in we need from the customer to use it. The last thing we want to do is create a GDPR nightmare and get reported to the ICO.
That's been a particular bridge we've not crossed yet, but we have seen some housing associations tackle that well and put formal data ethics frameworks in place. That's something that we need to follow suit with.
That doesn't just mean creating a document; it means really getting buy-in from customers and being really clear about how we're using their data in the context of AI and what outcomes we expect to get from it. A lot of it needs to focus on demonstrating to customers the benefits they will derive from us using their data with AI.
One use case for AI has been helping us better understand and support silent customers. As organisations, we already direct resources towards customers who ring us, complain and make a lot of noise. But what about the ones who don't ring us? How do we know that we're really keeping in touch with them and supporting them effectively?
There have been some good strides made in using AI to surface those silent customers and proactively get in touch with them.
But the opposite side of all this, from a risk point of view, is that a provider may have so much data that, when they overlay AI on it to tell them where their issues are, they create a new challenge: they don't have the capacity to cope with the new insights that AI is revealing to them.
That's been a bit of a tricky conundrum as well: have we got the capacity and the money to deal with whatever AI identifies for us?
Do you think there's a line that AI shouldn't cross, or things that AI shouldn't be employed to do in this sector?
In some ways, when it comes to human engagement, technology can actually make an experience worse, and this is where there needs to be a boundary for AI.
In terms of social housing, I’m specifically talking about customer experience and the importance of human interaction.
The majority of our customer demographic doesn't want to talk to a bot when they need support or want to get a task completed. We can certainly interweave some automation into processes, but ultimately you can't get rid of that human touch.
This is particularly the case with some of the complex cases we deal with at Thirteen Group, around mental ill health and deprivation, for example. Indeed, across the board, many of the complex problems that social housing providers need to deal with just can't be solved through technology and AI alone, in my opinion.
We can use AI to be more proactive and identify issues and prioritise who we talk to and what about, but you can't replace that kind of real human experience with AI.
Caseload Manager is the perfect example of getting the balance right in how we're using it: it helps us prioritise and understand potential issues, but when it comes to interventions, the customer speaks to a human being.
Do you think there is enough guidance and regulation around AI in the social housing sector?
Fundamentally no.
There have been some really good use cases, but no one has really owned the conversation around things like how to create a data ethics framework. No one is asking questions like: how do you get buy-in from the customer? How do you embed that into the culture of the organisation? On all of those things, there's not really much guidance.
When we talk about regulation, at the moment it's a little bit like the Wild West. Certainly, from what I've seen in the RSH's recent Sector Risk Profile, there's no mention of AI, which is one of the foundations that we need to start from. There's certainly not enough in there for me about how to create a framework and embed it into an organisation.
In an ideal world, who would you like to see take charge of that and what would taking charge look like?
There are a lot of thought leaders in AI; it's not new in that sense of the word. AI has been talked about for many years, going back to at least the Second World War, and there are plenty of thought leaders in the Disruptive Innovators Network.
A lot of really good vendors are also taking a lead and sharing knowledge; the likes of Microsoft are using AI to great effect, for example. Voicescape is probably the best example I've seen in social housing, and we've been the biggest contributor to their Caseload Manager; we effectively developed it alongside them, which was amazing.
However, in terms of who takes the lead within the sector, I don't think it's one size fits all, because social housing providers all operate very differently, owing to the unique geographies we each operate in and the different challenges within each of those geographies.
At the same time, I wouldn't say that the regulator needs to be leading it. I think that they could do more; it would be good if they fleshed out their Sector Risk Profile to have some high-level guidance on AI. But to set an entire framework, I think that’s probably a collaborative effort between the likes of me and my professional peers and equivalents across all social housing providers.
We need to come together and explore what good looks like – we're getting better, but we're not quite there yet.
In terms of AI governance principles and frameworks within Thirteen Group, you've mentioned that you don't formally have those in place yet. Is that right?
Going back to the point about strong foundations, we need to nail them first. When I say that, I mean data governance, data ethics frameworks, all of that.
We've seen some housing associations create AI strategies and frameworks which set out where they see their organisation going with AI over the next two, three, four, five and ten years. It can be a very light document; it doesn't have to be War and Peace.
That’s where I’d like to get Thirteen Group to. But I’m also aware that it’s very easy to create a document; the challenge is how do you get the necessary buy-in for it from other key stakeholders within a social housing provider. So, for example, the Director of Customer Experience, Repairs Directors, the Executive Directors. They need to be contributing to and believing in this strategy and framework.
At Thirteen Group, I would like to move to Objectives and Key Results (OKRs). That means instead of thinking ‘we need great data’, the focus should be on ‘why do we need great data?’, ‘how are we going to use it?’ and ‘what benefit are we going to get from having fantastic data?’
We're trying to sell it to these stakeholders and explain to them the importance of getting their data in order. But we recognise that we need to reframe the tone of the language and shape it more around the outcomes that matter most to them. We need to explain how strong data can allow them to use AI to spend less on responsive repairs, for example, or to bring up satisfaction scores.
Last question: for any social housing decision-makers taking their first steps into AI, what advice would you give them?
I would say it's about starting with the basics. It gets daunting when you talk about Copilot and generative AI; for me, that's more of an end goal.
So, start lightly in this space.
Start with RPA, or start light with OCR, for example. In parallel, work on data ownership, data governance and data confidence. If you take that approach, rather than going all in with the likes of Copilot straight away, you'll be much more likely to win the hearts and minds of the organisation towards accepting AI.
Another thing I've had to learn over the last few years is that you've got to speak to AI like it's a human being, not a bot or a computer. For example, rather than typing a few keywords, ask a full question with the context you'd give a colleague. This is a skill in itself that most people will need to learn and develop, but if you speak to it like a human, you actually get more out of it.