When did AI really start to be something that was on the agenda for you?
It really came on the agenda for me with the launch of Alexa, Google Assistant and things like that in the home. Once I saw those, I started to look for opportunities in the workplace.
In my work environment, it's probably the last 18 months where I've started doing some serious work in artificial intelligence. I think the technology is mature enough now to see some real usage and deliver some really good customer outcomes. Over the next five years I think there's going to be exponential growth across every sector and every industry, and people are going to start really seeing the benefits of it.
So it’s been a steady development of things coming into your home and you seeing the potential and then seeing some of these commercial tools coming in that really do have customer benefit.
How has your attitude changed towards AI over time?
It has changed and that's because of the professionalisation of AI tools.
When many of these AI tools first came out, it seemed a bit like the Wild West. You worried about your data and where it was going to be hosted, and you obviously had to consider GDPR and all the privacy and ethical aspects as well.
Now you're able to buy enterprise-class AI tools: there's enterprise-class ChatGPT, Copilot is there, and Google has just launched its own. That provides a level of reassurance that your data is secure and isn't being used for training or anything like that.
That's been the turning point for me in terms of having a level of trust that my commercial data is relatively safe, and knowing I can start sharing it in the AI space without worrying that it's going to end up in different areas of the world or be used for purposes I don't know about.
That's particularly important in the housing sector, where we're dealing with some of the most vulnerable people in the country. You have to be very thoughtful and take moral and ethical responsibility for keeping customers' data safe.
How are you using AI at Platform Group today? And are there any other applications you're using it for at the minute, or planning to use it for in the near future?
There are a couple of things that we've done with it.
The first one is we used it for silent tenants. It's safe to say that the responsibilities of a Housing Association have expanded over the last 10 years as social services have disappeared at the margins, and in turn the core service that we provide has expanded to be able to offer that support.
So what we looked at was how we can use AI to identify potential or current silent customers and prioritise them for tenancy health checks. We took a number of variables, for example whether they're capped on gas or whether they're paying the rent through Universal Credit, and brought those together to create a model that gives a percentage probability that they are a silent tenant. If they're towards the top of the list, we then prioritise them for our tenant support visits.
That's a real-world outcome: finding people who didn't want to engage with us but, because they'd been flagged up on the system, we were still able to reach them and give them the support they need.
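To make that concrete, here is a minimal sketch of this kind of prioritisation model. The field names, weights and cap are entirely hypothetical; a real model would learn its weights from labelled historical data rather than have them hand-set:

```python
from dataclasses import dataclass

# Hypothetical signals; the real model would combine many more variables.
@dataclass
class TenantSignals:
    tenant_id: str
    gas_capped: bool           # gas prepayment meter capped
    on_universal_credit: bool  # rent paid through Universal Credit
    months_no_contact: int     # months since last two-way contact

# Illustrative hand-set weights; in practice these would be learned
# from labelled historical data.
WEIGHTS = {"gas_capped": 0.35, "universal_credit": 0.20, "per_month_silent": 0.05}

def silent_tenant_score(t: TenantSignals) -> float:
    """Return a 0-100 score estimating how likely a tenancy is 'silent'."""
    score = WEIGHTS["gas_capped"] * t.gas_capped
    score += WEIGHTS["universal_credit"] * t.on_universal_credit
    score += min(WEIGHTS["per_month_silent"] * t.months_no_contact, 0.45)
    return round(100 * min(score, 1.0), 1)

def prioritise_for_visits(tenants: list[TenantSignals], top_n: int = 50) -> list[TenantSignals]:
    """Rank tenancies so officers visit the highest-scoring first."""
    return sorted(tenants, key=silent_tenant_score, reverse=True)[:top_n]
```

A weighted sum is just the simplest stand-in; the percentage probability described above could equally come from a logistic regression or similar trained classifier.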
We've also used deep learning to look at potential damp and mould issues. That's another piece of work we're doing now: looking at the makeup of our properties, whether that's the family makeup or the location, and trying to find the propensity for damp and mould cases so we can proactively intervene.
Like many in the sector, we’ve had a big influx of damp and mould cases and most organisations are very reactive to it. We thought if we could move into a proactive space we could prevent it from becoming an issue in the first place. That could be either through repair works or through customer education.
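As an illustration only, a simple classifier can stand in for the deep learning approach described here; the features below are hypothetical and real property data would be far richer:

```python
# Feature names are hypothetical; a random forest stands in here for the
# deep learning model described above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Historical cases: one row per property, with a known damp/mould outcome.
history = pd.DataFrame({
    "build_year":    [1965, 1998, 1972, 2010, 1958, 2001],
    "occupants":     [4, 2, 5, 1, 6, 2],
    "has_extractor": [0, 1, 0, 1, 0, 1],
    "prior_repairs": [3, 0, 2, 0, 4, 1],
    "damp_reported": [1, 0, 1, 0, 1, 0],  # label: did a case occur?
})

X = history.drop(columns="damp_reported")
y = history["damp_reported"]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score the current stock and surface the highest-propensity homes for a
# proactive survey, repair works, or customer education.
current_stock = X.copy()  # stand-in for the live property dataset
current_stock["propensity"] = model.predict_proba(X)[:, 1]
print(current_stock.sort_values("propensity", ascending=False).head())
```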
We’ve also introduced enterprise ChatGPT for CD managers and exec teams. We’ve started to educate them on how to use it for document analysis and things like that. So we've started deploying that now with guidelines in place.
How big do you think the opportunity is for AI in social housing?
The opportunity for AI in social housing is huge.
I think it's massive across every sector, but in housing we've got this bottleneck of resources: our income is being restricted while our responsibilities are growing, so we're trying to do more with the same resource or, in some cases, less.
So I see the real use case for AI as being able to deal with those day-to-day transactional enquiries: whether that's a bot answering queries for customers, prioritisation for housing officers, or basic decision-making using the information we hold, such as whether someone can have a pet or not. AI can take away that front layer, which is probably 80% of the transactional enquiries and pieces of work that we have to do. That then allows our people to focus on the 20% that we have to deal with on a face-to-face basis.
One thing that technology is rubbish at is creating and maintaining relationships with people and customers. I think the real strength we've got in our sector is that real customer focus: a strong relationship with customers, an understanding of their vulnerabilities and needs, and being able to work with them. So AI frees up time to deal with those more vulnerable customers.
There's obviously a load of opportunities and benefits with AI. With that in mind, do you think AI is being used in the sector as much as it could be? And if not, what do you think is holding back that use of AI?
There are some really good cases out there, so I think some people are embracing it.
I did a workshop at Housing Digital and asked how many people were actually using AI, how many were thinking about using it, and how many were not. And actually, the majority of people were dabbling with it; very few people weren't using it at all.
There's definitely a financial aspect, but I also think there's a fear aspect: around what's going to happen with our data, around jobs, and around the ethical considerations. What I think we're missing in the sector are defined guidelines around what the best use cases are and what guardrails you need to put in place to protect customers and the business. That's the missing element. A lot of people are holding off, waiting to see others in the sector lead and start producing those guardrails.
Who should lead on that? Does it come from cross-sector collaboration, or is it something the regulator could help set the boundaries for?
I think cross-sector collaboration is going to be a key point. We need to work together to define what good AI looks like in terms of ethics and applications. We can do that.
I've got a piece of work going through my governance process at the moment, which is an AI position paper setting out ethical frameworks and guidelines. I've had a few requests from other CIOs to share that piece of work.
But I also think there should be something that's led by a sector-wide body. Whether it's regulator-led or led by another body, it would be good to have something central that everybody can contribute to and sign up to. At the moment there is no consistent, standardised, accepted approach to AI; people are doing what they like, and I think we just need clear guidelines as a sector so we're all working to the same framework.
Do you mind just talking me through what you think the biggest risks are in reality and the ways you can overcome them?
The first one to pick up on is data security and data privacy.
You need to be really careful about what tools you're using and where you're using them. It's important that anyone in the sector does a proper privacy impact assessment before they start sharing their data. That should be a standard process for everybody: having your cyber people look at the cyber considerations, having your information governance people look at the information governance aspects, and working it all through.
There are some very basic considerations. You need to look at whether the tool is UK-based and make sure the data is not being accessed by other parties or used to train anyone else's models.
The other key risk is biasing your data sets. AI is only as good as the information it gets. To take an example: traditionally, software development has been a very male-dominated industry. So if you pick out the best 10 CVs from the last 20 years of what a software development professional looked like and feed those into an AI, it's probably going to prioritise a male candidate, because that's the data it's got. What you actually need to do is understand how the AI works and take that bias out of your data set.
So you've got to ensure you have a diverse data set in the first place to establish the understanding of what good looks like; then the AI won't have unconscious bias when it starts outputting recommendations.
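To sketch one common mitigation with entirely made-up data: rebalancing a skewed training set before a model learns from it. Reweighting samples or auditing model outputs are alternative techniques:

```python
# Entirely made-up data illustrating a skewed historical pool and one
# mitigation: downsampling the over-represented group before training.
import pandas as pd

cvs = pd.DataFrame({
    "gender": ["M"] * 16 + ["F"] * 4,   # 20 years of skewed hiring history
    "years_experience": list(range(20)),
})

# Downsample every group to the size of the smallest group.
group_size = cvs["gender"].value_counts().min()
balanced = pd.concat(
    group.sample(group_size, random_state=0)
    for _, group in cvs.groupby("gender")
)

print(cvs["gender"].value_counts())       # before: M 16, F 4
print(balanced["gender"].value_counts())  # after:  M 4,  F 4
```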
The law of unintended data capture is a risk. When you use artificial intelligence, by its very nature you're going to collect data that's outside your original scope. It's very easy to capture lots of things, but we don't want to use all of those things because it might be unethical to do so. For example, you're going to be able to capture things about that person, that customer, that home that you never intended to.
So you need to have very clear and transparent lines about what data you're going to capture, what you're going to use, and what you're going to disregard.
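Those lines can also be enforced in code. A minimal sketch, with hypothetical field names, of an allow-list that discards out-of-scope data at the point of capture:

```python
# Hypothetical field names. Fields not on the declared allow-list are
# discarded at ingestion, so out-of-scope data is never stored at all.
ALLOWED_FIELDS = {"property_id", "room_humidity", "reported_issue"}

def scope_capture(raw_record: dict) -> dict:
    """Keep only the fields we have declared we will capture and use."""
    return {k: v for k, v in raw_record.items() if k in ALLOWED_FIELDS}

raw = {
    "property_id": "P-1042",
    "room_humidity": 71,
    "reported_issue": "condensation",
    "occupant_health_note": "asthma",  # unintended capture: dropped, not kept
}
print(scope_capture(raw))
# {'property_id': 'P-1042', 'room_humidity': 71, 'reported_issue': 'condensation'}
```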
Artificial intelligence also gives you answers to questions that you've not even thought about. There are things out there that perhaps you didn't even think were an issue, but the data actually points to them. That's going to be the power of it: you've got this completely uncompromised piece of technology looking across your data and identifying issues you never even thought of. It's going to be asking those unanswered, or unasked, questions.
Do you think there needs to be a line drawn in the sand in terms of where AI just shouldn't be used in social housing?
There should be a human decision-maker at the end of every AI recommendation. AI can be very useful for producing outputs and looking across any data, but I think you have to be very careful with any decisions you make on the back of that, and make sure you've got that augmentation between the AI and human-centric decision-making. That's where the sweet spot is.
I'd be very reluctant to say, let artificial intelligence go its own way without any human intervention.
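A minimal sketch of that augmentation, with hypothetical types, might separate the AI's proposal from the human decision like this:

```python
# Hypothetical types illustrating the pattern: the AI only ever proposes,
# and a named human records the final decision before anything is enacted.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    subject_id: str       # e.g. a tenancy or property reference
    proposed_action: str  # e.g. "schedule a damp survey"
    rationale: str        # why the model flagged this case

@dataclass
class Decision:
    recommendation: AIRecommendation
    approved: bool
    decided_by: str       # the accountable human, always recorded
    note: Optional[str] = None

def enact(decision: Decision) -> None:
    """Only a human-approved decision triggers real-world action."""
    action = decision.recommendation.proposed_action
    if decision.approved:
        print(f"{decision.decided_by} approved: {action}")
    else:
        print(f"{decision.decided_by} rejected: {action}")
```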
In terms of AI governance, you've mentioned that you've created documented, formalised guidelines. Is that correct? And is that something you would recommend all social housing providers have in place?
I think it's critical to have guidelines because I've heard horror stories already about people using the free version of ChatGPT and uploading documentation to that.
You need to have a line in the sand for an organisation. It's not about stifling innovation; it's really about letting people understand the risk, the safe way of using AI, and what they can legitimately do in their roles.
So, for example, is it ethical to use ChatGPT to produce an outline of a report if producing reports is part of your job? Arguably, yes, it is ethical, because you've got another tool that allows you to do so, as long as you apply a common-sense check, cross-checking the output and making sure it actually aligns with your policies and procedures.
It's just about being really clear about where those divisions lie. However, it shouldn't be technology driven. That's key as well.
I'm a CIO, but Platform Group's AI guidance paper isn't produced by the technology department alone; HR and governance people, for example, are involved. Everybody across the business has an input into that paper because it shouldn't be technology driven. It's about business outcomes, so the whole business has a part to play.
Finally, for anyone in the social housing sector taking their first steps with AI, a few steps behind where you are, what pieces of advice would you give them?
Talk to someone who has already been there. Understand what pitfalls they've fallen into, what's been really successful, the tools they've used and what's worked really well.
One of the beauties of working in housing is you can pick up the phone and you can speak to somebody and you can share things. It’s unique and it's really positive and I think it’s important to utilise that culture in the housing sector as much as possible. Use it to share and help each other understand that best practice.
Anything else you would like to add before we wrap?
Artificial intelligence has got massive opportunities for the sector and for the country as a whole. As a sector, we should be embracing those opportunities and putting resources behind understanding those opportunities.
But we should also understand the risks and have a clear conversation about what those risks are, both for the company and for our customers. Guidelines are important to guide those conversations, but they should be limited in scope: established in the frame of 'this is innovation, and innovation should be good'.
If this is something you would like to read more about, download the full FREE whitepaper here.
Get in touch
Want to see how Voicescape technology could help your housing association? Get in touch