
    When did AI really start to be something that was on the agenda for you?

    Our engagement with AI really began around 18 months ago, which aligns to a significant degree with its broader mainstream recognition. That period was notable for the widespread adoption of AI platforms like ChatGPT and Bing Chat, which highlighted the technology's potential beyond niche applications. 

    I was particularly interested in the context of education, where institutions were considering how to regrade and remark degree theses and projects because they had accepted that AI was simply going to be an accepted part of doing the legwork from now on. They shifted focus towards evaluating the intellectual rigour rather than just the procedural outputs of student work. This sparked my interest, as it demonstrated AI’s capacity to alleviate manual burden and allow human resources to be directed towards more valuable, thoughtful endeavours. 

    In response, Yorkshire Housing has initiated targeted experiments with AI technologies over the past year. We’ve been aiming to understand where AI can best serve our objectives without rushing its implementation. The continuous and rapid advancement of AI technology has been both challenging and exhilarating. It necessitates a culture of constant learning and adaptability; tools and techniques that seemed rudimentary six months ago have quickly evolved to become highly effective solutions. 

    The biggest challenge has been just keeping up with the pace of change and improvement! It’s unparalleled in my experience. 

    It means we need a proactive stance, as waiting on the sidelines for a ready-made solution and an agreed industry standard is not an option. The fast-moving evolution of AI means that any official guidance becomes obsolete almost as soon as it's written. So, our strategy is focused on experimentation - adopting a 'fail fast, fail cheap' mentality that encourages innovation through trial and error. While many initiatives may not yield immediate results, the learnings we gain are invaluable, and occasionally, we uncover a gem.

    There’s also a broader cultural and sectoral challenge we must confront. The housing sector traditionally leans towards caution, and can sit back and wait for either guidance or prescription. That isn’t going to happen with AI; its rapid progression doesn’t allow for the luxury of delay or indecision.

    How important would you say AI is to Yorkshire Housing today? How important do you think it will become?

    AI is set to be a game-changer for us, similar to the transformative effect of the internet but expected to unfold much faster. Currently, AI plays a role in some small-scale operational areas, enhancing our efficiency and customer service. Its impact, especially in automating the heavy lifting of housing services and improving logistical coordination across Yorkshire, which is a vast and varied region, is just beginning to be felt.

    So I think the issue with AI is we don't know what it could potentially do. It feels at the moment like it's limitless, but I think it will be the engine that runs our business probably in five years' time.

    What do you feel is the overall opportunity for AI to help overcome the significant issues facing the sector?

    AI is set to revolutionise several aspects of our operations in the coming years. Firstly, by automating repetitive tasks, AI can significantly enhance our efficiency and improve job satisfaction for our staff. This transition allows them to focus on more rewarding and complex tasks. 

    Secondly, AI’s capacity for trend analysis is invaluable. It can interpret vast datasets, far beyond human capability, to provide actionable insights for service development. This deep analysis will guide us in refining and expanding our offerings effectively. 

    Thirdly, sentiment analysis is emerging as a crucial tool. As we strive to improve customer interactions, AI can analyse communication patterns to ensure we're meeting expectations and addressing concerns adequately. This extends to understanding public sentiment through social media and other channels, helping us gauge and improve our public image.

    Lastly, AI’s ability to harness open-source data represents a significant opportunity. By integrating diverse data sets, such as fire, police, and local government information, AI can help us develop targeted interventions that enhance community well-being. This approach, which utilises publicly available data, enables us to construct a comprehensive picture of the needs and trends within specific locales, driving informed decision-making and proactive service adjustments.


    Do you believe the social housing sector is utilising AI as much as it could do? What is holding the sector back in its use of AI?

    I’d say the sector's engagement with AI is currently being hindered by a combination of fear, misinformation, and operational pressures. Early negative media portrayals and high-profile scepticism, such as those expressed by the likes of Elon Musk, have contributed to a cautious approach towards AI. This has, to an extent, reinforced the sector's reluctance to embrace this technology. 

    The everyday demands of the sector leave little room for the necessary strategic reflection on AI’s potential applications as well. This 'bandwidth' issue is significant – there's a widespread struggle to step back and view operations from a 'helicopter perspective' to identify areas where AI could be beneficial. Despite these challenges, I really believe the sector needs to carve out space and capacity to do this.

    The parallels between the initial scepticism surrounding the internet and the current apprehension towards AI are striking. Just as the internet has ultimately become a beneficial force in most aspects of modern life, enhancing flexibility and communication, I believe that AI will follow a similar trajectory once misconceptions are dispelled and its practical benefits are fully understood. 

    The fear of the unknown is a considerable barrier, but it's not insurmountable. This is why contributing to discussions and reports on this subject is vital for the sector. 

    Apart from the services Voicescape offers, have you introduced, or are you looking at introducing, AI into any other of Yorkshire Housing’s services and operations? Are any emerging opportunities particularly taking your interest?

    Yes, we’ve got a couple of things going. We're currently implementing Salesforce, which incorporates intriguing AI functionalities. This integration is making us consider additional AI applications we could incorporate simultaneously.

    We’re also engaging with Amazon Web Services (AWS) to explore their advanced AI solutions. On the communication front, we have opted for Bing Chat over ChatGPT, primarily due to the perceived security benefits within the Microsoft ecosystem.

    At a more granular level, we're leveraging AI through Adobe for document management. Its ability to swiftly summarise extensive PDFs is a game-changer for efficiency, especially for those of us with limited patience for lengthy documents. This feature sums up the practical, everyday benefits of AI, from streamlining customer service to enhancing internal operations.

    As previously discussed, sentiment analysis is a significant focus for us. The ability of AI to monitor and evaluate our call handling processes in real time presents a fantastic opportunity. It could allow us to analyse conversations and identify customer sentiment, alerting us when a customer may feel dissatisfied with the resolution of their call. With that, we can take a more proactive and compassionate approach in our follow-up, ensuring that all customer concerns are thoroughly addressed. Currently, this is the main area we're exploring and discussing internally, as we see it as a significant step forward in enhancing customer satisfaction.

    What are the challenges and risks of using AI in the social housing sector and how do you think they can be overcome?

    Firstly, it's important to recognise that our position within a regulated sector shouldn't inhibit our exploration and implementation of AI technologies. However, we need to be vigilant in terms of the platforms we use, the origins and destinations of our data, and how information is reported. Questioning and seeking assurances is crucial for overcoming potential risks associated with the use of AI. While embracing AI, like adopting any new technology, carries inherent risks, these can be managed effectively. If you're asking the right questions and are clear on what you want AI to do, that goes quite a long way to mitigate some of those risks. The problems come in when people let AI run rampant through their business without having a clear purpose for it. So, clarity regarding the intended purpose of AI within operations and maintaining robust oversight mechanisms are critical for risk mitigation.

    The challenge posed by deep fakes is also a big concern in my eyes, particularly in terms of fraud and security breaches. We need to be proactive in educating our community - both staff and customers - about the risks and realities of digital deceit. Just as we have adapted to the dangers of phishing scams, we must now raise awareness about the sophistication of deep fakes and the importance of questioning digital authenticity.

    Do you think there is anywhere we should draw the line in terms of how much and where we apply AI in the social housing sector?

    The line is wherever it strays into anything that makes you feel uncomfortable from a moral perspective. In my eyes, gut instinct is something that is underutilised - if something makes you think, "I'm not too sure about that", you’ve got to take a step back and understand what's making you feel that way. 

    When it comes down to it, we’re dealing with a lot of sensitive data and a whole cross-section of people, some of whom will be in stressed situations. We’ve got a duty of care in terms of how we manage their data, but also how we manage them in an objective and a fair way. There’s a risk that AI could, depending on what you're doing, potentially marginalise groups, either intentionally or unintentionally. So we need to guard against those risks.

    Are the right people leading the way in AI in the sector at the minute (in social housing and in general)?

    I don’t mean this in a disparaging way, but I don't think anybody is leading the way on AI. 

    That’s because I don't think anybody can lead on something that is changing faster than they are able to explain to others or report on. It's something that we need to be having a conversation around as a sector, within our trade and professional bodies. Initiatives like this report are also really positive. 

    There’s a real opportunity to harness this technology better, particularly as we are going down the route of introducing more professionalisation and requirements for professional qualifications within the housing sector. 

    In my eyes, this isn’t something for Central Government to lead on. The average piece of legislation takes about 18 months to go through Parliament and within that timeframe, the situation with AI will have changed so much. So any legislation will be out of date before it’s even got on the statute book. 

    So no one is currently leading, but I’m not sure that’s a bad thing. The sector just needs clear support. However, I don’t mean regulation by that; I mean helping organisations understand how they can keep safe and how AI can contribute to the success of their business and improve the quality of customer services. I think it's just showing people the art of the possible and letting them run with it. 

    Do you have AI governance practices / principles in place? Who developed them? How? Have they been tested or externally verified? Do you feel that this is a good example for the sector to follow?

    At the moment, responsibility for AI sits with our Technology, Insight and Change teams. Within that team also sit our Data Security and Information Security teams.

    However, we haven't got any set principles or governance around AI yet. That’s a deliberate decision - we're still at the exploratory stage and putting strict governance frameworks or limits on that when we’re trying to explore seems counter-intuitive. 

    We need to learn lessons from how overly prescriptive many of us in the sector were about internet usage when it was a burgeoning technology. We need to treat grown-up adults as grown-up adults and give them the parameters within which to work, but also give them the freedom and flexibility to explore and experiment with the opportunities AI can bring to them in their working lives.

    Would you recommend that approach to other organisations?

    Everybody is in a very different place depending on their risk appetite, mindset and stage they are at in understanding the transformative potential of AI.

    Leaders in the sector will make their own decisions about that in the same way they have made their own decisions as organisations about their approach to flexible working post-pandemic. There are some that prescribe that people have to go back into an office a set number of days per week, or even prescribe which days. Our approach is to say ‘you're a grown-up adult, you don't need us to tell you where you need to do your job, and how and when’. However, other organisations may take different approaches. So it's the same principle.

    How should RSLs communicate with stakeholders, including residents, about how, where, and why they use AI?

    AI is a tool rather than a standalone sort of entity. So, of course we need to be transparent about why we are doing certain things, such as how we use customers’ data and when we are using, for example, sentiment analysis on calls. 

    It's not necessarily just about simply saying ‘we're using AI’ without context. In fact, that would just create confusion and potentially concern - and not just among our customers; it even happens to me. 

    What core elements do you think should be in AI governance practices and principles?

    I think you can argue this two ways. 

    I am really nervous about setting governance principles and approaches because the landscape is changing so quickly that the framework you put in place today will not be relevant by the time it is firmly established. It will need to be continuously reviewed and changed to stay relevant and useful. 

    Of course, there needs to be a principle about not using it for immoral or illegal purposes. That’s a given, but of course, if someone is intent on using it for those purposes, they will find ways around principles and guidelines. 

    So it’s a really difficult question because until we're clear on what AI can potentially do and also where it's potentially higher risk, I think having a framework in place now feels too early. 

    Finally, do you have any regrets in terms of introducing AI to your organisation to date?

    I only regret not picking up and running with AI sooner. 

    I think there's an issue here about how AI may get reported and discussed. You see conversations, particularly on social media, where it’s treated like it's the rise of the machines and the world's going to end as a result of this. 

    We know there are going to be bits that we get wrong, or where we decide, on reflection, that something isn’t the best use for it. I think we need to be really honest and open about that, acknowledging where an application of AI hasn’t worked quite as we’d hoped while continuing to find really good uses for it.

    If this is something you would like to read more about, download the full FREE whitepaper here.

    Get in touch

    Want to see how Voicescape technology could help your housing association? Get in touch