November 13, 2024

While organisations should undoubtedly proceed with care and caution, underpinning AI deployment with high-quality data allows them to manage both regulatory and ethical risks, says Yohan Lobo, Industry Solutions Manager, Financial Services at M-Files

AI safety and security have been hotly discussed in recent weeks, with numerous high-profile figures expressing concern at the pace of global AI development at the UK’s AI Safety Summit, held at Bletchley Park.

Even King Charles weighed in on the subject when virtually addressing the summit’s attendees, stating: “There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure.”

Additionally, in his first King’s Speech, delivered on Tuesday to set out the UK government’s legislative agenda for the coming session of parliament, King Charles outlined the government’s intention to establish “new legal frameworks to support the safe commercial development” of revolutionary technologies such as AI.

Yohan believes that avoiding the pitfalls highlighted at the summit and in the King’s Speech hinges on organisations leveraging AI solutions built on a foundation of high-quality data.

Yohan said: “Mass adoption of AI presents one of the most significant opportunities in corporate history, and businesses will do their utmost to cash in on it: the technology is capable of delivering exponential increases in efficiency and allowing organisations to scale at speed.

“However, concerns rightfully raised at the UK’s AI Safety Summit and reinforced in the King’s Speech demonstrate the importance of developing AI ethically and of ensuring that organisations looking to take advantage of AI solutions consider how best to protect their customers.

“Data quality lies at the heart of the global AI conundrum: if organisations intend to deploy Generative AI (GenAI) on a wider scale, it’s vital that they understand how Large Language Models (LLMs) operate and whether the solution they implement is reliable and accurate.

“The key to this understanding is having control over where the LLM’s knowledge comes from. For example, if a GenAI solution is given free rein to scour the internet for information, the suggestions it provides will be untrustworthy, as you can’t be sure the information has come from a reliable source. Bad data in always means bad language out.

“In contrast, if you only allow a model to draw from internal company data, the degree of certainty that its answers can be relied upon is significantly higher. LLMs grounded in trusted information can be incredibly powerful tools and a dependable way of boosting an organisation’s efficiency.
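In practice, the grounding Yohan describes is commonly implemented as retrieval-augmented generation: the model is only ever shown passages fetched from an approved internal store, never the open internet. The Python sketch below is a minimal illustration of that pattern, not an M-Files feature; every name in it (APPROVED_DOCS, retrieve, llm_complete) is hypothetical, and llm_complete stands in for whichever model API an organisation actually uses.

    from dataclasses import dataclass

    @dataclass
    class Document:
        source: str  # provenance of the text, e.g. an internal repository path
        text: str

    # The vetted internal knowledge base: the only material the model may draw on.
    # (Hypothetical example content, for illustration only.)
    APPROVED_DOCS = [
        Document("policies/complaints.md",
                 "Complaints must be acknowledged within 48 hours of receipt."),
        Document("policies/retention.md",
                 "Client records are retained for seven years after account closure."),
    ]

    def retrieve(query: str, docs: list[Document], k: int = 2) -> list[Document]:
        """Rank documents by keyword overlap with the query.

        A production system would use a proper search or vector index; the
        point here is only that retrieval is confined to approved sources.
        """
        terms = set(query.lower().split())
        scored = sorted(docs,
                        key=lambda d: len(terms & set(d.text.lower().split())),
                        reverse=True)
        return scored[:k]

    def grounded_prompt(query: str, context: list[Document]) -> str:
        """Build a prompt that confines the model to the retrieved sources."""
        sources = "\n".join(f"[{d.source}] {d.text}" for d in context)
        return (
            "Answer using ONLY the sources below, and cite the source you used. "
            "If they do not contain the answer, reply 'Not found in approved "
            "sources.'\n\n"
            f"Sources:\n{sources}\n\nQuestion: {query}"
        )

    def answer(query: str, llm_complete) -> str:
        # llm_complete is a placeholder for whichever model API is in use.
        context = retrieve(query, APPROVED_DOCS)
        return llm_complete(grounded_prompt(query, context))

Because the prompt instructs the model to refuse when the approved sources are silent, a poorly supported question surfaces as “Not found in approved sources” rather than as a confident guess.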

“The level of human involvement in AI integration will also play a crucial role in its safe use. We must continue to treat AI like an intern, even if a solution has been operating dependably for an extended period. This means conducting regular audits and treating AI’s findings as recommendations rather than instructions.”
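Yohan’s “treat AI like an intern” rule can be made concrete with a simple review gate: no model suggestion takes effect without a named human’s sign-off, and every decision, including rejections, is logged for the regular audits he recommends. The sketch below is illustrative only, with hypothetical names (review_gate, AUDIT_LOG); it is not an M-Files API.

    from datetime import datetime, timezone

    AUDIT_LOG: list[dict] = []

    def review_gate(recommendation: str, reviewer: str, approved: bool,
                    note: str = "") -> str | None:
        """Treat the model's output as a recommendation, never an instruction.

        Nothing leaves this function without a named human's decision, and
        every decision is logged so periodic audits can review how the model
        is being used.
        """
        AUDIT_LOG.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "recommendation": recommendation,
            "reviewer": reviewer,
            "approved": approved,
            "note": note,
        })
        return recommendation if approved else None

    # Example: a reviewer rejects a suggestion, and the refusal is still audited.
    result = review_gate("Close the client's account.", reviewer="j.smith",
                         approved=False, note="Contradicts retention policy.")
    assert result is None and len(AUDIT_LOG) == 1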

Yohan concluded: “Ultimately, companies can contribute to the safe and responsible development of AI by deploying only GenAI solutions that they trust and fully understand. This begins with controlling the data the technology is based on and ensuring that a human is involved at every stage of deployment.”
