top of page
Writer's pictureMichael Tobin

The Importance of Responsible AI

Updated: Apr 26

AI. It's everywhere, impossible to avoid, I don't think we'll see AI replace professionals any time soon, but I do think it will be put in place to assist people and increase efficiency at an accelerated rate. As I write this blog post, there's a button "Create AI Title" and another "Finish with AI" - and don't get me wrong - I'm half tempted to push both. I'm sure generative AI knows much more about writing a catchy article than me, or it might generate a more catchy title - but it goes against the point of writing this.


As someone who works with Azure, I'm used to seeing new technology everyday, it's my job to stay up-to-date with these things and understand how they work, and how to implement them. However the mass adoption of AI feels different, and I'm all for AI - I love new technology but I also believe how we adopt it, is just as - no - even more important than the fact that we do adopt it.


Okay, so I think Generative AI is cool, I also worry about the risks, what can we do about it?


Thankfully, there's a lot of resources out there. The NIST Trustworthy & Responsible Artificial Intelligence Resource Center (or AIRC for short) has a great framework, and a 210 page playbook (NIST AIRC - Home). I've reviewed the framework and it aims to enhance the integration of trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. It provides a structured approach to manage risks associated with AI.


The NIST AI RMF is a great tool for both organisations looking to set standards and also individuals tasked with the responsibility for developing and deploying AI.


I stumbled upon the framework whilst doing the Fundamentals of Responsible Generative AI badge on MS Learn and I'd strongly recommend the course (Fundamentals of Responsible Generative AI - Training | Microsoft Learn) Microsoft have aligned their 4 stage approach to Responsible AI practices for Azure OpenAI models to the NIST AI RMF but I also think their approach is slightly easier to understand - and broken down in a much more readable format you can find it here: Overview of Responsible AI practices for Azure OpenAI models - Azure AI services | Microsoft Learn.


In the context of Azure Open AI, something I've been playing around with in the lab a lot recently, there's a list of methodologies applied to abuse monitoring, is it perfect? is it unbeatable? Probably not at this point, but as more people adopt it, I suspect it will be trained to improve year on year. One really interesting concept I stumbled upon and makes absolute sense is Red teaming your LLM.


If you're not familiar with red teaming, it's a way to test against a real world cyber-attack by emulating one, typically with external organisations to prevent cognitive errors such as groupthink and confirmation bias.


In the context of LLMs it's a a best practice for assessing the safety and potential harms of large language models (LLMs). These exercises involve probing and testing LLMs to identify risks, gaps in safety systems, and areas for improvement. Red teamers, often a diverse group of experts, evaluate LLMs’ outputs and inform mitigation strategies. While not a replacement for systematic measurement efforts, red teaming ensures responsible AI development. Microsoft, for instance, conducts red teaming for its Azure OpenAI Service models. More details can be found here: Planning red teaming for large language models (LLMs) and their applications - Azure OpenAI Service | Microsoft Learn.


While I've focused here on NIST and MS, there are other frameworks out there, and lessons being learnt every day, I'm confident that AI can be adopted in the right way if we stick to a standard, and test often. The right resources are in place to help us adopt this technology, my biggest fear is the speed of adoption may force people (and therein organisations) to try and get "ahead" without considering their approach and methodology.


63 views

Comments


bottom of page