
AI and Information Governance

13 June 2023
  

With so much information available about governance when it comes to Artificial Intelligence (AI), people have been turning to AI itself to help them collate it and provide a suitable response. And why not? This is one of the benefits we’ve been promised: AI that will help us be more efficient and productive, enabling us to focus on higher-value tasks. So what’s the problem? We’re just doing what we’ve been told to do: more with less.

Well, in principle there isn’t a problem. For knowledge workers this is an evolution of search and of how we access large volumes of useful (and some not so useful) information. We can apply our own knowledge to combine, refine, reword, and tailor the returned information into something relevant and applicable for our purposes, often with our tacit understanding of that purpose at the forefront of the process.

For example, to write this article an initial decision needed to be made: who is the target audience? AI governance can be about the management and auditing of AI data and the models applied, which is relevant to any organization developing AI solutions. We could talk about the importance of documenting the techniques used to train the models (e.g., human-assisted), the parameters applied and the testing metrics. Instead, deciding that this article would be about governing the outcomes of using AI-generated content is what set its context.

Apologies if you are here for the former; please search again!

Governance in relation to the use of AI content in your organization

Not only is the topic important, but so is the context. Without context, this article could have been about people building AI solutions, those looking at AI regulation, and/or people who want to use AI to help them be productive. You are in the right place if you are interested in governance in relation to the use of AI content in your organization.

Governance regarding AI-generated content provides us with a framework in which to work safely. It ensures that knowledge workers can find, discover, and use content safely with the tools they have available, and it gives us confidence that the information we are using is appropriate.

Information governance should be something you are already doing, and it now needs to be extended to cover how you manage AI-generated content. If you don’t have an information governance policy, processes and procedures in place, best start now!

If you read the headline-grabbing AI news, the recommendations are simple:

  • Know your sources. Where has the information come from? Is it a ‘trusted’ source? How has the AI model been developed and trained?
  • Understand what you asked for. What context did you provide? Is the available content reflective of the topic? Is the topic unusual? Are there proven citations from the sources?
  • Validate the facts and appropriateness with others. Is it factually correct? Is there evidence of accuracy? Is it free from bias? Is it ethical? Does it align with your company culture?
  • Review and refine before publishing. Is it engaging? Does it read well? Does it align with your company language?
  • Regulate and set standards. Does it comply with any privacy obligations? Is it being used appropriately (who ‘owns’ it, copyright, open-source agreements, personal data protection, is AI recognized as a contributor)? Is it still valid to use or keep?
  • Understand what it costs. Running Large Language Models (LLMs) isn’t cheap; a considerable amount of compute power is consumed to produce the output. Do you know how this cost is being accounted for? Are you getting good value from what people are asking and the outcomes they achieve? And how are you balancing this against your organization’s net zero strategy? (A sketch of how these checks might be captured as content metadata follows this list.)
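One way to make these checks tangible is to record them as metadata alongside each piece of AI-generated content. The sketch below is a minimal illustration only: the AIContentRecord class, its field names and the example values are assumptions for this article, not a standard schema or a feature of any particular product.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: the field names and example values are assumptions,
# not a standard schema for AI content governance.
@dataclass
class AIContentRecord:
    title: str
    model: str                       # which model or tool produced the draft
    prompt_summary: str              # the context provided to the model
    sources: list[str]               # where the underlying information came from
    reviewed_by: list[str]           # who validated facts, bias and tone
    approved_for_publishing: bool
    retention_review: date           # when to check it is still valid to keep
    estimated_cost_usd: float = 0.0  # rough compute cost, if you track it

# Hypothetical example record for a single piece of AI-assisted content.
record = AIContentRecord(
    title="Draft customer briefing",
    model="example-llm",
    prompt_summary="Summarise recent service updates for client-facing staff",
    sources=["internal knowledge base", "published service notes"],
    reviewed_by=["editor", "compliance lead"],
    approved_for_publishing=True,
    retention_review=date(2024, 6, 13),
    estimated_cost_usd=0.45,
)
print(record)
```

Captured this way, the same record can feed review workflows, retention checks and cost reporting without extra effort from the author.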

There are limits to how much of the above you can achieve with AI. There are certain external risks we will have limited control over, such as knowing the sources and the algorithm that decides what to present. We can, though, reduce and manage the risk by putting controls in place, such as getting better with our prompts to mitigate using the wrong information. Already looking for a prompt engineer/writer to get the best out of AI? We’ve all got better at using search (and search has got better at understanding us), and we will need to upskill in the same way with AI-sourced and AI-generated content.

Extending your information governance framework to include AI is essential. A risk-based approach may be one way of tackling the problem: acknowledge each risk and put controls in place to minimize its likelihood and consequence.
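As an illustration of what a risk-based approach could look like in practice, the snippet below sketches a simple risk register: each risk is scored by likelihood and consequence and paired with a control. The risks, scores and controls shown are assumptions chosen for the example, not a prescribed framework.

```python
# A minimal, illustrative risk register for AI-generated content.
# The risks, 1-5 scores and controls below are examples, not a prescribed framework.
risks = [
    {"risk": "Unknown or untrusted sources", "likelihood": 3, "consequence": 4,
     "control": "Limit use to approved tools and require cited sources"},
    {"risk": "Factual errors or bias in output", "likelihood": 4, "consequence": 4,
     "control": "Mandatory human review before publishing"},
    {"risk": "Personal data or copyright exposure", "likelihood": 2, "consequence": 5,
     "control": "Privacy and IP checks in the publishing workflow"},
]

# Score each risk (likelihood x consequence) and address the highest-scoring first,
# so controls are applied where they reduce the most exposure.
for r in sorted(risks, key=lambda x: x["likelihood"] * x["consequence"], reverse=True):
    score = r["likelihood"] * r["consequence"]
    print(f"{r['risk']}: score {score} -> {r['control']}")
```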

Different scenarios in which AI may be applied

When it comes to extending and strengthening your governance practices, it is worth considering the different scenarios in which AI may be applied. Here are some examples:

  • Give me more think time. Make meetings more effective: organize, summarize, track actions, produce targeted output, and notify me of anything I need to follow up on.
  • Help me get the right answer. Refine and reduce the information I need to read to provide me with the best possible answers.
  • Get me started. Provide an initial draft that I can enhance and produce quality content efficiently.
  • Do mundane tasks for me. Automate the transfer of unstructured data into meaningful structured reporting data.
  • Analyze and find patterns/trends for me. Look at large data samples for patterns and predict potential outcomes or interesting correlations.

Hence the governance framework required will need to adapt and evolve to manage different working scenarios, depending on where and how you apply AI.

One of the key pillars of Atlas is governance by design. For example, to reduce the burden of compliance, any content added to Atlas is categorized based on where it is saved. This improves its findability as well as ensuring appropriate use and retention, giving people confidence that they have the right information and can use it appropriately. Content generated using AI needs to follow the same rules, and the people creating content with AI need to be clear on their roles and responsibilities in this process.
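To show what “governance by design” can mean in practice, here is a minimal sketch of location-driven categorization: content inherits classification and retention rules from where it is saved, and AI-generated content follows the same rules with an extra review flag. The paths, labels and retention periods are assumptions for the example and are not Atlas’s actual configuration.

```python
# Illustrative sketch of location-driven categorization. The paths, labels and
# retention periods are assumptions, not Atlas's actual rules.
LOCATION_RULES = {
    "projects/client-a": {"classification": "Confidential", "retention_years": 7},
    "knowledge/how-to":  {"classification": "Internal",     "retention_years": 3},
    "news/company":      {"classification": "Public",       "retention_years": 1},
}

def categorize(save_path: str, ai_generated: bool) -> dict:
    """Apply the rules of the save location. AI-generated content follows the
    same rules, but is flagged so reviewers know extra checks apply."""
    rules = LOCATION_RULES.get(
        save_path, {"classification": "Internal", "retention_years": 1}
    )
    return {**rules, "ai_generated": ai_generated, "requires_review": ai_generated}

print(categorize("projects/client-a", ai_generated=True))
```

The design choice here is that the author never has to tag content manually: the rules live with the location, so compliance and findability come for free, whether the content was written by a person or drafted with AI.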

Those creating AI (and superintelligence) are rightly calling for regulation. That isn’t a reason why we shouldn’t look to use AI in our work where it is applicable, as long as:

  • we understand how it is being used,
  • we’ve taken the time to put governance in place, and
  • importantly, we monitor, manage, and address any potential risks.


AI-related articles

There are plenty of excellent articles available on the subject and some ‘words of warning’ as well. Here is a selection worth a read:

AI is coming.

What can AI help with?

What can go wrong and how do we regulate?


Author bio

Grant Newton

Grant is an outcome-focused delivery professional. He has a track record of embedding enduring capabilities through new technologies and processes across a range of organisations. There is nothing Grant enjoys more than seeing happy customers and business value happen together through the changes introduced.
