Generative AI at work: Using policy to protect your organization is the critical first step

The introduction of generative AI (GenAI) into the mainstream has taken the world by storm. All of a sudden, jobseekers are using ChatGPT to write cover letters, people are chatting with historical figures, aspiring interior designers are redesigning their homes, and uninspired wordsmiths are turning to AI to create poetry. While its explosion has been a long time coming, its rapid adoption feels startling, and it is set to change the way we live and work in ways we have yet to experience.

Against this backdrop, businesses around the world are discovering the impact and disruptive potential this technology will bring. Not only do GenAI and its pace of change demand robust understanding and genuine curiosity from enterprises wanting to gain a competitive edge, but they also require critical thinking to ensure businesses benefit from these advances in technology in the safest, most compliant, and best-informed way possible.

This Think Piece series explores some of the most pertinent themes surrounding generative AI, including privacy, education, and its role in cybersecurity, starting with policy.

But first, what is generative AI?

Generative AI tools are artificial-intelligence-based tools capable of producing new content, such as text, images, videos, and code, based on inputs they can learn from, including existing data and user-submitted prompts or instructions.

ChatGPT is not the entirety of generative AI

Generative AI tools can be built on a variety of foundation models (e.g. GPT-3, DALL-E, LaMDA) to handle all kinds of use cases. For example, in OpenAI’s ecosystem, ChatGPT is a consumer-grade, web-based chatbot built on GPT-3.5 and GPT-4 that allows individuals to hold human-like conversational dialogue based on text prompts. In contrast, GitHub’s Copilot, powered by OpenAI Codex (a descendant of GPT-3), provides developers with coding suggestions. Though both trace back to the same family of large language models (LLMs), ChatGPT is trained on text data from across the internet, while Copilot is trained on publicly available code from GitHub. So, depending on the data set a foundation model is trained on, it can power any number of applications handling a wide range of tasks and functions. Herein lies the limitless potential of generative AI and the explosion of creativity and new capabilities it has unleashed.
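To make this concrete, here is a minimal sketch of how a single chat model can serve both a conversational role and a coding-assistant role depending on how it is prompted. It assumes the pre-1.0 `openai` Python SDK and a valid API key; the model name, prompts, and system roles are illustrative only, not a description of how ChatGPT or Copilot are actually built.

```python
# A minimal sketch: the same underlying chat model can act as a
# conversational assistant or a coding helper depending on the prompt.
# Assumes the pre-1.0 `openai` Python SDK; model name is illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(system_role: str, prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_role},
            {"role": "user", "content": prompt},
        ],
    )
    return response["choices"][0]["message"]["content"]

# Conversational use, in the spirit of ChatGPT:
print(ask("You are a helpful assistant.", "Explain DMARC in one sentence."))

# Code-oriented use, in the spirit of Copilot-style suggestions:
print(ask("You are a coding assistant.", "Write a Python function that reverses a string."))
```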

The GenAI is out of the bottle

In a recent Gartner poll of more than 2,500 executives, roughly 70% said they are now in exploration mode with generative AI as it gains mainstream popularity, and 68% believe its benefits outweigh its risks.

Not everyone is happy about GenAI’s arrival, though. Some enterprises (and even whole countries) have taken an anti-GenAI stance and blocked the technology over privacy and data leakage concerns. However, this stance looks likely to become a competitive disadvantage. Smart use of generative AI can streamline repetitive tasks, mitigate the shortage of talent, and protect the confidentiality of personal information – take the redaction and summarization challenge, for example. When dealing with PII that is critical for investigations, would you rather feed it to a technology that retains no knowledge of the data after processing, or to a human SOC analyst? Which better preserves the rights and privacy of the individuals affected?
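As an illustration of the redaction step, here is a minimal sketch that strips simple, regex-detectable PII from text before it is passed to any downstream tool. The patterns are deliberately naive placeholders; a real investigation workflow would rely on a dedicated PII-detection system and human review.

```python
# A minimal sketch of a redaction pass, assuming simple regex-detectable
# PII (emails, phone-like numbers). Real deployments would use a
# dedicated PII-detection library and human review.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# -> Contact Jane at [REDACTED EMAIL] or [REDACTED PHONE].
```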

Red Sift’s approach to the use of GenAI

Undoubtedly, the potential benefits that generative AI offers business operations are profound, but it is equally important to recognize the organizational risks that come with unregulated use. So, for businesses looking to embed GenAI tools into their operations, there is an urgent need for a robust governance regime that ensures safe and compliant usage.

Here at Red Sift, we have taken a risk-balanced stance in creating a policy for the use of Generative AI Tools by Red Sift employees. We want our employees to embrace and explore these powerful new technologies to understand how they work, what they’re capable of, and how they can be used meaningfully, all while using them in a safe and compliant way.

A summary of the Red Sift policy

To shine a light on the importance of a policy-driven approach to governing the use and managing the potential risks of AI in business, we are sharing some excerpts from our own AI policy here.

Our approach to regulating the use of Generative AI Tools within Red Sift has focused on four main points:

1. Classification of types of use

Here we outline a classification framework for the types of use of Generative AI tools. The aim of this classification is to ensure that prohibited uses are clear to all employees and that internal risk-evaluation mechanisms focus on uses carrying medium to high risk. Each category includes a brief description and examples so that employees can quickly determine which category a potential use falls into, and how to obtain authorization (if required) for the particular use they wish to carry out.
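As an illustration only, a classification framework like this could be sketched in code as follows. The category names, examples, and required actions are hypothetical and do not reproduce Red Sift’s actual taxonomy.

```python
# A hypothetical sketch of a use-classification framework; category
# names, examples, and actions are illustrative placeholders.
from enum import Enum

class UseCategory(Enum):
    PROHIBITED = "Prohibited"                         # e.g. submitting customer PII
    HIGH_RISK = "Permitted with prior authorization"  # e.g. customer-facing content
    LOW_RISK = "Permitted"                            # e.g. brainstorming blog titles

def required_action(category: UseCategory) -> str:
    return {
        UseCategory.PROHIBITED: "Do not proceed.",
        UseCategory.HIGH_RISK: "Complete the Risk Assessment Tool and obtain authorization.",
        UseCategory.LOW_RISK: "Proceed; record the use in the internal inventory.",
    }[category]

print(required_action(UseCategory.HIGH_RISK))
```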

2. Approved Generative AI tools

This is effectively a whitelist of Generative AI tools approved for use by Red Sift employees for work-related purposes. It clarifies which version of each tool is approved, which type of account employees should use (with a focus on centralizing use within Red Sift company accounts and avoiding the proliferation of individual employee accounts), and outlines the process a potential user must follow to obtain authorization.
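A hypothetical sketch of what such a whitelist might look like as structured data is below; the tool entries, versions, and authorization steps are illustrative placeholders, not Red Sift’s actual approved list.

```python
# A hypothetical whitelist format; entries are illustrative placeholders.
APPROVED_TOOLS = {
    "ChatGPT": {
        "approved_version": "GPT-4 via company workspace",
        "account_type": "Red Sift company account only",
        "authorization": "Request access from the policy owner",
    },
    "GitHub Copilot": {
        "approved_version": "Copilot for Business",
        "account_type": "Red Sift organization seat",
        "authorization": "Request a seat through engineering management",
    },
}

def is_approved(tool: str) -> bool:
    return tool in APPROVED_TOOLS

print(is_approved("ChatGPT"))        # True
print(is_approved("RandomAITool"))   # False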

3. Risk Assessment Tool

We provide team members with a Risk Assessment Tool that allows them to assess the risk of a particular use (including Privacy, Security, Ethics, and Intellectual Property Rights implications) and fosters a reflective, responsible, risk-based approach among potential users. Completing this Risk Assessment Tool and documenting the results is compulsory for all team members who want to engage in exceptional high-risk uses, as well as for uses that are permitted with prior authorization.
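To illustrate the idea, here is a hypothetical sketch of a checklist-style assessment across the four dimensions named above. The questions and the flagging logic are invented for illustration; the real tool’s content is not reproduced here.

```python
# A hypothetical risk-assessment checklist across the four dimensions
# the policy names; questions and logic are illustrative only.
QUESTIONS = {
    "Privacy": "Will personal data be submitted to the tool?",
    "Security": "Could the input reveal confidential systems or credentials?",
    "Ethics": "Could the output mislead or unfairly affect individuals?",
    "IP": "Could the output infringe third-party intellectual property?",
}

def assess(answers: dict[str, bool]) -> str:
    """answers maps each dimension to True if the risk applies."""
    flagged = [dim for dim, risky in answers.items() if risky]
    if not flagged:
        return "Low risk: proceed and document the use."
    return f"Authorization required; flagged dimensions: {', '.join(flagged)}"

print(assess({"Privacy": True, "Security": False, "Ethics": False, "IP": True}))
```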

4. Internal inventory & transparency

Here we define how each employee must keep a record of every project that makes use of Generative AI Tools for work-related tasks. It also requires that content generated wholly or partially by Generative AI Tools, whether exchanged between internal teams or presented to a customer, be visibly labeled as “content wholly/partially generated by AI.” The aim of this section is to promote internal record-keeping and transparency.
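A hypothetical sketch of an inventory record implementing this labeling convention might look like the following; the field names and example values are illustrative.

```python
# A hypothetical inventory record with the labeling convention the
# policy describes; field names and values are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class GenAIUseRecord:
    project: str
    tool: str
    description: str
    fully_generated: bool
    recorded_on: date

    def label(self) -> str:
        extent = "wholly" if self.fully_generated else "partially"
        return f"Content {extent} generated by AI"

record = GenAIUseRecord(
    project="Q3 customer report",
    tool="ChatGPT",
    description="Drafted executive summary",
    fully_generated=False,
    recorded_on=date.today(),
)
print(record.label())  # Content partially generated by AI
```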

Our policy is a live document that will continue to change throughout its lifecycle, particularly as legislative and regulatory proposals advance. 

Our aim is to empower human intelligence, foster an environment of trust, and encourage the best judgment and common sense in the deployment of the tools.

For example, when our developers use AI to help with their coding, if the tooling provides suggestions that look to have been copied and pasted from existing sources, they shouldn’t use them.

A framework for readers to build and/or adapt their own policy

When building a Generative AI policy for a business of any size, readers should consider the following:

Research first, both internal and external: Who is your audience and what is their knowledge level? Who will be using this policy internally? What is the current regulatory and legislative framework, and where is it heading? What are others saying in your space? 

There has been a lot of buzz around AI and generative AI. It is worth checking in with the people who will actually be using the policy to guide their decision-making, and tailoring the design of the policy to the needs of its audience. There is no need to be repetitive or over-explain things, but a good introduction that covers the basics will go a long way in helping potential users understand the contents of the policy.

Be clear about which uses are prohibited and which are allowed, and under which conditions: This is the meat and potatoes of the policy. If this is not clear in the first draft, iterate. The ultimate test of clarity will come once the policy is released and users put it into motion, but you should ensure the final working draft is as clear as possible in this regard. We encourage policy creators to use visual aids such as flowcharts, images, and other design elements to help users navigate and process the document, and to return to it from time to time. Clarity can always be improved.

Review when significant regulatory and legislative changes occur: This policy must be a living document. AI is in its infancy, and regulatory and legislative proposals are at a similar stage. However, in this short amount of time we have already seen providers of Generative AI Tools respond to regulatory action (e.g. Italy’s ban on ChatGPT being lifted after OpenAI implemented the “turn off chat history” function). It is imperative to stay on top of these developments so your policy doesn’t quickly become outdated.

Invite feedback and continuously update your FAQs: Include a section answering the questions that are likely to come up once the policy is distributed. The FAQ section of any document can seem like a drag to draft, but it is a great detector of missed spots and grey areas in the rest of the document – use it to your advantage.

How should businesses assess their risks?

Businesses should assess their risks based on the nature of their business and their long-term strategy, whether or not that strategy involves the use of Generative AI, in-house or through vendors that incorporate it into their products. Most importantly, businesses should assume that use of Generative AI is already occurring within the business, whether it has been detected or not, and set that as the starting point of their risk assessment.

To that end, a proactive approach to risk assessment should be prioritized: if there is no guidance or regulation on the use of Generative AI within the company, drafting and deploying such guidance or policy should come first. Depending on the nature of the business, some risk factors may weigh more heavily than others. Still, a broad assessment covering Legal, Ethical & Reputational, Privacy, Security, Intellectual Property Rights, and Financial risks should be carried out, with Privacy, Security, and Intellectual Property Rights at the helm.
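As a rough illustration of weighting risk factors, the sketch below scores each of the six dimensions and weights Privacy, Security, and Intellectual Property more heavily. The weights and scale are assumptions for illustration and should be tuned to the specific business.

```python
# A hypothetical weighting scheme reflecting the emphasis above; the
# weights and 0-5 scale are illustrative assumptions.
WEIGHTS = {
    "Privacy": 3,
    "Security": 3,
    "Intellectual Property": 3,
    "Legal": 2,
    "Ethical & Reputational": 2,
    "Financial": 1,
}

def weighted_risk(scores: dict[str, int]) -> float:
    """scores: each factor rated 0 (none) to 5 (severe); returns 0..1."""
    total = sum(WEIGHTS[factor] * score for factor, score in scores.items())
    return total / (5 * sum(WEIGHTS.values()))

example = {"Privacy": 4, "Security": 2, "Intellectual Property": 3,
           "Legal": 1, "Ethical & Reputational": 2, "Financial": 0}
print(f"Overall risk: {weighted_risk(example):.0%}")  # Overall risk: 47%
```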

Conclusion

Realizing the benefits of AI is becoming one of the most critical challenges businesses face today. Ultimately, those who explore its potential quickly and responsibly, while keeping policy creation flexible and adaptable, will reap the rewards. Keep an eye out for Part 2 of our ‘Generative AI at work’ series!

PUBLISHED BY

Francesca Rünger-Field

31 May 2023
