The transformative power of automation and generative AI

In this second article in our generative AI-themed Think Piece series, we outline the business benefits of combining automation and generative AI, then explore cybersecurity use cases where AI-assisted automation augments decision-making, bridges the cyber skills gap, and safeguards against privacy concerns.

The benefits of combining automation and generative AI

The macroeconomic uncertainty of recent years has seen businesses placing a strong emphasis on efficient growth and cost reduction. This has led CEOs and leaders to prioritize automation as a key strategy to deliver immediate value across various areas of their organizations. According to a 2022 study by Salesforce, an estimated 90% of businesses are already leaning into automation to bolster their operational capabilities. 

Considering Gartner’s recent finding that 70% of executives are currently exploring generative AI, widespread adoption of this pairing seems imminent. For example:

  • UK-based renewable energy company Octopus Energy integrated AI into its customer email systems in February 2023. The tool now does the work of 250 people and has replied to more than a third of customer emails. What’s more, its AI-written emails delivered an 80% customer satisfaction rate, a 15% improvement on emails previously written by staff members.
  • Across the pond, Californian software company Freshworks has done something similar, enhancing its proprietary AI tool to allow customer service agents to respond quickly to customers, marketers to compose more compelling copy, and salespeople to craft prospecting emails.

These innovative shifts in business practices are yielding significant advantages, allowing companies to optimize operations, achieve cost savings, and enhance overall organizational performance.

The right way to build a cybersecurity program with AI-assisted automation

With the rise of complex cyberattacks and a staffing shortage, cybersecurity is set to reap the rewards of AI and automation. Today’s cybersecurity leaders are already implementing AI to build more scalable programs. Here’s how:

Augmented decision-making

While large-scale data analysis has long been possible with ML/AI models, it relied on an end user knowing what to do once the data had been analyzed and presented back. With the rise of Large Language Models (LLMs), AI-powered tools can now perform a much wider range of tasks, such as extracting valuable insights from huge data sets and generating text-based recommendations, enabling businesses to make data-driven decisions swiftly.

A good example of this is Red Sift’s first GPT-4 powered feature, Relevance Detection, which discovers and auto-classifies assets that were previously unknown or left unsecured. It automatically scans identities found across DNS, WHOIS, and SSL certificate sources and generates tailored recommendations that help customers decide whether an identity should be monitored and why. Not only does this functionality save huge amounts of time and mitigate human error, but most importantly, it allows organizations to quickly secure vulnerabilities that had previously gone undetected.
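Red Sift hasn’t published how Relevance Detection works internally, but the general pattern described above — summarizing an asset’s DNS, WHOIS, and certificate metadata into a prompt, then asking an LLM for a monitor-or-ignore recommendation — can be sketched roughly as follows. The field names, prompt wording, and helper functions are illustrative assumptions, not Red Sift’s actual implementation:

```python
# Illustrative sketch only — not Red Sift's actual implementation.

def build_classification_prompt(asset: dict) -> str:
    """Summarize discovered-asset metadata into a prompt for an LLM."""
    lines = [
        "You are classifying internet-facing assets for a security team.",
        f"Hostname: {asset['hostname']}",
        f"DNS records: {', '.join(asset.get('dns', [])) or 'none'}",
        f"WHOIS registrant: {asset.get('whois_registrant', 'unknown')}",
        f"TLS certificate issuer: {asset.get('cert_issuer', 'none')}",
        "Answer MONITOR or IGNORE, then give one sentence of reasoning.",
    ]
    return "\n".join(lines)

def parse_recommendation(model_reply: str) -> str:
    """Extract the MONITOR/IGNORE verdict from the model's free-text reply."""
    first_word = model_reply.strip().split()[0].upper().strip(".,:")
    return first_word if first_word in {"MONITOR", "IGNORE"} else "REVIEW"
```

In production the prompt would be sent to a model endpoint (GPT-4, per the article), and the reply is parsed defensively because LLM output is free text rather than structured data.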

“The average Fortune 500 company takes 12 or more hours to find a serious vulnerability, while bad actors focusing on a site take less than 45 minutes. This means that discovering assets that are unknown, short-lived or impersonating is critical, and discovering them quickly is equally as critical.”

Rahul Powar, CEO, Red Sift.

The process of discovery and classification is incredibly tedious to do manually, difficult to do consistently, and complex for a human to undertake. This makes Relevance Detection just one great example of where AI can do an important job easily, consistently, and on a fully automated basis.

Bridging the cybersecurity skill shortage problem

The cybersecurity skill shortage is well known – it is estimated that a worldwide gap of 3.4 million cybersecurity workers exists. Gartner predicts that by 2025, lack of talent or human failure will be responsible for over half of significant cyber incidents. With the demand for cybersecurity professionals surpassing the available talent pool, exploring the potential of generative AI as a solution holds promise as it supports:

Resource reallocation

AI’s ability to quickly identify cyber attacks surpasses that of human analysts, all without the risk of fatigue or human error. By relieving the burden on cybersecurity personnel carrying out manual discovery, organizations can redirect their focus and resources to where they are most needed.

Proactive threat hunting

AI can proactively search for potential threats and vulnerabilities within an organization’s network and systems. By continuously scanning for indicators of compromise and conducting penetration testing, it can identify weaknesses and security gaps, allowing organizations to address them before malicious actors exploit them.
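As a toy illustration of the indicator-of-compromise scanning mentioned above — not any particular product’s logic — the core of such a check is simply matching observed network events against a curated indicator set. The indicator values here are drawn from documentation ranges and are purely illustrative:

```python
# Toy IOC matcher: real threat-hunting platforms enrich, score, and
# correlate indicators; this shows only the basic matching step.

KNOWN_BAD = {
    "ip": {"203.0.113.66"},            # documentation-range IP standing in for a real IOC feed
    "domain": {"login-example.test"},  # illustrative malicious domain
}

def find_compromise_indicators(events: list) -> list:
    """Return the events whose destination matches a known indicator."""
    hits = []
    for event in events:
        if (event.get("dst_ip") in KNOWN_BAD["ip"]
                or event.get("dst_domain") in KNOWN_BAD["domain"]):
            hits.append(event)
    return hits
```

The value of automating this step is scale: a machine can run the same comparison over millions of events continuously, which is exactly the kind of work that exhausts a human analyst.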

While generative AI cannot replace human expertise entirely, it can serve as a force multiplier, enabling organizations to combat cyber threats more effectively and alleviate the strain caused by the cybersecurity skills gap.

Consistency and continuity on otherwise hard-to-define problems

Even when organizations have access to cyber talent, new challenges emerge because, well, we’re only human. Human nature introduces a degree of inconsistency; assessments, for example, can vary from employee to employee. What’s more, accomplishing tasks continuously is challenging given our basic human needs: we require sleep and take vacations, so some level of discontinuity in carrying out tasks is inevitable.

Human error stemming from these inconsistencies is a significant contributing factor in cybersecurity incidents; the 2023 Data Breach Investigations Report found that 74% of breaches involved the human element, which includes social engineering attacks, errors, or misuse.

Let’s take the impersonation problem as an example. Every day, we speak with enterprises concerned about the severity of the domain and brand impersonation they face. It isn’t uncommon for established, well-known enterprises to manage their domain estates through tedious, manual processes, for example, in spreadsheets. This is a convoluted way to work when fraudulent domain incidents progress and develop so rapidly.

Managing lists of owned domains as well as identifying impersonation domains and any resulting legal processes in spreadsheets quickly becomes a time-consuming, costly, and error-prone process. If an employee overlooks a lookalike domain that goes on to launch a phishing attack, an organization’s reputation and revenue could be at risk.
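To make the contrast with spreadsheet-based tracking concrete, here is a minimal, illustrative sketch of automated lookalike screening: normalize common visual character swaps, then measure edit distance against the owned-domain list. Real brand-protection tooling (Red Sift’s included) uses far richer signals; the substitution table and distance threshold below are assumptions for the sake of example:

```python
# Minimal lookalike-domain screen — illustrative only.

HOMOGLYPHS = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m"}

def normalize(domain: str) -> str:
    """Collapse common visual character substitutions and hyphens."""
    d = domain.lower().replace("-", "")
    for fake, real in HOMOGLYPHS.items():
        d = d.replace(fake, real)
    return d

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def is_lookalike(candidate: str, owned: set, max_distance: int = 1) -> bool:
    """Flag a domain that sits close to an owned domain but is not owned."""
    if candidate.lower() in owned:
        return False
    cand = normalize(candidate)
    return any(edit_distance(cand, normalize(o)) <= max_distance for o in owned)
```

Even this toy version runs identically on the millionth domain as on the first — the consistency and continuity that spreadsheet processes cannot provide.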

Purpose-built, AI-powered technology like Brand Trust is needed to fight this problem: a solution that automates the consistent and continuous discovery of over 150 million hostnames a day. AI can detect changes to websites far faster and more reliably than a human can, freeing teams to respond to threats as and when they emerge.

Safeguarding against privacy concerns

As we generate and share more data, privacy concerns have become critical. However, automation and generative AI can be a powerful combination when it comes to safeguarding data. 

Automation can enhance data privacy by streamlining data management processes. Manual handling of data is not only time-consuming but also increases the risk of human error and unauthorized access. By automating data collection, storage, and analysis, organizations can minimize the number of human touchpoints, reducing the potential for privacy breaches. Automated systems can enforce strict access controls, encrypt sensitive information, and regularly update security protocols, thereby strengthening data privacy.
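As one deliberately tiny illustration of that principle, an automated ingestion step can redact sensitive values before a record is ever stored or seen by a human. The field names and masking rule here are assumptions for the sake of example, not a prescribed standard:

```python
import re

# Illustrative automated privacy control: mask email addresses in
# free-text fields before the record reaches storage or human eyes.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_record(record: dict, text_fields: tuple = ("notes", "message")) -> dict:
    """Return a copy of the record with emails masked in the given fields."""
    clean = dict(record)
    for field in text_fields:
        if field in clean and isinstance(clean[field], str):
            clean[field] = EMAIL_RE.sub("[REDACTED-EMAIL]", clean[field])
    return clean
```

Because the rule runs on every record, there is no analyst deciding case by case what to redact — removing exactly the human touchpoint the paragraph above describes.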

Keep an eye out for an upcoming article in this series, where we’ll delve into how generative AI can positively impact data privacy. Stay tuned!

Summary

In summary, the combination of automation and generative AI revolutionizes operational capabilities across multiple fronts. It streamlines operations, empowers decision-making, mitigates human error, helps bridge the skills shortage gap, and enhances the operator experience. 

Curious to find out how an AI-assisted solution like Brand Trust can help your organization uncover vulnerabilities that you haven’t yet discovered? Get in touch to book your free demo now.

PUBLISHED BY

Francesca Rünger-Field

20 Jun. 2023
