Generative AI (GenAI) is revolutionizing industries, including cybersecurity, by driving innovation, efficiency, and improved outcomes. However, its rapid adoption introduces new risks, requiring businesses to balance opportunities with challenges.
A 2024 study by Bell, Canada's largest telecommunications company, surveyed 600 Canadian business leaders and IT professionals to explore the evolving role of GenAI. Here are the key takeaways from the survey for cybersecurity professionals.
"Generative AI applications can be very useful for customer support and translating data into text-based information to make it easier for people to understand the data. GenAI is also commonly used to implement AI assistants to answer customer support questions," said Anmol Agarwal, a senior security researcher with a large company in the Dallas-Forth Worth area.
Despite its potential, GenAI adoption is slowed by significant concerns, including data exposure, copyright violations, bias, and model poisoning.
"Generative AI can output sensitive information if queries include sensitive information as it is trained on the data it is asked. Therefore, when using GenAI tools, it is important to be careful when asking questions," Agarwal said. "Any information in your question could be revealed to another user. For example, if you ask GenAI to create a program for you and give it company data, if another person asks a similar question, GenAI might respond with your company data in an example."
"Innovation without trust isn't sustainable. Let's ensure that as we push AI's boundaries, we also strengthen its foundation," said Toronto's Helen Oakley, Founding Partner of the AI Integrity & Safe Use Foundation (AISUF). "Generative AI's greatest potential lies not just in what it can create, but in the trust it can inspire. By implementing trust and transparency principles, we build a legacy of secure and ethical progress."
[RELATED: Embedding Trust as a Strategic Asset in Technical Leadership]
Organizations are deploying various controls to secure GenAI environments, and employee training figures prominently among them.
"It's also important to train staff that Generative AI can make mistakes and sometimes hallucinate information that is not true; therefore, overreliance on these GenAI tools should be discouraged," Agarwal said, adding a fourth risk to consider:
"The complexity of AI systems creates fertile ground for security threats to emerge, often unnoticed until the damage is already done," Oakley added. "Agentic AI is not a new concept. However, Generative AI is creating a new paradigm by combining creativity and collaboration with autonomy. With this evolution, it is important that we build AI systems that are both innovative and trustworthy."
"As multi-agent AI systems become more interconnected and autonomous, they also highlight critical challenges such as ensuring transparency in decision-making and safeguarding against vulnerabilities that could compromise the entire pipeline," Oakley said. "Compromised AI supply chains are a ticking time bomb."
For cybersecurity professionals, the report emphasizes several actions, among them strengthening governance.
Dr. Peter Holowka, Director of Educational Technology at West Point Grey Academy in Vancouver, B.C., said he would add the following to the "strengthen governance" section: "Understand the norms in your industry concerning AI use, and also pay attention to the evolving baseline in other industries and society."
The 2024 Bell study reveals what organizations and their cybersecurity teams already know: Generative AI is no longer a novelty; it is a transformative tool reshaping cybersecurity and other industries. By adopting robust risk mitigation strategies and aligning with regulatory frameworks, organizations can unlock the full potential of GenAI while protecting their assets and customers. As cybersecurity professionals, it's time to take the lead in shaping a secure and innovative AI-powered future.
"This research is a great example as to how technology is moving fast, and it is our responsibility as cyber experts to understand the technology that is being presented to the world. Data privacy with GenAI is a problem that we (as an industry) are very focused on right now. AI is advertised to automate our jobs and can help ease our day-to-day tasks, but companies should also be discussing the risks that come with these models," said Reanna Schultz, Founder of CyberSpeak Labs LLC and host of the Defenders in Lab Coats podcast."This research highlights a small piece of the emerging threat tactics that are being identified with AI. From a cyber defense perspective, this something we should be also focusing on in addition to data privacy. Indicators and malware family behaviors will be evolving to prevent detection from security tools."
Schultz continued, "It's important for us as professionals to understand research like this to know what trends are happening in our community so we can educate our users to defend for a better tomorrow."
Two experts from Schellman—Danny Manimbo, Principal, ISO Practice Director, and AI Assessment Leader; and Kent Blackwell, Director, Penetration Testing Team—recently presented on "How to Build Trustworthy and Secure AI Systems: Key Frameworks & Vulnerabilities You Need to Know" at SecureWorld Seattle. Here's what they had to say:
"With the rise of AI regulations (both internationally and at the state level in the U.S.), there has been an increased stress on the importance of AI governance and risk management programs in promoting compliance and trustworthy AI systems," Manimbo said. "We've seen ISO 42001 emerge as it is named in some AI regulations (ex: Colorado AI Act) and also referenced by trend-setting organizations like Microsoft in their Data Protection Requirements (DPR) Supplier Security and Privacy Assurance (SSPA) v10 updates, which include AI requirements and even a mandate of ISO 42001 certification for certain high-risk AI systems."
"We've seen a sharp increase in the number of AI-enabled applications in 2024, but we've also seen a few companies trial AI services for inclusion and determine the value-add isn't there yet," Blackwell said. "It's a very similar cycle to the advent of cloud computing that we saw 15 years ago. Some companies are jumping in with both feet and others are taking a wait-and-see approach to ensure their AI strategy aligns with their overall business strategy.”
Blackwell added, "The call-out in the study around 'Divided: Confident but also highly worried about risks (data exposure, copyright violations, bias, model poisoning)' is what we've seen the most at Schellman. Leaders know AI is the future and want to be best enabled to take advantage, but the amount of unknown risks it potentially poses gives them pause about rolling out anything quickly."