By Cam Sivesind
Thu | Nov 21, 2024 | 7:53 AM PST

Generative AI is revolutionizing industries, including cybersecurity, by driving innovation, efficiency, and improved outcomes. However, its rapid adoption introduces new risks, requiring businesses to balance opportunities with challenges.

For a 2024 study, Bell, Canada's largest telecommunications company, surveyed 600 Canadian business leaders and IT professionals to explore the evolving role of GenAI. Here are the key takeaways for cybersecurity professionals.

1. GenAI adoption is accelerating

  • Broad Use Cases: More than 70% of professionals use GenAI for tasks like automating workflows, drafting documents, fraud detection, and data search. IT departments lead the way, with GenAI integrated into key operational processes.
  • Industry Variances: Retail and manufacturing sectors show high adoption, deploying GenAI for customer service, inventory management, and fraud detection. Regulated industries like banking and insurance are more cautious but steadily piloting GenAI solutions.

"Generative AI applications can be very useful for customer support and translating data into text-based information to make it easier for people to understand the data. GenAI is also commonly used to implement AI assistants to answer customer support questions," said Anmol Agarwal, a senior security researcher with a large company in the Dallas-Forth Worth area.

2. Risks are a barrier to adoption

Despite its potential, GenAI adoption is slowed by significant concerns, including:

  • Data Security: 60% of organizations worry about proprietary data being exposed through GenAI. Misuse of sensitive data in training large language models (LLMs) is a top concern.

"Generative AI can output sensitive information if queries include sensitive information as it is trained on the data it is asked. Therefore, when using GenAI tools, it is important to be careful when asking questions," Agarwal said. "Any information in your question could be revealed to another user. For example, if you ask GenAI to create a program for you and give it company data, if another person asks a similar question, GenAI might respond with your company data in an example." 

  • Cyber Threats: AI-powered phishing and deepfakes are cited as major risks by late adopters, while early adopters are more worried about advanced threats like model poisoning and tampering.
  • Legal and Ethical Risks: Organizations fear copyright violations, privacy breaches, and biased AI outputs that could harm customer trust or violate regulations.

"Innovation without trust isn't sustainable. Let's ensure that as we push AI's boundaries, we also strengthen its foundation," said Toronto's Helen Oakley, Founding Partner of the AI Integrity & Safe Use Foundation (AISUF). "Generative AI's greatest potential lies not just in what it can create, but in the trust it can inspire. By implementing trust and transparency principles, we build a legacy of secure and ethical progress."

[RELATED: Embedding Trust as a Strategic Asset in Technical Leadership]

3. Mitigation strategies to manage risks

Organizations are deploying various controls to secure GenAI environments, including:

  • Data Security: Measures include encryption, access controls, monitoring, and data classification. However, less than half conduct regular data audits or document data accountability.
  • Application Security: Vulnerability scanning, API security, and LLM firewalls are becoming standard. These protect against injection attacks and ensure secure integration of GenAI with enterprise systems (a sketch of the firewall pattern follows this list).
  • Privacy: Policies for customer consent, privacy assessments, and training staff to avoid bias are essential. These steps align with the forthcoming Canadian Artificial Intelligence and Data Act (AIDA).
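
To make the "LLM firewall" idea concrete, the sketch below screens prompts on the way in and responses on the way out. It illustrates the pattern only; the marker lists and the model callable are hypothetical stand-ins, and commercial firewalls use far richer detection:

```python
# Screen prompts before they reach the model and responses before they
# reach the user. Marker lists here are toy examples for illustration.
INJECTION_MARKERS = ("ignore previous instructions", "reveal your system prompt")
LEAK_MARKERS = ("BEGIN RSA PRIVATE KEY", "password:")

def guarded_completion(model, prompt: str) -> str:
    if any(m in prompt.lower() for m in INJECTION_MARKERS):
        raise ValueError("Blocked: prompt resembles an injection attempt")
    response = model(prompt)  # the underlying GenAI call
    if any(m.lower() in response.lower() for m in LEAK_MARKERS):
        return "[Response withheld: possible sensitive-data leakage]"
    return response

# Usage with a stand-in model:
print(guarded_completion(lambda p: "All clear.", "Summarize today's alerts"))
```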

"It's also important to train staff that Generative AI can make mistakes and sometimes hallucinate information that is not true; therefore, overreliance on these GenAI tools should be discouraged," Agarwal said, adding a fourth risk to consider:

  • "Human in the Loop: It is important to have a human-in-the-loop when working with Generative AI; that is, a person, or a team of people, should review and verify the results of GenAI in case a  GenAI tool makes a mistake."

"The complexity of AI systems creates fertile ground for security threats to emerge, often unnoticed until the damage is already done," Oakley added. "Agentic AI is not a new concept. However, Generative AI is creating a new paradigm by combining creativity and collaboration with autonomy. With this evolution, it is important that we build AI systems that are both innovative and trustworthy."

4. The future of GenAI in cybersecurity

  • Emerging Opportunities: Generative AI is being used for threat detection and incident response, enhancing scalability and effectiveness. Businesses are optimistic, with nearly half expecting significant advancements in AI within five years.
  • Preparing for Regulations: Most organizations are taking steps to comply with AIDA, but readiness varies by industry. Regulated sectors like banking lead in preparedness, while laggards risk falling behind.

"As multi-agent AI systems become more interconnected and autonomous, they also highlight critical challenges such as ensuring transparency in decision-making and safeguarding against vulnerabilities that could compromise the entire pipeline," Oakley said. "Compromised AI supply chains are a ticking time bomb."

5. Key recommendations

For cybersecurity professionals, the report emphasizes the following actions:

  • Strengthen Governance: Establish steering committees to oversee AI policies and risks.

Dr. Peter Holowka, Director of Educational Technology at West Point Grey Academy in Vancouver, B.C., said he would add the following to the "strengthen governance" section: "Understand the norms in your industry concerning AI use, and also pay attention to the evolving baseline in other industries and society."

  • Adopt Layered Security: Protect data, applications, and user interactions through integrated controls.
  • Prioritize Data Classification: Enhance visibility into sensitive data to mitigate exposure risks (a toy illustration follows this list).
  • Leverage AI Defensively: Use GenAI for proactive threat detection and response to combat increasingly sophisticated cyberattacks. Agarwal noted, "I don't think GenAI is used here. Traditional AI is used for proactive threat detection and response."
  • Enable Responsible Adoption: Focus on building trust by aligning AI deployments with privacy and ethical guidelines.
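
As a toy illustration of the data classification recommendation (the rules and labels are invented; real programs pair such rules with dedicated discovery tooling and documented ownership), records can be tagged by sensitivity so teams see where exposure risk concentrates:

```python
import re

# Toy rules mapping patterns to sensitivity labels, for illustration only.
RULES = [
    ("restricted", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),      # SSN-like IDs
    ("confidential", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),  # email addresses
]

def classify(text: str) -> str:
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "internal"

records = ["Payroll ID 123-45-6789", "Contact ops@example.com", "Team lunch notes"]
print({r: classify(r) for r in records})
# {'Payroll ID 123-45-6789': 'restricted',
#  'Contact ops@example.com': 'confidential',
#  'Team lunch notes': 'internal'}
```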

The 2024 Bell study confirms what organizations and their cybersecurity teams already know: Generative AI is no longer a novelty; it's a transformative tool reshaping cybersecurity and other industries. By adopting robust risk mitigation strategies and aligning with regulatory frameworks, organizations can unlock the full potential of GenAI while protecting their assets and customers. As cybersecurity professionals, it's time to take the lead in shaping a secure and innovative AI-powered future.

"This research is a great example as to how technology is moving fast, and it is our responsibility as cyber experts to understand the technology that is being presented to the world. Data privacy with GenAI is a problem that we (as an industry) are very focused on right now. AI is advertised to automate our jobs and can help ease our day-to-day tasks, but companies should also be discussing the risks that come with these models," said Reanna Schultz, Founder of "This research highlights a small piece of the emerging threat tactics that are being identified with AI. From a cyber defense perspective, this something we should be also focusing on in addition to data privacy. Indicators and malware family behaviors will be evolving to prevent detection from security tools."

Schultz continued, "It’s important for us as professionals to understand research like this to know what trends are happening in our community so we can educate our users to defend for a better tomorrow."

Two experts from Schellman, Danny Manimbo (Principal | ISO Practice Director | AI Assessment Leader) and Kent Blackwell (Director, Penetration Testing Team), recently presented on "How to Build Trustworthy and Secure AI Systems: Key Frameworks & Vulnerabilities You Need to Know" at SecureWorld Seattle. Here's what they had to say:

“With the rise of AI regulations (both internationally and at the state level in the U.S.), there has been an increased stress on the importance of AI governance and risk management programs in promoting compliance and trustworthy AI systems," Manimbo said. "We’ve seen ISO 42001 emerge as it is named in some AI regulations (ex: Colorado AI Act) and also referenced by trend-setting organizations like Microsoft in their Data Protection Requirements (DPR) Supplier Security and Privacy Assurance (SSPA) v10 updates, which include AI requirements and even a mandate of ISO 42001 certification for certain high-risk AI systems.”

“We’ve seen a sharp increase in the number of AI-enabled applications in 2024, but we’ve also seen a few companies trial AI services for inclusion and determine the value-add isn’t there yet," Blackwell said. "It’s a very similar cycle to the advent of cloud computing that we saw 15 years ago. Some companies are jumping in with both feet and others are taking a wait-and-see approach to ensure their AI strategy aligns with their overall business strategy.”

Blackwell added, “The call-out in the study around ‘Divided: Confident but also highly worried about risks (data exposure, copyright violations, bias, model poisoning)’ is what we’ve seen the most at Schellman. Leaders know AI is the future and want to be best enabled to take advantage, but the amount of unknown risks it potentially poses gives them pause about rolling out anything quickly.”
