It’s impossible to deny the profound impact of generative artificial intelligence (GenAI) on our daily lives, especially at work. Adoption is already widespread: McKinsey reports that more than three-quarters (78%) of organizations use AI in at least one business function, such as marketing and sales, customer service and human resources (HR).

At the same time, employees are increasingly turning to publicly available GenAI assistants, often without understanding the risks. A recent survey by TELUS Digital found that more than two-thirds (68%) of enterprise employees access publicly available GenAI assistants at work, such as ChatGPT, Microsoft Copilot or Google Gemini, through personal accounts. Even more concerning, 57% admitted to entering sensitive information into public GenAI assistants, potentially exposing private company information or putting their organization and its stakeholders at risk. This widespread use of public GenAI tools is fueling the rise of “shadow AI”: the use of AI tools by employees without approval or information technology (IT) oversight.
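
Surfacing shadow AI usually starts with visibility. As a purely illustrative sketch, the Python snippet below shows one way an IT team might scan egress proxy logs for traffic to known public GenAI domains that aren’t on an approved list. The log format, the domain lists and the `flag_shadow_ai` helper are all hypothetical assumptions; real deployments would rely on a secure web gateway or similar tooling rather than an ad hoc script.

```python
# Illustrative only: flag outbound requests to public GenAI domains that are
# not on the company's approved list. The log format and domain lists are
# hypothetical assumptions, not a real schema.

from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {"copilot.internal.example.com"}  # assumed approved tool
KNOWN_PUBLIC_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for requests to unapproved GenAI tools.

    Each log line is assumed to look like: "<user> <url>".
    """
    for line in proxy_log_lines:
        try:
            user, url = line.split(maxsplit=1)
        except ValueError:
            continue  # skip malformed lines
        domain = urlparse(url.strip()).netloc.lower()
        if domain in KNOWN_PUBLIC_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

if __name__ == "__main__":
    sample_log = [
        "alice https://chatgpt.com/c/123",
        "bob https://copilot.internal.example.com/chat",
    ]
    for user, domain in flag_shadow_ai(sample_log):
        print(f"Unapproved GenAI use: {user} -> {domain}")
```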

This presents a critical challenge in the era of AI: How can companies set and enforce clear expectations for employees to ensure safe, responsible use of the technology? Some organizations have established strong guardrails and policies, while others have been slow to do so, leaving employees in the dark. This lack of governance creates blind spots that can lead to data leakage, compliance breaches and security threats.

Establishing GenAI Policies

Without well-defined GenAI policies and governance frameworks that establish clear guidelines for ethical use, security, compliance and risk management, employees are left guessing about what they should or shouldn’t be doing with the technology at work. The TELUS Digital survey revealed that 44% of employees said their company either has no AI guidelines in place or that they don’t know whether it does. Given the rapid evolution of GenAI, it’s imperative that companies provide clear expectations, policies and best practices to employees. Comprehensive AI policies should explain how GenAI works and its potential for misinformation due to model hallucinations, highlight the security risks associated with shadow AI, and set out clear best practices for leveraging AI safely and effectively across business functions.

However, having policies on paper isn’t enough. Employees need ongoing education and training to understand acceptable AI usage.

The Role of Training in AI Security

When it comes to AI technology, an organization’s biggest vulnerability is its people. According to Proofpoint’s 2024 “Voice of the CISO” report, 74% of chief information security officers (CISOs) say human error is their organization’s most significant cyber vulnerability. Most employees don’t misuse AI intentionally; they simply have powerful tools at their fingertips and lack guidance on how to use them responsibly. Despite the clear need for AI education, the TELUS Digital survey found that less than a quarter (24%) of employees say their company requires mandatory AI training. A strong AI training program is any organization’s first line of defense.

With new AI tools constantly entering the market, a one-time education session isn’t enough. Organizations need an ongoing strategy that can keep employees AI-literate and security-conscious.

Companies should offer learning and development (L&D) opportunities when, where and how they’re most convenient for team members, whether hybrid, in-person or remote. AI education should be interactive, accessible and adaptable to different learning styles and work environments. To structure AI training for maximum effectiveness, organizations can:

  • Host hands-on workshops where employees can experiment with GenAI under the guidance of their organization’s IT and/or security managers.
  • Offer microlearning modules: bite-sized, easily digestible tutorials that employees can access as needed.
  • Provide real-world case studies that demonstrate the risks and benefits of AI in action.
  • Maintain internal knowledge-sharing resources, frequently asked questions (FAQs) and expert-led sessions to reinforce learning.

By making AI training flexible, relevant and engaging, companies can empower employees to confidently and responsibly integrate AI into their workflows.

The Second Line of Defense: Providing a Secure Enterprise AI Platform

While training is essential, education alone isn’t enough to ensure safe AI use in the workplace. Ideally, companies should provide employees with access to an enterprise-safe AI platform that balances the benefits of GenAI with built-in security and compliance features.

A secure, enterprise-grade AI environment allows employees to confidently use AI tools for everyday tasks such as knowledge searches, summarization, copywriting, image generation, and code development, without risking exposure of sensitive company data. By offering a safe and governed space to engage with AI, organizations can reduce the likelihood that employees will resort to unauthorized, less secure tools that may compromise confidentiality.
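
To make “governed” concrete, here is a minimal sketch of one control such a platform might apply: screening prompts for obviously sensitive values before they leave the network. Everything in it is assumed for illustration; the patterns, the `call_model()` placeholder and the `governed_completion()` wrapper are hypothetical, and production platforms use dedicated data loss prevention (DLP) tooling rather than a handful of regular expressions.

```python
# Illustrative sketch of one control an enterprise AI gateway might apply:
# redact obviously sensitive values from a prompt before it leaves the network.
# The patterns and the call_model() stub are hypothetical placeholders.

import re

# Minimal example patterns; real platforms use dedicated DLP tooling.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with labeled placeholders; report what was hit."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt, hits

def call_model(prompt: str) -> str:
    """Placeholder for a call to an approved, company-hosted model endpoint."""
    return f"(model response to: {prompt!r})"

def governed_completion(prompt: str) -> str:
    """Screen the prompt, then forward only the redacted version to the model."""
    safe_prompt, hits = redact(prompt)
    if hits:
        # A real gateway would also log this event for compliance review.
        print(f"Redacted before sending: {', '.join(hits)}")
    return call_model(safe_prompt)

if __name__ == "__main__":
    print(governed_completion(
        "Summarize the contract for jane.doe@example.com, card 4111 1111 1111 1111."
    ))
```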

Companies that lack the resources to build their own AI infrastructure should consider working with an expert partner that can implement a secure, enterprise-grade AI platform with built-in compliance, governance and data protection features. This ensures businesses can confidently adopt AI while maintaining security and operational integrity.

A Holistic Approach to AI Training

While some roles may be better positioned to leverage GenAI for task efficiency and workflow optimization in the near term, GenAI security training and best practices must be embedded across the entire organization. To maximize effectiveness, training content can be tailored to different teams or roles: IT staff may require deeper security training, for example, while marketing teams may need training on the ethical use of AI in customer data management.

AI security in any company isn’t a one-and-done initiative; it’s an ongoing commitment that requires continuous education, secure tools, and a proactive approach. Organizations that invest in both robust training and secure AI solutions will be best positioned to harness GenAI’s potential while mitigating risks.