Rise of ‘shadow AI’ presents dilemmas for tech leaders

As generative AI becomes more accessible to the public, some employees are using unsanctioned tools without telling their managers. The ripple effects of this “shadow AI” include higher productivity and possibly more favorable performance reviews for employees, dilemmas for their managers and security headaches for tech staff. Technology leaders at banks and other companies must decide whether to close their eyes to this rogue use of artificial intelligence, police it with lockdowns or forced disclosure, or develop other approaches.

Unsanctioned use can be a signal that employees find the tools useful and easy to fold into their workflows. But organizations still need clear guidelines and policies governing those tools to protect data security and compliance, and open communication among employees, managers and the IT team is key to keeping the work environment both secure and productive.

Productivity Benefits of Generative AI

Complicating the decision is the fact that generative AI is delivering real productivity gains. A study published by the Federal Reserve Bank of St. Louis in February found that, across industries, 28% of workers use generative AI at work. Among those who had used it in the past week, all reported that the technology saved them at least some time, with the amount saved rising with how often they used it.

A separate study found that managers evaluated content produced with the assistance of ChatGPT more favorably than work done without it, as long as they didn’t know generative AI was involved. The finding underscores how much perceptions, and transparency about AI use, shape how such work is judged in the workplace.

Challenges for Banks and Companies

Banks face particular difficulty managing employees’ use of generative AI. Despite efforts to restrict access, employees are finding ways to use the tools for tasks such as document retrieval, email drafting and call summarization, which creates operational challenges and heightens risks around data protection and information security.

Younger workers tend to be more comfortable with generative AI, and its use is becoming increasingly common across teams at banks and other companies. That trend raises questions about governance, risk management and the ethical deployment of AI in the workplace.

Best Practices for Banks

Banks such as Zions Bank have put guardrails in place to manage employees’ use of generative AI. By providing gated access to approved AI tools, setting up security controls and conducting risk assessments, banks can build a framework for responsible AI use.

Clear policies, employee training, and governance documents are essential components of a successful AI strategy within banks. Encouraging responsible AI use through training programs and hackathons can help foster a culture of innovation while mitigating risks associated with shadow AI.

Recommendations for Firms

Experts recommend that firms establish clear policies regarding AI use, make disclosure of AI use mandatory, and create incentive systems to ensure fair recognition of employees’ efforts. Transparency, fair recognition, and well-balanced incentives are crucial for the successful integration of AI tools like ChatGPT in the workplace.

Ultimately, the adoption of generative AI tools presents both opportunities and challenges for banks and other companies. By embracing the technology thoughtfully and implementing robust governance and risk management practices, organizations can capture the benefits of AI while minimizing the risks that come with shadow AI.
