On 26 March 2025, K2 Integrity hosted a webinar discussing generative AI, its associated risks, governance strategies, and the future trajectory of AI adoption. The webinar featured Jason Straight, senior managing director and leader of the Cyber Resilience and Digital Investigations practice at K2 Integrity; Maggie Rose, vice president of client solutions at K2 Integrity; Olivia Makara, director in the Cyber Resilience practice at K2 Integrity; and Brian Finch, partner at Pillsbury Winthrop Shaw Pittman. View a recording of the session here.
This webinar was the first in a series from K2 Integrity focusing on the generative AI landscape and the importance of proactive governance.
Key Terms
- Artificial Intelligence (AI): Computer systems capable of performing tasks that typically require human intelligence.
- Generative Artificial Intelligence (GenAI): A type of AI system built to create new content such as text, images, and audio.
- Machine Learning (ML): A computer science field focused on training systems to learn from data and improve performance without explicit programming.
- Large Language Model (LLM): A type of ML model, trained on large volumes of text, that is specifically designed for natural language processing (NLP) tasks.
- Shadow AI: AI tools or systems used by employees without organizational approval or oversight.
Business Impacts, Risks, and Governance
As GenAI continues to evolve, organizations must navigate the complexities of GenAI governance, security, and ethical considerations. While some industries will be affected by GenAI less than others, its adoption is widely viewed as a necessity for businesses to remain competitive. Organizations are leveraging GenAI to enhance products and services, pursue new markets, increase productivity, and improve communication and collaboration. They are also using it to personalize the customer experience, optimize business operations, and improve efficiency. It is important to note that GenAI itself is not a competitive advantage; rather, the ability to implement GenAI securely and effectively is what will differentiate successful businesses.
While GenAI offers transformative opportunities, it also introduces significant risks, including:
- Data Privacy and Leakage: Organizations risk proprietary or sensitive information being exposed through GenAI interactions, potentially leading to compliance violations and reputational harm. Technological controls to detect and prevent model exploitation and diligent data classification are critical mitigations.
- Cybersecurity Threats: GenAI can be exploited by cyber criminals to identify system vulnerabilities, automate attacks, craft novel malware, create deepfakes, conduct social engineering campaigns, and manipulate information, increasing the complexity of cybersecurity defense. In addition, the deployment of GenAI expands an organization’s attack surface, giving adversaries another angle of approach. AI model risk assessments and adversarial testing are necessary to identify and evaluate security controls for the evolving threat landscape.
- AI Inaccuracies: GenAI models can produce inaccurate or misleading results, posing reputational and legal risks. Evaluation and monitoring of model outputs help organizations identify and respond to unexpected model responses.
- Regulatory Uncertainty: The lack of consistent GenAI regulations across jurisdictions creates compliance challenges, making proactive governance essential.
- Shadow AI: Employees may use unauthorized GenAI tools within organizations, potentially exposing company data and increasing security vulnerabilities. Technological controls to monitor and restrict access to AI tools, clear usage policies, and employee training on AI risks and best practices can help to mitigate risk.
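One lightweight starting point for inventorying shadow AI is reviewing web proxy or DNS logs for traffic to known GenAI services. The sketch below illustrates the idea, assuming log records have already been parsed into simple user/domain pairs; the domain list is illustrative only, and a real program would use a maintained, far larger blocklist.

```python
# Sketch: flag possible shadow AI usage from simplified web proxy logs.
# GENAI_DOMAINS is an illustrative, non-exhaustive sample of well-known
# GenAI service endpoints; real monitoring needs a maintained list.
GENAI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_shadow_ai(log_entries):
    """Return log entries whose domain matches a known GenAI service.

    Each entry is a dict like {"user": ..., "domain": ...}, a
    simplified stand-in for a real proxy log record.
    """
    return [entry for entry in log_entries
            if entry["domain"].lower() in GENAI_DOMAINS]

logs = [
    {"user": "alice", "domain": "chatgpt.com"},
    {"user": "bob", "domain": "example.com"},
]
flagged = flag_shadow_ai(logs)
print(flagged)  # only alice's GenAI visit is flagged
```

A finding like this is a conversation starter, not a verdict: flagged usage should feed into the inventory, policy, and training steps described above rather than immediate blocking.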
Governance frameworks provide organizations with structured approaches to managing GenAI risks while maximizing its benefits. One such framework, NIST AI 600-1, the Artificial Intelligence Risk Management Framework: Generative AI Profile, outlines risks either exacerbated or introduced by GenAI that organizations should consider when identifying and prioritizing risks and controls. The full list of risks is linked in the Resources section below.
Other frameworks include ISO/IEC 42001, the US Federal Reserve's SR 11-7 guidance on model risk management, the European Union AI Act, and the OWASP LLM Applications Cybersecurity and Governance Checklist. (Links to these frameworks can be found under “Resources” below.) In the United States, it is likely that federal oversight of AI will remain limited under the current administration, although some states may choose to enact their own governance models. Organizations are encouraged to proactively develop AI governance policies that align with existing security and compliance regulations, ensuring AI deployments remain secure, ethical, and legally compliant.
Lessons Learned
- Embrace GenAI—your employees already have!
- Identify key stakeholders and build consensus around your GenAI strategy.
- Inventory Shadow AI.
- Classify sensitive data.
- Articulate desired results and outcomes, then measure success.
- Carefully define your use cases—start with low-risk, high-impact applications.
- Consider impact on workforce—be transparent about job redefinitions.
- Train your workforce—and keep training.
- Monitor inputs AND outputs.
- Conduct adversarial testing for security vulnerabilities, hallucinations, and model drift.
- Be transparent about your use of GenAI in creating deliverables.
- Consider business benefits of GenAI relative to the investment.
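As a concrete illustration of “monitor inputs AND outputs,” the sketch below shows a minimal regex-based filter that flags common sensitive-data patterns (email addresses and US Social Security numbers) in text sent to or returned from a GenAI tool. The pattern names and coverage are illustrative assumptions; a production deployment would rely on a dedicated data loss prevention (DLP) solution and proper data classification.

```python
import re

# Illustrative patterns only — real deployments need far broader
# coverage (API keys, account numbers, client identifiers, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(text):
    """Return the names of sensitive-data patterns found in `text`.

    The same check can run on prompts before submission (inputs)
    and on model responses before display (outputs).
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this note from jane.doe@example.com, SSN 123-45-6789."
findings = scan_text(prompt)
if findings:
    print("Blocked prompt; detected:", findings)
```

Running the same scan on both directions of traffic is what makes the control useful: it catches sensitive data leaving the organization in prompts and surfacing unexpectedly in model outputs.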
Resources
Risk and Governance Frameworks
- NIST AI 600-1: https://doi.org/10.6028/NIST.AI.600-1
- ISO 42001: ISO/IEC 42001:2023 – AI management systems
- OWASP Security and Governance Checklist: LLM Applications Cybersecurity and Governance Checklist v1.1 – English – OWASP Top 10 for LLM & Generative AI Security
Security Frameworks
- Google SAIF: Secure AI Framework – SAIF
- IBM Framework for Securing Generative AI: Introducing the IBM Framework for Securing Generative AI | IBM
Background Information
- HiddenLayer 2025 AI Threat Landscape Report: HiddenLayer’s 2025 AI Threat Landscape Report
Conclusion
Generative AI presents both opportunities and risks for businesses. By proactively implementing AI governance frameworks, monitoring security threats, and fostering responsible AI adoption, organizations can harness GenAI’s full potential while mitigating associated risks.