The debate about AI safety took center stage globally in 2023 with the advent of generative AI, and in the European Union it has culminated in the thorough and sensible EU AI Act, which received its final approval on March 13, 2024. This marks a turning point for AI and heralds a new era for technology, one in which powerful and high-risk AI systems are expected to have guardrails by design.
At Credo AI, we believe that AI is the ultimate competitive advantage for modern enterprises. At the same time, AI without guardrails can backfire — whether that means shutting down expensive facial recognition systems or dealing with out-of-control LLMs that send stock prices into a tailspin.
Public AI pitfalls have rapidly eroded trust in AI, with 52% of Americans believing AI is “not safe or secure.” So how do you establish trust and safety in AI?
What we’ve learned at Credo AI from years of research is that there is no single box to check. AI adoption at enterprises requires constant oversight, usually by a directly responsible individual with executive support. Ensuring an AI system is safe for organizational use — whether built in-house, bought, or procured — requires AI-specific Governance, Risk, and Compliance (GRC) workflows.
Join us as our CEO and Founder Navrina Singh meets with Member of European Parliament Dragos Tudorache about how and why trustworthy AI by design is critical for global economic and societal safety and security.
Attend this webinar to learn:
• An overview of the historic EU AI Act
• What a risk-based approach is, and what it means for enterprises globally
• The current state of AI safety, and how regulation can help bring trust to the ecosystem and to enterprises