Responsible AI

Shaping Global AI Literacy: Credo AI Participates at the EU AI Office Workshop

Credo AI participated in the first AI literacy workshop hosted by the EU AI Office in 2024.

January 13, 2025
Author(s)
Vassilis Rovilos
Evi Fuelle
Contributor(s)

On 11 December 2024, Credo AI participated in the first AI literacy workshop hosted by the EU AI Office, under the auspices of the AI Pact. We offered insights from our work with leading enterprise AI tools and emphasized the importance of implementing tailored, role-specific literacy programs which:

  • start with an assessment of current AI literacy levels to identify gaps;
  • blend foundational AI concepts with sector-specific applications to ensure relevance; and,
  • include ongoing learning opportunities to keep pace with rapid advancements in AI technologies and regulations.

The EU AI Act’s “AI literacy” requirement for enterprises takes effect on February 2, 2025. By that date, organizations, including providers and deployers of AI systems, must develop and implement adequate AI literacy measures for their personnel (e.g., through training or reporting mechanisms).

  • Credo AI Governance Advisory helps enterprises build AI governance capability and establishes the frameworks needed to adopt AI at scale. Our tailored approach empowers your enterprise to build AI literacy and operationalize AI governance with confidence, providing the clear, structured foundation you need to scale AI responsibly and effectively. Learn more.

Driving Trust in AI Adoption

AI literacy plays a crucial role in fostering trust in AI systems by establishing a foundation of transparency, fairness, and responsible use. A solid understanding of AI's capabilities and limitations is essential for informed decision-making and ensuring that AI technologies are aligned with societal values. Without widespread AI literacy, there is a significant risk of these systems being misunderstood, misused, or mistrusted, ultimately limiting their potential to contribute positively to society.

AI literacy is a cornerstone of deploying AI systems responsibly and equitably across society and organizations. Recent discussions around the EU AI Act, in particular the scope of Article 3(56) and Article 4, highlight its growing importance for businesses and consumers alike. AI literacy is a multidimensional competency that combines technical knowledge with an understanding of societal, ethical, and legal implications.

How is “AI Literacy” defined in the EU AI Act?

According to Article 3(56) of the EU AI Act, AI literacy refers to “the skills, knowledge, and understanding that enable stakeholders—including providers, deployers, and individuals impacted by AI systems—to make informed decisions.” AI literacy goes beyond grasping technical concepts to include awareness of the broader societal opportunities and risks AI brings. This dual focus on technical and sociotechnical aspects makes AI literacy critical for navigating the modern AI landscape responsibly.

For organizations, AI literacy provides the foundation for compliance with AI regulations, such as the AI Act, while also ensuring that AI systems align with ethical standards and societal expectations. For individuals, it serves as a means of empowerment: enabling them to protect their rights, exercise democratic control, and make informed decisions when interacting with AI systems, while also increasing both their trust in these systems and their awareness of what the systems can and cannot do.

The Complexity of Achieving AI Literacy

AI literacy is inherently complex and context-dependent. Different stakeholders, roles, and industries require varying levels of AI knowledge. For instance, a healthcare professional using AI-assisted diagnostics requires knowledge specific to medical AI applications. In contrast, a finance sector employee utilizing AI-driven fraud detection needs literacy tailored to financial systems. Similarly, a developer designing AI systems will require deep technical expertise that differs significantly from the understanding needed by regulators (for enforcement purposes) or deployers.

This context-driven diversity means there is no one-size-fits-all approach to AI literacy. While Article 4 of the EU AI Act mandates that stakeholders attain a “sufficient level” of literacy, this sufficiency depends on the specific roles, responsibilities, and AI systems in question.

AI Pact Workshop as a forum for sharing tailored, role-specific literacy programs

On 11 December 2024, Credo AI participated in person in the first AI literacy workshop in Brussels, organised by the AI Office under the auspices of the AI Pact. Addressing these diverse requirements, Credo AI emphasized the importance of implementing tailored, role-specific literacy programs that start with an assessment of current AI literacy levels to identify gaps. Credo AI also shared details about our AI Academy program, which builds AI literacy with Global 2000 enterprises by blending foundational AI concepts with sector-specific applications and providing ongoing learning opportunities to keep pace with rapid advancements in AI technologies and regulations.

Workshop participants broadly agreed that effective AI literacy training should combine theoretical knowledge with practical applications, such as real-world case studies, interactive exercises, and gamified learning tools. This approach helps stakeholders develop both the knowledge and the critical thinking skills needed to engage responsibly with AI systems.

Governance and Monitoring

AI literacy is not a one-time effort but an ongoing process that must be supported by broader governance strategies. Discussions highlighted the importance of establishing monitoring mechanisms to track participation in training programs, measure their effectiveness in improving AI literacy levels, and align literacy initiatives with organizational goals and regulatory expectations. Comprehensive documentation of training programs, assessments, and outcomes is also essential. This not only ensures compliance readiness but also prepares organizations for audits by regulatory authorities.

AI literacy is not just an internal organizational responsibility; it also extends to society as a whole. Individuals who interact with or are affected by AI systems—including consumers and vulnerable populations—must also have the tools to understand and navigate AI's impacts. Recital 20 of the EU AI Act calls on the European Commission and Member States to promote voluntary codes of conduct and tools to advance AI literacy. Collaborations between regulators, industry, academia, and civil society were recognized as key to achieving this goal. Such partnerships can create inclusive and accessible literacy initiatives that address the diverse needs of various groups.

Next Steps and Practical Guidance

Looking ahead, the European Commission’s AI Office identified the following milestones as crucial for advancing AI literacy. On 2 February 2025, Article 4 of the AI Act becomes applicable, with national Market Surveillance Authorities beginning enforcement by 2 August 2025. The AI Office has stressed the importance of a flexible, context-specific approach to achieving AI literacy: organizations must consider their roles as providers, deployers, and users of AI systems, and ensure that compliance is proportionate to the risk classification of their AI systems. The AI Office, alongside the AI Board and the AI Pact community (to which Credo AI is a co-signatory), will begin drafting voluntary guidance on AI literacy by Q3/Q4 2025. The AI Office also shared that a public webinar, anticipated in February 2025, will further elaborate on these efforts.

The AI Pact discussions underscored that AI literacy is a dynamic and multifaceted effort critical for the responsible deployment and use of AI. It is not just about understanding the technology but also about addressing its broader societal, ethical, and regulatory impacts. By fostering AI literacy, organizations and individuals can ensure compliance with evolving AI regulations, build trust in AI systems, and maximize AI's benefits while minimizing its risks. Through tailored training programs, effective governance, and broader societal engagement, AI literacy can become the foundation for an innovative, equitable, and responsible AI ecosystem—one that benefits everyone. As AI continues to evolve, fostering literacy will ensure that it remains a force for innovation and positive societal change.

Start your AI literacy journey today with Credo AI’s Governance Advisory, which helps your enterprise build AI literacy and operationalize AI governance with confidence. Get started!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.