The European Union (EU) AI Act continues to take effect according to the timeline prescribed in the Regulation, and the first requirements for enterprises enter into force on 2 February 2025: the AI literacy requirements (Article 4) and the ban on prohibited AI practices (Article 5). Enforcement by national market surveillance authorities, which are responsible for adequate and timely action against AI systems presenting a risk to health, safety, and fundamental rights, follows from 2 August 2025 onwards.
Enterprises must be prepared to meet the AI literacy requirements and identify which use cases are prohibited so they can prevent their deployment and avoid non-compliance penalties. In this blog post, we share insights into how to meet these EU AI Act requirements with the Credo AI Platform and our AI Advisory offering, a proactive, layered approach to navigating and complying with the complexities of this regulatory framework. Importantly, the European Commission has lined up further practical guidance on complying with both of the above-mentioned requirements, anticipated to be published shortly, which will be instrumental in clarifying significant open questions about how to operationalise them.
AI Literacy as a Cornerstone for Responsible AI Governance
AI literacy is more than technical expertise—it encompasses an understanding of the societal, ethical, and regulatory implications of AI technologies. The EU AI Act defines AI literacy as the combination of skills and knowledge necessary for stakeholders to engage responsibly with AI systems. This multidimensional competency ensures that AI systems align with societal values and regulatory standards, reducing the risks of misuse and mistrust. With this in mind, the EU AI Office has stressed the importance of a flexible, context-specific approach to achieving AI literacy. Organizations must consider their roles as providers, deployers, and users of AI systems, and ensure that compliance efforts are proportionate to the risk classification of their AI systems.
Credo AI recognizes the complexity of achieving this level of literacy. Different industries, roles, and organizational maturity levels require varied approaches to learning. A healthcare professional using AI diagnostics has vastly different literacy needs than a finance-sector employee leveraging AI for fraud detection or a policymaker designing AI regulations. Our solutions address these challenges with flexible, role-specific programs and tools rather than a one-size-fits-all approach.
Prohibited Practices under the microscope
The EU AI Act prohibits certain AI systems deemed to pose unacceptable risks to fundamental rights, health, and safety. These bans apply universally to all operators, regardless of their role (developer, provider, deployer), and carry strict penalties for non-compliance of up to €35 million or 7% of global annual turnover, whichever is higher.
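For a sense of scale, the "whichever is higher" fine ceiling can be sketched as simple arithmetic; the function name below is illustrative, not part of any official calculation method, and actual fines are set case by case by the competent authorities:

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative upper bound of an Article 5 fine: the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A firm with EUR 1 billion in turnover faces a ceiling of EUR 70 million,
# while a firm with EUR 100 million in turnover still faces the EUR 35 million floor.
print(f"{max_penalty_eur(1_000_000_000):,.0f}")  # 70,000,000
print(f"{max_penalty_eur(100_000_000):,.0f}")    # 35,000,000
```

The fixed €35 million figure dominates for smaller firms; the 7% turnover term takes over once worldwide turnover exceeds €500 million.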
Among them are manipulative and deceptive AI systems designed to subliminally influence behavior, such as emotionally manipulative content recommendations or hidden persuasion techniques in advertising. AI systems that exploit vulnerable individuals, including those targeting people based on age, disabilities, or financial hardship, are also banned due to their potential for harm.
Social scoring systems, which assess individuals based on their behavior or personal traits and lead to discriminatory or unfair treatment in unrelated contexts, are strictly prohibited. Similarly, predictive policing AI based solely on personality assessments or profiling, without objective evidence, is banned to prevent unjustified targeting and potential biases in law enforcement.
The Act also prohibits AI systems that scrape and collect facial images from online or CCTV sources to build facial recognition databases, as this practice raises significant privacy concerns. Emotion recognition AI in workplaces and educational settings is banned due to concerns about scientific validity, bias, and the potential for misuse.
Additionally, biometric categorization AI that classifies individuals based on sensitive attributes such as race, political beliefs, or sexual orientation is prohibited to prevent discrimination. Finally, the use of real-time remote biometric identification, such as live facial recognition by law enforcement in publicly accessible spaces, is prohibited save for narrowly defined exceptions, due to the risks of mass surveillance and wrongful identification. These prohibitions reflect the EU's commitment to ensuring AI is developed and deployed in a manner that upholds privacy, fairness, and human dignity.
During a four-week public consultation that closed on 11 December 2024, the European Commission took stock of stakeholders' main concerns and feedback. The contributions will feed into the Commission's guidelines on the definition of an AI system and on prohibited AI practices under the AI Act, to be published in early 2025.
Notably, to ensure market surveillance and control of AI systems in the EU market, national market surveillance authorities must report annually to the European Commission on the use of prohibited practices that occurred during that year and on the measures taken.
Credo AI’s Governance Platform: Empowering Compliance
Our Responsible AI Governance Platform is specifically designed to streamline the process of implementing and scaling responsible AI practices. From assessing risks to creating transparency artifacts, the platform reduces the governance burden on technical teams and enables seamless compliance with global regulations, including the EU AI Act.
Credo AI enables organizations to determine which EU AI Act requirements apply to each use case via a tailored EU AI Act intake questionnaire. Through this questionnaire, organizations can quickly identify whether a use case falls under the prohibited practices, what type of entity they are considered for that particular use case, and whether the use case is classified as high-risk. Based on the answers, Credo AI then recommends EU AI Act Policy Packs—comprehensive collections of policy requirements that give organizations clear guidance on which requirements they need to meet and how to meet them. We’ve developed five Policy Packs tailored to different contexts: high-risk providers, high-risk deployers, technical documentation, transparency requirements, and general-purpose AI models. The AI literacy requirement is a key element within each relevant Policy Pack, requiring organizations to provide evidence as part of their governance plan to demonstrate compliance.
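The intake flow above can be sketched as a simple decision structure. Everything here is illustrative: the type names, fields, and pack labels are assumptions for exposition, not the Credo AI platform's actual data model or API.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

@dataclass
class IntakeResult:
    """Hypothetical summary of one use case's questionnaire answers."""
    use_case: str
    role: Role
    prohibited: bool   # falls under an Article 5 prohibited practice
    high_risk: bool    # classified high-risk (Article 6 / Annex III)
    gpai: bool         # concerns a general-purpose AI model

def recommend_policy_packs(r: IntakeResult) -> list[str]:
    """Map questionnaire answers to illustrative policy-pack names."""
    if r.prohibited:
        # Prohibited use cases must be blocked outright, not governed.
        return ["BLOCK: prohibited practice (Article 5)"]
    packs = []
    if r.high_risk:
        packs.append(f"High-Risk {r.role.value.title()} Pack")
        packs.append("Technical Documentation Pack")
    if r.gpai:
        packs.append("General-Purpose AI Model Pack")
    packs.append("Transparency Requirements Pack")
    return packs
```

The key design point is the order of checks: the prohibited-practice test short-circuits everything else, because no amount of documentation or transparency work can make a banned use case compliant.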
Tailored, Role-Specific AI Literacy Programs
Our AI Governance Academy is at the heart of our AI literacy initiatives. It provides targeted training modules for diverse stakeholders:
- For Non-Technical Roles: Foundational modules offer insights into the ethical, legal, and societal dimensions of AI, empowering policymakers, legal teams, and business leaders to make informed decisions.
- For Technical Teams: Advanced modules provide deep dives into AI risk management, governance workflows, and regulatory compliance.
- For Cross-Functional Teams: Interactive tools and scenario-based exercises bridge knowledge gaps, fostering collaboration across departments.
Moreover, inclusivity is a core principle of our programs. By addressing diverse learning needs and leveraging flexible formats—from self-paced digital modules to in-person workshops—we ensure accessibility for participants regardless of their technical expertise or background.
Compliance and Beyond: Turning Governance into Competitive Advantage
Meeting the requirements of the EU AI Act is just the beginning. Credo AI helps organizations leverage compliance as a springboard for innovation and trust-building. Our advisory services, offered in modular formats, guide organizations through every stage of their governance journey, from initial assessments to full operationalization. This includes:
- Custom AI Governance Roadmaps: Tailored plans that align governance practices with business objectives and regulatory demands.
- Enterprise-Level Tools: Features like an AI registry and vendor registry centralize governance activities, enabling consistent oversight across global operations.
- Continuous Learning: Regular updates to our literacy programs ensure organizations stay ahead of technological advancements and evolving regulations.
Looking Ahead: Building a Responsible AI Ecosystem through Partnerships and Leadership
Credo AI’s commitment to responsible AI governance extends beyond compliance. Feedback loops and real-time insights help us refine our offerings so they remain effective and relevant. The European Commission and the AI Pact community emphasize the need for voluntary codes and collaborative efforts to advance AI literacy, and through collaborations with regulators, academia, and civil society, we are helping shape the future of AI literacy. As a co-signatory of the AI Pact, we are dedicated to its core commitments, including advancing AI governance implementation, identifying high-risk systems, and promoting ethical, responsible AI through staff literacy. Our participation in the first AI literacy workshop hosted by the EU AI Office further underscores our leadership in driving global efforts toward ethical and responsible AI governance, and Credo AI remains actively involved in international standardization efforts to ensure alignment with evolving best practices and compliance frameworks.

AI literacy is the cornerstone of responsible AI adoption, fostering trust, accountability, and innovation. As organizations prepare for the transformative changes ushered in by the EU AI Act, Credo AI is your trusted partner in navigating these complexities. By combining cutting-edge technology, tailored advisory services, and a commitment to inclusivity, we enable organizations to meet regulatory demands, manage AI risks, and scale governance practices with confidence. Together, we can ensure that AI serves as a force for positive societal change while driving sustainable business growth. At Credo AI, we are dedicated to helping organizations not only meet these requirements but also operationalize responsible AI governance at scale, fostering innovation while ensuring compliance.
Compliance Milestones for AI Operators in 2025
- 2 May 2025 – Finalization of Codes of Practice for General-Purpose AI (GPAI) Models
- By this date, the Codes of Practice for GPAI models should be finalized and ready for adoption.
Important Note: If the Code of Practice cannot be finalized or is deemed inadequate by the AI Office, the European Commission may introduce common rules for GPAI providers via implementing acts to ensure effective compliance.
- 2 August 2025 – Compliance Requirements Begin for GPAI Models
- GPAI models, including those that pose systemic risks, must start complying with obligations under the EU AI Act.
Important Note: Providers of GPAI models that were already placed on the market or put into service before this date must ensure full compliance by 2 August 2027.
- Late 2025 – Publication of European Harmonized Standards
- The European Standardization Organizations CEN/CENELEC are expected to issue European Harmonized Standards in response to the European Commission’s request. These standards will serve as a reference for AI providers seeking conformity with the AI Act.
Start your AI Compliance Journey today with Credo AI’s Governance Advisory, which helps your enterprise build AI literacy and operationalize responsible AI governance with confidence - get started!
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.