Certified DORA Lead Manager - Digital Operational Resilience Act (Instructor-Led Online)

Is this a Certification Course? Yes, this is a certification course. Certification and examination fees are included in the price of the training course.

Delivery Model: Instructor-Led Online

Exam Duration: 3 hours

Retake Exam: You can retake the exam once within one year

Request more information


Looking for a self-study course? Click Here.

Price: US$ 1950 / CAD$ 2600

Enroll Now

 

The PECB Certified DORA Lead Manager training course equips you with the skills needed to lead and oversee the implementation of digital operational resilience strategies within financial entities, helping them ensure compliance with the European Union’s Digital Operational Resilience Act (DORA).


Why should you attend?


With DORA applicable since January 17, 2025, there has never been a more crucial time to thoroughly grasp its implications and requirements. Attending the PECB Certified DORA Lead Manager training course offers a unique opportunity to engage with industry experts and peers, fostering valuable discussions and insights into best practices for digital operational resilience. Through interactive sessions and practical exercises, you will gain real-world perspectives on implementing effective strategies to mitigate ICT risks and enhance digital operational resilience in financial institutions.


Additionally, attending this course demonstrates your commitment to professional development and positions you as a competent leader in the evolving landscape of digital operational resilience. Upon successfully completing the training course and exam, you can apply for the “PECB Certified DORA Lead Manager” credential. 


Who should attend?


This training course is intended for:


  • Financial institution executives and decision-makers
  • Compliance officers and risk managers
  • IT professionals
  • Legal and regulatory affairs personnel
  • Consultants and advisors specializing in financial regulation and cybersecurity

Learning objectives


After completing this training course, you will be able to:


  • Understand the regulatory landscape and compliance requirements outlined in DORA, focusing on key pillars such as ICT risk management, ICT-related incident management and reporting, digital operational resilience testing, and ICT third-party risk management
  • Implement effective strategies and measures to enhance digital operational resilience and mitigate ICT risks within financial institutions, aligning with DORA requirements and industry best practices
  • Identify, analyze, evaluate, and treat ICT risks relevant to financial entities
  • Develop and maintain robust ICT risk management frameworks, incident response plans, and business continuity and disaster recovery plans
  • Foster collaboration and communication with key stakeholders to ensure successful implementation and ongoing compliance with DORA
  • Utilize industry-standard tools and methodologies for monitoring, assessing, and managing ICT risks and vulnerabilities, enhancing the overall security posture of financial institutions

Educational approach


  • The training course incorporates interactive elements, such as essay-type exercises and multiple-choice quizzes, some of which are scenario-based. 
  • Participants are strongly encouraged to communicate and engage in discussions.
  • The quizzes are designed in a manner that closely resembles the format of the certification exam.

Prerequisites 


The main requirement for participating in this training course is a fundamental understanding of information security and cybersecurity concepts and familiarity with ICT risk management principles.




Course Content


Day 1: Introduction to the concepts and requirements of DORA


Day 2: ICT-related risk and incident management


Day 3: ICT third-party risk management and information sharing


Day 4: Review and continual improvement


Day 5: Certification exam


Examination


The “PECB Certified DORA Lead Manager” exam meets the PECB Examination and Certification Program (ECP) requirements, and it covers the following competency domains:


Domain 1: Fundamental concepts of ICT risk management and digital operational resilience  


Domain 2: Preparing and planning for DORA project implementation 


Domain 3: ICT risk and ICT-related incident management 


Domain 4: Digital operational resilience testing and ICT third-party risk management 


Domain 5: Review and continual improvement


Certification

After successfully passing the exam, you can apply for the “PECB Certified DORA Lead Manager” credential. You will receive the certificate once you comply with all the requirements related to this credential.


The ICT risk management activities should follow best practices and include the following:


  • Drafting a DORA implementation business case
  • Managing a DORA implementation project
  • Implementing an ICT risk management framework
  • Managing documented information
  • Implementing corrective actions
  • Monitoring and improving the performance of the ICT risk management framework

General Information

  • Certification and examination fees are included in the price of the training course
  • Participants will receive the training course material containing over 450 pages of explanatory information, examples, best practices, exercises, and quizzes. 
  • An attestation of course completion worth 31 CPD (Continuing Professional Development) credits will be issued to participants who attend the training course.
  • If candidates fail the exam, they can retake it within 12 months following the initial attempt for free.

 


Price: US$ 1950 / CAD$ 2600

Download the Brochure
Enroll Now
