February 16, 2026

Twenty-five years ago, Jay Bavisi founded EC-Council in the aftermath of 9/11 with a simple premise: if attackers understand systems deeply, defenders need to understand them just as well. That idea led to Certified Ethical Hacker (CEH), which went on to become one of the most widely recognized credentials in cybersecurity.

Bavisi thinks we’re at a similar inflection point again, this time with AI.

The technology is moving fast. The workforce isn’t. And just like the early days of software development, most of the attention is on what AI can do, not on how to deploy it safely, responsibly, or at scale.

“We’re back in that era where building something feels cool,” Bavisi told me. “In the early days of web development, security and governance were afterthoughts. We’re doing the same thing again with AI: functionality first, use cases first, and only later asking what the risks are.”

That’s the gap EC-Council is trying to address with the largest expansion of its portfolio in 25 years: four new AI certifications and a revamped Certified CISO program.

The Talent Gap Isn’t Hypothetical

The data behind this push isn’t subtle. IDC estimates unmanaged AI risk could reach $5.5 trillion globally. Bain projects a 700,000-person AI and cybersecurity reskilling gap in the U.S. alone. The IMF and World Economic Forum have both landed on the same conclusion: access to technology isn’t the constraint; people are.

I’ve spent the last couple of years talking with executives about AI, and the tone has shifted. Early on, nearly everyone insisted AI wasn’t going to replace jobs. It became almost ritualistic. Understandable, sure, but not entirely honest.

Lately, the messaging has changed. Some roles will disappear. That’s not controversial anymore. The more accurate framing has always been: AI probably won’t take your job, but someone who knows how to use AI better than you might. That’s the real risk, and the real opportunity.

What EC-Council Is Actually Launching

The new certifications are built around a framework EC-Council calls ADG: Adopt, Defend, Govern. It’s meant to give organizations a way to think about AI deliberately, rather than defaulting to “just buy a subscription and see what happens.”

“It’s not just about picking Claude or Gemini or GPT,” Bavisi said. “Your data, your customer information, your business processes all get pulled in. You need guardrails.”

The four certifications are role-specific:

  • AI Essentials (AIE) is baseline AI fluency: practical, not theoretical.
  • Certified AI Program Manager (C|AIPM) focuses on implementing AI programs with accountability and risk management.
  • Certified Responsible AI Governance & Ethics Professional (C|RAGE) targets governance gaps, aligning with frameworks like the NIST AI RMF and ISO/IEC 42001.
  • Certified Offensive AI Security Professional (COASP) teaches practitioners how to attack LLM systems so they understand how to defend them.

That last one feels especially on-brand. It’s essentially the CEH mindset applied to AI: you can’t protect what you don’t understand.

Why This Isn’t Academic

Bavisi shared a recent example that puts the urgency into perspective. EC-Council took part in a controlled test with a top-ten global insurance company, comparing traditional human-led pen testing against the AI approach.

Across three rounds, the humans found five vulnerabilities in total. The AI found 37.

That’s not an indictment of human skill. It’s a reminder that AI doesn’t get tired, doesn’t forget, and doesn’t operate within the same constraints. The job doesn’t disappear, but the expectations around how it’s done change dramatically.

The CISO Role Is Changing Too

Alongside the AI certifications, EC-Council updated its Certified CISO program to version 4. Security leaders are now responsible for systems that learn, adapt, and make decisions autonomously, but that’s not what most CISOs trained for a decade ago.

The updated curriculum reflects that reality: less checklist security, more governance, risk ownership, and accountability in AI-driven environments.

Why This Matters

Certifications don’t magically make someone an expert. I’ve collected enough of them over the years to know that. But they do matter. They open doors. They signal baseline competency. And right now, that signal carries more weight than usual.

“There are cloud engineers and GRC professionals everywhere asking the same question,” Bavisi said. “How do you do governance and risk with AI? Until now, there haven’t been real frameworks or real training programs.”

AI isn’t slowing down. The workforce has to catch up. EC-Council is betting that structured, role-based training, grounded in practical reality rather than hype, can help close that gap. Given what they did with CEH, it’s a bet worth paying attention to.
