

The Mark of AI Ethics


The IEEE CertifAIEd™ Program offers a risk-based framework supported by a suite of AI ethics criteria that can be contextualized to fit organizations’ needs, helping them deliver a more trustworthy experience for their users.


IEEE CertifAIEd Ontological Specifications for Ethical Privacy, Algorithmic Bias, Transparency, and Accountability are an introduction to our AI Ethics criteria. We invite you to fill out the form to start receiving these specifications.

Why IEEE CertifAIEd?

The IEEE CertifAIEd program was developed to enable, enhance, and reinforce trust in ethically tenable AI systems. Comprising technical standards, training, criteria, and certification, the program:

  • provides independent AI Ethics certification services for products, services, or systems that comprise or support Autonomous and Intelligent Systems (AIS)
  • provides a unique, visible, differentiating AI Ethics certification mark
  • reflects socio-technical values in the development and deployment of AI products, services and systems, supported by comprehensive ethical criteria developed and verified by global experts
  • is a cross-industry framework based on broadly accepted priorities in the AI domain: privacy, reduction of algorithmic bias, transparency, and accountability
  • offers criteria that can be customized to the use cases deemed critical by upcoming regulations
  • leverages adaptive criteria within a risk-based approach, minimizing changes when applied to existing frameworks for AI practitioners and test labs, and helping to enable economically beneficial results for the ecosystem and its stakeholders

The Basis of AI Ethics Certification

IEEE CertifAIEd criteria consist of a suite of detailed specifications for the evaluation, assessment and certification of ethical properties of AI products, services, and systems. The criteria were developed in an open manner through the Industry Connections Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS).

The following facets of ethical responsibility are applied:


Transparency

Transparency criteria relate to the values embedded in a system’s design, and the openness and disclosure of the choices made for its development and operation.


Accountability

Accountability criteria recognize that a system’s or service’s autonomy and learning capacities are the result of algorithms and computational processes designed by humans and organizations, who remain responsible for their outcomes.

Algorithmic Bias

Algorithmic bias criteria relate to the prevention of systematic errors and repeatable undesirable behaviors that create unfair outcomes.


Privacy

Privacy criteria are aimed at respecting the private sphere of life and public identity of an individual, group, or community, upholding dignity.

The IEEE CertifAIEd Mark

The IEEE CertifAIEd mark gives organizations visible evidence that the transparency, accountability, algorithmic bias, and privacy safeguards needed to build trust in their AIS have been addressed. The mark:

  • affirms an organization’s commitment to upholding human values, dignity, and well-being, and to respecting, protecting and preserving fundamental human rights.
  • conveys an AIS’s capability to fulfill the applicable requirements stipulated in the relevant criteria, fostering trust and facilitating the adoption and use of AIS.
  • enhances confidence in public and private enterprises that wish to realize the benefits of AI ethics certification in the absence of or as a complement to broadly accepted and enforced regulations for AI, while mitigating adverse impacts on their reputation and market share.

Whatever your organization’s role in the development and delivery of AI products, services, or systems, whether as a developer, system integrator, vendor, operator, or maintainer, you stand to gain from the responsible innovation-driven, human-centric, independent IEEE CertifAIEd training, evaluation, assessment, certification, and mark.

Aspects of Certification

IEEE CertifAIEd offers the ability to scale responsible innovation implementations globally, thereby helping to increase the quality of AI systems, strengthen trust with key stakeholders, and realize the associated benefits.


Certification Guidance & Assessment

IEEE Authorized Assessors help guide organizations through a thorough review and examination to determine the ethical risk and reward profile of an AI product, service, or system and its conformance with relevant ethical criteria.

Independent Verification

An IEEE Authorized Certifier grants a certificate and mark to the AI product, service, or system based on independent review and verification of the IEEE CertifAIEd assessment and conformance to the certification scheme.

IEEE CertifAIEd offers trainers the ability to address a skills gap. It provides a vital source of in-depth knowledge about the process and ethical criteria that are essential for conformity assessment against IEEE CertifAIEd.


Being recognized as an IEEE CertifAIEd trainer generates credibility, backed by IEEE expertise and standardization know-how. Enrollment in the IEEE CertifAIEd ecosystem can generate visibility and possible new opportunities.

IEEE CertifAIEd assessor certification provides the expertise and credentials needed to deliver AI Ethics assessments, while integrating flexibly into assessors’ own offerings.


Joining the IEEE CertifAIEd ecosystem offers the ability to scale responsible innovation implementations globally, thereby helping to increase the quality of AI systems, strengthen trust with key stakeholders, and realize the associated benefits.

IEEE CertifAIEd offers certifiers a well-defined process reflecting a clearly stated value proposition.


Training is offered with clearly documented criteria tailored by vertical, supported by strong assessment documentation for validation. Assessments and certifications are decoupled and independent, adding to a credibility that stems from proven methodologies and reputable expert groups.

Discover More

Several organizations have partnered with IEEE to pilot IEEE CertifAIEd and apply the criteria in real-world settings. Learn how your organization can create profiles or use cases focused on an industry vertical.

  • Learn more about the AI Ethics Certification Process
  • Read the blog post The IEEE CertifAIEd Framework for AI Ethics Applied to the City of Vienna, written by Dietmar Schabus, Data Scientist, Wiener Stadtwerke.
  • Read the IEEE Use Case, Criteria for Addressing Ethical Challenges in Transparency, Accountability and Privacy of Contact Tracing, developed to address the broad ethical considerations stemming from the proliferation of mobile technologies that alert individuals and relevant public health institutions of a person’s exposure to a communicable disease.
  • See how IEEE 7000-2021, Standard Model Process for Addressing Ethical Concerns during System Design, can help you create a roadmap towards responsible implementation. IEEE 7000 provides processes for engineers to translate stakeholder values and ethical considerations into system requirements and design practices. IEEE CertifAIEd is a path forward that brings an assessor to help guide the way towards independent verification and certification.
  • IEEE has a suite of standards addressing socio-technical challenges and issues that promote innovation, foster interoperability, and honor human values. Our global standards cover identifying and mitigating algorithmic bias, documenting and protecting users’ interests in personal data, evaluating the reliability of online messaging, protecting children’s personal information online, assuring the safety of autonomous and intelligent systems, and other relevant areas.


© Copyright 2022 IEEE – All rights reserved. Use of this website signifies your agreement to the IEEE Terms and Conditions.
A not-for-profit organization, IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity.