Created by Humans. Powered by Purpose. Guided by Ethics.
Ethical eXcellence in Artificial Intelligence (EXAI) Manifesto
“So act as to treat humanity, whether in your own person or in another, always as an end and never as only a means.”
—Immanuel Kant
1. What is the Ethical Excellence in AI Community?
The Ethical Excellence in AI community is a group of professionals with a shared interest in shepherding the maturing power of artificial intelligence technology for the good of all people, and in identifying and preventing harm.
1.1 Our goal: include what is broadly good for everyone; exclude anything that is seriously harmful to anyone.
1.2 We as a community are embracing an exciting new age, with a grand opportunity to improve our use and understanding of a powerful tool: our digital tools are becoming intelligent enough to understand our desires and the way we naturally communicate.
1.2.1 Artificial Intelligence is a loaded term because it can imply so much more—namely, sentience that can be harmful. Our public unconscious is anxious about something so widespread with its own agency that may not share our human values.
1.2.2 Disparate cultures share a common interest in proper use of AI, and a common fear of its potential pitfalls. The bedrock of our situation is that emerging AI technology and advanced models are very powerful, posing both opportunity and legitimate challenges which should be addressed as an organized collective.
1.2.3 This living Manifesto proposes simple tenets for Ethical Excellence in the human application of AI. Under these tenets, Artificial Intelligence (also known as “AI”) will be treated as just that: a tool for people to use for society’s benefit. Regular updates and feedback loops reinforce the Manifesto’s role as a living document that evolves with technological advancements.
1.2.4 A hammer is designed to build things, but it can also be misused through ineptitude or, worse yet, ill intent. A hammer can be swung to hit a nail and accidentally strike a thumb. It can also be used as a blunt-force weapon for violence. Whether by accident or on purpose, misusing tools as powerful as advanced models can cause great pain for shareholders, end users, employees, investors, and broader society.
2. This is what EXAI seeks to mitigate. AI is viewed as a powerful tool, similar to a hammer, capable of great benefit or harm depending on intent and competence. It is these two factors that define all misuse of all tools:
2.1 Ineptitude
2.1.1 If a financial services company hires a consultant to implement AI or oversee advanced models, and gains neither a return nor a transparent understanding of how to oversee the model(s) going forward, that monetary loss counts as ineptitude.
2.2 Malignant Intent
2.2.1 If a mobile game studio adept at applying AI does not factor in (or openly disclose) that its use of powerful models may drive addiction and related harms to the end user, this hypothetical studio may not be inept, but it misuses AI under the latter category: malignant intent.
2.3 Both ineptitude and malignant intent are herein unethical when applying a tool as powerful as AI to any organizational outcome, albeit for different reasons.
2.4 Cross-cultural engagement underpins the Manifesto’s goal of inclusivity and its emphasis on benefiting diverse communities globally.
2.5 Encouraging ethical practices through incentives aligns with the Manifesto’s overarching goal of fostering responsible AI use.
2.6 Practical, scenario-based guidance supports the Manifesto’s goal of making ethical AI principles actionable.
3. This is to say that the Manifesto’s tenets are sufficient in themselves to offer a foundation on which to practice AI ethically, empowering professionals to immediately call out malignant practice, and to better guide the inept in their AI applications.
3.1 This living Manifesto herein sets a standard for AI as a tool to serve humans as ends, not means. Socratic discourse around the definitions of its tenets is encouraged, if only for the benefit of The Human—user and developer, employer and employee, and every stakeholder involved otherwise.
3.2 EXAI is an optimistic, educated, and prodigious community that seeks to implement and oversee models at the behest of The Human, not the other way around.
“The proud person always wants to do the right thing, the great thing. But because he wants to do it in his own strength, he is fighting not with man, but with God.”
—Søren Kierkegaard
4. Ensure Transparency and Explainability
4.1 Transparency in AI refers to the ability of users to understand how an AI system works and makes decisions.
4.2 Leaders should be able to articulate simply to all stakeholders (including employees, investors, end users, etc.) what their model does, its rationale, its opportunities, its risks, and its goals. This includes:
4.2.1 Understanding the data: Upon general inquiry, all relevant stakeholders should be made aware of the type, quality, and sources of the data used to train the AI model.
4.2.2 Knowing the algorithms: The algorithms or models used should be understandable to a certain degree, allowing all relevant stakeholders to grasp the underlying logic of the implementation.
4.2.3 Understanding the decision-making process: The AI system should be able to provide explanations for its outputs.
4.2.4 Open and clear disclosures for model outputs leave the onus of the decision on the human decision-maker, not the model itself, however powerful it may be.
4.2.5 Bias mitigation is integral to making AI decisions understandable and equitable, ensuring transparency aligns with fairness.
4.2.6 Empowering stakeholders through education enhances their ability to engage with and understand AI systems effectively.
4.2.7 Establishing measurable benchmarks aligns with the commitment to transparency and provides clear criteria for success.
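The disclosure items above (4.2.1–4.2.3) can be sketched as a minimal “model card”: a plain record a leader can publish to stakeholders. This is an illustrative Python sketch only; the field names, example model, and example values are assumptions, not a format the Manifesto prescribes.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical minimal "model card" capturing the disclosure items above.
@dataclass
class ModelCard:
    name: str
    purpose: str                                      # what the model does, in plain language
    data_sources: list = field(default_factory=list)  # 4.2.1: the data
    algorithm: str = ""                               # 4.2.2: the underlying approach
    known_risks: list = field(default_factory=list)   # risks to disclose openly
    accountable_owner: str = ""                       # a named human decision-maker

    def disclose(self) -> dict:
        """Return the card as a plain dict, ready to publish to stakeholders."""
        return asdict(self)

# Illustrative example (all values hypothetical):
card = ModelCard(
    name="credit-screener",
    purpose="flags loan applications for human review",
    data_sources=["2020-2024 anonymized applications"],
    algorithm="gradient-boosted trees",
    known_risks=["historical bias in past approvals"],
    accountable_owner="Jane Doe, VP Risk",
)
disclosure = card.disclose()
```

Naming an accountable human owner on the card itself anticipates the accountability tenets in sections 7 and 10.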
5. Explainability is closely related to transparency, but not quite synonymous.
5.1 Companies often use complex legal language in their privacy policies to obscure the extent of their data collection and usage practices. Such data obfuscation is unethical because it denies transparency.
5.1.1 Data obfuscation is not limited to, but can be achieved through vague terms, excessive enumeration of purposes, default opt-out mechanisms, indefinite data retention, cross-device tracking, third-party data sharing, and frequent policy modifications.
5.1.2 Data obfuscation makes it difficult for individuals to understand the full implications of their interactions with companies and the potential risks to their personal information.
5.2 Ethical explainability is the ability to provide a clear and understandable explanation for a decision made by an AI system. This can be achieved through:
5.2.1 Feature importance: Identifying the most influential factors that contributed to a decision will give users peace of mind and hold executives accountable for what they set out to do.
5.2.2 Rule extraction: Government compliance aside, every model should have a ruleset that is readily available and communicable to all relevant stakeholders.
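Feature importance (5.2.1) can be made concrete with permutation importance, one common technique: shuffle one feature at a time and measure how much the model’s accuracy drops. The sketch below is a minimal, self-contained illustration; the toy “loan” model and its features are hypothetical assumptions, not anyone’s production system.

```python
import random

# Hypothetical toy "model": approves a loan when income is high enough
# relative to debt. Purely illustrative.
def model(features):
    income, debt, age = features
    return 1 if income - 2 * debt > 50 else 0

def permutation_importance(model, rows, labels, n_features, seed=0):
    """Estimate each feature's influence by shuffling that feature's column
    and measuring how much the model's accuracy drops."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    importances = []
    for i in range(n_features):
        column = [r[i] for r in rows]
        rng.shuffle(column)
        shuffled = [r[:i] + (column[j],) + r[i + 1:] for j, r in enumerate(rows)]
        importances.append(baseline - accuracy(shuffled))
    return importances

# Illustrative data; labels are taken from the model itself for the demo.
rows = [(100, 10, 30), (40, 20, 55), (120, 40, 41), (30, 5, 62)]
labels = [model(r) for r in rows]
importances = permutation_importance(model, rows, labels, 3)
```

Because the toy model never looks at age, its importance comes out as exactly zero: a stakeholder-readable statement of which factors actually drove the decisions.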
“If you can’t explain it simply, you don’t understand it well enough.”
—Albert Einstein
6. Fortify Privacy and Data Security
6.1 Privacy in AI refers to the protection of individuals’ personal information from unauthorized access or disclosure. This includes data collected, processed, and used by AI systems.
6.2 Data security in AI refers to the protection of data from unauthorized access, use, disclosure, disruption, modification, or destruction. This involves implementing cybersecurity measures to prevent data breaches and ensure data integrity.
6.3 Data minimization: Organizations should collect and process only the necessary data for their explainable goals.
6.4 Data anonymization/pseudonymization: Organizations should transform data to remove or disguise personal identifiers.
6.5 Access controls: Organizations should implement strong access controls to limit access to sensitive data.
6.6 Encryption: Organizations should encrypt data to protect it from unauthorized access.
6.7 Regular security assessments: Organizations should conduct regular security assessments to identify and address vulnerabilities.
6.8 Incident response plans: Organizations should have written and communicable plans in place to respond to data breaches and other security incidents.
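Two of the controls above, data minimization (6.3) and pseudonymization (6.4), can be sketched in a few lines of Python. This is a minimal illustration using a keyed hash; the field names, secret key, and example record are assumptions, and a real deployment would keep the key in a secrets manager, not in source code.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this belongs in a secrets manager.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(identifier: str) -> str:
    """Replace a personal identifier with a keyed-hash pseudonym.
    The same input always maps to the same token, so records can still be
    joined, but the original value cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

def minimize(record: dict, allowed_fields: set) -> dict:
    """Data minimization: keep only the fields needed for the stated goal."""
    return {k: v for k, v in record.items() if k in allowed_fields}

# Illustrative record (all values hypothetical):
record = {"email": "jane@example.com", "age": 34, "ssn": "000-00-0000"}
safe = minimize(record, {"email", "age"})      # the SSN is never retained
safe["email"] = pseudonymize(safe["email"])    # the email becomes a token
```

Minimizing before pseudonymizing reflects the order of the tenets: data that is never collected needs no further protection.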
7. Identify Decision-makers and Hold Them Accountable
7.1 Human decision-makers remain ultimately accountable for the outcomes of AI systems. This is important and should be instilled as a basic premise for AI implementation.
7.2 Human decision-makers decide to implement AI, and therefore are responsible for ensuring that AI is used in a way that aligns with societal values and avoids harmful consequences.
7.2.1 If this is considered, then a more equitable and understandable system may emerge, one that can do both: grow profits and improve personal well-being.
7.3 Extending accountability to all phases of the AI lifecycle reinforces the ethical use of AI tools throughout their deployment and beyond.
7.4 Independent monitoring strengthens the accountability mechanisms outlined in the Manifesto.
8. Risk Assessment and Mitigation: AI systems can introduce new risks and challenges.
8.1 Human decision-makers are responsible for assessing these risks and taking appropriate measures to mitigate them, such as developing safety protocols or implementing safeguards.
8.2 Legal and Regulatory Compliance: Human decision-makers are ultimately responsible for ensuring that AI systems comply with relevant laws and regulations. This includes understanding and adhering to privacy laws, data protection regulations, and other applicable legal frameworks.
8.3 Learning and Improvement: Holding individuals and organizations accountable can encourage them to learn from their mistakes and improve their practices. This can help to ensure that AI systems are continuously evolving and improving.
8.4 In essence, human decision-makers act as the final line of defense in AI systems. They are responsible for ensuring that AI is used ethically, responsibly, and in a way that benefits society.
8.5 A dynamic framework ensures ongoing alignment with the Manifesto’s focus on assessing and mitigating emerging AI risks.
9. While this Manifesto does not and will not define “the greater good” or “holistic societal benefit” explicitly, better clarity can ensue when The Human acknowledges all consequences that AI and advanced implementations may bring, without seeking indemnification.
9.1 Brighter outcomes will arise when an executive or decision-maker is steadfast in their understanding of accountability for decisions and allocation of resources.
9.2 While AI can automate tasks and enhance decision-making, it is essential that The Human remains accountable for the outcomes and consequences of their use.
10. There should be clear accountability for the development, deployment, and use of AI systems. No matter the function, advanced models and AI (especially at their most powerful) should be stamped with a legal human name, human face, or human signature: a person who bears responsibility for the consequences of the system’s actions, regardless of outcome.
“It is absurd to make external circumstances responsible and not oneself, and to make oneself responsible for noble acts and pleasant objects responsible for base ones.”
—Aristotle
Join the Ethical eXcellence AI (EXAI) Movement
We invite institutions, developers, educators, and communities to adopt, adapt, or promote the EXAI Manifesto. Let’s build a future where AI uplifts everyone—not just the privileged few.
Contributors to the EXAI Manifesto:
This document was developed collaboratively by members of the Ethical Excellence in AI (EXAI) Community – a diverse group of practitioners, leaders, and advocates committed to advancing responsible, human-centered AI.
Author Collaboration – With Special Thanks To:
Hays “Skip” McCormick; Philip Sagan; Nick Oliveri; Joseph X Ng
for their early authorship, editorial input, and ethical leadership.
This living Manifesto reflects the collective insight of the EXAI Community.
We welcome all future collaborators who share the mission of ethical, transparent, and inclusive AI. To learn more or to join the EXAI Community, visit HuMAINority.org.
Published by HuMAINority Inc. | HuMAINority.org
© 2025 All rights reserved.
