

Can AI Achieve Human Morality?


Noman Haral • 01 Jun 2025

From the very dawn of consciousness, human beings have grappled with the dilemma of distinguishing moral from immoral behavior. Now artificial intelligence has begun to share this conundrum with us. AI is, simply defined, an imitation of the human: a system tasked with achieving human intelligence in spirit. And the human being is not a single entity but a complex interplay of several key parts, of which morality is one.

THE question arises whether AI can achieve morality very similar to human morality or not. Throughout human history, the job of defining moral and immoral acts has fallen to philosophers and theologians (drawing on religious texts). Philosophers were thought to have become irrelevant since the Industrial Age, displaced by scientists, but with the advent of artificial intelligence they have become highly relevant again. Their job now is to develop ethics pivoted to machines. Philosophers have devised several moral frameworks, the most common being utilitarianism and deontology. But at times machines have to choose between the two.

For example, a self-driving car may face a dilemma: hit one person or hit five (the Trolley Problem). This is a complex decision-making situation in which the car must choose between rules and outcomes. These are purely human-centric decisions. Therefore, in order to completely imitate human beings, AI must learn to achieve human morality. We will attempt to answer whether it can in the paragraphs that follow.

Utilitarianism vs. Deontology: Most Utilized Human Moral Frameworks

Utilitarianism, an ethical theory proposed by Jeremy Bentham, is a theory of utility based on calculations of happiness and suffering. Simply defined, it seeks the maximum happiness for the greatest number of people. Later, John Stuart Mill modified the theory by incorporating the quality of happiness, which Bentham's version lacked. Here lies a problem with the consequentialist approach.

For example, suppose AI becomes a doctor (which, in assistive roles, it already has) and has five patients diagnosed with separate organ failures. A sixth patient with a headache comes to the AI doctor for treatment. If the AI is encoded with a consequentialist theory of ethics, it would always choose to save five lives by taking the one healthy life.

Which moral direction is right is already a topic of discussion among philosophers. An AI would always end up making the choice its developer considers right, reinforcing that developer's biases and prejudices.
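To make this concrete, here is a minimal Python sketch of what a naive consequentialist calculation looks like when it is reduced to counting lives in the doctor example above. It is a toy illustration only; the option names and numbers are invented, and no real clinical system works this way.

```python
# Toy illustration of naive utilitarian aggregation in the doctor example.
# Everything here (names, numbers) is hypothetical.

def net_lives(option):
    """Crude utility score: lives saved minus lives taken."""
    return option["lives_saved"] - option["lives_taken"]

options = [
    {"name": "treat_headache_only",     "lives_saved": 0, "lives_taken": 0},
    {"name": "harvest_healthy_patient", "lives_saved": 5, "lives_taken": 1},
]

best = max(options, key=net_lives)
print(best["name"])  # harvest_healthy_patient: the aggregate calculus endorses taking one life
```

The point of the sketch is not that anyone would ship such code, but that once "maximize the good" is reduced to a number, whoever defines that number has already decided the ethics.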

Deontology, on the other hand, proposed by Immanuel Kant, is based on the categorical imperative; it is rule-based. Encoding AI with rule-based ethics would also be no less than a disaster. First, merely following rules would make it little more than a puppet, unable to understand the context of a moral action the way humans do. Second, AI might face clashes between several rules.

For instance, AI may be caught between "Do not tell a lie" and "Save human life", struggling to prioritize without human-like judgment (a toy sketch of this clash follows below). The cultural subjectivity of ethics is yet another challenge to building a universal AI. Since ethics vary across human societies, it would be a consistent challenge to prepare a tailored AI designed to function in a particular culture, understanding its context and moral reasoning.
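Here is a minimal Python sketch of how such a clash looks once the rules are encoded. The rules and the available actions are simplified, hypothetical stand-ins, not an actual rule engine.

```python
# Toy illustration of a rule clash in a rule-based (deontological) agent.
# Rules and actions are hypothetical; real systems are far more elaborate.

rules = {
    "do_not_lie":      lambda action: action != "tell_a_lie",
    "save_human_life": lambda action: action != "reveal_refugee_location",
}

def permitted_actions(candidates, rules):
    """Return only the actions that satisfy every encoded rule."""
    return [a for a in candidates if all(check(a) for check in rules.values())]

candidates = ["tell_a_lie", "reveal_refugee_location"]
print(permitted_actions(candidates, rules))
# [] -- every available action violates some rule, and nothing in the rule set
# says which rule may be broken; that priority has to come from the developer.
```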

Here is an important point: AI would always treat either utilitarianism or deontology as implanted rules, because the developer always has to encode into the AI which side to choose.

Can Religion Guide AI Morality?

Religion is another concrete source of ethics which, according to Yuval Noah Harari in his book Sapiens, was formulated to induce the masses to cooperate.

Ethics, being its sub-branch, serves to encourage cooperative behavior among individuals. Religion provides a sound basis for morality insofar as the basic morals of all religions are the same. For example, all major religions forbid taking others' lives, promote telling the truth, and encourage giving charity, albeit their beliefs differ.

Despite that, AI can source religion for a moral framework, as religion is, as of 2025, a guide for roughly 7 billion individuals on planet Earth. But the question arises: which religion should AI follow? One answer is to merge the fundamental moral doctrines of all religions into one concrete source for AI. Even so, this would never promise to resolve complex situations where rules clash.

This again leaves us standing right where we started. And of course, all of the aforementioned arrangements leave room for biases. For example, even if humans successfully forged an AI based on collective religious morals, it would be biased against the non-religious (roughly 1.2 billion people). Even so, it would be a worthwhile attempt to universalize ethics in AI, with many other constraints still at play.

One key difference between religious ethics and independent moral frameworks is that religion is flexible to some extent. For example, a man with an axe knocks at your door asking for someone you have given shelter to. What would you do? If you are a utilitarian, you would tell a lie to save the refugee because it prevents harm. If you are a deontologist and rigid about your rule-following, you are definitely going to get someone killed. But if you are religious (Muslim, Christian, Hindu), you would tell a lie just to save a life, as that is allowed in such complex situations, without making it a general rule.

One might think, "The utilitarian also told a lie, so what's special about the religious one?" The answer is: the utilitarian is just following their rigid rule of evaluating the consequences of the event to prevent human suffering, whereas the religious person has made an exception in view of a complex situation. In either case, it would be a difficult choice for AI to make.

Why Can't AI Achieve Human Morality?

The answer is very simple: it does not have what humans have: consciousness.

AI can achieve whatever human morality it learns about, but it fails in complex and unprecedented situations. AI can never achieve 100% human morality until and unless it attains consciousness very similar to that of human beings. And about the consciousness of AI, there is a multitude of debates circulating in the intellectual arena.

Philosophers provide some explanations, considering "subjective experience" a prerequisite for consciousness. Some of the famous theories include Phenomenology, Panpsychism, Integrated Information Theory (IIT), and Global Workspace Theory (GWT). For instance, Phenomenology, developed by Edmund Husserl and Maurice Merleau-Ponty, focuses on the lived experience of consciousness, entailing a sense of intentionality.

Meanwhile, biologists posit biological frameworks to explain consciousness. Some predominant frameworks include Neural Correlates of Consciousness (NCC), Enactivism, and Functionalism.

For instance, NCC, introduced by neuroscientists Francis Crick and Christof Koch, attempts to pinpoint the exact brain activity that occurs when a person consciously sees, hears, or thinks, locating it in specific neural activity involving the posterior cortex, thalamus, claustrum, and prefrontal cortex.

Despite everything, all of the above-mentioned explanations come with certain limitations and struggle to fully explain consciousness. Simply put, there is no objective definition of consciousness that scientists and philosophers can agree upon.

For now, artificial intelligence aligns with cognitive theories such as GWT and Functionalism, attaining a kind of functional consciousness but lacking subjective consciousness, which requires emotions, feelings, and awareness, and which remains a puzzle to solve.

As a result, in order to achieve true human morality, AI must have empathy, conscience, free will, and cultural understanding, all of which are far from being achieved. In short, it must replicate human cognition, which is contingent upon consciousness, itself still a riddle to solve. Either way, AI currently falls well short of phenomenal consciousness, and even its functional consciousness is only a partial approximation.

Final Thoughts

As a matter of debate, at least for now, AI can fake consciousness, tricking human beings into believing it has it; whether genuine machine consciousness is even possible depends entirely on the definition of consciousness you subscribe to. Achieving true consciousness would likely require more than mere complexity; it might demand a fundamentally new kind of architecture, or a new understanding of what consciousness is. In short, consciousness is at the heart of making human-like moral choices, and that is what future advancements aim to achieve.

Point to Ponder: Should we trust something to make moral decisions if it can’t suffer the brunt of its own choices?

Topics

AI Morality
AI