Who matters? And who cares?


This is part II of a planned series of posts on the foundations of ethics. In the first part (which you should probably read first if you haven't) I made the case that ethics, from a systems perspective, can be formulated as an optimization problem, and I promised that subsequent posts would go over how to actually do this. The most difficult part of this task lies in relating and negotiating the interests of different moral stakeholders, but before tackling these thorny issues, let us first discuss who should have the right to moral consideration in the first place, and what moral good means for those individuals considered in isolation.

In the previous post I argued that when reasonable people disagree on how society should be organized, it is mostly because they replace a complex problem they cannot solve (how to create the best possible world) with one that is more tractable (how to maximize freedom, fairness, equality, prosperity, etc., depending on their personal sensibilities). I find that another all too common reason for people to argue (even when they basically agree completely) is the imprecise use of words – often highly abstract ones that tend to mean very different things to different people.

While a field of study is in its infancy, people tend to use whatever everyday words best fit what they are trying to describe. In the beginning, the wording is very heterogeneous, but with time the nomenclature coalesces, and words that are vague and ambiguous in their everyday usage take on a precise technical meaning within the field. A good example of this is work. In colloquial use it can mean effort, employment or even to function. In the physical sciences, however, it has come to take on the very precise meaning of the force exerted on a body, integrated along its trajectory (where force is incidentally another great example).
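In symbols (standard physics, stated here only to make the contrast with the everyday word vivid):

$$W = \int_C \mathbf{F} \cdot d\mathbf{s}$$

that is, the force **F** integrated along the body's trajectory C.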

Since ethics from a scientific perspective is very much in its infancy (in terms of not yet having produced any consensus results), its frequently used words are still rife with ambiguity. I would therefore like to start this post by clearly defining my own personal use of some foundational words. As stated in the previous post, I believe ethics and morality only make sense as concepts because there are sentient beings. So before moving on to comparing the interests of different sentient beings, I want to make sure we are on the same page regarding how I define sentience (whether or not that agrees with yours or anybody else's definition). Related to sentience, another word that has people talking past each other to no end is consciousness, and I wish to clarify what I mean by consciousness as well, so that I can later use these words at my discretion.

I will start with the latter word, since to me consciousness is a prerequisite for sentience. I define consciousness as having a subjective experience. It does not involve self-consciousness, inner monologue or anything else apart from simply having an experience. If there is no consciousness, nobody is home in the universe, which means that no one can possibly be there to care about good or evil. On the other hand, it is conceivable (at least at our present understanding of consciousness) that there could be conscious beings who simply don't care about anything. Such beings would not have a concept of good or bad, as they couldn't care less either way. They therefore do not fulfill the requirements for being moral stakeholders. A conscious being who also has a subjective experience of good or bad (in the strictly self-interested sense, for now) is defined as sentient. A shorter, snappier way of putting it is that a moral stakeholder is anyone who cares.
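To make the definitional chain explicit, here is a minimal sketch (the predicate names and the boolean encoding are mine, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Being:
    has_subjective_experience: bool  # is anybody home?
    experiences_good_or_bad: bool    # does it care, in the self-interested sense?

def is_conscious(b: Being) -> bool:
    # Consciousness as defined here: simply having a subjective experience.
    return b.has_subjective_experience

def is_sentient(b: Being) -> bool:
    # Sentience: consciousness plus a subjective experience of good or bad.
    return is_conscious(b) and b.experiences_good_or_bad

def is_moral_stakeholder(b: Being) -> bool:
    # A moral stakeholder is anyone who cares.
    return is_sentient(b)
```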

Ethics is then the business of maximizing the subjectively good for all sentient beings (i.e. moral stakeholders). But what is to be maximized, exactly? Historically, in the tradition of utilitarianism, hedonic qualities were the first to be considered. Good was defined as pleasure, bad as pain. While Bentham's hedonic calculus involved many variables, including the intensity, duration, immediacy and certainty that the pain or pleasure would follow from a given act, ultimately the qualities that mattered were pain and pleasure. If we accept his calculus, it follows that the greatest moral good would be for everyone to have constant orgasms in a comfortable bed on a Mediterranean beach while eating cheese-crust pizza and drinking mojitos, or something to the same effect. Is that really the life you want? (Admittedly, that doesn't sound too bad, but I would definitely get tired of it eventually.)
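To see how little structure this calculus actually has, here is a minimal sketch of a Bentham-style tally. The multiplicative weighting of his variables is my own illustrative assumption, not Bentham's formula:

```python
from dataclasses import dataclass

@dataclass
class Experience:
    intensity: float  # signed: pleasure (+) or pain (-)
    duration: float   # how long it lasts
    certainty: float  # probability it actually follows from the act, 0..1
    immediacy: float  # discount for delay, 0..1 (1 = immediate)

def hedonic_value(experiences) -> float:
    """Sum the weighted pleasures and pains an act is expected to produce."""
    return sum(e.intensity * e.duration * e.certainty * e.immediacy
               for e in experiences)

# Example: a short, intense pleasure vs. a long, mild one
print(hedonic_value([Experience(8, 0.5, 0.9, 1.0)]))  # 3.6
print(hedonic_value([Experience(2, 3.0, 0.9, 1.0)]))  # 5.4
```

Whatever the weighting, the only inputs are pain and pleasure, which is exactly what licenses the constant-orgasm "utopia" above.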

Even for animals, it seems that the most worthwhile life would not be one of constant high-intensity pleasure. (Bentham's successor, John Stuart Mill, raised similar objections to the simple hedonistic view of the moral optimum. His proposed solution was to distinguish between higher (mental and spiritual) and lower (purely carnal) forms of pleasure. However, I do not find this very satisfactory, as it is quite arbitrary and heavily tainted by cultural biases.) Rather, the best examples we could find of happy mammals would have their basic needs provided for. They would not go hungry (maybe even finding the occasional special treat), they would have access to sexual partners, and they would be able to provide for themselves and their offspring by exercising their skills at hunting, grazing or gathering food. If they were social animals they would probably be at the top of the social ladder among their peers, while not being seriously challenged by the next in line. Fear of predators would be absent or intermittent. What would it feel like to be such an animal? Of course, we can only speculate. But I bet it would not be anything like the constant-orgasm scenario painted in the last paragraph.

Rather than constant ecstasy, their emotional valence would be mostly neutral, with semi-regular spikes of positive affect and a few hedonic dips of pain, discomfort and uncertainty, representing the challenges of life, usually successfully overcome. Thus, the individual maximization of moral good for a mammal in isolation would look somewhat like a rollercoaster ride, with most of the deviations from equilibrium on the positive side.

This picture strikes a chord in the human case as well. Like the successful rodent, a maximally happy human is probably safe and warm, neither starving nor obese, with the financial means to fulfill most of their other material desires. They have good social standing and ample mutual trust among family and friends. But in contrast with the other animals, this is not the full picture.

Most humans want a meaningful job; they want to contribute to something greater than themselves. They have hopes and dreams. They are the only ones who know of their own mortality, which allows them to fear death (their own and their loved ones'), and they may wish to be remembered after they are gone. Due to our exceptionally large social groups and superior communication skills, as well as stories, news and gossip, we are aware of a host of possible fates that might befall us or those for whom we care. Our ability to comprehend arbitrarily abstract concepts, and to understand ourselves and our environment in terms of them, gives us an additional layer of metacognition with no counterpart in other species.

It seems to me that even the more nuanced picture given above of how animal welfare is maximized is not sufficient for understanding what to maximize for optimal human outcomes. Yet all these layers of metacognition, however many and contorted, build up to a top layer with a very accessible interface: if you want to know what a human truly wants, why not just ask them?

Any imagined utilitarian utopia where humans are kept as well-provided-for zoo animals would be more like a dystopia to most people. Why? Because we evolved executive function to be able to make quick and adaptable decisions about what is best for us. The evolutionary rationale for this is that while our genes set the boundary conditions of what we seek in life, an adult human being will usually be able to make much more adaptive decisions within the circumstances where they find themselves than any hard-coded instincts ever could. This has led to most humans strongly valuing the ability to independently make decisions regarding their own lives.

It thus seems that each moral stakeholder with the ability to speak for their own interests should be free to do so. However, there are important objections to this prescription. There are many cases of humans who would not be better off deciding for themselves, the most obvious example being children. All cultures, to my knowledge, recognize that children are better off if a lot of decisions affecting their wellbeing are outsourced to their caregivers, because they don't yet have sufficient life experience to make competent decisions.

This exception and others (e.g. the demented elderly) tell us that we need a refinement of the principle hinted at above. This refinement should respect the intuitive exceptions but maintain the spirit of the principle. I propose that for moral optimization, the interest of each individual should be considered in such a way that their moral vote is cast on the alternative that they would prefer above all others if they were to live through all alternatives. This means that if someone else is better equipped to predict which alternative that would be, letting them make the choice may be warranted. I wish I could state this in a more precise and lucid fashion, but for now, this is the best I can do. As this is the main conclusion of this part of the series, I will reiterate it:

The interest of each individual should be considered in such a way that their moral vote is cast on the alternative that they would prefer above all others if they were to live through all alternatives.
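To pin the principle down a little, here is a minimal sketch. The names, the numeric preference scores and the simple vote-counting are all my own illustrative assumptions; how conflicting votes are actually negotiated is exactly the problem deferred to later posts in this series:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    # Estimated preference for having lived through each alternative.
    # For a stakeholder with a custodian, these numbers would be the
    # custodian's best estimate of the protege's own preferences.
    lived_preference: dict

def cast_moral_votes(stakeholders, alternatives):
    """Each stakeholder's vote goes to the one alternative they would
    prefer above all others if they were to live through all of them."""
    votes = Counter()
    for s in stakeholders:
        best = max(alternatives, key=lambda a: s.lived_preference[a])
        votes[best] += 1
    return votes

# Illustrative toy numbers only:
alice = Stakeholder("Alice", {"plan A": 0.9, "plan B": 0.4})
bob = Stakeholder("Bob", {"plan A": 0.2, "plan B": 0.7})
print(cast_moral_votes([alice, bob], ["plan A", "plan B"]))
# Counter({'plan A': 1, 'plan B': 1})
```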

This refined principle preserves some sense of self-sovereignty even for those in need of a custodian. Even if their custodian is the one actually making the decision, in order to optimize moral outcomes, they should make the decision in such a way that they believe the likely outcome will be the one preferred by their protégé. This principle applies equally to animals. Wild animals are often better equipped than anyone else to fend for their own wellbeing (even if they don't actually succeed). Their behavior will be a boundedly rational attempt at just this: making the choices that are most likely to be preferred by themselves in the long run. However, for domesticated or injured animals, there is a strong moral case for human custodianship.

The refined principle also enshrines a subtle caveat: even if someone is not competent enough to know what is in their own self-interest (as defined by themselves), a custodian might not be the best answer, since incapacitation usually comes with a cost in terms of frustration, social standing and self-esteem. It is therefore in every case important to weigh all the benefits and costs against each other to arrive at a true optimum.
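As a minimal sketch of that weighing (the variable names and the additive trade-off are my own illustrative assumptions; putting honest numbers on these quantities is of course the hard part):

```python
def should_appoint_custodian(wellbeing_with: float,
                             wellbeing_without: float,
                             autonomy_cost: float) -> bool:
    """Appoint a custodian only if the expected gain in wellbeing
    outweighs the cost of incapacitation (frustration, lost social
    standing, damaged self-esteem)."""
    return wellbeing_with - autonomy_cost > wellbeing_without
```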

This leads to the primacy of consent in moral decision-making, as very eloquently formulated in this recent post by voluntaryelle, which I highly recommend.

I hope you found this piece interesting, and whether you agree or disagree, please let me know in the comment field – in this way we might both learn something! And if you found it interesting at all, please upvote!


I don't know who cares; I am not even sure what that means, caring. Same goes for what matters. It seems like all the things that matter, you can't really buy, like time. Time matters, but you can't buy time, and you can't make it either. Time is the real CURRENCY, not BITCOIN.

But anyway, I am just yapping away, don't mind me much. But do hit me with that button.

https://steemit.com/voyager/@errante/for-those-who-like-to-dream-big-like-me-to-the-stars-back-to-where-we-came-from

You know empirically what caring feels like for you. I think that is the fact of the matter of what it means for you to care. And I believe that mattering is something done by those who care: if you care about something, that something matters to you, which means that in some sense this thing now matters, whereas if no one cared about it, it wouldn't matter at all.

I also believe that there are a lot of things that matter that you can buy and a lot of things that matter that you cannot buy, but we tend to focus more on those things which we frustratingly cannot buy.

Thanks for reading and dropping a line; I'll make sure to take a look at your content as well!

Why am I reading this just now!

It took me more than 5 years to realize the following:
"...even if someone is not competent enough to know what is in their own self interest (as defined by themselves), a custodian might not be the best answer, since incapacitation usually comes with a cost in terms of frustration, social standing and self esteem. It is therefore in every case important to weigh all the benefits and costs against each other to arrive at a true optimum."

A great post again @raztard, resteemed!

Thanks a lot!
