
“Actions, Intentions, and Consequences: The Doctrine of Doing and Allowing,” W. Quinn

There are a lot of interesting and valid things to say about the philosophy and actual arguments of “Actions, Intentions, and Consequences: The Doctrine of Doing and Allowing” by Warren Quinn. Unfortunately for me, none of them are things I feel particularly inspired by. I’m much more attracted to the many things implied in this paper. Among them is the role of social responsibility in making moral decisions.

At various points in the text, Quinn makes brief comments about how we have roles that we need to fulfill for the sake of society. These roles carry with them responsibilities that may supersede our regular moral responsibilities. Examples Quinn gives include being a private lifeguard (responsible for the life of one particular person) and being a trolley driver (responsible for making sure the trolley doesn’t kill anyone). This is part of what led me to brush Quinn off as another classist. Still, I am interested in the question of whether social responsibilities are more important than moral ones, or whether there are at least times when that might be the case.

One of the things I maintain is that we cannot be the best versions of ourselves because we are not living in societies that value our best selves. We survive capitalism. We negotiate climate change. We make decisions to trade the ideal for the functional. For me, this frequently means I click through terms of service, agree to surveillance, and partake in the use and proliferation of oppressive technology. I also buy an iced coffee that comes in a single-use plastic cup; I shop at the store with questionable labor practices; I use Facebook. But also, I don’t give money to panhandlers. I see suffering and I let it pass. I do not get involved or take action in many situations because I have a pass not to. These things make society work as it is, and they make me work within society.

This is a self-perpetuating, mutually abusive, co-dependent relationship. I must tell myself stories about how it is okay that I prefer the status quo and buy into the system: I need to do so to survive within it, and other people rely on the system as it stands because that is how they know how to survive.

Among other things, I am worried about the psychic damage this causes us. When we view ourselves as social actors rather than moral actors, we tell ourselves it is okay to take non-moral actions (or inactions); however, we carry within ourselves intuitions and feelings about what is right, just, and moral. We ignore these in order to act in our social roles. From the perspective of the individual, we’re hurting ourselves and suffering for the sake of benefiting and perpetuating a caustic society. From the perspective of society, we are perpetuating something that is not just less than ideal, but actually not good, because it is based on allowing suffering.[1]

[1] This is for the sake of this text. I don’t know if I actually feel that this is correct.

My goal was to make this only 500 words, so I am going to stop here.

“All Animals Are Equal,” Peter Singer

I recently read “Disability Visibility,” which opens with a piece by Harriet McBryde Johnson about debating Peter Singer. When I got my first reading for my first class and saw it was Peter Singer, I was dismayed because of his (heinous) stances on disability. I assumed “All Animals Are Equal” was one of Singer’s pieces about animal rights. While I agree with many of the principles Singer discusses around animal rights, I feel as though his work on this front is significantly diminished by his work around disability. To put it simply, I can’t take Peter Singer seriously.

Because of this, I had a lot of trouble reading “All Animals Are Equal” and taking it in good faith. I judged everything from his arguments to his writing harshly. While I don’t disagree with his basic point (all animals have rights), I disagree with how he made the point and the argument supporting it.

One of the things I was told to ask when reading any philosophy paper is “What is the argument?” or “What are they trying to convince you of?” In this case, you could frame the answer as “Animals have (some of) the same rights people do.” I think it would be more accurate, though, to frame it as “All animals (including humans) have (some of) the same rights,” or even “Humans are just as worthy of consideration as animals are.”

I think when we usually talk about animal rights, we do it from a perspective of wanting to elevate animals to human status. From one perspective, I don’t like this approach because I feel as though it turns rights into something you deserve or earn, privileges you get for being “good enough.” The point about rights is that they are inherent: you get them because they are.

The valuable thing I got out of “All Animals Are Equal” is that “rights” are not universal. When we talk about things like abortion, for example, we talk about the right to have an abortion. Singer asks whether people who cannot get pregnant have the right to an abortion. What he doesn’t dig into is that the “right to an abortion” is really just an extension of bodily autonomy: one facet of bodily autonomy turned into the legal right to have a medical procedure. I think this is worth thinking about more: translating high-level human rights into mundane ones, and acknowledging that not everyone can exercise them or needs to.

Consent

I was walking down the platform at the train station when I caught eyes with a police officer. Instinctively, I smiled and he smiled back. When I got closer, he said “Excuse me, do you mind if I swipe down your bag?” He gestured to a machine he was holding. “Just a random check.”

The slight tension I’d felt since I first saw him grabbed hold of my spine, shoulders, and jaw. I stood up a little straighter and clenched my teeth down.

“Sure, I guess,” I said uncertainly.

He could hear something in my voice, or read something in my change of posture. “You have to consent in order for me to be allowed to do it.”

Consent. I’d just been writing about consent that morning, before going to catch the train down to New York for Thanksgiving. It set me on edge and made what was happening more real: someone wanted to move into my personal space. There was now a legal interaction happening. “I don’t want to be difficult, but I’d rather you didn’t if you don’t have to.”

“It’s just a random check,” he said. “You don’t have to consent.”

“What happens if I say no?”

“You can’t get on the train,” he said, gesturing to the track with his machine.

“So, my options are to let you search my bag or not go see my family for Thanksgiving?”

“You could take a bus,” he offered.

I thought about how I wanted to say this. Words are powerful and important.

“I consent to this in as much as I must without having any other reasonable option presented to me.”

He looked unconvinced, but swiped down my bag anyway, declared it safe, and sent me off.

Did I really have the right to withhold consent in this situation? Technically, yes. I could have told him no, but I had no other reasonable option.

At the heart of user freedom is the idea that we must be able to consent to the technology we’re directly and indirectly using. It is also important to note that we should not suffer unduly for denying consent.

If I don’t want to interact with a facial recognition system at an airport, I should be able to say no, but I should not be required to give up my seat, risk missing my flight, or spend exceptional effort as a consequence of refusing to consent. Consenting to something that you don’t want to do should not be incentivized, especially by making you take on extra risk or make extra sacrifices when you refuse.

In many situations, especially with technology, we are presented with the option to opt out, but opting out doesn’t just mean opting out of playing a particular game: it can mean choosing whether or not to get a life-saving medical implant; not filing your taxes because of government-mandated tax software; or being unable to count yourself in a census.

When the choice is “agree or suffer the consequences,” we do not have the option to actually consent.

Autonomy

I’ve been stuck on the question: Why is autonomy an ethical imperative? Or, worded another way: Why does autonomy matter? I think if we’re going to argue that free software matters (or if I am, anyway), we have to be able to answer why autonomy matters.

I’ve been thinking about this in the framing of technology and consent since the summer of 2018, when Karen Sandler and I spoke at HOPE and DebConf 18 on user and software freedom. Sitting with Karen before HOPE, I had a bit of a crisis of faith: once I moved to the point that consent is necessary to our continued autonomy, I lost track of why software freedom matters. But why does autonomy matter?

Autonomy matters because autonomy matters. It is the postulate on which I have built not only my arguments, but my entire worldview. It is an idea instilled in us so deeply that all arguments about what should be a legal right are framed around it. The idea of autonomy is so fundamental to our society that we have been trained to have negative, sometimes physical, reactions to its loss. Pro-choice and anti-choice arguments both boil down to the question of respecting autonomy: but whose autonomy? Arguments about euthanasia come down to autonomy: questions of whether someone really has the agency to decide to die, versus concerns about autonomy being ignored and death being forced on a person. Even climate change is a question of autonomy: how can we be autonomous if we can’t even be?

Personal autonomy means we can consent; user freedom is a tool for consent; software freedom is a tool for user freedom; free software is a tool for software freedom. We can also think about this in reverse:

Free software is the reality of software freedom. Software freedom protects user freedom. User freedom enables consent. Consent is necessary to autonomy. Autonomy is essential. Autonomy is essential because autonomy is essential. And that’s enough.

Autonomy and consent

When I was an undergraduate, I took a course on medical ethics. The core takeaways from the class were that autonomy is necessary for consent, and consent is necessary for ethical action.

There is a reciprocal relationship between autonomy and consent. We are autonomous creatures; we are self-governing. In being self-governing, we have the ability to consent, to give permission to others to interact with us in the ways we agree on. We can only really consent when we are self-governing; otherwise, it’s not proper consent. Consent also allows us to continue to be self-governing. By giving others permission, we are giving up some control, but doing so on our own terms.

In order to actually consent, we have to grasp the situation we’re in, and understand as much about it as possible. Decision making needs to come from a place of understanding.

It’s a fairly straightforward path when discussing medicine: you cannot operate on someone, force them to take medication, or do any number of other things without their permission, and that permission has to be based on knowing what’s going on.

I cannot stress enough how important it is to transpose this idea onto technology. This is an especially valuable concept when looking at the myriad ways we interact with technology, and especially computing technology, without even being given the opportunity to consent, whether or not we come from a place of autonomy.

At the airport recently, I heard that a flight was boarding with facial recognition technology. I remembered reading an article over the summer about how hard it is to opt out. It gave me pause. I was running late for my connection and worried that I would be put in a situation where I would have to choose between the opt-out process and missing my flight. I come from a place of greater understanding than the average passenger (I assume) when it comes to facial recognition technology, but I don’t know enough about its implementation in airports to feel as though I could consent. Many people approach this from a place of even less understanding than I have.

From my perspective, there are two sides to understanding and consent: the technology itself and the way gathered data is being used. I’m going to save those for a future blog post, but I’ll link back to this one, and edit this to link forward to them.