NYU VPN

I needed to set up a VPN in order to access my readings for class. The instructions for Linux are located at: https://nyu.service-now.com/sp?id=kb_article_view&sysparm_article=KB0014932

After you download the VPN client of your choice (they recommend Cisco AnyConnect), connect to vpn.nyu.edu.

It will ask for your NYU username and two passwords: your NYU password and a multi-factor authentication (MFA) code from Duo. See below for setting up Duo.

Hit connect and voilà, you're connected to the VPN.

Duo Authentication Setup

Go to https://start.nyu.edu and follow the instructions for MFA. They’ll tell you that a smartphone is the most secure method of setting it up. I am skeptical.

Install the Duo Mobile app on your phone, enter your phone number into the NYU web page (off of start.nyu.edu), and it will send a prompt to your phone to link it.

Commentary

Okay, I have to complain at least a little bit about this. I had to guess what the VPN address was because the instructions are for NYU Shanghai. I also had to install the VPN client using the terminal. These sorts of things make it harder for people to use Linux. Boo.

“All Animals Are Equal,” Peter Singer

I recently read “Disability Visibility,” which opens with a piece by Harriet McBryde Johnson about debating Peter Singer. When I got my first reading for my first class and saw it was Peter Singer, I was dismayed because of his (heinous) stances on disability. I assumed “All Animals Are Equal” was one of Singer’s pieces about animal rights. While I agree with many of the principles Singer discusses around animal rights, I feel as though his work on this front is significantly diminished by his work around disability. To put it simply, I can’t take Peter Singer seriously.

Because of this I had a lot of trouble reading “All Animals Are Equal” and taking it in good faith. I judged everything from his arguments to his writing harshly. While I don’t disagree with his basic point (all animals have rights), I disagree with how he made the point and the argument supporting it.

One of the things I was told to ask when reading any philosophy paper is “What is the argument?” or “What are they trying to convince you of?” In this case, you could frame the answer as “Animals have (some of) the same rights people do.” I think it would be more accurate, though, to frame it as “All animals (including humans) have (some of) the same rights” or even “Humans are just as worthy of consideration as animals are.”

I think when we usually talk about animal rights, we do it from a perspective of wanting to elevate animals to human status. From one perspective, I don’t like this approach because I feel as though it frames rights as something you deserve or earn, privileges you get for being “good enough.” The point about rights is that they are inherent — you get them because they are.

The valuable thing I got out of “All Animals Are Equal” is that “rights” are not universal. When we talk about things like abortion, for example, we talk about the right to have an abortion. Singer asks whether people who cannot get pregnant have the right to an abortion. What he doesn’t dig into is that the “right to an abortion” is really just an extension of bodily autonomy — turning one facet of bodily autonomy into the legal right to a medical procedure. I think this is worth thinking about more: translating high-level human rights into mundane, specific rights, and acknowledging that not everyone can exercise them or needs them.

busy busy

I’ve been working with Karen Sandler over the past few months on the first draft of the Declaration of Digital Autonomy. Feedback is welcome; please be constructive. It’s a pretty big deal for me, and feels like the culmination of a lifetime of experiences and the start of something new.

We talked about it at GUADEC and HOPE. We don’t have any other talks scheduled yet, but are available for events, meetups, dinner parties, and b’nai mitzvahs.

Fire

The world is on fire.

I know many of you are either my parents’ friends or here for the free software thoughts, but rather than listen to me, I want you to listen to Black voices in these fields.

If you’re not Black, it’s your job to educate yourself on issues that affect Black people all over the world and the way systems are designed to benefit White Supremacy. It is our job to acknowledge that racism is a problem — whether it appears as White Supremacy, Colonialism, or something as seemingly banal as pay gaps.

We must make space for Black voices. We must make space for Black Women. We must make space for Black trans lives. We must do this in technology. We must build equity. We must listen.

I know I have a platform. It’s one I value highly because I’ve worked so hard for the past twelve years to build it against sexist attitudes in my field (and the world). However, it is time for me to use that platform for the voices of others.

Please pay attention to Black voices in tech and FOSS. Do not just expect them to explain diversity and inclusion, but go to them for their expertise. Pay them respectful salaries. Mentor Black youth and Black people who want to be involved. Volunteer or donate to groups like Black Girls Code, The Last Mile, and Resilient Coders.

If you’re looking for mentorship, especially around things like writing, speaking, community organizing, or getting your career going in open source, don’t hesitate to reach out to me. Mentorship can be a lasting relationship, or a brief exchange around a specific issue or event. If I can’t help you, I’ll try to connect you to someone who can.

We cannot build the techno-utopia unless everyone is involved.

Racism is a Free Software Issue

Racism is a free software issue. I gave a talk that touched on this at CopyLeft Conf 2019. I also talked a little bit about it at All Things Open 2019 and FOSDEM 2020 in my talk The Ethics Behind Your IoT. I know statistics, theory, and free software. I don’t know about race and racism nearly as well. I might make mistakes – I have made some and I will make more. Please, when I do, help me do better.

I want to look at a few particular technologies and think about how they reinforce systemic racism. Worded another way: how is technology racist? How does technology hurt Black Indigenous People of Color (BIPOC)? How does technology keep us racist? How does technology make it easier to be racist?

Breathalyzers

In the United States, Latinx folks are less likely to drink than white people and, overall, less likely to be arrested for DUIs [3, 4]. However, they are more likely to be stopped by police while driving [5, 6].

Who gets stopped is up to the police, and they pull over a disproportionate number of Latinx drivers. After someone is pulled over for suspected drunk driving, they are given a breathalyzer test. Breathalyzers are so easy to (un)intentionally miscalibrate that they have been banned as valid evidence in multiple states. The biases of the police are not canceled out by the technology that should, in theory, let us know whether someone is actually drunk.

Facial Recognition

I could talk about facial recognition for quite some time and, in fact, have. So have others. Google’s image recognition software recognized Black people as gorillas – and to fix the issue it removed gorillas from its image-labeling technology.

Facial recognition software does a bad job at recognizing Black people. In fact, it’s also terrible at identifying Indigenous people and other people of color. (Incidentally, it’s also not great at recognizing women, but let’s not talk about that right now.)

As we use facial recognition technology for more things, from automated store checkouts (even more relevant in the socially distanced age of Covid-19) to airport ticketing, phone unlocking, and police identification, it becomes a bigger problem that this software cannot tell the difference between two Asian people.

Targeted Advertising

Black kids see 70% more online ads for food than white kids, and twice as many ads for junk food. In general, BIPOC youth are more likely to see junk food advertisements online. This is intentional, and it happens after they are identified as BIPOC youth.

Technology Reinforces Racism; Racism Builds Technology

The technology we have developed reinforces racism on a society-wide scale because it makes it harder for BIPOC to interact with a world that is run by computers and software. It’s harder not to be racist when the technology around us is being used to perpetuate racist paradigms. For example, if a store implements facial recognition software for checkout, Black women are less likely to be identified. They are then more likely to be suspected of trying to steal from the store. We are then more likely to take this to mean that Black women are more likely to steal. This is how technology builds racism.

People are being excluded largely because they are not the ones building these technologies, because they are not welcome in our spaces. There simply are not enough Black and Hispanic technologists, and that is a problem. We need to care about this because when software doesn’t work for everyone, it doesn’t work. We cannot deliver on the promise of free and open source software while we are excluding the majority of people.

Iron Cocktail Club: Ancho Paloma

The purpose of Iron Cocktail Club is to pick a cocktail and have the participants attempt to make it from whatever they have available.

I picked the Ancho Paloma because it’s the kind of drink I adore, but rarely think to make for myself. It’s turning into spring, and I wanted something light and refreshing.

An ancho paloma, in a peanut butter jar, with a grapefruit wedge.

Ingredients

1½ oz. Siete Misterios Doba-Yej mezcal
½ oz. Ancho Reyes
¾ oz. fresh grapefruit juice
½ oz. fresh lime juice
¼ oz. agave nectar
2 drops salt solution (1:1 salt to water)
Club soda
Grapefruit wedge dipped in sal de gusano

Combine all the ingredients except the club soda in a shaker with ice and shake until chilled. Strain into a Collins glass over ice. Top with soda and garnish.

My Recipe

1½ oz. Casamigos Reposado Tequila
½ oz. Ancho chile simple syrup*
¾ oz. fresh grapefruit juice
½ oz. fresh lime juice
¼ oz. (potato) vodka
pinch smoked salt**
Club soda
Grapefruit wedge sprinkled with smoked salt**

Combine all the ingredients except the club soda in a shaker with ice and shake until chilled. Strain into a mason jar over ice. Top with soda and garnish.

* Ancho Chile Simple Syrup

To make this, take:
½ cup sugar
¼ cup water
2 cut ancho chiles
Cayenne to taste

Put the sugar in a small pot on medium heat. Add the water, ancho chiles, and cayenne. Now comes the hardest part: leave it alone. Just don’t mess with it. Watch it, but don’t touch it. After a while it’ll turn a caramel color. At this point, turn the heat down to the lowest it goes and stir it. Once it’s appropriately syrupy (think a bit thinner than agave), turn off the heat and remove it from the burner. LET IT COOL DOWN BEFORE TRYING IT. Then, add cayenne to taste. Delicious, delicious taste.

A pot of simmering caramel syrup with ancho chiles in it.

**Smoked salt

½ cup kosher salt (or similarly ground salt)
1 tablespoon liquid smoke

Preheat your oven to 300 degrees. Mix the salt and liquid smoke. Spread the mixture out on a parchment-paper-lined cookie sheet or baking pan. Put it in the oven for 10-15 minutes, or however long it takes to dry out.

Smoked salt drying on a baking sheet.

Thoughts

I loved it! It was a Paloma with a bit of heat. Very refreshing. Wonderful spring drink!

IoT

We, in the US, are starting to talk more widely about the dangers posed by Internet of Things (IoT) devices. This is great!

IoT devices are by and large terrible. They’re truly horrendous. They can be nice in a lot of ways — I enjoy controlling the music in the kitchen from my phone — but they normalize the situation where we trade our privacy and data for convenience. This is not just true of obvious surveillance technologies, though it is especially true for them and I want to talk about those.

Most of the conversation I have seen about surveillance IoT — like Ring doorbells and home surveillance devices — is focused on its insecurity. Major news outlets covered the story of a girl who was harassed by someone hacking into a Ring camera her parents had installed in her bedroom.

Ignoring how creepy it is that her parents decided to install a camera in her bedroom, this story is disturbing because it’s about someone violating the sanctity of what should be a safe space for a family and, more so, for a child. Yet it is posed as a security issue affecting an individual.

We need to shift the conversation in two ways:

1) No amount of security will make these kinds of devices safe;

and

2) This is not just about the individual — these types of surveillance put communities at risk.

I think the latter is the more important point, and something I want to focus on. The conversation should not just be about the security risk of someone breaking into my home surveillance. Instead it should focus on how, for example, surveillance systems are putting your neighbors at risk, especially as these systems are being co-opted by law enforcement and faulty facial recognition tech is being used.

We should talk about how victims of domestic violence and stalking can be monitored and tracked more easily by their abusers.

I believe strongly that the people making decisions, designing, building, and selling these technologies have a responsibility to the people who purchase them as well as those who come in contact with them. I view broadening the conversations beyond the “unhackability” of devices as a necessary next step.

MollyGive 2019

After much deliberation, I decided not to do MollyGive 2019. This was a bit of a blow, especially after MollyGive 2018 had a lot less reach than previous years. (For MollyGive 2018 I supported a large, matching donation to the Software Freedom Conservancy.)

I’ve spent the past seven months helping a friend pay their rent. I’ve paid for groceries, train tickets, meals, coffee, and books for people I know and people I don’t. Medications for strangers. I stopped keeping track of the “good” I was doing, and instead just gave when I saw people in need.

This is counter to my past behavior. I’ve been burned a few times when offering funds to people who have a major need in their life — thousands of dollars to help people make major life changes only to have them instead use the money on other things.

I believe pretty strongly that, generally, people in need know what they need and are capable of taking care of it themselves. I don’t think it’s my place to dictate or prescribe. The experience of being burned and my belief that people know what they need were at odds.

At the same time, I saw people suffering around me. People who knew what they needed. People in positions I’ve been in: food or medication? A winter jacket or rent? I have the resources to take care of those material needs, so I supported them when the opportunity presented itself.

I have a friend who is scheduled to have surgery this spring. They have been given advice on how to fundraise for the surgery. In fact, people facing the prospect of life-saving, crushing-debt-generating treatments are given lots of information about how to run successful crowdfunding campaigns. This is appalling. You should be disgusted by it. You need to be disgusted by it.

Giving to charity helps. Giving to your neighbors helps. However, this is not enough. The sheer level of suffering and injustice in the world, in your country, your neighborhood, your home is sickening and giving ourselves a reprieve by donating to charities will not fix these systemic problems.

All of that being said, I have made donations to non-profits, and will make more. I hope you’ll join me in supporting groups that are doing good, necessary work. I also hope you’ll join me in striving to bring about the big societal changes that will make it so we don’t need so many charities.

My career has been in non-profits; it is my dearest hope to one day be out of a job. In the meantime, I’ll continue to work, and I’ll continue to give in whatever ways I can.

Consent

I was walking down the platform at the train station when I caught eyes with a police officer. Instinctively, I smiled and he smiled back. When I got closer, he said “Excuse me, do you mind if I swipe down your bag?” He gestured to a machine he was holding. “Just a random check.”

The slight tension I’d felt since I first saw him grabbed hold of my spine, shoulders, and jaw. I stood up a little straighter and clenched my teeth down.

“Sure, I guess,” I said uncertainly.

He could hear something in my voice, or read something in my change of posture. “You have to consent in order for me to be allowed to do it.”

Consent. I’d just been writing about consent that morning, before going to catch the train down to New York for Thanksgiving. It set me on edge and made more real what was happening: someone wanted to move into my personal space. There was now a legal interaction happening. “I don’t want to be difficult, but I’d rather you didn’t if you don’t have to.”

“It’s just a random check,” he said. “You don’t have to consent.”

“What happens if I say no?”

“You can’t get on the train.” He gestured to the track with his machine.

“So, my options are to let you search my bag or not go see my family for Thanksgiving?”

“You could take a bus,” he offered.

I thought about how I wanted to say this. Words are powerful and important.

“I consent to this in as much as I must without having any other reasonable option presented to me.”

He looked unconvinced, but swiped down my bag anyway, declared it safe, and sent me off.

Did I really have the right to withhold consent in this situation? Technically, yes. I could have told him no, but I had no other reasonable option.

At the heart of user freedom is the idea that we must be able to consent to the technology we’re directly and indirectly using. It is also important to note that we should not suffer unduly by denying consent.

If I don’t want to interact with a facial recognition system at an airport, I should be able to say no, but I should not be required to give up my seat, risk missing my flight, or spend exceptional effort as a consequence of refusing to consent. Consenting to something that you don’t want to do should not be incentivized, especially by making refusal carry extra risk or sacrifice.

In many situations, especially with technology, we are presented with the option to opt out, but that doesn’t just mean opting out of playing a particular game: it can mean choosing whether or not to get a life-saving medical implant; not filing your taxes because of government-mandated tax software; or being unable to count yourself in a census.

When the choice is “agree or suffer the consequences,” we do not actually have the option to consent.

Ethical Source (2)

Continued from “Ethical Source.”

Keeping Ethics in Open Source

For the sake of argument, we’re currently going to assume that open source (defined as “software under an OSI approved license”) does not adequately address social issues.

Ethical Source proponents suggest adopting licenses with “ethics clauses,” also frequently known as “do no harm clauses.” These include such points as:

  • must conform to local labor laws;
  • may not be used by governments;
  • may not be used to cause environmental destruction; and
  • may not be used to profit from “the destruction of people’s physical and mental health”

as well as the above examples from the Vaccine License and the Hippocratic License.

I would argue that these types of clauses are inherently flawed either due to ambiguity or unintended consequences.

To address the former, I want us to look at “environmental destruction.” There is a solid argument that all software causes environmental destruction – due to the drain on non-renewable energy resources. Software that makes cars safer also powers those cars, which fits into a narrative of car-driven environmental damage.

When considering “the destruction of people’s physical and mental health,” we have to acknowledge how much software is damaging to both the physical and the mental. I am definitely having back problems due to poor posture as I sit typing away all day at my laptop. Social media has enabled bullying that has literally killed people.

These sorts of clauses are just too ambiguous to use.

Then there are firmer qualifiers, like “must conform to local labor laws.” This seems rather straightforward, but there are plenty of places where women are still fighting for the right to work, for equal pay, and against all forms of discrimination. In some countries husbands can prevent their wives from working. Following local labor laws can mean creating a community where whole groups of people are not allowed to participate in the building of open source software.

I also want to point out that “government use” is a very broad category. Governments provide health care, social security, scientific funding, arts funding, and necessary infrastructure. By restricting government use, we are restricting our access to things like education and weather data.

Licenses are not the tool for pushing on every social issue. Licenses are a tool to build equity, and they are even a tool to fight against inequality, but they alone are not enough.

Seth Vargo pulled source code from the Chef code base when it came to light that Chef was working with ICE. Google employees staged walkouts and protests against Project Dragonfly. Tech workers and contributors can institute codes of conduct, ban companies doing evil from their communities, refuse to accept pull requests or contracts, unionize, collectively organize, and simply refuse to work when the technology they’re creating is being used for evil or by evil.

The other problem with Do No Harm licenses is that they require the adoption of those licenses. There are already many open source licenses to choose from. Much of the critical infrastructure we’re discussing is being built by companies, which I think are unlikely to adopt Do No Harm licenses.

Acknowledgments to Elana Hashman for ideas here.