Police brutality. Asian hate. Anti-immigrant. LGBTQ+ rights. Voter disenfranchisement. Rising unemployment. Housing crisis. These are just a handful of the phrases that, when spotted in today’s news headlines, social media postings, and public protests, make me feel a pressing need to do something. However, a question I’ve been grappling with recently is: How can I scale social change responsibly?
Two years ago, I learned that my good intentions are capable of causing significant harm.
When PInT received a well scoped web-scraping project from a nonprofit that uses technology to identify victims of human trafficking, I signed up immediately. I had looked forward to using my technical skills to help others since applying to engineering school, and was eager to get started. After our team’s first meeting with the nonprofit stakeholders, I felt energized, like a vigilante about to dish out justice to the perpetrators of human trafficking.
When we started working on this project, I had little knowledge of the complexity of the human-trafficking space, or of the ethical concerns that accompany any attempt at intervention. We met with professors who work in the trafficking prevention space, and researched previous intervention attempts like the SESTA-FOSTA bill, where a well-intentioned attempt to prevent human trafficking led to a policed environment that forced voluntary sex workers into riskier and more dangerous situations. Over multiple months of research and deliberation, our team gradually shifted our focus from asking about data privacy risks to asking crucial questions about the power dynamics of the project, and the needs of the vulnerable people we wanted to help. Who decides whether someone is a victim or simply trying to make a living? Are there lasting support networks in place for all victims who are identified?
Looking at the project from the perspective of potential human trafficking victims, I realized that the people with the power to make decisions about their welfare were all ex-military white men, who had chosen to partner only with law-enforcement agencies. These men with good intentions have vastly different lived experiences from those of human trafficking survivors, and excluded victims and survivors from their decision-making processes. They communicated in dehumanizing combatant terms like “extract” and “offensive/defensive”, which sharply contrast with the compassionate language and values of survivor support groups. Their responsibility ends as soon as they pass on victim information to law enforcement, and they carry no accountability for what happens to victims afterwards. Our team had even less experience and contact with survivors of human trafficking, and could hypothetically hand off the technical tool we were asked to make, with no accountability for how that tool would be used.
The project proposal, which had once seemed so straightforward, was suddenly rife with ethical dilemmas. Our job was to cast a wide net in our data collection, and let the police-affiliated nonprofit decide which people were victims who needed help. Once this decision is made, the ‘victim’ has no choice in whether or not their information is passed on to law-enforcement agencies. The people whose data we were told to collect could be selling sexual services voluntarily, trying to make ends meet, or avoiding more dangerous situations. The repercussions of misidentifying someone as a victim without their consent are severe: prostitution charges, incarceration, a permanent criminal record, and/or loss of child custody. I realized that an intent to “do good” is not sufficient to prevent causing significant harm to the people I aim to serve.
Due to the many dimensions of risk that our team could not account for, we all felt uncomfortable continuing to implement this project. When we respectfully communicated our concerns and our decision not to proceed to our nonprofit client, they thanked us but did not make any changes. I was initially afraid that by refusing to continue the project, we had failed to validate PInT’s student-driven consulting model. I wished we could have continued working with the nonprofit to include survivor voices in their process and guide the work they are doing. However, as students with limited time and no direct connections with survivors, we were not well poised to be doing that sensitive work.
Looking back, I am proud of my team for making a difficult decision that centered care for the people we were designing for. I have come to realize that there is immense value in modeling for our community of budding engineers that saying “No” is a valid, sometimes necessary action for preventing harm to others and ourselves. I learned from this experience that refusing to continue a line of work can feel futile, but the personal risk the action carries can add weight to my words. There are hundreds of engineers out there who could take my place if I say “No”, but if I keep my head down, how can I expect others to step up, or for any meaningful change to happen? Through my refusal, I can grow my own ethical practice, and potentially influence the practice of others.
After these revelations, my mindset swung from “do good” to “do no harm”. For a time, I chose smaller-scale projects that were closer to home, with less risk of severe consequences, but also less potential for significant impact. I battled a feeling of paralysis that threatened to keep me from experiencing new contexts and challenges. A project-based class called Affordable Design and Entrepreneurship (ADE) helped me feel more equipped to commit to large-scale change, while still being cautious of potential consequences and prioritizing stakeholder needs.
In ADE, I joined a team on a multiyear project with a mission to abolish the carceral system in Massachusetts. This is an important and ambitious goal. In the United States, the carceral system is massive and racist. Mass incarceration in America has deep historical roots and causes harm to people before, during, and after they are locked in prisons. It’s a complex, daunting problem that no one person, team, community, state, or even political party could take on alone. Our challenge is to protect people of color while finding, scoping, and implementing a specific project that intervenes in the massive system built against them.
When I started the project, the team had already spent almost two years focused solely on learning about the context of mass incarceration in Massachusetts, and connecting with related local organizations, community organizers, and formerly incarcerated people. In the human trafficking prevention project, the only voices we initially heard were those of the nonprofit employees, who hadn’t spoken much to the people they served. In ADE, by contrast, building relationships with stakeholders was and still is our first priority, so that we can center the voices of the people who are directly affected in all of our project-related decisions.
This semester, our goal was to scope and evaluate the impact of a specific project proposal: creating a publicly accessible database of police traffic-stop data, with analysis tools to help public defenders and community organizers statistically prove the racial discrimination they see every day. Given my state of paralysis at the time, I was wary of this project because of the ways in which it could cause harm to people of color. Sharing police accountability data could lead to retaliation in the local community, and collecting traffic-stop data could put people at risk of employment discrimination if that data is identifiable or leaked.
ADE has us mitigate harm by speaking to people who have relevant expertise, so that we understand the full spectrum of risks we are taking on. For example, I spent a few weeks interviewing data advocacy experts, who asked us where we get data from, who is responsible for storing it, and how it will be shared. If we dove directly into technical prototyping and user testing without answering these questions, we could end up with a useless outcome, or accidentally leak sensitive information to the wrong people. By taking the time to fully understand the problem before trying to solve it, we can make more deliberate decisions that design against harm.
However, given the generally unpredictable nature of humans, we can design for years and some amount of risk will still remain. At a certain point, the only way to learn if a solution will be effective is by testing it in a non-hypothetical setting. The ADE framework encourages us to mitigate harm by trying out solutions at a small scale, where we can easily test our assumptions and measure the effects. Our team will test our new project by collecting a small amount of data to analyze, and sharing it with a group of people we trust, for feedback. The stakes are low, so that if we discover negative consequences or exceptions to our base assumptions, we can take a step back and adjust given that information, or try something else altogether. Reflecting on this fluid process has helped me realize that scaling change can be nonlinear.
If we receive positive feedback, that is a good indicator that we can expand a little more and gather perspectives from different people. The leader of a local community organizing group working against systemic racism and police brutality expressed that they could make a lot of change with the proposed data tools, but never had the time to create them, given all the more immediately pressing issues on their plate. There is a clear need for our project coming from the people we serve, and our team has the bandwidth, technical skill set, and core mission alignment to implement it with minimal risk. All of these factors indicate to me that in this case, our team is well poised to be doing this type of work, and should continue down this path.
Like a dialogue between people, the act of scaling can expand or backtrack, and change course based on input from people who are directly affected. The fact that all of these are valid outcomes helps me overcome my feeling of paralysis, because it means I can still fail on a small scale, then learn and grow, to inform a better solution with minimal overall harm. Over time, I can imagine our team making better and better decisions, if we base them on an accumulation of insights from past experiences. However, I still feel a tension between scaling like a dialogue, which is a process that cannot be rushed, and meeting the urgency of issues that are happening in the moment. People are dying from police brutality, and suffering behind bars because of the color of their skin. I don’t think dialogue-scaling is enough for change that needs to happen right now, and I wonder what alternative frameworks exist for making immediate change responsibly. My perspective on change-making has grown a lot at Olin, but I still have a lot to learn about the nuances of balancing momentum and risk in high-impact situations.