The Big Brother Bias: Balancing Consequences and Benefits in AI

Data access and privacy issues are casting a shadow over the technology sector, as the data explosion threatens to become an ethics implosion. For our clients in IoT and AI, there is an additional concern, subtle but dangerous: machine learning and deep learning models learn the biases of their sources. They use algorithms to construct and refine models based on real-world datasets.

Even the most logically constructed, objective algorithm learns from the biases built into our world and hence into the data being generated, whether of ideology, zip code, race, or gender. This perpetuates those biases, now codified as facts. The skewed results can then unintentionally become the “gold insight” driving decision-making in business and industry.
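To make that mechanism concrete, here is a minimal sketch using synthetic data and scikit-learn. The feature names (a "skill" score and a "zip_group" proxy) and the penalty applied to one group are hypothetical, chosen only to show how a model trained on historically biased outcomes reproduces the bias in its learned weights; it is not any client's actual data or pipeline.

```python
# Illustrative sketch: an "objective" model trained on biased history learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: a skill score (legitimately predictive) and a zip-code
# group indicator that only matters because past decisions penalized one group.
skill = rng.normal(0, 1, n)
zip_group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B

# Historical labels: approvals depended on skill, but group B was penalized.
logits = 1.5 * skill - 1.2 * zip_group
approved = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Train a model on the biased history.
X = np.column_stack([skill, zip_group])
model = LogisticRegression().fit(X, approved)

# The zip_group weight comes out strongly negative: the historical penalty is
# now codified in the model and will be applied to future decisions.
print("learned weights [skill, zip_group]:", model.coef_[0])
```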

More than 180 human biases have been defined and classified, any one of which can affect how we make decisions.1

We know the upsides of AI: speeding medical diagnosis, enabling scientific research based on massive datasets from diverse sources, and informing first responders. AI capabilities such as neural networks, machine learning, predictive analytics, speech recognition, natural language processing, and facial recognition are already in use by industry, academia, and governments to speed automation, manage risk, and increase security. But we can’t lose sight of the downside that bias brings with it.

So why do we care as an agency? We want to help our technology-sector clients address the ethical and societal questions that technology innovation can unleash. This territory is newly resonant for the tech industry, and we want to face it head-on, because it affects their business and the people who use their products and services.

Here are a few ways we hope to support our clients and be good advocates for their customers and partners:

  1. Have the conversation: We want to listen to what clients are thinking about the possible negative consequences of AI technologies and solutions. And we want to find ways to share their insight, whether to help allay concerns or spark conversation with their intended audience.
  2. Embrace the complexity: These are complicated issues. Staying transparent about the ramifications of technology will help our clients to be thought leaders and trusted advisors.
  3. Remember the positive: Like most technology, AI has tremendous potential for good, as well as for harm. We are in a moment when fundamental assumptions are being assessed and tested. External regulations, standards bodies, and technology safeguards are likely to quickly take shape. We are here to help our clients move forward on all fronts—with innovation tempered by sustainability and conscious action.

Learn more
Partnership on AI
NYU’s AI NOW Institute
Intel’s AI public policy white paper
Microsoft’s AI for Good Initiative

Let's get started