A couple days ago, my girlfriend and I had a debate (let’s call it a discussion) about caffeine.
I read that it decreases your appetite. She disagreed, saying that caffeine actually increases our urge to raid the fridge.
It turns out the answer depends not only on who you ask, but on how you ask.
This will sound obvious, but the way you phrase a Google search directly affects your results.
A question like, “does caffeine increase appetite” will boost the number of hits that link caffeine and hunger. The opposite is true if you include words like “decrease” or “suppress.”
Typing “how does caffeine affect appetite” or “caffeine and appetite” is your best shot at uncovering the truth — or at least what most researchers now understand to be true.
Our conversation highlights the fact that search engines aren’t objective. They’re tools designed for human use, and people are always influenced by their own beliefs, biases and experiences.
It’s easy to see how our worldview can sway familiar tools, like search engines, but the connection gets trickier — and more important — when we look at emerging technologies.
Artificial Intelligence (AI) and machine learning have now given ordinary citizens (i.e. those who aren’t necessarily rich or powerful) a disproportionate ability to influence people’s hearts and minds on a major scale.
We’re at a critical point in history, where we can collectively choose to use this technology for good, or to exploit people’s biases and belief systems.
A Spectrum of Opportunity
These are the early days of data science and machine learning. As both individuals and organizations start to experiment with emerging technologies, there’s a full spectrum of applications.
At one end of the spectrum (let's call it the far right), there's the 2016 U.S. election. Almost two years later, we're still learning how the Trump campaign used psychographic profiles to motivate voters.
Groups like DataKind live at the other end of the continuum. This data science community works with social change organizations to tackle challenges across education, poverty, health, human rights, cities, and the environment.
Most applications land right in the middle. Companies are learning how to use data and machine learning for capitalistic gains.
That's not necessarily a bad thing. After all, most companies are incentivized to make as much money as possible to serve their employees, stakeholders, and/or founders. Data-enabled capitalism feels darker when companies use social platforms to target vulnerable people and spread unhappiness.
We’ve always had advertising, but the niche targeting and sheer volume of data we encounter online is killing our attention spans. It’s also fuelling the need for instant gratification.
I lead the HubSpot team that’s building GrowthBot — a chatbot for marketing and sales. As you may know, a bot is an automated computer program designed to simulate conversation with human users.
It uses AI (computer systems that perform tasks normally requiring human intelligence) to process data. GrowthBot integrates with over a dozen systems and APIs to help people accomplish more with less distraction.
Are we using GrowthBot to make money? Yes. HubSpot is a for-profit business. At the same time, we’re building this technology to help people regain their focus. Everyone is overwhelmed with noise and data. Our bot works behind the scenes to conduct research, connect accounts, pull files and perform routine tasks, so users experience that elusive state of flow.
The Backlash Against Big Data
As technology leaps forward, there’s a growing backlash against machine learning. Some people think AI will be humanity’s downfall. Others are afraid it will replace their jobs.
I understand. We’ve all watched technology overhaul industries ranging from auto manufacturing to entertainment.
But machines and people have complementary strengths. Computers can't have "aha" moments or come up with insights, just as humans will always struggle to store and process large amounts of data.
We can make the same distinctions about AI and algorithms. If you’re concerned about how they influence behavior, let’s clear up a common misconception: technology alone can’t change people’s hearts and minds.
Instead, it enables motivated groups (like the Trump campaign) to identify and potentially exploit our biases. Everything we share, search, like, and post online creates a comprehensive profile of who we are and what we believe.
Combine that with our social graph — the online networks we create and nurture — and data science can make surprisingly accurate conclusions about our deepest psychology.
For example, data scientists can predict, with a high degree of accuracy, whether someone believes in climate change based on their support for the National Rifle Association (NRA).
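To make that concrete, here's a toy sketch of the simplest version of this idea: estimating the probability of a belief given one profile signal. All the names and numbers below are made up for illustration; real psychographic models combine thousands of signals with far more sophisticated statistics.

```python
# Toy illustration: predicting a belief from a single profile signal.
# The data below is synthetic, not real survey results.

def conditional_probability(rows, feature, label):
    """Estimate P(label is True | feature is True) from a list of profiles."""
    with_feature = [r for r in rows if r[feature]]
    if not with_feature:
        return None  # no one in the sample has this feature
    return sum(1 for r in with_feature if r[label]) / len(with_feature)

# Each row is one (synthetic) person's online profile signals.
people = [
    {"supports_nra": True,  "believes_climate_change": False},
    {"supports_nra": True,  "believes_climate_change": False},
    {"supports_nra": True,  "believes_climate_change": True},
    {"supports_nra": False, "believes_climate_change": True},
    {"supports_nra": False, "believes_climate_change": True},
    {"supports_nra": False, "believes_climate_change": False},
]

p = conditional_probability(people, "supports_nra", "believes_climate_change")
print(round(p, 2))  # share of NRA supporters (in this toy sample) who believe in climate change
```

Even this crude estimate shows how one signal shifts a prediction; scale it up to likes, shares, searches, and a social graph, and the predictions become uncomfortably accurate.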
How can we get smarter about machine learning?
As I said earlier, we’ve reached an important crossroads. Will we use new technologies to improve life for everyone, or to fuel the agendas of powerful people and organizations?
I certainly hope it’s the former. Few of us will run for president or lead a social media empire, but we can all help to move the needle.
Consume information with a critical eye.
Most people won’t stop using Facebook, Google, or social media platforms, so proceed with a healthy dose of skepticism. Remember that the internet can never be objective. Ask questions and come to your own conclusions.
Get your headlines from professional journalists.
Seek credible outlets for news about local, national and world events. I rely on the New York Times and the Wall Street Journal. You can pick your own sources, but don’t trust that the “article” your Aunt Marge just posted on Facebook is legit.
Search with open-ended questions.
Researchers have found that caffeine does not typically increase hunger. My girlfriend is usually right, but I guess it was my turn to win this debate. Bottom line: try to use the internet as objectively as possible. Use neutral words and open-ended questions. Don’t encourage biased links with your language.
Educate kids about online subjectivity.
Online literacy is increasingly critical. Kids of all ages (and even many seniors) need to understand how to do open-ended searches, how to vet information sources, and how to think more critically about what they see online.
Think beyond profits.
Maybe you’re a business owner who’s trying to integrate machine learning, bots or big data into your offering. Or, you’re a tech founder who’s actively working with AI. In either case, you have the opportunity to help us make new technologies more ethical, equitable and productive.
Look at text analysis software. We could be using this technology to analyze ancient history and learn how people evolved. Instead, we're using it to build grammar checkers and to suggest sales copy that converts browsers into buyers.
Clearly, there's nothing wrong with those applications; they're not hurting anyone. But the technology could do so much more to move our world forward. Text analysis could help crack the code on tricky social challenges.
If you have an opportunity to add value and enrich lives, make it happen. Do what you can to use data for good.
We're All in It Together
Please know that I’m not giving myself or my company a pass here. I know that GrowthBot won’t cure cancer. But, we are working to help people focus, accomplish more, and realize their own genius.
If we can help a cancer researcher to stay deep in their work and discover new insights, for example, then we have achieved our goal.
AI and machine learning are still in their infancy. That’s why this moment is so critical. We can all influence how new technologies evolve. We can manage our own consumption and lobby to see big data used for constructive pursuits.
Learn what you can and watch what’s happening in politics, business, and in your own sphere of influence. Stay informed. I know it’s easy to feel overwhelmed or disillusioned, but we can’t afford to check out. We need your voice. We need your ideas.
Originally published Jun 22, 2018 7:00:00 AM, updated June 21 2018