The Edge of Risk
New thinking on corporate risk and resilience in the global economy.

AI’s Hidden Workforce Needs Recognition — And Higher Wages

There is a hidden — and largely underpaid — labor force driving much of AI innovation. Thousands of workers are employed by tech companies to label data that improves AI algorithms or to transcribe audio that helps voice recognition systems better understand human speech.

Saiph Savage, the director of the human-computer interaction lab at West Virginia University and co-director of the Civic Innovation Lab at UNAM (National Autonomous University of Mexico), spoke to BRINK about her research on what she calls AI’s invisible workforce.

SAVAGE: A great portion of the artificial intelligence that we benefit from as end users — from self-driving cars and personalized search results to not being exposed to pedophilia on social media platforms — requires human workers on the backend.

Because these workers are not known to the end user, we call them invisible workers. They are doing tasks such as labeling images so that a self-driving car can better distinguish a tree from the road. Through this type of labor, artificial intelligence algorithms are able to function better.

But the fact that these workers are hidden has facilitated their low wages and limited some of their opportunities for skill growth and development. That is the problem.

BRINK: What kind of companies are hiring them?

SAVAGE: I would say all large tech companies are using this type of invisible labor. Basically, you have platforms where these big tech companies like Google, Uber and Microsoft advertise a series of tasks; these workers complete the tasks and then the company will collect all of the responses and use that to feed into their AI services. 

The tech companies also offer startups access to this labor through their APIs. For instance, Microsoft has a platform called UHRS, where companies can post tasks and have them completed by these workers. For example, some tasks might involve having a person from Texas read text aloud so that Cortana, Microsoft's intelligent virtual assistant, can understand a Texan accent. Other tasks might involve labeling photos that involve pedophilia so that Microsoft Bing will not show such search results to end users. Similarly, Uber might post tasks asking workers to label videos that help train Uber's self-driving cars.

Work That Benefits Rural Areas

BRINK: Have you done analysis on what kind of people are doing this work? 

SAVAGE: Yes. I recently did a study about the challenges and opportunities that these invisible workers are facing in urban and rural America, since a significant number of Americans are working on these platforms. 

A lot of these workers are working full time. For some rural workers, this type of labor is highly beneficial because it provides them with opportunities that they do not have access to in their hometowns. We did interviews with workers from West Virginia, where a lot of mining jobs have now disappeared, and these platforms have become an excellent way to get jobs without people having to leave their hometowns; the work offers them the opportunity to stay where they are.

Workers with disabilities have also found this type of job very beneficial, as well as people with depression. Because these jobs tend to be small and easy to complete, people with depression told us that the tasks motivated them, because it helped them to feel that they were getting things done, which made them feel more self-sufficient.


People with chronic disease also expressed in interviews that they found the work helpful, because they could take breaks whenever they wanted, or it was easy for them to complete the task from their bed. So there are benefits to this type of work. 

AI Is Not Factoring in the True Cost of the Labor

BRINK: What sort of compensation or benefits do these jobs have?

SAVAGE: You basically get paid to complete the task. So, for instance, somebody might say, “Transcribe this audio for me so that Cortana can understand what this user from Texas is saying.” 

The problem is that these tasks do not tell workers how much they will earn per hour, only how much they will be paid per task, and it is often unclear how long the work will actually take.
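To see why per-task pricing obscures the real wage, consider a back-of-the-envelope calculation. The task, price, and time below are hypothetical illustrations, not figures from Savage's research:

```python
def effective_hourly_wage(task_price_usd: float, minutes_spent: float) -> float:
    """Convert a per-task payment into an effective hourly wage.

    Counts all time the worker actually spends, not just the time
    the requester estimates the task "should" take.
    """
    return task_price_usd * (60.0 / minutes_spent)

# Hypothetical example: a transcription task advertised at $0.50.
# If it really takes 12 minutes -- including finding the task and
# re-listening to unclear audio -- the effective wage is:
wage = effective_hourly_wage(0.50, 12.0)
print(f"${wage:.2f}/hour")  # $2.50/hour
```

The advertised price alone tells the worker nothing; the wage only becomes visible once the true time cost is factored in.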

Additionally, there is a lot of invisible labor that workers have to do — work that, traditionally, companies would pay workers to do; but to reduce the costs of incorporating humans into the workflow of AI, companies are no longer covering these costs. Rather, they are putting those costs onto the shoulders of workers.

So we’re living in a reality where we now have these futuristic services of self-driving cars, etc., but the problem is that this future is not factoring in the full cost of the labor, the costs of having those workers in the backend and what it might mean to them long term. 

BRINK: So what do you recommend in terms of how this could be improved?

SAVAGE: On one hand, we could start to clearly label the AI that is created with fair human labor. We could think about something similar to what we have with food or clothes that we buy, where there is a label that says, “this was produced by fair labor.” So you have something that says: “This AI is brought to you with fair wages and worker conditions.”

Shedding light on these working conditions could drive large tech companies to treat invisible workers better and ensure that workers are paid fairly and have opportunities to develop themselves.

Using AI to Improve Conditions of AI Workers

We can also improve workers’ conditions through socio-technical solutions and with AI itself. I have been developing AI-based tools to help guide invisible workers about how they can earn higher wages or develop their skills. 

My tools focus on learning what type of tasks a worker should do in order to reach their professional goals. One goal might be to earn more; another might be, "I want to develop my skills in transcription, because then I can access other types of jobs."

I have also been designing AI tools focused on guiding employers of invisible workers. We can help employers understand when they are using unfair employment practices. 

I have developed intelligent tools that learn to detect whether an employer is evaluating the labor of invisible workers in an unfair way and nudge them to reconsider their evaluations. Evaluations are critical for workers, because just one bad review can get a worker permanently banned from a platform, causing them to lose their livelihood.

Interestingly, the workers are also starting to organize themselves. Workers are creating tools like Turkopticon, which helps them to find employers who are fair and pay them adequately.

Saiph Savage

Director of the HCI Lab at West Virginia University @saiphcita

Saiph Savage is the co-director of the Civic Innovation Lab at the National Autonomous University of Mexico (UNAM) and director of the HCI Lab at West Virginia University. Her research involves the areas of Crowdsourcing, Social Computing and Civic Technology. For her research, Saiph has been recognized as one of the 35 Innovators Under 35 by MIT Technology Review.
