According to leaked documents, Google's chatbot competitor, Bard, is being trained by thousands of contractors who face significant pressure to review AI-generated answers under extremely tight deadlines, some as short as three minutes.
The documents shed light on the minimal training provided to these workers, who nonetheless bear the responsibility of ensuring the accuracy and reliability of Bard's responses.
Reports suggest that the effectiveness of Google’s Bard, which aims to rival OpenAI’s ChatGPT, heavily relies on the efforts of contractors employed by companies like Appen and Accenture.
However, these contractors receive inadequate training and earn as little as $14 per hour. Numerous contractors, speaking on condition of anonymity, have voiced concerns about the stressful work environment and the lack of clarity surrounding their roles.
Google’s response to the threat posed by OpenAI’s ChatGPT highlights the intensity of the ongoing AI race between the two tech giants.
Chatbots such as Bard and ChatGPT leverage powerful language models to generate responses, but human involvement remains crucial in reviewing and ensuring the reliability and accuracy of these answers.
The leaked documents show that contractors are assigned tasks such as rating the helpfulness of responses on a scale from "not at all helpful" to "extremely helpful." Contractors must also judge whether answers are responsive to the prompt and up to date.
One contractor, speaking to SurgeZirc SA, expressed concerns about the prevailing culture of fear, stress, and low wages, emphasizing the detrimental impact on quality and teamwork among workers.
The report underscores Google's determination to counter OpenAI's advances as the two companies vie for leadership in deploying AI technologies worldwide. Microsoft's significant investment in OpenAI has further intensified the competition.
In response to the allegations, a Google spokesperson assured SurgeZirc SA that the company is committed to providing high-quality information and developing AI products responsibly.
They emphasized rigorous testing, training, and feedback processes aimed at ensuring factual accuracy and minimizing bias.
The spokesperson added that while human evaluation is one of the approaches the company uses, rater feedback does not directly influence the output of the models.
Moreover, Google employs various techniques and specialized teams to continually improve the quality and accuracy of its products. SurgeZirc SA reached out to Appen and Accenture for comment, but neither has responded.