
Today’s chatbots are becoming more diverse—and so, too, are their creators

Users knew something was amiss when, within hours of its 2016 launch, Tay, Microsoft’s Twitter bot, began shouting about 9/11, Hitler, and building a wall on the U.S.-Mexico border.

Microsoft quickly shut the bot down and apologized for the public debacle when the bot’s offensive statements began to draw media scrutiny. Though “designed to engage and entertain people through casual and playful conversation,” Tay ultimately became a reminder that chatbots can unintentionally pick up the worst habits of their creators.

Lesson learned from Microsoft. (GIF via GIPHY)

Tay learned its racist ways from tweets authored by internet trolls who set out to hijack the bot. But bias can enter every phase of the chatbot production process. The data used to train a bot may over- or under-represent a certain worldview, or the development team may overlook the concerns of minorities. Bias can even creep into the way a chatbot is conceived, as with Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana, which critics have pointed out were given “female” personalities and voices, potentially reflecting and reinforcing gender bias.

“Whether this bias is conscious or unconscious, it perpetuates the idea of a subservient female [who’s] there to provide unwavering support. By contrast, male voices tend to be used for computers where an authoritative tone is required,” said Kate Devlin, a senior lecturer in social and cultural artificial intelligence at King’s College London.

To combat bias, experts and other chatbot creators are drawing on ethical standards as a foundation for building chatbots, and keeping diversity at the forefront of every stage—from assembling the team to user design and training data.

Julie Carpenter, a research fellow in the ethics and emerging sciences group at California Polytechnic State University, said a diverse team at every stage—be it planning, development, testing, or refining—is key to addressing bias in chatbots and other digital assistants. “It must be an interdisciplinary endeavor, including people with expertise in society, not only in technology,” she said.

Every stage of an AI’s development can be influenced by its creators, even the cute ones. (GIF via GIPHY)

For instance, the team that created genderless voice Q, which Carpenter advised, was composed of linguists, researchers, and sound designers. Moreover, the project had further input from organizations such as Copenhagen Pride and Equal AI, whose aim is to reduce bias in artificial intelligence.

Developers are also addressing bias during the design process. F’xa, a feminist chatbot developed by nonprofit organization Feminist Internet, was built using standards informed by AI researcher Josie Young’s feminist chatbot design process. This series of questions helps developers consider “how your values and position might lead you to choose one option over another or hold a specific perspective on the world.”

“Drawing on the standards, we made certain design decisions,” said Feminist Internet co-founder Charlotte Webb. “For example, F'xa never says ‘I.’ This was our way of making sure the person chatting is aware they're interacting with a bot, not a human. It's important that designers are conscious about the emotional attachments people form with technologies that aren't lifelike.”
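As a rough illustration of how a rule like this might be enforced in practice, the sketch below checks a drafted reply for first-person language before it is sent. It is purely hypothetical, not F’xa’s actual implementation; the function name and word list are assumptions.

```python
import re

# Words that would make the bot sound like a person speaking in the first person.
# The list is illustrative, not exhaustive.
FIRST_PERSON = re.compile(r"\b(i'm|i've|i|me|my|mine)\b", re.IGNORECASE)

def enforce_no_first_person(reply: str) -> str:
    """Reject a drafted reply if it uses first-person language."""
    if FIRST_PERSON.search(reply):
        raise ValueError(f"Reply breaks the no-first-person rule: {reply!r}")
    return reply

print(enforce_no_first_person("F'xa thinks this resource might help."))
# enforce_no_first_person("I think this might help.")  # would raise ValueError
```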

Representation and inclusion are also crucial in eliminating bias. With the F’xa chatbot, Webb and her team included a wide range of skin tones in the emojis within the conversation. “We wanted to represent a range of viewpoints, rather than just embed a single narrative,” she said.

Other companies have deliberately designed gender-neutral chatbots in a bid to combat bias. Capital One’s Eno and Kasisto’s Kai—both banking chatbots—as well as Sage’s accounting chatbot Pegg were created with gender-neutral names and characters.

Establishing a diverse set of training data, which AI-powered chatbots and digital assistants use to “learn,” and designing for a diverse group of users, can also help reduce bias. “Beware of the ‘universal user,’” said Webb. “Humans are too complex and diverse for a one-size-fits-all approach.”
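To make the idea concrete, here is a minimal, hypothetical sketch of auditing a training corpus for representation imbalance before it is used to train a chatbot. The group labels, threshold, and toy data are assumptions for illustration, not any vendor’s actual pipeline.

```python
from collections import Counter

def representation_report(examples, min_share=0.10):
    """Report each group's share of the corpus and flag groups below min_share."""
    counts = Counter(group for _, group in examples)
    total = sum(counts.values())
    return {
        group: (n / total, "under-represented" if n / total < min_share else "ok")
        for group, n in counts.items()
    }

# Toy corpus of (utterance, self-described speaker group) pairs; labels are made up.
corpus = [
    ("How do I reset my password?", "en-US"),
    ("How do I reset my password?", "en-US"),
    ("Can I open a joint account?", "en-US"),
    ("Wha gwaan, can yuh check mi balance fi mi?", "en-JM"),
    ("¿Puedo abrir una cuenta conjunta?", "es-MX"),
]

for group, (share, status) in representation_report(corpus, min_share=0.25).items():
    print(f"{group}: {share:.0%} of examples ({status})")
```

Running the report before training makes it obvious which user groups the bot will rarely have seen, so the team can gather more examples rather than ship a bot tuned to a single “universal user.”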

Ultimately, understanding their own biases helps developers put processes in place to avoid them. “People must keep the ideas of inclusion and accurate representation top of mind, and prioritize these efforts throughout the product or service development process,” Carpenter said. “Technologists, decision-makers, researchers, and stakeholders must champion diversity and inclusion in their crafted products because they impact the world in remarkable ways.”

Looking to further understand the AI market and how it can help you? Request a demo to learn more about how Joinedapp can help automate your interactions, allowing you to reach a broader range of consumers.

Article by Rina Diane Caballar