AI consultant Ben Gold offers an in-depth look at the potentially disruptive impact of artificial intelligence on the workforce and society, specifically as it relates to diversity, equity, and inclusion.
Ben Gold, an AI consultant with over 20 years of tech sales experience, now focuses on helping organizations understand and ethically implement AI, particularly in the areas of diversity, equity, and inclusion. Ben joins Robert Wilson to discuss the rapid rise of generative AI, especially tools like ChatGPT, and the challenges that come with their acceptance in the workplace.
Ben’s fascination with AI was solidified after witnessing tools like ChatGPT open new possibilities for individual creators and businesses alike.
“ChatGPT was the democratization of AI,” he explains. “You don’t have to be a large corporation; you can be an individual and begin testing and learning how to create.” The real game-changer, according to Gold, was the acceleration of generative AI, a type of AI capable of creating original text, images, and even entire conversations.
The meteoric rise of AI, Ben warns, is not without risks. As new versions roll out, AI tools are becoming better at mimicking human language and behavior, often to the point that it is difficult to distinguish AI-generated content from human-generated content. “The pace of innovation is so breathtakingly fast that even for people like myself who try to follow it day by day, it’s hard to keep up,” he admits.
A key point of discussion was the intersection of AI and DEI, where Gold flags concerns about the ethical challenges and potential biases AI might perpetuate.
“AI is biased because it’s made by humans,” Ben claims. Since AI systems learn from data often produced or curated by people, they can inherit and even amplify societal biases. This becomes especially concerning when AI tools are used for high-stakes decisions, like hiring, where biases can have significant repercussions.
“If the training data is not representative of the real world,” Ben warns, “you can perpetuate stereotypes.” For instance, if a hiring algorithm disproportionately favors certain names or resumes that fit a historical profile, it can unintentionally discriminate against qualified candidates from underrepresented backgrounds.
Ben outlines how organizations might address these issues by taking a methodical, inclusive approach. First, he advises establishing an AI council within companies to bring together stakeholders from various departments. This council, he says, should not only include tech-savvy employees but also people from diverse backgrounds who can assess the impact of AI across different dimensions.
“If you’re aspiring to a workplace that takes into account things like neurodiversity, gender diversity, etc., then you need a council that provides insight into those elements,” he recommends. This council should work to craft AI policies and monitor the technology's implementation to ensure it aligns with ethical and DEI standards.
Ben also encourages organizations to create company-wide AI policies that clearly outline how employees should use AI in their daily tasks. “Companies need to determine what is allowed,” he explains, noting that many companies are currently banning AI tools out of caution.
According to Ben, however, AI adoption is inevitable, and therefore clear guidelines on the responsible use of AI are necessary. For example, Ben believes transparency on when AI is used to generate content or assist in hiring processes is an ethical obligation.
Throughout the conversation, Ben highlights the biases that AI can reinforce if left unchecked. He describes ten types of bias, ranging from data bias to algorithmic bias, that can skew AI-driven outcomes. “Data bias,” he explains, “is what happens if the training data is not representative of the real world.”
As an example, he notes that AI algorithms can inadvertently favor certain demographic profiles if the training data is outdated or insufficiently diverse. In recruiting, this could mean that candidates with ethnic names receive lower ratings, or that individuals with employment gaps (more common among women) are unfairly penalized.
“I would recommend every organization use a basis for their screening,” he explains, “but have someone evaluate how these algorithms are performing to make sure they’re in compliance with laws.”
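Ben's recommendation to have someone evaluate how screening algorithms are performing can be made concrete with a simple selection-rate audit. A minimal sketch, assuming a Python setting and hypothetical data: the check below applies the four-fifths rule from U.S. EEOC guidance, under which a group's selection rate falling below 80% of the highest group's rate may indicate adverse impact. The function names and the example data are illustrative, not from any particular vendor's tooling.

```python
# A minimal sketch of an adverse-impact audit for a screening algorithm,
# using the four-fifths rule from EEOC guidance: a selection rate below
# 80% of the highest group's rate may indicate adverse impact.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> {group: selection rate}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag each group whose selection rate is below `threshold` times
    the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

# Hypothetical screening results: (group label, passed the screen?)
results = (
    [("A", True)] * 60 + [("A", False)] * 40   # group A: 60% selected
    + [("B", True)] * 40 + [("B", False)] * 60  # group B: 40% selected
)
print(adverse_impact_flags(results))  # group B: 0.4/0.6 ≈ 0.67 < 0.8, flagged
```

An audit like this is only a first-pass screen; as Ben notes, compliance review should also cover the applicable laws and the algorithm's behavior over time, not just a single snapshot.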
As AI tools continue to evolve, Ben argues, so must our approach to AI literacy and inclusion. He recommends that DEI professionals stay informed on AI developments to effectively advocate for responsible practices. “Stay on top of this,” he implores. “You don’t want to get blindsided by suddenly seeing a new technology.”
Looking ahead, Ben underscores the importance of careful and culturally sensitive AI implementation. “If you don’t have a voice at the table at every company,” he cautions, “you’re not going to be able to advocate for fairness and equity.” As AI reshapes the future of work, having DEI and ethical voices guiding its adoption is more critical than ever.
To learn more about the ways you can get involved with the Tennessee Diversity Consortium, visit tennesseediversityconsortium.org/join-tdc. And be sure to follow Speak Up for Equity wherever you listen to podcasts.