McIntosh recommends seeking out third-party resources and subject matter expertise. “It will greatly assist with expediting the development and execution of your plan and framework,” McIntosh explains. “And, based on your current program management practices, provide the same level of rigor — or more — for your AI adoption initiatives.”
Treading slowly so AI doesn’t ‘run amok’
The Laborers’ International Union of North America (LIUNA), which represents more than 500,000 construction workers, public employees, and mail handlers, has dipped its toes into using AI, mainly for document accuracy and clarification, and for writing contracts, says CIO Matt Richard.
As LIUNA expands AI use cases in 2024, “this gets to the question about how we use AI ethically,” he says. The organization has started piloting Google Duet to automate the process of writing and negotiating contractor agreements.
Right now, union officials are not using AI to identify members’ wants and needs, nor to comb through hiring data that might be sensitive and return biases on people based on how the models are trained, Richard says.
“Those are the areas where I get nervous: when a model tells me about a person. And I don’t feel we’re ready to dive into that space yet, because frankly, I don’t trust publicly trained models to give me insights into the person I want to hire,” he says.
Still, Richard expects a “natural evolution” in which, down the road, LIUNA may want to use AI to derive insights into its members to help the union deliver better benefits to them. For now, “it’s still a gray area on how we want to do that,” he says.
The union is also trying to grow its membership and part of that means using AI to identify prospective members efficiently, “without identifying the same homogenous people,” Richard says. “Our organization is pushing very hard and does a good job of empowering minorities and women, and we want to grow those groups.”
That’s where Richard worries about how AI models are used, because avoiding “the rabbit hole of finding the same stereotypical demographic” and introducing biases means humans must be part of the process. “You don’t just let the models do all the work,” he says. “You understand where you are today, and then we stop and say, ‘OK, humans need to intervene here and look at what the models are telling us.’”
“You can’t let AI run amok … with no intervention. Then you’re perpetuating the problem,” he says, adding that organizations shouldn’t take the “easy way out” with AI and only delve into what the tools can do. “My fear is people are going to buy and implement an AI tool and let it go and trust it. … You have to be careful these tools aren’t telling us what we want to hear,” he says.
To that end, Richard believes AI can be used as a kick-starter, but IT leaders must rely on their teams’ intuition “to make sure we’re not falling into the trap of just trusting flashy software tools that aren’t giving us the data we need,” he says.
Taking AI ethics personally
Like LIUNA, Czech-based global consumer finance provider Home Credit is early in its AI journey, using GitHub Copilot for coding and documentation processes, says Group CIO Jan Cenkr.
“It’s offered a huge advantage in terms of time-saving, which in turn has a beneficial cost element too,” says Cenkr, who is also CEO of Home Credit’s subsidiary EmbedIT. Ethical AI has been top of mind for Cenkr from the start.
“When we started rolling out our AI tool pilots, we also had deep discussions internally about creating ethical governance structures to go with the use of this technology. That means we have genuine checks in place to ensure that we do not violate our codes of conduct,” he says.
Those codes are regularly refreshed and tested to ensure they are as robust as possible, Cenkr adds.
Data privacy is the most challenging consideration, he says. “Any information and data that we feed into our AI platforms absolutely has to comply with GDPR regulations.” Because Home Credit operates in multiple jurisdictions, IT must also ensure compliance in all those markets, some of which have different laws, adding to the complexity.
Organizations should develop their governance structures “in a way that reflects your own personal approach to ethics,” Cenkr says. “I believe that if you put the same care into developing these ethical structures that you do into the ethics you apply in your personal, everyday life, these structures will be all the safer.”
Further, Cenkr says IT should be prepared to update its governance policies regularly. “AI technology is advancing daily and it’s a real challenge to keep pace with its evolution, however exciting that might be.”
Put in guardrails
AI tools such as chatbots have been in use at UST for several years, but generative AI is a whole new ballgame. Generative AI fundamentally changes business models and has made ethical AI part of the discussion, says Krishna Prasad, chief strategy officer and CIO of the digital transformation company, while admitting that “it’s a little more theoretical today.”
Ethical AI “doesn’t always come up” in implementation considerations, Prasad says, “but we do talk about … the fact that we need to have responsible AI and some ability to get transparency and trace back how a recommendation was made.”
Discussions among UST leaders focus on what the company doesn’t want to do with AI “and where do we want to draw boundaries as we understand them today; how do we remain true to our mission without producing harm,” Prasad says.
Echoing the others, Prasad says this means humans must be part of the equation as AI is more deeply embedded inside the organization.
One question that has come up at UST is whether it is a compromise of confidentiality if leaders are having a conversation about employee performance as a bot listens in. “Things [like that] have started bubbling up,” Prasad says, “but at this point, we’re comfortable moving forward using [Microsoft] Copilot as a way to summarize conversations.”
Another consideration is how to protect intellectual property around a tool the company builds. “Based on protections that have been provided by software vendors today, we still feel data is contained within our own environment, and there’s been no evidence of data being lost externally,” he says. For that reason, Prasad says he and other leaders don’t have any qualms about continuing to use certain AI tools, especially because of the productivity gains they see.
Even as he believes humans need to be involved, Prasad also worries about their input. “At the end of the day, human beings inherently have biases because of the nature of the environments we’re exposed to and our experiences and how they formulate our thinking,” he explains.
He also worries about whether bad actors will gain access to certain AI tools as they use clients’ data to develop new models for them.
These are areas leaders will have to worry about as the software becomes more prevalent, Prasad says. In the meantime, CIOs must lead the way and demonstrate how AI can be used for good and how it will impact their business models, and bring leadership together to discuss the best path forward, he says.
“CIOs have to play a role in driving that conversation because they can bust myths and also execute,” he says, adding that they also have to be prepared for those conversations to at times become very difficult.
For example, if a tool offers a certain capability, “do we want it to be used whenever possible, or should we hold back because it’s the right thing to do?” Prasad says. “It’s the most difficult conversation,” but CIOs must present that a tool “could be more than you bargained for. To me, that part is still a little fuzzy, so how do I put constraints around the model … before making the choice to offer new products and services that use AI.”