In most instances, enterprises will access generative AI tools and systems through vendor offerings. But there are wide-ranging misconceptions about what tools can and cannot do — and how technology leaders can get implementation right.
Enterprises face an evolving regulatory landscape, security concerns, high costs and pressure to find ROI from the nascent tech. Add to that fears likening AI risk to pandemics, nuclear war and other catastrophic scenarios, and it’s easy to see why generative AI adoption is trickier than it might seem.
In an effort to set the record straight, we spoke with five executives working at technology companies that are either already offering generative AI or plan to do so. Leaders shared where they are internally implementing the technology, where they’ve found success, what they expect in the future and pitfalls to avoid.
At ServiceNow, Chris Bedi, chief digital information officer, said the team follows human-centric AI guidelines covering data quality, bias and transparency to users, and keeps a human in the loop to review outputs.
Bedi recommends that IT teams closely scrutinize whether AI-powered tools and systems consistently draw accurate conclusions, make sound decisions and implement appropriate solutions.
“Our approach is always ‘should we do it’ versus ‘can we do it,’” Bedi said, though he believes at least 70% of all service desk interactions could be fully automated.
“Humans will focus more on monitoring AI, and designing the platforms that are doing a lot of the work,” Bedi said about the future of service desk positions. “The more AI does the work of machines, the less humans need to work like machines.”
In the service desk, generative AI will have the most impact by finding useful answers faster, summarizing long-running cases and generating knowledge articles to enable self-service channels, Bedi said in an email.
“Many service desks have knowledge bases to promote self-service, but generative AI can help agents and employees get to the essence of their information quickly so the issue can be resolved promptly,” Bedi said.
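The retrieve-then-summarize pattern Bedi describes can be illustrated with a minimal sketch. Note that this is a hypothetical example, not ServiceNow's implementation: real service-desk tools would use an LLM or semantic search, while this toy version ranks knowledge-base articles by keyword overlap with the employee's issue and returns only the relevant sentences.

```python
# Hypothetical sketch of the retrieve-then-summarize pattern for a
# service-desk knowledge base. Keyword overlap stands in for the
# semantic search and LLM summarization a real product would use.

def tokenize(text):
    """Lowercase word set, with trailing punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def best_article(issue, knowledge_base):
    """Return the article whose text shares the most words with the issue."""
    issue_words = tokenize(issue)
    return max(knowledge_base,
               key=lambda a: len(issue_words & tokenize(a["text"])))

def excerpt(article, issue, max_sentences=2):
    """Keep only sentences that overlap with the issue description."""
    issue_words = tokenize(issue)
    sentences = [s.strip() for s in article["text"].split(".") if s.strip()]
    relevant = [s for s in sentences if issue_words & tokenize(s)]
    return ". ".join(relevant[:max_sentences]) + "."

# Toy knowledge base (invented articles for illustration only).
kb = [
    {"title": "Reset your VPN password",
     "text": "Open the VPN client. Click forgot password. "
             "A reset link is emailed to you."},
    {"title": "Printer setup",
     "text": "Install the driver. Add the printer in system settings."},
]

issue = "I forgot my VPN password and cannot log in"
article = best_article(issue, kb)
print(article["title"])           # → Reset your VPN password
print(excerpt(article, issue))    # → Open the VPN client. Click forgot password.
```

The point of the sketch is the shape of the workflow, not the ranking method: surface the one article that matters, then compress it to "the essence" so the employee can self-serve without reading the whole document.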
Insight Enterprises' path to adopting generative AI started as a grassroots effort among employees, according to David McCurdy, the company's chief enterprise architect and CTO.
The company began to see increased demand via IT support tickets from employees wanting to know if they could use publicly available models for work, but the company’s leadership was cautious due to security risks.
“An enterprise should protect its information, protect its teammates, protect its employees and protect its customers. That's what we do in email. That's what we do with almost any other major system,” McCurdy said. “This is no different, right? But what is different is that it's emerging tech and no one has the first clue how to use it or what it’s good for.”
The company rolled out an internal model for employees, called Insight GPT, which came from the company’s partnership with Microsoft. Based on initial feedback, employees have used the tool to summarize reports, generate content and boost productivity.
“Everything you read and a lot of the fear pieces about how no one’s going to be able to work," McCurdy said. "I’m not in that mindset and I don’t subscribe to that. I am very much in the lane that this is a productivity and a personal enhancer and that’s why it’s so amazing.”
Organizational change management is more than just a buzzword, according to Weston Morris, who leads global strategy for emerging technologies at Unisys. It’s key to any implementation strategy, including with generative AI.
IT teams and their leaders should communicate more than just how to use the tools, as employees are often wary of surveillance.
“One of the concerns is definitely going to be, ‘Is this generative AI going to be used to monitor me as an employee and put me in a bad light?’” Morris said. “And if so, I would want to know about that.”
Claus Jepsen, CTO at Unit4, sees generative AI as having the most impact on user experience and automating tasks, though enterprises should adopt with caution.
“The only thing people forget when they get excited about this is you need to be very specific about what you ask, otherwise it’s like an Excel [s---]-in, [s---]-out spreadsheet,” Jepsen said.
Businesses also have to be wary of inadvertently violating copyright laws, as the source of the information could be unknown. Jepsen encouraged businesses using generative AI tools for content generation to review generated text to ensure it doesn’t all sound the same.
“Like with any other tool, you have to apply brain power to it,” Jepsen said.
Louis Tetu, CEO of Coveo, thinks about generative AI implementation in two buckets: content generation and human augmentation. Early impact use cases that stand out to Tetu include writing emails, generating presentation templates and summarizing lengthy text.
But the companies with the upper hand are the ones that understand model limitations, he said.
“You need to first sort out the security aspect, figure out which content you can use from a security, privilege, access and privacy perspective,” Tetu said.
Some industries are more heavily regulated than others, so banks and financial institutions, for example, will have a more complex path to addressing compliance and security guardrails. There’s also a chance models will give inaccurate or out-of-date information.
Factuality is a key imperative for enterprises, Tetu said.
“Unless you have the plumbing that can respect security and compliance and can connect to sources of truth so that it's factual, you cannot apply this technology to serve customers,” Tetu said. “You can’t make up stuff.”