Skip to main content

According to new research from Salesforce, 67% of senior IT leaders “are prioritizing generative AI for their business within the next 18 months, with one-third (33%) naming it as a top priority.”

International Data Corporation (IDC) predicts that in 2026, 55% of Forbes Global 2000 OEMs will have “redesigned service supply chains based on AI.”

The recently-released results of a Gartner poll that surveyed over 2,500 executive leaders also captured generative AI’s momentum:

  • 45% — indicated that “the publicity of ChatGPT has prompted them to increase artificial intelligence (AI) investments”

  • 70% — indicated that their organization is “in investigation and exploration mode” with generative AI

  • 19% — indicated they are “in pilot or production mode” with generative AI

“The generative AI frenzy shows no signs of abating,” said Frances Karamouzis, Distinguished VP Analyst at Gartner in a statement. “Organizations are scrambling to determine how much cash to pour into generative AI solutions, which products are worth the investment, when to get started and how to mitigate the risks that come with this emerging technology.”

In our first post in this series, we tapped into two McKinsey & Company resources to help lay a generative AI foundation and describe how it differs from traditional AI.

In our post last week, we examined what some experts say about how generative AI can provide value for supply chain management (SCM) — as well as which SCM roles may be impacted as a result of its use.

And in this last post of our series, we’ll take a look at the potential risks of integrating generative AI within business processes — and a few strategies that may help mitigate them.

Generative AI: Potential and Pitfalls

With the rapid pace at which generative AI is unfolding, it may come as no surprise that along with the many benefits being touted, there are significant risks to consider.

A recent survey of corporate law professionals found that although generative AI may offer many benefits, 75 percent said they have “risk concerns surrounding use of ChatGPT and generative AI, mostly in areas of accuracy, privacy, confidentiality, and security.”

Even AI leaders have been speaking out about their concerns.

The Boston Consulting Group (BCG) described the dynamics like this: “The Future of Life Institute created a stir in the artificial intelligence (AI) community on March 22 by releasing an open letter calling for a six-month halt in the development of generative AI models to allow for a more thorough study of the risks. Among the more than 20,000 signers, as of mid-April, are Tesla CEO Elon Musk and Apple cofounder Steve Wozniak.”

In a recent article for the Institute for Supply Chain Management (ISM), writer Melanie Stern cites various perspectives and recommendations from Polly Mitchell-Guthrie of Kinaxis, a Canadian supply chain software solutions provider.

“Conversations about ChatGPT merits and what-ifs swirl among innovation industry leaders and corporate executives, each wanting only the best from a technology still in its infancy,” Stern writes. “Political undertones suggest (1) an urgency for gloves-off research and tested applications or (2) a heavy pause until scrutiny is replaced with regulation. When contemplating potential, its reality must be manageable.”

“Commercial enterprise drives the interest in moving AI research forward, but it doesn’t necessarily own responsibility for all its applications,” Mitchell-Guthrie reportedly said.

Describing ChatGPT’s potential as a large language model, Mitchell-Guthrie said the key will be finding a way to make it support specific needs: “We can’t put the genie back in the bottle. Instead, we need to think about its application and best use. Let’s figure out regulating factors to help mitigate the risks.”

Gartner: Six Risks Associated with Using ChatGPT

In a May press release, “Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate,” global consulting firm Gartner said, “Legal and compliance leaders should address their organization’s exposure to six specific ChatGPT risks, and what guardrails to establish to ensure responsible enterprise use of generative AI tools.”

“The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks,” said Ron Friedmann, senior director analyst in in the Gartner Legal & Compliance Practice. “Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed, both within the enterprise and its extended enterprise of third and nth parties. Failure to do so could expose enterprises to legal, reputational and financial consequences.”

Here are the six ChatGPT risks Gartner says “legal and compliance leaders should evaluate.”

Risk 1 – Fabricated and Inaccurate Answers

Gartner says one of the most common issues with ChatGPT and other LLM tools is “a tendency to provide incorrect – although superficially plausible – information.”

“ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” said Friedmann. “Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted.”

Risk 2 – Data Privacy and Confidentiality

Additionally, if the chat history isn’t disabled, Gartner says “any information entered into ChatGPT…may become a part of its training dataset.”

“Sensitive, proprietary or confidential information used in prompts may be incorporated into responses for users outside the enterprise,” said Friedmann. “Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools.”

Risk 3 – Model and Output Bias

“Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant,” said Friedmann. “This may involve working with subject matter experts to ensure output Is reliable and with audit and technology functions to set data quality controls.”

Risk 4 – Intellectual Property (IP) and Copyright Risks

Since ChatGPT is trained on massive amounts of internet data, Gartner says there’s a good chance that data includes material that is copyrighted: “Therefore, its outputs have the potential to violate copyright or IP protections.”

“ChatGPT does not offer source references or explanations as to how its output is generated,” said Friedmann. “Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights.”

Risk 5 – Cyber Fraud Risks

Noting that ChatGPT is already being misused for nefarious purposes, Gartner says applications that use LLM models “are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn’t intended for such as writing malware codes or developing phishing sites that resemble well-known sites.”

“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann. “They should also conduct an audit of due diligence sources to verify the quality of their information.”

Risk 6 – Consumer Protection Risks

Underscoring the importance of letting consumers know when they’re interacting with a chatbot, Gartner says failure to do so could result in a loss of trust and the risk of “being charged with unfair practices under various laws.”

“Legal and compliance leaders need to ensure their organization’s ChatGPT use complies with all relevant regulations and laws, and appropriate disclosures have been made to customers,” said Friedmann.

Managing Generative AI Risks

In “Why Trust and Security are Essential for the Future of Generative AI,” Avivah Litan, VP Analyst at Gartner notes that there’s no stopping the development of generative AI — which is why it’s important for organizations to “act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM).

“There is a pressing need for a new class of AI TRiSM tools to manage data and process flows between users and companies who host generative AI foundation models,” Litan says. “There are currently no off-the-shelf tools on the market that give users systematic privacy assurances or effective content filtering of their engagements with these models, for example, filtering out factual errors, hallucinations, copyrighted materials or confidential information. AI developers must urgently work with policymakers, including new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management. “

He describes several actions enterprise leaders can take now to address generative AI risks.

Review out-of-the-box models carefully

Noting that two general approaches to implementing ChatGPT and similar applications are to use them as is, with “no direct customization,” and via a “prompt engineering approach,” which uses tools “to create, tune and evaluate prompt inputs and outputs.”

“For out-of-the-box usage, organizations must implement manual reviews of all model output to detect incorrect, misinformed or biased results,” Litan says. “Establish a governance and compliance framework for enterprise use of these solutions, including clear policies that prohibit employees from asking questions that expose sensitive organizational or personal data.”

Monitor “unsanctioned” uses

“Organizations should monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations,” Litan says. “For example, firewalls can block enterprise user access, security information and event management systems can monitor event logs for violations, and secure web gateways can monitor disallowed API calls.”

Protect internal and sensitive data

In addition to the previously described measures, Litan says prompt engineering usage requires further safeguards.

“Additionally, steps should be taken to protect internal and other sensitive data used to engineer prompts on third-party infrastructure,” he explains. “Create and store engineered prompts as immutable assets. These assets can represent vetted engineered prompts that can be safely used. They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared or sold.”

Salesforce: Generative AI Guidelines

In a recent article for Harvard Business Review (HBR), Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce and Yoav Schlesinger, Architect of Ethical AI Practice at Salesforce, describe additional strategies for organizations seeking to manage the risks of generative AI.

“A business using generative AI technology in an enterprise setting is different from consumers using it for private, individual use,” they write. “Businesses need to adhere to regulations relevant to their respective industries (think: healthcare), and there’s a minefield of legal, financial, and ethical implications if the content generated is inaccurate, inaccessible, or offensive.”

In addition to the “trusted AI principles (transparency, fairness, responsibility, accountability, and reliability)” published by Salesforce in 2019, the writers say that the “mainstream emergence — and accessibility — of generative AI” has led to the creation of additional guidelines “specific to the risks this specific technology presents.”

“These guidelines don’t replace our principles, but instead act as a North Star for how they can be operationalized and put into practice as businesses develop products and services that use this new technology,” Baxter and Schlesinger write.

Noting that their new set of guidelines “can help organizations evaluate generative AI’s risks and considerations as these tools gain mainstream adoption,” they describe five focus areas they cover, which include:

  • Accuracy: “Organizations need to be able to train AI models on their own data to deliver verifiable results that balance accuracy, precision, and recall (the model’s ability to correctly identify positive cases within a given dataset). …”

  • Safety: “Making every effort to mitigate bias, toxicity, and harmful outputs by conducting bias, explainability, and robustness assessments is always a priority in AI. …”

  • Honesty: “When collecting data to train and evaluate our models, respect data provenance and ensure there is consent to use that data. …”

  • Empowerment: “While there are some cases where it is best to fully automate processes, AI should more often play a supporting role. …”

  • Sustainability: “Language models are described as ‘large’ based on the number of values or parameters it uses. Some of these large language models (LLMs) have hundreds of billions of parameters and use a lot of energy and water to train them. …”

For more, please see “Generative AI: 5 Guidelines for Responsible Development.”

Similar to Litan, Baxter and Schlesinger note that many companies will be integrating pre-made generative AI tools, rather than building their own. They offer several “tactical tips for safely integrating generative AI in business applications to drive business results”:

  • Use zero-party or first-party data: “Companies should train generative AI tools using zero-party data — data that customers share proactively — and first-party data, which they collect directly. …”

  • Keep data fresh and well-labeled: “AI is only as good as the data it’s trained on. …”

  • Ensure there’s a human in the loop: “Just because something can be automated doesn’t mean it should be. …”

  • Test, test, test: “Generative AI cannot operate on a set-it-and-forget-it basis — the tools need constant oversight. …”

  • Get feedback: “Listening to employees, trusted advisors, and impacted communities is key to identifying risks and course-correcting. …”

For more detail, please see the HBR article, “Managing the Risks of Generative AI.”

Leave a Reply