Trust and Safety: The biggest barrier to enterprise adoption of Generative AI

Vivek Sriram
6 min read · Sep 27, 2023



Trust and Safety is the biggest blocker to enterprise adoption

“Our greatest fear is having patient data show up in response to a prompt on ChatGPT,” an executive at a health system shared with us in our first meeting. And his concerns are far from unfounded: Samsung employees feeding confidential data into ChatGPT is the most notable public example. In response, Apple, Commonwealth Bank of Australia, and Calix have made headlines for outright banning their employees from using ChatGPT at work.

These risks and more haven’t been overlooked by enterprise technology leaders. A survey conducted by IBM Research and Oxford Economics found that 80% of executives “see at least one trust-related issue as a roadblock to AI adoption.” Generative AI introduces a host of fears on top of the typical cybersecurity considerations, including explainability, hallucinations and bias. It’s unsurprising that safeguarding sensitive corporate data consistently sits at the top of IT executives’ priorities. Without a comprehensive trust and safety framework — addressing both known and emerging problems — corporate adoption of Generative AI will remain tepid, with many experiments but few examples of actual live use.

Enterprise use of Generative AI broadly reflects the experiences executives have with it in their personal lives — primarily, that means ChatGPT. Chat, text and image generation remain common starting points, with the highest concentration of applications in marketing and customer service. No real surprise: corporate priorities are set by corporate executives, who share the same curiosity about the transformational impact of the technologies they experience in their personal lives. Their eagerness to experiment, though, is tempered by fear of unknown risk.

However low the probability of such an incident, executives will remain cautious about in-house development of transformative technologies until there are systemic capabilities to address the whole range of risks.

As a chart from IBM shows, corporate adoption will continue to be led primarily by the most mature, lowest-risk features and capabilities rather than by the design and development of original capabilities that deliver strategic value by addressing critical pain points that generic fixes will not solve.

Competitive advantage requires that developers can easily build Safe AI

True competitive advantage from transformative Generative AI applications in the enterprise is only possible when there is trust and transparency up and down the chain: when executives are comfortable with the risk of sensitive corporate data being put to use to train LLMs, and when their customers and users can interact with it without fear of hallucinations.

The current state, as Avivah Litan from Gartner explains, is one where “there are currently no off-the-shelf tools on the market that give users systematic privacy assurances or effective content filtering of their engagements with these models.” Until that changes, the onus is on AI developers to figure out oversight and governance and to form a policy-based approach to use. Because this approach is cumbersome and error-prone, enterprise adoption tends to default to using undifferentiated features from the plethora of service providers.

Though fast evolving, the current landscape of tools isn’t yet well suited to address all of the possible governance, risk and privacy issues that come along with Generative AI. There are two reasons why this is the case:

  1. Because Generative AI is still fairly new, security technologies and processes have not yet matured to effectively mitigate the various categories of risk associated with it.
  2. The regulatory framework is still forming and unsettled. Despite a growing volume of policy approaches to governance, security, ethics and fairness in AI, very few standards have been either codified or widely accepted across the industry.

Consequently, the responsibility largely falls on the individual development teams building Generative AI into enterprise applications to figure out how to handle currently known issues and to also protect against unforeseen problems.

Given this admittedly high bar, developers, architects and technical staff have to work with policymakers, including the office of the CISO and legal departments, to figure out which policies and practices apply to the oversight and risk management requirements of individual projects. For example, a chatbot on the public website of a health insurance provider might have to contend with issues ranging from errors and accuracy to trust and fairness, hallucinations, copyrighted material and confidential information. Clearly this isn’t going to scale very well.

Risk factors

Until Generative AI development platforms build the capability to address trust and safety issues systematically, development teams building Generative AI should consider a few common risks and plan for mitigation scenarios.

  1. Hallucinations: these frequently slip past poor-quality guardrails. Though very capable open source foundation models abound, many are of dubious provenance. Enterprise developers should turn a skeptical eye to models with fuzzy licensing terms or poor documentation about source data. Even with good models of solid pedigree, fine-tuning with proprietary data that is sufficiently diverse to satisfy the needs of the specific use case will be critical.
  2. Fairness and bias: algorithmic bias can sometimes lead to unquantifiable risk. An algorithm that spits out preferences for one kind of toothpaste over another is less consequential than one which might offer biased responses about candidates for jobs. Solving for the latter is a weighty matter that involves law, data engineering and compliance protocols (and, depending on geography, users’ right to be forgotten), while the former might only be a minor irritant.
  3. Data leakage, privacy and confidentiality: there are countless examples now of curious newbies guilelessly feeding sensitive corporate information into ChatGPT to see what happens. While there are clearly degrees of severity in the consequences of confidential information showing up publicly, all of it is the stuff of a CISO’s nightmares. The most effective way to prevent it is to keep the model entirely under the organization’s control, within its own cloud account and subject to access controls; a minimal redaction sketch follows this list.
  4. Copyright: Generative AI trained on public data is inevitably trained on large quantities of images and content, much of which lives under fuzzy and unsettled copyright law. AI applications built on foundation models with unclear documentation of their source training data may be at risk of infringing. The best way to protect against this is to use open source models whose training data can be verified to be free of copyright infringement. Admittedly, even popular open source models like Llama 2 are in large part trained on Common Crawl, where copyright concerns remain unsettled.
  5. Vulnerability and cybersecurity: there is a fast-emerging crop of software-generation co-pilots which can scan source code for vulnerabilities. Malicious developers can use these same tools for illegitimate purposes like prompt injection attacks. Models that run outside of an organization’s own control might not have the same degree of protection that an internal team’s security controls would provide; a simple screening sketch follows below.
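
As a concrete illustration of item 3, here is a minimal sketch of redacting obviously sensitive patterns from a prompt before it leaves an organization’s boundary. The patterns, the send_prompt helper and the llm_client interface are illustrative assumptions, not any particular product’s API; a production system would pair this with access controls and, ideally, keep the model inside the organization’s own cloud account.

```python
import re

# Minimal sketch: scrub obviously sensitive patterns before a prompt is sent
# to an externally hosted model. The patterns and the llm_client interface are
# illustrative assumptions, not a real product API.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of known sensitive patterns with placeholder tokens."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_prompt(prompt: str, llm_client) -> str:
    """Redact the prompt before it ever leaves the organization's boundary."""
    return llm_client.complete(redact(prompt))  # hypothetical client interface
```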

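For item 5, the sketch below shows one simple, heuristic layer of defense against prompt injection: screening user input for known jailbreak phrasings before it reaches the model. The phrase list and the handle_user_input flow are assumptions for illustration only; real defenses would combine input screening with output filtering and least-privilege access to tools and data rather than rely on keyword matching.

```python
import re

# Minimal sketch: heuristic screening for prompt injection attempts.
# The phrase list is illustrative and far from exhaustive.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|the|previous) .*instructions", re.IGNORECASE),
    re.compile(r"disregard (the )?(system|previous) prompt", re.IGNORECASE),
    re.compile(r"reveal (your|the) (system prompt|hidden instructions)", re.IGNORECASE),
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches known injection phrasings."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)

def handle_user_input(user_input: str, llm_client) -> str:
    """Block suspicious input; otherwise forward to the (hypothetical) client."""
    if looks_like_injection(user_input):
        return "Request blocked by safety policy."
    return llm_client.complete(user_input)
```
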
For Generative AI to have the transformative impact in the enterprise that it’s hyped to have, it must overcome an evolving set of trust and safety considerations. Since the tools, processes, systems and best practices for addressing these emerging challenges have not yet materialized, the responsibility for now falls on the development teams experimenting with Generative AI. Today, the suitability of Generative AI for enterprise applications depends on application developers choosing appropriate models, cloud deployment platforms and access controls, and doing the testing needed to mitigate the major risks associated with these potentially transformative technologies.

Safe AI, Simplified

Our mission at bookend is to make Safe AI simple. To make that happen, we are on a path to build the most comprehensive set of tools that developers building for the enterprise can use to make Generative AI transparent, trustworthy and safe. We don’t believe trust and safety is achieved by checking off a few boxes for access control or compliance alone. Rather, we believe trust and safety must permeate every aspect of the lifecycle, from model selection to development, deployment and usage.
