The range of applications of AI is effectively infinite. While we can’t give guidance on how the standard might apply to every use case, we can use examples to illustrate how you can use the guardrails to manage the risks and benefits of a specific AI system.
We’ve chosen 4 examples to show how individual guardrails might apply in different use cases. The examples explore how organisations may use particular guardrails as part of their overall approach to deploying AI systems, and show that the guardrails can be applied in different contexts and to different technologies.
These examples are not intended to represent a comprehensive application of all guardrails, responsibilities or other legal obligations that may apply to the specified use cases. They simply illustrate how the guardrails can be applied in a selection of fictional scenarios.
Example 1: General-purpose AI chatbot
A detailed example representing a common use case for organisations of all sizes, across all sectors. Due to the growing ubiquity of this technology, we’ve provided extra detail on how an organisation could adopt a range of guardrails. As a point of contrast, this example includes potential outcomes where safe and responsible AI methodologies are not followed.
Example 2: Facial recognition technology
A simplified example of the use of facial recognition technology. It illustrates how the guardrails can be used to decide that non-AI-based solutions will better achieve strategic and operational goals.
Example 3: Recommender engine
A simplified example of a common use case in which a recommender engine is used to improve customer experience and meet organisational goals. It includes reference to a court case in which a business using this kind of technology was ordered to pay a substantial financial penalty for not meeting legal obligations.
Example 4: Warehouse accident detection
A detailed example outlining obligations for testing AI systems. In this example, we offer guidance on linking areas of concern with acceptance criteria. Because meeting the relevant guardrails is specific and technical in nature, it covers testing at different stages of the AI system and governance lifecycle.
Example 1: General-purpose AI chatbot
NewCo background
NewCo is a fast-growing B2C company with 50 employees, selling a range of products in a niche market. It has an annual turnover of $3.5 million.
The company is approaching a major product launch that it expects will create a significant increase in demand. NewCo’s head of sales proposes to use the latest advances in AI and procure a new chatbot for the company website. The chatbot would engage with customers to answer the most commonly asked questions. The company expects the new product to sell over 10,000 units in the first month because of an aggressive social media strategy featuring early-bird discounts.
The new chatbot is meant to reduce the time customers wait for a phone operator by handling routine queries online. This should reduce the need to expand phone support and allow employees to spend more time on complex tasks. The most common customer queries include delivery times, returns and the application of time-limited discount codes.
The head of sales suggests that a chatbot based on general-purpose AI would help the company respond to and resolve customer queries faster, leading to improved customer satisfaction (CSAT) scores. CSAT scores are considered lead indicators for revenue growth goals, so NewCo hopes that a suitable customer query chatbot would also support growth in sales.