Term | Definition | Source (if applicable) |
---|---|---|
Artificial intelligence (AI) | Research and development of mechanisms and applications of AI systems. | ISO |
AI agent | Automated entity that senses and responds to its environment and takes actions to achieve goals. | ISO |
AI audit | An internal or independent external evaluation of an AI system to determine whether the system meets the requirements set by a normative framework. | Wittenberg |
AI benchmark | Publicly available collections of datasets and performance metrics that specify tasks and objectives for AI systems, serving as a standard point of comparison. | Ott et al. |
AI deployer | An individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can serve internal purposes or be external, affecting others, such as customers or individuals, who are not deployers of the system. | |
AI lifecycle | The sequence of phases that an AI system goes through, from conception through development, testing, deployment and use, to eventual retirement. | |
AI model | Representation of an entity, phenomenon, process or data, employing various AI algorithms to interpret, predict, or generate responses based on input. Machine learning model: mathematical construct that generates an inference or prediction based on input data or information. | ISO |
AI partner | An organisation or entity that provides services in the context of AI, such as system integrator, data provider, AI evaluator, AI auditor. | ISO |
AI (technology) producer | Organisation or entity that designs, develops, tests and provides AI technologies such as AI models and components. | Significant adaptation from ISO |
AI (platform, product, service) provider | An organisation or entity that provides products or services that use one or more AI systems. AI providers encompass AI platform providers and AI product or service providers. | ISO |
AI supply chain | The entire flow of resources, including data, models, software, hardware, computational power, talent, and financial capital, required to develop, deploy, and maintain AI systems. | |
AI system | A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. | OECD |
AI user | An entity that uses or relies on an AI system. This entity can range from organisations (such as businesses, governments, or not-for-profits) to individuals or other systems. In some contexts, an AI organisation user is equivalent to an AI deployer. | |
AI value chain | The entire lifecycle of creating and delivering value through AI systems, which extends beyond the AI supply chain. | |
Accountable/ Accountability | Accountable: answerable for actions, decisions and performance within a well-defined scope of responsibility, and potentially sanctionable. Accountability: state of being accountable. | ISO |
Affected stakeholder | An entity impacted by the decisions or behaviours of an AI system. These can include organisations, individuals, communities or other systems. | |
Algorithm | A set of instructions that guide a computer in performing specific tasks or solving problems. Machine learning algorithm: algorithm to determine the parameters of a machine learning model from data according to given criteria. | ISO |
Assurance | The process of measuring, evaluating and communicating something about a system or process, documentation, a product or an organisation. The overall aim of assurance is to verify and attest or assert that the system is operating as intended. AI assurance: measures, evaluates and communicates the trustworthiness of AI systems. | DSIT |
Bias | Systematic difference in treatment of certain objects, people or groups in comparison to others. | ISO |
Continuous learning (continual learning, lifelong learning) | Incremental training of an AI system that takes place on an ongoing basis during the operation phase of the AI system lifecycle. | ISO |
Disclosure | The act of providing relevant information to a party not previously believed to be aware of it. | Adapted from ISO 29147 |
Evaluation | The process of assessing against specific criteria, with or without executing the artefacts; it includes model/system evaluation, capability evaluation, benchmarking, testing, verification and validation, as well as broader risk assessment and impact assessment. AI model evaluation: the process of assessing an AI model against predefined specific criteria or general benchmarks (beyond accuracy). AI system evaluation: the process of assessing the functional correctness and quality of an AI system against predefined specific criteria or general benchmarks (beyond model correctness/accuracy). AI capability evaluation: a comprehensive assessment of an AI model or system's overall capabilities, including both planned capabilities and unplanned, emergent, or dangerous capabilities. Unlike the task-focused assessments of AI model and system evaluation, capability evaluation seeks to understand the full range of an AI's capabilities, including how an AI might adapt or evolve beyond its initial training, and to identify both beneficial emergent behaviours and potential risks that could arise from autonomous operation or interaction with complex environments. | |
Explainability | Property of an AI system to express important factors influencing the AI system results in a way that humans can understand. | ISO |
Fairness | Treatment, behaviour or outcomes that respect established facts, beliefs and norms and are not determined or affected by favouritism or unjust discrimination (ISO TR 24368). Unfairness: unjustified differential treatment that preferentially benefits certain groups more than others. | ISO |
Fingerprint | Set of inherent properties computable from digital content that identifies the content or near duplicates of it. | C2PA |
Foundation model | ‘An AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts’. | US Executive Order |
General-purpose AI | Type of AI system that addresses a broad range of tasks and uses, both intended and unintended by developers. | ISO |
Generative AI | ‘A branch of AI that develops generative models with the capability of learning to generate content such as images, text, and other media with similar properties as their training data.’ | Lu et al. |
Impact assessment (AI system) | An extensive method for appraising the wider effects that AI systems may exert on economic, social, and environmental spheres. It considers the long-term ramifications and systemic shifts that may be induced by the deployment and operation of AI systems. | |
Justified/calibrated trust of AI | Where a stakeholder trusts the use of an AI system based on reliable evidence, without over-trusting or under-trusting. | |
Labelling | A procedure that enables organisations to put their information classification scheme into practice by attaching classification labels to relevant information assets. AI content labelling: applying visible content warnings to alert stakeholders to the presence of AI-generated content and its provenance. | Adapted from ISO 27002:2022 and Wittenberg |
Measurement (of AI systems) | Employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyse, assess, benchmark, and monitor AI system performance, risk and related impacts. | NIST |
Metrics | A qualitative or quantitative measure used to assess, compare, and track the performance or quality of a system, process, product or service. Internal metrics measure the AI model or system itself (such as model complexity, explainability, training compute resources). External metrics measure the behaviour and quality of the AI model or system (such as accuracy, response time, scalability). Risk metrics measure the negative outcomes of using the AI system in a specific context (such as impacts on bias, privacy, security and compliance). Impact metrics measure the broader effects of AI systems on users, groups and society. | |
Narrow AI | Type of AI system that is focused on defined tasks and uses to address a specific problem. | ISO |
Performance | Measurable results. | ISO |
Provenance | The logical concept of understanding the history of an asset and its interaction with actors and other assets, as represented by the provenance data. | |
Red teaming | An exercise, reflecting real-world conditions, conducted as a simulated adversarial attempt to provide a comprehensive assessment of the security capability of the AI system and organisation. Adversarial testing: a synonym for red teaming. | |
Responsible AI | ‘The practice of developing and using AI systems in a way that provides benefits to individuals, groups, and wider society, while minimising the risk of negative consequences.’ | Lu et al. |
Risk | Composite measure of an event’s probability of occurring and the magnitude of the consequences of the corresponding event; see the illustrative equation after this table. Note: the consequences of AI systems can be positive, negative, or both, and can result in opportunities or threats. Also defined as the effect of uncertainty on objectives, or the chance of harm. | NIST and ISO |
Risk assessment (AI system) | The systematic process of identifying and evaluating the likelihood and potential consequences of events or actions within AI systems that could lead to harm. | |
Risk control (AI system) | Measure that maintains and/or modifies risk. Controls include but are not limited to any process, policy, device, practice or other actions. | |
Risk mitigation (AI system) | The practices and tools used to reduce the likelihood and potential consequences of events or actions within AI systems that could lead to harm. | |
Test case | The specification of all the entities that are essential for testing, such as inputs, testing procedure and expected outcomes. A set of test cases is usually called a test suite; a concrete sketch appears after this table. | Washizaki |
Testing | The process of executing an AI model or system to verify and validate that it exhibits expected behaviours across a set of appropriately selected test cases. | Washizaki |
Transparency | Organisation: property of an organisation that appropriate activities and decisions are communicated to relevant stakeholders in a comprehensive, accessible and understandable manner. System: property of a system that appropriate information about the system is made available to relevant stakeholders. Appropriate information for system transparency can include features, performance, limitations, components, procedures, measures, design goals, design choices and assumptions, (training) data sources, and testing and evaluation methodologies, benchmarks, test cases and criteria. | ISO |
Trust (of AI system) | The extent to which a stakeholder is persuaded that the AI will behave as intended. | Adapted from ISO 25010 |
Trustworthiness (of AI system) | Ability of an AI system to meet stakeholder expectations in a verifiable way. | ISO |
Validation | Confirmation, through the provision of objective evidence, that the needs of the user have been fulfilled. | |
Verification | Confirmation, through the provision of objective evidence, that specified requirements have been fulfilled. | |
Watermark | Information embedded into digital content, either perceptibly or imperceptibly to humans, that can serve a variety of purposes, such as establishing digital content provenance or informing stakeholders that the contents are AI-generated or significantly modified. AI-generated content watermarking: a procedure by which watermarks are embedded into AI-generated content. This embedding can occur at two distinct stages: during generation, by altering a generative AI model’s inference procedure, or post-generation, as the content is distributed along the data and information distribution chain. A toy post-generation sketch appears after this table. | C2PA |
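The Risk entry above describes a composite measure combining an event’s probability with the magnitude of its consequences. As a minimal worked form, assuming the simplest multiplicative aggregation (a common convention, not one mandated by NIST or ISO):

```latex
% Illustrative composite risk measure (assumed multiplicative form)
\[
  \mathrm{risk}(e) = p(e) \times m(e)
\]
% p(e): probability that event e occurs
% m(e): magnitude of the consequences of e (positive or negative)
```

In practice, risk frameworks often replace the simple product with qualitative likelihood-by-severity matrices or other aggregation rules.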
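To make the Test case entry concrete, here is a minimal Python sketch; the function under test (`classify`) and its expected outputs are hypothetical, chosen only to show how inputs, a testing procedure and expected outcomes make up test cases, and how test cases group into a suite:

```python
import unittest

def classify(score: float) -> str:
    """Hypothetical system under test: maps a model score to a label."""
    return "positive" if score >= 0.5 else "negative"

class ClassifierTestCase(unittest.TestCase):
    """Each test method specifies an input, a procedure and an expected outcome."""

    def test_high_score_is_positive(self):
        self.assertEqual(classify(0.9), "positive")  # input 0.9 -> expect "positive"

    def test_low_score_is_negative(self):
        self.assertEqual(classify(0.1), "negative")  # input 0.1 -> expect "negative"

# A set of test cases is usually called a test suite.
suite = unittest.TestLoader().loadTestsFromTestCase(ClassifierTestCase)

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)
```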
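As a toy illustration of the post-generation stage described in the Watermark entry, the sketch below hides a bit string in text using zero-width Unicode characters. It is a deliberately simplified scheme for illustration only, not the C2PA mechanism or any production watermarking method:

```python
# Toy post-generation text watermark: encode bits as zero-width characters.
ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / non-joiner
REVERSE = {v: k for k, v in ZERO_WIDTH.items()}

def embed(text: str, bits: str) -> str:
    """Append an invisible bit string to the content."""
    return text + "".join(ZERO_WIDTH[b] for b in bits)

def extract(text: str) -> str:
    """Recover any embedded bits from the content."""
    return "".join(REVERSE[ch] for ch in text if ch in REVERSE)

marked = embed("Generated caption.", "1010")
assert extract(marked) == "1010"
print(marked)  # renders identically to the unmarked text
```

A generation-stage watermark would instead bias the model’s sampling during inference, which is more robust to reformatting than this appended-character scheme.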