Term | Definition | Source |
---|---|---|
Accountable / Accountability | Accountable: answerable for actions, decisions and performance. Accountability: state of being accountable. | Aligned ISO 22989:2022 3.5.1, 3.5.2 |
AI agent | Automated entity that senses and responds to its environment and takes actions to achieve goals. | Aligned ISO 22989:2022 3.1.1 |
AI audit | An internal or (independent) external evaluation of an AI system to determine whether the given system meets the requirements set by a normative framework. | Whittenberg |
AI developer / AI development | AI Developer: An organisation or individual that is concerned with the development of AI models, systems, and associated applications, products, and services. AI Model Developer: An organisation or individual that is concerned with activities such as preparing training data, feature engineering, training and fine-tuning, testing, and validating AI models. AI System Developer: An organisation or individual that is concerned with activities such as designing, building, testing, training or adapting the overall AI system. This includes integrating AI models with other components such as knowledge bases or databases, input/output filters, user interfaces, tools, and other systems. Note: A single organisation may play multiple roles, such as AI model developer, AI system developer, and AI system deployer. Note: Organisations or individuals who design, build, train, adapt, or combine AI systems and applications and distribute them or otherwise place them on the market as a service for others to use, whether for payment or free of charge, are referred to as AI providers under the EU AI Act. | Adapted from ISO 22989:2022 5.19.3.2 |
AI deployer | An organisation or individual that uses an AI system to provide a product or service. Deployment can be internal to the business or external. When deployment is external it can affect other stakeholders, such as customers. AI deployment is concerned with making an AI model or system available in a specific production environment tailored to particular use cases. Deployment activities often involve customising and integrating AI systems with existing systems and workflows, preparing infrastructure to support operational demands, conducting environment- or use case-specific testing, ensuring compliance with security and regulatory standards, creating policies for AI users, and setting up monitoring mechanisms for operations and AI usage. Note: The technical and legal nature, as well as the amount of customisation and integration, may affect whether the activity is system deployment or system development. | |
AI lifecycle | The sequence of phases that an AI system goes through, from conception through development, testing, deployment, use, and eventual retirement. | |
AI management system | Overarching organisational governance framework for the development and deployment of AI systems, including the policies, objectives and processes to meet objectives. Includes structure, roles and responsibilities, planning and operation. | ISO 42001 |
AI model | Representation of an entity, phenomenon, process or data, employing various algorithms to interpret, predict, or generate responses based on input. Note: A machine learning model is a type of AI model and is a mathematical construct that generates inferences or predictions based on input data or information. Note: AI models, together with other components, are combined to form AI systems. The inference capability of an AI system, which is the key difference from conventional software, comes from its models. | Adapted from ISO 22989:2022 3.1.23 |
AI safety | Principles and practices to ensure AI is designed, developed, deployed, and used in ways which are human-centric, trustworthy and responsible. This is to realise the potential of AI to help and not harm people; to protect human rights; as well as to promote inclusive economic growth, sustainable development and innovation. | Bletchley Declaration |
AI supply chain | Sequence of activities or parties that provides AI products or services to an organisation or individual. Note: In some cases, AI supply chain is used interchangeably with AI value chain. | ISO 26000:2010 |
AI system | A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. Note: AI systems integrate AI models with other components such as knowledge bases or databases, input/output data processes, user interfaces, tools, and other systems. Note: AI models typically need to be converted into an AI system to be deployed and used, for example by adding at least some minimal interfaces or input/output data processes. | OECD.AI Policy Observatory |
AI user / Use of AI | See AI deployer. Note: In this standard, we use the term AI user to refer to organisational users, which largely equates to AI deployers, as the scope of the standard addresses organisational, not individual, responsibilities. We note that, in typical software standards, user is often defined as a “person who interacts with a system, product, or service” (ISO 25066:2016) or an “individual or group that interacts with a system or benefits from it during its utilization” (ISO 25010:2011). | Adapted from ISO 25066:2016, ISO 25010:2011 |
Affected stakeholder | Anyone impacted by the decisions or behaviours of an AI system. These can include organisations, individuals, communities or other systems. For example, consumers, employees and unions. | |
Algorithm | A set of instructions that guide a computer in performing specific tasks or solving problems. Note: A machine learning algorithm is an algorithm used to determine parameters of a machine learning model from data according to given criteria. | ISO 22989:2022 |
Bias | Systematic difference in treatment of certain objects, people or groups in comparison to others. Note: From a technical perspective, bias is necessary for AI to identify systematic differences between groups of objects or people and to treat them differently when justified. However, bias becomes problematic or "unwanted" when it leads to unfairness, i.e. unjustified differential treatment that preferentially benefits or harms certain individuals or groups over others. | Aligned ISO 22989:2022 3.5.4 |
AI end user | Any intended or actual individual or organisation that consumes an AI-based product or service, interacts with it, or is impacted by it after deployment. See also affected stakeholder. Note: The term end user is often defined as a “person who directly uses the system for its intended purpose” (ISO 25010:2023) to emphasise direct interaction, or as an “individual person who ultimately benefits from the outcomes of the system or software” (ISO/IEC 25000:2014), highlighting the derived benefits. Since this standard encompasses both benefits and risks, with a focus on impacts on a wide range of affected stakeholders, we adjusted the definition to reflect this. | TGA 2024: Clarifying and strengthening the regulation of AI |
Evaluation | The process of assessing against specific criteria, with or without executing the artefacts, including model/system evaluation, capability evaluation, benchmarking, testing, verification, and validation, as well as broader risk assessment and impact assessment against criteria or thresholds. AI model evaluation: the process of assessing an AI model against predefined specific criteria or general benchmarks. AI system evaluation: the process of assessing an AI system against predefined specific criteria or general benchmarks. AI capability evaluation: a comprehensive assessment of an AI model or system’s overall capabilities, including both planned capabilities and unplanned, emerging, or dangerous capabilities. | Xia et al. 2024, https://arxiv.org/abs/2404.05388v1 |
Explainability | Property of an AI system to express important factors influencing the AI system results in a way that humans can understand. | Aligned ISO 22989:2022 3.5.7 |
Fairness | Treatment, behaviour or outcomes that respect established facts, beliefs or norms and are not determined or affected by favouritism or unjust discrimination. Unfairness: unjustified differential treatment that preferentially benefits certain groups more than others. | Aligned ISO TR 24368:2022, ISO TR 24027:2021 |
General-purpose AI / General AI | AI models or systems developed to handle a broad range of tasks and integrate into a variety of downstream systems or applications. | Adapted from ISO 22989 3.1.14 |
Generative AI (GenAI) | A type of AI model or system with the capability to generate synthetic content such as text, images, videos, and other media. Note: Much current GenAI is based on GPAI; however, GenAI can be developed to perform a narrow set of tasks, either by restricting GPAI capabilities or via other development approaches. Note: ChatGPT can be considered both a GenAI system and a GPAI system. It is based on the GPT series of models, which are GPAI models (further tuned from foundation models to follow instructions and align with human preferences). | Qinghua et al. |
Impact Assessment | A process by which an organisation developing, deploying, or using AI systems identifies, analyses, and evaluates the broader economic, social, and environmental effects of the AI systems on individuals, groups, and societies. Note: Compared to AI risk assessment, AI impact assessment typically considers broader effects beyond the immediate consequences of an AI system. It usually does not incorporate detailed likelihood or probability analysis and focuses directly on affected stakeholders and society. In contrast, risk assessment emphasises the financial, reputational, and legal consequences for the organisation, which are only indirectly linked to the impacts on affected stakeholders and society. | ISO 42001:2023; ISO CD 42005:2024 |
Labelling | Labelling (data): the process of attaching meaningful information (called labels) to pieces of data so that an AI system can learn from them or be tested on them. Datasets are labelled where samples are associated with target variables. Labelling (content): techniques, which vary by modality, to alert stakeholders to the presence of AI-generated content and its provenance. Note: Labelling may take the form of overt watermarks (such as icons overlaid on content, or audible disclosures), labels within content (such as warnings, pre-roll or interstitial labels in video and/or audio, or font differences), or user interfaces (such as disclaimers, warnings or symbols to indicate provenance data). | Aligned ISO 22989:2022 5.10; Aligned NIST AI 100-4 p30 |
Measurement (of AI systems) | The use of quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyse, assess, benchmark, and monitor AI system performance, risks and related impacts. | Aligned NIST: AI RMF Core |
Metrics | A qualitative or quantitative measure used to assess, compare, and track the performance or quality of a system, process, product or service. • Internal metrics measure the AI model or system itself (such as model complexity, explainability, training compute resources). • External metrics measure the behaviour and quality of the AI model or system (such as accuracy, response time, scalability). • Risk metrics measure the negative outcomes of using the AI system in a specific context (such as impacts on bias, privacy, security and compliance). • Impact metrics measure the broader effects of AI systems on users, groups and society. | |
Narrow AI | Type of AI system that is focused on defined tasks and uses to address a specific problem. | Aligned ISO 22989:2022 5.2 |
Performance | Measurable results. Note: This can relate to quantitative or qualitative findings, actions or behaviours. | Aligned ISO 22989:2022 3.1.25 |
Provenance | The logical concept of understanding the history of an asset and its interaction with actors and other assets, as represented by the provenance data. | Aligned C2PA 2.3.8 |
Red teaming / adversarial testing | An exercise, reflecting real-world conditions, that is conducted as a simulated adversarial attempt to provide a comprehensive assessment of the security capability of the AI system and organisation. | Aligned NIST: CSRC Term |
Responsible AI | The practice of developing and using AI systems in a way that provides benefits to individuals, groups, and wider society, while minimising the risk of negative consequences. This includes implementing appropriate governance, oversight and compliance mechanisms. | Adapted from Qinghua et al. |
Risk (of AI system) | Composite measure of an event’s probability of occurring and the magnitude of the impacts or consequences of the corresponding event. The impacts, or consequences, of AI systems can be positive, negative, or both, and can result in opportunities or threats. Note: An illustrative formalisation is sketched after this table. | NIST: AI RMF Core; ISO 31000:2018 |
Risk analysis (of AI system) | The systematic use of risk or threat models to identify sources and pathways through which AI systems could produce risks, and to estimate the level of risks quantitatively or qualitatively. | EU AI Act: Code of Practice |
Risk assessment (of AI system) | The systematic process of risk identification, risk analysis and risk evaluation. | ISO 31073:2022; EU AI Act: Code of Practice |
Risk control (of AI system) | Measure that maintains and/or modifies risk. Controls include but are not limited to any process, policy, device, practice or other actions. | ISO 31000:2018 |
Risk evaluation (of AI system) | The process of comparing the estimated value or level of risks from risk analysis to predefined criteria, such as risk thresholds, or risk levels/tiers defined by regulatory bodies and stakeholders or an organisation’s risk tolerance. | EU AI Act: Code of Practice |
Risk identification (of AI system) | The process of finding, recognising and describing risks. Risk identification involves the identification of hazards, events, and their potential consequences. | ISO 31073:2022; EU AI Act: Code of Practice |
Risk mitigation (of AI system) | The process of prioritising, selecting, and implementing appropriate risk-reduction controls. Note: Risk mitigation focuses on risk-reduction controls, while risk treatment includes additional options as well as recovery, response, and communication plans for the realisation of risks. | NIST: CSRC Terms |
Risk threshold (of AI system) | The values establishing concrete decision points and operational limits that trigger a response, action, or escalation. They can involve technical indicators (e.g., error rates, scale, training compute) and human values (e.g., social or legal norms) in determining when AI systems present unacceptable risks or risks that demand enhanced scrutiny and mitigation measures. | NIST: AI RMF Core; OECD.AI |
Risk tolerance | An organisation’s or individual’s readiness to bear the risk in order to achieve their objectives. Note: It is sometimes used interchangeably with risk appetite, which refers to a justified or unjustified attitude towards risk, as opposed to a readiness to bear it. | NIST: AI RMF Core |
Risk treatment | The systematic process of prioritising, selecting, and implementing options (e.g., avoidance, transfer, acceptance, reduction) and risk controls to manage and address identified risks. Note: Risk treatment is broader than risk mitigation, as it often involves detailed prioritisation based on impact, probability, and available resources, along with response, recovery, and communication plans for the realisation of risks. | ISO 23894:2023; NIST: AI RMF Core |
Systemic risk (of AI system) | A risk that is specific to the high-impact capabilities of AI, having a significant impact due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain. | EU AI Act |
Testing | The process of executing an AI model or system to verify and validate that it exhibits expected behaviours across a set of appropriately selected test cases. | IEEE SEBoK |
Transparency / transparent | <organisation> Property of an organisation that appropriate activities and decisions are communicated to relevant stakeholders in a comprehensive, accessible and understandable manner. <system> Property of a system that appropriate information about the system is made available to relevant stakeholders. <mechanism> Process of making information about an AI system, AI-generated content, or its provenance available to users and stakeholders. | Adapted ISO 22989:2022 3.5.14 and 3.5.15 |
Trust (of AI system) | The extent to which a stakeholder is persuaded that the AI will behave as intended. | ISO 25010 |
Trustworthiness (of AI system) | The property of an AI system that makes it deserving of trust, owing to its ability to meet stakeholder expectations (e.g. reliability, fairness, privacy and security) in a verifiable way. | ISO 22989:2022 3.5.16; Australia’s AI Ethics Principles |
Validation / Validate | Confirmation, through the provision of objective evidence, that the needs of the user have been fulfilled. | IEEE SEBoK |
Verification / Verify | Confirmation, through the provision of objective evidence, that specified requirements have been fulfilled. | IEEE SEBoK |
Watermark | Information embedded into digital content, either perceptibly or imperceptibly by humans, that can serve a variety of purposes, such as establishing digital content provenance or informing stakeholders that the contents are AI-generated or significantly modified. AI-generated content watermarking: a procedure by which watermarks are embedded into AI-generated content. | Adapted from C2PA 2.4.2 |
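
The risk entries above (risk, risk analysis, risk evaluation, risk threshold) share a common quantitative skeleton: risk is a composite of an event's probability and the magnitude of its consequences, and evaluation compares the estimated level against a predefined threshold. The sketch below is illustrative only; none of the cited sources (ISO 31000, NIST AI RMF, the EU AI Act Code of Practice) prescribes a particular formula, and the multiplicative composite and single scalar threshold τ are assumptions made here for illustration.

```latex
% Illustrative sketch only: the cited standards do not prescribe a formula.
% Assumptions (ours, not the standards'): a multiplicative composite and a
% single scalar risk threshold \tau.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Risk of event e: probability of occurrence times magnitude of consequences
% (see "Risk (of AI system)").
\[
  R(e) = P(e) \times M(e)
\]
% Risk evaluation: compare the level \hat{R}(e) estimated by risk analysis
% against the predefined threshold \tau; exceeding it triggers a response,
% action, or escalation (see "Risk evaluation" and "Risk threshold").
\[
  \operatorname{escalate}(e) \iff \hat{R}(e) \ge \tau
\]
\end{document}
```

In practice, organisations often replace the single threshold τ with risk levels or tiers (as in the risk evaluation entry), mapping ranges of the estimated level to responses of increasing severity.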