Explainability for Vulnerability Identification in AI Systems

A Contract Award Notice
by DEFENCE SCIENCE AND TECHNOLOGY LABORATORY

Source
Contracts Finder
Type
Contract (Products)
Duration
1 year
Value
£150K
Sector
TECHNOLOGY
Published
24 Aug 2022
Delivery
01 Sep 2022 to 31 Aug 2023
Deadline
08 Jul 2022 23:59

Description

The Research and Development submission to the Strategic Review (SR20) recognised the need to advance MOD's ability to adopt critical and game-changing technology, enabling autonomous systems on the battlefield and in the command space through the use of artificial intelligence. It proposed to do this by establishing a Defence AI Centre, with the science and technology component delivered by a Defence AI Centre Experimentation hub (DAIC-X) led by Dstl. A key objective for DAIC-X is to understand and develop good practice in managing AI verification, validation, and vulnerabilities, as well as wider issues including trust, transparency, and legal and ethical considerations. This task will research the potential to exploit artificial intelligence explainability (XAI) methodologies to identify and expose vulnerabilities in neural network-based machine vision algorithms. Please see the attached Tasking Form for further information regarding this award.
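To give a flavour of the technique the task describes (this sketch is illustrative only and not part of the award documents): gradient-based XAI saliency highlights the input pixels a model's prediction depends on most, and those same pixels are natural targets for a small adversarial perturbation. A minimal sketch, assuming a toy logistic-regression "vision model" over a flattened 4x4 image; every name, weight, and parameter below is a hypothetical stand-in for the neural networks the research would study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Toy stand-in for a vision model: logistic regression on a flat 4x4 image.
w = rng.normal(size=16)             # model weights (hypothetical)
b = 0.1                             # bias (hypothetical)
x = rng.uniform(0.0, 1.0, size=16)  # input image, pixel values in [0, 1]

score = sigmoid(w @ x + b)          # model confidence for the positive class

# XAI step - gradient saliency: d(score)/d(pixel) = score * (1 - score) * w.
grad = score * (1.0 - score) * w
saliency = np.abs(grad)

# Vulnerability probe: nudge only the 3 most salient pixels against the
# prediction (an FGSM-style step restricted to high-saliency locations).
eps = 0.2
top = np.argsort(saliency)[-3:]
x_adv = x.copy()
x_adv[top] -= eps * np.sign(grad[top])
x_adv = np.clip(x_adv, 0.0, 1.0)    # keep pixels in the valid range

adv_score = sigmoid(w @ x_adv + b)
print(f"score before: {score:.3f}, after saliency-guided perturbation: {adv_score:.3f}")
```

If the confidence drops sharply after perturbing only a handful of salient pixels, the explanation has exposed a concrete vulnerability; with a real convolutional network the gradient would come from autodiff rather than the closed form used here.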

Award Detail

1 City University of London (London)
  • Value: £149,717

CPV Codes

  • 48000000 - Software package and information systems

Indicators

  • Contract is suitable for SMEs.

Other Information

  • RCloud_Tasking_Form_Part A-Task_Overview_v1.1.pdf — Part A of the Tasking Form, giving the basic details of the requirement
  • 20220608_Explainability_for_Vulnerability_Identification_in_AI_Systems_SOR_O_v0.6.1.pdf — details the requirements of the tasking
  • Transparency Annex C.pdf — Transparency Annex C
