Trustworthy and Ethical Assurance Platform

[Illustration: groups of people collaboratively developing a structured assurance case across different workstations linked by different paths.]

[Badges: Go to the TEA Platform · DOI · Backend Coverage · Frontend Coverage · License: MIT]

Development Quickstart 💻

Get the TEA Platform running locally with Docker in just a few steps:

Prerequisites

  • Docker and Docker Compose
  • Git

Quick Setup

  1. Clone the repository

    git clone https://github.com/alan-turing-institute/AssurancePlatform.git
    cd AssurancePlatform
  2. Set up environment files

    # Copy example environment files
    cp tea_backend/.env.example tea_backend/.env.local
    cp tea_frontend/.env.example tea_frontend/.env.local
  3. Configure GitHub OAuth (optional)

    • Create a new GitHub OAuth App
    • Set Homepage URL: http://localhost:3000
    • Set Authorization callback URL: http://localhost:3000/api/auth/callback/github
    • Add your Client ID and Client Secret to both .env.local files
  4. Start the development environment

    docker-compose -f docker-compose.development.yml up --build
  5. Access the platform

     Open the frontend in your browser, typically at http://localhost:3000 (the same address registered as the GitHub OAuth homepage URL).

The platform includes user registration, so you can create an account directly or sign in with GitHub (optional). For more detailed setup instructions, see the technical documentation.
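As a sketch of step 3, the GitHub OAuth credentials end up in both `.env.local` files. The variable names below are illustrative assumptions, not confirmed by this repository; the actual keys are listed in each `.env.example` file:

```shell
# Hypothetical .env.local fragment (variable names are assumptions;
# check tea_backend/.env.example and tea_frontend/.env.example for the real keys).
GITHUB_CLIENT_ID=your-client-id-here
GITHUB_CLIENT_SECRET=your-client-secret-here

# For reference, the URLs registered in the GitHub OAuth App (from the steps above):
# Homepage URL:               http://localhost:3000
# Authorization callback URL: http://localhost:3000/api/auth/callback/github
```

Remember that, per step 3, the client ID and secret need to be present in both the backend and frontend `.env.local` files.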

About this Repository 🗂

This repository contains the code and documentation for the Trustworthy and Ethical Assurance (TEA) platform, an application for building trustworthy and ethical assurance cases, developed by researchers at the Alan Turing Institute and the University of York.

What is TEA? 🫖

The Trustworthy and Ethical Assurance (TEA) Platform is a collaborative tool for developing structured arguments about how ethical principles and trustworthy practices have been upheld throughout the lifecycle of data-driven technologies.

At its core, TEA helps multi-stakeholder project teams create assurance cases: structured, graphical representations that demonstrate how goals like fairness, explainability, safety, or sustainability have been achieved over the course of a project's lifecycle.

The platform addresses a fundamental challenge in responsible technology development: how can project teams provide justified evidence that ethical principles have been upheld?

TEA supports this through three integrated components:

  1. An interactive tool for building assurance cases
  2. A comprehensive framework of skills and capabilities resources
  3. A collaborative community infrastructure that promotes open practices and shared learning in the trustworthy assurance ecosystem

Documentation 📄

Our documentation site can be accessed at https://assuranceplatform.azurewebsites.net/documentation

Further Resources 📚

The following resources provide additional information about the Trustworthy and Ethical Assurance framework and methodology:

  • Burr, C., Arana, S., Gould Van Praag, C., Habli, I., Kaas, M., Katell, M., Laher, S., Leslie, D., Niederer, S., Ozturk, B., Polo, N., Porter, Z., Ryan, P., Sharan, M., Solis Lemus, J. A., Strocchi, M., & Westerling, K. (2024). Trustworthy and Ethical Assurance of Digital Health and Healthcare. https://doi.org/10.5281/zenodo.10532573
  • Porter, Z., Habli, I., McDermid, J., et al. (2024). A principles-based ethics assurance argument pattern for AI and autonomous systems. AI and Ethics, 4, 593–616. https://doi.org/10.1007/s43681-023-00297-2
  • Burr, C., & Powell, R. (2022). Trustworthy Assurance of Digital Mental Healthcare. The Alan Turing Institute. https://doi.org/10.5281/zenodo.7107200
  • Burr, C., & Leslie, D. (2022). Ethical assurance: A practical approach to the responsible design, development, and deployment of data-driven technologies. AI and Ethics. https://doi.org/10.1007/s43681-022-00178-0

Funding Statements 💷

From March 2024 until September 2024, the project was funded by UKRI's BRAID programme as part of a scoping research award for the Trustworthy and Ethical Assurance of Digital Twins project.

Between April 2023 and December 2023, this project received funding from the Assuring Autonomy International Programme, a partnership between Lloyd's Register Foundation and the University of York, which was awarded to Dr Christopher Burr.

Between July 2021 and June 2022, this project received funding from UKRI's Trustworthy Autonomous Systems Hub, which was awarded to Dr Christopher Burr (Grant number: TAS_PP_00040).
