Get the TEA Platform running locally with Docker in just a few steps. You will need:
- Docker and Docker Compose
- Git
- A GitHub OAuth App for authentication (optional)
- Clone the repository

  ```bash
  git clone https://github.com/alan-turing-institute/AssurancePlatform.git
  cd AssurancePlatform
  ```
- Set up environment files

  ```bash
  # Copy the example environment files
  cp tea_backend/.env.example tea_backend/.env.local
  cp tea_frontend/.env.example tea_frontend/.env.local
  ```
- Configure GitHub OAuth (optional)

  - Create a new GitHub OAuth App
  - Set the Homepage URL to `http://localhost:3000`
  - Set the Authorization callback URL to `http://localhost:3000/api/auth/callback/github`
  - Add your Client ID and Client Secret to both `.env.local` files, as sketched below
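  The exact variable names come from the `.env.example` files; the entries below are an illustrative sketch (the names `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET` are assumptions, not confirmed by the repository):

  ```bash
  # Hypothetical .env.local entries -- match the names used in your .env.example
  GITHUB_CLIENT_ID=your-oauth-app-client-id
  GITHUB_CLIENT_SECRET=your-oauth-app-client-secret
  ```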
- Start the development environment

  ```bash
  docker-compose -f docker-compose.development.yml up --build
  ```
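  If you prefer to keep the terminal free, the same stack can also be run detached and inspected with standard Compose commands:

  ```bash
  # Run the stack in the background, then follow the logs
  docker-compose -f docker-compose.development.yml up --build -d
  docker-compose -f docker-compose.development.yml logs -f
  # List the running services (frontend, backend, database)
  docker-compose -f docker-compose.development.yml ps
  ```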
- Access the platform

  - Frontend: http://localhost:3000
  - Backend API: http://localhost:8000/api
  - Database: PostgreSQL on localhost:5432
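Once the containers are up, a quick check from the command line can confirm the services respond; the backend path below is assumed from the URL listed above rather than taken from the API documentation:

```bash
# Frontend should return the app's HTML headers
curl -I http://localhost:3000
# Backend API root (path assumed from the URL listed above)
curl -I http://localhost:8000/api/
```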
The platform includes user registration, so you can create an account directly or sign in with GitHub (optional). For more detailed setup instructions, see the technical documentation.
This repository contains the code and documentation for the Trustworthy and Ethical Assurance (TEA) platform, an application for building trustworthy and ethical assurance cases, developed by researchers at the Alan Turing Institute and the University of York.
The Trustworthy and Ethical Assurance (TEA) Platform is a collaborative tool for developing structured arguments about how ethical principles and trustworthy practices have been upheld throughout the lifecycle of data-driven technologies.
At its core, TEA helps multi-stakeholder project teams create assurance cases: structured, graphical representations that demonstrate how goals like fairness, explainability, safety, or sustainability have been achieved over the course of a project's lifecycle.
The platform addresses a fundamental challenge in responsible technology development: how can project teams provide justified evidence that ethical principles have been upheld?
TEA supports this through three integrated components:
- An interactive tool for building assurance cases
- A comprehensive framework of skills and capabilities resources
- A collaborative community infrastructure that promotes open practices and shared learning in the trustworthy assurance ecosystem
Our documentation site can be accessed at https://assuranceplatform.azurewebsites.net/documentation
The following resources provide additional information about the Trustworthy and Ethical Assurance framework and methodology:
- Burr, C., Arana, S., Gould Van Praag, C., Habli, I., Kaas, M., Katell, M., Laher, S., Leslie, D., Niederer, S., Ozturk, B., Polo, N., Porter, Z., Ryan, P., Sharan, M., Solis Lemus, J. A., Strocchi, M., Westerling, K., (2024) Trustworthy and Ethical Assurance of Digital Health and Healthcare. https://doi.org/10.5281/zenodo.10532573
- Porter, Z., Habli, I., McDermid, J. et al. A principles-based ethics assurance argument pattern for AI and autonomous systems. AI Ethics 4, 593–616 (2024). https://doi.org/10.1007/s43681-023-00297-2
- Burr, C. and Powell, R., (2022) Trustworthy Assurance of Digital Mental Healthcare. The Alan Turing Institute. https://doi.org/10.5281/zenodo.7107200
- Burr, C., & Leslie, D. (2022). Ethical assurance: A practical approach to the responsible design, development, and deployment of data-driven technologies. AI and Ethics. https://doi.org/10.1007/s43681-022-00178-0
From March 2024 until September 2024, the project was funded by UKRI's BRAID programme as part of a scoping research award for the Trustworthy and Ethical Assurance of Digital Twins project.
Between April 2023 and December 2023, this project received funding from the Assuring Autonomy International Programme, a partnership between Lloyd's Register Foundation and the University of York, which was awarded to Dr Christopher Burr.
Between July 2021 and June 2022, this project received funding from UKRI's Trustworthy Autonomous Systems Hub, which was awarded to Dr Christopher Burr (Grant number: TAS_PP_00040).