2 changes: 0 additions & 2 deletions .env.example
@@ -5,7 +5,5 @@ LABEL_API_PORT=55430
DBSDER_API_URL=http://localhost:3008
DBSDER_API_KEY=3d8767ff-ed2a-47bd-91c2-f5ebac712f2c
DBSDER_API_VERSION=v1
NLP_PSEUDONYMISATION_API_URL=http://localhost:8081
NLP_PSEUDONYMISATION_API_ENABLED=false
JWT_PRIVATE_KEY=myPrivateKey
RUN_MODE=LOCAL
11 changes: 6 additions & 5 deletions INSTALL.md
@@ -13,7 +13,7 @@ You can launch the backend with or without docker. To configure each of these met

Copy and rename `docker.env.example` and `.env.example`.
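
A minimal sketch of that step, assuming the copies are simply named `.env` and `docker.env` (adjust the target names if your setup expects different ones):

```sh
# Assumed target file names; adjust them to whatever your setup expects.
cp .env.example .env
cp docker.env.example docker.env
```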

Label depends on 2 other services from the Cour de cassation: dbsder-api and nlp-pseudonymisation-api. You can launch these services locally to simulate operation close to production, or you can disable these services from the env files. In this case these 2 services are emulated by Label with the storage folder. To do so, follow the `Add documents you want to annotate` step in the [reuser guide](docs/reuserGuide.md) or just rename the `storage-example` folder to `storage`.
Label depends on one other service from the Cour de cassation: [dbsder-api](https://github.com/cour-de-cassation/dbsder-api). This API is used to import and export decisions in Label.

## Installation and launch

@@ -65,11 +65,12 @@ yarn start:backend:dev

### Database

You can init the database with:
You can init/load/clean the database with the seeds scripts:

```sh
yarn init:db
```
- clean the database: `node seeds/clean.js`
- load fake data into all collections: `node seeds/load.js`
- save your current database data to the seeds: `node seeds/save.js`
- refresh dates to a recent date: `node seeds/refreshDate.js <timestamp>`

These scripts are launched with the `.env` configuration.
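
For example, a local reset could look like the sketch below. It assumes the MongoDB instance referenced in `.env` is running and that the commands are run from the folder that contains `seeds/`:

```sh
# Sketch of a local database reset, under the assumptions stated above.
node seeds/clean.js   # drop the existing data
node seeds/load.js    # load fake data into all collections
```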

2 changes: 0 additions & 2 deletions ansible/group_vars/all/main.yml
@@ -6,8 +6,6 @@ label_api_port: "55430" # pas encore utilisé, a utiliser dans le dockerfile du
label_client_app_id: "label-client"
label_back_app_id: "label-backend"
label_back_root_path: "label/api"
nlp_pseudonymisation_api_url: "http://nlp-pseudonymisation-api-service.nlp.svc.cluster.local:8081"
nlp_pseudonymisation_api_enabled: "true"
dbsder_api_url: "http://api-service.dbsder:3000" # url not tested
dbsder_api_version: "v1"
label_db_name: "labelDb"
9 changes: 0 additions & 9 deletions ansible/roles/deploy_backend/defaults/main/defaults.yml
@@ -1,14 +1,5 @@
---
jobs:
- name: "nlp-annotation"
schedule: "*/5 6-19 * * *"
successful_jobs_history_limit: 7
failed_jobs_history_limit: 7
backoff_limit: 0
restartPolicy: "Never"
parallelism: 1
active_deadline_seconds: 18000
command: "dist/scripts/annotateDocumentsWithoutAnnotationsWithNlp.js"
- name: "tobetreated-document-import"
schedule: "*/15 6-17 * * *"
successful_jobs_history_limit: 7
2 changes: 0 additions & 2 deletions ansible/roles/deploy_backend/tasks/configmap.yml
@@ -17,8 +17,6 @@
APP_ID: "{{ label_back_app_id }}"
KUBE_NAMESPACE: "{{ label_kube_namespace }}"
ROOT_PATH: "{{ label_back_root_path }}"
NLP_PSEUDONYMISATION_API_URL: "{{ nlp_pseudonymisation_api_url }}"
NLP_PSEUDONYMISATION_API_ENABLED: "{{ nlp_pseudonymisation_api_enabled }}"
DBSDER_API_URL: "{{ dbsder_api_url }}"
DBSDER_API_VERSION: "{{ dbsder_api_version }}"
LABEL_API_PORT: "{{ label_api_port }}"
2 changes: 0 additions & 2 deletions docker.env.example
@@ -5,7 +5,5 @@ LABEL_API_PORT=55430
DBSDER_API_URL=http://dbsder-api-dbsder-api-1:3000
DBSDER_API_KEY=3d8767ff-ed2a-47bd-91c2-f5ebac712f2c
DBSDER_API_VERSION=v1
NLP_PSEUDONYMISATION_API_URL=http://host.docker.internal:8081
NLP_PSEUDONYMISATION_API_ENABLED=false
JWT_PRIVATE_KEY=myPrivateKey
RUN_MODE=LOCAL
4 changes: 0 additions & 4 deletions docs/cronJobs.md
@@ -2,10 +2,6 @@

Here are all the cron jobs of Label:

## nlp-annotation

Send Label documents to the nlp annotation service.

## import-j-7

Import recent (7 days) documents from the source databases.
4 changes: 1 addition & 3 deletions docs/documentStatuses.md
@@ -2,9 +2,7 @@

A `document` is supposed to follow a specific flow once it enters the label database. Its status will evolve accordingly.

- `loaded`: the document has only been imported in LABEL. No treatment has been done on it.
- `nlpAnnotating`: the document is currently being annotated by the NLP annotator.
- `free`: the document is ready to be annotated by a working user. There are already several `treatments` related to that document (one by the `NLP`, maybe one with the `supplementaryAnnotations` if decision is partially public)
- `free`: the document is ready to be annotated by a working user. There are already several `treatments` related to that document (one by the `NLP` and others from users)
- `pending`: the document is proposed to a working user. An `assignation` and a corresponding empty `treatment` have been created. The document won't be proposed to another working user.
- `saved`: the document is being annotated by a working user.
- `toBeConfirmed`: the document needs to be proof-read a second time by an administrator
14 changes: 6 additions & 8 deletions docs/reuserGuide.md
@@ -4,15 +4,13 @@ If you are reusing Label, these instructions will be useful. Also have a look a

## Add documents you want to annotate

The `courDeCassation/storage-example` contains two folders:
We provide seeds to populate the database with test data.
To use the seeds, run the following commands from the project root (a full example sequence is sketched after the list):

- documents : the documents you want to annotate. Look at `courDeCassation/storage-example/documents/123452.json` for an example of the fields you are supposed to fill. The only required fields are:
- `dateDecision`: the date of the document
- `originalText`: the text of the document. Every paragraph has to be separated by \n
- `sourceId`: the unique ID of the document, which must also be its name ("{ID}.json")
- annotations: the initial annotations for a document. If you don't have an automatic annotator, copy/paste the `courDeCassation/storage-example/annotations/123452.json` content.

The folder used by LABEL is `courDeCassation/storage`. If you want to reuse the `storage-example` folder as is, simply rename it to `storage`.
- clean the database: `node seeds/clean.js`
- load fake data into all collections: `node seeds/load.js`
- save your current database data to the seeds: `node seeds/save.js`
- refresh dates to a recent date: `node seeds/refreshDate.js <timestamp>`
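
For instance, to end up with test documents whose dates look recent, one possible sequence is sketched below. It assumes `seeds/refreshDate.js` expects a Unix timestamp in milliseconds; check the script itself before relying on that:

```sh
# Sketch: load fake documents and shift their dates toward the current time.
# The millisecond timestamp format is an assumption; check seeds/refreshDate.js.
node seeds/clean.js
node seeds/load.js
node seeds/refreshDate.js "$(($(date +%s) * 1000))"
```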

## Edit the annotation settings

25 changes: 0 additions & 25 deletions docs/scripts.md
@@ -2,10 +2,6 @@

Here is the list of available scripts with a brief description. Options can be found at [`packages/courDeCassation/src/scripts`](https://github.com/Cour-de-cassation/label/tree/dev/packages/courDeCassation/src/scripts).

## annotateDocumentsWithoutAnnotationsWithNlp

Send documents to the NLP API and retreive their annotations.

## autoImportDocumentsFromSder

Import all documents to be pseudonymized from SDER.
@@ -14,18 +10,6 @@ Import all documents to be pseudonymized from SDER.

Cleaning script (cleans duplicated documents, among other things).

## clearDb

Clear Label database.

## clearDbExceptUsers

Delete all documents related data but keep users.

## clearDbOnlyProblemReports

Delete all problem reports.

## deleteDocument

Delete specific document from Label db.
@@ -58,10 +42,6 @@ Export treated documents (with the 4 days delay).

Export important "publishable" documents.

## fillLossOfAllTreatedDocuments

Calculate loss of the documents with the NLP API.

## freePendingDocuments

Free documents assigned to an annotator who has been AFK for X minutes.
@@ -94,11 +74,6 @@ List documents with problem reports.

Purge db (for now only the users in statistics after 6 months).

## reAnnotateFreeDocuments

If the NLP API was outdated or buggy, reannotate free documents. Warning: suspend nlp-annotation job during this operation to avoid side effects.
This script only prepare documents and set their status to loaded, the next nlp-annotation job will reannotate them.

## renewCache

Renew the cache.
1 change: 0 additions & 1 deletion package.json
@@ -22,7 +22,6 @@
"docker:start:db": "docker compose -f docker-compose-dev.yml up -d labelDb",
"docker:stop:db": "docker compose -f docker-compose-dev.yml down labelDb",
"fix": "lerna run fix",
"init:db": "scripts/initializeTestDb.sh",
"lint": "lerna run lint",
"start:backend": "lerna run --scope @label/cour-de-cassation start --stream",
"start:backend:dev": "nodemon",
15 changes: 0 additions & 15 deletions packages/courDeCassation/src/annotator/buildNlpAnnotator.ts

This file was deleted.

23 changes: 0 additions & 23 deletions packages/courDeCassation/src/annotator/fetcher/api/index.ts

This file was deleted.

135 changes: 0 additions & 135 deletions packages/courDeCassation/src/annotator/fetcher/api/nlpApi.ts

This file was deleted.
