# Replication Package for "LiSSA: Toward Generic Traceability Link Recovery through Retrieval-Augmented Generation"
by Dominik Fuchß, Tobias Hey, Jan Keim, Haoyu Liu, Niklas Ewald, Tobias Thirolf, and Anne Koziolek
This is the replication package for our paper "LiSSA: Toward Generic Traceability Link Recovery through Retrieval-Augmented Generation".
This package contains the source code for the LiSSA tool, the dataset used in the evaluation, and the evaluation results.
## Requirements
- Java JDK 21 + Maven 3
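If you plan to build and run the tool manually, you can quickly verify that a suitable toolchain is on your `PATH` (a minimal sketch; the exact version strings depend on your distribution):

```shell
# Verify the prerequisites from the list above.
java -version   # should report a Java 21 runtime
mvn -version    # should report Apache Maven 3.x
```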
* `LiSSA-RATLR-V1` contains the code and datasets used to create the results without the significance tests. It represents a former version of the tool (i.e., without features such as seed definition).
* `LiSSA-RATLR-V2` contains the code and datasets used to create the results with the significance tests. It represents the most recent version of the tool (at the time of the paper).
* Note: The most recent version of the tool can be found at [ArDoCo/LiSSA-RATLR](https://github.com/ArDoCo/LiSSA-RATLR).
* In the current directory, you will also find some Excel sheets that contain the tables of the evaluation results.
* In `statistical-evaluation`, you will find the R scripts used to perform the significance tests.
Each of the directories contains a README that explains how to run the tool and reproduce the results.
### Evaluation Results
Our summarized evaluation results can be found in the Excel sheets in the root directory of this repository. The Excel sheets contain the evaluation results for the different datasets and configurations.
* `Evaluation-Req2Code.xlsx`: Contains the results of the requirement to code evaluation.
* `Evaluation-Req2Code-Significance.xlsx`: Contains the results of the requirement to code evaluation with significance tests.
## Installation (Docker)
> [!TIP]
> We suggest using the provided docker container, as it contains everything you need to run the tool. To run the container, execute `docker run -it --rm ghcr.io/ardoco/icse25`. The container will start in this directory.
> The docker container contains everything, including the cache.
> Thus, you do not need access to OpenAI to run the evaluation.
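Putting the tip above together, a typical session starts like this (the image is pulled automatically on first use):

```shell
# Start an interactive, throwaway container; it opens a shell
# in the replication package's root directory.
docker run -it --rm ghcr.io/ardoco/icse25
```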
## Installation (Manual)
* `embeddingcreator`: Creates embeddings for the artifacts. Providers are currently OpenAI, Ollama, and Onnx.
* `classifier`: Defines the prompts and LLMs. Here, you can define new classifiers, change prompts, or change the LLMs.
* `resultaggregator`: Aggregates the results of the classifier. This is used to obtain the traceability links at the right level of granularity.
* `postprocessor`: Postprocesses the results of the classifier. It is mostly used for changing the identifiers to match the format of the gold standards.