README.md (7 additions & 7 deletions)
@@ -49,7 +49,7 @@ However, we only allocated 4GB of RAM when submitting each of our jobs.
## Requirements and Dependencies

This code is written in Python and uses TensorFlow. To reproduce the environment with the dependencies needed to run the code in this repo, we recommend that users create a `conda` environment from the `environment.yml` file provided in the repo. Assuming the conda package management system is installed on the user's system, this can be done as follows:

- ```shell
+ ```
$ conda env create -f environment.yml
```
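Once created, the environment should be activated before running any of the scripts in the repo. The environment name below is a placeholder: the actual name is set by the `name:` field inside `environment.yml`, which is not shown in this diff.

```shell
# List the available conda environments to find the name defined in environment.yml.
$ conda env list

# Activate it (replace <env-name> with the name reported above).
$ conda activate <env-name>
```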
@@ -62,7 +62,7 @@ The MNIST dataset we used in our experiments can be found in the `./data` direct
# BRIDGE Experiments

We performed decentralized learning using BRIDGE and some of its variants based on distributed learning screening methods, namely Median, Krum, and Bulyan. To train the one-layer neural network on MNIST with BRIDGE or its variants, run the `dec_BRIDGE.py` script. When no screening method is selected, training is done with distributed gradient descent (DGD) without screening. Each Monte Carlo trial ran in about one hundred seconds on our machines for each of the screening methods.

1) BRIDGE defending against at most two Byzantine nodes with no faulty nodes in the network (faultless setting):

- ```shell
+ ```
$ python dec_BRIDGE.py 0 -b=2 -s=BRIDGE
```

2) BRIDGE defending against at most two Byzantine nodes with exactly two faulty nodes in the network (faulty setting):

- ```shell
+ ```
$ python dec_BRIDGE.py 0 -b=2 -gb=True -s=BRIDGE
```
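For the other screening methods named above, the `-s` flag presumably takes the method name in place of `BRIDGE`; the exact option strings below (`Median`, `Krum`, `Bulyan`) are assumptions based on the prose and should be checked against the argument parser in `dec_BRIDGE.py`. Omitting `-s` falls back to plain DGD, as described above.

```shell
# Assumed usage: same trial index and Byzantine budget, different screening rule.
# The -s values mirror the method names in the text and are not confirmed by this diff.
$ python dec_BRIDGE.py 0 -b=2 -s=Median
$ python dec_BRIDGE.py 0 -b=2 -s=Krum
$ python dec_BRIDGE.py 0 -b=2 -s=Bulyan
```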
@@ -103,7 +103,7 @@ The user can run each of the possible screening methods ten times in parallel by
# ByRDiE Experiments

We performed decentralized learning using ByRDiE, both in the faultless setting and in the presence of actual Byzantine nodes. To train the one-layer neural network on MNIST with ByRDiE, run the `dec_ByRDiE.py` script. Each Monte Carlo trial for ByRDiE ran in about two days on our machines.
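A ByRDiE run is presumably launched the same way as a BRIDGE run, by passing a Monte Carlo trial index and a Byzantine-node budget; the invocation below mirrors `dec_BRIDGE.py` and is an assumption rather than something confirmed by this diff.

```shell
# Assumed invocation, mirroring the dec_BRIDGE.py interface shown above;
# check dec_ByRDiE.py's argument parser for the actual flags.
$ python dec_ByRDiE.py 0 -b=2
```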