diff --git a/bootstrap/bootstrap/README.md b/bootstrap/bootstrap/README.md
index e36b4bfd3..a601279be 100644
--- a/bootstrap/bootstrap/README.md
+++ b/bootstrap/bootstrap/README.md
@@ -20,7 +20,7 @@ will do a multiple stage bootstrap. Currently this is only two stages:
 - After internet service is fully started, bootstrap will start to download flists needed for zos node to work properly
 - As described above bootstrap run in two stages:
   - The first stage is used to update bootstrap itself, and it is done like that to avoid re-building the image if we only changed the bootstrap code. this update is basically done from `tf-autobuilder` repo in the [hub/tf-autobuilder](https://hub.grid.tf/tf-autobuilder) and download the latest bootstrap flist
-  - For the second stage bootstrap will download the flists for that env. bootstrap cares about `runmode` argument that we pass during the start of the node. for example if we passed `runmode=dev` it will get the the tag `development` under [hub/tf-zos](https://hub.grid.tf/tf-zos) each tag is linked to a sub-directory where all flists for this env exists to be downloaded and installed on the node
+  - For the second stage, bootstrap downloads the flists for that environment. Bootstrap cares about the `runmode` argument passed during the start of the node. For example, if we passed `runmode=dev` it will get the tag `development` under [hub/tf-zos](https://hub.grid.tf/tf-zos). Each tag is linked to a sub-directory where all flists for this environment exist, to be downloaded and installed on the node.

 ## Testing in Developer setup

diff --git a/specs/network/Gateway_Container.md b/specs/network/Gateway_Container.md
index b3f1dfda2..f354eab2d 100644
--- a/specs/network/Gateway_Container.md
+++ b/specs/network/Gateway_Container.md
@@ -188,7 +188,7 @@ The network setup we envisioned needed to be
 - Easy to reason about
 - Most of all, easy to debug in case something goes wrong

-There so many combinations and incantations possible (this is the the case now, but will be even more so in the future) that having to maintain a living object with many relationships in terms of adding and/or deleting is not really mpossible, but very (extremely?) difficult and prone to errors.
+There are so many combinations and incantations possible (this is the case now, but will be even more so in the future) that having to maintain a living object with many relationships, in terms of adding and/or deleting, is not really impossible, but very (extremely?) difficult and prone to errors.
 These errors can be User Errors, which can be fixable, but the most important problem is the possibility of discrepance between what is effectively live in a system and what is modeled in the database.
 A part from that problem, to add insult to injury, upgrading a network with new features or different approaches, would add an increased complexity in migration of networks from one version(or form) to another. That as well in the model, as trying to reimplement the model to reality.
 The more, a DataBase as single source of thruth adds the necessity to secure that database (with replacations, High Availability and all problems that are associated with maintianing databases). Needless to say, that is a problem that needs to be avoided like it were the plague.
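The runmode-to-tag resolution described in the bootstrap README hunk can be sketched as below. This is illustrative only: the function name `hubTag` and the printed URL layout are assumptions, and the README documents only the `runmode=dev` → `development` mapping, so every other runmode is treated as unknown here.

```go
package main

import "fmt"

// hubTag sketches the resolution the second bootstrap stage performs:
// the runmode kernel argument selects a tag under hub/tf-zos. Only the
// dev -> development mapping is documented in the README; other
// runmodes would map to their own tags (deliberately not guessed here).
func hubTag(runmode string) (string, error) {
	switch runmode {
	case "dev":
		return "development", nil
	default:
		return "", fmt.Errorf("unknown runmode %q", runmode)
	}
}

func main() {
	tag, err := hubTag("dev")
	if err != nil {
		panic(err)
	}
	// The tag names the hub sub-directory whose flists the node
	// downloads and installs; the exact URL shape is an assumption.
	fmt.Printf("https://hub.grid.tf/tf-zos/%s\n", tag)
}
```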
diff --git a/specs/network/datastructs.md b/specs/network/datastructs.md
index d5cba2ab0..6ec7b2388 100644
--- a/specs/network/datastructs.md
+++ b/specs/network/datastructs.md
@@ -169,7 +169,7 @@ type DNAT struct {
 	Protocol string `json:"protocol"`
 }

-//Ipv4Conf represents the the IPv4 configuration of an exit container
+//Ipv4Conf represents the IPv4 configuration of an exit container
 type Ipv4Conf struct {
 	// cidr
 	CIDR *net.IPNet `json:"cird"`
@@ -182,7 +182,7 @@ type Ipv4Conf struct {
 	EnableNAT bool `json:"enable_nat"`
 }

-//Ipv6Conf represents the the IPv6 configuration of an exit container
+//Ipv6Conf represents the IPv6 configuration of an exit container
 type Ipv6Conf struct {
 	Addr *net.IPNet `json:"addr"`
 	Gateway net.IP `json:"gateway"`