# nuodb-compose #

Docker compose files for starting a nuodb database on the local host;
- and - optionally - initialising the database state from an existing NuoDB backup.

## Use Cases ##

* Create a local NuoDB database in docker on a developer's laptop;
* Create a local copy of a running database for diagnostic purposes;
* Create a local copy of an existing database for UAT or testing purposes;
* Create a simple multi-engine database on a single cloud node (VM);

These docker compose files will create:

* a new docker network specifically for this database;
* separate AP (admin), TE, and SM containers - one for each NuoDB process;
  - With changes to the file, a second TE can be supported;
* a separate CD (collector) container for each engine container - to enable NuoDB Insights;
* influxdb and grafana containers to host the NuoDB Insights dashboards.

Note that the container names will have the `project` name embedded - which is the name of the directory (`nuodb`) or set with the `-p` option to `docker-compose`.
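
For example (a sketch - `mydb` is an arbitrary project name; the `sm` service name is the one confirmed elsewhere in this README):

```
# Run from the directory containing the compose files; the directory name
# becomes the project name, so containers get names like nuodb_sm_1.
docker-compose up -d

# Or set the project name explicitly; containers are then named like mydb_sm_1.
docker-compose -p mydb up -d
```
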
# Instructions #
## Creating the database ##
- on some platforms, setting `EXTERNAL_ADDRESS` to `127.0.0.1` also works;
- if you want to import initial state from a database backup into the new database, set `IMPORT_LOCAL` and/or `IMPORT_REMOTE` (see `Notes` below for details of `IMPORT_LOCAL` and `IMPORT_REMOTE`);
- the `import` operation is only performed when the archive dir is _empty_ - so the SM container can be stopped and restarted without being reinitialised each time.
- if you have set `IMPORT_LOCAL` or `IMPORT_REMOTE` _and_ it is a large archive that takes multiple minutes to import, you _will_ need to
  set `STARTUP_TIMEOUT` to a value larger than the time taken to import, to stop the DB startup from timing out before the IMPORT has completed (see the sketch after these steps).

4. Create and start the nuodb database with `docker-compose up -d`.

_*NOTE:*_ The `docker-compose` command may suggest that you use `docker compose` instead.
*Don't - it doesn't work.*
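
A combined sketch of the settings above and the startup step - assuming, as is usual with `docker-compose`, that these variables are read from the shell environment (or a `.env` file in the project directory); the address, path, and timeout values are placeholders:

```
# Placeholder values - adjust for your machine and your backup.
export EXTERNAL_ADDRESS=127.0.0.1                  # works on some platforms; otherwise use the host's external address
export IMPORT_LOCAL=/path/to/nuodb-backup.tar.gz   # optional: backup to import into the new database
export STARTUP_TIMEOUT=600                         # increase if a large import would otherwise time out

# Create and start the database.
docker-compose up -d
```
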
2. The initial state of the database can be imported using `IMPORT_LOCAL` and/or `IMPORT_REMOTE`, as follows:
   - set `IMPORT_LOCAL` to a path on the _local_ machine.
     The SM container will mount this path as a volume and import it into the
     archive dir prior to starting the SM process (presuming the archive is empty);

     The path that `IMPORT_LOCAL` points to can be one of:
     - a `tar.gzip` file of a `nuodb backup`;
     - a directory containing a `nuodb backup`.

     *Note*: a `nuodb backup` can come in one of two formats:
     - a nuodb `backup set` - which is the result of a `hotcopy --type full` backup;
     - a nuodb `archive` - which is the result of a `hotcopy --type simple`, or just a copy of an SM archive and journal taken while the SM is _NOT_ running.

     *Note*: a `backup set` can only be imported from a directory.

   - set `IMPORT_REMOTE` to a URL of a remote `file` hosted on a server - typically accessed through `http(s)` or `(s)ftp`.
     - Ex: `https://some.server.io/backup-4-3.tz`
     - Ex: `sftp://sftp.some.domain.com/backup-4-3.tz`

     *Note* that:
     The SM container will download the remote file via the URL and extract it into the archive dir prior to starting the SM process.

   - if you set _both_ `IMPORT_LOCAL` _and_ `IMPORT_REMOTE`, then `IMPORT_REMOTE` is treated as the remote source, and `IMPORT_LOCAL` is treated as a locally cached copy - hence the behaviour is as follows:
     - if `IMPORT_LOCAL` is a _non-empty_ `file` or `directory`, then it is used directly, and `IMPORT_REMOTE` is ignored.
     - if `IMPORT_LOCAL` is an _empty_ `file`, then `IMPORT_REMOTE` is downloaded into `IMPORT_LOCAL`, and the `import` is then performed by `extracting` from `IMPORT_LOCAL` into the `archive`;
       - note this _only_ works for a `tar.gzip` file of an `archive` (see above).
     - if `IMPORT_LOCAL` is an _empty_ `directory`, then `IMPORT_REMOTE` is downloaded and extracted into `IMPORT_LOCAL`, and the `import` is then performed from `IMPORT_LOCAL` into the `archive`;
       - note this works for _both_ forms of `nuodb backup` (see above);
     - *Note*: importing from a `directory` can be significantly _slower_ than importing (extracting directly) from a `tar.gzip` file.

   _*Hence:*_ To cause the initial download from `IMPORT_REMOTE` to be cached in `IMPORT_LOCAL`, `IMPORT_LOCAL` _must_ exist _and_ be empty.

   To ensure this, you can do something like the following:
   - `$ rm -rf a/b/c`
   - `$ touch a/b/c` or `mkdir -p a/b/c`

   Now you can set `IMPORT_REMOTE` as needed, and set `IMPORT_LOCAL` to `a/b/c`.
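
   Putting that together - a minimal sketch, reusing the placeholder URL from the examples above and assuming the variables are picked up from the shell environment:

   ```
   # Start with an empty cache directory (works for both backup formats).
   rm -rf a/b/c
   mkdir -p a/b/c

   # First run: the SM downloads IMPORT_REMOTE into a/b/c and imports from it;
   # later runs find a/b/c non-empty and reuse the cached copy, skipping the download.
   export IMPORT_REMOTE='https://some.server.io/backup-4-3.tz'
   export IMPORT_LOCAL="$PWD/a/b/c"   # an absolute path is assumed here for the volume mount
   docker-compose up -d
   ```
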
To check the logs of any container, run `docker logs <name-of-container>`
Ex: `docker logs nuodb_sm_1`
4. If you get an error of the form:
```
IMPORT_REMOTE is not a valid URL: ... - import aborted
```
then you have not set `IMPORT_REMOTE` to a valid URL.
5. If you get an error of the form:
```
This database has <n> archives with no running SM.
No new SMs can start while the database is in this state.
```
then you have somehow restarted the database with existing archives but too few running SMs.
This could happen if an import somehow failed after it had started, and you restart with `IMPORT_X` set.
This could also happen if an SM has shut down, and you try to restart it with `docker-compose up`, but have accidentally set `IMPORT_X`.
(You cannot attempt to import the database state if there is existing state in some archive - even if the SM for that archive is not currently running.)
Follow the instructions given with the error message to resolve the problem(s), and then continue starting with:
`... docker-compose up -d`
6. If an error causes only part of the database to be deployed, you can start the remaining containers - after fixing the error - by simply running `... docker-compose up -d` again. The `up` command only starts those containers that are not currently running.
When running `... docker-compose up` a subsequent time, you need to decide whether you still need to set the `IMPORT_X` variable(s):
- you _DON'T_ need to if the database state has already been successfully imported;
- you probably _DO_ need to if you had them set for the original `docker-compose up` command, and the `import` has not yet succeeded.
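
For example (a sketch - the `...` in the commands above stands for whatever variable settings you used originally):

```
# Import already completed successfully: restart the missing containers with no IMPORT_* set.
docker-compose up -d

# Import never completed: keep the original IMPORT_* setting(s) for the re-run.
IMPORT_REMOTE='https://some.server.io/backup-4-3.tz' docker-compose up -d
```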