
Commit 215852b

More fixes
1 parent 3eb9946 commit 215852b

92 files changed: +312 / -320 lines changed


.markdownlint.yaml

Lines changed: 2 additions & 0 deletions
```diff
@@ -1,5 +1,7 @@
 MD041: false
 MD013: false
 MD033: false
+MD045: false
+MD046: false
 MD004:
   style: dash
```
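
For reference, the full rule file after this change can be reassembled from the hunk above; the comments below summarize what each markdownlint rule controls and are not part of the committed file:

```yaml
MD041: false   # first line does not have to be a top-level heading
MD013: false   # no line-length limit
MD033: false   # inline HTML is allowed
MD045: false   # images without alt text are allowed
MD046: false   # no enforced code-block style
MD004:
  style: dash  # unordered lists must use "-"
```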

docs/concepts/fs/feature_view/offline_api.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@ HSFS hides this complexity by performing the point-in-time JOIN transparently, s
 HSFS uses the event_time columns on both feature groups to determine the most recent (but not newer) feature values that are joined together with the feature values from the feature group containing the label.
 That is, the features in the feature group containing the label are the observation times for the features in the resulting training data, and we want feature values from the other feature groups that have the most recent timestamps, but not newer than the timestamp in the label-containing feature group.
 
-#### Spine Dataframes
+#### Spine Groups
 
 The left side of the point-in-time join is typically the set of training entities/primary key values for which the relevant features need to be retrieved.
 This left side of the join can also be replaced by a [spine group](../feature_group/spine_group.md).
```

docs/setup_installation/admin/alert.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -118,7 +118,7 @@ It will try to load the new configuration to the alertmanager and show any error
 If you make any changes to the configuration ensure that the changes are valid by reloading the configuration until the changes are loaded and visible in the advanced page.
 
 _Example:_ Adding the yaml snippet shown below in the global section of the alert manager configuration will
-have the same effect as creating the SMTP configuration as shown in [section 1](#1-email-alerts) above.
+have the same effect as creating the SMTP configuration as shown in [section 1](#step-2-configure-email-alerts) above.
 
 ```yaml
 global:
````
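
The hunk is cut off right after the opening of the referenced snippet, so the actual docs content is not visible here. As a rough sketch, a `global` SMTP block in an Alertmanager configuration typically looks like the following; the host, sender address and credentials are illustrative placeholders, not values from the documentation:

```yaml
global:
  # Illustrative placeholders; substitute your own SMTP relay and credentials.
  smtp_smarthost: smtp.example.com:587
  smtp_from: alerts@example.com
  smtp_auth_username: alerts@example.com
  smtp_auth_password: changeme
  smtp_require_tls: true
```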

docs/setup_installation/admin/audit/export-audit-logs.md

Lines changed: 11 additions & 11 deletions
````diff
@@ -18,20 +18,20 @@ Create a dataset and a table in [BigQuery](https://cloud.google.com/bigquery/doc
 
 The table schema is shown below.
 
-```
-fullname        mode      type       description
-pathInfo        NULLABLE  STRING
-methodName      NULLABLE  STRING
-caller          NULLABLE  RECORD
-dateTime        NULLABLE  TIMESTAMP  bq-datetime
+```plaintext
+fullname        mode      type       description
+pathInfo        NULLABLE  STRING
+methodName      NULLABLE  STRING
+caller          NULLABLE  RECORD
+dateTime        NULLABLE  TIMESTAMP  bq-datetime
 userAgent       NULLABLE  STRING
-clientIp        NULLABLE  STRING
-outcome         NULLABLE  STRING
-parameters      NULLABLE  STRING
+clientIp        NULLABLE  STRING
+outcome         NULLABLE  STRING
+parameters      NULLABLE  STRING
 className       NULLABLE  STRING
 caller.userId   NULLABLE  STRING
-caller.email    NULLABLE  STRING
-caller.username NULLABLE  STRING
+caller.email    NULLABLE  STRING
+caller.username NULLABLE  STRING
 ```
 
 ## Step 2: Export Audit Logs to the BigQuery Table
````

docs/setup_installation/admin/ha-dr/dr.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -54,7 +54,8 @@ The backup will be located locally on each datanode under the following path:
 /srv/hops/mysql-cluster/ndb/backups/BACKUP - the directory name will be BACKUP-[backup_id]
 ```
 
-A more comprehensive backup script is available [here](https://github.com/logicalclocks/ndb-chef/blob/master/templates/default/native_ndb_backup.sh.erb) - The script includes the steps above as well as collecting all the partial RonDB backups on a single node.
+You can check out [a more comprehensive backup script](https://github.com/logicalclocks/ndb-chef/blob/master/templates/default/native_ndb_backup.sh.erb).
+The script includes the steps above as well as collecting all the partial RonDB backups on a single node.
 The script is a good starting point and can be adapted to ship the database backup outside the cluster.
 
 ### HopsFS Backup
````

docs/setup_installation/admin/monitoring/export-metrics.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -67,14 +67,14 @@ kubectl -n $NAMESPACE get svc prometheus-external -ojsonpath='{.status.loadBalan
 !!!Warning
     It will take a few seconds until an IP address is assigned to the Service.
 
-We will use this IP address in Step 2.
+We will use this IP address in Step 3.
 
-#### Step 2
+#### Step 3
 
 Edit the configuration file of **Prometheus B** server and append the following Job under `scrape_configs`:
 
 !!! note
-    Replace IP_ADDRESS with the IP address from Step 1 or the IP address of Prometheus service if it is directly accessible.
+    Replace IP_ADDRESS with the IP address from Step 2 or the IP address of Prometheus service if it is directly accessible.
     The snippet below assumes Hopsworks services runs at Namespace **hopsworks**.
 
 ```yaml
````
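
The actual `scrape_configs` job is not visible in this hunk. A federation-style job for pulling metrics from the exposed Prometheus service would look roughly like the sketch below; the job name, port, and `match[]` selector are illustrative assumptions, with only the `hopsworks` namespace taken from the note above:

```yaml
scrape_configs:
  # Illustrative federation job; IP_ADDRESS is the address obtained in the previous step.
  - job_name: hopsworks-federation
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{namespace="hopsworks"}'
    static_configs:
      - targets: ['IP_ADDRESS:9090']
```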

docs/setup_installation/admin/oauth2/create-client.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -3,7 +3,7 @@
 ## Introduction
 
 Before registering your identity provider in Hopsworks you need to create a client application in your identity provider and acquire a _client id_ and a _client secret_.
-An example on how to create a client using [Okta](https://www.okta.com/) and [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) identity providers can be found [here](../create-okta-client) and [here](../create-azure-client) respectively.
+An example on how to create a client using [Okta](https://www.okta.com/) and [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) identity providers can be found in the following guides: [Create Okta Client](../create-okta-client) and [Create Azure Client](../create-azure-client).
 
 ## Prerequisites
 
```

docs/setup_installation/admin/project.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -48,7 +48,7 @@ As an example, a 100MB file stored with a replication factor of 3, will consume
 By default, all storage quotas are disabled and not enforced.
 Administrators can change this default by changing the following configuration in the [Configuration](../admin/variables.md) UI and/or the cluster definition:
 
-```
+```yaml
 hopsworks:
   featurestore_default_quota: [default quota in bytes, -1 to disable]
   hdfs_default_quota: [default quota in bytes, -1 to disable]
@@ -68,7 +68,7 @@ Currently, Hopsworks does not support defining quotas for compute scheduled on t
 By default, the compute quota is disabled.
 Administrators can change this default by changing the following configuration in the [Configuration](../admin/variables.md) UI and/or the cluster definition:
 
-```
+```yaml
 hopsworks:
   yarn_default_payment_type: [NOLIMIT to disable the quota, PREPAID to enable it]
   yarn_default_quota: [default quota in seconds]
@@ -95,7 +95,7 @@ By default, each user can create up to 10 projects.
 For production environments, the number of projects should be limited and controlled for resource allocation purposes as well as closer control over the data.
 Administrators can control how many projects a user can provision by setting the following configuration in the [Configuration](../admin/variables.md) UI and/or cluster definition:
 
-```
+```yaml
 hopsworks:
   max_num_proj_per_user: [Maximum number of projects each user can create]
 ```
````
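
The three hunks only change bare code fences to `yaml` fences; the keys shown are taken from the hunks themselves. A filled-in example of the combined quota configuration might look like the following, where the numeric values are purely illustrative and not defaults from the docs:

```yaml
hopsworks:
  # Storage quotas (bytes); -1 disables enforcement
  featurestore_default_quota: 107374182400   # 100 GB
  hdfs_default_quota: -1                     # disabled
  # Compute quota
  yarn_default_payment_type: PREPAID         # NOLIMIT disables the quota
  yarn_default_quota: 36000                  # seconds of compute
  # Project limit per user
  max_num_proj_per_user: 5
```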

docs/setup_installation/admin/roleChaining.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -136,6 +136,6 @@ And click on _Create new role chaining_
 </figure>
 
 Project member can now create connectors using _temporary credentials_ to assume the role you configured.
-More detail about using temporary credentials can be found [here](../../user_guides/fs/data_source/creation/s3.md#temporary-credentials).
+More details about using temporary credentials can be found in the [Temporary Credentials section](../../user_guides/fs/data_source/creation/s3.md#temporary-credentials) of the S3 datasource creation guide.
 
 Project member can see the list of role they can assume by going the _Project Settings_ -> [Assuming IAM Roles](../../../user_guides/projects/iam_role/iam_role_chaining) page.
```

docs/setup_installation/aws/getting_started.md

Lines changed: 1 addition & 2 deletions
```diff
@@ -219,8 +219,7 @@ This section describes the steps required to deploy the Hopsworks stack using He
 
 - Configure Repo
 
-  To obtain access to the Hopsworks helm chart repository, please obtain
-  an evaluation/startup licence [here](https://www.hopsworks.ai/try).
+  To obtain access to the Hopsworks helm chart repository, please [obtain](https://www.hopsworks.ai/try) an evaluation/startup licence.
 
   Once you have the helm chart repository URL, replace the environment
   variable $HOPSWORKS_REPO in the following command with this URL.
```
