
Commit f8337b0

[Doc]: fix various typos in different files (#3499)
1 parent 2e4d1d5 commit f8337b0

File tree

11 files changed (+17, -17 lines)


docs/source/en/guides/cli.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -4,7 +4,7 @@ rendered properly in your Markdown viewer.
 
 # Command Line Interface (CLI)
 
-The `huggingface_hub` Python package comes with a built-in CLI called `hf`. This tool allows you to interact with the Hugging Face Hub directly from a terminal. For example, you can login to your account, create a repository, upload and download files, etc. It also comes with handy features to configure your machine or manage your cache. In this guide, we will have a look at the main features of the CLI and how to use them.
+The `huggingface_hub` Python package comes with a built-in CLI called `hf`. This tool allows you to interact with the Hugging Face Hub directly from a terminal. For example, you can log in to your account, create a repository, upload and download files, etc. It also comes with handy features to configure your machine or manage your cache. In this guide, we will have a look at the main features of the CLI and how to use them.
 
 > [!TIP]
 > This guide covers the most important features of the `hf` CLI.
@@ -174,7 +174,7 @@ hf download --help
 
 ### Download a single file
 
-To download a single file from a repo, simply provide the repo_id and filename as follow:
+To download a single file from a repo, simply provide the repo_id and filename as follows:
 
 ```bash
 >>> hf download gpt2 config.json
````

docs/source/en/guides/download.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -68,7 +68,7 @@ Note that it is used internally by [`hf_hub_download`].
 ## Download an entire repository
 
 [`snapshot_download`] downloads an entire repository at a given revision. It uses internally [`hf_hub_download`] which
-means all downloaded files are also cached on your local disk. Downloads are made concurrently to speed-up the process.
+means all downloaded files are also cached on your local disk. Downloads are made concurrently to speed up the process.
 
 To download a whole repository, just pass the `repo_id` and `repo_type`:
 
```

docs/source/en/package_reference/environment_variables.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -125,7 +125,7 @@ If `HF_HUB_OFFLINE=1` is set as environment variable and you call any method of
 
 ### HF_HUB_DISABLE_IMPLICIT_TOKEN
 
-Authentication is not mandatory for every requests to the Hub. For instance, requesting
+Authentication is not mandatory for every request to the Hub. For instance, requesting
 details about `"gpt2"` model does not require to be authenticated. However, if a user is
 [logged in](../package_reference/login), the default behavior will be to always send the token
 in order to ease user experience (never get a HTTP 401 Unauthorized) when accessing private or gated repositories. For privacy, you can
@@ -138,7 +138,7 @@ would need to explicitly pass `token=True` argument in your script.
 
 ### HF_HUB_DISABLE_PROGRESS_BARS
 
-For time consuming tasks, `huggingface_hub` displays a progress bar by default (using tqdm).
+For time-consuming tasks, `huggingface_hub` displays a progress bar by default (using tqdm).
 You can disable all the progress bars at once by setting `HF_HUB_DISABLE_PROGRESS_BARS=1`.
 
 ### HF_HUB_DISABLE_SYMLINKS_WARNING
```
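The `HF_HUB_DISABLE_PROGRESS_BARS` behavior touched by this hunk can be approximated with a small stdlib-only sketch. This is a hypothetical re-implementation for illustration; the exact set of values the library treats as "truthy" is an assumption here, not taken from the source.

```python
import os

def progress_bars_disabled() -> bool:
    # Hypothetical re-implementation for illustration only: treat common
    # "truthy" strings as a request to disable progress bars. The exact
    # values honored by huggingface_hub are an assumption here.
    value = os.environ.get("HF_HUB_DISABLE_PROGRESS_BARS", "")
    return value.upper() in {"1", "ON", "YES", "TRUE"}

os.environ["HF_HUB_DISABLE_PROGRESS_BARS"] = "1"
print(progress_bars_disabled())  # prints True: a library would now skip tqdm bars
```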

src/huggingface_hub/_inference_endpoints.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -368,7 +368,7 @@ def scale_to_zero(self) -> "InferenceEndpoint":
         """Scale Inference Endpoint to zero.
 
         An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a
-        cold start delay. This is different than pausing the Inference Endpoint with [`InferenceEndpoint.pause`], which
+        cold start delay. This is different from pausing the Inference Endpoint with [`InferenceEndpoint.pause`], which
         would require a manual resume with [`InferenceEndpoint.resume`].
 
         This is an alias for [`HfApi.scale_to_zero_inference_endpoint`]. The current object is mutated in place with the
```
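The docstring distinguishes two idle states: scaled-to-zero (resumes automatically, with a cold start) and paused (needs an explicit resume). A hedged sketch of that difference using a hypothetical toy class, not the huggingface_hub API:

```python
class EndpointSketch:
    # Hypothetical model of the two idle states described in the docstring.
    def __init__(self) -> None:
        self.status = "running"

    def scale_to_zero(self) -> None:
        self.status = "scaled-to-zero"  # not charged while idle

    def pause(self) -> None:
        self.status = "paused"  # requires an explicit resume()

    def resume(self) -> None:
        self.status = "running"

    def handle_request(self) -> str:
        if self.status == "scaled-to-zero":
            self.resume()  # automatic, with a cold-start delay
        if self.status == "paused":
            raise RuntimeError("paused endpoints must be resumed manually")
        return "ok"

ep = EndpointSketch()
ep.scale_to_zero()
print(ep.handle_request())  # prints "ok": the endpoint resumed on its own
```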

src/huggingface_hub/_login.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -123,7 +123,7 @@ def logout(token_name: Optional[str] = None) -> None:
 
     Args:
         token_name (`str`, *optional*):
-            Name of the access token to logout from. If `None`, will logout from all saved access tokens.
+            Name of the access token to logout from. If `None`, will log out from all saved access tokens.
     Raises:
         [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError):
             If the access token name is not found.
```
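The semantics this docstring describes (remove one named token, or all tokens when `token_name` is `None`, raising `ValueError` for an unknown name) can be illustrated with a stdlib-only sketch operating on a plain dict rather than the real on-disk token store:

```python
from typing import Optional

def logout_sketch(saved_tokens: dict, token_name: Optional[str] = None) -> None:
    # Hypothetical stand-in for huggingface_hub.logout(): the real function
    # removes tokens saved on disk; this sketch removes them from a dict.
    if token_name is None:
        saved_tokens.clear()  # log out from all saved access tokens
        return
    if token_name not in saved_tokens:
        raise ValueError(f"Access token '{token_name}' not found.")
    del saved_tokens[token_name]

tokens = {"default": "hf_xxx", "ci": "hf_yyy"}
logout_sketch(tokens, "ci")  # only "ci" is removed
logout_sketch(tokens)        # everything else is removed
```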

src/huggingface_hub/_webhooks_server.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -50,7 +50,7 @@ class WebhooksServer:
     It is recommended to accept [`WebhookPayload`] as the first argument of the webhook function. It is a Pydantic
     model that contains all the information about the webhook event. The data will be parsed automatically for you.
 
-    Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to setup your
+    Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
     WebhooksServer and deploy it on a Space.
 
     > [!WARNING]
@@ -231,7 +231,7 @@ def webhook_endpoint(path: Optional[str] = None) -> Callable:
     you can use [`WebhooksServer`] directly. You can register multiple webhook endpoints (to the same server) by using
     this decorator multiple times.
 
-    Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to setup your
+    Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
     server and deploy it on a Space.
 
     > [!WARNING]
```

src/huggingface_hub/constants.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -233,7 +233,7 @@ def _as_int(value: Optional[str]) -> Optional[int]:
 # Used to override the get request timeout on a system level
 HF_HUB_DOWNLOAD_TIMEOUT: int = _as_int(os.environ.get("HF_HUB_DOWNLOAD_TIMEOUT")) or DEFAULT_DOWNLOAD_TIMEOUT
 
-# Allows to add information about the requester in the user-agent (eg. partner name)
+# Allows to add information about the requester in the user-agent (e.g. partner name)
 HF_HUB_USER_AGENT_ORIGIN: Optional[str] = os.environ.get("HF_HUB_USER_AGENT_ORIGIN")
 
 # If OAuth didn't work after 2 redirects, there's likely a third-party cookie issue in the Space iframe view.
```

src/huggingface_hub/serialization/_dduf.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -202,7 +202,7 @@ def export_entries_as_dduf(
         ... # ... do some work with the pipeline
 
         >>> def as_entries(pipe: DiffusionPipeline) -> Generator[tuple[str, bytes], None, None]:
-        ...     # Build an generator that yields the entries to add to the DDUF file.
+        ...     # Build a generator that yields the entries to add to the DDUF file.
         ...     # The first element of the tuple is the filename in the DDUF archive (must use UNIX separator!). The second element is the content of the file.
         ...     # Entries will be evaluated lazily when the DDUF file is created (only 1 entry is loaded in memory at a time)
         ...     yield "vae/config.json", pipe.vae.to_json_string().encode()
```
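The entry-generator shape shown in this docstring can be reproduced in a self-contained sketch. A plain dict stands in for the pipeline here, which is an assumption for illustration:

```python
import json
from typing import Generator, Tuple

def as_entries(config: dict) -> Generator[Tuple[str, bytes], None, None]:
    # Yield (archive_path, payload) pairs lazily: only one payload needs to
    # be materialized in memory at a time. Archive paths must use "/" as the
    # separator, regardless of the host OS.
    yield "vae/config.json", json.dumps(config).encode()
    yield "vae/README.md", b"# VAE\n"

entries = list(as_entries({"latent_channels": 4}))
```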

src/huggingface_hub/serialization/_torch.py

Lines changed: 4 additions & 4 deletions
```diff
@@ -710,7 +710,7 @@ def _get_unique_id(tensor: "torch.Tensor") -> Union[int, tuple[Any, ...]]:
             pass
 
     if tensor.device.type == "xla" and is_torch_tpu_available():
-        # NOTE: xla tensors dont have storage
+        # NOTE: xla tensors don't have storage
         # use some other unique id to distinguish.
         # this is a XLA tensor, it must be created using torch_xla's
         # device. So the following import is safe:
@@ -761,7 +761,7 @@ def get_torch_storage_size(tensor: "torch.Tensor") -> int:
             attrs, _ = tensor.__tensor_flatten__()  # type: ignore[attr-defined]
             return sum(get_torch_storage_size(getattr(tensor, attr)) for attr in attrs)
     except ImportError:
-        # for torch version less than 2.1, we can fallback to original implementation
+        # for torch version less than 2.1, we can fall back to original implementation
         pass
 
     try:
@@ -808,7 +808,7 @@ def storage_ptr(tensor: "torch.Tensor") -> Union[int, tuple[Any, ...]]:
         if is_traceable_wrapper_subclass(tensor):
             return _get_unique_id(tensor)  # type: ignore
     except ImportError:
-        # for torch version less than 2.1, we can fallback to original implementation
+        # for torch version less than 2.1, we can fall back to original implementation
         pass
 
     try:
@@ -916,7 +916,7 @@ def _is_complete(tensor: "torch.Tensor") -> bool:
             attrs, _ = tensor.__tensor_flatten__()  # type: ignore[attr-defined]
             return all(_is_complete(getattr(tensor, attr)) for attr in attrs)
     except ImportError:
-        # for torch version less than 2.1, we can fallback to original implementation
+        # for torch version less than 2.1, we can fall back to original implementation
         pass
 
     return tensor.data_ptr() == storage_ptr(tensor) and tensor.nelement() * _get_dtype_size(
```
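All four hunks above follow the same pattern: try a code path that needs a newer optional dependency, and on `ImportError` fall back to the original implementation. A minimal stdlib-only sketch of that pattern; the buffer-based fallback body is an assumption for illustration, not the library's actual sizing logic:

```python
def storage_size(buffer) -> int:
    try:
        # Newer path: requires an optional dependency (torch >= 2.1 in the
        # real code). Subclass-aware sizing would return early here.
        from torch.utils._python_dispatch import is_traceable_wrapper_subclass  # noqa: F401
    except ImportError:
        # Dependency unavailable: fall back to the original implementation.
        pass
    # Original implementation: size in bytes of a plain buffer.
    return len(memoryview(buffer).cast("B"))

size = storage_size(bytearray(16))
```

The try/except around the import, rather than a version check, keeps the fast path optional without hard-pinning the dependency.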

src/huggingface_hub/utils/_telemetry.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -25,7 +25,7 @@ def send_telemetry(
     user_agent: Union[dict, str, None] = None,
 ) -> None:
     """
-    Sends telemetry that helps tracking usage of different HF libraries.
+    Sends telemetry that helps track usage of different HF libraries.
 
     This usage data helps us debug issues and prioritize new features. However, we understand that not everyone wants
     to share additional information, and we respect your privacy. You can disable telemetry collection by setting the
```
