diff --git a/translation/zh/getting-started/adapter-and-device/index.md b/translation/zh/getting-started/adapter-and-device/index.md
new file mode 100644
index 0000000..39fe84a
--- /dev/null
+++ b/translation/zh/getting-started/adapter-and-device/index.md
@@ -0,0 +1,20 @@
+适配器和设备
+==================
+
+```{translation-warning} Outdated Translation, /getting-started/adapter-and-device/index.md
+这是[原始英文页面](%original%)的**社区翻译版本**。由于原文页面在翻译后**已更新**,因此内容可能不再同步。欢迎您参与[贡献](%contribute%)!
+```
+
+**设备**(device)是使用 WebGPU 时进行交互的**核心对象**。通过该对象,我们可以**创建**所有其他资源(纹理、缓冲区、管线等),向 GPU **发送指令**,并**处理错误**。
+
+接下来两章描述了**设备初始化流程**,这也是我们对终端用户的物理设备进行**兼容性检查**的位置。
+
+目录
+--------
+
+```{toctree}
+:titlesonly:
+
+the-adapter
+the-device
+```
diff --git a/translation/zh/getting-started/adapter-and-device/the-adapter.md b/translation/zh/getting-started/adapter-and-device/the-adapter.md
new file mode 100644
index 0000000..2dbf46d
--- /dev/null
+++ b/translation/zh/getting-started/adapter-and-device/the-adapter.md
@@ -0,0 +1,464 @@
+适配器 🟢
+===========
+
+```{lit-setup}
+:tangle-root: zh/005 - 适配器
+:parent: zh/001 - Hello WebGPU
+```
+
+*结果代码:* [`step005`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step005)
+
+在我们上手**设备**(device)之前,我们需要选择一个**适配器**(adapter)。当同一个宿主系统拥有多个物理 GPU 时,系统可能会暴露**多个适配器**。此外,也可能存在代表模拟设备或虚拟设备的适配器。
+
+```{note}
+对于高端笔记本电脑,包含**两个物理 GPU** 是很常见的:一个**高性能** GPU 和一个**低功耗** GPU(后者通常集成在 CPU 芯片中)。
+```
+
+每个适配器都会提供其支持的可选**功能**和**限制**的列表。这些信息用于在**请求设备**前确定系统的整体能力。
+
+> 🤔 为何需要同时存在**适配器**和**设备**这两层抽象?
+
+其设计初衷是为了避免"在我的机器上能跑"(但在其他机器上时却不能)的兼容性问题。**适配器**用于获取用户硬件的**实际能力**,这些信息将决定应用程序在不同代码路径中的具体行为。一旦选定代码路径,系统就会根据**我们选择的能力**创建对应的**设备**。
+
+在应用的后续逻辑中,也只能使用为该设备选择的能力集。通过这种机制,可以从**根本上杜绝开发者无意间依赖特定机器专属能力的情况**。
+
+```{themed-figure} /images/the-adapter/limit-tiers_{theme}.svg
+:align: center
+在适配器/设备机制的高级用法中,我们可以配置多个限制预设并基于适配器从中进行选择。在我们的示例代码中,我们只有一个预设,如果遇到了兼容性问题就会立刻终止。
+```
+
+
+请求适配器
+----------------------
+
+适配器并不是由我们**创建**的,而是通过 `wgpuInstanceRequestAdapter` 函数**请求**获取到的。
+
+````{note}
+`webgpu.h` 提供的方法名称始终遵循同样的结构:
+
+```C
+wgpuSomethingSomeAction(something, ...)
+ ^^^^^^^^^ // 对什么样的对象...
+ ^^^^^^^^^^ // ...做什么事情
+^^^^ // (统一的前缀,用于避免命名冲突)
+```
+
+函数的第一个参数始终是一个表示这个“Something”对象的“句柄”(一个不透明指针)。
+````
+
+根据名称,我们知道了第一个参数是我们在上一章中创建的 `WGPUInstance`。那么其他的参数呢?
+
+```C++
+// webgpu.h 中定义的 wgpuInstanceRequestAdapter 函数签名
+void wgpuInstanceRequestAdapter(
+ WGPUInstance instance,
+ WGPU_NULLABLE WGPURequestAdapterOptions const * options,
+ WGPURequestAdapterCallback callback,
+ void * userdata
+);
+```
+
+```{note}
+查阅 `webgpu.h` 头文件中的函数定义总是能获得有价值的信息!
+```
+
+第二个参数是一些**配置**的集合,它与我们在 `wgpuCreateSomething` 函数中所看到的**描述符**类似,我们会在后面详细说明它。`WGPU_NULLABLE` 标记是一个空定义,仅用于告知读者(也就是我们)在使用**默认配置**时可以使用 `nullptr` 作为输入。
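+
+作为示意(并非配套代码的一部分),当我们接受默认配置时,可以直接把 `options` 传为 `nullptr`(下面用到的回调函数 `onAdapterRequestEnded` 会在后文定义):
+
+```C++
+// 示意:使用默认配置请求适配器时,options 可以直接传 nullptr
+wgpuInstanceRequestAdapter(instance, /* options */ nullptr, onAdapterRequestEnded, /* userdata */ nullptr);
+```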
+
+### 异步函数
+
+后面两个参数是共同使用的,并且它们揭示了另一个 **WebGPU 惯用设计**。实际上,`wgpuInstanceRequestAdapter` 是一个**异步**函数。它并不直接返回一个 `WGPUAdapter` 对象,而是接受一个**回调函数**,也就是在请求结束时才会被调用的函数。
+
+```{note}
+在 WebGPU API 中,凡是需要耗费时间的操作都以异步函数的形式提供,**没有任何一个 WebGPU 函数**会为了等待耗时操作完成而阻塞。这样,我们所编写的 CPU 程序永远不会被一个耗时的操作所阻塞!
+```
+
+为了更好的理解回调机制,我们来看一下 `WGPURequestAdapterCallback` 函数类型的定义:
+
+```C++
+// webgpu.h 内定义的 WGPURequestAdapterCallback 函数类型定义
+typedef void (*WGPURequestAdapterCallback)(
+ WGPURequestAdapterStatus status,
+ WGPUAdapter adapter,
+ char const * message,
+ void * userdata
+);
+```
+
+该回调函数接收的参数包括:**请求到的适配器**、用于表示请求是否成功及失败原因的**状态**信息,以及一个神秘的 `userdata` **指针**。
+
+这个 `userdata` 指针可以是任意数据,WebGPU 不会解析其内容,仅会将其从最初的 `wgpuInstanceRequestAdapter` 调用**透传**至回调函数,作为**共享上下文信息**的载体:
+
+```C++
+void onAdapterRequestEnded(
+ WGPURequestAdapterStatus status, // 请求状态
+ WGPUAdapter adapter, // 返回的适配器
+ char const* message, // 错误信息,或 nullptr
+ void* userdata // 用户自定义数据,与请求适配器时一致
+) {
+ // [...] 对适配器进行操作
+
+ // 操作用户信息
+    bool* pRequestEnded = reinterpret_cast<bool*>(userdata);
+ *pRequestEnded = true;
+}
+
+// [...]
+
+// main() 函数:
+bool requestEnded = false;
+wgpuInstanceRequestAdapter(
+ instance /* navigator.gpu 的等价对象 */,
+ &options,
+ onAdapterRequestEnded,
+ &requestEnded // 在本示例中,自定义用户信息是一个简单的布尔值指针
+);
+```
+
+我们将在下一节中看到针对上下文更高级的用法,用于在请求结束时取回适配器。
+
+````{admonition} 笔记 - JavaScript API
+:class: foldable note
+
+在 **JavaScript WebGPU API** 中,异步函数使用内置的 [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) 机制:
+
+```js
+const adapterPromise = navigator.gpu.requestAdapter(options);
+// promise 目前还没有值,它是一个我们用于连接回调的句柄
+adapterPromise.then(onAdapterRequestEnded).catch(onAdapterRequestFailed);
+
+// [...]
+
+// 它使用多个回调函数而不是使用 'status' 参数
+function onAdapterRequestEnded(adapter) {
+ // 操作 adapter
+}
+function onAdapterRequestFailed(error) {
+ // 显示错误信息
+}
+```
+
+JavaScript 后期引进了一种名为 [`async` 函数](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) 的机制,它允许**"等待"**一个异步函数执行完成,而不需要显式地声明一个回调函数。
+
+```js
+// 在一个异步函数内
+const adapter = await navigator.gpu.requestAdapter(options);
+// 操作 adapter
+```
+
+现在该机制在其他语言中也存在,比如 [Python](https://docs.python.org/3/library/asyncio-task.html)。C++20 也引入了相同机制的 [coroutines](https://en.cppreference.com/w/cpp/language/coroutines) 特性。
+
+但在本教程中,我会尽量**避免堆砌过多高级抽象**,因此我们不会使用它们(并坚持使用 C++17),但高阶的读者可能希望创建依赖 coroutines 的 WebGPU 封装。
+````
+
+### Request
+
+We can wrap the whole adapter request in the following `requestAdapterSync()` function, which I provide so that we do not spend too much time on **boilerplate** code (the important part here is that you get the idea of the **asynchronous callback** described above):
+
+```{lit} C++, Includes (append)
+#include <cassert>
+```
+
+```{lit} C++, Request adapter function
+/**
+ * Utility function to get a WebGPU adapter, so that
+ * WGPUAdapter adapter = requestAdapterSync(options);
+ * is roughly equivalent to
+ * const adapter = await navigator.gpu.requestAdapter(options);
+ */
+WGPUAdapter requestAdapterSync(WGPUInstance instance, WGPURequestAdapterOptions const * options) {
+ // A simple structure holding the local information shared with the
+ // onAdapterRequestEnded callback.
+ struct UserData {
+ WGPUAdapter adapter = nullptr;
+ bool requestEnded = false;
+ };
+ UserData userData;
+
+ // Callback called by wgpuInstanceRequestAdapter when the request returns
+ // This is a C++ lambda function, but could be any function defined in the
+ // global scope. It must be non-capturing (the brackets [] are empty) so
+ // that it behaves like a regular C function pointer, which is what
+ // wgpuInstanceRequestAdapter expects (WebGPU being a C API). The workaround
+ // is to convey what we want to capture through the pUserData pointer,
+ // provided as the last argument of wgpuInstanceRequestAdapter and received
+ // by the callback as its last argument.
+ auto onAdapterRequestEnded = [](WGPURequestAdapterStatus status, WGPUAdapter adapter, char const * message, void * pUserData) {
+        UserData& userData = *reinterpret_cast<UserData*>(pUserData);
+ if (status == WGPURequestAdapterStatus_Success) {
+ userData.adapter = adapter;
+ } else {
+ std::cout << "Could not get WebGPU adapter: " << message << std::endl;
+ }
+ userData.requestEnded = true;
+ };
+
+ // Call to the WebGPU request adapter procedure
+ wgpuInstanceRequestAdapter(
+ instance /* equivalent of navigator.gpu */,
+ options,
+ onAdapterRequestEnded,
+ (void*)&userData
+ );
+
+ // We wait until userData.requestEnded gets true
+ {{Wait for request to end}}
+
+ assert(userData.requestEnded);
+
+ return userData.adapter;
+}
+```
+
+```{lit} C++, Utility functions (hidden)
+// All utility functions are regrouped here
+{{Request adapter function}}
+```
+
+In the main function, after creating the WebGPU instance, we can get the adapter:
+
+```{lit} C++, Request adapter
+std::cout << "Requesting adapter..." << std::endl;
+
+WGPURequestAdapterOptions adapterOpts = {};
+adapterOpts.nextInChain = nullptr;
+WGPUAdapter adapter = requestAdapterSync(instance, &adapterOpts);
+
+std::cout << "Got adapter: " << adapter << std::endl;
+```
+
+#### Waiting for the request to end
+
+You may have noticed the comment above saying **we need to wait** for the request to end, i.e. for the callback to be invoked, before returning.
+
+When using the **native** API (Dawn or `wgpu-native`), it is in practice **not needed**: we know that when the `wgpuInstanceRequestAdapter` function returns, its callback has been called.
+
+However, when using **Emscripten**, we need to hand the control **back to the browser** until the adapter is ready. In JavaScript, this would be using the `await` keyword. Instead, Emscripten provides the `emscripten_sleep` function that interrupts the C++ module for a couple of milliseconds:
+
+```{lit} C++, Wait for request to end
+#ifdef __EMSCRIPTEN__
+ while (!userData.requestEnded) {
+ emscripten_sleep(100);
+ }
+#endif // __EMSCRIPTEN__
+```
+
+In order to use this, we must add a **custom link option** in `CMakeLists.txt`, in the `if (EMSCRIPTEN)` block:
+
+```{lit} CMake, Emscripten-specific options (append)
+# Enable the use of emscripten_sleep()
+target_link_options(App PRIVATE -sASYNCIFY)
+```
+
+Also do not forget to include `emscripten.h` in order to use `emscripten_sleep`:
+
+```{lit} C++, Includes (append)
+#ifdef __EMSCRIPTEN__
+# include <emscripten.h>
+#endif // __EMSCRIPTEN__
+```
+
+### Destruction
+
+Like for the WebGPU instance, we must release the adapter:
+
+```{lit} C++, Destroy adapter
+wgpuAdapterRelease(adapter);
+```
+
+````{note}
+We will no longer need to use the `instance` once we have selected our **adapter**, so we can call `wgpuInstanceRelease(instance)` right after the adapter request **instead of at the very end**. The **underlying instance** object will keep on living until the adapter gets released but we do not need to manage this.
+
+```{lit} C++, Create things (hidden)
+{{Create WebGPU instance}}
+{{Check WebGPU instance}}
+{{Request adapter}}
+// We no longer need to use the instance once we have the adapter
+{{Destroy WebGPU instance}}
+```
+````
+
+```{lit} C++, file: main.cpp (replace, hidden)
+{{Includes}}
+
+{{Utility functions in main.cpp}}
+
+int main() {
+ {{Create things}}
+
+ {{Main body}}
+
+ {{Destroy things}}
+
+ return 0;
+}
+```
+
+```{lit} C++, Utility functions in main.cpp (hidden)
+{{Utility functions}}
+```
+
+```{lit} C++, Main body (hidden)
+```
+
+```{lit} C++, Destroy things (hidden)
+{{Destroy adapter}}
+```
+
+Inspecting the adapter
+----------------------
+
+The adapter object provides **information about the underlying implementation** and hardware, and about what it is able or not to do. It advertises the following information:
+
+ - **Limits** regroup all the **maximum and minimum** values that may limit the behavior of the underlying GPU and its driver. A typical example is the maximum texture size. Supported limits are retrieved using `wgpuAdapterGetLimits`.
+ - **Features** are non-mandatory **extensions** of WebGPU that adapters may or may not support. They can be listed using `wgpuAdapterEnumerateFeatures` or tested individually with `wgpuAdapterHasFeature` (see the short example after this list).
+ - **Properties** are extra information about the adapter, like its name, vendor, etc. Properties are retrieved using `wgpuAdapterGetProperties`.
+
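+Here is a minimal sketch (not part of the accompanying code) of testing a single feature with `wgpuAdapterHasFeature`, using `WGPUFeatureName_TimestampQuery` as an example:
+
+```C++
+// Test one specific feature instead of enumerating them all
+if (wgpuAdapterHasFeature(adapter, WGPUFeatureName_TimestampQuery)) {
+    std::cout << "Timestamp queries are supported!" << std::endl;
+}
+```
+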
+```{note}
+In the accompanying code, adapter capability inspection is enclosed in the `inspectAdapter()` function.
+```
+
+```{lit} C++, Utility functions (append, hidden)
+void inspectAdapter(WGPUAdapter adapter) {
+ {{Inspect adapter}}
+}
+```
+
+```{lit} C++, Request adapter (append, hidden)
+inspectAdapter(adapter);
+```
+
+### Limits
+
+We can first list the limits that our adapter supports with `wgpuAdapterGetLimits`. This function takes as argument a `WGPUSupportedLimits` object where it writes the limits:
+
+```{lit} C++, Inspect adapter
+#ifndef __EMSCRIPTEN__
+WGPUSupportedLimits supportedLimits = {};
+supportedLimits.nextInChain = nullptr;
+
+#ifdef WEBGPU_BACKEND_DAWN
+bool success = wgpuAdapterGetLimits(adapter, &supportedLimits) == WGPUStatus_Success;
+#else
+bool success = wgpuAdapterGetLimits(adapter, &supportedLimits);
+#endif
+
+if (success) {
+ std::cout << "Adapter limits:" << std::endl;
+ std::cout << " - maxTextureDimension1D: " << supportedLimits.limits.maxTextureDimension1D << std::endl;
+ std::cout << " - maxTextureDimension2D: " << supportedLimits.limits.maxTextureDimension2D << std::endl;
+ std::cout << " - maxTextureDimension3D: " << supportedLimits.limits.maxTextureDimension3D << std::endl;
+ std::cout << " - maxTextureArrayLayers: " << supportedLimits.limits.maxTextureArrayLayers << std::endl;
+}
+#endif // NOT __EMSCRIPTEN__
+```
+
+```{admonition} Implementation divergences
+The procedure `wgpuAdapterGetLimits` returns a boolean in `wgpu-native` but a `WGPUStatus` in Dawn.
+
+Also, as of April 1st, 2024, `wgpuAdapterGetLimits` is not implemented yet on Google Chrome, hence the `#ifndef __EMSCRIPTEN__` above.
+```
+
+Here is an example of what you could see:
+
+```
+Adapter limits:
+ - maxTextureDimension1D: 32768
+ - maxTextureDimension2D: 32768
+ - maxTextureDimension3D: 16384
+ - maxTextureArrayLayers: 2048
+```
+
+This means for instance that my GPU can handle 2D textures up to 32k, 3D textures up to 16k and texture arrays up to 2k layers.
+
+```{note}
+There are **many more limits**, that we will progressively introduce in the next chapters. The **full list** is [available in the spec](https://www.w3.org/TR/webgpu/#limits), together with their **default values**, which is also expected to be the minimum for an adapter to claim support for WebGPU.
+```
+
+### Features
+
+Let us now focus on the `wgpuAdapterEnumerateFeatures` function, which enumerates the features of the WebGPU implementation, because its usage is very typical of the WebGPU native API.
+
+We call the function **twice**. The **first time**, we provide a null pointer as the return address, and as a consequence the function only returns the **number of features**, but not the features themselves.
+
+We then dynamically **allocate memory** for storing this many items of result, and call the same function a **second time**, this time with a pointer to where the function should store its result.
+
+```{lit} C++, Includes (append)
+#include <vector>
+```
+
+```{lit} C++, Inspect adapter (append)
+std::vector<WGPUFeatureName> features;
+
+// Call the function a first time with a null return address, just to get
+// the entry count.
+size_t featureCount = wgpuAdapterEnumerateFeatures(adapter, nullptr);
+
+// Allocate memory (could be a new, or a malloc() if this were a C program)
+features.resize(featureCount);
+
+// Call the function a second time, with a non-null return address
+wgpuAdapterEnumerateFeatures(adapter, features.data());
+
+std::cout << "Adapter features:" << std::endl;
+std::cout << std::hex; // Write integers as hexadecimal to ease comparison with webgpu.h literals
+for (auto f : features) {
+ std::cout << " - 0x" << f << std::endl;
+}
+std::cout << std::dec; // Restore decimal numbers
+```
+
+The features are numbers corresponding to the enum `WGPUFeatureName` defined in `webgpu.h`. We use `std::hex` to display them as hexadecimal values, because this is how they are listed in `webgpu.h`.
+
+You may notice very high numbers apparently not defined in this enum. These are **extensions** provided by our native implementation (e.g., defined in `wgpu.h` instead of `webgpu.h` in the case of `wgpu-native`).
+
+### Properties
+
+Lastly we can have a look at the adapter's properties, that contain information that we may want to display to the end user:
+
+```{lit} C++, Inspect adapter (append)
+WGPUAdapterProperties properties = {};
+properties.nextInChain = nullptr;
+wgpuAdapterGetProperties(adapter, &properties);
+std::cout << "Adapter properties:" << std::endl;
+std::cout << " - vendorID: " << properties.vendorID << std::endl;
+if (properties.vendorName) {
+ std::cout << " - vendorName: " << properties.vendorName << std::endl;
+}
+if (properties.architecture) {
+ std::cout << " - architecture: " << properties.architecture << std::endl;
+}
+std::cout << " - deviceID: " << properties.deviceID << std::endl;
+if (properties.name) {
+ std::cout << " - name: " << properties.name << std::endl;
+}
+if (properties.driverDescription) {
+ std::cout << " - driverDescription: " << properties.driverDescription << std::endl;
+}
+std::cout << std::hex;
+std::cout << " - adapterType: 0x" << properties.adapterType << std::endl;
+std::cout << " - backendType: 0x" << properties.backendType << std::endl;
+std::cout << std::dec; // Restore decimal numbers
+```
+
+Here is a sample result with my nice Titan RTX:
+
+```
+Adapter properties:
+ - vendorID: 4318
+ - vendorName: NVIDIA
+ - architecture:
+ - deviceID: 7682
+ - name: NVIDIA TITAN RTX
+ - driverDescription: 536.23
+ - adapterType: 0x0
+ - backendType: 0x5
+```
+
+Conclusion
+----------
+
+ - The very first thing to do with WebGPU is to get the **adapter**.
+ - Once we have an adapter, we can inspect its **capabilities** (limits, features) and properties.
+ - We learned to use **asynchronous functions** and **double call** enumeration functions.
+
+*Resulting code:* [`step005`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step005)
diff --git a/translation/zh/getting-started/adapter-and-device/the-device.md b/translation/zh/getting-started/adapter-and-device/the-device.md
new file mode 100644
index 0000000..6e75c21
--- /dev/null
+++ b/translation/zh/getting-started/adapter-and-device/the-device.md
@@ -0,0 +1,566 @@
+The Device 🟢
+==========
+
+```{lit-setup}
+:tangle-root: zh/010 - 设备 - Next
+:parent: zh/005 - 适配器 - Next
+```
+
+*Resulting code:* [`step010-next`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step010-next)
+
+A WebGPU **device** represents a **context** of use of the API. All the objects that we create (geometry, textures, etc.) are owned by the device.
+
+The device is requested from an **adapter** by specifying the **subset of limits and features** that we are interested in. Once the device is created, the adapter is generally no longer used: **the only capabilities that matter** to the rest of the application are the ones of the device.
+
+Device request
+--------------
+
+### Helper function
+
+Requesting the device **looks a lot like requesting the adapter**, so we will start from a similar function. The key differences lie in the **device descriptor**, which we detail below.
+
+```{lit} C++, Utility functions (append, hidden)
+{{Request device function}}
+```
+
+```{lit} C++, Request device function
+/**
+ * Utility function to get a WebGPU device, so that
+ * WGPUDevice device = requestDeviceSync(adapter, options);
+ * is roughly equivalent to
+ * const device = await adapter.requestDevice(descriptor);
+ * It is very similar to requestAdapter
+ */
+WGPUDevice requestDeviceSync(WGPUInstance instance, WGPUAdapter adapter, WGPUDeviceDescriptor const * descriptor) {
+ struct UserData {
+ WGPUDevice device = nullptr;
+ bool requestEnded = false;
+ };
+ UserData userData;
+
+ // The callback
+ auto onDeviceRequestEnded = [](
+ WGPURequestDeviceStatus status,
+ WGPUDevice device,
+ WGPUStringView message,
+ void* userdata1,
+ void* /* userdata2 */
+ ) {
+        UserData& userData = *reinterpret_cast<UserData*>(userdata1);
+ if (status == WGPURequestDeviceStatus_Success) {
+ userData.device = device;
+ } else {
+ std::cerr << "Error while requesting device: " << toStdStringView(message) << std::endl;
+ }
+ userData.requestEnded = true;
+ };
+
+ // Build the callback info
+ WGPURequestDeviceCallbackInfo callbackInfo = {
+ /* nextInChain = */ nullptr,
+ /* mode = */ WGPUCallbackMode_AllowProcessEvents,
+ /* callback = */ onDeviceRequestEnded,
+ /* userdata1 = */ &userData,
+ /* userdata2 = */ nullptr
+ };
+
+    // Call to the WebGPU request device procedure
+ wgpuAdapterRequestDevice(adapter, descriptor, callbackInfo);
+
+ // Hand the execution to the WebGPU instance until the request ended
+ wgpuInstanceProcessEvents(instance);
+ while (!userData.requestEnded) {
+ sleepForMilliseconds(200);
+ wgpuInstanceProcessEvents(instance);
+ }
+
+ return userData.device;
+}
+```
+
+In the **accompanying code** ([`step010-next`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step010-next)), I move these utility functions into `webgpu-utils.cpp`. Unfold the following note to detail all the changes that this implies.
+
+````{admonition} Note - Moving utilities to webgpu-utils.cpp
+:class: foldable note
+
+First, we declare our utility functions in a new header file `webgpu-utils.h`:
+
+```{lit} C++, file: webgpu-utils.h
+#pragma once
+
+#include <webgpu/webgpu.h>
+
+#include <string_view>
+
+/**
+ * Convert a WebGPU string view into a C++ std::string_view.
+ */
+std::string_view toStdStringView(WGPUStringView wgpuStringView);
+
+/**
+ * Convert a C++ std::string_view into a WebGPU string view.
+ */
+WGPUStringView toWgpuStringView(std::string_view stdStringView);
+
+/**
+ * Convert a C string into a WebGPU string view
+ */
+WGPUStringView toWgpuStringView(const char* cString);
+
+/**
+ * Sleep for a given number of milliseconds.
+ * This works with both native builds and emscripten, provided that -sASYNCIFY
+ * compile option is provided when building with emscripten.
+ */
+void sleepForMilliseconds(unsigned int milliseconds);
+
+/**
+ * Utility function to get a WebGPU adapter, so that
+ * WGPUAdapter adapter = requestAdapter(options);
+ * is roughly equivalent to
+ * const adapter = await navigator.gpu.requestAdapter(options);
+ */
+WGPUAdapter requestAdapterSync(WGPUInstance instance, WGPURequestAdapterOptions const * options);
+
+/**
+ * Utility function to get a WebGPU device, so that
+ * WGPUDevice device = requestDeviceSync(adapter, options);
+ * is roughly equivalent to
+ * const device = await adapter.requestDevice(descriptor);
+ * It is very similar to requestAdapter
+ */
+WGPUDevice requestDeviceSync(WGPUInstance instance, WGPUAdapter adapter, WGPUDeviceDescriptor const * descriptor);
+
+/**
+ * An example of how we can inspect the capabilities of the hardware through
+ * the adapter object.
+ */
+void inspectAdapter(WGPUAdapter adapter);
+```
+
+Then, we move the "Utility functions" block into a new `webgpu-utils.cpp` file. Do not forget to copy relevant includes:
+
+```{lit} C++, file: webgpu-utils.cpp
+#include "webgpu-utils.h"
+
+#include <iostream>
+#include <vector>
+#include <cassert>
+
+#ifdef __EMSCRIPTEN__
+# include <emscripten.h>
+#else // __EMSCRIPTEN__
+# include <thread>
+# include <chrono>
+#endif // __EMSCRIPTEN__
+
+{{Utility functions}}
+```
+
+We remove utility functions from main.cpp and include our new `webgpu-utils.h` in `main.cpp` instead:
+
+```{lit} C++, Includes (prepend)
+#include "webgpu-utils.h"
+```
+
+```{lit} C++, Utility functions in main.cpp (replace)
+```
+
+In `CMakeLists.txt`, we now have multiple source files in our executable. We list all our source files; header files are optional, but including them helps IDEs display them correctly in the project's structure:
+
+```{lit} CMake, App source files
+main.cpp
+webgpu-utils.h
+webgpu-utils.cpp
+```
+
+These go in the call to `add_executable` that defines our `App` target:
+
+```{lit} CMake, Define app target (replace)
+{{Dependency subdirectories}}
+
+add_executable(App
+ {{App source files}}
+)
+
+{{Link libraries}}
+```
+````
+
+### Usage
+
+In the main function, after getting the adapter, we can request the device:
+
+```{lit} C++, Request device
+std::cout << "Requesting device..." << std::endl;
+
+WGPUDeviceDescriptor deviceDesc = WGPU_DEVICE_DESCRIPTOR_INIT;
+{{Build device descriptor}}
+WGPUDevice device = requestDeviceSync(instance, adapter, &deviceDesc);
+
+std::cout << "Got device: " << device << std::endl;
+```
+
+```{lit} C++, Create things (append, hidden)
+{{Request device}}
+```
+
+```{tip}
+I use here the `WGPU_DEVICE_DESCRIPTOR_INIT` macro defined in `webgpu.h` to assign **default values to all fields** of `deviceDesc`. Such an initializer macro is **available for all structs** of `webgpu.h`, I recommend using them!
+```
+
+```{admonition} wgpu-native
+As of `v24.0.0.2`, wgpu-native does not support init macros yet. It should come shortly though.
+```
+
+And of course, we release the device when the program ends:
+
+```{lit} C++, Release things (prepend)
+wgpuDeviceRelease(device);
+```
+
+````{note}
+The adapter can be **released before the device**. Actually we often release it as soon as we have our device and never use it again.
+
+```{lit} C++, Create things (append)
+// We no longer need to access the adapter once we have the device
+{{Release adapter}}
+```
+
+```{lit} C++, Release things (replace)
+wgpuDeviceRelease(device);
+{{Release WebGPU instance}}
+```
+
+````
+
+```{important}
+An adapter **may only provide one device** during its lifetime. It is then "**consumed**", meaning that if you need to **create another device**, you also need to **request a new adapter** (which may correspond to the same underlying physical device).
+```
+
+Device descriptor
+-----------------
+
+A lot goes into the device descriptor, so let us have a look at its definition:
+
+```C++
+// Definition of the WGPUDeviceDescriptor struct in webgpu.h
+struct WGPUDeviceDescriptor {
+ WGPUChainedStruct * nextInChain;
+ WGPUStringView label;
+ size_t requiredFeatureCount;
+ WGPUFeatureName const * requiredFeatures;
+ WGPU_NULLABLE WGPULimits const * requiredLimits;
+ WGPUQueueDescriptor defaultQueue;
+ WGPUDeviceLostCallbackInfo deviceLostCallbackInfo;
+ WGPUUncapturedErrorCallbackInfo uncapturedErrorCallbackInfo;
+};
+```
+
+First of all, we recognize the now usual `nextInChain` pointer that starts all such structures. We **do not use any extension** for now so we can leave it to `nullptr`, which the `WGPU_DEVICE_DESCRIPTOR_INIT` macro ensured.
+
+```C++
+// This is only needed if not using WGPU_DEVICE_DESCRIPTOR_INIT
+deviceDesc.nextInChain = nullptr;
+```
+
+### Label
+
+Then comes the **label**, which is present in almost all descriptors as well. This is used to give a name to your WebGPU objects, so that **error messages get easier to read**.
+
+```{lit} C++, Build device descriptor
+// Any name works here, that's your call
+deviceDesc.label = toWgpuStringView("My Device");
+```
+
+After this, error messages will say something like *"error with device 'My Device'..."*, which is not that important for devices because you will typically only have one, but **when it comes to buffers or textures**, it is very helpful to **know which one is causing an issue**!
+
+### Features
+
+In the previous chapter, we saw that adapters can list *features* which may or may not be available. We can pick a subset of the list of **available features** and request the device to support them.
+
+This kind of **array argument** is always specified through **a pair of fields** in a C API like WebGPU: **(a)** the number of items and **(b)** the address in memory of the first item, with the remaining items expected to lie **contiguously in memory**.
+
+In our case, we do not need any feature for now, so we can leave this as an **empty array**:
+
+```C++
+deviceDesc.requiredFeatureCount = 0;
+deviceDesc.requiredFeatures = nullptr;
+```
+
+````{note}
+When we want to request some features, we will typically do it through a `std::vector` like this:
+
+```{lit} C++, Build device descriptor (append)
+std::vector<WGPUFeatureName> features;
+{{List required features}}
+deviceDesc.requiredFeatureCount = features.size();
+deviceDesc.requiredFeatures = features.data();
+// Make sure 'features' lives until the call to wgpuAdapterRequestDevice!
+```
+
+```{lit} C++, List required features (hidden)
+// No required feature for now
+```
+
+```{lit} C++, Includes (append)
+#include <vector>
+```
+````
+
+### Limits
+
+We may specify limits that we need the device to support through the `requiredLimits` field. Note that this is a pointer marked as `WGPU_NULLABLE`, because we can set it to `nullptr` to leave limits at the [default values](https://www.w3.org/TR/webgpu/#limit-default).
+
+```C++
+deviceDesc.requiredLimits = nullptr;
+```
+
+Alternatively, we can specify the address of a `WGPULimits` object:
+
+```{lit} C++, Build device descriptor (append)
+WGPULimits requiredLimits = WGPU_LIMITS_INIT;
+{{Specify required limits}}
+deviceDesc.requiredLimits = &requiredLimits;
+// Make sure that the 'requiredLimits' variable lives until the call to wgpuAdapterRequestDevice!
+```
+
+```{note}
+If you look at the actual values set by `WGPU_LIMITS_INIT` in `webgpu.h`, they seem to be different from the default values listed in the [WebGPU specification](https://www.w3.org/TR/webgpu/#limit-default) and look like `WGPU_LIMIT_U32_UNDEFINED`. These **special values** mean "use whatever the standard default is" to the WebGPU backend.
+```
+
+Let us use the default values of `requiredLimits` for now; I will try to **mention in each chapter which limits it relies on** so that we can progressively populate this.
+
+```{lit} C++, Specify required limits
+// We leave 'requiredLimits' untouched for now
+```
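+
+As a purely hypothetical illustration (not something we do in the accompanying code), raising one specific limit while keeping the standard defaults for all the others would look like this:
+
+```C++
+// Hypothetical: request larger 2D textures, leave every other limit at its default
+requiredLimits.maxTextureDimension2D = 8192;
+```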
+
+### Queue
+
+The field `defaultQueue` is a substructure of the device descriptor, which is pretty minimal for now but may become richer in future versions of WebGPU and/or through extensions:
+
+```C++
+// Definition of the WGPUQueueDescriptor struct in webgpu.h
+struct WGPUQueueDescriptor {
+ WGPUChainedStruct * nextInChain;
+ WGPUStringView label;
+};
+```
+
+The value of `deviceDesc.defaultQueue.nextInChain` was automatically initialized to `nullptr` when using `WGPU_DEVICE_DESCRIPTOR_INIT`, so all we may do is give a name to the queue (which is optional because here again we only have one queue):
+
+```{lit} C++, Build device descriptor (append)
+deviceDesc.defaultQueue.label = toWgpuStringView("The Default Queue");
+```
+
+### Device Lost Callback
+
+The last two fields of the descriptor are **callback info** structures, like we have seen with adapter and device request functions.
+
+The only thing that changes from one `WGPUSomethingCallbackInfo` to another is the type of the core `callback` field, so let us have a look at `WGPUDeviceLostCallback` and define a function that has exactly that signature:
+
+```{lit} C++, Device Lost Callback
+auto onDeviceLost = [](
+ WGPUDevice const * device,
+ WGPUDeviceLostReason reason,
+ struct WGPUStringView message,
+ void* /* userdata1 */,
+ void* /* userdata2 */
+) {
+ // All we do is display a message when the device is lost
+ std::cout
+ << "Device " << device << " was lost: reason " << reason
+ << " (" << toStdStringView(message) << ")"
+ << std::endl;
+};
+```
+
+```{note}
+I define this function using a [lambda expression](https://en.cppreference.com/w/cpp/language/lambda) (like we did in `requestDeviceSync`) in order to place it **close to the device descriptor definition**, but it could be a regular function.
+```
+
+The possible reasons for a lost device are listed in `webgpu.h`:
+
+```C++
+enum WGPUDeviceLostReason {
+ // This is probably suspicious:
+ WGPUDeviceLostReason_Unknown = 0x00000001,
+ // This is raised at the end of your program if you call
+ // wgpuInstanceProcessEvents after releasing the device:
+ WGPUDeviceLostReason_Destroyed = 0x00000002,
+ // This happens when the instance got destroyed by the web browser or the
+ // program terminates without processing events after the device was
+ // destroyed:
+ WGPUDeviceLostReason_InstanceDropped = 0x00000003,
+ // This happens when the device could not even be created:
+ WGPUDeviceLostReason_FailedCreation = 0x00000004,
+ // Special value, never used:
+ WGPUDeviceLostReason_Force32 = 0x7FFFFFFF
+};
+```
+
+We set this callback in our `deviceLostCallbackInfo`, and set the mode to `AllowProcessEvents` like we did with other callbacks:
+
+```{lit} C++, Build device descriptor (append)
+{{Device Lost Callback}}
+deviceDesc.deviceLostCallbackInfo.callback = onDeviceLost;
+deviceDesc.deviceLostCallbackInfo.mode = WGPUCallbackMode_AllowProcessEvents;
+```
+
+### Uncaptured Error Callback
+
+This last callback is very important, as it defines a function that will be invoked **whenever something goes wrong** with the API. Errors are very likely to happen, and the information messages passed to this callback are very valuable to help debug our application, so we **must not overlook it**!
+
+Here again, we define a callback that displays information about the device error:
+
+```{lit} C++, Device Error Callback
+auto onDeviceError = [](
+ WGPUDevice const * device,
+ WGPUErrorType type,
+ struct WGPUStringView message,
+ void* /* userdata1 */,
+ void* /* userdata2 */
+) {
+ std::cout
+ << "Uncaptured error in device " << device << ": type " << type
+ << " (" << toStdStringView(message) << ")"
+ << std::endl;
+};
+```
+
+And we set this callback in the descriptor's `uncapturedErrorCallbackInfo` field:
+
+```{lit} C++, Build device descriptor (append)
+{{Device Error Callback}}
+deviceDesc.uncapturedErrorCallbackInfo.callback = onDeviceError;
+```
+
+````{caution}
+This callback info **does not have a `mode` field** because contrary to other callbacks, this one is an **event handler** that may be called repeatedly (as opposed to a *"future"* handler that is invoked only once).
+
+```
+// Definition of the WGPUUncapturedErrorCallbackInfo struct in webgpu.h
+struct WGPUUncapturedErrorCallbackInfo {
+ WGPUChainedStruct * nextInChain;
+ // No 'mode' field! Callback may be invoked at any time.
+ WGPUUncapturedErrorCallback callback;
+ WGPU_NULLABLE void* userdata1;
+ WGPU_NULLABLE void* userdata2;
+};
+```
+````
+
+Inspecting the device
+---------------------
+
+All right, our **descriptor is complete**, we now have a device!
+
+Like the adapter, the device has its own set of capabilities that we can inspect at any time.
+
+```{note}
+At this point of the code -- where we just created the device -- we know its capabilities and limits because when the creation succeeded the device **corresponds to what we requested**. Being able to inspect the device is useful later on, or **when writing a library** that receives a `WGPUDevice` object that was created somewhere else.
+```
+
+```{lit} C++, Utility functions (append)
+// We create a utility function to inspect the device:
+void inspectDevice(WGPUDevice device) {
+
+ WGPUSupportedFeatures features = WGPU_SUPPORTED_FEATURES_INIT;
+ wgpuDeviceGetFeatures(device, &features);
+ std::cout << "Device features:" << std::endl;
+ std::cout << std::hex;
+ for (size_t i = 0; i < features.featureCount; ++i) {
+ std::cout << " - 0x" << features.features[i] << std::endl;
+ }
+ std::cout << std::dec;
+ wgpuSupportedFeaturesFreeMembers(features);
+
+ WGPULimits limits = WGPU_LIMITS_INIT;
+ bool success = wgpuDeviceGetLimits(device, &limits) == WGPUStatus_Success;
+
+ if (success) {
+ std::cout << "Device limits:" << std::endl;
+ std::cout << " - maxTextureDimension1D: " << limits.maxTextureDimension1D << std::endl;
+ std::cout << " - maxTextureDimension2D: " << limits.maxTextureDimension2D << std::endl;
+ std::cout << " - maxTextureDimension3D: " << limits.maxTextureDimension3D << std::endl;
+ std::cout << " - maxTextureArrayLayers: " << limits.maxTextureArrayLayers << std::endl;
+ {{Extra device limits}}
+ }
+}
+```
+
+```{lit} C++, Extra device limits (hidden)
+std::cout << " - maxBindGroups: " << limits.maxBindGroups << std::endl;
+std::cout << " - maxBindGroupsPlusVertexBuffers: " << limits.maxBindGroupsPlusVertexBuffers << std::endl;
+std::cout << " - maxBindingsPerBindGroup: " << limits.maxBindingsPerBindGroup << std::endl;
+std::cout << " - maxDynamicUniformBuffersPerPipelineLayout: " << limits.maxDynamicUniformBuffersPerPipelineLayout << std::endl;
+std::cout << " - maxDynamicStorageBuffersPerPipelineLayout: " << limits.maxDynamicStorageBuffersPerPipelineLayout << std::endl;
+std::cout << " - maxSampledTexturesPerShaderStage: " << limits.maxSampledTexturesPerShaderStage << std::endl;
+std::cout << " - maxSamplersPerShaderStage: " << limits.maxSamplersPerShaderStage << std::endl;
+std::cout << " - maxStorageBuffersPerShaderStage: " << limits.maxStorageBuffersPerShaderStage << std::endl;
+std::cout << " - maxStorageTexturesPerShaderStage: " << limits.maxStorageTexturesPerShaderStage << std::endl;
+std::cout << " - maxUniformBuffersPerShaderStage: " << limits.maxUniformBuffersPerShaderStage << std::endl;
+std::cout << " - maxUniformBufferBindingSize: " << limits.maxUniformBufferBindingSize << std::endl;
+std::cout << " - maxStorageBufferBindingSize: " << limits.maxStorageBufferBindingSize << std::endl;
+std::cout << " - minUniformBufferOffsetAlignment: " << limits.minUniformBufferOffsetAlignment << std::endl;
+std::cout << " - minStorageBufferOffsetAlignment: " << limits.minStorageBufferOffsetAlignment << std::endl;
+std::cout << " - maxVertexBuffers: " << limits.maxVertexBuffers << std::endl;
+std::cout << " - maxBufferSize: " << limits.maxBufferSize << std::endl;
+std::cout << " - maxVertexAttributes: " << limits.maxVertexAttributes << std::endl;
+std::cout << " - maxVertexBufferArrayStride: " << limits.maxVertexBufferArrayStride << std::endl;
+std::cout << " - maxInterStageShaderVariables: " << limits.maxInterStageShaderVariables << std::endl;
+std::cout << " - maxColorAttachments: " << limits.maxColorAttachments << std::endl;
+std::cout << " - maxColorAttachmentBytesPerSample: " << limits.maxColorAttachmentBytesPerSample << std::endl;
+std::cout << " - maxComputeWorkgroupStorageSize: " << limits.maxComputeWorkgroupStorageSize << std::endl;
+std::cout << " - maxComputeInvocationsPerWorkgroup: " << limits.maxComputeInvocationsPerWorkgroup << std::endl;
+std::cout << " - maxComputeWorkgroupSizeX: " << limits.maxComputeWorkgroupSizeX << std::endl;
+std::cout << " - maxComputeWorkgroupSizeY: " << limits.maxComputeWorkgroupSizeY << std::endl;
+std::cout << " - maxComputeWorkgroupSizeZ: " << limits.maxComputeWorkgroupSizeZ << std::endl;
+std::cout << " - maxComputeWorkgroupsPerDimension: " << limits.maxComputeWorkgroupsPerDimension << std::endl;
+std::cout << " - maxStorageBuffersInVertexStage: " << limits.maxStorageBuffersInVertexStage << std::endl;
+std::cout << " - maxStorageTexturesInVertexStage: " << limits.maxStorageTexturesInVertexStage << std::endl;
+std::cout << " - maxStorageBuffersInFragmentStage: " << limits.maxStorageBuffersInFragmentStage << std::endl;
+std::cout << " - maxStorageTexturesInFragmentStage: " << limits.maxStorageTexturesInFragmentStage << std::endl;
+```
+
+If you define this function in `webgpu-utils.cpp`, do not forget to also declare it in `webgpu-utils.h`:
+
+```{lit} C++, file: webgpu-utils.h (append)
+/**
+ * Display information about a device
+ */
+void inspectDevice(WGPUDevice device);
+```
+
+And we call this after creating the device:
+
+```{lit} C++, Create things (append)
+inspectDevice(device);
+```
+
+We can see that by default the device limits are not the same as what the adapter supports. Setting `deviceDesc.requiredLimits` to `nullptr` or using default limits from `WGPU_LIMITS_INIT` corresponds to asking for the minimal limits:
+
+```
+Device limits:
+ - maxTextureDimension1D: 8192
+ - maxTextureDimension2D: 8192
+ - maxTextureDimension3D: 2048
+ - maxTextureArrayLayers: 256
+ - ...
+```
+
+```{note}
+One can also **retrieve the adapter** that was used to request the device using `wgpuDeviceGetAdapter`.
+```
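+
+As a small sketch (assuming the usual `webgpu.h` ownership convention that returned objects come with their own reference, which we must release):
+
+```C++
+// Retrieve the adapter behind an existing device, inspect it, then release our reference
+WGPUAdapter parentAdapter = wgpuDeviceGetAdapter(device);
+inspectAdapter(parentAdapter);
+wgpuAdapterRelease(parentAdapter);
+```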
+
+Conclusion
+----------
+
+ - We now have our **device**, from which we can create all other WebGPU objects.
+ - **Important:** Once the device is created, the adapter should in general no longer be used. The only capabilities that matter to the application are the ones of the device.
+ - Default limits are minimal limits, rather than what the adapter supports. This helps ensure consistency across devices.
+ - The **uncaptured error callback** is where all of our issues will be reported; it is important to set it up.
+
+We are now ready to **send instructions and data** to the device through the **command queue**!
+
+*Resulting code:* [`step010-next`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step010-next)
diff --git a/translation/zh/getting-started/hello-webgpu.md b/translation/zh/getting-started/hello-webgpu.md
new file mode 100644
index 0000000..a305721
--- /dev/null
+++ b/translation/zh/getting-started/hello-webgpu.md
@@ -0,0 +1,344 @@
+Hello WebGPU 🟢
+============
+
+```{translation-warning} Outdated Translation, /getting-started/hello-webgpu.md
+这是[原始英文页面](%original%)的**社区翻译版本**。由于原文页面在翻译后**已更新**,因此内容可能不再同步。欢迎您参与[贡献](%contribute%)!
+```
+
+```{lit-setup}
+:tangle-root: zh/001 - Hello WebGPU
+:parent: zh/000 - 配置项目
+:fetch-files: ../../data/webgpu-distribution-v0.2.0-beta2.zip
+```
+
+*结果代码:* [`step001`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step001)
+
+WebGPU 是一个*渲染硬件接口*(RHI),这代表它是一个针对多种潜在的图形硬件和操作系统提供**通用接口**的编程库。
+
+对于你编写的 C++ 程序而言,WebGPU 仅仅是一个 **独立的头文件**,它的内部列出了所有可用的方法与数据结构:[`webgpu.h`](https://github.com/webgpu-native/webgpu-headers/blob/main/webgpu.h)。
+
+然而,在构建程序时,你的编译器必须要在最后(在最终的*链接*步骤)知道**去哪里寻找**这些函数的具体实现。与本地 API 相反,驱动并不提供这些实现,因此我们必须要显式地提供它。
+
+```{figure} /images/rhi-vs-opengl.png
+:align: center
+像 WebGPU 这样的渲染硬件接口(RHI)**并不是由驱动直接提供的**:我们需要将它链接到一个在(操作)系统之上实现了这些 API 的库。
+```
+
+安装 WebGPU
+-----------------
+
+目前,存在两套针对 WebGPU 本地头文件的实现:
+
+ - [wgpu-native](https://github.com/gfx-rs/wgpu-native),为 Firefox 开发的 Rust 库 [`wgpu`](https://github.com/gfx-rs/wgpu) 提供了一个原生接口。
+ - Google 的 [Dawn](https://dawn.googlesource.com/dawn),是为 Chrome 开发的库。
+
+```{figure} /images/different-backend.png
+:align: center
+有(至少)两个 WebGPU 的实现,它们分别针对两大主流网页引擎开发。
+```
+
+目前这两种实现仍存在**一些差异**,但随着 WebGPU 规范趋于稳定,这些差异将会消失。本指南的编写力求**同时兼容这两种实现**。
+
+为简化在 CMake 项目中的集成,我提供了一个 [WebGPU 发行版](https://github.com/eliemichel/WebGPU-distribution)仓库,您可以从以下选项中选择其一:
+
+`````{admonition} 选项太多了? (点击我)
+:class: foldable quickstart
+
+*您是否更看重快速构建而非详细的错误信息?*
+
+````{admonition} 当然,我想要快速构建,并且不希望在首次构建时连接网络
+:class: foldable yes
+
+选择[**选项 A**](#选项-a-轻巧的-wgpu-native) (wgpu-native)!
+````
+
+````{admonition} 不,我需要更详细的错误信息。
+:class: foldable no
+
+选择[**选项 B**](#选项-b-舒适便捷的-dawn) (Dawn)!
+````
+
+```{admonition} 我不想做选择
+:class: foldable warning
+
+选择[**选项 C**](#选项-c-将二者灵活兼得),它允许你在不同的实现后端之间自由切换!
+```
+
+`````
+
+### 选项 A: 轻巧的 wgpu-native
+
+由于 `wgpu-native` 是用 Rust 编写的,我们无法轻松地从头开始构建它,因此发行版中包含了预编译的库文件:
+
+```{important}
+**WIP:** 尽量使用“全平台版本”而不是针对特定平台的版本,由于我还未为后者完成自动化构建,所以它们的版本通常会落后。
+```
+
+ - [wgpu-native 全平台版本](https://github.com/eliemichel/WebGPU-distribution/archive/refs/tags/wgpu-v0.19.4.1.zip) (由于针对了所有平台,所以体积会稍重)
+ - [wgpu-native Linux 版](#)
+ - [wgpu-native Windows 版](#)
+ - [wgpu-native MacOS 版](#)
+
+```{note}
+预编译的二进制文件是由 `wgpu-native` 项目直接提供的,因此你可以完全信任它。唯一的不同之处在于我在发行版中增加了一个 `CMakeLists.txt` 文件,使集成更方便。
+```
+
+**优点**
+ - 这是可构建的最轻量选择。
+
+**缺点**
+ - 你并不是从源代码开始构建。
+ - `wgpu-native` 并不能够给出像 Dawn 一样多的调试信息。
+
+### 选项 B: 舒适便捷的 Dawn
+
+相对而言,Dawn 提供了更完善的错误信息。同时,由于 Dawn 是用 C++ 编写的,所以我们可以从头构建它。在出现崩溃时我们也能够更深入地检查堆栈追踪信息:
+
+ - [Dawn 全平台版本](https://github.com/eliemichel/WebGPU-distribution/archive/refs/tags/dawn-6536.zip)
+
+```{note}
+我提供的基于 Dawn 的发行版本是从它的原始仓库直接获取源代码,但尽可能采取浅克隆的方式,并预设了一些选项以避免构建我们不需要使用的组件。
+```
+
+**优点**
+
+ - 由于 Dawn 提供了更详细的错误信息,它在开发时会显著地更加便捷。
+ - 相对于 `wgpu-native`,它在接口实现的进度上会更领先(不过 `wgpu-native` 最终也会赶上)。
+
+**缺点**
+ - 虽然我尽量减少了外部依赖,但你仍需要[安装 Python](https://www.python.org/) 和 [git](https://git-scm.com/download)。
+ - 发行版中会下载 Dawn 的源代码和它的依赖项,因此在初次使用时你需要连接到互联网。
+ - 初次构建会显著耗费更长的时间,并且占用更多的硬盘空间。
+
+````{note}
+在 Linux 上使用时,请参考 [Dawn 构建文档](https://dawn.googlesource.com/dawn/+/HEAD/docs/building.md)中需要安装的包。截至 2024 年 4 月 7 日,(在 Ubuntu 中)需要安装的包如下:
+
+```bash
+sudo apt-get install libxrandr-dev libxinerama-dev libxcursor-dev mesa-common-dev libx11-xcb-dev pkg-config nodejs npm
+```
+````
+
+### 选项 C: 将二者灵活兼得
+
+在这个选项中,我们只会在项目中包含一组 CMake 文件。根据我们的配置,它会自动下载 `wgpu-native` 或 Dawn。
+
+```
+cmake -B build -DWEBGPU_BACKEND=WGPU
+# 或者
+cmake -B build -DWEBGPU_BACKEND=DAWN
+```
+
+```{note}
+**配套代码**使用了该选项。
+```
+
+它在我的发行版仓库的 `main` 分支上提供:
+
+ - [WebGPU 任意发行版](https://github.com/eliemichel/WebGPU-distribution/archive/refs/tags/main-v0.2.0-beta1.zip)
+
+```{tip}
+这个仓库的 README 文件说明了如何使用 `FetchContent_Declare` 将它添加到你的项目中。但这样做会使你用到比本书编写时更新的 Dawn 或 wgpu-native 版本,因此本书中的示例可能无法编译。请参考下方说明以下载本书对应的版本。
+```
+
+**优点**
+ - 你可以为项目同时拥有两种构建,一种使用 Dawn,另外一种使用 `wgpu-native`。
+
+**缺点**
+ - 这是一个 `元发行版`,在你配置构建(也就是第一次使用 `cmake` 指令)时会下载对应的版本,所以你需要在这时拥有**网络连接**并安装好 **git**。
+
+当然,根据你的选择,*Option A* 和 *Option B* 的优缺点也都会一同存在。
+
+### 集成
+
+不论你选择哪种发行版本,集成方式是相同的:
+
+ 1. 下载你所选择选项的压缩包。
+ 2. 把它解压到项目的根目录,解压后应当有一个 `webgpu/` 目录,它包含一个 `CMakeLists.txt` 和一些其他文件(.dll 或者 .so)。
+ 3. 在你的 `CMakeLists.txt` 中添加 `add_subdirectory(webgpu)`。
+
+```{lit} CMake, 依赖子目录 (insert in {{定义应用构建目标}} before "add_executable")
+# 包含 webgpu 目录, 以定义 'webgpu' 目标
+add_subdirectory(webgpu)
+```
+
+```{important}
+这里的“webgpu”指的是 webgpu 所在的目录,因此它应该包含一个 `webgpu/CMakeLists.txt` 文件。否则它代表了 `webgpu.zip` 并没有解压到正确的目录,你可以选择移动该目录,或修改 `add_subdirectory` 指令中的路径来解决该问题。
+```
+
+ 4. 增加 `webgpu` 构建目标,并(在 `add_executable(App main.cpp)` 后)使用 `target_link_libraries` 指令将它设置为我们的应用的依赖。
+
+```{lit} CMake, 链接库 (insert in {{定义应用构建目标}} after "add_executable")
+# 向我们的 App 应用添加 `webgpu` 目标依赖
+target_link_libraries(App PRIVATE webgpu)
+```
+
+```{tip}
+这次,“webgpu” 指的是 `webgpu/CMakeLists.txt` 中的构建目标,它由 `add_library(webgpu ...)` 定义,它与目录名称并不相关。
+```
+
+在使用预编译二进制文件时,需额外增加一个步骤:在 `CMakeLists.txt` 文件的末尾调用函数 `target_copy_webgpu_binaries(App)`。此操作可确保运行时依赖的 .dll/.so 文件被复制到生成的可执行文件同级目录下。请注意,在发行你的应用程序时,必须同时发行此动态库文件。
+
+```{lit} CMake, 链接库 (append)
+# 应用二进制程序在运行时需要找到 wgpu.dll 或 libwgpu.so,因此我们将它自动复制到二
+# 进制文件旁(它通常被称作 WGPU_RUNTIME_LIB)。
+target_copy_webgpu_binaries(App)
+```
+
+```{note}
+在使用 Dawn 时并不存在需要复制的预编译二进制文件,但我依然定义了 `target_copy_webgpu_binaries` 函数(它什么都不做),以便你针对两种发行版本使用完全相同的 CMakeLists。
+```
+
+测试安装
+------------------------
+
+要测试安装,我们只需创建 WebGPU **实例**,也就是 JavaScript 环境中 `navigator.gpu` 的等价物,然后检查并销毁它。
+
+```{important}
+确保在使用任何 WebGPU 函数或类型前引入 `<webgpu/webgpu.h>`!
+```
+
+```{lit} C++, 依赖引入
+// 引入依赖
+#include <webgpu/webgpu.h>
+#include <iostream>
+```
+
+```{lit} C++, file: main.cpp
+{{依赖引入}}
+
+int main (int, char**) {
+ {{新建 WebGPU 实例}}
+
+ {{检查 WebGPU 实例}}
+
+ {{销毁 WebGPU 实例}}
+
+ return 0;
+}
+```
+
+### 描述符和实例创建
+
+实例通过 `wgpuCreateInstance` 函数创建。像所有用于**创建**实体的 WebGPU 函数一样,它以**描述符**作为参数,我们可以通过该描述来配置此对象的初始选项。
+
+```{lit} C++, 新建 WebGPU 实例
+// 我们创建一个描述符
+WGPUInstanceDescriptor desc = {};
+desc.nextInChain = nullptr;
+
+// 我们使用这个描述符创建一个实例
+WGPUInstance instance = wgpuCreateInstance(&desc);
+```
+
+```{note}
+描述符是一种将**多个函数参数打包**在一起的方式,因为有些函数确实需要非常多的参数。它也可以用于编写负责填充这些参数的工具函数,以简化程序结构。
+```
+
+我们在 `WGPUInstanceDescriptor` 的结构中遇到了另一个 WebGPU 的**惯用设计**:描述符的首个字段总是一个名为 `nextInChain` 的指针。这是该 API 用于支持未来添加自定义扩展的通用机制。大部分情况下,我们将它设置为 `nullptr`。
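+
+下面用一个**纯属虚构**的扩展结构体来演示这种链式机制的一般形态(`WGPUSomeExtension` 和 `WGPUSType_SomeExtension` 都是示意用的假名,并不存在于 `webgpu.h` 中):
+
+```C++
+// 示意:扩展结构体以 WGPUChainedStruct 作为第一个成员,
+// 通过 sType 表明自己的类型,再挂接到描述符的 nextInChain 上
+WGPUSomeExtension extension = {};
+extension.chain.next = nullptr;
+extension.chain.sType = WGPUSType_SomeExtension;
+
+WGPUInstanceDescriptor desc = {};
+desc.nextInChain = &extension.chain;
+```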
+
+### 检查
+
+通过 `wgpuCreateSomething` 函数创建的 WebGPU 实体在技术上**仅是一个指针**。它是一个不透明的句柄(opaque handle),用于标识后端实际存在的对象——我们永远无需直接访问该底层对象。
+
+要验证对象是否有效,只需将其与 `nullptr` 比较,或使用布尔运算:
+
+```{lit} C++, 检查 WebGPU 实例
+// 我们检查实例是否真正被创建
+if (!instance) {
+ std::cerr << "Could not initialize WebGPU!" << std::endl;
+ return 1;
+}
+
+// 打印对象 (WGPUInstance 是一个普通的指针,它可以被随意复制,而无需
+// 关心它的体积)
+std::cout << "WGPU instance: " << instance << std::endl;
+```
+
+程序运行时应当输出类似 `WGPU instance: 000001C0D2637720` 的内容。
+
+### 销毁与生命周期管理
+
+所有通过 WebGPU 可**创建**的实体最终均需要被**释放**。创建对象的方法名称总是 `wgpuCreateSomething`,同时释放它的函数名字是 `wgpuSomethingRelease`。
+
+需要注意的是,每个 WebGPU 对象内部都维护着一个引用计数器。只有当对象不再被代码中的其他部分引用时(即引用计数降为 0),释放该对象才会真正回收其关联的内存资源:
+
+```C++
+WGPUSomething sth = wgpuCreateSomething(/* 描述符 */);
+
+// 这代表 “将对象 sth 的引用计数增加 1”
+wgpuSomethingReference(sth);
+// 现在引用数量为 2 (在创建时它被设置为 1)
+
+// 这代表 “将对象 sth 的引用计数减少 1,如果降到了 0 就销毁对象”
+wgpuSomethingRelease(sth);
+// 现在引用数量为 1,对象依旧可被使用
+
+// 再次释放
+wgpuSomethingRelease(sth);
+// 现在引用计数已经降到了 0,该对象会立刻销毁并不能再被使用
+```
+
+特别地,我们需要释放全局的 WebGPU 实例:
+
+```{lit} C++, 销毁 WebGPU 实例
+// 我们清除 WebGPU 实例
+wgpuInstanceRelease(instance);
+```
+
+### 针对特定实现的行为
+
+为了处理不同实现间的轻微差别,我提供的发行版本中还定义了如下预处理宏:
+
+```C++
+// 如果使用 Dawn
+#define WEBGPU_BACKEND_DAWN
+
+// 如果使用 wgpu-native
+#define WEBGPU_BACKEND_WGPU
+
+// 如果使用 emscripten
+#define WEBGPU_BACKEND_EMSCRIPTEN
+```
+
+### 为 Web 构建
+
+上方列出的 WebGPU 发行版本已经与 [Emscripten](https://emscripten.org/docs/getting_started/downloads.html) 兼容。如果在为 Web 构建应用时遇到任何问题,你可以参考[专门的附录](../appendices/building-for-the-web.md)。
+
+因为我们未来会不时添加一些专为 web 构建定制的选项,我们先在 CMakeLists.txt 文件末尾新增一个专门的配置区块。
+
+```{lit} CMake, file: CMakeLists.txt (append)
+# Emscripten 的特殊配置
+if (EMSCRIPTEN)
+ {{Emscripten 的特殊配置}}
+endif()
+```
+
+现在我们仅修改输出文件的后缀名,使其生成一个完整的 HTML 网页(而不是一个单独的 WebAssembly 模块或 JavaScript 库)。
+
+```{lit} CMake, Emscripten 的特殊配置
+# 输出一个完整的网页,而不是一个简单的 WebAssembly 模块
+set_target_properties(App PROPERTIES SUFFIX ".html")
+```
+
+由于某种原因,在使用 Emscripten 时实例描述符**必须为空**(此时它表示“使用缺省值”),所以我们现在就可以用上 `WEBGPU_BACKEND_EMSCRIPTEN` 宏:
+
+```{lit} C++, 新建 WebGPU 实例 (replace)
+// 我们创建一个描述符
+WGPUInstanceDescriptor desc = {};
+desc.nextInChain = nullptr;
+
+// 我们使用这个描述符创建一个实例
+#ifdef WEBGPU_BACKEND_EMSCRIPTEN
+WGPUInstance instance = wgpuCreateInstance(nullptr);
+#else // WEBGPU_BACKEND_EMSCRIPTEN
+WGPUInstance instance = wgpuCreateInstance(&desc);
+#endif // WEBGPU_BACKEND_EMSCRIPTEN
+```
+
+总结
+----------
+
+在本章中,我们配置了 WebGPU 并了解到有**多个渲染后端**可用。同时,我们也掌握了 WebGPU API 中贯穿始终的核心编程范式——对象创建与销毁机制!
+
+*结果代码:* [`step001`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step001)
diff --git a/translation/zh/getting-started/index.md b/translation/zh/getting-started/index.md
new file mode 100644
index 0000000..a2a7216
--- /dev/null
+++ b/translation/zh/getting-started/index.md
@@ -0,0 +1,22 @@
+起步
+===============
+
+```{translation-warning} Outdated Translation, /getting-started/index.md
+这是[原始英文页面](%original%)的**社区翻译版本**。由于原文页面在翻译后**已更新**,因此内容可能不再同步。欢迎您参与[贡献](%contribute%)!
+```
+
+目录
+--------
+
+```{toctree}
+:titlesonly:
+
+project-setup
+hello-webgpu
+adapter-and-device/index
+the-command-queue
+opening-a-window
+first-color
+
+cpp-idioms
+```
diff --git a/translation/zh/getting-started/project-setup.md b/translation/zh/getting-started/project-setup.md
new file mode 100644
index 0000000..9af2a74
--- /dev/null
+++ b/translation/zh/getting-started/project-setup.md
@@ -0,0 +1,161 @@
+配置项目 🟢
+=============
+
+```{translation-warning} Outdated Translation, /getting-started/project-setup.md
+这是[原始英文页面](%original%)的**社区翻译版本**。由于原文页面在翻译后**已更新**,因此内容可能不再同步。欢迎您参与[贡献](%contribute%)!
+```
+
+```{lit-setup}
+:tangle-root: zh/000 - 配置项目
+```
+
+*结果代码:* [`step000`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step000)
+
+我们会在配套代码中使用 [CMake](https://cmake.org/) 来管理代码编译。这是跨平台构建中很标准的处理方式,同时我们遵守 [modern cmake](https://cliutils.gitlab.io/modern-cmake/) 风格编写这些配置。
+
+必要条件
+------------
+
+我们只需要 CMake 和一个 C++ 编译器,下方提供了不同系统环境下的操作指南。
+
+```{hint}
+在完成安装后,你可以使用 `which cmake` (linux/macOS) 或 `where cmake` (Windows) 命令来查看命令行是否可以找到 `cmake` 命令的完整路径。在没有的情况下,请确保你的 `PATH` 环境变量包含 CMake 安装的目录。
+```
+
+### Linux
+
+如果你使用的是 Ubuntu/Debian 发行版,安装以下包:
+
+```bash
+sudo apt install cmake build-essential
+```
+
+其他发行版也有类似的包,确保`cmake`, `make` 和 `g++` 命令可运行即可。
+
+### Windows
+
+从 [下载页面](https://cmake.org/download/) 下载并安装 CMake。你可以使用 [Visual Studio](https://visualstudio.microsoft.com/downloads/) 或 [MinGW](https://www.mingw-w64.org/) 作为编译器工具包。
+
+### MacOS
+
+使用 `brew install cmake` 安装 CMake,然后使用 [XCode](https://developer.apple.com/xcode/) 构建项目。
+
+最小项目
+---------------
+
+最小的项目包含一个 `main.cpp` 源文件和一个 `CMakeLists.txt` 构建文件。
+
+让我们在 `main.cpp` 中从经典的 hello world 开始:
+
+```{lit} C++, file: main.cpp
+#include <iostream>
+
+int main (int, char**) {
+ std::cout << "Hello, world!" << std::endl;
+ return 0;
+}
+```
+
+在 `CMakeLists.txt` 中,我们指定我们想要创建一个类型为 *executable* 的 *target*(构建目标),名为 "App"(这将是可执行文件的名称),其源文件为 `main.cpp`:
+
+```{lit} CMake, 定义应用构建目标
+add_executable(App main.cpp)
+```
+
+CMake 还期望在 `CMakeLists.txt` 的开头知道这个配置文件是为哪个版本的 CMake 编写的(<最低支持的版本>...<你使用的版本>),同时 CMake 也希望知道一些关于项目的信息:
+
+```{lit} CMake, file: CMakeLists.txt
+cmake_minimum_required(VERSION 3.0...3.25)
+project(
+ LearnWebGPU # 项目名称,如果你使用 Visual Studio,它也将是解决方案的名称
+ VERSION 0.1.0 # 任意的版本号
+ LANGUAGES CXX C # 项目使用的编程语言
+)
+
+{{定义应用构建目标}}
+
+{{推荐的额外配置}}
+```
+
+构建
+--------
+
+我们现在可以开始构建我们的最小项目了。打开一个终端并跳转到包含 `CMakeLists.txt` 和 `main.cpp` 文件的目录:
+
+```bash
+cd your/project/directory
+```
+
+```{hint}
+在 Windows 环境使用资源管理器打开你的项目目录时,按下 Ctrl+L,然后输入 `cmd` 并回车,就可以打开一个当前目录的终端窗口。
+```
+
+现在让我们要求 CMake 为我们的项目创建构建文件。我们通过使用 `-B build` 选项将由源代码生成的构建文件放在名为 *build/* 的目录中。我们强烈推荐这样的操作方式,它便于我们轻松地区分自动生成的文件和我们手动编写的文件(也就是源代码):
+
+```bash
+cmake . -B build
+```
+
+这个指令会根据你的系统创建 `make`,Visual Studio 或 XCode 的构建文件(你可以使用 `-G` 选项来强制使用特定的构建系统,更多信息请参阅 `cmake -h`)。要最终构建程序并生成 `App`(或 `App.exe`)可执行文件,你可以打开生成的 Visual Studio 或 XCode 解决方案,或者在终端中输入:
+
+```bash
+cmake --build build
+```
+
+然后运行生成的程序:
+
+```bash
+build/App # linux/macOS
+build\Debug\App.exe # Windows
+```
+
+推荐的额外配置
+------------------
+
+在调用 `add_executable` 之后的位置,我们可以通过调用 `set_target_properties` 命令来设置 `App` 目标的一些属性。
+
+```{lit} CMake, 推荐的额外配置
+set_target_properties(App PROPERTIES
+ CXX_STANDARD 17
+ CXX_STANDARD_REQUIRED ON
+ CXX_EXTENSIONS OFF
+ COMPILE_WARNING_AS_ERROR ON
+)
+```
+
+我们将 `CXX_STANDARD` 属性设置为 17 表示我们需要 C++17(它允许我们使用一些额外的语法,但不是强制性的)。`CXX_STANDARD_REQUIRED` 属性确保在 C++17 不支持时配置将失败。
+
+我们将 `CXX_EXTENSIONS` 属性设置为 `OFF` 以禁用编译器特定的扩展(例如,在 GCC 上,这将使 CMake 使用 `-std=c++17` 而不是 `-std=gnu++17` 来设置编译标志列表)。
+
+作为一个良好的实践,我们将 `COMPILE_WARNING_AS_ERROR` 打开,以确保没有警告被忽略。当我们学习一个新的语言/库时,警告尤其重要。因此为了确保有尽可能多的警告,我们添加下面这些编译选项:
+
+```{lit} CMake, 推荐的额外配置 (append)
+if (MSVC)
+ target_compile_options(App PRIVATE /W4)
+else()
+ target_compile_options(App PRIVATE -Wall -Wextra -pedantic)
+endif()
+```
+
+```{note}
+在附带的代码中,我在 `utils.cmake` 中定义了一个名为 `target_treat_all_warnings_as_errors()` 的函数,并在 `CMakeLists.txt` 的开头包含了它。
+```
+
+在 macOS 上,CMake 可以生成 XCode 项目文件,但是默认情况下不会创建 *schemes*。XCode 可以为每个 CMake 目标生成一个 scheme,通常我们只想要主目标的方案。因此我们设置 `XCODE_GENERATE_SCHEME` 属性。同时我们启用帧捕获以进行 GPU 调试。
+
+```{lit} CMake, 推荐的额外配置 (append)
+if (XCODE)
+ set_target_properties(App PROPERTIES
+ XCODE_GENERATE_SCHEME ON
+ XCODE_SCHEME_ENABLE_GPU_FRAME_CAPTURE_MODE "Metal"
+ )
+endif()
+```
+
+总结
+----------
+
+现在我们有了一个不错的**基本项目配置**,我们将在接下来的章节中以它为基础进行构建。在接下来的章节中,我们将看到如何[将WebGPU集成](hello-webgpu.md)到我们的项目中,如何[初始化它](adapter-and-device/index.md),以及如何[打开一个窗口](opening-a-window.md)以进行绘制。
+
+*结果代码:* [`step000`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step000)
diff --git a/translation/zh/index.md b/translation/zh/index.md
index 6697b35..3a606df 100644
--- a/translation/zh/index.md
+++ b/translation/zh/index.md
@@ -2,7 +2,7 @@ Learn WebGPU
============
```{translation-warning} Outdated Translation, /index.md
-这是[原英文页面](%original%)的**社区翻译**,自翻译以来有已更新,因此可能不再同步。欢迎您的[贡献](%contribute%)!
+这是[原始英文页面](%original%)的**社区翻译版本**。由于原文页面在翻译后**已更新**,因此内容可能不再同步。欢迎您参与[贡献](%contribute%)!
```
*用于C++中的原生图形开发。*
@@ -78,10 +78,10 @@ Learn WebGPU
```{admonition} 🚧 施工中
文档**仍在构建**,**WebGPU标准亦在不断发展**。为帮助读者跟踪本文档的最新进展,我们在各章标题中使用了如下标识:
-🟢 **最新版**:*使用最新版本的[WebGPU分发](https://github.com/eliemichel/WebGPU-distribution)*
-🟡 **已完成**:*已完成,但用的是旧版WebGPU*
-🟠 **施工中**:*足够可读,但不完整*
-🔴 **待施工**:*只触及了表面*
+🟢 **最新版**:*使用最新版本的[WebGPU分发](https://github.com/eliemichel/WebGPU-distribution)*
+🟡 **已完成**:*已完成,但用的是旧版WebGPU*
+🟠 **施工中**:*足够可读,但不完整*
+🔴 **待施工**:*只触及了表面*
**请注意:**当使用章节的伴随代码时,请确保使用的是与`webgpu/`**相同的版本**,以避免差异。
```
diff --git a/translation/zh/introduction.md b/translation/zh/introduction.md
index b398a13..39b9a63 100644
--- a/translation/zh/introduction.md
+++ b/translation/zh/introduction.md
@@ -2,7 +2,7 @@
============
```{translation-warning} Outdated Translation, /introduction.md
-这是[原英文页面](%original%)的**社区翻译**,自翻译以来有已更新,因此可能不再同步。欢迎您的[贡献](%contribute%)!
+这是[原始英文页面](%original%)的**社区翻译版本**。由于原文页面在翻译后**已更新**,因此内容可能不再同步。欢迎您参与[贡献](%contribute%)!
```
什么是图形API?
@@ -112,12 +112,12 @@ WebGPU是一个**渲染硬件接口**,建立在您平台的驱动程序/操作
如果您遇到任何错别字或更严重的问题,您可以点击每个页面顶部的编辑按钮来修复它们!
-```{image} images/edit-light.png
+```{image} /images/edit-light.png
:alt: 使用每个页面顶部的编辑按钮!
:class: only-light
```
-```{image} images/edit-dark.png
+```{image} /images/edit-dark.png
:alt: 使用每个页面顶部的编辑按钮!
:class: only-dark
```