17 | 17 | "metadata": {}, |
18 | 18 | "source": [ |
19 | 19 | "# 1. Create and Deploy Your Operational cluster on Capella\n", |
20 | | - "To get started with Couchbase Capella, create an account and use it to deploy a cluster. To know more, please follow the [instructions](https://docs.couchbase.com/cloud/get-started/create-account.html).\n", |
| 20 | + "To get started with Couchbase Capella, create an account and use it to deploy a cluster. \n", |
| 21 | + "\n", |
| 22 | + "Make sure that you deploy a `Multi-node` cluster with `data`, `index`, `query` and `eventing` services enabled. To know more, please follow the [instructions](https://docs.couchbase.com/cloud/get-started/create-account.html).\n", |
21 | 23 | " \n", |
22 | 24 | "### Couchbase Capella Configuration\n", |
23 | 25 | "When running Couchbase using [Capella](https://cloud.couchbase.com/sign-in), the following prerequisites need to be met.\n", |
|
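Once the cluster is deployed, connecting to it from Python looks roughly like the sketch below. This is a minimal sketch, assuming the 4.x `couchbase` SDK and placeholder credentials/endpoint; Capella only accepts TLS connections, hence the `couchbases://` scheme:

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Placeholder credentials and endpoint -- substitute your own Capella values.
auth = PasswordAuthenticator("DB_USERNAME", "DB_PASSWORD")

# Capella requires TLS, so the connection string uses the couchbases:// scheme.
cluster = Cluster("couchbases://cb.example.cloud.couchbase.com", ClusterOptions(auth))

# Block until the cluster and its services are ready to serve requests.
cluster.wait_until_ready(timedelta(seconds=5))
```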
154 | 156 | "cell_type": "code", |
155 | 157 | "execution_count": null, |
156 | 158 | "id": "0298d27f-ee03-4de2-829d-b653c39746a9", |
157 | | - "metadata": {}, |
| 159 | + "metadata": { |
| 160 | + "vscode": { |
| 161 | + "languageId": "powershell" |
| 162 | + } |
| 163 | + }, |
158 | 164 | "outputs": [], |
159 | 165 | "source": [ |
160 | 166 | "!pip install couchbase langchain-couchbase langchain-openai" |
|
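After the install cell runs, a quick version check can confirm that all three packages are available before moving on (a minimal sketch; the distribution names match the pip command above):

```python
from importlib.metadata import version

# Verify that each package installed by the cell above resolves to a version.
for pkg in ("couchbase", "langchain-couchbase", "langchain-openai"):
    print(pkg, version(pkg))
```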
233 | 239 | }, |
234 | 240 | { |
235 | 241 | "cell_type": "code", |
236 | | - "execution_count": 7, |
| 242 | + "execution_count": null, |
237 | 243 | "id": "1d77404b", |
238 | 244 | "metadata": {}, |
239 | 245 | "outputs": [], |
240 | 246 | "source": [ |
241 | 247 | "bucket_name = \"Unstructured_data_bucket\"\n", |
242 | 248 | "scope_name = \"_default\"\n", |
243 | 249 | "collection_name = \"_default\"\n", |
244 | | - "index_name = \"search_autovec_workflow_text-embedding\" # This is the name of the search index that was created in step 3.6 and can also be seen in the search tab of the cluster.\n", |
| 250 | + "index_name = \"hyperscale_autovec_workflow_text-embedding\" # This is the name of the search index that was created in step 3.6 and can also be seen in the search tab of the cluster.\n", |
245 | 251 | " \n", |
246 | 252 | "# Using the OpenAI SDK for the embeddings with the capella model services and they are compatible with the OpenAIEmbeddings class in Langchain\n", |
247 | 253 | "embedder = OpenAIEmbeddings(\n", |
248 | | - " model=\"nvidia/nv-embedqa-e5-v5\", # This is the model that will be used to create the embedding of the query.\n", |
| 254 | + " model=\"nvidia/llama-3.2-nv-embedqa-1b-v2\", # This is the model that will be used to create the embedding of the query.\n", |
249 | 255 | " openai_api_key=\"CAPELLA_MODEL_KEY\",\n", |
250 | 256 | " openai_api_base=\"CAPELLA_MODEL_ENDPOINT/v1\",\n", |
251 | 257 | " check_embedding_ctx_length=False,\n", |
|
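The configured names and the `embedder` come together when constructing the LangChain vector store. A minimal sketch, assuming the `CouchbaseSearchVectorStore` class from recent `langchain-couchbase` releases (older releases expose `CouchbaseVectorStore` instead) and the authenticated `cluster` object from the connection sketch earlier:

```python
from langchain_couchbase.vectorstores import CouchbaseSearchVectorStore

vector_store = CouchbaseSearchVectorStore(
    cluster=cluster,                 # authenticated couchbase Cluster object
    bucket_name=bucket_name,
    scope_name=scope_name,
    collection_name=collection_name,
    embedding=embedder,              # the OpenAIEmbeddings wrapper defined above
    index_name=index_name,           # search index created in step 3.6
    text_key="text-to-embed",        # field holding the human-readable text
    embedding_key="text-embedding",  # field holding the stored vectors
)
```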
341 | 347 | "- Content text: This is the value of the field you configured as `text_key` (in this tutorial: `text-to-embed`). It represents the human-readable content we chose to display.\n", |
342 | 348 | "\n", |
343 | 349 | "### How the Ranking Works\n", |
344 | | - "1. Your natural language query (e.g., `query = \"How to setup java SDK?\"`) is embedded using the NVIDIA model (`nvidia/nv-embedqa-e5-v5`).\n", |
| 350 | + "1. Your natural language query (e.g., `query = \"How to setup java SDK?\"`) is embedded using the NVIDIA model (`nvidia/llama-3.2-nv-embedqa-1b-v2`).\n", |
345 | 351 | "2. The vector store compares the query embedding to stored document embeddings in the field you configured (`embedding_key = \"text-embedding\"`).\n", |
346 | 352 | "3. Results are sorted by vector similarity. Higher similarity = closer semantic meaning.\n", |
347 | 353 | "\n", |
|
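The three ranking steps above map onto a single LangChain call. A minimal sketch, assuming `vector_store` was built as in the earlier sketch:

```python
query = "How to setup java SDK?"

# Embeds the query with the configured NVIDIA model, compares it against the
# stored vectors in the `text-embedding` field, and returns the top-k matches.
results = vector_store.similarity_search_with_score(query, k=4)

for doc, score in results:
    # Higher score = closer semantic match to the query.
    print(f"{score:.4f}  {doc.page_content[:80]}")
```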