
Commit eba9531

Update screenshots with the updated cost cards and replace the previous model service with the nvidia/llama models
1 parent a2a97d4 commit eba9531

File tree

4 files changed: +12, -6 lines changed

autovec_unstructured/autovec_unstructured.ipynb

Lines changed: 12 additions & 6 deletions
@@ -17,7 +17,9 @@
    "metadata": {},
    "source": [
     "# 1. Create and Deploy Your Operational cluster on Capella\n",
-    "To get started with Couchbase Capella, create an account and use it to deploy a cluster. To know more, please follow the [instructions](https://docs.couchbase.com/cloud/get-started/create-account.html).\n",
+    "To get started with Couchbase Capella, create an account and use it to deploy a cluster. \n",
+    "\n",
+    "Make sure that you deploy a `Multi-node` cluster with `data`, `index`, `query` and `eventing` services enabled. To know more, please follow the [instructions](https://docs.couchbase.com/cloud/get-started/create-account.html).\n",
     " \n",
     "### Couchbase Capella Configuration\n",
     "When running Couchbase using [Capella](https://cloud.couchbase.com/sign-in), the following prerequisites need to be met.\n",
@@ -154,7 +156,11 @@
    "cell_type": "code",
    "execution_count": null,
    "id": "0298d27f-ee03-4de2-829d-b653c39746a9",
-   "metadata": {},
+   "metadata": {
+    "vscode": {
+     "languageId": "powershell"
+    }
+   },
    "outputs": [],
    "source": [
     "!pip install couchbase langchain-couchbase langchain-openai"
@@ -233,19 +239,19 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
+   "execution_count": null,
    "id": "1d77404b",
    "metadata": {},
    "outputs": [],
    "source": [
     "bucket_name = \"Unstructured_data_bucket\"\n",
     "scope_name = \"_default\"\n",
     "collection_name = \"_default\"\n",
-    "index_name = \"search_autovec_workflow_text-embedding\" # This is the name of the search index that was created in step 3.6 and can also be seen in the search tab of the cluster.\n",
+    "index_name = \"hyperscale_autovec_workflow_text-embedding\" # This is the name of the search index that was created in step 3.6 and can also be seen in the search tab of the cluster.\n",
     " \n",
     "# Using the OpenAI SDK for the embeddings with the capella model services and they are compatible with the OpenAIEmbeddings class in Langchain\n",
     "embedder = OpenAIEmbeddings(\n",
-    " model=\"nvidia/nv-embedqa-e5-v5\", # This is the model that will be used to create the embedding of the query.\n",
+    " model=\"nvidia/llama-3.2-nv-embedqa-1b-v2\", # This is the model that will be used to create the embedding of the query.\n",
     " openai_api_key=\"CAPELLA_MODEL_KEY\",\n",
     " openai_api_base=\"CAPELLA_MODEL_ENDPOINT/v1\",\n",
     " check_embedding_ctx_length=False,\n",
@@ -341,7 +347,7 @@
    "- Content text: This is the value of the field you configured as `text_key` (in this tutorial: `text-to-embed`). It represents the human-readable content we chose to display.\n",
    "\n",
    "### How the Ranking Works\n",
-   "1. Your natural language query (e.g., `query = \"How to setup java SDK?\"`) is embedded using the NVIDIA model (`nvidia/nv-embedqa-e5-v5`).\n",
+   "1. Your natural language query (e.g., `query = \"How to setup java SDK?\"`) is embedded using the NVIDIA model (`nvidia/llama-3.2-nv-embedqa-1b-v2`).\n",
    "2. The vector store compares the query embedding to stored document embeddings in the field you configured (`embedding_key = \"text-embedding\"`).\n",
    "3. Results are sorted by vector similarity. Higher similarity = closer semantic meaning.\n",
    "\n",
3 binary screenshot files changed (+22.6 KB, -4.22 KB, +13.3 KB); image previews not shown.
