Commit 911b84a

Update from SAP DITA CMS (squashed):
commit 70e51a994d834416a22de0bd4d680d09a6e74cdc
Author: REDACTED
Date: Tue Apr 1 14:46:54 2025 +0000
Update from SAP DITA CMS 2025-04-01 14:46:54
Project: dita-all/vuy1741044682128
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
Output: loiod3d776bb52294a17b48298443a286f55
Language: en-US
Builddable map: 89ab8c0ed18c432d8fb87551823e7de7.ditamap

commit 740c6096b0e9d1805ec4e8fe0570da6cda216f7a
Author: REDACTED
Date: Tue Apr 1 14:46:31 2025 +0000
Update from SAP DITA CMS 2025-04-01 14:46:31
Project: dita-all/vuy1741044682128
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
Output: loioc25299a38b6448f889a43b42c9e5897d
Language: en-US
Builddable map: 678695d903b546e5947af69e56ed42b8.ditamap

commit 15a87198cd4f40769bec0ae834459019ce60d3ae
Author: REDACTED
Date: Tue Apr 1 14:46:15 2025 +0000
Update from SAP DITA CMS 2025-04-01 14:46:15
Project: dita-all/vuy1741044682128
Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
##################################################
[Remaining squash message was removed before commit...]
1 parent c16fdeb commit 911b84a

123 files changed (+1431 / -940 lines)


docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/acquiring-and-preparing-data-in-the-object-store-2a6bc3f.md

Lines changed: 4 additions & 6 deletions
@@ -2,7 +2,7 @@
22

33
# Acquiring and Preparing Data in the Object Store
44

5-
Users with a modeler role can load large quantities of data via replication flows and store them inexpensively in file spaces in the object store. You can prepare the data using transformation flows and then share data to a standard space to be used as a source of flows, views, and analytic models.
5+
Users with a modeler role can load large quantities of data via replication flows and store them inexpensively in file spaces in the SAP Datasphere object store. You can prepare the data using Apache Spark transformation flows and then share data to a standard space to be used as a source of flows, views, and analytic models.
66

77
This topic contains the following sections:
88

@@ -19,9 +19,7 @@ This topic contains the following sections:
1919
## Introduction to the SAP Datasphere Object Store
2020

2121
> ### Note:
22-
> The object store is not enabled by default in SAP Datasphere tenants. To enable it in your tenant, see SAP note [3525760](https://me.sap.com/notes/3525760).
23-
>
24-
> For additional information on working with data in the object store, see SAP Note [3538038](https://me.sap.com/notes/3538038).
22+
> For additional information on working with data in the object store, see SAP note [3538038](https://me.sap.com/notes/3538038).
2523
>
2624
> The object store cannot be enabled in SAP Datasphere tenants provisioned prior to version 2021.03. To request the migration of your tenant, see SAP note [3268282](https://me.sap.com/notes/3268282).
2725
@@ -33,7 +31,7 @@ The object store provides an inbound layer for staging large quantities of data
3331

3432
## Create a File Space in the Object Store
3533

36-
A user with an administrator role can create a space with SAP HANA data lake files storage in the object store. File spaces are intended for loading and preparing large quantities of data in an inexpensive inbound staging area \(see [Create a File Space to Load Data in the Object Store](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/947444683e524cfd9169d7671b72ba0c.html "Create a space with SAP HANA data lake files storage in the object store, allocate compute resources and assign one or more users to allow them to start acquiring and preparing data. File spaces are intended for loading and preparing large quantities of data in an inexpensive inbound staging area.") :arrow_upper_right:\).
34+
A user with an administrator role can create a space with SAP HANA data lake files storage in the object store. File spaces are intended for loading and preparing large quantities of data in an inexpensive inbound staging area \(see [Create a File Space to Load Data in the Object Store](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/947444683e524cfd9169d7671b72ba0c.html "Create a file space and allocate compute resources to it. File spaces are intended for loading and preparing large quantities of data in an inexpensive inbound staging area and are stored in the SAP Datasphere object store.") :arrow_upper_right:\).
3735

3836
> ### Note:
3937
> You cannot create views, data flows, data access controls, analytic models, intelligent lookups, E/R models or use the *Business Builder* in a file space. You cannot import or export objects via CSN/JSON files and you cannot import CSV files, entities, remote tables or currency conversion tables.
@@ -44,7 +42,7 @@ A user with an administrator role can create a space with SAP HANA data lake fil
4442

4543
## Load Data with Replication Flows
4644

47-
Users with a modeler role can use replication flows to load data in local tables \(file\) that are stored in a file space \(see [SAP Datasphere Targets](sap-datasphere-targets-12c45eb.md)\). A replication flow writes data files to the inbound buffer \(specific folder in file storage\) of a target local table \(File\). To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(See [Monitoring Local Tables (File)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\).
45+
Users with a modeler role can use replication flows to load data into local tables \(file\) that are stored in a file space \(see [SAP Datasphere Targets](sap-datasphere-targets-12c45eb.md)\). A replication flow writes data files to the inbound buffer \(a specific folder in file storage\) of a target local table \(file\). To process data updates from this inbound buffer to the local table \(file\), and therefore make the data visible, a merge task has to run, either via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\) or via the *Local Tables \(File\)* monitor. You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(see [Monitoring Local Tables (File)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\).
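To make the buffer-then-merge behavior concrete, here is a minimal sketch of the pattern in Python. It is purely illustrative: the folder layout, file format, key column, and change markers are assumptions chosen for the example, not SAP Datasphere internals.

```python
# Minimal sketch of the inbound-buffer/merge pattern, assuming CSV change
# files keyed by "id" with a "change_type" marker (I/U/D). Illustrative only.
from pathlib import Path

import pandas as pd

BUFFER_DIR = Path("inbound_buffer")   # folder a replication flow writes change files to
BASE_TABLE = Path("local_table.csv")  # merged, visible state of the local table (file)


def merge_inbound_buffer() -> pd.DataFrame:
    """Apply buffered change files to the base table, oldest file first."""
    if BASE_TABLE.exists():
        base = pd.read_csv(BASE_TABLE)
    else:
        base = pd.DataFrame(columns=["id", "value", "change_type"])
    for change_file in sorted(BUFFER_DIR.glob("*.csv")):
        changes = pd.read_csv(change_file)
        # Upsert: drop rows whose key reappears, then append the new
        # versions, skipping records marked as deleted.
        base = base[~base["id"].isin(changes["id"])]
        base = pd.concat([base, changes[changes["change_type"] != "D"]])
        change_file.unlink()  # a buffer file is consumed once merged
    base.to_csv(BASE_TABLE, index=False)
    return base
```

Until `merge_inbound_buffer` runs, new files simply accumulate in the buffer, which mirrors why replicated data only becomes visible in the local table \(file\) after the merge task executes.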
4846

4947

5048

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-source-to-a-data-flow-7b50e8e.md

Lines changed: 21 additions & 0 deletions
@@ -287,6 +287,27 @@ Add a source to read data from. You can add multiple sources and combine them to
287287
288288

289289

290+
</td>
291+
</tr>
292+
<tr>
293+
<td valign="top">
294+
295+
*Access Method*
296+
297+
</td>
298+
<td valign="top">
299+
300+
Select the Open Connectors API method to use to access the data:
301+
302+
- *GET*: The Open Connectors data query API is called directly, once per page, to retrieve the data. It is the recommended default method, except for large amounts of data \(see the sketch following this table\).
303+
304+
> ### Note:
305+
> When you select the *GET* method, you can also update the *WebService Data Receive Timeout \(milliseconds\)*. This property controls the timeout for each individual HTTP request sent to the Open Connectors API. For example, if it is set to 5 minutes, five back-to-back requests of 4 minutes each will succeed, but a single request that lasts 6 minutes will fail.
306+
307+
- *BULK*: An asynchronous query job is created on Open Connectors to stream the data. Use this method to get better performance with large amounts of data.
308+
309+
310+
290311
</td>
291312
</tr>
292313
</table>
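To make the *GET* behavior concrete, here is a minimal sketch in Python. The host, authorization header format, and paging parameters are assumptions for illustration, not the exact Open Connectors contract; only the one-request-per-page shape and the per-request timeout are the point.

```python
# Sketch of the GET access method: one HTTP request per page, each bounded
# by its own receive timeout. Endpoint and parameter names are assumptions.
import requests

BASE_URL = "https://api.openconnectors.example/elements/api-v2"  # hypothetical host
HEADERS = {"Authorization": "User <secret>, Organization <secret>, Element <token>"}
RECEIVE_TIMEOUT_S = 300  # mirrors a 5-minute WebService Data Receive Timeout


def fetch_all(object_name: str, page_size: int = 200) -> list[dict]:
    rows: list[dict] = []
    page = 1
    while True:
        # The timeout bounds each individual request: five back-to-back
        # 4-minute pages succeed, but one 6-minute page raises a Timeout.
        resp = requests.get(
            f"{BASE_URL}/{object_name}",
            headers=HEADERS,
            params={"page": page, "pageSize": page_size},
            timeout=RECEIVE_TIMEOUT_S,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return rows  # an empty page signals the end of the data
        rows.extend(batch)
        page += 1
```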

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-or-create-a-target-table-in-a-data-flow-0fa7805.md

Lines changed: 21 additions & 0 deletions
@@ -249,6 +249,27 @@ Add a target table to write data to. You can only have one target table in a dat
249249
250250

251251

252+
</td>
253+
</tr>
254+
<tr>
255+
<td valign="top">
256+
257+
*Access Method*
258+
259+
</td>
260+
<td valign="top">
261+
262+
Select the Open Connectors API method to use to access the data:
263+
264+
- *GET*: The Open Connectors data query API is called directly, once per page, to retrieve the data. It is the recommended default method, except for large amounts of data.
265+
266+
> ### Note:
267+
> When you select the *GET* method, you can also update the *WebService Data Receive Timeout \(milliseconds\)*. This property controls the timeout for each individual HTTP request sent to the Open Connectors API. For example, if it is set to 5 minutes, five back-to-back requests of 4 minutes each will succeed, but a single request that lasts 6 minutes will fail.
268+
269+
- *BULK*: An asynchronous query job is created on Open Connectors to stream the data. Use this method to get better performance with large amounts of data \(see the sketch following this table\).
270+
271+
272+
252273
</td>
253274
</tr>
254275
</table>
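For the *BULK* method, the corresponding sketch creates one asynchronous job instead of one call per page. The `/bulk` endpoints, payloads, and status values below are assumptions for illustration only.

```python
# Sketch of the BULK access method: create an asynchronous query job on
# Open Connectors, poll until it completes, then download the result set.
# Endpoints and field names are illustrative assumptions.
import time

import requests

BASE_URL = "https://api.openconnectors.example/elements/api-v2"  # hypothetical host
HEADERS = {"Authorization": "User <secret>, Organization <secret>, Element <token>"}


def bulk_query(object_name: str) -> list[dict]:
    # 1. Create the asynchronous job.
    job = requests.post(
        f"{BASE_URL}/bulk/query",
        headers=HEADERS,
        params={"q": f"select * from {object_name}"},
        timeout=60,
    )
    job.raise_for_status()
    job_id = job.json()["id"]

    # 2. Poll until the job has finished preparing the data.
    while True:
        status = requests.get(
            f"{BASE_URL}/bulk/{job_id}/status", headers=HEADERS, timeout=60
        )
        status.raise_for_status()
        if status.json()["status"] in ("COMPLETED", "ABORTED"):
            break
        time.sleep(10)

    # 3. Stream the prepared result set in a single download.
    result = requests.get(
        f"{BASE_URL}/bulk/{job_id}/{object_name}", headers=HEADERS, timeout=600
    )
    result.raise_for_status()
    return result.json()
```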

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-the-target-for-a-replication-flow-ab490fb.md

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ If you are using an existing table as the target object, this table may contain
105105
> ### Note:
106106
> Changing the toggle on an active replication flow will have no effect.
107107
108-
- If you deactivate it, you get an error message for each unmapped target column when saving the replication flow. However, you can still manually set each column to *Skip Mapping* in the *Mapping* tab.
108+
- If you deactivate it, you get an error message for each unmapped target column when saving the replication flow. However, you can still manually map each column in the *Mapping* tab.
109109

110110

111111
For replication flows created before version 2025.01 of SAP Datasphere, the property is deactivated by default, but you can activate it as required. For replication flows created with version 2025.02 or later, this property is activated by default, and you can deactivate it as required.

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/capturing-delta-changes-in-your-local-table-154bdff.md

Lines changed: 1 addition & 1 deletion
@@ -138,7 +138,7 @@ The 2 objects are consumed differently by SAP Datasphere apps:
138138
- The following SAP Datasphere apps also interact with the Delta Capture Table that contains the delta columns:
139139
- *Transformation Flow*:
140140
- As Source, you can choose between source with "Delta Capture" or "All Active Records". See [Add a Source to a Graphical View](../add-a-source-to-a-graphical-view-1eee180.md)
141-
- As target, it depends of the combination of the load type used and the table type \(local table with or without delta capture\). See [Processing Changes to Sources and Target Tables](../processing-changes-to-sources-and-target-tables-705292c.md) and [Create or Add a Target Table to Your Transformation Flow](../create-or-add-a-target-table-to-your-transformation-flow-0950746.md)
141+
- As target, it depends on the combination of the load type used and the table type \(local table with or without delta capture\). See [Processing Changes to Sources and Target Tables](../processing-changes-to-sources-and-target-tables-705292c.md) and [Create or Add a Target Table to a Transformation Flow](../create-or-add-a-target-table-to-a-transformation-flow-0950746.md)
142142

143143
- *Replication Flow*: The Delta Capture Table can be used as source or as target, see [Creating a Replication Flow](creating-a-replication-flow-25e2bd7.md) and [Add the Source for a Replication Flow](add-the-source-for-a-replication-flow-7496380.md).
144144
- *Table Editor*:

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-file-d21881b.md

Lines changed: 2 additions & 4 deletions
@@ -13,9 +13,7 @@ Create a local table \(file\) to store data in the object store. Load data to yo
1313
## Context
1414

1515
> ### Note:
16-
> The object store is not enabled by default in SAP Datasphere tenants. To enable it in your tenant, see SAP note [3525760](https://me.sap.com/notes/3525760).
17-
>
18-
> For additional information on working with data in the object store, see SAP Note [3538038](https://me.sap.com/notes/3538038).
16+
> For additional information on working with data in the object store, see SAP note [3538038](https://me.sap.com/notes/3538038).
1917
>
2018
> The object store cannot be enabled in SAP Datasphere tenants provisioned prior to version 2021.03. To request the migration of your tenant, see SAP note [3268282](https://me.sap.com/notes/3268282).
2119
@@ -31,7 +29,7 @@ SAP HANA Cloud, data lake allows SAP Datasphere to store and manage mass-data ef
3129
As a local table \(file\) is capturing delta changes via flows, it creates different entities in the repository after it is deployed:
3230

3331
- An active records entity for accessing the delta capture entity through a virtual table. It excludes the delta capture columns and deleted records, and keeps only the active records.
34-
- A delta capture entity that stores information on changes found in the delta capture table. It serves as target for flows at design time. In addition, every local table \(File\) has a specific folder in file storage \(inbound buffer\) to which a replication flow writes data files to a specific target object. To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run via a task chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\).
32+
- A delta capture entity that stores information on changes found in the delta capture table. It serves as a target for flows at design time. In addition, every local table \(file\) has a specific folder in file storage \(the inbound buffer\) to which a replication flow writes data files for a specific target object. To process data updates from this inbound buffer to the local table \(file\), and therefore make the data visible, a merge task has to run \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md), [Monitoring Local Tables (File)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right: and [Creating a Replication Flow](creating-a-replication-flow-25e2bd7.md)\). You can monitor the buffer merge status using the *Local Tables \(File\)* monitor \(see [Monitoring Local Tables (File)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/6b2d0073a8684ee6a59d6f47d00ec895.html "Monitor your local tables (file). Check how and when they were last updated and if new data has still to be merged.") :arrow_upper_right:\).
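The relationship between the two entities can be illustrated with a few lines of pandas. The delta capture column names and change markers below are assumptions for the example, not the exact repository schema.

```python
# Sketch of the two entities: the delta capture entity keeps the full change
# history, while the active records entity is a filtered view of it.
# Column names ("change_type", "change_date") are illustrative assumptions.
import pandas as pd

# Delta capture entity: every change, including updates and deletions.
delta_capture = pd.DataFrame({
    "id":          [1, 2, 2, 3],
    "value":       ["a", "b", "b2", "c"],
    "change_type": ["I", "I", "U", "D"],  # insert / update / delete marker
    "change_date": pd.to_datetime(
        ["2025-01-01", "2025-01-01", "2025-02-01", "2025-03-01"]
    ),
})

# Active records entity: latest version per key, with deleted records and
# the delta capture columns excluded.
latest = delta_capture.sort_values("change_date").groupby("id").tail(1)
active_records = latest[latest["change_type"] != "D"].drop(
    columns=["change_type", "change_date"]
)
print(active_records)  # keys 1 and 2 survive; key 3 was deleted
```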
3533

3634

3735

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-replication-flow-25e2bd7.md

Lines changed: 17 additions & 0 deletions
@@ -187,6 +187,23 @@ For more information about available connection types, sources, and targets, see
187187
188188

189189

190+
</td>
191+
</tr>
192+
<tr>
193+
<td valign="top">
194+
195+
Merge Data Automatically
196+
197+
</td>
198+
<td valign="top">
199+
200+
\[only relevant for replication flows created in a file space\] Select this option if you want new data to be merged automatically into your local table \(file\). When new data appears in the inbound buffer, a merge task runs automatically and the data is updated in your target local table \(file\).
201+
202+
> ### Note:
203+
> The option is enabled by default when you create a new replication flow with SAP Datasphere as target and load type *Initial and Delta*. For replication flows created before the option was available, you can still enable it manually \(a redeployment is then required\).
204+
205+
206+
190207
</td>
191208
</tr>
192209
</table>

docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-task-chain-d1afbc2.md

Lines changed: 19 additions & 2 deletions
@@ -267,8 +267,8 @@ You can monitor the status of task chain runs from the Data Integration Monitor.
267267
- Local Table \(File\) - Merge, Optimize or Delete Records.
268268

269269
> ### Note:
270-
> - Merge: Add, update or delete data into the existing local table \(file\). A replication flow writes data files to the inbound buffer \(specific folder in file storage\) of a target local table \(File\). To process data updates from this inbound buffer to the local table \(File\), and therefore make data visible, a merge task has to run..
271-
> - Optimize: Improve data access performance by optimizing the layout of data in file storage \(for examply by grouping small files into larger files..
270+
> - Merge: Add, update, or delete data in the existing local table \(file\). A replication flow writes data files to the inbound buffer \(a specific folder in file storage\) of a target local table \(file\). To process data updates from this inbound buffer to the local table \(file\), and therefore make the data visible, a merge task has to run.
271+
> - Optimize: Improve data access performance by optimizing the layout of data in file storage \(for example, by grouping small files into larger files\).
272272
> - Delete Records: Delete records from your local table \(file\). Under Settings, define what type of deletion you want:
273273
> - *Delete All Records \(Mark as Deleted\)*: Records will not be physically deleted but marked as deleted and filtered out when accessing the active records of the local table. They will still consume storage, and they can still be processed by other apps that consume them.
274274
> - *Delete previous versions \(Vacuum\), which are older than the specified number of days*: Records that meet your defined criteria will be permanently deleted. Default value is 90 days. Minimum authorized value is 7 so that records from the last 7 days cannot be deleted. In addition, only records that have been fully processed can be deleted.
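A minimal sketch of the vacuum rule just described, with the 7-day floor and the fully-processed condition made explicit \(the defaults are restated from the text above; the function itself is illustrative\):

```python
# Sketch of the vacuum rule: a previous record version may be permanently
# deleted only if it is older than the retention window (default 90 days,
# never fewer than 7) and has been fully processed.
from datetime import datetime, timedelta, timezone

MIN_RETENTION_DAYS = 7
DEFAULT_RETENTION_DAYS = 90


def is_vacuumable(version_date: datetime, fully_processed: bool,
                  retention_days: int = DEFAULT_RETENTION_DAYS) -> bool:
    retention_days = max(retention_days, MIN_RETENTION_DAYS)  # enforce the floor
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    return fully_processed and version_date < cutoff
```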
@@ -286,6 +286,23 @@ You can monitor the status of task chain runs from the Data Integration Monitor.
286286

287287

288288

289+
</td>
290+
</tr>
291+
<tr>
292+
<td valign="top">
293+
294+
Apache Spark Settings
295+
296+
</td>
297+
<td valign="top">
298+
299+
\[File Space Only\] When creating a file space, administrators define default *Apache Spark Applications* to run tasks \(in Workload Management\). You can adjust these settings to your needs, by object type:
300+
301+
- *Use Default*: The default application is the one selected by an administrator during file space creation. However, if the settings have been changed at the object level in the Data Integration Monitor, that value becomes the default, overriding the value defined in *Workload Management*.
302+
- *Define New Setting for This Task*: Select another *Apache Spark Application* that fits your needs.
303+
304+
For more information, see [Merge or Optimize Your Local Tables (File)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/e533b154ed3e49ce9a03e4421a5296e7.html "Local Tables (File) can store large quantities of data in the object store. You can manage this file storage with merge or optimize tasks, and allocate the required amount of compute resources that the file space can consume when processing these tasks.") :arrow_upper_right: and [Update the Settings Used to Run Your Transformation Flow (in a File Space)](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/e5c4ac8ab3bf4573b86cd4f4f3118c16.html "Update the maximum amount of compute resources that the file space can consume to run a transformation flow.") :arrow_upper_right:.
305+
289306
</td>
290307
</tr>
291308
<tr>
