
Commit fab4c63

AshutoshSinghIntel, p-durandin, and tsavina authored
[Docs] cache encryption for CacheMode property with OPTIMIZE_SPEED update (#32434)
### Details:
- *changes with PR 32310*

### Tickets:
- *None*

Co-authored-by: Pavel Durandin <pavel.durandin@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
1 parent e3a81e1 commit fab4c63

File tree

1 file changed: +1 / -1 lines changed


docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst

Lines changed: 1 addition & 1 deletion
@@ -192,7 +192,7 @@ loading it from the cache. Currently, this property can be set only in ``compile
    :language: cpp
    :fragment: [ov:caching:part5]
 
-If model caching is enabled in the GPU Plugin, the model topology can be encrypted while it is saved to the cache and decrypted when it is loaded from the cache. Full encryption only works when the ``CacheMode`` property is set to ``OPTIMIZE_SIZE``.
+If model caching is enabled in the GPU Plugin, the model topology is encrypted when saved to the cache and decrypted when loaded from the cache if the ``CacheMode`` property is set to ``OPTIMIZE_SIZE``. The weights are encrypted only when ``CacheMode`` is set to ``OPTIMIZE_SPEED``. Weight encryption requires extra disk space equal to the size of the weights and may introduce runtime memory overhead for decryption, depending on the encryption algorithm.
 
 .. tab-set::
 
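For reference, a minimal C++ sketch of how the encrypted GPU cache described in the changed paragraph might be configured, assuming the ``ov::cache_dir``, ``ov::cache_encryption_callbacks``, and ``ov::cache_mode`` properties of the OpenVINO runtime API; the XOR codec and the ``model.xml`` path are hypothetical placeholders used for illustration, not a recommended cipher or a real file.

```cpp
#include <openvino/openvino.hpp>
#include <string>

int main() {
    ov::Core core;

    // Hypothetical model path; substitute your own IR files.
    auto model = core.read_model("model.xml");

    // Illustrative, reversible XOR "codec" standing in for a real cipher.
    auto codec_xor = [](const std::string& source) {
        const char key = 0x5A;
        std::string result = source;
        for (auto& c : result) {
            c ^= key;
        }
        return result;
    };

    // XOR is its own inverse, so the same callable serves as both encrypt and decrypt.
    ov::EncryptionCallbacks encryption_callbacks;
    encryption_callbacks.encrypt = codec_xor;
    encryption_callbacks.decrypt = codec_xor;

    // Per the updated docs: OPTIMIZE_SIZE encrypts the cached model topology,
    // while OPTIMIZE_SPEED is required for the weights to be encrypted as well.
    auto compiled_model = core.compile_model(model,
                                             "GPU",
                                             ov::cache_dir("model_cache"),
                                             ov::cache_encryption_callbacks(encryption_callbacks),
                                             ov::cache_mode(ov::CacheMode::OPTIMIZE_SPEED));
    return 0;
}
```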
