if ``weights_only`` was not passed as an argument.
340340
341341
.. _serializing-python-modules:

Serializing torch.nn.Modules and loading them in C++
----------------------------------------------------

See also: `Tutorial: Loading a TorchScript Model in C++ <https://pytorch.org/tutorials/advanced/cpp_export.html>`_

ScriptModules can be serialized as a TorchScript program and loaded
using :func:`torch.jit.load`.
This serialization encodes all the modules’ methods, submodules, parameters,
and attributes, and it allows the serialized program to be loaded in C++
(i.e. without Python).

The distinction between :func:`torch.jit.save` and :func:`torch.save` may not
be immediately clear. :func:`torch.save` saves Python objects with pickle.
This is especially useful for prototyping, researching, and training.
:func:`torch.jit.save`, on the other hand, serializes ScriptModules to a format
that can be loaded in Python or C++. This is useful when saving and loading C++
modules or for running modules trained in Python with C++, a common practice
when deploying PyTorch models.

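To make the contrast concrete, here is a minimal sketch (the module and file
names are illustrative): :func:`torch.save` pickles a Python object such as a
``state_dict``, which can only be restored from Python, while
:func:`torch.jit.save` writes a TorchScript archive that Python or C++ can load.

```python
import torch

class SmallModule(torch.nn.Module):  # illustrative module
    def __init__(self):
        super().__init__()
        self.l0 = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.l0(x)

m = SmallModule()

# torch.save pickles Python objects (here, the state_dict); restoring it
# requires Python and the module's class definition.
torch.save(m.state_dict(), "small_state.pt")

# torch.jit.save writes a TorchScript archive; it can be loaded from
# Python with torch.jit.load or from C++ with torch::jit::load.
torch.jit.save(torch.jit.script(m), "small_scripted.pt")
```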
To script, serialize and load a module in Python:

::

    >>> scripted_module = torch.jit.script(MyModule())
    >>> torch.jit.save(scripted_module, 'mymodule.pt')
    >>> torch.jit.load('mymodule.pt')
    RecursiveScriptModule(
      original_name=MyModule
      (l0): RecursiveScriptModule(original_name=Linear)
      (l1): RecursiveScriptModule(original_name=Linear)
    )


Traced modules can also be saved with :func:`torch.jit.save`, with the caveat
that only the traced code path is serialized. The following example demonstrates
this:

::

    # A module with control flow
    >>> class ControlFlowModule(torch.nn.Module):
          def __init__(self):
            super().__init__()
            self.l0 = torch.nn.Linear(4, 2)
            self.l1 = torch.nn.Linear(2, 1)

          def forward(self, input):
            if input.dim() > 1:
                return torch.tensor(0)

            out0 = self.l0(input)
            out0_relu = torch.nn.functional.relu(out0)
            return self.l1(out0_relu)

    >>> traced_module = torch.jit.trace(ControlFlowModule(), torch.randn(4))
    >>> torch.jit.save(traced_module, 'controlflowmodule_traced.pt')
    >>> loaded = torch.jit.load('controlflowmodule_traced.pt')
    >>> loaded(torch.randn(2, 4))
    tensor([[-0.1571], [-0.3793]], grad_fn=<AddBackward0>)

    >>> scripted_module = torch.jit.script(ControlFlowModule())
    >>> torch.jit.save(scripted_module, 'controlflowmodule_scripted.pt')
    >>> loaded = torch.jit.load('controlflowmodule_scripted.pt')
    >>> loaded(torch.randn(2, 4))
    tensor(0)
407- 
The above module has an if statement that is not triggered by the traced inputs,
and so is not part of the traced module and not serialized with it.
The scripted module, however, contains the if statement and is serialized with it.
See the `TorchScript documentation <https://pytorch.org/docs/stable/jit.html>`_
for more on scripting and tracing.

Finally, to load the module in C++:

::

    torch::jit::script::Module module;
    module = torch::jit::load("controlflowmodule_scripted.pt");

See the `PyTorch C++ API documentation <https://pytorch.org/cppdocs/>`_
for details about how to use PyTorch modules in C++.
423- 
.. _saving-loading-across-versions:

Saving and loading ScriptModules across PyTorch versions
--------------------------------------------------------

The PyTorch Team recommends saving and loading modules with the same version of
PyTorch. Older versions of PyTorch may not support newer modules, and newer
versions may have removed or modified older behavior. These changes are
explicitly described in
PyTorch’s `release notes <https://github.com/pytorch/pytorch/releases>`_,
and modules relying on functionality that has changed may need to be updated
to continue working properly. In limited cases, detailed below, PyTorch will
preserve the historic behavior of serialized ScriptModules so they do not require
an update.

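One lightweight way to follow this recommendation is to record the saving
version next to the archive and compare it at load time. A minimal sketch;
the side-car ``.version`` file and the helper names are illustrative
conventions, not part of the TorchScript format:

```python
import warnings

import torch

def save_with_version(scripted_module, path):
    """Save a ScriptModule and record the PyTorch version that wrote it."""
    torch.jit.save(scripted_module, path)
    with open(path + ".version", "w") as f:
        f.write(torch.__version__)

def load_with_version_check(path):
    """Load a ScriptModule, warning if the PyTorch version differs."""
    with open(path + ".version") as f:
        saved_version = f.read().strip()
    if saved_version != torch.__version__:
        warnings.warn(
            f"module was saved with PyTorch {saved_version} "
            f"but is being loaded with {torch.__version__}"
        )
    return torch.jit.load(path)
```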
torch.div performing integer division
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In PyTorch 1.5 and earlier :func:`torch.div` would perform floor division when
given two integer inputs:

::

    # PyTorch 1.5 (and earlier)
    >>> a = torch.tensor(5)
    >>> b = torch.tensor(3)
    >>> a / b
    tensor(1)

In PyTorch 1.7, however, :func:`torch.div` will always perform a true division
of its inputs, just like division in Python 3:

::

    # PyTorch 1.7
    >>> a = torch.tensor(5)
    >>> b = torch.tensor(3)
    >>> a / b
    tensor(1.6667)

The behavior of :func:`torch.div` is preserved in serialized ScriptModules.
That is, ScriptModules serialized with versions of PyTorch before 1.6 will continue
to see :func:`torch.div` perform floor division when given two integer inputs
even when loaded with newer versions of PyTorch. ScriptModules using :func:`torch.div`
and serialized on PyTorch 1.6 and later cannot be loaded in earlier versions of
PyTorch, however, since those earlier versions do not understand the new behavior.

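When updating such a module, the historic integer-division result can be
requested explicitly rather than relying on the preserved semantics. A small
sketch, assuming PyTorch 1.8 or later (where the ``rounding_mode`` argument to
:func:`torch.div` is available):

```python
import torch

a = torch.tensor(5)
b = torch.tensor(3)

# Current torch.div performs true division on integer inputs.
true_quotient = torch.div(a, b)                          # ~1.6667

# The pre-1.6 integer-division result can be requested explicitly.
floor_quotient = torch.div(a, b, rounding_mode="floor")  # 1
```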
torch.full always inferring a float dtype
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In PyTorch 1.5 and earlier :func:`torch.full` always returned a float tensor,
regardless of the fill value it’s given:

::

    # PyTorch 1.5 and earlier
    >>> torch.full((3,), 1)  # Note the integer fill value...
    tensor([1., 1., 1.])     # ...but float tensor!

In PyTorch 1.7, however, :func:`torch.full` will infer the returned tensor’s
dtype from the fill value:

::

    # PyTorch 1.7
    >>> torch.full((3,), 1)
    tensor([1, 1, 1])

    >>> torch.full((3,), True)
    tensor([True, True, True])

    >>> torch.full((3,), 1.)
    tensor([1., 1., 1.])

    >>> torch.full((3,), 1 + 1j)
    tensor([1.+1.j, 1.+1.j, 1.+1.j])

The behavior of :func:`torch.full` is preserved in serialized ScriptModules. That is,
ScriptModules serialized with versions of PyTorch before 1.6 will continue to see
torch.full return float tensors by default, even when given bool or
integer fill values. ScriptModules using :func:`torch.full` and serialized on PyTorch 1.6
and later cannot be loaded in earlier versions of PyTorch, however, since those
earlier versions do not understand the new behavior.

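Passing ``dtype`` explicitly sidesteps the inference rules entirely, so a
module produces the same result under every version. A brief sketch:

```python
import torch

# Relying on inference: the dtype follows the fill value in PyTorch 1.7+.
inferred = torch.full((3,), 1)                     # int64 under the new rules

# Passing dtype explicitly pins the result regardless of PyTorch version.
pinned = torch.full((3,), 1, dtype=torch.float32)  # always float32
```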
.. _utility functions:

Utility functions