
Conversation

@av-novikov (Contributor) commented Oct 8, 2024

This PR adds interfaces for embedding Python functions into C++ code, adds multilinear adaptive interpolation driven by an exact evaluator provided from Python, and adds support for both static and adaptive interpolation in ReactiveCompositionalMultiphaseOBLKernels. The ultimate goal is to run a simulation similar to open-darts from a corresponding (part of a) model provided from Python.

  • fix assembly in ReactiveCompositionalMultiphaseOBLKernels
  • multilinear adaptive interpolation
  • interfaces to call Python functions from C++ (from the interpolator)
  • convergence test for multilinear adaptive interpolation
  • generalize the input of OBL fluid physics for static and adaptive interpolation
  • pass a Python-based OBL operators evaluator to the solver
  • use LvArray/src/python/PyFunc for embedding Python functions into C++ (a generic embedding sketch follows this list)
  • deduce the access level in LvArray::python::PyArrayWrapper to support MODIFIABLE arguments: LvArray: PR 336
  • unify casting-to-numpy interfaces between GEOS and open-darts: open-darts: MR 138
  • Python interfaces to GEOS from the Makutu repository: geosPythonPackages: PR 74
  • examples 2ph_comp and carbonated_water (with PHREEQC-backed geochemistry) have been moved to geosPythonPackages: PR 74
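
For readers unfamiliar with embedding, here is a minimal sketch of calling a Python function from C++ through the plain CPython C API. This is illustrative only: it is not the LvArray::python::PyFunc interface actually used in this PR, and the module/function names and the callPythonEvaluator helper are placeholders.

#include <Python.h>
#include <stdexcept>

// Call function `funcName` from module `moduleName` with a single double
// argument and return its double result. Py_Initialize() must have been
// called beforehand. Error-path cleanup is omitted for brevity.
double callPythonEvaluator( char const * moduleName, char const * funcName, double x )
{
  PyObject * module = PyImport_ImportModule( moduleName );
  if( module == nullptr )
  {
    PyErr_Print();
    throw std::runtime_error( "failed to import Python module" );
  }
  PyObject * func = PyObject_GetAttrString( module, funcName );
  PyObject * args = Py_BuildValue( "(d)", x );
  PyObject * result = PyObject_CallObject( func, args );
  if( result == nullptr )
  {
    PyErr_Print();
    throw std::runtime_error( "Python call failed" );
  }
  double const value = PyFloat_AsDouble( result );
  // Release the owned references acquired above.
  Py_XDECREF( result );
  Py_XDECREF( args );
  Py_XDECREF( func );
  Py_XDECREF( module );
  return value;
}

The PR itself relies on LvArray/src/python/PyFunc, which wraps this underlying mechanism.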

FAQ:

Q: Why do we use __uint128_t?
A: We need to count nodes and hypercubes in a multidimensional state space. Even with 300 points per axis in an 8-dimensional space (e.g. an 8-component fluid), the number of points (300^8 ≈ 6.6e19) exceeds the maximum value of a 64-bit integer (about 1.8e19). Moreover, a single __uint128_t key type simplifies hashing points and hypercubes in std::unordered_map storage.
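
As a concrete illustration (with hypothetical names, not the PR's actual code), a single __uint128_t can hold a mixed-radix encoding of a multidimensional point index, and a small custom hash makes it usable as a std::unordered_map key:

#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

// __uint128_t is a GCC/Clang extension. With 300 points per axis and 8 axes,
// the encoded index fits comfortably: 300^8 ~ 6.6e19 < 2^128.
using Key = __uint128_t;

template< int DIMS >
Key packIndex( std::array< std::uint32_t, DIMS > const & idx, std::uint64_t pointsPerAxis )
{
  Key key = 0;
  for( int d = 0; d < DIMS; ++d )
  {
    key = key * pointsPerAxis + idx[ d ];  // one mixed-radix digit per axis
  }
  return key;
}

// std::hash has no guaranteed specialization for __uint128_t, so combine
// the hashes of the low and high 64-bit halves.
struct KeyHash
{
  std::size_t operator()( Key const k ) const
  {
    std::size_t const lo = std::hash< std::uint64_t >{}( static_cast< std::uint64_t >( k ) );
    std::size_t const hi = std::hash< std::uint64_t >{}( static_cast< std::uint64_t >( k >> 64 ) );
    return lo ^ ( hi + 0x9e3779b97f4a7c15ull + ( lo << 6 ) + ( lo >> 2 ) );
  }
};

// Adaptive storage: a point's value appears only once its key is first visited.
std::unordered_map< Key, double, KeyHash > pointStorage;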

Q: Do we duplicate the storage of points?
A: Yes: we minimize the number of memory accesses at the cost of redundant, but consecutive, storage.
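
For context on how the hypercube vertex values are used, multilinear interpolation in D dimensions is a weighted sum over the 2^D vertices of the enclosing hypercube, with weights that are products of per-axis linear weights. A minimal sketch (hypothetical names, not the PR's kernel code):

#include <array>

// vertexValues holds the 2^DIMS vertex values of one hypercube; bit d of the
// vertex index selects the low/high face along axis d. t[d] in [0,1] is the
// normalized coordinate of the query point along axis d.
template< int DIMS >
double multilinearInterpolate( std::array< double, ( 1 << DIMS ) > const & vertexValues,
                               std::array< double, DIMS > const & t )
{
  double result = 0.0;
  for( int v = 0; v < ( 1 << DIMS ); ++v )  // loop over hypercube vertices
  {
    double weight = 1.0;
    for( int d = 0; d < DIMS; ++d )
    {
      weight *= ( ( v >> d ) & 1 ) ? t[ d ] : ( 1.0 - t[ d ] );
    }
    result += weight * vertexValues[ v ];
  }
  return result;
}

The adaptive variant described in this PR builds on this by generating missing vertex values on demand via the exact Python evaluator and caching them, rather than precomputing the full table.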

@av-novikov av-novikov changed the title Adaptive multilinear interpolation for OBL solver feat: Adaptive multilinear interpolation for OBL solver Oct 9, 2024
@av-novikov av-novikov marked this pull request as draft October 9, 2024 09:32
@av-novikov av-novikov self-assigned this Oct 9, 2024
@sframba sframba self-requested a review October 18, 2024 07:45
@sframba (Contributor) commented Oct 18, 2024

Thanks @av-novikov for this PR! I think the Python scripts shouldn't go in a subfolder of compositionalMultiphaseFlow, but in a higher-level folder (maybe a new folder scripts/pygeos?). We'll be adding the remaining Makutu scripts in the (possibly near) future, and they'll be used for geophysics and potentially geomechanics workflows. All of these base Python classes and utilities need to be common to those other solvers as well. @CusiniM @rrsettgast what's your opinion?

@victorapm (Contributor) commented:

@av-novikov the following patch should fix the issues with MGR

diff --git a/src/coreComponents/linearAlgebra/interfaces/hypre/mgrStrategies/ReactiveCompositionalMultiphaseOBL.hpp b/src/coreComponents/linearAlgebra/interfaces/hypre/mgrStrategies/ReactiveCompositionalMultiphaseOBL.hpp
index ee2561ad4..172aa06e4 100644
--- a/src/coreComponents/linearAlgebra/interfaces/hypre/mgrStrategies/ReactiveCompositionalMultiphaseOBL.hpp
+++ b/src/coreComponents/linearAlgebra/interfaces/hypre/mgrStrategies/ReactiveCompositionalMultiphaseOBL.hpp
@@ -68,8 +68,8 @@ public:
 
     m_levelFRelaxType[0]          = MGRFRelaxationType::none;
     m_levelInterpType[0]          = MGRInterpolationType::injection;
-    m_levelRestrictType[0]        = MGRRestrictionType::injection;
-    m_levelCoarseGridMethod[0]    = MGRCoarseGridMethod::cprLikeBlockDiag;
+    m_levelRestrictType[0]        = MGRRestrictionType::blockColLumped; // True-IMPES
+    m_levelCoarseGridMethod[0]    = MGRCoarseGridMethod::galerkin;
     m_levelGlobalSmootherType[0]  = MGRGlobalSmootherType::ilu0;
     m_levelGlobalSmootherIters[0] = 1;
   }

@av-novikov (Contributor, Author) commented:

> @av-novikov the following patch should fix the issues with MGR (patch quoted above)

Dear @victorapm,

Thank you very much for your patch!
I tried to run with

m_levelRestrictType[0]     = MGRRestrictionType::blockColLumped;
m_levelCoarseGridMethod[0] = MGRCoarseGridMethod::galerkin;

and it helped, but did not fully solve the problem.

The 1D homogeneous model with injection of carbonated water now converges, although it spends many linear iterations (LI) per Newton iteration (NI): 10-60, average 45. The similar 2D 100x100 heterogeneous setup starts at 15-40 LI/NI, but then rises relatively quickly to the maximum of 200 LI/NI; in the end it took too much time and I decided to stop the simulation. Previously, neither the 1D nor the 2D setup converged at all, i.e. 200 LI were not enough to reduce the error.

1D setup, log without your patch - output_1d_verbose.txt
1D setup, log with your patch - output_victor_patch_1d_verbose.log
2D setup, log without your patch - output_2d_verbose.txt
2D setup, log with your patch - output_victor_patch_2d_verbose.log

I can share the hypre output with matrices if needed. Please let me know how I should proceed.

Best,
Aleks

@av-novikov (Contributor, Author) commented:

Dear @victorapm,

Indeed, merging in the develop branch with the recently added scaling option, together with the fix you suggested, improved the linear solver performance. It now takes far fewer iterations, and both the 1D homogeneous and the 2D heterogeneous setups converge smoothly. I have attached logs for both.

output_merge_1d.log
output_merge_2d.log

Thank you very much!
