DFT interface and implementations #135
This can only be an improvement as the garbage return value was being ignored by all callers.
I snuck in a few improvements to this PR but I'll stop now. When you get a chance @HaiwangYu please review. Next I'll replace existing hard-wired DFTs with

I'll close this PR in favor of PR #136 on branch
This PR brings a new `IDFT` interface for performing FFTs and two implementations: FFTW3 and Torch. Both are thread-safe to call from graph node execute operators.

For the Torch implementation, a semaphore may be used to limit the number of concurrent calls into Torch. The semaphore itself is a "service" type component as it should be shared by all things that call into Torch. The default limit is 1 call. When Torch runs on GPU this limit may be increased based on available RAM (which needs prior understanding of RAM needs and what the GPU provides). Note that when Torch runs on CPU it may use additional threads (up to 4?) beyond what TBB is allowed.
Some tests:

Low level `IDFT` implementation:

The `IDFT` interface accepts a lowest-common-denominator of pointers to array data. A higher-level API is provided as the function family `WireCell::Aux::fwd()` and `inv()` in `WireCellAux/DftTools.h`. These functions take an `IDFT::pointer` and provide a data interface based on `std::vector` or `Eigen::Array`, handling all the annoying array storage-order details. This high-level API can be exercised with:
This test accepts the same command line variants as does `test_idft`.

This PR should not break any "core" code, but the `TorchService` configuration interface is reworked a bit to use the new `ISemaphore` via the new `TorchContext` mix-in class, which is shared by the `Pytorch::DFT` implementation. Note, this also removes the unwanted direct dependency on CUDA found in reviewing #131 (thanks!). The rest of the cleanups identified still need to be done....
With this PR, there is no actual code using `IDFT` outside of the tests. A subsequent PR will start to replace the hard-wired use of DFT functions from `Waveform` and `Array` with their `IDFT` equivalents.

Future work may be expected to implement cuFFT and/or ZIO based `IDFT` backends.