[LoadStoreOpToLLVM] Transpose 2d load. #4870
base: main
Conversation
Pull Request Overview
This draft PR implements transpose 2D block load functionality to efficiently load column-major matrices from global memory on Intel Xe+ GPUs. The implementation introduces a transpose operation when the register layout's fast-changing dimension differs from that of the memory layout, using d32-typed matrices with bitcast operations for the transformation (see the sketch after the change list below).
- Added support for transpose 2D block IO operations with transpose parameter
- Enhanced block IO tile size calculation to handle transpose scenarios
- Implemented new test coverage for transpose and column major load operations
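To make the d32 approach concrete, here is a minimal NumPy sketch of the idea (an illustration only, not this PR's code path): 16-bit elements are viewed as packed 32-bit values, the matrix is transposed at d32 granularity, and a bitcast recovers the 16-bit elements. The packed pairs stay adjacent after the d32 transpose, which is why additional in-register reordering is still needed.

```python
import numpy as np

# Illustrative only: transpose a 16-bit matrix at d32 granularity.
elem_size_in_bits = 16                      # e.g. fp16 elements
packed_elem_size_in_bits = 32               # the HW transpose works on 32-bit elements
num_packed_vals = packed_elem_size_in_bits // elem_size_in_bits  # -> 2

rows, cols = 4, 8                           # cols must be a multiple of num_packed_vals
a = np.arange(rows * cols, dtype=np.float16).reshape(rows, cols)

packed = a.view(np.uint32)                  # (4, 4): two fp16 values per 32-bit element
transposed_d32 = packed.T.copy()            # transpose at d32 granularity -> (4, 4)
unpacked = transposed_d32.view(np.float16)  # bitcast back to fp16 -> (4, 8)

# unpacked is not a full element-wise transpose of `a`: each row still holds
# the original fp16 pairs side by side, so the remaining shuffle has to
# happen in registers.
print(unpacked.shape)                       # (cols // num_packed_vals, rows * num_packed_vals)
```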
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| LoadStoreOpToLLVM.cpp | Major refactoring of 2D block load implementation to support transpose operations and simplified layout handling |
| tensor-pointer-load-block-2d.mlir | Updated test expectations for new block load configurations and tile sizes |
| test_block_store.py | Added transpose parameter and column major test cases for block operations |
```cpp
packedElemSizeInBits = 32;
numPackedVals = packedElemSizeInBits / elemSizeInBits;

// Improve this. The current 2D block load only transposes the matrix at
```
The improvements will be added in another PR to keep the changes in this PR minimal.
Pull Request Overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
@whitneywhtsang @etiotto, the transpose loading is ready for review.
Signed-off-by: Lu,Chengjun <chengjun.lu@intel.com>
Can you fix the typo in the image of the PR description or remove it?
```diff
- return axisInfo ? axisInfo->getStride(dim) : -1;
+ if (axisInfo) {
+   const SmallVector<int64_t> &stride = axisInfo->getStride();
+   if (dim < stride.size()) {
```
why would we call getStride with dim more than the size of stride?
The goal is to use the transpose 2D block IO to load a column-major matrix from global memory. (The column-major case generalizes to any case where the register layout's fast-changing dimension is not the same as the fast-changing dimension in global memory.)
The transposing operation is recursive:

To use the transpose 2D block IO to load a column-major matrix on Xe+:
- The code currently implements the functionality only for layouts with limitations.
- It is not the most efficient implementation for now.
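For context, below is a minimal Triton sketch (a hypothetical test, not taken from this PR) of the kind of column-major load this path targets. The kernel name, shapes, and the `device="xpu"` choice are assumptions; the block pointer's `order=(0, 1)` marks dim 0 as the fastest-changing dimension in memory, which is exactly the mismatch with the row-major register layout that the transpose 2D block load is meant to serve.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def copy_column_major(in_ptr, out_ptr, M: tl.constexpr, N: tl.constexpr):
    # Hypothetical example; not part of the PR's test suite.
    # Column-major input: strides (1, M) and order (0, 1) mark dim 0 as the
    # fastest-changing dimension in memory.
    in_block = tl.make_block_ptr(base=in_ptr, shape=(M, N), strides=(1, M),
                                 offsets=(0, 0), block_shape=(M, N), order=(0, 1))
    # Row-major output of the same logical shape.
    out_block = tl.make_block_ptr(base=out_ptr, shape=(M, N), strides=(N, 1),
                                  offsets=(0, 0), block_shape=(M, N), order=(1, 0))
    tl.store(out_block, tl.load(in_block))


M, N = 32, 32
# .t().contiguous().t() yields an (M, N) tensor with column-major strides (1, M).
a = torch.randn(M, N, device="xpu", dtype=torch.float16).t().contiguous().t()
out = torch.empty(M, N, device="xpu", dtype=torch.float16)
copy_column_major[(1,)](a, out, M, N)
torch.testing.assert_close(out, a)
```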