[GPU] Canonicalize 3d shape for onednn conv/deconv post operations (#32391)
### Description of the issue (symptom, root cause, how it was resolved)
- The oneDNN 3D convolution post-op memory descriptor must be canonicalized to 4D when the convolution output uses a blocked format; otherwise the post-op descriptor's rank does not match what oneDNN expects (see the sketch below).
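For context, canonicalizing a 3D shape to 4D typically means inserting a unit spatial dimension so the descriptor rank matches oneDNN's 4D (NCHW-style) expectations. The following is a minimal sketch of that idea only, not the actual fix in `program_node.cpp`; the helper name and the chosen insertion position are assumptions for illustration:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: widen a 3D shape {N, C, W} to the 4D form {N, C, 1, W}
// by inserting a unit spatial dimension, so a post-op mem_desc built from it
// has the rank oneDNN expects when the conv output is blocked.
// (Sketch only; the real fix lives in src/plugins/intel_gpu/src/graph/program_node.cpp.)
std::vector<int64_t> canonicalize_to_4d(const std::vector<int64_t>& shape3d) {
    if (shape3d.size() != 3)
        return shape3d;  // already 4D (or an unexpected rank): leave untouched
    return {shape3d[0], shape3d[1], 1, shape3d[2]};
}

int main() {
    // {batch, channels, width} -> {batch, channels, 1, width}
    auto shape4d = canonicalize_to_4d({1, 50, 29});  // -> {1, 50, 1, 29}
    return shape4d.size() == 4 ? 0 : 1;
}
```

Inserting the unit dimension ahead of the innermost spatial axis keeps W contiguous, which is the usual convention when widening NCW-style shapes toward NCHW; the exact position used in the real fix may differ.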
#### The code and line that caused this issue (if it is not changed directly)
- src/plugins/intel_gpu/src/graph/program_node.cpp
#### Reproduction steps and snapshot (if applicable; do not attach for customer models)
- Reproduction steps and the model are attached in the ticket.
```
# convert the IR: embedding_model.onnx -> FP32 -> INT8
$ ovc embedding_model.onnx --output_model model_FP32/embedding_model.xml --input "input[?,50,29]" --compress_to_fp16 False
$ python int8_quantization.py
# run the test
$ python openvino_script.py --device GPU.1 --model ov_onnx_model/int8/model_INT8.xml --batch 1
```
#### Problematic graph
The issue does not depend on a specific graph pattern.
#### Checklist
- [ ] Is it a proper fix? (not a workaround)
- [x] Did you include a test case for this fix, if necessary?
- [x] Did you review existing tests that could be extended to cover this scenario? Which test did you review?
  - No existing test covers this issue.
### Tickets:
- 174583