Actions: NVIDIA/TensorRT

Blossom-CI

4,355 workflow runs

Filter by Event, Status, Branch, or Actor

The KL divergence calculation is very slow and is not optimized for acceleration
Blossom-CI #6892: Issue comment #4023 (comment) created by poweiw
February 11, 2025 22:22 4s

PTQ support for ViT models
Blossom-CI #6891: Issue comment #4002 (comment) created by poweiw
February 11, 2025 22:18 4s

INT8 Quantization of a custom model failed
Blossom-CI #6890: Issue comment #4037 (comment) created by poweiw
February 11, 2025 22:14 5s

How can I transfer t5 decoder_model_merged.onnx to tensorrt
Blossom-CI #6889: Issue comment #4032 (comment) created by poweiw
February 11, 2025 22:01 5s

Question: Disable Optimizations for TensorRT
Blossom-CI #6888: Issue comment #4075 (comment) created by poweiw
February 11, 2025 21:59 4s

How to accelerate GCN using torch-geometric with TensorRT?
Blossom-CI #6887: Issue comment #4108 (comment) created by poweiw
February 11, 2025 21:55 4s

PTQ support for ViT models
Blossom-CI #6886: Issue comment #4002 (comment) created by poweiw
February 11, 2025 21:34 5s

[New] Discord channel for triton-inference-server, tensorrt
Blossom-CI #6885: Issue comment #4012 (comment) created by brnguyen2
February 11, 2025 21:19 5s

UINT8-to-FLOAT cast after transpose breaks the graph.
Blossom-CI #6883: Issue comment #3985 (comment) created by kevinch-nv
February 11, 2025 20:51 4s

FP16 model of TensorRT 10.0 are incorrect when running on GPU T4
Blossom-CI #6882: Issue comment #4022 (comment) created by kevinch-nv
February 11, 2025 20:49 5s

PTQ support for ViT models
Blossom-CI #6880: Issue comment #4002 (comment) created by ruro
February 11, 2025 20:08 5s

TensorRT 10 slower than TensorRT 8.6 for models with Instance Normalization layers
Blossom-CI #6877: Issue comment #3962 (comment) created by brnguyen2
February 11, 2025 16:59 5s

Inference slower on A40 then A30
Blossom-CI #6875: Issue comment #4042 (comment) created by brnguyen2
February 11, 2025 16:28 5s

Request support for Ubuntu24.04
Blossom-CI #6874: Issue comment #4034 (comment) created by brnguyen2
February 11, 2025 16:15 5s

XXX failure of TensorRT X.Y when running XXX on GPU XXX
Blossom-CI #6871: Issue comment #3693 (comment) created by brnguyen2
February 11, 2025 15:40 6s

TensorRT8.6.1.6 Inference cost too much time
Blossom-CI #6870: Issue comment #3993 (comment) created by brnguyen2
February 11, 2025 15:37 9s
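
A run history like the one above can also be pulled programmatically from the GitHub REST API, which accepts the same event, status, branch, and actor filters as the web UI. Below is a minimal Python sketch, assuming the requests package is installed and that the workflow file is named blossom-ci.yml (an assumption; adjust the file name if the repository uses a different one). Unauthenticated requests are subject to GitHub's rate limits.

# Sketch: list recent Blossom-CI runs for NVIDIA/TensorRT via the GitHub REST API.
# Assumption: the workflow file is blossom-ci.yml; change it if it differs.
import requests

API = "https://api.github.com/repos/NVIDIA/TensorRT/actions/workflows/blossom-ci.yml/runs"

resp = requests.get(
    API,
    # Same filters the web UI exposes: event, status, branch, actor.
    params={"event": "issue_comment", "per_page": 20},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()

for run in resp.json()["workflow_runs"]:
    actor = (run.get("triggering_actor") or {}).get("login", "unknown")
    print(f'Blossom-CI #{run["run_number"]}: {run["display_title"]} '
          f'({run["event"]}) by {actor} at {run["created_at"]}')

The same listing is available from the /repos/NVIDIA/TensorRT/actions/runs endpoint without naming a workflow, if runs from all workflows are wanted.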