
<img src="./imgs/llmc.png" alt="llmc" width="75%" />

+<img src="./imgs/llmc+.png" alt="llmc" width="75%" />
+
[](https://opensource.org/licenses/Apache-2.0)
[](https://deepwiki.com/ModelTC/LightCompress)
[](https://arxiv.org/abs/2405.06001)
+[](https://arxiv.org/abs/2508.09981)
[](https://discord.com/invite/NfJzbkK3jY)
[](http://qm.qq.com/cgi-bin/qm/qr?_wv=1027&k=I9IGPWWj8uuRXWH3_ELWjouf6gkIMgUl&authKey=GA3WbFAsm90ePJf%2FCbc7ZyXXq4ShQktlBaLxgqS5yuSPAsr3%2BDKMRdosUiLYoilO&noverify=0&group_code=526192592)
[](https://llmc-en.readthedocs.io/en/latest/)
@@ -156,6 +159,27 @@ You can add your own model type referring to files under `llmc/models/*.py`.

## 💡 Supported Algorithm List

+### Token Reduction
+
+- ✅ [ToMe](https://arxiv.org/abs/2210.09461)
+- ✅ [FastV](https://arxiv.org/abs/2403.06764)
+- ✅ [SparseVLM](https://arxiv.org/abs/2410.04417)
+- ✅ [VisionZip](https://arxiv.org/abs/2412.04467)
+
+<details>
+<summary>More Supported Algorithms</summary>
+
+- ✅ [PyramidDrop](https://arxiv.org/abs/2410.17247)
+- ✅ [VisPruner](https://arxiv.org/abs/2412.01818)
+- ✅ [MustDrop](https://arxiv.org/abs/2411.10803)
+- ✅ [DART](https://arxiv.org/abs/2502.11494)
+- ✅ [DyCoke](https://arxiv.org/abs/2411.15024)
+- ✅ [PruneVid](https://arxiv.org/abs/2412.16117)
+- ✅ [FastVID](https://arxiv.org/abs/2503.11187)
+- ✅ [HoliTom](https://arxiv.org/abs/2505.21334)
+
+</details>
+
### Quantization

- ✅ Naive
@@ -223,6 +247,15 @@ We develop our code referring to the following repos:

If you find our toolkit or research paper useful or relevant to your research, please kindly cite our work:

+```
+@article{lv2025llmc+,
+  title={LLMC+: Benchmarking Vision-Language Model Compression with a Plug-and-play Toolkit},
+  author={Lv, Chengtao and Zhang, Bilang and Yong, Yang and Gong, Ruihao and Huang, Yushi and Gu, Shiqiao and Wu, Jiajun and Shi, Yumeng and Guo, Jinyang and Wang, Wenya},
+  journal={arXiv preprint arXiv:2508.09981},
+  year={2025}
+}
+```
+
```
@inproceedings{DBLP:conf/emnlp/GongYGHLZT024,
  author={Ruihao Gong and Yang Yong and Shiqiao Gu and Yushi Huang and Chengtao Lv and Yunchen Zhang and Dacheng Tao and Xianglong Liu},