Please see the detailed running steps in the subfolders.
If you make use of our work, please cite our paper:
@InProceedings{pmlr-v162-liu22i,
  title     = {Rethinking Attention-Model Explainability through Faithfulness Violation Test},
  author    = {Liu, Yibing and Li, Haoliang and Guo, Yangyang and Kong, Chenqi and Li, Jing and Wang, Shiqi},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {13807--13824},
  year      = {2022},
  publisher = {PMLR},
}
This work builds on the implementations of the ICCV 2021 work Generic Attention-Model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, and the ACL 2020 work Towards Transparent and Explainable Attention Models. Many thanks to the authors for their generous sharing.