<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
<title>Bibliography</title>
</head>
<body>
<div class="csl-bib-body" style="line-height: 1.35; margin-left: 2em; text-indent:-2em;">
<div class="csl-entry" style="margin-bottom: 1em;">Cui, Tianyu, Shiyu Ma, Ziang Chen, Tong Xiao, Shimin Tao, Yilun Liu, Shenglin Zhang, Duoming Lin, Changchang Liu, Yuzhe Cai, Weibin Meng, Yongqian Sun, and Dan Pei. 2024. ‘LogEval: A Comprehensive Benchmark Suite for Large Language Models in Log Analysis’. doi: <a href="https://doi.org/10.48550/arXiv.2407.01896">10.48550/arXiv.2407.01896</a>.</div>
<span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_id=info%3Adoi%2F10.48550%2FarXiv.2407.01896&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Adc&rft.type=preprint&rft.title=LogEval%3A%20A%20Comprehensive%20Benchmark%20Suite%20for%20Large%20Language%20Models%20In%20Log%20Analysis&rft.description=Log%20analysis%20is%20crucial%20for%20ensuring%20the%20orderly%20and%20stable%20operation%20of%20information%20systems%2C%20particularly%20in%20the%20field%20of%20Artificial%20Intelligence%20for%20IT%20Operations%20(AIOps).%20Large%20Language%20Models%20(LLMs)%20have%20demonstrated%20significant%20potential%20in%20natural%20language%20processing%20tasks.%20In%20the%20AIOps%20domain%2C%20they%20excel%20in%20tasks%20such%20as%20anomaly%20detection%2C%20root%20cause%20analysis%20of%20faults%2C%20operations%20and%20maintenance%20script%20generation%2C%20and%20alert%20information%20summarization.%20However%2C%20the%20performance%20of%20current%20LLMs%20in%20log%20analysis%20tasks%20remains%20inadequately%20validated.%20To%20address%20this%20gap%2C%20we%20introduce%20LogEval%2C%20a%20comprehensive%20benchmark%20suite%20designed%20to%20evaluate%20the%20capabilities%20of%20LLMs%20in%20various%20log%20analysis%20tasks%20for%20the%20first%20time.%20This%20benchmark%20covers%20tasks%20such%20as%20log%20parsing%2C%20log%20anomaly%20detection%2C%20log%20fault%20diagnosis%2C%20and%20log%20summarization.%20LogEval%20evaluates%20each%20task%20using%204%2C000%20publicly%20available%20log%20data%20entries%20and%20employs%2015%20different%20prompts%20for%20each%20task%20to%20ensure%20a%20thorough%20and%20fair%20assessment.%20By%20rigorously%20evaluating%20leading%20LLMs%2C%20we%20demonstrate%20the%20impact%20of%20various%20LLM%20technologies%20on%20log%20analysis%20performance%2C%20focusing%20on%20aspects%20such%20as%20self-consistency%20and%20few-shot%20contextual%20learning.%20We%20also%20discuss%20findings%20related%20to%20model%20quantification%2C%20Chinese-English%20question-answering%20evaluation%2C%20and%20prompt%20engineering.%20These%20findings%20provide%20insights%20into%20the%20strengths%20and%20weaknesses%20of%20LLMs%20in%20multilingual%20environments%20and%20the%20effectiveness%20of%20different%20prompt%20strategies.%20Various%20evaluation%20methods%20are%20employed%20for%20different%20tasks%20to%20accurately%20measure%20the%20performance%20of%20LLMs%20in%20log%20analysis%2C%20ensuring%20a%20comprehensive%20assessment.%20The%20insights%20gained%20from%20LogEvals%20evaluation%20reveal%20the%20strengths%20and%20limitations%20of%20LLMs%20in%20log%20analysis%20tasks%2C%20providing%20valuable%20guidance%20for%20researchers%20and%20practitioners.&rft.identifier=urn%3Adoi%3A10.48550%2FarXiv.2407.01896&rft.aufirst=Tianyu&rft.aulast=Cui&rft.au=Tianyu%20Cui&rft.au=Shiyu%20Ma&rft.au=Ziang%20Chen&rft.au=Tong%20Xiao&rft.au=Shimin%20Tao&rft.au=Yilun%20Liu&rft.au=Shenglin%20Zhang&rft.au=Duoming%20Lin&rft.au=Changchang%20Liu&rft.au=Yuzhe%20Cai&rft.au=Weibin%20Meng&rft.au=Yongqian%20Sun&rft.au=Dan%20Pei&rft.date=2024-07-02&rft.language=en"></span>
<div class="csl-entry" style="margin-bottom: 1em;">Li, Ming, Pei Chen, Chenguang Wang, Hongyu Zhao, Yijun Liang, Yupeng Hou, Fuxiao Liu, and Tianyi Zhou. 2024. ‘Mosaic-IT: Free Compositional Data Augmentation Improves Instruction Tuning’. <a href="http://arxiv.org/abs/2405.13326">arXiv:2405.13326</a>.</div>
<span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Adc&rft.type=preprint&rft.title=Mosaic-IT%3A%20Free%20Compositional%20Data%20Augmentation%20Improves%20Instruction%20Tuning&rft.description=Finetuning%20large%20language%20models%20with%20a%20variety%20of%20instruction-response%20pairs%20has%20enhanced%20their%20capability%20to%20understand%20and%20follow%20instructions.%20Current%20instruction%20tuning%20primarily%20relies%20on%20teacher%20models%20or%20human%20intervention%20to%20generate%20and%20refine%20the%20instructions%20and%20responses%20for%20training%2C%20which%20are%20costly%2C%20non-sustainable%2C%20and%20may%20lack%20diversity.%20In%20this%20paper%2C%20we%20introduce%20Mosaic%20Instruction%20Tuning%20(Mosaic-IT)%2C%20a%20human%2Fmodel-free%20compositional%20data%20augmentation%20method%20that%20can%20efficiently%20create%20rich%20and%20diverse%20augmentations%20from%20existing%20instruction%20tuning%20data%20to%20enhance%20the%20LLMs.%20Mosaic-IT%20randomly%20concatenates%20multiple%20instruction%20data%20into%20one%20and%20trains%20the%20model%20to%20produce%20the%20corresponding%20responses%20with%20predefined%20higher-level%20meta-instructions%20to%20strengthen%20its%20multi-step%20instruction-following%20and%20format-following%20skills.%20Our%20extensive%20evaluations%20demonstrate%20a%20superior%20performance%20and%20training%20efficiency%20of%20Mosaic-IT%2C%20which%20achieves%20consistent%20performance%20improvements%20over%20various%20benchmarks%20and%20a%20%2480%5C%25%24%20reduction%20in%20training%20costs%20compared%20with%20original%20instruction%20tuning.%20Our%20codes%20and%20data%20are%20available%20at%20https%3A%2F%2Fgithub.com%2Ftianyi-lab%2FMosaic-IT.&rft.identifier=http%3A%2F%2Farxiv.org%2Fabs%2F2405.13326&rft.aufirst=Ming&rft.aulast=Li&rft.au=Ming%20Li&rft.au=Pei%20Chen&rft.au=Chenguang%20Wang&rft.au=Hongyu%20Zhao&rft.au=Yijun%20Liang&rft.au=Yupeng%20Hou&rft.au=Fuxiao%20Liu&rft.au=Tianyi%20Zhou&rft.date=2024-10-07&rft.language=en"></span>
<div class="csl-entry" style="margin-bottom: 1em;">Liu, Yilun, Yuhe Ji, Shimin Tao, Minggui He, Weibin Meng, Shenglin Zhang, Yongqian Sun, Yuming Xie, Boxing Chen, and Hao Yang. 2024. ‘LogLM: From Task-Based to Instruction-Based Automated Log Analysis’. doi: <a href="https://doi.org/10.48550/arXiv.2410.09352">10.48550/arXiv.2410.09352</a>.</div>
<span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_id=info%3Adoi%2F10.48550%2FarXiv.2410.09352&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Adc&rft.type=preprint&rft.title=LogLM%3A%20From%20Task-based%20to%20Instruction-based%20Automated%20Log%20Analysis&rft.description=Automatic%20log%20analysis%20is%20essential%20for%20the%20efficient%20Operation%20and%20Maintenance%20(O%26M)%20of%20software%20systems%2C%20providing%20critical%20insights%20into%20system%20behaviors.%20However%2C%20existing%20approaches%20mostly%20treat%20log%20analysis%20as%20training%20a%20model%20to%20perform%20an%20isolated%20task%2C%20using%20task-specific%20log-label%20pairs.%20These%20task-based%20approaches%20are%20inflexible%20in%20generalizing%20to%20complex%20scenarios%2C%20depend%20on%20task-specific%20training%20data%2C%20and%20cost%20significantly%20when%20deploying%20multiple%20models.%20In%20this%20paper%2C%20we%20propose%20an%20instruction-based%20training%20approach%20that%20transforms%20log-label%20pairs%20from%20multiple%20tasks%20and%20domains%20into%20a%20unified%20format%20of%20instruction-response%20pairs.%20Our%20trained%20model%2C%20LogLM%2C%20can%20follow%20complex%20user%20instructions%20and%20generalize%20better%20across%20different%20tasks%2C%20thereby%20increasing%20flexibility%20and%20reducing%20the%20dependence%20on%20task-specific%20training%20data.%20By%20integrating%20major%20log%20analysis%20tasks%20into%20a%20single%20model%2C%20our%20approach%20also%20relieves%20model%20deployment%20burden.%20Experimentally%2C%20LogLM%20outperforms%20existing%20approaches%20across%20five%20log%20analysis%20capabilities%2C%20and%20exhibits%20strong%20generalization%20abilities%20on%20complex%20instructions%20and%20unseen%20tasks.&rft.identifier=urn%3Adoi%3A10.48550%2FarXiv.2410.09352&rft.aufirst=Yilun&rft.aulast=Liu&rft.au=Yilun%20Liu&rft.au=Yuhe%20Ji&rft.au=Shimin%20Tao&rft.au=Minggui%20He&rft.au=Weibin%20Meng&rft.au=Shenglin%20Zhang&rft.au=Yongqian%20Sun&rft.au=Yuming%20Xie&rft.au=Boxing%20Chen&rft.au=Hao%20Yang&rft.date=2024-10-12&rft.language=en"></span>
<div class="csl-entry" style="margin-bottom: 1em;">Liu, Yilun, Shimin Tao, Weibin Meng, Jingyu Wang, Hao Yang, and Yanfei Jiang. 2024. ‘Multi-Source Log Parsing With Pre-Trained Domain Classifier’. <i>IEEE Transactions on Network and Service Management</i> 21(3):2651–63. doi: <a href="https://doi.org/10.1109/TNSM.2023.3329144">10.1109/TNSM.2023.3329144</a>.</div>
<span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_id=info%3Adoi%2F10.1109%2FTNSM.2023.3329144&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Multi-Source%20Log%20Parsing%20With%20Pre-Trained%20Domain%20Classifier&rft.jtitle=IEEE%20Transactions%20on%20Network%20and%20Service%20Management&rft.stitle=IEEE%20Trans.%20Netw.%20Serv.%20Manage.&rft.volume=21&rft.issue=3&rft.aufirst=Yilun&rft.aulast=Liu&rft.au=Yilun%20Liu&rft.au=Shimin%20Tao&rft.au=Weibin%20Meng&rft.au=Jingyu%20Wang&rft.au=Hao%20Yang&rft.au=Yanfei%20Jiang&rft.date=2024-06&rft.pages=2651-2663&rft.spage=2651&rft.epage=2663&rft.issn=1932-4537%2C%202373-7379&rft.language=en"></span>
<div class="csl-entry">Liu, Yilun, Shimin Tao, Weibin Meng, Feiyu Yao, Xiaofeng Zhao, and Hao Yang. 2024. ‘LogPrompt: Prompt Engineering Towards Zero-Shot and Interpretable Log Analysis’. Pp. 364–65 in <i>Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings</i>. Lisbon, Portugal: ACM. doi: <a href="https://doi.org/10.1145/3639478.3643108">10.1145/3639478.3643108</a>.</div>
<span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_id=info%3Adoi%2F10.1145%2F3639478.3643108&rft_id=urn%3Aisbn%3A979-8-4007-0502-1&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&rft.genre=proceeding&rft.atitle=LogPrompt%3A%20Prompt%20Engineering%20Towards%20Zero-Shot%20and%20Interpretable%20Log%20Analysis&rft.btitle=Proceedings%20of%20the%202024%20IEEE%2FACM%2046th%20International%20Conference%20on%20Software%20Engineering%3A%20Companion%20Proceedings&rft.place=Lisbon%20Portugal&rft.publisher=ACM&rft.aufirst=Yilun&rft.aulast=Liu&rft.au=Yilun%20Liu&rft.au=Shimin%20Tao&rft.au=Weibin%20Meng&rft.au=Feiyu%20Yao&rft.au=Xiaofeng%20Zhao&rft.au=Hao%20Yang&rft.date=2024-04-14&rft.pages=364-365&rft.spage=364&rft.epage=365&rft.isbn=979-8-4007-0502-1&rft.language=en"></span>
</div></body>
</html>