This repository is for our EMNLP Findings '25 paper, A Survey on LLMs for Story Generation.
Authors: Maria Teleki, Vedangi Bengali*, Xiangjue Dong*, Sai Tejas Janjur*, Haoran Liu*, Tian Liu, Cong Wang, Ting Liu, Yin Zhang, Frank Shipman, James Caverlee (Texas A&M University)
We aim to serve the community with this repository -- if you have systems to add, please send us a pull request!
This repository hosts the survey paper "A Survey on LLMs for Story Generation", which presents a comprehensive taxonomy, comparison, and future outlook on the use of Large Language Models (LLMs) for story generation.
Key contributions:
- Proposes a novel taxonomy for LLMs in story generation:
  - Independent Generation (LLMs as primary authors)
  - Author Assistance (LLMs supporting human authors)
- Provides systematic comparisons of datasets, evaluation methods, and LLM usage.
- Outlines future research directions in multimodal storytelling, inference-time control, benchmarking, and story-specific metrics.
Independent Generation: LLMs act as the primary author.
- Single-Agent Generation
- Multi-Agent Collaboration

Author Assistance: LLMs act as assistants to human authors.
- Adaptive Stories
  - StoryVerse [FDG '24] [Paper]
- Static Stories
- Multimodal Storytelling: Integration of VLMs and image-text generation.
- Inference-Time Constraints: Beam search & rule-based sampling for coherence.
- Benchmarking: Lack of standardized benchmarks across LLMs.
- Story-Specific Metrics: New metrics like Automatic Story Evaluation (ASE).
- Ethical Concerns: Authorship, originality, professional impact, transparency.
Surveyed works span ACL, EMNLP, NAACL, COLING, TACL, CHI, CSCW, and other top venues (2023–2025). The focus is on non-data-centric storytelling with post-2022 (GPT-4-era) LLMs.
If you use this survey, please cite:
@inproceedings{teleki2025survey,
  title={A Survey on LLMs for Story Generation},
  author={Teleki, Maria and Bengali, Vedangi and Dong, Xiangjue and Janjur, Sai Tejas and Liu, Haoran and Liu, Tian and Wang, Cong and Liu, Ting and Zhang, Yin and Shipman, Frank and Caverlee, James},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2025},
  year={2025}
}
