Posttraining Library #1771

@MarkLiLabs

Description

Posttraining Library Support

Summary

I understand that torchtune is being phased out and the team announced in July 2025 that they are developing a new product in a new repo for end-to-end post-training with scale. It's now been several months since that announcement. Could you share an update on when this new library will be released?

Motivation

In [Issue #2883](meta-pytorch/torchtune#2883), the torchtune team announced plans to develop a new product focused on end-to-end post-training with scale. That announcement was made several months ago in July 2025, and torchtune is now in maintenance mode (receiving only critical bug fixes and security patches during 2025).

Questions

  • When will the new post-training library be released? It has been several months since the July announcement; can you share a timeline or expected release date?
  • Will the new library be part of torchtitan or a separate repository? The announcement mentioned a "new repo," but given torchtitan's focus on production-grade training, would it make sense to integrate?
  • What's the relationship between the new library and torchtitan? Will they share infrastructure, or are they separate projects?
  • Which post-training techniques will be prioritized (e.g., SFT, RLHF/DPO, continued pretraining)?
  • Is there a beta or early access program? Many in the community are eager to start testing and contributing.

Why I'm asking here (instead of torchtune)

I'm posting this question in the torchtitan repo rather than torchtune because:

  1. Architectural excellence: The torchtitan team has demonstrated exceptional work in building a production-grade, PyTorch-native training system with modular composability and scale as first-class citizens; these are exactly the qualities mentioned in the torchtune transition announcement.

  2. Natural evolution: Given that torchtitan already handles pretraining at scale with features like 3D parallelism, distributed checkpointing, and native PyTorch integration, it seems like a natural foundation or model for a post-training library with similar scale requirements.

  3. Team expertise: The torchtitan team's deep expertise in distributed training, parallelism techniques, and PyTorch internals makes them well positioned to build, or be closely involved with, the successor to torchtune.

  4. Unified vision: Both the torchtitan philosophy and the announced new post-training library share similar goals: hackable code, minimal abstraction, scale-first design, and native PyTorch.

Additional Context

With torchtune entering maintenance mode and no longer accepting new features, many practitioners are in a transitional period, waiting for the new post-training solution. Understanding the timeline and scope of the new library would help the community plan their training workflows accordingly.

Thank you for your excellent work on torchtitan and the broader PyTorch training ecosystem; we're excited to see what's coming!
