
PyTorch 2.7 Released: Compiled Training by Default and Apple Silicon Distributed

Meta has released PyTorch 2.7, with significant quality-of-life improvements for practitioners and new capabilities for Apple Silicon users.

torch.compile Default for Training

torch.compile is now enabled by default in the new Trainer APIs and on supported nn.Module training paths. Typical models see 20-40% higher training throughput with zero code changes, and PyTorch falls back to eager mode automatically when incompatible ops are detected.

Distributed on Apple Silicon

PyTorch now supports multi-machine distributed training on Apple Silicon (M2 Ultra and later) over Thunderbolt 5 interconnects, via an NCCL-compatible MLX backend. Small research labs can build low-power training clusters out of Mac Studios and achieve competitive throughput on mid-scale models.
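A setup on such a cluster might look like the sketch below. The `"mlx"` backend string, the helper name, and the addresses are assumptions based on the release notes, not a confirmed API surface; check your build before relying on them.

```python
# Hypothetical process-group setup for a two-Mac-Studio cluster linked over
# Thunderbolt 5. Backend name and addresses are assumptions, not confirmed API.
import os
import torch.distributed as dist

def init_mac_cluster(rank: int, world_size: int,
                     master_addr: str = "10.0.0.1",   # head node's Thunderbolt bridge IP (example)
                     master_port: str = "29500") -> None:
    """Join this machine to the training process group (hypothetical helper)."""
    os.environ.setdefault("MASTER_ADDR", master_addr)
    os.environ.setdefault("MASTER_PORT", master_port)
    # "mlx" is the backend name implied by the release notes; verify with
    # torch.distributed.is_backend_available on your installation.
    dist.init_process_group(backend="mlx", rank=rank, world_size=world_size)
```

In practice you would invoke this once per machine, typically under `torchrun` with `--nnodes` set to the cluster size.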

Export Improvements

torch.export handles dynamic shapes more gracefully with a new Dim API, and ExecuTorch (on-device inference) has officially graduated from experimental status, with broad hardware backend coverage.

Dargslan Editorial Team (Dargslan)