TorchOpt: An Efficient Library for Differentiable Optimization

Bibliographic Details
Title: TorchOpt: An Efficient Library for Differentiable Optimization
Authors: Ren, Jie; Feng, Xidong; Liu, Bo; Pan, Xuehai; Fu, Yao; Mai, Luo; Yang, Yaodong
Publication Year: 2022
Collection: Computer Science; Mathematics
Subject Terms: Computer Science - Mathematical Software, Computer Science - Artificial Intelligence, Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Machine Learning, Mathematics - Optimization and Control
Description: Recent years have witnessed a boom in differentiable optimization algorithms. These algorithms exhibit different execution patterns, and their execution requires massive computational resources that go beyond a single CPU and GPU. Existing differentiable optimization libraries, however, cannot support efficient algorithm development and multi-CPU/GPU execution, making the development of differentiable optimization algorithms often cumbersome and expensive. This paper introduces TorchOpt, a PyTorch-based efficient library for differentiable optimization. TorchOpt provides a unified and expressive differentiable optimization programming abstraction. This abstraction allows users to efficiently declare and analyze various differentiable optimization programs with explicit gradients, implicit gradients, and zero-order gradients. TorchOpt further provides a high-performance distributed execution runtime. This runtime can fully parallelize computation-intensive differentiation operations (e.g., tensor tree flattening) on CPUs/GPUs and automatically distribute computation to distributed devices. Experimental results show that TorchOpt achieves a $5.2\times$ training time speedup on an 8-GPU server. TorchOpt is available at: https://github.com/metaopt/torchopt/.
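
To make the "explicit gradient" abstraction mentioned in the description concrete, the following is a minimal sketch of differentiating through an unrolled inner optimization loop. It uses plain PyTorch autograd rather than TorchOpt's actual API, and the names (`meta_param`, `inner_lr`, `n_inner`) and the toy objectives are illustrative assumptions only.

```python
import torch

# Minimal sketch of explicit-gradient (unrolled) differentiable optimization
# using plain PyTorch autograd. Hypothetical names; not TorchOpt's API.
meta_param = torch.tensor(1.0, requires_grad=True)  # outer (meta) parameter
inner_lr, n_inner = 0.1, 5

w = meta_param.clone()  # inner parameter initialized from the meta parameter
for _ in range(n_inner):
    inner_loss = (w - 3.0) ** 2                      # toy inner objective
    (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w = w - inner_lr * g                             # unrolled SGD step, kept in the graph

outer_loss = (w - 2.0) ** 2                          # toy outer (meta) objective
outer_loss.backward()                                # backpropagates through all inner steps
print(meta_param.grad)                               # d(outer_loss)/d(meta_param)
```

Implicit and zero-order gradients, as described in the abstract, avoid unrolling: the former differentiates the optimality condition of the inner problem, while the latter estimates gradients from function evaluations alone.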
Comment: NeurIPS 2022 OPT Workshop
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2211.06934
Accession Number: edsarx.2211.06934
Database: arXiv