PatrickStar is a distributed deep-learning training tool developed by Tencent. It is designed to support the training of extremely large pre-trained models such as GPT and BERT.
## Usage

PatrickStar is based on PyTorch, which makes it easy to migrate existing PyTorch projects. The following is an example of using PatrickStar:
```python
from patrickstar.runtime import initialize_engine

config = {
    "optimizer": {
        "type": "Adam",
        "params": {
            "lr": 0.001,
            "betas": (0.9, 0.999),
            "eps": 1e-6,
            "weight_decay": 0,
            "use_hybrid_adam": True,
        },
    },
    "fp16": {  # loss scaler params
        "enabled": True,
        "loss_scale": 0,
        "initial_scale_power": 2 ** 3,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1,
    },
    "default_chunk_size": 64 * 1024 * 1024,
    "release_after_init": True,
    "use_cpu_embedding": False,
}

def model_func():
    # MyModel is a derived class of torch.nn.Module
    return MyModel(...)

model, optimizer = initialize_engine(model_func=model_func, local_rank=0, config=config)

...

for data in dataloader:
    optimizer.zero_grad()

    loss = model(data)
    model.backward(loss)
    optimizer.step()
```

The config uses the same format as DeepSpeed's configuration JSON and mainly covers the parameters of the optimizer, the loss scaler, and some PatrickStar-specific settings. Note that, as the example shows, the backward pass goes through `model.backward(loss)` rather than the usual `loss.backward()`, so the engine can manage memory during backpropagation.
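Because the config follows the DeepSpeed-style JSON format, it can also be kept in a standalone JSON file and parsed at startup. The sketch below is illustrative, not part of the PatrickStar API; note that JSON has no tuples or expressions, so `betas` becomes a list and the chunk size is written out as a literal:

```python
import json

# DeepSpeed-style configuration held as JSON text (could equally be a file
# read with json.load). Values mirror the Python dict in the example above.
config_json = """
{
    "optimizer": {
        "type": "Adam",
        "params": {"lr": 0.001, "betas": [0.9, 0.999], "eps": 1e-6,
                   "weight_decay": 0, "use_hybrid_adam": true}
    },
    "fp16": {"enabled": true, "loss_scale": 0, "initial_scale_power": 8,
             "loss_scale_window": 1000, "hysteresis": 2, "min_loss_scale": 1},
    "default_chunk_size": 67108864,
    "release_after_init": true,
    "use_cpu_embedding": false
}
"""

config = json.loads(config_json)
print(config["optimizer"]["type"])                       # Adam
print(config["default_chunk_size"] == 64 * 1024 * 1024)  # True
```

The parsed dict can then be passed to `initialize_engine` as the `config` argument, just like the inline dict in the example above.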
## Cite us

```
@article{fang2021patrickstar,
  title={PatrickStar: Parallel Training of Pre-trained Models via a Chunk-based Memory Management},
  author={Fang, Jiarui and Yu, Yang and Zhu, Zilin and Li, Shenggui and You, Yang and Zhou, Jie},
  journal={arXiv preprint arXiv:2108.05818},
  year={2021}
}
```