Editor's note from AI 科技评论: Hugging Face has just released the open-source PyTorch-Transformers 1.0 on GitHub. The library supports BERT, GPT, GPT-2, Transformer-XL, XLNet, XLM and more, and ships with 27 pre-trained models.
Let's take a look.

What's supported

PyTorch-Transformers (formerly known as pytorch-pretrained-bert) is an open-source library of state-of-the-art pre-trained models for Natural Language Processing (NLP).

The library currently contains PyTorch implementations, pre-trained model weights, usage scripts, and conversion utilities for the following models:
1. Google's BERT
Paper: "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
2. OpenAI's GPT
Paper: "Improving Language Understanding by Generative Pre-Training"
3. OpenAI's GPT-2
Paper: "Language Models are Unsupervised Multitask Learners"
4. Google and CMU's Transformer-XL
Paper: "Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context"
5. Google and CMU's XLNet
Paper: "XLNet: Generalized Autoregressive Pretraining for Language Understanding"
6. Facebook's XLM
Paper: "Cross-lingual Language Model Pretraining"
These implementations have been tested on several datasets (see the example scripts) and match the performance of the original implementations, e.g. an F1 score of 93 on SQuAD for BERT Whole-Word-Masking, an F1 score of 88 on RocStories for OpenAI GPT, a perplexity of 18.3 on WikiText 103 for Transformer-XL, and a Pearson correlation coefficient of 0.916 on STS-B for XLNet.
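To give a sense of how these models are used, here is a minimal usage sketch (not taken from the announcement itself, and assuming the package has been installed as described further below): a model and its tokenizer are loaded by shortcut name, and a sentence is encoded and run through the network.

import torch
from pytorch_transformers import BertModel, BertTokenizer

# Load the tokenizer and model by shortcut name; the pre-trained weights are
# downloaded and cached automatically the first time they are requested.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Tokenize a sentence and run it through BERT.
input_ids = torch.tensor([tokenizer.encode("Here is some text to encode")])
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # (batch_size, sequence_length, hidden_size)
print(last_hidden_states.shape)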

27 pre-trained models

The project provides 27 pre-trained models. The full list is below, grouped by architecture, with each model's shortcut name and a short description.
BERT
  bert-base-uncased: 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on lower-cased English text.
  bert-large-uncased: 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on lower-cased English text.
  bert-base-cased: 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased English text.
  bert-large-cased: 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on cased English text.
  bert-base-multilingual-uncased: (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on lower-cased text in the top 102 languages with the largest Wikipedias (see details).
  bert-base-multilingual-cased: (New, recommended) 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased text in the top 104 languages with the largest Wikipedias (see details).
  bert-base-chinese: 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased Chinese Simplified and Traditional text.
  bert-base-german-cased: 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased German text by Deepset.ai (see details on the deepset.ai website).
  bert-large-uncased-whole-word-masking: 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on lower-cased English text using Whole-Word-Masking (see details).
  bert-large-cased-whole-word-masking: 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on cased English text using Whole-Word-Masking (see details).
  bert-large-uncased-whole-word-masking-finetuned-squad: 24-layer, 1024-hidden, 16-heads, 340M parameters. The bert-large-uncased-whole-word-masking model fine-tuned on SQuAD (see details of fine-tuning in the example section).
  bert-large-cased-whole-word-masking-finetuned-squad: 24-layer, 1024-hidden, 16-heads, 340M parameters. The bert-large-cased-whole-word-masking model fine-tuned on SQuAD (see details of fine-tuning in the example section).
  bert-base-cased-finetuned-mrpc: 12-layer, 768-hidden, 12-heads, 110M parameters. The bert-base-cased model fine-tuned on MRPC (see details of fine-tuning in the example section).
GPT
  openai-gpt: 12-layer, 768-hidden, 12-heads, 110M parameters. OpenAI GPT English model.
GPT-2
  gpt2: 12-layer, 768-hidden, 12-heads, 117M parameters. OpenAI GPT-2 English model.
  gpt2-medium: 24-layer, 1024-hidden, 16-heads, 345M parameters. OpenAI's medium-sized GPT-2 English model.
Transformer-XL
  transfo-xl-wt103: 18-layer, 1024-hidden, 16-heads, 257M parameters. English model trained on wikitext-103.
XLNet
  xlnet-base-cased: 12-layer, 768-hidden, 12-heads, 110M parameters. XLNet English model.
  xlnet-large-cased: 24-layer, 1024-hidden, 16-heads, 340M parameters. XLNet large English model.
XLM
  xlm-mlm-en-2048: 12-layer, 1024-hidden, 8-heads. XLM English model.
  xlm-mlm-ende-1024: 12-layer, 1024-hidden, 8-heads. XLM English-German multi-language model.
  xlm-mlm-enfr-1024: 12-layer, 1024-hidden, 8-heads. XLM English-French multi-language model.
  xlm-mlm-enro-1024: 12-layer, 1024-hidden, 8-heads. XLM English-Romanian multi-language model.
  xlm-mlm-xnli15-1024: 12-layer, 1024-hidden, 8-heads. XLM model pre-trained with MLM on the 15 XNLI languages.
  xlm-mlm-tlm-xnli15-1024: 12-layer, 1024-hidden, 8-heads. XLM model pre-trained with MLM + TLM on the 15 XNLI languages.
  xlm-clm-enfr-1024: 12-layer, 1024-hidden, 8-heads. XLM English model trained with CLM (Causal Language Modeling).
  xlm-clm-ende-1024: 12-layer, 1024-hidden, 8-heads. XLM English-German multi-language model trained with CLM (Causal Language Modeling).
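Any of the shortcut names above can be passed to the matching model and tokenizer classes. The sketch below is my own illustration, not code from the project docs; it loads two different architectures through the same from_pretrained interface, using the class names the library exposes for BERT and GPT-2.

import torch
from pytorch_transformers import (BertModel, BertTokenizer,
                                  GPT2Model, GPT2Tokenizer)

# (model class, tokenizer class, shortcut name) triples taken from the table above.
MODELS = [
    (BertModel, BertTokenizer, 'bert-base-uncased'),
    (GPT2Model, GPT2Tokenizer, 'gpt2'),
]

for model_class, tokenizer_class, shortcut in MODELS:
    tokenizer = tokenizer_class.from_pretrained(shortcut)
    model = model_class.from_pretrained(shortcut)
    model.eval()
    input_ids = torch.tensor([tokenizer.encode("PyTorch-Transformers is quite handy")])
    with torch.no_grad():
        hidden_states = model(input_ids)[0]  # last-layer hidden states
    print(shortcut, hidden_states.shape)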

Examples

BERT-base and BERT-large have 110M and 340M parameters respectively, and it is difficult to fine-tune them on a single GPU with the recommended batch size (32 in most cases) and still get good performance.
To help fine-tune these models, the fine-tuning scripts run_bert_classifier.py and run_bert_squad.py support several techniques that can be switched on: gradient accumulation, multi-GPU training, distributed training, and 16-bit training. Note that distributed training and 16-bit training require NVIDIA's apex extension to be installed.
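The scripts handle these techniques internally; purely as a generic illustration of the first one (with a hypothetical model, dataloader, and optimizer, not the scripts' actual code), gradient accumulation spreads one effective batch over several smaller forward/backward passes:

# Generic gradient-accumulation sketch: accumulate gradients over
# ACCUMULATION_STEPS small batches before each optimizer update, so an
# effective batch size of 32 can run on a single GPU as 8 mini-batches of 4.
ACCUMULATION_STEPS = 8

def train_epoch(model, dataloader, optimizer):
    model.train()
    optimizer.zero_grad()
    for step, (input_ids, labels) in enumerate(dataloader):
        loss = model(input_ids, labels=labels)[0]   # loss comes first when labels are given
        (loss / ACCUMULATION_STEPS).backward()      # scale so gradients average over the effective batch
        if (step + 1) % ACCUMULATION_STEPS == 0:
            optimizer.step()                        # one update per effective batch
            optimizer.zero_grad()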
The docs show several fine-tuning examples based on (and extending) the original BERT implementation (https://github.com/google-research/bert/):
  • a sequence-level classifier on the nine GLUE tasks;
  • a token-level classifier on the SQuAD question-answering dataset;
  • a sequence-level multiple-choice classifier on the SWAG corpus;
  • BERT language-model fine-tuning on another target corpus.
We only show the GLUE results here. These were obtained with the uncased BERT base model on the dev set of the GLUE benchmark; all experiments ran on a single P100 GPU with a batch size of 32. Although these runs are fairly raw, the results look decent.
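For the sequence-level classification case, the library also exposes ready-made task heads. The following is a minimal sketch of one training step with such a head; the two-label setup and the dummy example are illustrative assumptions, not the GLUE scripts themselves.

import torch
from pytorch_transformers import BertForSequenceClassification, BertTokenizer

# Sequence-level classification head on top of bert-base-uncased.
# num_labels=2 and the single dummy example are assumptions for illustration.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
model.train()

input_ids = torch.tensor([tokenizer.encode("The cat sat on the mat.")])
labels = torch.tensor([1])

# When labels are supplied, the first output is the classification loss.
loss, logits = model(input_ids, labels=labels)[:2]
loss.backward()  # a real fine-tuning loop would follow this with an optimizer step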

Installation

The library has been tested on Python 2.7 and 3.5+ (the examples are tested only on Python 3.5+) and with PyTorch 0.4.1 to 1.1.0.

Install with pip:
pip install pytorch-transformers

Run the tests:

python -m pytest -sv ./pytorch_transformers/tests/
python -m pytest -sv ./examples/
Links:
Source code:
https://github.com/huggingface/pytorch-transformers
Documentation:
https://huggingface.co/pytorch-transformers/index.html