Add a byte pair encoding (BPE) tokenizer layer #46
Nice! Should I draw up a rough implementation and share a Colab notebook?
This is a pretty important feature, as it is widely used and will unlock some important models. However, there are some technical roadblocks at the moment. We would like to keep our tokenizers running inside the TensorFlow graph using TensorFlow ops, and currently the tokenization ops are all provided by tf-text. tf-text does not offer a BPE tokenizer, but in theory SentencePiece should be configurable in a compatible way; see tensorflow/text#763.

The first thing to do would be to find out whether that is possible: try configuring the SentencePiece tokenizer in tf-text and see if it can be made genuinely compatible with the tokenizers for GPT-2 and RoBERTa (testing against the Hugging Face tokenizers is probably the simplest way to do this). A Colab showing compatibility would "unblock" this work; if it is not currently possible, we may need to land fixes in tf-text and SentencePiece.

From there we could produce a design that essentially hides the complexity of SentencePiece under the hood. We would also need to think about the vocabulary format we expose (a vocab and merges file?).
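If anyone wants to attempt the compatibility check described above, a rough sketch might look like the following. The `candidate.model` file is a placeholder: producing a SentencePiece proto that mimics GPT-2's BPE is exactly the open question, and the Hugging Face tokenizer is only used as the reference.

```python
# Hypothetical compatibility check: compare a candidate SentencePiece model
# (loaded into tf-text) against the reference Hugging Face GPT-2 tokenizer.
import tensorflow_text as tf_text
from transformers import GPT2TokenizerFast

hf_tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# "candidate.model" is a placeholder for a SentencePiece proto configured to
# mimic GPT-2's BPE; whether such a proto can be built is the open question.
with open("candidate.model", "rb") as f:
    sp_tokenizer = tf_text.SentencepieceTokenizer(model=f.read())

samples = ["Hello world!", "KerasNLP needs a BPE tokenizer."]
for text in samples:
    hf_ids = hf_tokenizer(text)["input_ids"]
    tf_ids = sp_tokenizer.tokenize(text).numpy().tolist()
    print(text, "->", "match" if hf_ids == tf_ids else "mismatch")
```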
@abheesht17 you are definitely welcome to help with this! This will require some diving into other libraries to understand the support we have today.
Great, will do 👍🏼
Hey, @mattdangerw. I went through this issue. So, essentially, this is what you want me to do:
Is this correct?
I'm not sure we need to actually train a SentencePiece model, though that might help with understanding things. Basically, the public API we can rely on for op support is tf-text's SentencepieceTokenizer, but that takes a SentencePiece model proto as input. End users will probably want to use this layer with the "vocab json" and "merges txt" files provided by the official GPT/RoBERTa GitHub repos or Hugging Face. We can keep thinking about the file format we would want, but asking end users to construct a SentencePiece model is probably a non-starter.

So the question we should try to answer is: can we manually construct a SentencePiece model proto from the GPT vocab and merges files in a way that is compatible? If so, we could build this layer on top of the existing tf-text API, without ruling out more direct support from tf-text in the future. If not, we will need to go back to the drawing board a little and figure out how to get op-level support here. So, putting that into a list:
It may turn out we are more blocked here than we think, given tensorflow/text#763, but this would be the way to find out.
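For reference, the raw assets end users typically have for GPT-2/RoBERTa look like this, which is quite different from a serialized SentencePiece model proto. The file names below are assumed to be the standard `vocab.json` and `merges.txt` shipped with the checkpoints.

```python
import json

# Load the standard GPT-2 / RoBERTa vocabulary assets (assumed file names).
with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)                  # maps token string -> integer id
with open("merges.txt", encoding="utf-8") as f:
    merges = f.read().splitlines()[1:]    # ranked BPE merge rules, e.g. "Ġ t"
                                          # (the first line is a version header)

print(len(vocab), "tokens,", len(merges), "merge rules")
```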
Ah, understood. Thanks for clarifying!
Some useful articles about how Hugging Face tokenises the input text: huggingface/transformers#1083 (comment)
Hey, @mattdangerw. Sorry for the delay, I forgot about it. I opened an issue on the SentencePiece repository: google/sentencepiece#739. The author of the repo says: "manual model modification/creation is totally unsupported." However, it looks like we may be able to add tokens from the vocab to the
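For what it's worth, the kind of manual proto editing hinted at here would look roughly like the sketch below. Upstream explicitly calls this unsupported, so treat it as an experiment only; the file names and the token are placeholders.

```python
# Hedged sketch: append a token from an external vocab to an existing
# SentencePiece ModelProto. Explicitly unsupported by upstream SentencePiece.
from sentencepiece import sentencepiece_model_pb2 as sp_model

proto = sp_model.ModelProto()
with open("base.model", "rb") as f:   # an existing SentencePiece model (assumed)
    proto.ParseFromString(f.read())

piece = proto.pieces.add()
piece.piece = "Ġhello"                # GPT-2 style token with space marker (example)
piece.score = 0.0
piece.type = sp_model.ModelProto.SentencePiece.USER_DEFINED

with open("patched.model", "wb") as f:
    f.write(proto.SerializeToString())
```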
Hi all, just curious if anyone has found any sort of workaround for this issue. My conclusion after reading the related issues is that it's not currently possible to incorporate popular BPE tokenizers (RoBERTa/GPT-2) into tensorflow-text pipelines. Is that right?
@aleemkhan62 Currently you can use BPE via tf_text.SentencepieceTokenizer only if you have a pretrained model proto. We are looking into a better solution for this! Please stay tuned, thanks!
To add a little more color for others finding this issue: you can train a BPE-style vocabulary with sentencepiece today, and a sentencepiece model can be used with tensorflow-text or the SentencePieceTokenizer in this library. However, that might not match the exact behavior of RoBERTa/GPT-2 tokenization. We are currently working on a way to support the actual vocabulary files used by RoBERTa/GPT-2 (merges.txt and vocab.json), with exactly equivalent tokenization, running inside the TF graph.
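As a concrete (if approximate) illustration of that interim path, a minimal sketch follows; the corpus file and vocab size are placeholder assumptions.

```python
# Train a BPE-style SentencePiece model, then load it into keras_nlp so it
# runs in-graph via the existing SentencePieceTokenizer layer.
import sentencepiece as spm
import keras_nlp

spm.SentencePieceTrainer.train(
    input="corpus.txt",      # one sentence per line (assumed to exist)
    model_prefix="bpe",
    model_type="bpe",
    vocab_size=8000,
)

with open("bpe.model", "rb") as f:
    tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto=f.read())

print(tokenizer("the quick brown fox"))
```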
Any updates here? |
Closing this! We have an implementation released here: https://keras.io/api/keras_nlp/tokenizers/byte_pair_tokenizer/

If anyone encounters an issue with the tokenizer, please file a bug!
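For anyone landing here later, basic usage of the released layer looks roughly like this (see the linked docs for the authoritative API); `vocab.json` and `merges.txt` are assumed to be the standard GPT-2/RoBERTa vocabulary assets.

```python
# Tokenize with the released BytePairTokenizer layer using GPT-2 style assets.
import keras_nlp

tokenizer = keras_nlp.tokenizers.BytePairTokenizer(
    vocabulary="vocab.json",   # token string -> id mapping
    merges="merges.txt",       # ranked BPE merge rules
)
token_ids = tokenizer("a quick brown fox.")
print(token_ids)
```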
We would like to add a BPE tokenizer (used by GPT-2, RoBERTa, and others). Ideally it should be configurable to be compatible with the actual tokenization used by GPT-2 and RoBERTa, and should run inside a TensorFlow graph.