Multi-token masks?

#1 opened by latitude

Suppose I know the beginning and end of a text, but don't know exactly how many tokens I want filled in. Maybe I can just set a minimum and maximum number of tokens. Can I do this with the Inference API? If not, how would I do it in my own code?

In this case I would advise simply iterating over multiple <mask> tokens. However, you'll probably get better results with constrained generation, where the constraints are your end tokens. See: https://huggingface.co./blog/constrained-beam-search (cc @cwkean)
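For concreteness, here is a minimal sketch of that iteration with a masked LM. The model name (roberta-base), the left-to-right filling order, and the greedy argmax choice are illustrative assumptions, not the only options; since you don't know the span length, you can run it once per mask count between your minimum and maximum and keep whichever result you score best.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

def fill_masks(text: str) -> str:
    """Greedily fill each remaining <mask> token, left to right."""
    while tokenizer.mask_token in text:
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        # Sequence position of the first remaining mask token.
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
        best_id = logits[0, mask_pos].argmax().item()
        text = text.replace(tokenizer.mask_token, tokenizer.decode([best_id]).strip(), 1)
    return text

# Unknown span length: try every mask count in your [min, max] range,
# then score the filled candidates however suits your application.
for n in range(2, 5):
    masks = " ".join([tokenizer.mask_token] * n)
    print(fill_masks(f"The movie was {masks} good."))
```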

@latitude Right, you could force the beginning of text trivially (just start the generation from that token), but forcing the end token is a trickier concept (especially since you don't know how many tokens should be in between).

Like Patrick said, what I'd do right now is constrained generation with that end token, then filter the outputs for your needs.
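For concreteness, a hedged sketch of that generate-then-filter idea using the force_words_ids argument from the blog post linked above; the gpt2 model and the prompt/ending strings are just placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The old sailor looked at the sea"   # known beginning
ending = " and sailed home."                  # known end
ending_ids = tokenizer(ending, add_special_tokens=False).input_ids

outputs = model.generate(
    **tokenizer(prompt, return_tensors="pt"),
    force_words_ids=[ending_ids],  # every finished beam must contain the phrase
    num_beams=10,
    num_return_sequences=5,
    max_new_tokens=30,
)

# The constraint guarantees the phrase appears somewhere, not that it
# appears last, so filter for sequences that actually end with it.
for seq in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    if seq.rstrip().endswith(ending.strip()):
        print(seq)
```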

This is a pretty interesting use-case though; could you explain your situation with an example?

I think I could easily make a new Constraint object dedicated to this sort of "templating", as I actually meant to do previously.
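Such a templating Constraint would plug into the same constraints= argument of generate() that the built-in PhrasalConstraint uses today; a minimal sketch of that existing hook, with a placeholder model and phrase:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# PhrasalConstraint forces one fixed phrase; a templating Constraint
# would generalize this to a begin/end pattern with a flexible middle.
ending = PhrasalConstraint(
    tokenizer(" The end.", add_special_tokens=False).input_ids
)

outputs = model.generate(
    **tokenizer("Once upon a time", return_tensors="pt"),
    constraints=[ending],
    num_beams=5,
    max_new_tokens=30,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```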

Thanks for your help! One example is rhyming poetry. There are a very limited number of rhymes for the previous line, so I could pick a rhyme first and then fill in the rest of the line and check that it fits the meter.
The other application we've been discussing is short Choose-Your-Own-Adventure stories where two plotlines come together.
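For the rhyme step, a disjunctive constraint over the candidate rhyme words would let beam search pick whichever candidate scores best. A minimal sketch, assuming gpt2 and made-up candidate rhymes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Any one of these candidate rhymes for "moon" must appear in each beam.
rhyme_ids = tokenizer(
    [" soon", " tune", " balloon"], add_special_tokens=False
).input_ids

prompt = "The silver light of the rising moon,\nThe sailors knew"
outputs = model.generate(
    **tokenizer(prompt, return_tensors="pt"),
    force_words_ids=[rhyme_ids],  # nested list = disjunctive: any one word
    num_beams=10,
    num_return_sequences=5,
    max_new_tokens=12,
)
for line in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(line)  # then check each candidate against rhyme position and meter
```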

Thanks!
