Hi, thank you for providing this model.
I’m trying to use Dream for masked language modeling (filling in missing tokens in a partially masked sentence).
Is there a built-in or recommended way to do this outside of training?
I tried prompting the model with masked sentences and adjusting max_new_tokens (minimal sketch after the list), but:
- With max_new_tokens > 0, the output length varies (the model sometimes adds or removes tokens).
- With max_new_tokens = 0, I get a ValueError from _validate_generated_length, and if I bypass the check, the outputs look strange (to varying degrees across cases).
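
For reference, here is a minimal sketch of what I am doing. The loading code and the diffusion_generate parameters follow the README example (checkpoint path copied from there as well); splicing tokenizer.mask_token into the prompt text is my own guess at how to mark the blanks, so that part may be exactly what I am getting wrong:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load Dream with its custom generation code (as in the README example).
model_path = "Dream-org/Dream-v0-Base-7B"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda").eval()

# A partially masked sentence. Assumption: tokenizer.mask_token is the right
# placeholder for positions the model should fill in; if the blanks should be
# marked differently, that is what I would like to confirm.
text = f"The capital of France is {tokenizer.mask_token} and it is known for the Eiffel Tower."
inputs = tokenizer(text, return_tensors="pt").to("cuda")

# The call in question: any max_new_tokens > 0 runs but lets the total length
# drift, while max_new_tokens=0 fails _validate_generated_length.
out = model.diffusion_generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_new_tokens=16,
    steps=128,
    temperature=0.2,
    top_p=0.95,
    return_dict_in_generate=True,
)
print(tokenizer.decode(out.sequences[0], skip_special_tokens=True))
```

What I am hoping for is an output whose unmasked tokens stay fixed, with only the mask positions replaced.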
Any guidance on the right way to perform masked LM inference would be greatly appreciated!
Best regards