Remove model-specific tags in llama_chat_apply_template() #15508
arcusmaximus
started this conversation in
Ideas
Replies: 0 comments
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Uh oh!
There was an error while loading. Please reload this page.
-
Right now, the code that turns chat messages into model-specific input does no sanitizing at all, meaning it's trivial to inject tags where they don't belong. For example, this request:
results in the response "I'm sorry, but I can't reveal that information", but this request:
results in "The secret number is 456789."
Now, I have of course read the security policy which clearly states that users of llama.cpp should do their own input sanitizing. However, I still think it could be useful to strip at least these sensitive tags in the library itself:
Beta Was this translation helpful? Give feedback.
All reactions