Add chat template support #917

Draft: wants to merge 3 commits into main from add-chat-template-support
Conversation

@engelmi (Member) commented Mar 7, 2025

This PR enables the automatic use of the chat template file via ramalama run by passing it to llama-run. The chat template can either be provided/downloaded directly or extracted from the GGUF model and stored; preference is given to the provided chat template file.

TODO:

Summary by Sourcery

This PR adds support for chat templates to ramalama run. It enables the automatic use of chat template files by passing them to llama-run. The chat template can be provided directly, downloaded, or extracted from the GGUF model.
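
As a rough illustration of the wiring described above, the sketch below shows how a resolved chat template file could be forwarded to llama-run via its --chat-template-file option (the llama-run flag referenced later in this thread, ggml-org/llama.cpp#11961). The build_llama_run_args helper and the argument layout are assumptions for illustration, not the PR's actual code.

# Illustrative sketch only: build_llama_run_args is a hypothetical helper, not
# ramalama's code. "--chat-template-file" is the llama-run option discussed in
# this thread (ggml-org/llama.cpp#11961).
from typing import List, Optional


def build_llama_run_args(model_path: str, chat_template_path: Optional[str]) -> List[str]:
    args = ["llama-run"]
    # Only pass the template when the model store resolved one, either provided
    # directly/downloaded or extracted from the GGUF file.
    if chat_template_path:
        args += ["--chat-template-file", chat_template_path]
    args.append(model_path)
    return args


print(build_llama_run_args("/models/qwen2.5-3b.gguf", "/models/chat_template"))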

@sourcery-ai bot (Contributor) commented Mar 7, 2025

Reviewer's Guide by Sourcery

This PR enables the use of chat templates with ramalama run. It supports providing the chat template directly, extracting it from GGUF models, and passing it to llama-run.

Sequence diagram for ensuring chat template

sequenceDiagram
    participant ModelStore
    participant GGUFInfoParser
    participant LocalSnapshotFile

    ModelStore->>ModelStore: new_snapshot(model_tag, snapshot_hash, snapshot_files)
    ModelStore->>ModelStore: _ensure_chat_template(model_tag, snapshot_hash, snapshot_files)
    alt ChatTemplate already in snapshot_files
        ModelStore-->>ModelStore: return
    else Model file exists
        ModelStore->>GGUFInfoParser: is_model_gguf(model_file_path)
        GGUFInfoParser-->>ModelStore: true
        ModelStore->>GGUFInfoParser: parse(model_file_path)
        GGUFInfoParser-->>ModelStore: GGUFModelInfo
        ModelStore->>GGUFInfoParser: get_chat_template()
        GGUFInfoParser-->>ModelStore: chat_template
        alt chat_template not empty
            ModelStore->>LocalSnapshotFile: Create LocalSnapshotFile(chat_template)
            LocalSnapshotFile-->>ModelStore: LocalSnapshotFile
            ModelStore->>ModelStore: update_snapshot(model_tag, snapshot_hash, [LocalSnapshotFile])
        end
    end
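
In plain Python, the flow in the diagram condenses to roughly the following. This is a hedged sketch reusing the names from the diagram; the stub types and the read_chat_template callable are assumptions made to keep the example self-contained, not the PR's implementation.

# Hedged sketch of the _ensure_chat_template flow shown above. The types below
# are stand-ins so the example runs on its own; the real classes live in
# ramalama/model_store.py and ramalama/model_inspect.py.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List, Optional


class SnapshotFileType(Enum):
    Model = 1
    ChatTemplate = 2
    Other = 3


@dataclass
class SnapshotFile:
    name: str
    type: SnapshotFileType


def ensure_chat_template(
    snapshot_files: List[SnapshotFile],
    read_chat_template: Callable[[str], str],
) -> Optional[SnapshotFile]:
    # Chat template already present in the snapshot: nothing to do.
    if any(f.type == SnapshotFileType.ChatTemplate for f in snapshot_files):
        return None
    # Otherwise locate the model file and try to extract the template from it.
    model = next((f for f in snapshot_files if f.type == SnapshotFileType.Model), None)
    if model is None:
        return None
    # Stands in for GGUFInfoParser.is_model_gguf/parse(...).get_chat_template().
    chat_template = read_chat_template(model.name)
    if not chat_template:
        return None
    # The PR wraps the extracted content in a LocalSnapshotFile and calls
    # update_snapshot(); here we just return a new snapshot entry.
    return SnapshotFile(name="chat_template", type=SnapshotFileType.ChatTemplate)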

File-Level Changes

Change: Added support for chat templates by allowing the chat template file to be passed to llama-run.

Details:
  • Added SnapshotFileType enum to differentiate between model, chat template, and other files.
  • Modified SnapshotFile to include a type attribute.
  • Added LocalSnapshotFile class for creating snapshot files from content (a sketch follows below).
  • Modified RefFile to handle chat template and model filenames.
  • Added logic to extract the chat template from the GGUF model if not provided.
  • Added logic to mount the chat template file in the container.
  • Added logic to pass the chat template file to llama-run.

Files:
  • ramalama/model_store.py
  • ramalama/ollama.py
  • ramalama/model.py
  • ramalama/huggingface.py
  • ramalama/url.py
  • ramalama/cli.py
  • ramalama/model_inspect.py
  • ramalama/common.py
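
As a rough sketch of what "creating snapshot files from content" might look like (the LocalSnapshotFile bullet above), the class below simply materializes an in-memory string, such as an extracted chat template, as a file in the snapshot directory. The write method and its signature are assumptions; the real class in ramalama/model_store.py may differ.

# Hedged sketch: a snapshot entry backed by in-memory content instead of a
# download. The write() method and its signature are assumptions for
# illustration; the real LocalSnapshotFile may look different.
import os


class LocalSnapshotFile:
    def __init__(self, content: str, name: str):
        self.content = content
        self.name = name

    def write(self, snapshot_dir: str) -> str:
        # Materialize the content as a regular file inside the snapshot directory.
        os.makedirs(snapshot_dir, exist_ok=True)
        path = os.path.join(snapshot_dir, self.name)
        with open(path, "w") as f:
            f.write(self.content)
        return path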


@engelmi (Member, Author) commented Mar 7, 2025

@ericcurtin Could we rebuild a new version of the ramalama container image?
It seems that the image does not yet include the changes from ggml-org/llama.cpp#11961, since the --chat-template-file CLI option for llama-run is missing.

@ericcurtin (Collaborator) commented Mar 7, 2025

@rhatdan has the build infrastructure set up to do it, so it would be less effort to wait for him for a new release.

But if you build a container image locally, you can just pass that to RamaLama for development purposes.

ramalama/cli.py (outdated)

@@ -817,6 +819,9 @@ def run_cli(args):
        model.run(args)
    except Exception:
        raise e
    except Exception as ex:
        print(ex)
Member:

Debugging info?

Member:

If you run ramalama --debug, you should see tracebacks when they happen.

Member Author:

Oops, forgot to remove that one. Removed now.

@engelmi force-pushed the add-chat-template-support branch from 66ad4bf to d756093 on March 7, 2025 at 15:39
ref_file.filenames.append(parts[0])
if parts[1] == RefFile.MODEL_SUFFIX:
    ref_file.model_name = parts[0]
elif parts[1] == RefFile.CHAT_TEMPLATE_SUFFIX:
Member:

Switch elif to if, no need for else.
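
For illustration, the suggested change would look roughly like this. RefFile here is a stub with assumed suffix values and an assumed chat_template_name attribute, just to keep the snippet self-contained; it is not ramalama's class.

# Hedged illustration of the suggestion: two independent "if" checks instead of
# if/elif. The suffix conditions are mutually exclusive, so behavior is the
# same. RefFile is a stub; the attribute names are assumptions.
class RefFile:
    MODEL_SUFFIX = "model"
    CHAT_TEMPLATE_SUFFIX = "chat_template"

    def __init__(self):
        self.filenames = []
        self.model_name = ""
        self.chat_template_name = ""


ref_file = RefFile()
for line in ["model.gguf model", "template chat_template"]:
    parts = line.split()
    ref_file.filenames.append(parts[0])
    if parts[1] == RefFile.MODEL_SUFFIX:
        ref_file.model_name = parts[0]
    if parts[1] == RefFile.CHAT_TEMPLATE_SUFFIX:
        ref_file.chat_template_name = parts[0]

print(ref_file.model_name, ref_file.chat_template_name)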

Member Author:

Changed.

for file in snapshot_files:
    if file.type == SnapshotFileType.Model:
        ref_file.model_name = file.name
    elif file.type == SnapshotFileType.ChatTemplate:
Member:

Switch elif to if. No need for else.

Member Author:

Changed.

if file.type == SnapshotFileType.ChatTemplate:
    return
if file.type == SnapshotFileType.Model:
    model_file = file
Member:

Should we break here? Can there be multiple SnapshotFileType.Model files? If yes, we only see the last one.

Member Author:

The ModelStore currently allows multiple models, and Hugging Face allows that as well (e.g. mradermacher/SmolLM-135M-GGUF). However, this would be invalid as input for ramalama. When the refs file is serialized, the current approach is to use the last seen model file; the same applies to the chat template file. So I think it's probably worth adding some kind of validation for this (i.e. only one model and one chat template in the list of files) and raising an Exception. WDYT?
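
A hedged sketch of the validation floated here, assuming the SnapshotFileType enum introduced by this PR; the function name and the choice of exception are illustrative only.

# Hedged sketch of the proposed validation: reject snapshots containing more
# than one model or more than one chat template file. validate_snapshot_files
# and ValueError are assumptions; the real code might raise a ramalama-specific
# exception from new_snapshot()/update_snapshot().
from enum import Enum


class SnapshotFileType(Enum):
    Model = 1
    ChatTemplate = 2
    Other = 3


def validate_snapshot_files(snapshot_files) -> None:
    models = [f for f in snapshot_files if f.type == SnapshotFileType.Model]
    templates = [f for f in snapshot_files if f.type == SnapshotFileType.ChatTemplate]
    if len(models) > 1:
        raise ValueError(f"expected at most one model file, found {len(models)}")
    if len(templates) > 1:
        raise ValueError(f"expected at most one chat template file, found {len(templates)}")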

@rhatdan (Member) commented Mar 7, 2025

We will do a release on Monday or Sunday.

@codefromthecrypt commented:
So, when I run the following and access the OpenAI endpoint, I still get Jinja errors on tool calls; this is foundational to Jinja but not yet Jinja itself, right?

$ gh pr checkout 917
$ python3 -m venv .venv && source .venv/bin/activate && pip install -e . && python bin/ramalama serve qwen2.5:3b

@codefromthecrypt commented:
PS: if I make this change and run ramalama serve qwen2.5:3b, my tool call examples work, except one (semantic-kernel dotnet):

--- a/ramalama/model.py
+++ b/ramalama/model.py
@@ -586,6 +586,7 @@ class Model(ModelBase):
         else:
             exec_args = [
                 "llama-server",
+                "--jinja",
                 "--port",
                 args.port,
                 "-m",

ggml-org/llama.cpp#12279 has details on failures in general with HF qwen2.5, and on the semantic-kernel dotnet error, which also applies here.

@engelmi force-pushed the add-chat-template-support branch from d756093 to c1474e7 on March 9, 2025 at 09:56
@engelmi (Member, Author) commented Mar 9, 2025

@codefromthecrypt By using the --jinja option for llama-run or llama-server, the built-in chat templates are used implicitly. I assume that one of the llama.cpp built-in templates produces output quite similar to the one required by your model. When you run

$ ramalama inspect qwen2.5:3b | grep chat_template

You get basically the Jinja template required by that model.
(If the model has been pulled from ollama, it is probably not there, since ollama uses Go templates instead of Jinja - we are working on that as well.)

This PR is part of enabling ramalama to detect and use the chat templates provided by platforms such as ollama, as well as to extract this information from .gguf models. However, this will take a bit longer, so I think it's best to use ramalama with your changes for the presentation.

@ericcurtin (Collaborator) commented:
(Quoting @engelmi's reply above in full.)

Long-term we want llama-server to default to Jinja without any manual intervention and fall back to other techniques; this needs upstream llama.cpp work.

@codefromthecrypt commented:
Thanks for the advice, folks! PS: please applaud @ochafik for the work upstream on llama.cpp! ggml-org/llama.cpp#12279

Labels: None yet
Projects: None yet
4 participants