Athene-V2-Agent: Surpassing GPT-4o for Tool Use And Agentic Usecases

Nexusflow HF - Nexusflow Discord - Athene-V2 Blogpost


Introducing Athene-V2-Agent

Athene-V2-Agent is an open-source Agent LLM that surpasses the state-of-the-art in function calling and agentic capabilities.

(Benchmark results figure)

💪 Versatile Agent Capability: Athene-V2-Agent is an agent model capable of operating in environments with deeply nested tool dependencies. It can reason and plan over trajectories in which many tool calls are needed to answer a single query.

📊 Performance Highlights: Athene-V2-Agent surpasses GPT-4o by 18% in function calling success rate on single-FC tasks, and by 17% in agentic success rate.

🔧 Generalization to the Unseen: Athene-V2-Agent was never trained on the functions or agentic settings used in evaluation.

Athene-V2-Agent Model Usage

OpenAI-Compatible FC

Athene-V2-Agent can be used in any OpenAI API-compatible environment via our vLLM docker image, making it a simple "drop-in" replacement in any agentic or tool-use setting.

docker run --name athene-v2-agent \
    --runtime nvidia --gpus '"device=0,1,2,3,4,5,6,7"' \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p <port>:8000 \
    --ipc=host \
    ghcr.io/nexusflowai/athene-v2-vllm:latest \
    --model Nexusflow/Athene-V2-Agent \
    --dtype=auto \
    --tensor-parallel-size=8 \
    --enable-auto-tool-choice \
    --tool-call-parser Athene-V2-Agent

You can now submit any OpenAI-compatible tool-use request to the model by hitting the vLLM endpoint. Athene-V2-Agent will issue tool calls that you can execute, returning the results to the model.
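To make this concrete, here is a minimal sketch of such a request using only the Python standard library. The host/port and the `get_weather` tool are illustrative assumptions, not part of the model card; any OpenAI-compatible client would work equally well.

```python
# Hypothetical sketch of an OpenAI-compatible tool-use request to the vLLM
# endpoint. The endpoint URL and the get_weather tool are assumptions.
import json
import urllib.request

# A simple tool definition in the standard OpenAI function-calling schema.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'."}
            },
            "required": ["city"],
        },
    },
}

def build_request(user_query: str) -> dict:
    """Build the JSON body for a /v1/chat/completions tool-use request."""
    return {
        "model": "Nexusflow/Athene-V2-Agent",
        "messages": [{"role": "user", "content": user_query}],
        "tools": [WEATHER_TOOL],
        "temperature": 0,  # greedy decoding, per the prompting tips
    }

def submit(body: dict, endpoint: str = "http://localhost:8000/v1/chat/completions") -> dict:
    """POST the request body to the vLLM server and return the parsed response."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The tool calls, if any, arrive in `response["choices"][0]["message"]["tool_calls"]` as in the standard OpenAI response schema.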

WARNING: Athene-V2-Agent uses a CUSTOM prompting style that is baked into the custom docker image, as the executable calls are extracted from the model's generated planning. For best performance, please use the docker image above for Athene-V2-Agent, including when benchmarking the model. Using the HuggingFace tokenizer's chat template will yield suboptimal results for agent usecases. Please reach out to us on Discord if you run into any issues!

Examples

An example Weather agent can be found here: Link. This example shows how to handle Athene's responses for queries that are and are not answerable by the current tools.

An example extraction and RAG agent can be found here: Link. This example shows how to handle RAG-based queries with a Wikipedia tool.

Prompting Tricks

  1. When providing docstrings to Athene-V2-Agent, make them well-indented, detailed, and well-written, as this can improve accuracy.
  2. We strongly recommend using the docker image to interact with Athene-V2-Agent.
  3. We strongly recommend disabling sampling (i.e., greedy decoding) when prompting Athene-V2-Agent.
  4. We strongly recommend a temperature of zero.
  5. Athene-V2-Agent is designed to work within systems, so it is tuned to be very controllable via the instructions specified in the tools, including for broad behaviors (like rejecting queries, or chatting).
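The sampling recommendations above can be applied to any OpenAI-style request body like this (a minimal sketch; the helper name is an assumption):

```python
# Recommended sampling settings for Athene-V2-Agent, per tips 3 and 4 above.
# Setting temperature to 0 makes decoding effectively greedy (no sampling).
SAMPLING_PARAMS = {
    "temperature": 0,
    "top_p": 1.0,  # leave nucleus sampling disabled
}

def with_recommended_sampling(request_body: dict) -> dict:
    """Return a copy of an OpenAI-style request body with greedy decoding."""
    return {**request_body, **SAMPLING_PARAMS}
```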

Handling Irrelevant Queries

The Athene-V2-Agent model is strongly tuned so that its behavior can be controlled through the provided tools, making it easy to integrate into systems.

Therefore, the model won't reject out-of-domain queries by default; it will try its best to issue the most relevant call. If you expect irrelevant user queries and want the model to reject them, you can provide a no-op function. For example, something like this would work:

{
    "type": "function",
    "function" : {
      "name": "no_relevant_function",
      "description": "Call this when no other provided function can be called to answer the user query.",
      "parameters": {
        "type": "object",
        "properties": {
          "user_query_span": {
            "type": "string",
            "description": "The part of the user_query that cannot be answered by any other function calls."
          }
        },
        "required": ["user_query_span"]
      }
    }
}
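A minimal sketch of how a caller might then interpret the model's response (the tool-call shape follows the OpenAI chat-completions schema; the rejection message and helper name are assumptions):

```python
# Hypothetical dispatcher: reject out-of-domain queries flagged via the
# no_relevant_function no-op tool defined above.
import json

def handle_tool_call(tool_call: dict) -> str:
    """Dispatch one OpenAI-style tool call.

    `tool_call` mirrors the shape of
    response["choices"][0]["message"]["tool_calls"][i].
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    if name == "no_relevant_function":
        # The model flagged the unanswerable span of the user query.
        return f"Sorry, I can't help with: {args['user_query_span']}"
    # Otherwise, route to the real tool implementation (not shown here).
    return f"executing {name}({args})"
```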

Please see the example Link here for a demo of this.

Handling Chat With FC

Since the Athene-V2-Agent model is strongly tuned to be controllable, we wanted to ensure that it does not chat unless explicitly instructed to do so. You can enable chatting by adding a chat tool and allowing its use in the system prompt:

{
    "type": "function",
    "function": {
        "name": "chat",
        "description": "Call this tool when you want to chat with the user. The user won't see anything except for whatever you pass into this function. You can use this tool to ask for more information when insufficient information is presented, and to send the final results back to the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "chat_string": {
                    "type": "string",
                    "description": "The chat message to send to the user to chat back to them."
                }
            },
            "required": ["chat_string"]
        }
    }
}

And add a system prompt like the following (but feel free to experiment to make Athene-V2-Agent behave the way you want it to!):

{"role" : "system", "content" : "You can use the chat tool to ask the user for more information, and to send the final results."},
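Putting the chat tool and system prompt together, the caller's side of the loop might look roughly like this (the routing logic and helper name are assumptions for illustration):

```python
# Hypothetical router: split the model's tool calls into user-facing chat
# messages and real tool work. Only the chat tool's argument is ever shown
# to the user, as the tool description above specifies.
import json

def route_tool_calls(tool_calls: list) -> list:
    """Return a list of (kind, payload) pairs for the caller to act on.

    Each element of `tool_calls` mirrors the shape of
    response["choices"][0]["message"]["tool_calls"][i].
    """
    actions = []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        if name == "chat":
            actions.append(("reply_to_user", args["chat_string"]))
        else:
            actions.append(("execute_tool", (name, args)))
    return actions
```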

Please see the example Link here for a demo of this.

Contact

Please join our Discord Channel to reach out for any issues and comments!

Base model: Qwen/Qwen2.5-72B