Helicone Community Page


Gemini UsageMetadata

@Cole , to achieve this functionality (to unblock cost metrics, add usageMetadata to the async log), how are we supposed to pass the usageMetadata?
Creating a thread to better track this conversation.
Hi! You are currently passing in usageMetadata in the async log response. On the Helicone request page, your responses look like this:

Plain Text
[
  {
    "usageMetadata": {
      "promptTokenCount": 548,
      "candidatesTokenCount": 281,
      "totalTokenCount": 829
    }
  }
]


This seems to be hardcoded at the moment. We can change that. Vertex AI returns a usageMetadata object in its responses. Here is an example:

Plain Text
{
    "candidates": [
        {
            "content": {
                "role": "model",
                "parts": [
                    {
                        "text": "I am sorry, I cannot fulfill this request. I do not have access to real-time information such as current movie showtimes. \n\nWould you like me to try searching for something else? \n"
                    }
                ]
            },
            "finishReason": "STOP",
            "safetyRatings": []
        }
    ],
    "usageMetadata": {
        "promptTokenCount": 9,
        "candidatesTokenCount": 42,
        "totalTokenCount": 51
    }
}


You can see the usageMetadata JSON object is there. If you grab it out of the Vertex AI response and pass it the same way you are currently passing the hardcoded usageMetadata, you will start getting cost metrics.
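A minimal sketch of what that could look like in TypeScript is below. The logVertexCall helper, its parameters, and the payload shape are illustrative assumptions rather than the exact Helicone async-log API; the point is simply to copy usageMetadata from the real Vertex AI response into the logged response body instead of hardcoding it.

TypeScript
// Sketch only: logVertexCall, asyncLogUrl, and the payload shape are assumptions,
// not the exact Helicone SDK API.
type VertexResponse = {
  candidates: unknown[];
  usageMetadata?: {
    promptTokenCount: number;
    candidatesTokenCount: number;
    totalTokenCount: number;
  };
};

async function logVertexCall(
  asyncLogUrl: string,        // your Helicone async log endpoint (assumed)
  heliconeApiKey: string,
  requestBody: unknown,       // the body you sent to Vertex AI
  vertexResponse: VertexResponse
): Promise<void> {
  // Pass the real usageMetadata through instead of a hardcoded object,
  // so the logged response carries the actual token counts.
  const payload = {
    providerRequest: { json: requestBody },
    providerResponse: {
      json: {
        ...vertexResponse,
        usageMetadata: vertexResponse.usageMetadata,
      },
    },
  };

  await fetch(asyncLogUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${heliconeApiKey}`,
    },
    body: JSON.stringify(payload),
  });
}

Once the real token counts are in the logged response, Helicone can use them for the cost calculations.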
@Cole , do you guys support the gemini-1.5-flash-001 model for logging?
Hi, yes we do!
Attachment: Screenshot_2024-06-11_at_10.49.23_AM.png