Helicone Community Page

Updated 11 months ago

Correctly Identifying Deployed Model on Helicone with Azure OpenAI

Hey folks, could I get some help?
I have a setup on Helicone where the OpenAI Python library calls the Azure OpenAI service.
I'm wondering if there is a way to correctly identify my deployed model, because it shows gpt-4 instead of gpt-4-preview, and I'm not sure how Helicone retrieves the model for Azure OpenAI.
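For reference, my setup looks roughly like this (keys, resource name, deployment name, and API version are placeholders, and the Helicone headers are as I remember them from the Azure integration docs):
```python
from openai import AzureOpenAI

# Route Azure OpenAI traffic through Helicone's gateway so requests get logged.
client = AzureOpenAI(
    api_key="<AZURE_OPENAI_API_KEY>",
    api_version="2023-12-01-preview",          # placeholder API version
    azure_endpoint="https://oai.helicone.ai",  # Helicone proxy in front of Azure
    default_headers={
        "Helicone-Auth": "Bearer <HELICONE_API_KEY>",
        # Tells Helicone which Azure resource to forward to:
        "Helicone-OpenAI-Api-Base": "https://<resource>.openai.azure.com",
    },
)

response = client.chat.completions.create(
    model="gpt-4-preview-1106",  # our Azure deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
```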
14 comments
@ayoKho @Justin any thoughts here?
I can see the request correctly showing gpt-4-preview-1106 when the request is Pending, but it changes to gpt-4 on success.
One example ID for reference: e235ce5e-49f1-4030-96b8-e7cac01a3933

We have been using the platform for a month with ~2k daily requests and are planning to enable it for all our customers.
But we need to measure our cost accurately with the correct model (there is a cost difference between gpt-4 and gpt-4-preview-1106, which is the newly released Turbo version).
Oh, we prioritize the model in the response over the model in the request. Thanks for flagging this!
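In other words, roughly this (illustrative payloads with assumed values, not taken from your actual request):
```python
# What the client sends: the Azure deployment name (shown while Pending).
request_body = {
    "model": "gpt-4-preview-1106",
    "messages": [{"role": "user", "content": "Hello"}],
}

# What Azure reports back: the base model family, not the deployment.
response_body = {
    "model": "gpt-4",
    "choices": [{"message": {"role": "assistant", "content": "Hi!"}}],
}

# Helicone currently prefers response_body["model"] over request_body["model"],
# so the logged model flips from gpt-4-preview-1106 to gpt-4 on success.
```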
That is super frustrating! This is great to know, though.
A custom header could be a quick way for us to force it, like Helicone-Custom-Model

Also, on the same note: it sets the model to N/A when the request fails as well (for rate limiting, for example).

Let me know if you folks have an ETA.
This is a really smart idea! I just cut a small PR to allow exactly this.
Our CTO just let me know we have this already: Helicone-Model-Override
Let me know if you have any questions or issues using it.
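Something like this should do it (the header value is just whatever model name you want recorded; here I'm assuming the name from your messages above):
```python
response = client.chat.completions.create(
    model="gpt-4-preview-1106",  # Azure deployment name
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        # Force Helicone to log (and price) this model name instead of
        # whatever the response body reports:
        "Helicone-Model-Override": "gpt-4-preview-1106",
    },
)
```
Alternatively, you can set it once in the client's default_headers so every request picks it up without per-call changes.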
Sure! Sounds good. I will give it a try tomorrow morning. Thanks for the help πŸ™‚
no problem! πŸ™‚