Helicone Community Page

Updated 11 months ago

Segmenting requests on the Dashboard view by LLM provider

Is there any way to segment our requests on the Dashboard view by the LLM provider used to fulfill the request? We are starting to split requests across OpenAI and Azure and it would be good to be able to see both separated in the dashboard.
5 comments
Unfortunately, since it's OAI GPT-4 on both providers, the "model"-based segmentation won't work for this.
The best way to do this would be to use our custom properties filter and tag the provider!

https://docs.helicone.ai/features/advanced-usage/custom-properties#what-are-custom-properties

Let me know if you have any questions about this!
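
For example, here is a minimal sketch of tagging the provider via a custom property header, assuming the OpenAI Python SDK routed through the Helicone proxy (the property name "Provider" and its values are arbitrary choices for illustration):

```python
# Sketch: tag each request with a custom "Provider" property so it can be
# filtered in Helicone. Assumes the OpenAI Python SDK (v1+) with requests
# routed through the Helicone proxy; adjust base_url and keys for your setup.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",  # Helicone proxy for OpenAI
    default_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        # Any Helicone-Property-* header becomes a filterable custom property
        "Helicone-Property-Provider": "openai",  # use "azure" on the Azure client
    },
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
```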
Thanks for the note @ayoKho!

On our Dashboard view, I'm only seeing these filters and not any of the custom properties we send:
https://share.commandbar.com/xSpTmfhp
("model", "status", "latency" etc)
Hi @jluxenberg, yes, currently those properties will only show up on the Requests page. We are actively working on reworking our dashboard and queries to handle this, but we have not released it yet.

In the meantime, another way you can force the models to show up separately on the dashboard is to use the Helicone-Model-Override header and set the model name manually. Sorry for the inconvenience!

https://docs.helicone.ai/helicone-headers/intro
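
As a sketch along the same lines, assuming the same OpenAI-SDK-plus-Helicone-proxy setup as above (an Azure client's actual base URL and auth will differ), the override value shown is just an illustrative label:

```python
# Sketch: give each provider a distinct model label on the dashboard by
# overriding the recorded model name with Helicone-Model-Override.
import os
from openai import OpenAI

azure_client = OpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        # Logged as "gpt-4-azure" in Helicone, so it charts separately
        # from the plain "gpt-4" requests sent to OpenAI directly.
        "Helicone-Model-Override": "gpt-4-azure",
    },
)
```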