Is there any way to segment our requests on the Dashboard view by the LLM provider that fulfilled them? We are starting to split requests across OpenAI and Azure, and it would be helpful to see the two separated in the dashboard.
Hi @jluxenberg, yes, currently those properties will only show up on the Requests page. We are actively reworking our dashboard and queries to handle this, but the change has not been released yet.
In the meantime, one workaround is to set the Helicone-Model-Override header and manually give each provider a distinct model label, which will separate them on the dashboard. Sorry for the inconvenience!
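For example, here is a minimal sketch using the OpenAI Node SDK pointed at the Helicone proxy. The base URL and Helicone-Auth header follow the standard Helicone setup; the override value ("gpt-4-openai") is just an illustrative label you would choose yourself:

```ts
import OpenAI from "openai";

// Route traffic through the Helicone proxy and tag it with an explicit
// model label so this client's requests can be distinguished on the dashboard.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    // Illustrative label; pick any value that identifies the provider.
    "Helicone-Model-Override": "gpt-4-openai",
  },
});

async function main() {
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello" }],
  });
  console.log(response.choices[0].message.content);
}

main();
```

You would create a second client for the Azure-backed requests with a different override value (e.g. "gpt-4-azure"), and the two labels should then appear as separate models on the dashboard.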