Helicone Community Page

Updated 5 months ago

Can I see your full code with LLM request?

Oh, can I see your full code with the LiteLLM request?
18 comments
hey!

this is the code:
Hi @Caco! Instead of extra_headers, you need to provide metadata and pass the Helicone headers the same way
Attachment
image.png
Like this:

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
    metadata={"Helicone-Property-Hello": "World"},
)
thanks!

that works for async non-streaming gpt-4o-mini
But async streaming Sonnet doesn't log
I guess it's because LiteLLM doesn't support async streaming for Anthropic, let me double check it
it works for me! just the logs don't work
Hm, are you passing the correct Helicone API key?
yep, passing them as

litellm.headers = {
    "Helicone-Auth": f"Bearer {helicone_api_key}",
}

and in the metadata:

metadata = {
    "Helicone-Auth": f"Bearer {helicone_api_key}",
}
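For the async streaming case being debugged here, a minimal sketch of the setup under discussion. The model id, environment-variable names, and the helper function are assumptions for illustration; litellm.acompletion with stream=True is litellm's documented async streaming entry point:

```python
import asyncio
import os


def helicone_config(helicone_api_key: str) -> tuple[dict, dict]:
    """Return (headers, metadata) carrying the Helicone auth, as in the thread."""
    headers = {"Helicone-Auth": f"Bearer {helicone_api_key}"}
    metadata = {"Helicone-Auth": f"Bearer {helicone_api_key}"}
    return headers, metadata


async def stream_sonnet() -> str:
    import litellm  # pip install litellm

    headers, metadata = helicone_config(os.environ["HELICONE_API_KEY"])
    litellm.headers = headers

    # acompletion(..., stream=True) returns an async iterator of chunks.
    response = await litellm.acompletion(
        model="claude-3-5-sonnet-20240620",  # assumed model id
        messages=[{"role": "user", "content": "Hi 👋"}],
        metadata=metadata,
        stream=True,
    )
    chunks = []
    async for chunk in response:
        delta = chunk.choices[0].delta.content
        if delta:
            chunks.append(delta)
    return "".join(chunks)


# Only run the live call when real keys are configured in the environment.
if os.environ.get("HELICONE_API_KEY") and os.environ.get("ANTHROPIC_API_KEY"):
    print(asyncio.run(stream_sonnet()))
```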
I tried removing litellm.drop_params = True, but it doesn't work either
Are the logs not appearing only for Anthropic, or also for OpenAI?
OpenAI without streaming works. Let me test OpenAI with streaming.
This is the result of OpenAI with streaming πŸ˜…
Attachment
Screenshot_2024-07-19_at_09.54.32.png
(It was just one request)
Can you please add me to your org so I can take a look? stefan@helicone.ai
done, the org is perhaps-dev
Are there any requests shown for Anthropic non-streaming/streaming?
Let me see if Sonnet without streaming logs
Yep, it logs without streaming