Helicone Community Page

Updated 12 months ago

Unexpected behavior with Helicone and streaming responses

I can't figure out why Helicone shows a different response than what I log when stream=true. I've verified that I log a simple object with id and completion properties, yet in Helicone I get a completely different one. Why?

This is blocking the release of a new feature, and I must admit it's pretty frustrating. The request itself is logged fine. I use async logging and pass this object to providerResponse.json; it has worked for everything else.

It also works if I omit the stream: true property. I can't use the gateway, since it's my own inference server hosted on a bare IP address (no domain).

What can be done about this rather unexpected behaviour?

For context: I'm using Vercel AI to stream response to the frontend and thus I don't have access to the direct streaming response - only the full string upon completion.
3 comments
Hi Luka!

We do this to reconstruct the body and calculate the usage tokens.

I would recommend omitting the stream: true field for now, to prevent the async logger from attempting to reconstruct the body for you.

If you want to include this field for some reason, please let us know and we can figure out a solution that makes sense.
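As a sketch of the workaround above: strip the stream flag from the object before handing it to the async logger, so it is treated as a plain JSON body. The type and helper names here are illustrative, not Helicone's actual API.

```typescript
// Hypothetical shape of the request object being logged.
type LoggedRequest = { model: string; stream?: boolean; [k: string]: unknown };

// Return a copy of the request without the `stream` flag, so the
// async logger does not attempt to reconstruct a streamed body.
function withoutStreamFlag(req: LoggedRequest): LoggedRequest {
  const { stream, ...rest } = req;
  return rest;
}

const request: LoggedRequest = {
  model: "gpt-4-1106-preview",
  stream: true,
  messages: [{ role: "user", content: "hello" }],
};

const logged = withoutStreamFlag(request);
// `logged` carries no stream flag; the simple { id, completion }
// response object can then be logged as-is.
```

Since the frontend streaming is handled by Vercel AI anyway, dropping the flag from the logged copy does not change what the user sees.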
Hey @Justin, this is no longer working. Tokens are not counted for the gpt-4-1106-preview and gpt-4-0125-preview models.
Hi! Sorry about the issue, Luka. Would you mind adding me (justin@helicone.ai) to your org and letting me know your org name so we can debug? @Luka