Helicone Community Page

Luka
Offline, last seen 2 weeks ago
Joined August 29, 2024
Error 400 when calling Get User Data (https://docs.helicone.ai/rest/user/post-v1userquery)

I tried both in JS and Postman.
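Here is roughly the JS call that fails (the endpoint path is my reading of the docs URL above, and the body is a minimal guess, so the exact fields may be off):

```ts
// Minimal reproduction of the failing call. The endpoint is what the docs URL
// above suggests (POST /v1/user/query on api.helicone.ai); the request body is
// a guess at a minimal query; the 400 comes back regardless of what I send.
const res = await fetch("https://api.helicone.ai/v1/user/query", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({}), // also tried passing a filter object here
});

console.log(res.status);       // 400
console.log(await res.text()); // error body, in case it says what's missing
```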
3 comments
Luka

tokens not found

I'm using streaming from OpenAI and logging async. I send both the collected response as a single message and the streamed_chunks. Usage is set to -1. Isn't Helicone responsible for calculating the tokens?
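To make it concrete, the response object I log looks roughly like this (a simplified sketch; the placeholder values stand in for what I actually collect from the stream):

```ts
// Simplified sketch of the response object I log asynchronously once the
// OpenAI stream has finished. Placeholder values stand in for the real data.
const collectedText = "Hello! How can I help you today?"; // full text joined from the chunks
const streamedChunks: unknown[] = [];                     // the raw chunk objects, kept as-is

const loggedResponse = {
  id: "chatcmpl-abc123",           // id taken from the first chunk
  object: "chat.completion",
  model: "gpt-4o-mini",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: collectedText }, // collected response as one message
      finish_reason: "stop",
    },
  ],
  // -1 everywhere because I expect Helicone to calculate the token counts
  usage: { prompt_tokens: -1, completion_tokens: -1, total_tokens: -1 },
  streamed_chunks: streamedChunks, // the streamed chunks, sent alongside the collected message
};
```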
5 comments
Luka

Error logs

Example: the row with meta is logged on May 10th, the ones above it are logged on May 13th (either there were no requests in between or you lost them). I hope you can see what is going on. 🤞
6 comments
I can't figure out why Helicone invents a different response from what I am logging when stream=true. I have verified that I log a simple object with id and completion properties, and yet in Helicone I get a completely different one. Why??

This is blocking me from releasing the new feature and I must admit it's pretty frustrating. The request is logged just fine. I use async logging and pass this object to providerResponse.json. It's been working for everything else.

It also works if I omit the stream: true property. I cannot use the gateway since it's my own inference server hosted on an IP address (no domain).

What can be done about this rather unexpected behaviour?

For context: I'm using Vercel AI to stream the response to the frontend, so I don't have access to the direct streaming response, only the full string upon completion.
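For completeness, this is roughly what I pass to providerResponse.json once the full string is available (simplified; onStreamFinished is my own callback name, not something from the Vercel AI SDK):

```ts
// Simplified sketch of how I build the logged objects. onStreamFinished is my
// own callback, invoked by my Vercel AI integration with the final text only;
// I never see the individual chunks.
function onStreamFinished(fullText: string, requestBody: Record<string, unknown>) {
  const providerRequest = {
    url: "http://203.0.113.10/v1/chat/completions", // my own inference server, reachable only by IP
    json: requestBody,                              // this includes stream: true
    meta: {},
  };

  const providerResponse = {
    json: {
      id: "resp-123",        // the simple object I actually log:
      completion: fullText,  // just id and completion
    },
    status: 200,
  };

  // providerRequest and providerResponse then go into the async logging call.
  // The request shows up fine in Helicone; the logged response only matches
  // this object when the request does NOT contain stream: true.
}
```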
3 comments