I'm using streaming from OpenAI and logging asynchronously. I send both the collected response as a single message and the streamed_chunks. Usage is set to -1. Isn't Helicone responsible for calculating the tokens?
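For context, a minimal sketch of the setup described above (assumptions: `log_async` and the payload shape are illustrative placeholders, not Helicone's actual API). Chunks arriving from the stream are collected, joined into a single message, and logged alongside the raw chunks, with usage set to -1 because the streaming response doesn't include token counts client-side:

```python
def log_async(payload: dict) -> dict:
    # Placeholder for the async call to the logging/observability service.
    # The real integration would POST this payload instead of returning it.
    return payload

# Chunks as they would arrive from a streaming chat completion:
streamed_chunks = ["Hel", "lo, ", "world", "!"]

payload = log_async({
    "response": "".join(streamed_chunks),  # collected response as one message
    "streamed_chunks": streamed_chunks,    # raw stream pieces, in order
    "usage": -1,                           # sentinel: token counting deferred to the logger
})
```

The `usage: -1` sentinel signals that the client did not count tokens and expects the logging side to compute them from the collected text.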
Hi, apologies, this has been resolved. Our token counter service went down and the fallbacks failed. We’re looking into improvements to prevent it from happening again.