Helicone Community Page

Marshall
Joined November 6, 2024
Does anyone know whether Helicone injects a default temperature when passing through OpenAI calls? The o1 models don't support temperature yet, so I'm leaving it out, but I'm still receiving an error that temperature isn't supported. I'm also using langchain, so I'm not sure whether it's langchain or Helicone injecting the temp...
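For anyone trying to isolate this: a minimal sketch of a client-side filter that strips sampling parameters before a request reaches an o1 model, so a default injected anywhere upstream gets dropped. The unsupported-parameter set and the helper name are my own assumptions, not an official OpenAI, Helicone, or langchain list.

```python
# Sketch: drop sampling parameters that o1-family models reject.
# O1_UNSUPPORTED is an assumed list, not an official one.
O1_UNSUPPORTED = {"temperature", "top_p", "presence_penalty", "frequency_penalty"}

def sanitize_for_o1(model: str, params: dict) -> dict:
    """Return a copy of `params` with fields the o1 models reject removed."""
    if model.startswith("o1"):
        return {k: v for k, v in params.items() if k not in O1_UNSUPPORTED}
    return dict(params)

print(sanitize_for_o1("o1-mini", {"temperature": 0.7, "max_completion_tokens": 256}))
# -> {'max_completion_tokens': 256}
```

Running the request payload through a filter like this before it hits the proxy at least tells you whether the error is coming from a field you're sending versus one injected downstream.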
4 comments
Question on streaming. I'm using Anthropic and, when I don't stream, everything is fine. When I DO stream and the response takes more than 60 seconds to come back (it seems), the completion token count shows '1' and the status is marked as Cancelled, though the full response seems to be captured by Helicone. Not a big deal because I don't NEED to stream, but I'm not sure if this is expected behavior.
1 comment
I'm getting a 524 from Cloudflare for any Anthropic requests that take > 100 seconds. Is this expected, and is there a setting that can be changed?
12 comments