Helicone Community Page

assaf
I noticed that when using gpt-4-vision-preview and sending image parts, the token calculations come out negative. The token counts are positive when sending messages with no image parts, and other providers like Anthropic seem to count tokens correctly.
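For reference, the requests in question look roughly like this (a sketch using the OpenAI Node SDK; the Helicone baseURL, headers, and env var names here are illustrative, not my exact setup):

```ts
// Sketch of a gpt-4-vision-preview request with an image part routed
// through a Helicone proxy; baseURL and env var names are illustrative.
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://oai.hconeai.com/v1',
  defaultHeaders: {
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const completion = await openai.chat.completions.create({
  model: 'gpt-4-vision-preview',
  max_tokens: 300,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image_url', image_url: { url: 'https://example.com/photo.png' } },
      ],
    },
  ],
});
```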
1 comment
When streaming responses from Claude 3 using the Anthropic Typescript SDK, after several tokens, I get an error:

Plain Text
Could not parse message into JSON: 
From chunk: [ 'event: content_block_delta' ]
 ⨯ Error: failed to pipe response


This error doesn't happen if I go directly to Anthropic's API by omitting the baseURL and defaultHeaders in the client constructor:

```ts
import Anthropic from '@anthropic-ai/sdk';

function getAnthropic() {
  return new Anthropic({
    apiKey: process.env.ANTHROPIC_API_KEY,
    // baseURL: 'https://anthropic.hconeai.com/',
    // defaultHeaders: {
    //   'Helicone-Auth': `Bearer ${process.env.ANTHROPIC_API_KEY}`,
    // },
  });
}
```
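For context, the streaming call itself is along these lines (a sketch; the model name and prompt are placeholders):

```ts
// Sketch of the streaming usage that hits the error when the Helicone
// baseURL/defaultHeaders above are enabled (model name is illustrative).
async function streamClaude() {
  const anthropic = getAnthropic();

  const stream = await anthropic.messages.create({
    model: 'claude-3-opus-20240229',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Write a haiku about streams.' }],
    stream: true,
  });

  // The failure shows up a few content_block_delta events in.
  for await (const event of stream) {
    if (event.type === 'content_block_delta' && event.delta.type === 'text_delta') {
      process.stdout.write(event.delta.text);
    }
  }
}
```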
4 comments
Hey guys - does Helicone support logging of messages with multiple text parts? I'm sending in something that looks like
Plain Text
{
  type: 'text',
  text: '<student_answer_question_part1>1</student_answer_question_part1>'
},
{
  type: 'text',
  text: '<student_answer_question_part2>2</student_answer_question_part2>'
},

in the message, and it seems to be received and understood by the LLM, but only the first text part is being logged in Helicone.
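For completeness, the full request is shaped roughly like this (sketched with the OpenAI chat format; the model and surrounding fields are illustrative):

```ts
// Rough shape of the full request; both text parts reach the model,
// but only the first shows up in the Helicone request log.
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: '<student_answer_question_part1>1</student_answer_question_part1>',
        },
        {
          type: 'text',
          text: '<student_answer_question_part2>2</student_answer_question_part2>',
        },
      ],
    },
  ],
});
```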
1 comment