Helicone Community Page

OpenAI calls via llama-index log to Helicone, but this does not seem to be possible for Anthropic calls. The docs don't mention a specific API base for Anthropic, so I've tried f"https://anthropic.helicone.ai/{HELICONE_API_KEY}/v1", which wasn't working. Am I correct that there is no Helicone-llama-index integration for Anthropic models?
cc: @Cole
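
For comparison, here is a minimal sketch of the base-URL-plus-header pattern Helicone uses for Anthropic with the plain TypeScript SDK rather than llama-index; the Helicone key goes in a Helicone-Auth header, not in the URL path. The model name and exact base URL below are assumptions to verify against the docs.

Plain Text
// Hypothetical sketch, not the llama-index integration: route the Anthropic SDK
// through Helicone's Anthropic gateway via baseURL + Helicone-Auth header.
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: "https://anthropic.helicone.ai", // assumed gateway; the SDK appends /v1/... itself
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const message = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20240620", // placeholder model name
  max_tokens: 256,
  messages: [{ role: "user", content: "Hello" }],
});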
11 comments
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
helicone 1.0.14 requires openai<0.28.0,>=0.27.0, but you have openai 1.50.1 which is incompatible.
2 comments
I am unable to remove members from our org. The UI shows "Member removed successfully" but nothing changes. I've tested in a different browser, with a full page refresh, etc.

The problem is: I want to upgrade to the Pro plan with just one member, since we're exceeding the free plan quota. But without removing members, my price becomes $120/mo, whereas I actually only need one seat.

I'm afraid that running a free account above quota will start getting our LLM requests rejected while I'm unable to upgrade because of this issue.
4 comments
We have been using Helicone for over a year now, and we are encountering an issue where some data seems incorrect because the First/Last Active dates do not show the year (e.g. in the picture).
1 comment
As for the title, I have been trying to use the filter multiple times over the last few months, but it doesn't seem to be working correctly.
The connection below doesn't work although everything is set up correctly.

I even have to add the "apiKey" at the root of the config object, because otherwise I get an error from OpenAI that the apiKey is undefined.

This also isn't mentioned in the Helicone Azure docs.

Any ideas?


import OpenAI from "openai"

const openai = new OpenAI({
  baseURL: "https://oai.helicone.ai/openai/deployments/[DeploymentName]",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "Helicone-OpenAI-API-Base": "https://chatarmin-ai.openai.azure.com",
    "api-key": process.env.AZURE_API_KEY,
    "Helicone-Cache-Enabled": "true",
  },
  apiKey: process.env.AZURE_API_KEY,
  defaultQuery: { "api-version": "2024-05-13" },
})
5 comments
I'm using Azure OpenAI with streaming and the token usage information doesn't appear.
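
One possible cause, sketched below under the assumption that the request goes through an OpenAI-compatible chat completions endpoint: when streaming, usage is only emitted if stream_options.include_usage is set, so neither the client nor the proxy ever sees token counts otherwise. Whether your Azure api-version supports stream_options is something to verify.

Plain Text
// Minimal sketch (Node SDK): ask the API to append a usage chunk to the stream.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.AZURE_API_KEY }); // plus your Helicone/Azure baseURL and headers

const stream = await openai.chat.completions.create({
  model: "gpt-4o", // placeholder deployment/model name
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
  stream_options: { include_usage: true }, // final chunk then carries token counts
});

for await (const chunk of stream) {
  if (chunk.usage) {
    console.log(chunk.usage); // { prompt_tokens, completion_tokens, total_tokens }
  }
}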
5 comments
The Async Integration, which is the only supported way of logging Mistral, Gemini, Cohere, AWS Bedrock, and a few other models, relies on OpenTelemetry, which doesn't seem to support edge environments
1 comment
This shouldn't be an error if the response is 200, should it?
I have the following:
Plain Text
 const result = await generateText({
      model: openai("gpt-4-turbo-2024-04-09"),
      headers: {
        Environment: process.env.NODE_ENV,
        "User-Prompt": args.userPrompt,
        "Block-Id": args.blockId,
...
}

Everything is working fine except that I can't find Environment being logged; I just see the Helicone env variables.
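
A sketch of one likely fix, assuming the goal is for these values to show up as custom properties in Helicone: arbitrary headers aren't surfaced, but headers prefixed with Helicone-Property- are (the same convention as the Python headers further down this page). The args object is a stand-in for the original snippet's inputs.

Plain Text
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Stand-in for the values referenced in the original snippet.
const args = { userPrompt: "Summarize this block", blockId: "block_123" };

const result = await generateText({
  model: openai("gpt-4-turbo-2024-04-09"),
  prompt: args.userPrompt,
  headers: {
    // Only Helicone-Property-* headers are logged as custom properties.
    "Helicone-Property-Environment": process.env.NODE_ENV ?? "development",
    "Helicone-Property-User-Prompt": args.userPrompt,
    "Helicone-Property-Block-Id": args.blockId,
  },
});

console.log(result.text);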
4 comments
Most likely this is due to the prompt being empty.
2 comments
I'm able to get a response from OpenAI properly - it even stores in MinIO fine... but my dashboard is empty.

Do I need Kafka running or something?
6 comments
Setting a custom date range on the requests page doesn't work if the two dates are on the same day
Hey guys, I found an issue with token calculation which affects cost projection. Instead of 2-4k tokens we get ~22 😀. I noticed this for all OpenAI models almost a month ago (streaming is on). Have you seen the same before 😀? We're using the proxy integration (https://docs.helicone.ai/integrations/openai/python) with the following headers:

headers = {
    "Helicone-Auth": f"Bearer AUTHKEY",
    "Helicone-Cache-Enabled": "true",
    "Helicone-Property-CUSTOMPROPERTY": "custom_prop_value",
    "Helicone-User-Id": user_id
}
33 comments
I am getting 200 Error responses from Anthropic (I'm very confused by it myself...).

Unfortunately, when filtering, they are treated the same as 200 Success responses. That means I can't search for them except by filtering for all 200 responses and then scrolling through. This doesn't work great given there are 100,000 status-200 calls in the time period I'm looking at 🥲
2 comments
Plain Text
{"helicone-message":"Helicone ran into an error servicing your request: TypeError: Cannot read properties of undefined (reading '0')","support":"Please reach out on our discord or email us at [email protected], we'd love to help!","helicone-error":"{}"}


Plain Text
import { createAnthropic } from "@ai-sdk/anthropic";
import { createOpenAI } from "@ai-sdk/openai";

export const heliconeDefaultHeaders = {
  "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  "Helicone-Cache-Enabled": "true",
  "Helicone-Moderations-Enabled": "true",
  "Helicone-LLM-Security-Enabled": "true",
  "Helicone-Retry-Enabled": "true",
  "Helicone-Retry-Num": "7",
  "Helicone-Retry-Min-Timeout": "1000",
};

export const openaiprovider = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  headers: heliconeDefaultHeaders,
});

export const anthropicprovider = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: "https://anthropic.helicone.ai/v1",
  headers: heliconeDefaultHeaders,
});

export const groqprovider = createOpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseUrl: "https://groq.helicone.ai/openai/v1",
  headers: heliconeDefaultHeaders,
});
5 comments
Hey, is request.env a Ruby on Rails thing? Is this meant to be in the JS docs? Source: https://docs.helicone.ai/integrations/openai/javascript
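
For what it's worth, a minimal sketch of what the doc snippet presumably intends in plain Node, where environment variables come from process.env (request.env is not a Node API); this mirrors the proxy example at the bottom of this page.

Plain Text
import OpenAI from "openai";

const openai = new OpenAI({
  // Assumption: the docs mean ordinary environment variables, i.e. process.env in Node.
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});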
6 comments
You might want to fix this in the authorize button:
What is chitalain?
6 comments
I configured Vertex AI (enterprise) correctly, but the requests do not appear in Helicone.
26 comments
Hello,

I'm new to Helicone and using the proxy integration via OpenAI's JavaScript SDK, but I'm seeing weird metrics.

Plain Text
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: request.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${request.env.HELICONE_API_KEY}`,
  },
});
4 comments