Helicone Community Page

Hi,
Someone at our organisation accidentally removed the admin role from all the users we have signed up on Helicone.
Is there a way to assign the admin and owner roles to a user of our organisation on Helicone?
I can't find any option to do so on the platform itself.
1 comment
Why can't I find the "Key Vault"?
What happens when I send the same LLM request before the initial one has finished processing? Does Helicone cache and return the initial promise for the second request, or will both be processed separately?
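For background: by default the proxy forwards each request to the provider independently, so two identical in-flight requests become two provider calls; deduplication only comes in once Helicone's response cache is enabled via a request header. A minimal sketch, assuming the `Helicone-Cache-Enabled` header from Helicone's caching docs (verify against the current docs):

```python
# Sketch: build Helicone proxy headers, optionally opting into the response
# cache. Assumption: "Helicone-Cache-Enabled: true" is the opt-in header;
# without it, concurrent identical requests are processed separately.

def helicone_headers(helicone_key: str, cache: bool = False) -> dict:
    """Return Helicone proxy headers, optionally enabling response caching."""
    headers = {
        "Helicone-Auth": f"Bearer {helicone_key}",
        "Content-Type": "application/json",
    }
    if cache:
        headers["Helicone-Cache-Enabled"] = "true"
    return headers


print(helicone_headers("sk-helicone-example", cache=True))
```

Note that even with caching on, a second request that arrives before the first response is written to the cache may still miss and be processed normally.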
I'd like to unsubscribe from the Prompts feature, but the toggle doesn't seem to work. I get a confirmation dialog, but when I confirm, nothing happens.
I think there's a slight UI issue in the filters dropdown when you select a value. If I start typing a value it appears, but initially it always says "No results found" even though the other properties have been selected. I would expect to already see a preset list here.
Is the system prompt supported by Helicone for Gemini 2.0 Flash?

I am having trouble setting it up; Helicone is returning errors saying it's not supported.

I am using the AI Studio implementation rather than the Vertex AI one.
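For reference, the Gemini REST API carries the system prompt in a top-level `system_instruction` field rather than inside `contents`, and the gateway should pass the body through unchanged. A minimal payload sketch under that assumption (field names from the Gemini API; verify against Helicone's Gemini docs):

```python
# Sketch: a generateContent body with a separate system prompt for Gemini.
# Assumption: the "system_instruction" field (Gemini REST API) is forwarded
# unchanged by the Helicone gateway.

def gemini_payload(system_text: str, user_text: str) -> dict:
    """Build a generateContent body with a separate system instruction."""
    return {
        "system_instruction": {"parts": [{"text": system_text}]},
        "contents": [{"role": "user", "parts": [{"text": user_text}]}],
    }


payload = gemini_payload("You are a terse assistant.", "Hello")
print(payload["system_instruction"]["parts"][0]["text"])
```

If the error persists with this shape, it may be worth checking whether the target model version accepts `system_instruction` at all, since older Gemini models did not.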
@Justin there's a security issue on the production env.

Try posting a request without Helicone-Auth; it'll work.

The problem is that our keys are going somewhere.
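To make the report reproducible: the claim is that the gateway forwards requests even when the `Helicone-Auth` header is absent. A hypothetical repro sketch (endpoint and model are placeholders; the actual network call is left commented out so the snippet stays inert):

```python
# Sketch: the same request with and without Helicone-Auth. If the report is
# accurate, both go through the gateway, i.e. the provider key is proxied
# without any Helicone authentication.
import json
import urllib.request


def build_request(provider_key, helicone_key=None):
    """Build headers/body for a gateway call; add Helicone-Auth only if given."""
    headers = {
        "Authorization": f"Bearer {provider_key}",
        "Content-Type": "application/json",
    }
    if helicone_key is not None:
        headers["Helicone-Auth"] = f"Bearer {helicone_key}"
    body = json.dumps({
        "model": "gpt-4o-mini",  # placeholder model
        "messages": [{"role": "user", "content": "ping"}],
    })
    return headers, body


if __name__ == "__main__":
    headers, body = build_request("sk-provider-example")  # no Helicone-Auth
    req = urllib.request.Request(
        "https://oai.helicone.ai/v1/chat/completions",  # placeholder endpoint
        data=body.encode(), headers=headers, method="POST",
    )
    # urllib.request.urlopen(req)  # uncomment to test against production
```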
5 comments
On self-hosting, I'm getting this error on jawn:
Error processing request 22c48647-7e9d-4d8c-a47b-674445c84344 for batch : Error fetching content from S3: {"error":"Error fetching content from S3: Forbidden, 403"}
Required Kafka environment variables are not set, KafkaProducer will not be initialized.

Any clue?
1 comment
Site down? The dashboard isn't loading.
9 comments
When do you think we can expect cost support for reasoning models? (o1, o3-mini, deepseek R1 via fireworks)
2 comments
Applying the filter status != 200 (success) gives an error.

TypeError: undefined is not an object (evaluating 'e.preview.response.slice')
1 comment
When filtering, I believe the page should reset to the first one, or at least to the latest page with data. I was stuck for a good five minutes trying to figure out why the filter wasn't working.
3 comments
Sorry for one more question. As suggested by the dashboard, I tried to get the requests from the API, but I got an error like this:

{"error":"No API key found","trace":"isAuthenticated.error"}

Could it be caused by the EU data region, or some other reason? I am sure that my API key is valid, as I am still using it.



import os

import requests

url = "https://api.helicone.ai/v1/request/query-clickhouse"

payload = {
    "filter": "all",
    "isCached": False,
    "limit": 10,
    "offset": 0,
    "sort": {"created_at": "desc"},
    "isScored": False,
    "isPartOfExperiment": False,
}
# Note: the API expects a Bearer token; sending the bare key is a likely
# cause of the "No API key found" error.
headers = {
    "authorization": f"Bearer {os.environ['HELICONE_API_KEY']}",
    "Content-Type": "application/json",
}

response = requests.post(url, json=payload, headers=headers)

print(response.text)
Hello, I found that the temperature and top_p values are not logged in the schema for Anthropic only, while others (such as OpenAI and Gemini) do have them. Is it a bug, or is there a way to enable it? Thanks!
I am getting this error when using Nillion LLMs; can anyone help me with this?
10 comments
I'm trying to download requests from Helicone, but I get this message in the request and response body. What can I do about this?

fetching body from signed_url... contact [email protected] for more information
2 comments
Has anyone had any luck getting the Docker Compose setup to work?

I tried following these instructions but am getting issues with yarn build while the jawn Dockerfile is building:

Plain Text
=> ERROR [jawn 10/10] RUN yarn build                                                                      20.6s

...
Plain Text
 ⠦ Service jawn                         Building                                                          127.6s 
 ✔ Service clickhouse-migration-runner  Built                                                              96.7s 
failed to solve: process "/bin/sh -c yarn build" did not complete successfully: exit code: 2


also tried the edits proposed here: https://github.com/Helicone/helicone/issues/2284#issuecomment-2503055622

but end up with other issues
Plain Text
> [jawn 10/10] RUN yarn build:
0.547 yarn run v1.22.22
0.586 $ tsc && cp src/utils/trace.proto dist/valhalla/jawn/src/utils/trace.proto
15.67 src/lib/handlers/OnlineEvalHandler.ts(102,11): error TS2322: Type 'string' is not assignable to type 'LlmSchema'.
15.67 src/lib/handlers/OnlineEvalHandler.ts(103,11): error TS2322: Type 'string' is not assignable to type 'LlmSchema'.
15.70 error Command failed with exit code 2.
15.70 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
[+] Running 0/1
 ⠼ Service jawn  Building                                                                                  16.5s 
failed to solve: process "/bin/sh -c yarn build" did not complete successfully: exit code: 2
4 comments
Hi, we have been processing a lot of gpt-4o vision requests, yet none of them load the images in the request view. As a result, I'm not able to analyze whether user-submitted requests are accurate. Any idea how to fix this? Thanks!

Example request ID: 8b851323-33e9-4235-9e6b-8d71e510a78f
2 comments
Hey there. Quick question on the self-hosted option. Is there any documentation on the features that are available?
2 comments
I'm not seeing the list of possible tools in the JSON view of requests anymore. The LLM is getting the tool list, because it's responding with a tool call, but I can't see it in the Helicone UI.
11 comments
Hi, in the Playground I'm getting an undefined response when using function tooling.
1 comment
Another thing: when using Anthropic on Vertex through Helicone, it cannot retrieve the number of tokens for the calls.
1 comment
Hi team, thanks for this amazing product.

I'm having an issue using Anthropic on Vertex with Helicone: I can no longer stream the responses; they arrive in one block. Here is my setup:

Plain Text
export const anthropic = new AnthropicVertex({
  baseURL: 'https://gateway.helicone.ai/v1',
  projectId: process.env.ANTHROPIC_VERTEX_PROJECT_ID,
  region: process.env.CLOUD_ML_REGION,
  googleAuth: new GoogleAuth(getAuthClient()),
  defaultHeaders: {
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
    'Helicone-Target-URL': `https://${process.env.CLOUD_ML_REGION}-aiplatform.googleapis.com`,
    'User-Agent': 'node-fetch',
  },
  fetch: fetchAnthropic,
  maxRetries: 3,
})


When I remove the Helicone baseURL, streaming works as expected. It would be great if you could fix that.
Unable to use Helicone with AWS Bedrock Claude 3.5 Sonnet v2 + LiteLLM
18 comments
Does anyone know whether Helicone injects a default temperature when passing through OpenAI calls? The o1 models don't support temperature yet, so I'm leaving it out, but I'm receiving an error that temperature isn't supported. I'm also using LangChain, though, and I'm not sure whether it's LangChain injecting the temperature or Helicone...
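One way to narrow this down is to assemble the request body yourself and confirm `temperature` is absent before it leaves your code; if the provider still complains, the parameter is being injected downstream (LangChain's chat wrappers have historically sent a default temperature unless configured otherwise, so that is worth ruling out first). A hypothetical sketch:

```python
# Sketch: build a chat-completions body that only includes temperature when
# it is explicitly set, so o1-style models never see the parameter.

def chat_body(model, messages, temperature=None):
    """Return a request body; temperature is omitted unless provided."""
    body = {"model": model, "messages": messages}
    if temperature is not None:
        body["temperature"] = temperature
    return body


body = chat_body("o1-mini", [{"role": "user", "content": "hi"}])
assert "temperature" not in body  # nothing injected on the client side
```

If a body built this way still triggers the error through the proxy but not when calling the provider directly, that would point at the gateway rather than the client.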
4 comments