Helicone Community Page

Is there any way to delete rows/requests? I have so many items now that it's lagging the UI badly enough to be unusable!
1 comment
J
Hello, I hope you are doing well.

I'm trying to use the API (api.helicone.ai/v1/request/query) to fetch my user requests, but I can't because of the API key.

I generated the key on my organization dashboard and use it to track my GPT requests, but when I try to fetch my information I get a 401 "No API key found" error.

Would you mind helping me understand this?
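A 401 "No API key found" from this endpoint usually means the Authorization header is missing or malformed. A minimal sketch of the query call, kept side-effect free until `send_query()` is invoked (the `filter`/`limit` body fields are my assumption of the request shape; verify against the API reference):

```python
import json
import os
import urllib.request

HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "sk-helicone-placeholder")

def build_query_request(limit: int = 10):
    """Headers and payload for POST /v1/request/query.

    The key must be sent as `authorization: Bearer <key>`; sending it
    bare, or under another header name, yields the 401 described above.
    """
    headers = {
        "authorization": f"Bearer {HELICONE_API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {"filter": "all", "limit": limit}  # field names assumed
    return headers, payload

def send_query():
    """POST the query; call this once HELICONE_API_KEY is set for real."""
    headers, payload = build_query_request()
    req = urllib.request.Request(
        "https://api.helicone.ai/v1/request/query",
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status, json.loads(resp.read())
```

Note this is the Helicone API key from the org dashboard, not the provider key you proxy with.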
5 comments
J
F
Dear Sir/Madam,

I hope this message finds you well.

I am reaching out regarding the retention period of user logs. According to the available information, it appears that logs are stored for a period of three months. I would like to inquire whether it is possible to extend this retention period indefinitely, and if so, whether this option would be available for an additional fee.

Thank you in advance for your assistance. I look forward to your response and am happy to provide any further details if needed.

Best regards,

Ferdinand Yao ALLOWAKOU
OUEBX SARL
2 comments
J
F
Hi, I read through the documentation but did not find a way to post custom log data to Helicone. Some LLM models are not supported by Helicone yet, so I would like to collect the usage data myself and send it over to Helicone. Can this be done? Thanks.
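Helicone does support ingesting logs for providers it doesn't proxy via a manual-logging endpoint you POST the request/response pair to. A sketch of the payload shape (the endpoint URL and field names are from memory of the custom-model integration docs, so treat them as assumptions to check):

```python
import os

HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "sk-helicone-placeholder")
LOG_URL = "https://api.worker.helicone.ai/custom/v1/log"  # assumed endpoint

def build_log_entry(model: str, prompt: str, completion: str,
                    start_s: float, end_s: float) -> dict:
    """Pair the provider request and response with timing so Helicone
    can render the entry as an ordinary request row."""
    return {
        "providerRequest": {
            "url": "custom-model-nopath",
            "json": {"model": model, "prompt": prompt},
            "meta": {},
        },
        "providerResponse": {
            "status": 200,
            "headers": {},
            "json": {"completion": completion, "model": model},
        },
        "timing": {  # seconds/milliseconds split; shape assumed from docs
            "startTime": {"seconds": int(start_s),
                          "milliseconds": int(start_s * 1000) % 1000},
            "endTime": {"seconds": int(end_s),
                        "milliseconds": int(end_s * 1000) % 1000},
        },
    }
```

The entry would then be POSTed to `LOG_URL` with the usual `Helicone-Auth: Bearer <key>` header.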
Facing this issue in the helicone worker openai proxy container and the helicone-jawn container:
Plain Text
Error parsing default response: SyntaxError: Unexpected token '!'
Error processing response body Error parsing body: SyntaxError: Unexpected token '!', "!..." is not valid JSON
1 comment
t
Why can't my account be created?
This is how I "fix" the clickhouse migration runner (lack of tables) issue: https://github.com/Helicone/helicone/issues/2965
i'm facing this issue right now
You can access the Supabase auth users here: http://localhost:54323/project/default/auth/users
I'm attempting to self-host Helicone on my local machine, but I can't access http://localhost:8989/project/default/auth/users. The console logs show the following error:

Plain Text
clickhouse-migration-runner      |   File "ch_hcone.py", line 35
clickhouse-migration-runner      |     curl_cmd = f"cat \"{migration_file}\" | curl '{
clickhouse-migration-runner      |                                                   ^
clickhouse-migration-runner      | SyntaxError: EOL while scanning string literal
clickhouse-migration-runner exited with code 1


I've checked the documentation and can't find any steps I might have missed.
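The traceback is a plain Python syntax error: before Python 3.12, an f-string cannot contain a literal newline between its braces, which is what `ch_hcone.py` line 35 does. An illustrative fix (not the actual migration-runner code) is simply keeping the f-string on one physical line:

```python
def build_curl_cmd(migration_file: str, url: str) -> str:
    # Splitting the f-string across lines inside the braces raises
    # "SyntaxError: EOL while scanning string literal" on pre-3.12
    # interpreters; on one line it parses fine everywhere.
    return f"cat \"{migration_file}\" | curl '{url}'"
```

Alternatively, running the migration container with Python 3.12+ makes the multi-line form legal (PEP 701).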
Hello,
After upgrading to the Pro plan with the Alert add-on, I am unable to use the alert feature. I believe the system did not detect my upgrade.
3 comments
J
d
Even after enabling Prompts and Alerts in the Pro plan, I can't see the Prompts UI. It just asks me to enable the feature, which is already enabled through the settings.
2 comments
K
J
I am trying this code to add Helicone to my model:
def initialize_llm(logger):
    """Initialize LLM with OpenRouter configuration."""
    logger.debug("Starting LLM initialization")
    try:
        # Get API keys
        openrouter_api_key = os.getenv("OPENROUTER_API_KEY")
        helicone_api_key = os.getenv("HELICONE_API_KEY")

        if not openrouter_api_key or not helicone_api_key:
            raise ValueError("Missing required API keys")

        # Initialize LLM with proper configuration
        llm = LLM(
            model="anthropic/claude-3-haiku",
            custom_llm_provider="openrouter",
            api_base="https://openrouter.ai/api/v1/chat/completions",  # Changed this
            default_headers={
                "Authorization": f"Bearer {openrouter_api_key}",
                "Helicone-Auth": f"Bearer {helicone_api_key}",
                "HTTP-Referer": "https://github.com/your-repo",
                "X-Title": "Blog Creation Bot",
                "Content-Type": "application/json"
            }
        )

        logger.info("Successfully initialized OpenRouter LLM with Claude and Helicone tracking")
        return llm

    except Exception as e:
        logger.error(f"Error in LLM initialization: {str(e)}\n{traceback.format_exc()}")
        raise

but i get error 404

what am i doing wrong here?
thanks!
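One likely culprit for the 404 is the `api_base`: it already ends in `/chat/completions`, and OpenAI-compatible SDKs append that path themselves, producing `/chat/completions/chat/completions`. A sketch of the config with the base stopped at `/api/v1` (the Helicone gateway hostname is an assumption to verify; the direct OpenRouter base would be `https://openrouter.ai/api/v1`):

```python
def build_llm_config(openrouter_api_key: str, helicone_api_key: str) -> dict:
    """Keyword arguments for the LLM(...) call in the snippet above.

    api_base stops at /api/v1 -- the SDK appends /chat/completions,
    so including it here doubles the path and can 404.
    """
    return {
        "model": "anthropic/claude-3-haiku",
        "custom_llm_provider": "openrouter",
        "api_base": "https://openrouter.helicone.ai/api/v1",  # assumed gateway host
        "default_headers": {
            "Authorization": f"Bearer {openrouter_api_key}",
            "Helicone-Auth": f"Bearer {helicone_api_key}",
        },
    }

# Usage: llm = LLM(**build_llm_config(openrouter_api_key, helicone_api_key))
```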
5 comments
G
j
K
Anyone know what this means for the response in the dashboard? "error": "HTML response detected:", e.g. we have a request id = 218463c0-c5bf-42a5-b169-a06e41274fe5 on helicone, and we're wondering if it's something on our end or helicone's
3 comments
J
h
N
Nico
·

Delay

What's the expected delay between the completion of a request and its appearance in the dashboard?
1 comment
J
Hi,
I'm using Python.
Did anyone get the Together client working with a base URL?
It doesn't seem to show up on my dashboard at all, but I do get results back from the model.

Plain Text
 together_client = Together(
    api_key=together_api_key,
    # base_url="https://together.helicone.ai/v1",
    # default_headers={
    #     "Helicone-Auth": f"Bearer {helicone_api_key}",
    # },
)

I switched to the Together client because it didn't work, but I had tried to use this code:
Plain Text
client = OpenAI(
  api_key="your-api-key-here",  # Replace with your API key
  base_url="https://together.helicone.ai/v1",
  default_headers={  # Optionally set default headers or set per request (see below)
    "Helicone-Auth": f"Bearer {HELICONE_API_KEY}",
  }
)
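To separate SDK issues from proxy issues, it may help to hit the proxy directly with curl (model name and header set are illustrative; uncomment the call once real keys are exported):

```shell
TOGETHER_API_KEY="${TOGETHER_API_KEY:-tk-placeholder}"
HELICONE_API_KEY="${HELICONE_API_KEY:-sk-helicone-placeholder}"
BASE_URL="https://together.helicone.ai/v1"

# curl "$BASE_URL/chat/completions" \
#   -H "Authorization: Bearer $TOGETHER_API_KEY" \
#   -H "Helicone-Auth: Bearer $HELICONE_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d '{"model":"meta-llama/Llama-3-8b-chat-hf","messages":[{"role":"user","content":"ping"}]}'
echo "$BASE_URL/chat/completions"
```

If the curl request shows up in the dashboard but the SDK call doesn't, the Together client is likely dropping the base_url or the headers.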
2 comments
K
What does "Model is pending mapping" mean in the request view? It seems like a straightforward OpenAI call, but it's not being prettified, while what seems to be an identical call is.
10 comments
J
A
hey! A lot of LiteLLM calls are not logged in Helicone. I can see in the logs that it correctly reaches the logging step, but the overall call doesn't wait for the Helicone request to complete.
It happens with the biggest requests (sending large messages).
3 comments
J
C
C
1 comment
E
OpenAI calls via llama-index log to Helicone, but this does not seem to be possible for Anthropic calls. The docs don't mention a specific api base for Anthropic, and I've tried f"https://anthropic.helicone.ai/{HELICONE_API_KEY}/v1", which wasn't working. Am I correct in that there is no helicone-llama-index integration for Anthropic models?
cc: @Cole
11 comments
H
C
m
F
Fun Capital
·

Gemini

i'm also getting this error

Plain Text
HTTPSConnectionPool(host='gateway.helicone.ai', port=443): Max retries exceeded with url: /v1beta/models/gemini-1.5-flash-002:generateContent?%24alt=json%3Benum-encoding%3Dint (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2417)')))
3 comments
J
F
Hey folks, I've gotten self-hosted Helicone up and running. I've used Postman to send a test request via the OpenAI proxy but am not seeing anything popping up in the dashboard. Any guidance? I'm using docker-compose FWIW.
9 comments
p
J
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
helicone 1.0.14 requires openai<0.28.0,>=0.27.0, but you have openai 1.50.1 which is incompatible.
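The error states the constraint directly: the helicone PyPI package 1.0.14 pins the pre-1.0 openai SDK. Two ways out (the pin below is copied from the error; whether you still need the helicone package depends on which integration you use):

```shell
# Either pin openai back to the range helicone 1.0.14 accepts:
#   pip install "openai>=0.27.0,<0.28.0"
# or keep openai 1.x and remove the helicone package, relying on the
# base_url/header proxy integration instead:
#   pip uninstall helicone
OPENAI_PIN="openai>=0.27.0,<0.28.0"
echo "$OPENAI_PIN"
```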
2 comments
C
K