Helicone Community Page

Updated 3 months ago

Async Logging w/ OpenAI Assistants

Is there sample code for incorporating the AsyncLogger into a Next.js app? I had no trouble setting up the proxy, but I'm trying to use the async package instead, both to avoid extra network latency and to keep Helicone out of my app's critical path. I'm not having much luck integrating it, though.

I have something like:
import OpenAI from 'openai';
// import path assumed; adjust to match your Helicone SDK version
import { HeliconeAsyncLogger } from '@helicone/async';

export const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? '',
});

export const heliconeAsyncLogger = new HeliconeAsyncLogger({
  apiKey: process.env.HELICONE_API_KEY ?? '',
  providers: {
    openAI: OpenAI,
  },
  headers: {
    'Helicone-Property-App-Env': APP_ENV,
    'Helicone-Property-MYoung-Test': 'Test',
  },
});

heliconeAsyncLogger.init();


But I'm not seeing the requests come through in the console. I tried moving heliconeAsyncLogger.init() into other parts of the code, such as the exact function where I make the GPT call, but it doesn't seem to help.

Will this log correctly if I'm using OpenAI's beta client? I'm using openai.beta.chat.completions... so that I can use their new structured outputs feature.
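For context, here's the shape of the request I'm sending. This is a hedged sketch: the schema is an example, and `structuredFormat` is an illustrative helper I wrote for this post, not part of the OpenAI SDK.

```typescript
// Illustrative only: builds the `response_format` payload for structured
// outputs. `structuredFormat` is a hypothetical helper, not an SDK function.
type JsonSchemaFormat = {
  type: 'json_schema';
  json_schema: { name: string; strict: boolean; schema: object };
};

function structuredFormat(name: string, schema: object): JsonSchemaFormat {
  return { type: 'json_schema', json_schema: { name, strict: true, schema } };
}

const responseFormat = structuredFormat('answer', {
  type: 'object',
  properties: { answer: { type: 'string' } },
  required: ['answer'],
  additionalProperties: false,
});

// This object is what I pass as `response_format` in the
// openai.beta.chat.completions... call in my app code.
console.log(responseFormat.type); // 'json_schema'
```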
4 comments
Hi! Our async package uses OpenLLMetry, and unfortunately it does not support Assistants at this time.

I noticed you mentioned concerns about the proxy. We've benchmarked it, and our results showed sub-10 ms of added latency: https://docs.helicone.ai/faq/latency-affect

Additionally, we have not had a proxy incident in over a year. We have strict processes in place before deploying changes to the proxy, and such deployments are quite rare since 99% of our logic lives in an entirely separate service. Quite a few large companies use our proxy without incident!
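For anyone landing here who decides to go the proxy route instead, a minimal sketch of that setup. Assumptions: `heliconeHeaders` is an illustrative helper (not part of any SDK), the `App-Env` property mirrors the custom property in the original snippet, and `oai.helicone.ai` is the gateway URL from the Helicone docs.

```typescript
// Build Helicone proxy headers. `heliconeHeaders` is an illustrative
// helper written for this post, not an SDK function.
function heliconeHeaders(
  heliconeApiKey: string,
  properties: Record<string, string>
): Record<string, string> {
  const headers: Record<string, string> = {
    'Helicone-Auth': `Bearer ${heliconeApiKey}`,
  };
  // Custom properties become Helicone-Property-* headers
  for (const [key, value] of Object.entries(properties)) {
    headers[`Helicone-Property-${key}`] = value;
  }
  return headers;
}

// Options to pass to `new OpenAI(...)` so requests route through the proxy
const openaiOptions = {
  apiKey: process.env.OPENAI_API_KEY ?? '',
  baseURL: 'https://oai.helicone.ai/v1',
  defaultHeaders: heliconeHeaders(process.env.HELICONE_API_KEY ?? '', {
    'App-Env': process.env.APP_ENV ?? 'development',
  }),
};
```

With this approach logging happens at the gateway, so no `init()` call or instrumentation ordering is involved.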
Thanks for the follow-up, @Cole!

sub 10 second latency added

Do you mean sub 0.1 second latency? Unless I'm reading the results incorrectly. 10s added would be... significant πŸ˜›
Yes wow, ms**