Hi, we're trying to query the data we send to Helicone, since we want to store it in our own DBs. However, I'm not seeing any data returned when I query for the latest response in the playground (I basically copied and pasted the sample code and put in my API key per the documentation instructions). I'm not sure what I'm doing wrong here...
The second issue I'm hitting is that we're using LangchainJS to power our API calls via a ConversationalRetrievalQAChain. I'm seeing that our history isn't being stored in the Helicone requests (i.e. each API call comes through as a single request/response, and the history isn't tracked, which used to be the case when I queried OpenAI directly). Wondering if there's any way to get around this, as the conversation history is very important for us to store in our DB.
Hey! We have a known bug with the GraphQL API that will be fixed tomorrow, apologies there!
Can you elaborate on what you mean by "our history isn't being stored in the request"? Do you mean the previous messages in the chat completion?
In the requests table, the request and response columns show the latest message sent and latest message received. If you click into the row, you'll see all the messages in that conversation. Is this the history you are referring to?
Yup, that's the history I'm referring to. When we save the responses back to our DB, we'd likely want to know which chat history each one came from. We can see the message history in the API call on our end, so I'm not sure why it's not showing up on the Helicone side.
Helicone doesn't truncate any of the message history when it logs, so if you're missing some history, the truncation is probably happening on the LangChain side, yeah.
It looks like there's context there beyond just what went into the prompt. Helicone doesn't log anything in addition to the prompt at the moment. Where is that request payload going?
We have it set up as an API route in Next.js 13. We then create a chain and have Helicone integrated there, and the chain uses ConversationalRetrievalQAChain to actually make the call. This is kind of hard to explain without showing code.
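To make the setup above concrete, here's a minimal sketch of how a model can be routed through the Helicone proxy before being handed to a chain. The proxy URL and the `Helicone-Auth` header come from Helicone's docs; `buildModelConfig` is a hypothetical helper for illustration, not part of either library, and the config shape assumes the LangchainJS `ChatOpenAI` constructor options at the time of writing.

```javascript
// Sketch (assumptions noted above): build the options object you'd pass
// to LangchainJS's ChatOpenAI so that all OpenAI traffic flows through
// the Helicone proxy and gets logged.
function buildModelConfig(openAIApiKey, heliconeApiKey) {
  return {
    openAIApiKey,
    configuration: {
      // Helicone's OpenAI proxy endpoint (from the Helicone docs).
      basePath: "https://oai.hconeai.com/v1",
      baseOptions: {
        headers: { "Helicone-Auth": `Bearer ${heliconeApiKey}` },
      },
    },
  };
}

const config = buildModelConfig("sk-openai-example", "sk-helicone-example");
```

Inside a Next.js 13 API route, you'd then do something like `new ChatOpenAI(config)` and pass that model into `ConversationalRetrievalQAChain.fromLLM(...)`, so each call the chain makes is captured by Helicone.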
We want to, and have been asked to, build a tighter integration with LangChain so that complex chains show the full trace of requests and link them to each other.
It's possible to do some custom patching to add a property for each chain via this feature: https://docs.helicone.ai/advanced-usage/custom-properties. You can then filter results by chain. The UI isn't currently great for that, but we're working on better support for traces!
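For reference, custom properties are just extra request headers of the form `Helicone-Property-<Name>` (per the docs linked above). Here's a small sketch of building them; `buildHeliconeHeaders` is a hypothetical helper, and the `Chain` / `Session` property names are made-up examples.

```javascript
// Sketch: build Helicone proxy headers with custom properties attached.
// Each custom property becomes a "Helicone-Property-<Name>" header,
// which you can later filter on in the Helicone dashboard.
function buildHeliconeHeaders(heliconeApiKey, properties = {}) {
  const headers = { "Helicone-Auth": `Bearer ${heliconeApiKey}` };
  for (const [name, value] of Object.entries(properties)) {
    headers[`Helicone-Property-${name}`] = String(value);
  }
  return headers;
}

const headers = buildHeliconeHeaders("sk-helicone-example", {
  Chain: "ConversationalRetrievalQAChain", // example property: which chain made the call
  Session: "conversation-42",              // example property: tie requests to a conversation
});
console.log(headers["Helicone-Property-Chain"]); // -> ConversationalRetrievalQAChain
```

Tagging each request with a conversation or chain id like this is one way to reconnect individual request/response rows with the history they belong to when exporting into your own DB.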
This should work now! It's a bit of a slow query at the moment. I'm working on optimizing querying for a specific user id now, and it should be a lot faster by tomorrow. We just have to shuffle some data around to make this query speedy.