Is Helicone doing any work on representation-level observability? For example, gathering insight into user requests based on the content of the requests themselves and how they are represented inside the LLM? That could be used for prompt regression testing or even for directly updating model weights.
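Roughly, I'm picturing something like the sketch below. It's purely illustrative and assumes an off-the-shelf embedding model (sentence-transformers) as a stand-in for the LLM's internal representations, since those usually aren't exposed for hosted models; none of these names come from Helicone.

```python
# Hypothetical sketch: embed user requests, then flag when the distribution
# of request representations drifts between a baseline and a candidate set.
# Uses sentence-transformers as a proxy for "how requests are represented".
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def embed_requests(requests: list[str]) -> np.ndarray:
    # Represent each user request as a normalized dense vector.
    return model.encode(requests, normalize_embeddings=True)

def drift_score(baseline: np.ndarray, candidate: np.ndarray) -> float:
    # Cosine distance between mean embeddings as a crude drift signal.
    a, b = baseline.mean(axis=0), candidate.mean(axis=0)
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

baseline_reqs = ["How do I reset my password?", "Cancel my subscription"]
candidate_reqs = ["Why was I charged twice?", "Refund my last invoice"]

score = drift_score(embed_requests(baseline_reqs), embed_requests(candidate_reqs))
print(f"representation drift: {score:.3f}")  # larger = bigger shift in request content
```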
No, we are not, but @Cole and I would love to learn more about your use case. Would you mind hopping on a quick call to chat about what you are looking for?