is helicone doing any work on representation-level observability? for example, gathering insight on user requests based on the content of the requests themselves and how they are represented inside the llm — this could be used for prompt regression testing, or even to directly update model weights
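
To make the idea concrete, here is a minimal sketch of what representation-level monitoring of requests could look like. This is not Helicone's API; it uses sentence embeddings as a stand-in for the LLM's internal representations, and the model name and drift threshold are illustrative assumptions.

```python
# Sketch: embed incoming requests and compare a new prompt version's traffic
# against a baseline to flag possible prompt regressions.
import numpy as np
from sentence_transformers import SentenceTransformer

# Proxy for "how requests are represented in the LLM" (assumed model choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

def embed(requests: list[str]) -> np.ndarray:
    """Map raw request texts to unit-normalized embedding vectors."""
    vecs = model.encode(requests)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def representation_drift(baseline_requests: list[str], new_requests: list[str]) -> float:
    """Crude drift score: 1 minus the cosine similarity between the mean
    embeddings of the baseline and new request sets."""
    base_centroid = embed(baseline_requests).mean(axis=0)
    new_centroid = embed(new_requests).mean(axis=0)
    cos = float(np.dot(base_centroid, new_centroid)
                / (np.linalg.norm(base_centroid) * np.linalg.norm(new_centroid)))
    return 1.0 - cos

# Usage: flag a regression if the representation of traffic shifts too much.
baseline = ["summarize this invoice", "extract the due date from this invoice"]
candidate = ["write a poem about invoices", "tell me a joke"]
if representation_drift(baseline, candidate) > 0.3:  # threshold is an assumption
    print("possible prompt regression: request representations have drifted")
```

The same drift signal could in principle feed a finetuning loop (the "directly updating model weights" idea), but that part is left as speculation here.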