Helicone Community Page


Content-filtered request returning Success status and odd payload

Hi!

We've noticed this issue happening a couple of times over the past few weeks.
Usually, when we get a content-filtered response from our Azure deployment, it raises status 400, and we were able to handle it correctly.

For the past few weeks, though, some of these requests have been returning a Success status, and the payload is very different from the normal filtered cases... I believe it might be an issue on Helicone's side, because we haven't updated, changed, or redeployed the model since last year.
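
For context, this is roughly how we handle the filtered case today (a minimal sketch assuming the v1 openai Python client; the endpoint, key, and handling are placeholders):
Python
import openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com",  # placeholder
    api_key="<azure-api-key>",                             # placeholder
    api_version="2024-02-01",
)

try:
    completion = client.chat.completions.create(
        model="gpt-4",  # Azure deployment name
        messages=[{"role": "user", "content": "..."}],
    )
except openai.BadRequestError as exc:
    # Expected path: Azure raises HTTP 400 with code "content_filter",
    # matching the "Expected response" payload further down.
    if exc.code == "content_filter":
        ...  # our handling of filtered prompts goes here
    else:
        raise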

Example:
Helicone ID: 5eaa6e7f-29c1-4cde-adf0-3075e69bac93

JSON response:
Plain Text
{
  "choices": [
    {
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": true,
          "severity": "medium"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      },
      "finish_reason": "content_filter",
      "index": 0,
      "message": {
        "role": "assistant"
      }
    }
  ],
  "created": 1714756169,
  "id": "chatcmpl-9KqhV3YNO7FpPCsk8VlYDVbbF4KJe",
  "model": "gpt-4",
  "object": "chat.completion",
  "prompt_filter_results": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "system_fingerprint": "fp_2f57f81c11",
  "usage": {
    "completion_tokens": 120,
    "prompt_tokens": 715,
    "total_tokens": 835
  }
}
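
In case it helps, the anomalous case above can still be detected on the client by inspecting the parsed body instead of trusting the HTTP status (a sketch over the plain JSON dict, not any particular client's response objects):
Python
import json

def was_filtered(body: dict) -> bool:
    """True if any choice was stopped by the content filter."""
    return any(
        choice.get("finish_reason") == "content_filter"
        for choice in body.get("choices", [])
    )

# e.g. was_filtered(json.loads(raw_response_text)) is True for the payload above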


Expected response
ID: d2064246-399d-4978-a571-ddfdfbcaf19e

payload:
Plain Text
{
  "error": {
    "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766",
    "type": null,
    "param": "prompt",
    "code": "content_filter",
    "status": 400,
    "innererror": {
      "code": "ResponsibleAIPolicyViolation",
      "content_filter_result": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": true,
          "severity": "medium"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  }
}
Quick ping, guys. Do we have an idea of what it could be? Do you need more details? If you need to investigate it, is there any ETA?
Hi, just to confirm: the main issue here is that instead of a 400, you are receiving a 200 status code?
exactly!

And to help with the debugging, I shared one of the response payloads where I can see the filter status object inside choices (where the LLM response should be) as well as in prompt_filter_results, which should be the correct place
Also, I noticed that only the filter status object inside choices has the correct payload:
Plain Text
"sexual": {
  "filtered": true,
  "severity": "medium"
}
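A quick sketch of how to pull the tripped categories out of either location, for anyone debugging the same thing (works on the plain JSON body):
Python
def tripped_categories(body: dict) -> list[str]:
    """Every category marked filtered, wherever Azure put the results."""
    result_sets = [
        p.get("content_filter_results", {})
        for p in body.get("prompt_filter_results", [])
    ] + [
        c.get("content_filter_results", {})
        for c in body.get("choices", [])
    ]
    return [
        name
        for results in result_sets
        for name, verdict in results.items()
        if verdict.get("filtered")
    ]

# For the payload above this returns ["sexual"], coming from choices only.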
So, the difference in the response bodies is not the issue? Just the status that is returned?
no, as long as it returns the correct HTTP status 400 when it gets filtered
The responses from Azure are interesting.

The one that is a 400 has an error response body, while the success one does not. This makes me suspect that Azure is returning a 200 for the other one.

The response from the success one even has tokens! Very odd
yup! Is there any chance it could be something on Helicone's side, while parsing the result or something like that?
The one difference I notice is that the request that returned the error specifies this:

Plain Text
"response_format": {
    "type": "json_object"
  },
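For reference, that corresponds to asking for JSON mode in the openai Python client, roughly like this (a sketch reusing the placeholder client from the first snippet):
Python
# Same call as in the earlier sketch, plus the JSON-mode parameter that
# only the request returning the 400 included.
completion = client.chat.completions.create(  # client from the earlier sketch
    model="gpt-4",
    messages=[{"role": "user", "content": "..."}],
    response_format={"type": "json_object"},
)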
We do not do anything with usage until after the response is returned to you. Then we send the log to Kafka, which gets consumed in ECS, where usage is calculated.
Do you run into this when not using Helicone?
got it, so you're literally just a proxy
My team member has tried a few times and it worked (returning a 400), but let me double-check it myself
Yes, since we are in the critical path, we make sure not to change anything unless requested (through headers)
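For anyone following along, the pass-through setup looks roughly like this (a sketch based on my reading of Helicone's Azure proxy docs; the gateway URL and header names are assumptions, so verify them against the current documentation):
Python
# Sketch: routing an Azure OpenAI call through Helicone's proxy.
# Base URL and header names are assumptions from the Helicone docs;
# double-check them before relying on this.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://oai.helicone.ai",  # Helicone gateway (assumption)
    api_key="<azure-api-key>",
    api_version="2024-02-01",
    default_headers={
        "Helicone-Auth": "Bearer <helicone-api-key>",
        # Tells the proxy where to forward the request (assumption):
        "Helicone-OpenAI-Api-Base": "https://<resource>.openai.azure.com",
    },
)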
Did a few tests here and it fails due to the content filtering
But I have noticed something odd with the openai python lib... I will do some more checks and get back to you! I need to stop everything to check something else now
Ok, let me know what results you get! Thank you