The AI used doesn't necessarily have to be an LLM. A simple model for determining the "safety" of a comment wouldn't be vulnerable to prompt injection.
My instance admin is also extremely oppressive.
The AI used doesn't necessarily have to be an LLM. A simple model for determining the "safety" of a comment wouldn't be vulnerable to prompt injection.