okr765

joined 10 months ago
[โ€“] okr765@lemmy.okr765.com 1 points 1 month ago

The AI used doesn't necessarily have to be an LLM. A simple model for determining the "safety" of a comment wouldn't be vulnerable to prompt injection.

[โ€“] okr765@lemmy.okr765.com 19 points 1 month ago (1 children)

My instance admin is also extremely oppressive.