Filters harmful AI output by adjusting how the AI picks words, like a quiet safety guard.

Free
Added Apr 2, 2026
Large Language Models (LLMs)
Fine-Tuning Platforms
Domain-Specific LLMs

ReskLogits is a tool for making AI output safer. It adjusts the raw scores (logits) a language model assigns to candidate next tokens before sampling, lowering the probability of unsafe or harmful continuations. Because the intervention happens during decoding rather than after the text is generated, the user never sees the suppressed content, which is why it works like a "shadow ban" system.
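The core idea of logit-level filtering can be sketched as follows. This is an illustrative example under assumed behavior, not ReskLogits' actual implementation; the function names and the toy vocabulary are hypothetical. An infinite penalty hard-bans a token, while a finite penalty merely makes it less likely, the soft "shadow ban" case.

```python
import math

def suppress_tokens(logits, banned_ids, penalty=float("inf")):
    """Return a copy of `logits` with banned token ids pushed down.

    An infinite penalty removes a token entirely; a finite penalty
    only reduces its sampling probability.
    """
    out = list(logits)
    for i in banned_ids:
        out[i] = out[i] - penalty if math.isfinite(penalty) else float("-inf")
    return out

def softmax(logits):
    """Convert logits to a probability distribution over tokens."""
    m = max(x for x in logits if math.isfinite(x))
    exps = [math.exp(x - m) if math.isfinite(x) else 0.0 for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary of 4 tokens; suppose token 2 is flagged as unsafe.
raw = [1.0, 0.5, 3.0, 0.2]        # token 2 would normally dominate
safe = suppress_tokens(raw, {2})  # hard ban before sampling
probs = softmax(safe)
print(probs[2])  # 0.0 — token 2 can no longer be sampled
```

Real inference stacks expose the same hook as a "logits processor" applied between the model's forward pass and the sampler, so the filter adds no visible step for the user.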

Details

Website
github.com