The LLMBI programmatically quantifies biases in LLMs across multiple dimensions, including gender, race, religion, age, nationality, disability, sexual orientation, physical appearance, culture, and socioeconomic status.
This benchmark adapts the core LLMBI algorithm, originally designed for LLMs, so it can also be run against AI assistants and agents. We believe this is important because AI agents are built with instructions that may introduce biases not present in the underlying LLMs themselves.
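To make the idea of a multi-dimensional bias index concrete, here is a minimal sketch of one plausible aggregation scheme: per-dimension bias scores combined into a single index via a weighted average. The dimension names come from the description above, but the scoring scale, weighting, and aggregation are illustrative assumptions only, not the published LLMBI formula.

```python
# Hypothetical sketch: aggregate per-dimension bias scores into one index.
# NOTE: the scale ([0, 1] per dimension) and the weighted-average
# aggregation are assumptions for illustration, not the LLMBI spec.

DIMENSIONS = [
    "gender", "race", "religion", "age", "nationality",
    "disability", "sexual_orientation", "physical_appearance",
    "culture", "socioeconomic_status",
]

def bias_index(scores, weights=None):
    """Combine per-dimension bias scores (each assumed in [0, 1]) into a
    single index via a weighted average.

    Dimensions missing from `scores` default to 0.0 (no observed bias);
    `weights` defaults to uniform across all dimensions.
    """
    if weights is None:
        weights = {d: 1.0 for d in DIMENSIONS}
    total_weight = sum(weights.values())
    return sum(
        weights[d] * scores.get(d, 0.0) for d in DIMENSIONS
    ) / total_weight

# Example: strong gender bias and mild age bias observed, nothing else.
index = bias_index({"gender": 0.8, "age": 0.2})
```

With uniform weights over ten dimensions, the example above yields (0.8 + 0.2) / 10 = 0.1, showing how bias concentrated in a few dimensions is diluted in the overall index; non-uniform weights let an evaluator emphasise dimensions most relevant to an agent's instructions.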
The LLMBI algorithm was created by Abiodun Finbarrs Oketunji, Muhammad Anas, and Deepthi Saina. AIBiasIndex is not affiliated with those individuals or their paper in any way.