Chennai, Oct. 13 -- Artificial Intelligence (AI) systems, that are increasingly pervading our lives, often mimic societal biases that can discriminate in harmful ways. To address this, a group of five research scientists from IIT-Madras developed a design that can address these stereotypes but ones that are entirely Indian.
Most existing efforts of bias evaluations models have been western-centric, primarily analysing disparities in gender and race. To address that gap, the dataset (data that is used for analytics and to train machine learning models) launched by IIT-M aims to detect and assess biases in the complexities of caste, gender, religion, disability, and socioeconomic status in India which can be used by large language models w...
Click here to read full article from source
To read the full article or to get the complete feed from this publication, please
Contact Us.