Most tests for LLM biases use questionnaires, asking the model to generate a stance towards a given topic. Sadly, biases can re-emerge when the model is used in the application context. We show that apparently unbiased LLMs exhibit strong biases in conversations. Preprint: arxiv.org/abs/2501.14844
Feb 4, 2025, 9:34 AM
{
  "text": "Most tests for LLM biases use questionnaires, asking the model to generate a stance towards a given topic. Sadly, biases can re-emerge when the model is used in the application context. We show that apparently unbiased LLMs exhibit strong biases in conversations.\nPreprint: arxiv.org/abs/2501.14844",
  "$type": "app.bsky.feed.post",
  "embed": {
    "$type": "app.bsky.embed.images",
    "images": [
      {
        "alt": "",
        "image": {
          "$type": "blob",
          "ref": {
            "$link": "bafkreibpwzoyyemjt44frx7zltxqxvy3svmofsy25rqdba7kiux75gxcpa"
          },
          "mimeType": "image/jpeg",
          "size": 264459
        },
        "aspectRatio": {
          "width": 1470,
          "height": 682
        }
      }
    ]
  },
  "langs": [
    "en"
  ],
  "facets": [
    {
      "index": {
        "byteEnd": 298,
        "byteStart": 274
      },
      "features": [
        {
          "uri": "https://arxiv.org/abs/2501.14844",
          "$type": "app.bsky.richtext.facet#link"
        }
      ]
    }
  ],
  "createdAt": "2025-02-04T09:34:43.754Z"
}
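
One detail of the record above that is easy to misread: the facet's byteStart/byteEnd values (274 and 298) index into the UTF-8 byte encoding of the "text" field, not into its characters. A minimal Python sketch, using only the standard library, recomputes the range for the arXiv link and reproduces the numbers in the facet:

    # Recompute the link facet's byte range from the post text above.
    # Facet offsets in app.bsky.feed.post records are byte positions in the
    # UTF-8-encoded text, so the string is encoded before searching.
    text = (
        "Most tests for LLM biases use questionnaires, asking the model to "
        "generate a stance towards a given topic. Sadly, biases can re-emerge "
        "when the model is used in the application context. We show that "
        "apparently unbiased LLMs exhibit strong biases in conversations."
        "\nPreprint: arxiv.org/abs/2501.14844"
    )
    link = "arxiv.org/abs/2501.14844"

    text_bytes = text.encode("utf-8")
    link_bytes = link.encode("utf-8")
    byte_start = text_bytes.find(link_bytes)
    byte_end = byte_start + len(link_bytes)

    print(byte_start, byte_end)  # 274 298, matching "byteStart"/"byteEnd" above

The distinction matters once a post contains non-ASCII characters, where byte and character offsets diverge; this text is pure ASCII, so the two happen to coincide.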