Experimental browser for the Atmosphere
{
  "uri": "at://did:plc:sj4us4pfmzfywarwpi2o3adl/app.bsky.feed.like/3lnmhygcrc224",
  "cid": "bafyreid3t2podgscmw7bpmy27ali223klpvau6vugijhcqfuhvc2vticay",
  "value": {
    "$type": "app.bsky.feed.like",
    "subject": {
      "cid": "bafyreic4kf3hq4kk6rlqbprca2r7p5xxdtr52qchmkzovxregg3t4qirza",
      "uri": "at://did:plc:t3b3uqkh7o3v6qjvfedvqs3y/app.bsky.feed.post/3lnlyljlj5k27"
    },
    "createdAt": "2025-04-25T05:20:06.214Z"
  }
}
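The record's `uri` field follows the AT URI layout `at://<repo DID>/<collection NSID>/<record key>`; splitting it out gives the three parameters that `com.atproto.repo.getRecord` expects. A minimal sketch, assuming the standard three-segment form (the helper name `parse_at_uri` is mine, not part of any library):

```python
def parse_at_uri(uri: str) -> dict:
    """Split an at:// URI into repo (DID), collection (NSID), and record key."""
    # Drop the scheme, then split the remaining path into its three segments.
    repo, collection, rkey = uri.removeprefix("at://").split("/")
    return {"repo": repo, "collection": collection, "rkey": rkey}

parts = parse_at_uri(
    "at://did:plc:sj4us4pfmzfywarwpi2o3adl/app.bsky.feed.like/3lnmhygcrc224"
)
# parts["collection"] is "app.bsky.feed.like"; parts["rkey"] is "3lnmhygcrc224"
```

The resulting `repo`, `collection`, and `rkey` values can be passed as query parameters to an XRPC `com.atproto.repo.getRecord` call to fetch this same record.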
If you have heard enough about what LLMs can do at #COMPTEXT2025, let us tell you what LLMs CANNOT do in our panel: We don’t need no LLMs (Sat 2:45pm, BIG Hörsaal). I will present my work on the linguistic and contextual biases I found in a boatload of LLMs (using over 120 different model setups).
Apr 25, 2025, 12:44 AM