Experimental browser for the Atmosphere
The error rate for newer “reasoning” models is going up, possibly because a lot of LLM generated content is ending up in the training set.

The idea that AI stuff gets better indefinitely has always been a convenient lie by its boosters. Has been since the 1950s.
May 7, 2025, 3:04 PM
{ "uri": "at://did:plc:vbufq3xwt3233giwk4ulgvpr/app.bsky.feed.post/3loloardkuc2a", "cid": "bafyreidfthzdb5aq5julr6fhejoc5xkrxra6vjvgqyjrzjvg7woeaw7bvm", "value": { "text": "The error rate for newer “reasoning” models is going up, possibly because a lot of LLM generated content is ending up in the training set. \n\nThe idea that AI stuff gets better indefinitely has always been a convenient lie by its boosters. Has been since the 1950s.", "$type": "app.bsky.feed.post", "langs": [ "en" ], "reply": { "root": { "cid": "bafyreibi2ixgzsxqsum4hoh6fwnuaxilqhabibuqegwbwaybrupyszfxpy", "uri": "at://did:plc:gcoiabsumehr3g6sbimjrqpx/app.bsky.feed.post/3lojlginjys2r" }, "parent": { "cid": "bafyreigiokapcljxo6lxzhelnuuz5nfwdv52tvpqgyk2zfkygidf7uosue", "uri": "at://did:plc:xsgw6n6tvrcrgdiz5p5m64yq/app.bsky.feed.post/3lolij6yrl22r" } }, "createdAt": "2025-05-07T15:04:41.053Z" } }