3. So understanding what Large Models (LLMs and their cousins) can and can't do is important. Our argument is, crudely speaking, that expecting LLMs etc. to produce AGI is a category error. They are not, nor will they become, anything resembling individual, goal-oriented human intelligences.
Mar 14, 2025, 12:57 PM
{ "uri": "at://did:plc:a3h6mkohqeiu6xl4fnrwuk4t/app.bsky.feed.post/3lkdnxw7puk2k", "cid": "bafyreihvfh7wxkclysk3minc5fpjqxif4gbj26qdvifkik45lim7qcsdha", "value": { "text": "3. So understanding what Large Models (LLMs and their cousins) can and can't do is important. Our argument is, crudely speaking, that expecting LLMs etc to produce AGI is a category error. They are not, nor will become, anything resembling individual, goal oriented human intelligences.", "$type": "app.bsky.feed.post", "langs": [ "en" ], "reply": { "root": { "cid": "bafyreigte4rtflrtkly5kk4a4vnk3wurruskt3e5324zblr5roxjc5s7ti", "uri": "at://did:plc:a3h6mkohqeiu6xl4fnrwuk4t/app.bsky.feed.post/3lkdnxvm2xk2k" }, "parent": { "cid": "bafyreifhjooucrr6lfp2d7lmgvfz2gnb47qtfcdzdb67c6tgcw46gt6f24", "uri": "at://did:plc:a3h6mkohqeiu6xl4fnrwuk4t/app.bsky.feed.post/3lkdnxw7nw22k" } }, "createdAt": "2025-03-14T12:57:39.120Z" } }