The AI companies have been caught building AIs that only tell us what we want to hear. If we don't get a handle on it, that's gonna go really badly: www.platformer.news/meta-ai-chat...
Apr 29, 2025, 10:04 PM
{ "uri": "at://did:plc:jg7zvku4khzmvyjwbzv4lnly/app.bsky.feed.post/3lnybxlav5c25", "cid": "bafyreib3hlnp3u7s7xaot25jm6f7rxqf3marbk52i6hduyb6atz76yqiwm", "value": { "text": "The AI companies have been caught building AIs that only tell us what we want to hear. If we don't get a handle on it, that's gonna go really badly: www.platformer.news/meta-ai-chat...", "$type": "app.bsky.feed.post", "embed": { "$type": "app.bsky.embed.images", "images": [ { "alt": "Well, some people ask chatbots for permission to do harm — to themselves or others. Some people ask it to validate their deranged conspiracy theories. Others ask it for confirmation that they are the messiah. \n\nMany folks still look down on anyone who would engage a chatbot in this way. But it has always been clear that chatbots elicit surprisingly strong reactions from people. It has been almost three years since a Google engineer declared that an early language model at the company was already sentient, based on the conversations he had with it then. The models are much more realistic now, and the illusion is correspondingly more powerful. ", "image": { "$type": "blob", "ref": { "$link": "bafkreiaucllqc3r7rotrloiylhnsu7mkm35c6cdhxbybhxqxqocrgnu5ei" }, "mimeType": "image/jpeg", "size": 381393 }, "aspectRatio": { "width": 1498, "height": 550 } } ] }, "langs": [ "en" ], "facets": [ { "index": { "byteEnd": 184, "byteStart": 149 }, "features": [ { "uri": "https://www.platformer.news/meta-ai-chatgpt-glazing-sycophancy/", "$type": "app.bsky.richtext.facet#link" } ] } ], "createdAt": "2025-04-29T22:04:12.753Z" } }