Experimental browser for the Atmosphere
{
  "uri": "at://did:plc:sj46s4ufqfmqeq34ewaam6n4/app.bsky.feed.like/3lndsj4megr2a",
  "cid": "bafyreieyi7z5y7cz3nfpkfv6qocuvzqviyyszyyqpuqjfzrsp6x76o2dvy",
  "value": {
    "$type": "app.bsky.feed.like",
    "subject": {
      "cid": "bafyreid7i3ihw3lkgo4l3jjzw5aawefgdacvywtnujfvheftr73kxd3lvq",
      "uri": "at://did:plc:wk7sybhegd37i7ljsltxbef6/app.bsky.feed.post/3lmz4g6xyqs2y"
    },
    "createdAt": "2025-04-21T18:34:26.688Z"
  }
}
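The record's `uri` and `subject.uri` fields are AT URIs, which follow the pattern `at://<authority>/<collection>/<rkey>`. As a minimal sketch (the helper function name is our own, not part of any official SDK), the like record's URI decomposes like this:

```python
def parse_at_uri(uri: str) -> dict:
    """Split an at:// URI into its repository DID, collection NSID,
    and record key, per the at://<authority>/<collection>/<rkey> form."""
    if not uri.startswith("at://"):
        raise ValueError("not an AT URI")
    did, collection, rkey = uri[len("at://"):].split("/")
    return {"did": did, "collection": collection, "rkey": rkey}

# The like record's own URI, from the JSON above:
parts = parse_at_uri(
    "at://did:plc:sj46s4ufqfmqeq34ewaam6n4/app.bsky.feed.like/3lndsj4megr2a"
)
# parts["did"]        -> the liking account's repository DID
# parts["collection"] -> "app.bsky.feed.like"
# parts["rkey"]       -> "3lndsj4megr2a"
```

The `subject.uri` parses the same way and points at the liked post, an `app.bsky.feed.post` record in a different repository.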
AI agents are being deployed faster than developers can answer critical questions about them, and that needs to change, writes CDT's Ruchika Joshi. The public needs far more information to meaningfully evaluate whether, when, and how to use AI agents, and what safeguards will be needed to manage their risks.
Apr 17, 2025, 12:32 PM