Experimental browser for the Atmosphere
{ "uri": "at://did:plc:eip2ux2joizj5ek5nqtrmnym/app.bsky.feed.like/3lp36afi2fy2f", "cid": "bafyreia4jnubjbongco4ljehfaisra5bfbnvhtbpon6x4pkp47zvhiz4xy", "value": { "$type": "app.bsky.feed.like", "subject": { "cid": "bafyreieudevk3334gvlqns6qdcci7vgehtqmilw4rzad3fmd22f3siqrci", "uri": "at://did:plc:35ycejprfrigkum4qtmumkxj/app.bsky.feed.post/3lojcx2lqjc2h" }, "createdAt": "2025-05-13T19:00:44.840Z" } }
#neuroskyence people (maybe vision people in particular): how do you think information is "read out" from the visual system? Do downstream areas like PFC get to query anything from V1 to IT, or just later areas? How plastic are these readouts? Etc. IMO this is always under-constrained in modeling.
May 6, 2025, 4:37 PM