Experimental browser for the Atmosphere
{ "uri": "at://did:plc:5llt5pj7sjira7v5jm3rcv2g/app.bsky.feed.like/3loa2efml3y2c", "cid": "bafyreidlxwpe5ssisowejwt4byqd3spfqpslquy5hcqjlenry37opqskq4", "value": { "$type": "app.bsky.feed.like", "subject": { "cid": "bafyreic2lwnt3zuft7p2tallt4oy7ldiqbzlpmtxiqikpliy5hhsqgwbmu", "uri": "at://did:plc:e6ewzleebkdi2y2bxhjxoknt/app.bsky.feed.post/3lo7ci4russ2s" }, "createdAt": "2025-05-03T00:09:30.969Z" } }
How do language models generalize from information they learn in-context vs. via finetuning? In arxiv.org/abs/2505.00661 we show that in-context learning can generalize more flexibly, illustrating key differences in the inductive biases of these modes of learning — and ways to improve finetuning. 1/
May 2, 2025, 5:02 PM