A really good and concise deep dive into RLHF in LLM post-training, Proximal Policy Optimization (PPO), and Group Relative Policy Optimization (GRPO)
https://yugeten.github.io/posts/2025/01/ppogrpo/
#llm
Truly a thought-provoking piece, from the author of τ-bench.
https://ysymyth.github.io/The-Second-Half/ #ai

So what’s suddenly different now?

In three words: RL finally works. More precisely: RL finally generalizes. After several major detours and a culmination of milestones, we’ve landed on a working recipe to solve a wide range of RL tasks using language and reasoning.

The second half of AI — starting now — will shift focus from solving problems to defining problems. In this new era, evaluation becomes more important than training. Instead of just asking, “Can we train a model to solve X?”, we’re asking, “What should we be training AI to do, and how do we measure real progress?” To thrive in this second half, we’ll need a timely shift in mindset and skill set, ones perhaps closer to a product manager.

It turned out the most important part of RL might not even be the RL algorithm or environment, but the priors, which can be obtained in a way totally unrelated from RL (LLMs).
https://arxiv.org/abs/2305.18290 #llm #ai

Spent today doing a deep dive into DPO, and was once again reminded how much solid math fundamentals matter for AI/ML research...

Vanilla RLHF first trains a reward model on pairwise human preference data (is A or B better?) by minimizing the negative log likelihood of the preferences, then uses RL to train the main policy model, with the objective of maximizing the learned reward under regularization (PPO, for example, regularizes via the KL divergence between the new and old policies). The downsides: RL is notoriously finicky, and you also need a critic model to estimate values, so the overall system is quite complex.
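For concreteness, the standard two-stage objective, roughly in the notation of the DPO paper linked above, is a Bradley–Terry loss for the reward model followed by KL-regularized reward maximization for the policy:

```latex
% Reward model: Bradley–Terry loss over preference pairs (y_w = winner, y_l = loser)
\mathcal{L}_R(r_\phi) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}
  \left[ \log \sigma\!\left( r_\phi(x, y_w) - r_\phi(x, y_l) \right) \right]

% Policy: maximize the learned reward, with a KL penalty to the reference policy
\max_{\pi_\theta}\;
  \mathbb{E}_{x\sim\mathcal{D},\; y\sim\pi_\theta(\cdot\mid x)}
  \left[ r_\phi(x, y) \right]
  \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\!\left[ \pi_\theta(y\mid x) \,\|\, \pi_{\mathrm{ref}}(y\mid x) \right]
```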

DPO's idea: observing that the RLHF objective is essentially minimizing a loss over a (latent) reward function, it uses a reparameterization and some algebra to derive an equivalent objective that minimizes a loss directly over the policy, bypassing the intermediate reward model. The gradient update then directly increases the policy's probability of generating the winner response and decreases that of the loser response, greatly simplifying the pipeline.
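The single loss over the policy that DPO arrives at (Eq. 7 in the paper, up to notation) is roughly:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\, \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}
  \left[ \log \sigma\!\left(
    \beta \log \frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
    \;-\;
    \beta \log \frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
  \right) \right]
```

The scaled log-probability ratio plays the role of an implicit reward, so the gradient pushes up the log-probability of y_w and down that of y_l, weighted by how wrong the implicit reward currently is.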

Further reading:
- KTO: goes a step further; no pairwise comparisons needed, preferences can be learned from upvotes/downvotes on individual examples alone.
- IPO: addresses DPO's tendency to overfit.
Supporting a friend! A very interesting person! 👇
https://koomen.dev/essays/horseless-carriages/
Personally I find Industrial-Revolution analogies for the AI era a bit cliché by now, but this essay's central argument and examples are on point, and it comes with interactive demos.
In most AI apps, System Prompts should be written and maintained by users, not software developers or even domain experts hired by developers.
https://julian.digital/2025/03/27/the-case-against-conversational-interfaces/
Worth reading alongside the previous one. The title is a bit clickbaity (the author admits as much), but it's actually a thoughtful take on what kind of UX gets the most out of AI.
AI should function as an always-on command meta-layer that spans across all tools. Users should be able to trigger actions from anywhere with simple voice prompts without having to interrupt whatever they are currently doing with mouse and keyboard.

Productivity and collaboration shouldn’t be two separate workflows.


P.S. This blogger's posts are all excellent, e.g. https://julian.digital/2023/07/06/multi-layered-calendars/ and https://julian.digital/2020/09/04/a-meta-layer-for-notes/
Forwarded from C’s Random Collection
[Image attachment: image_2025-05-14_23-36-37.png, 504.8 KB]
New landing page design is live at https://deeptime.now 🎉 Deeptime is now in beta and all features are free. Sign up today! #DeeptimeNow
Mary Meeker's first Trends report since 2019. 340 slides on the state of AI.
https://www.bondcap.com/reports/tai
https://store.steampowered.com/app/2008920/Lorelei_and_the_Laser_Eyes/
This year I seem to keep playing the same kind of game: puzzle games set in a manor or a similarly enclosed space, like Blue Prince and Botany Manor.
But today's recommendation, Lorelei and the Laser Eyes, is one of my favorite games of the last few years.
The puzzles lean easy, so hardcore puzzle fans who aren't on its wavelength may bounce off it. Still, the experience of gradually piecing scattered clues into an answer is wonderful.
The art style is gorgeous, and halfway through, the narrative even starts weaving in reflections on art history.
#Annapurna never misses.
Naming is extremely important in Computer Science and, frankly, everything. Good naming is hard. Being able to pick a good name shows a lot of good taste.

Context engineering (a term promoted by Karpathy: https://vxtwitter.com/karpathy/status/1937902205765607626) is much better than:
- Prompt engineering: "Prompt" is just too overloaded.
- In-context learning: This is more of a research term and feels awkward as a description of the engineering required to build good LLM applications.
- RAG: Today, when done right, RAG is a very specific kind of context engineering. But too many people conflate it with "anything that puts stuff in the prompt".

Image credit: https://github.com/humanlayer/12-factor-agents/blob/main/content/factor-03-own-your-context-window.md
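Not the diagram from the repo above, but a toy sketch of what context engineering means in practice (all helper names here are made up for illustration): RAG hits, tool outputs, and chat history are just different sources you deliberately assemble into the model's context window, in an order and within a budget that you own.

```python
# A minimal, hypothetical sketch of "owning your context window":
# decide explicitly what goes into the prompt, in what form, and in what order.

def build_context(task: str, retrieved_docs: list[str], tool_results: list[str],
                  history: list[str], budget_chars: int = 8000) -> str:
    """Assemble one prompt string from heterogeneous sources, highest-priority
    content first, truncated to a rough character budget."""
    sections = [
        ("Task", [task]),
        ("Relevant documents (RAG)", retrieved_docs),
        ("Tool results", tool_results),
        ("Conversation so far", history),
    ]
    parts: list[str] = []
    used = 0
    for title, items in sections:
        for item in items:
            chunk = f"## {title}\n{item}\n"
            if used + len(chunk) > budget_chars:
                break  # drop lower-priority content rather than overflow the window
            parts.append(chunk)
            used += len(chunk)
    return "\n".join(parts)
```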