| | A deep critique of AI 2027's bad timeline models (lesswrong.com) |
| 76 points by paulpauper 5 hours ago | past | 42 comments |
| | X explains Z% of the variance in Y (lesswrong.com) |
| 5 points by kmm 2 days ago | past | discuss |
| | Does RL Incentivize Reasoning Capacity in LLMs Beyond the Base Model? (lesswrong.com) |
| 2 points by fzliu 2 days ago | past | discuss |
| | A deep critique of AI 2027's bad timeline models (lesswrong.com) |
| 5 points by iNic 3 days ago | past | 1 comment |
| | A Technique of Pure Reason (lesswrong.com) |
| 2 points by ibobev 4 days ago | past | discuss |
| | Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low (lesswrong.com) |
| 4 points by optimalsolver 7 days ago | past | discuss |
| | Beware General Claims about "Generalizable Reasoning Capabilities" of AI Systems (lesswrong.com) |
| 3 points by mkl 8 days ago | past | discuss |
| | A Straightforward Explanation of the Good Regulator Theorem (lesswrong.com) |
| 49 points by surprisetalk 9 days ago | past | 5 comments |
| | Corporations as Paperclip Maximizers (lesswrong.com) |
| 12 points by busssard 10 days ago | past | 9 comments |
| | Broad-Spectrum Cancer Treatments (lesswrong.com) |
| 2 points by surprisetalk 12 days ago | past | discuss |
| | Read the Pricing First (lesswrong.com) |
| 2 points by surprisetalk 13 days ago | past | discuss |
| | Repairing Yudkowsky's anti-zombie argument (lesswrong.com) |
| 4 points by Bluestein 13 days ago | past | discuss |
| | Reference Works for Every Subject (lesswrong.com) |
| 4 points by surprisetalk 17 days ago | past |
| | A Technique of Pure Reason (lesswrong.com) |
| 4 points by ibobev 18 days ago | past |
| | Humans Who Are Not Concentrating Are Not General Intelligences (lesswrong.com) |
| 2 points by ctoth 19 days ago | past | 1 comment |
| | Anthropically Blind: the anthropic shadow is reflectively inconsistent (2023) (lesswrong.com) |
| 2 points by jstanley 19 days ago | past |
| | Do you even have a system prompt? (lesswrong.com) |
| 2 points by kiyanwang 20 days ago | past |
| | Orienting Toward Wizard Power (lesswrong.com) |
| 1 point by surprisetalk 26 days ago | past |
| | The AI Safety Risk Is a Conceptual Exploit (lesswrong.com) |
| 2 points by foxanthony 33 days ago | past |
| | The Codex of Ultimate Vibing (lesswrong.com) |
| 3 points by kiyanwang 33 days ago | past |
| | What We Talk About When We Talk About Objective Functions (lesswrong.com) |
| 1 point by cubefox 34 days ago | past |
| | The Cost of Our Lies to AI (lesswrong.com) |
| 24 points by danboarder 35 days ago | past | 3 comments |
| | Well-Kept Gardens Die by Pacifism (2009) (lesswrong.com) |
| 2 points by Tomte 38 days ago | past | 1 comment |
| | British naval dominance during the age of sail (lesswrong.com) |
| 112 points by surprisetalk 38 days ago | past | 85 comments |
| | If Anyone Builds It, Everyone Dies (lesswrong.com) |
| 9 points by AftHurrahWinch 39 days ago | past | 7 comments |
| | AI Doomerism in 1879 (lesswrong.com) |
| 1 point by cubefox 39 days ago | past |
| | Methods of defence against AGI manipulation (lesswrong.com) |
| 6 points by MarkelKori 40 days ago | past | 2 comments |
| | The Waluigi Effect (2023) (lesswrong.com) |
| 1 point by Tomte 42 days ago | past |
| | Orienting Toward Wizard Power (lesswrong.com) |
| 2 points by paulpauper 42 days ago | past |
| | Learned pain as a leading cause of chronic pain (lesswrong.com) |
| 6 points by diwank 43 days ago | past |