| | Spacetime Emerges from Observer-Relative Information (lesswrong.com) |
| 2 points by vmstabile 4 hours ago | past | 1 comment |
|
| | The Industrial Explosion (lesswrong.com) |
| 1 point by ibobev 1 day ago | past | discuss |
|
| | Race and Gender Bias as an Example of Unfaithful Chain of Thought in the Wild (lesswrong.com) |
| 1 point by supriyo-biswas 3 days ago | past | discuss |
|
| | Shutdown Resistance in Reasoning Models (lesswrong.com) |
| 1 point by ben_w 3 days ago | past | discuss |
|
| | A path from autonomy V&V to AGI alignment? (lesswrong.com) |
| 1 point by yoav_hollander 5 days ago | past | discuss |
|
| | Race and Gender Bias as an Example of Unfaithful Chain of Thought in the Wild (lesswrong.com) |
| 9 points by ibobev 6 days ago | past | discuss |
|
| | Dialects for Humans: Sounding Distinct from LLMs (lesswrong.com) |
| 5 points by nebrelbug 7 days ago | past | 1 comment |
|
| | Proposal for making credible commitments to AIs (lesswrong.com) |
| 1 point by surprisetalk 8 days ago | past | discuss |
|
| | Machines of Faithful Obedience (lesswrong.com) |
| 1 point by kiyanwang 8 days ago | past | discuss |
|
| | X explains Z% of the variance in Y (lesswrong.com) |
| 15 points by ibobev 11 days ago | past | 1 comment |
|
| | X explains Z% of the variance in Y (lesswrong.com) |
| 4 points by surprisetalk 11 days ago | past | discuss |
|
| | A case for courage, when speaking of AI danger (lesswrong.com) |
| 2 points by ibobev 13 days ago | past | discuss |
|
| | The Best of LessWrong (lesswrong.com) |
| 3 points by sebg 13 days ago | past | discuss |
|
| | Situational Awareness: A One-Year Retrospective (lesswrong.com) |
| 2 points by fofoz 13 days ago | past | discuss |
|
| | The V&V method – A step towards safer AGI (lesswrong.com) |
| 1 point by yoav_hollander 14 days ago | past |
|
| | My Pitch for the AI Village (lesswrong.com) |
| 1 point by ibobev 14 days ago | past |
|
| | Foom and Doom 1: "Brain in a box in a basement" (lesswrong.com) |
| 1 point by ibobev 14 days ago | past |
|
| | Metaprogrammatic Hijacking: A New Class of AI Alignment Failure (lesswrong.com) |
| 1 point by Hiyagann 16 days ago | past |
|
| | Orienting Towards Wizard Power (lesswrong.com) |
| 1 point by desmondwillow 16 days ago | past |
|
| | A deep critique of AI 2027's bad timeline models (lesswrong.com) |
| 90 points by paulpauper 16 days ago | past | 61 comments |
|
| | X explains Z% of the variance in Y (lesswrong.com) |
| 5 points by kmm 18 days ago | past |
|
| | Does RL Incentivize Reasoning Capacity in LLMs Beyond the Base Model? (lesswrong.com) |
| 2 points by fzliu 19 days ago | past |
|
| | A deep critique of AI 2027's bad timeline models (lesswrong.com) |
| 5 points by iNic 19 days ago | past | 1 comment |
|
| | A Technique of Pure Reason (lesswrong.com) |
| 2 points by ibobev 20 days ago | past |
|
| | Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low (lesswrong.com) |
| 4 points by optimalsolver 23 days ago | past |
|
| | Beware General Claims about "Generalizable Reasoning Capabilities" of AI Systems (lesswrong.com) |
| 3 points by mkl 25 days ago | past |
|
| | A Straightforward Explanation of the Good Regulator Theorem (lesswrong.com) |
| 49 points by surprisetalk 25 days ago | past | 5 comments |
|
| | Corporations as Paperclip Maximizers (lesswrong.com) |
| 12 points by busssard 26 days ago | past | 9 comments |
|
| | Broad-Spectrum Cancer Treatments (lesswrong.com) |
| 2 points by surprisetalk 28 days ago | past |
|
| | Read the Pricing First (lesswrong.com) |
| 2 points by surprisetalk 29 days ago | past |
|