i haven't been consistent about noting down all posts i liked or loved in the lists below, so a post missing from here does not mean i haven't liked it or haven't read it.
Lesswrong
posts from lesswrong (lw).
these posts are not necessarily originally from lesswrong. i like to note the link from lesswrong when there is one, even if the post is originally from another blog. sorry for the inconsistency here.
Loved lw sequences
lw sequences, excluding those from Rationality: A-Z.
Loved lw posts
the posts i liked the most from lesswrong.
in no particular order.
- https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside
- https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1
- https://www.lesswrong.com/s/pFatcKW3JJhTSxqAF/p/8uk5bmmpJaSAgnZfg
- https://www.lesswrong.com/posts/L6Ktf952cwdMJnzWm/motive-ambiguity
- a newer, better post describing double crux: https://www.lesswrong.com/posts/WLQspe83ZkiwBc2SR/double-crux
- https://www.lesswrong.com/posts/z8usYeKX7dtTWsEnk/more-dakka
- https://www.lesswrong.com/posts/xLm9mgJRPvmPGpo7Q/the-cognitive-science-of-rationality
- https://www.lesswrong.com/posts/baTWMegR42PAsH9qJ/generalizing-from-one-example
- https://www.lesswrong.com/posts/HYWhKXRsMAyvRKRYz/you-can-face-reality
- https://www.lesswrong.com/posts/XqvnWFtRD2keJdwjX/the-useful-idea-of-truth
- https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences
- https://www.lesswrong.com/posts/Z5wF8mdonsM2AuGgt/negative-feedback-and-simulacra
- Gears-Level Understanding, Deliberate Performance, and The Strategic Level, from the CFAR handbook
- Gears in Understanding: the original lesswrong post that defines gears-level understanding.
- on how persistent behavior driven by false beliefs still points to some real reason for the behavior, even if just a subjective one: https://www.lesswrong.com/posts/MPj7t2w3nk4s9EYYh/incorrect-hypotheses-point-to-correct-observations
- about noticing confusion: Your Strength as a Rationalist. "Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge."
- often referenced by lw people: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/
Liked lw posts
posts i liked enough to be worth noting down for further reference, but not
good enough to be part of what i feel were the best posts i've read from lesswrong.
in no particular order.
- on confidence even under uncertainty: https://www.lesswrong.com/s/pFatcKW3JJhTSxqAF/p/jFvPWZB5WAGkDBpqN
- https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison
- https://www.lesswrong.com/posts/LuXb6CZG4x7pDRBP8/wait-vs-interrupt-culture
- https://www.lesswrong.com/posts/K2JBqDeETX2yEgyyZ/the-limits-of-introspection
- https://www.lesswrong.com/s/pFatcKW3JJhTSxqAF/p/CAWHpzaZGJZMhK6pb
- inspirational to build better societies: https://www.lesswrong.com/posts/iETtCZcfmRyHp69w4/can-the-chain-still-hold-you
- inspirational on feeling every death: https://www.lesswrong.com/posts/pnhjfkcBpzGp7gFTJ/the-meditation-on-winter
- https://www.lesswrong.com/posts/GLMFmFvXGyAcG25ni/i-can-tolerate-anything-except-the-outgroup
- https://www.lesswrong.com/posts/GLPaZamxqkx7XJbXv/the-skill-of-noticing-emotions
- best ai takeover story i've read so far: https://www.lesswrong.com/posts/KFJ2LFogYqzfGB3uX/how-ai-takeover-might-happen-in-2-years
- https://www.lesswrong.com/posts/ZLB8cDM8DxwfxJSLc/the-thing-and-the-symbolic-representation-of-the-thing
- argues for living normally even under short ai timelines: https://www.lesswrong.com/posts/CvfZrrEokjCu3XHXp/ai-practical-advice-for-the-worried
- https://www.lesswrong.com/posts/CzyGJzESo7vm75KKz/pick-two-concise-comprehensive-or-clear-rules
- https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases-1
- https://www.lesswrong.com/posts/MFNJ7kQttCuCXHp8P/the-goddess-of-everything-else
EA Forum
posts from the ea forum (eaf).
Uncategorized yet
- ai safety impact assessment (of ai safety camp): https://forum.effectivealtruism.org/posts/CuPnmeS4v5sFE6nQj/impact-assessment-of-ai-safety-camp-arb-research
Liked eaf posts
posts i liked enough to be worth noting down for further reference, but not
good enough to be part of what i feel were the best posts i've read from
the ea forum.
in no particular order.
- story about death and compassion: https://forum.effectivealtruism.org/posts/mCtZF5tbCYW2pRjhi
SSC and ACX
posts from Scott Alexander,
from Astral Codex Ten (acx),
and from Slate Star Codex (ssc).
in no particular order.
Liked acx posts
- post about social dynamics i really liked: https://www.astralcodexten.com/p/book-review-sadly-porn
Liked ssc posts
- https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/
- https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out
- https://slatestarcodex.com/2017/04/11/sacred-principles-as-exhaustible-resources/
- on cultural drift: https://slatestarcodex.com/2019/06/07/addendum-to-enormous-nutshell-competing-selectors/
Duncan
posts from Duncan Sabien that are not on lesswrong.
in no particular order.
Liked duncan posts
- https://homosabiens.substack.com/p/dont-break-it-for-me
- https://homosabiens.substack.com/p/notice-that-youve-taken-two-steps
Robin Hanson
posts from Robin Hanson on Overcoming Bias.
in no particular order.
Liked hanson posts
- cultural drift posts:
- origin of prediction market idea: https://www.overcomingbias.com/p/hail-jeffrey-wernick
- https://www.overcomingbias.com/p/conquest-and-liberation-of-academia
Aella
posts from Aella on Knowingless substack.
in no particular order.