We can’t circumvent the work needed to train our minds

(zettelkasten.de)

310 points | by maksimur 8 hours ago

146 comments

  • trjordan 6 hours ago

    I was talking with somebody about their migration recently [0], and we got to speculating about AI and how it might have helped. There were basically 2 paths:

    - Use the AI and ask for answers. It'll generate something! It'll also be pleasant, because it'll replace the thinking you were planning on doing.

    - Use the AI to automate away the dumb stuff, like writing a bespoke test suite or new infra to run those tests. It'll almost certainly succeed, and be faster than you. And you'll move onto the next hard problem quickly.

    It's funny, because these two things represent wildly different vibes. The first one, work is so much easier. AI is doing the job. In the second one, work is harder. You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing, because all the easy work happens in the background via LLM.

    If you're in a position where there's any amount of competition (like at work, typically), it's hard to imagine a scenario where the people operating in the second mode don't wildly outpace the people operating in the first, in both quality and volume of output.

    But also, it's exhausting. Thinking always is, I guess.

    [0] Rijnard, about https://sourcegraph.com/blog/how-not-to-break-a-search-engin...

    • klodolph 6 hours ago

      I’ve tried the second path at work and it’s grueling.

      “Almost certainly succeed” requires that you mostly plan out the implementation for it, and then monitor the LLM to ensure that it doesn’t get off track and do something awful. It’s hard to get much other work done in the meantime.

      I feel like I’m unlocking, like, 10% or 20% productivity gains. Maybe.

      • fluoridation 3 hours ago

        Agreed. Either that, or the task has really, really broad success parameters.

      • bluefirebrand 3 hours ago

        10-20% productivity gains at the expense of making it grueling sounds like a recipe for burnout

        • dvfjsdhgfv 2 hours ago

          And that's how many people feel now.

      • BinaryIgor 5 hours ago

        Exactly, same for me

    • rorylaitila 6 hours ago

      Yeah, I think you've summed up well what I've tried to articulate to people: "You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing." Most of the bottleneck with any system design is the hard things, the unknown things, the unintended-consequences things. The AIs don't help you much with that.

      There is a certain amount of regular work that I don't want to automate away, even though maybe I could. That regular work keeps me in the domain. It leads to epiphanies about the hard problems. It adds time and something to do in between the hard problems.

      • photonthug an hour ago

        > There is a certain amount of regular work that I don't want to automate away, even though maybe I could. That regular work keeps me in the domain. It leads to epiphanies about the hard problems. It adds time and something to do in between the hard problems.

        Exactly, some kinds of refactors are like this for me. Pretty mindless, kind of relaxing, almost algebraic. It's a pleasant way to wander around the code base just cleaning and improving things while you walk down a data or control flow. If you're following a thread then you don't even make decisions really, but you also get better acquainted with parts you don't know, and subconsciously get the practice holding some kind of gestalt in your head.

        This kind of almost dream-like "grooming" seems important and useful, because it preps you for working with design problems later. Formatting and style-type trivia should absolutely be automated, and real architecture/design work requires active engagement. But there's a sweet spot in the middle.

        Even before LLMs maybe you could automate some of these refactors with tools for manipulating ASTs or CSTs, if your language of choice had those tools. But automating everything that can be automated won't necessarily pay off if you're losing fluency that you might need later.

      • wduquette 5 hours ago

        In my experience, a lot of the hard thinking gets done in my back-brain while I'm doing other things, and emerges when I take up the problem again. Doing the regular work gives my back-brain time to percolate; doing hard thing after hard thing doesn't.

      • mrguyorama 4 hours ago

        Also at the end of the day, humans aren't machines. We are goopy meat and chemistry.

        You cannot exclusively do hard things back to back to back every 8 hour day without fail. It will either burn you out, or you will make mistakes, or you will just be miserable.

        Human brains do not want to think hard, because millions of years of evolution built brains to be cheap, and they STILL use roughly 20% of our resting energy.

    • marcosdumay an hour ago

      The problem with LLMs is that they are not good enough to do the dumb stuff by themselves, and they are still so dumb that they will bias you once you have to intervene.

      But this is the idea behind compilers, type checkers, automated testing, version control, and so on. It's perfectly valid.

    • CuriouslyC 6 hours ago

      I stay at the architecture, code organization and algorithm level with AI. I plan things at that level, then have the agent do the full implementation. I have tests (which have been audited both manually and by agents) and I have multiple agents audit the implementation code. The pipeline is 100% automated and produces very good results, and you can still get some engineering vibes from the fact that you're orchestrating a stochastic workflow DAG!

    • ryanobjc 2 hours ago

      Regarding #2, "automate the dumb/boring stuff": I always think of The Big Short, when Michael Burry said "yes, I read all the boring spreadsheets, and I now have a contrary position." And he ended up being RIGHT.

      For example, I believe writing unit tests is way too important to be fully relegated to the most junior devs, or even to LLM generation! In other fields, "test engineer" is an incredibly prestigious position, for example "lead test engineer, SpaceX/NASA/etc." -- that ain't a slouch job; you are literally responsible for some of the most important validation and engineering work done at the company.

      So I do question the notion that we can offload the "simple" stuff and just move on with life. It hasn't fully worked in other fields: have we really outsourced the boring stuff like manufacturing and made things way better? The best companies making the best things typically vertically integrate.

    • danenania 6 hours ago

      I'd actually say that you end up needing to think more in the first example.

      Because as soon as you realize that the output doesn't do exactly what you need, or has a bug, or needs to be extended (and has gotten beyond the complexity that AI can successfully update), you now need to read and deeply understand a bunch of code that you didn't write before you can move forward.

      I think it can actually be fine to do this, just to see what gets generated as part of the brainstorming process, but you need to be willing to immediately delete all the code. If you find yourself reading through thousands of lines of AI-generated code, trying to understand what it's doing, it's likely that you're wasting a lot of time.

      The final prompt/spec should be so clear and detailed that 100% of the generated code is as immediately comprehensible as if you'd written it yourself. If that's not the case, delete everything and return to planning mode.

      • Jensson 4 minutes ago

        > I'd actually say that you end up needing to think more in the first example.

        Yes, but you are thinking about the wrong things, so the effort gets spent poorly.

        It is usually much more efficient to build your own mental model than to search for an external solution that does exactly what you need. Without that mental model it is hard to evaluate whether the external solution even does what you want, so it's something you need to do either way.

      • jama211 3 hours ago

        Depends how complex the task is. Sometimes I’m handed tasks so simple but tedious that AI has meant I can breeze through these instead of burning myself out on them. Sure, it doesn’t speed things up much in terms of time, but I’m way less burnt out at the end because it’s doing all the fiddly stuff that would tire me out. I suspect the tasks I get aren’t that typical though.

        • danenania 2 hours ago

          Yeah, I think if it's simple enough that you can understand all the code that's generated at a glance, then it's fine. There are definitely tasks that fit this description—my comment was mainly speaking to more complex tasks.

  • mmargenot 6 hours ago

    > You have to remember EVERYTHING. Only then you can perform the cognitive tasks necessary to perform meaningful knowledge work.

    You don't have to remember everything. You have to remember enough entry points and the shape of what follows, trained through experience and going through the process of thinking and writing, to reason your way through meaningful knowledge work.

    • rafaquintanilha 6 hours ago

      "It is requisite that a man should arrange the things he wishes to remember in a certain order, so that from one he may come to another: for order is a kind of chain for memory" – Thomas Aquinas, Summa Theologiae. Fittingly, I found the passage in my Zettelkasten.

      • mmargenot 5 hours ago

        It's weird to read this from zettelkasten.de, given that the method is precisely about cultivating such a graph of knowledge. "Knowing enough to begin" seems to me to be the express purpose of writing and maintaining a zettelkasten and other such tools.

        • skydhash 4 hours ago

          I would say they mean being able to recall, not having everything at once. It's being able to answer the 5 Whys.

      • wduquette 5 hours ago

        I arrange my code to follow a certain order, so that I can get my head back into a given module quickly. I don't remember everything; there's too much over the weeks, months, and years. But I can remember enough to find what I need to know if I structure it properly. Not unlike, you know, a Zettelkasten.

        • kelvinjps10 3 hours ago

          Isn't a Zettelkasten structured so you can go back to a note easily?

          • wduquette an hour ago

            Yes. And so's my code, so to speak.

    • keremk 6 hours ago

      Actually, this is how LLMs (with reasoning) work as well. There is pre-training, which is analogous to the human brain being trained on as much information as possible. There is a yet-unknown threshold for how much pre-training is enough, after which the models can start reasoning and using tools and feedback to do something that resembles human thinking. So if we don't pre-train our brains with enough information, we will have a weak base model. Again, this is of course more of an analogy, as we don't yet know how our brains really work, but more and more it is looking remarkably aligned with this hypothesis.

    • stronglikedan 6 hours ago

      I always tell people that I don't remember all the answers, only where to find them.

    • mvieira38 4 hours ago

      Just to be clear, are you saying that to know something:

      1- You may remember only the initial state and the brain does the rest, like with mnemonics

      2- You may remember only the initial steps towards a solution, like knowing the assumptions and one or two insights to a mathematical proof?

      I'd say a Zettelkasten user would agree with you if you mean 1.

    • skybrian 5 hours ago

      This is task-specific. Consider having a conversation in a foreign language. You don't have time to use a dictionary, so you must have learned words to be able to use them. Similarly for other live performances like playing music.

      When you're writing, you can often take your time. Too little knowledge, though, and it will require a lot of homework.

      • kjkjadksj 20 minutes ago

        There might be words I don’t use or chords I don’t know. It doesn’t matter though because part of expertise is being able to consult a reference and go “of course”, implement it, and keep moving.

    • mallowdram 6 hours ago

      Of course you have to remember everything. Your brain stores everything, and you then get to add things by forgetting, but that does not mean you erase things. The brain is oscillatory, it works somehow by using ripples that encode everything within differences, just in case you have to remember that obscure action-syntax...a knot, a grip, a pivot that might let you escape death. Get to know the brain, folks.

      • chrisweekly 6 hours ago

        Interesting take. I respectfully differ. IIRC, Feynman said something akin to my POV:

        Brains are for thinking. Documents / PKM systems / tools are for remembering.

        IOW: take notes, write things down.

        FWIW I have a degree in cognitive psychology (psychobiology, neuroanatomy, human perception) and am an amateur neuroscientist. Somewhat familiar w/ the brain. :)

        • mallowdram 6 hours ago

          Feynman wasn't a neurobiologist.

          I'd read Spontaneous Brain by Northoff (Copernican, irreducible neuroscience) or oscillatory neurobiology Buzsaki.

          The brain is lossless.

          I would agree that external forms of memory are evolutionarily progressive, but the ability to utilize those external forms requires a lossless relationship.

          Once we grasp that the infinitely inferior externals of arbitrariness (symbols, words) are correlated through superior, lossless, concatenated internals (action-neural-spatial syntax), then until we can externalize that direct perception, the externals remain deeply inferior, lossy forms.

          • wduquette 5 hours ago

            But taking notes and writing ideas out requires that we think them through...which we usually don't do otherwise. This has been a commonplace of the intellectual life for centuries.

            • mallowdram 5 hours ago

              Words and thoughts are wholly separate. Notes aren't the direct results of perception, they are more like sportscasters reading the mind of pitchers. Notes point to thoughts or observations, they aren't the thoughts themselves.

              “We refute (based on empirical evidence) claims that humans use linguistic representations to think.” Ev Fedorenko Language Lab MIT 2024

              • wduquette 4 hours ago

                I did not say that my brain uses linguistic representations internally when I think; I said that the process of turning my ideas into words helps me think.

                • mallowdram 4 hours ago

                  Actually you said "writing ideas out requires that we think them through" and this isn't what's happening in brains. In actuality, words interfere with our ability to think.

            • barrenko 5 hours ago

              Or, alternatively, "the pen is mightier than the sword."

          • nathan_compton 4 hours ago

            "The brain is lossless."

            Nothing is lossless.

            • mallowdram 4 hours ago

              Fourier transforms are lossless. If it entered the oscillations of senses, it's still there in your brain. You may never need it, but every action is detailed by difference.

              • 8note 3 hours ago

                Fourier transforms are lossless, but what implementation are you referring to that losslessly implements a Fourier transform?

                To my knowledge, practical Fourier transforms set a number of sine waves they will calculate for, and a window of time to look at. These limitations result in loss.

                But just taking the brain: at some point the person will die and decompose. How are you going to get the oscillations back out of the rotted flesh? There has to be some form of loss to the brain.
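
                The point being debated can be made concrete: a full-length discrete Fourier transform of a finite signal round-trips essentially losslessly (up to floating point), while discarding coefficients, as band-limited practical pipelines effectively do, loses information. A minimal pure-Python sketch (illustrative only, not from the thread):

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform of a finite signal.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse transform: same sum with the opposite sign, scaled by 1/N.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]

# Full transform: the round trip recovers the signal to float precision.
roundtrip = idft(dft(signal))
full_err = max(abs(a - b) for a, b in zip(signal, roundtrip))

# Truncated transform: zero out the upper half of the coefficients
# (as a bandwidth-limited pipeline effectively does) and information is lost.
X = dft(signal)
truncated = idft([c if k < len(X) // 2 else 0 for k, c in enumerate(X)])
trunc_err = max(abs(a - b) for a, b in zip(signal, truncated))

print(full_err < 1e-9, trunc_err > 1e-3)  # prints: True True
```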

                • mallowdram 3 hours ago

                  We only need brains when we're alive, so extracting the points isn't required.

                  In terms of brains, the math is used to model the irreducible occurrences in brains - that everything is still in there. So the math only gives us a window into the complexity. Brains don't compute or calculate necessarily. As an analog, or analoga of differences, it never has to exclude, or experience loss.

                  For the details: Rhythms of the Brain, or Unlocking the Brain (both volumes).

              • svnt 4 hours ago

                Math is models, not reality

          • mrguyorama 4 hours ago

            Popsci books tend to be horseshit.

            Reading one does not make YOU a neurobiologist.

            • mallowdram 4 hours ago

              They're not popsci books. I'm a co-lead developer on a project with neurobio consultants, so I better know wtf they're talking about.

    • HPsquared 6 hours ago

      A bit like the memory palace. One memory leads to another. Not random-access.

      • palmfacehn 6 hours ago

        You only need the initial seed to restore the full state, provided you can reason your way from there. If you haven't applied yourself to problem solving, then perhaps you might need to memorize the full state.

        • mmargenot 5 hours ago

          Executing on meaningful knowledge work also might require many different paths, depending on the context and the environment. To me it's more about the method of inquiry and how you begin than it is the specific content. Sure, more individual facts help to guide that inquiry, but at any given moment you're only truly going to be able to recall a subset of those.

  • tikhonj 6 hours ago

    > You have to remember EVERYTHING. Only then you can perform the cognitive tasks necessary to perform meaningful knowledge work.

    If humans did not have any facilities for abstraction, sure. But then "knowledge work" would be impossible.

    You need to remember some set of concrete facts for knowledge work, sure, but it's just one—necessary but small—component. More important than specific factual knowledge, you need two things: strong conceptual models for whatever you're doing and tacit knowledge.

    You need to know some facts to build up strong conceptual models but you don't need to remember them all at once and, once you've built up that strong conceptual understanding, you'll need specifics even less.

    Tacit knowledge—which, in knowledge work, manifests as intuition and taste—can only be built up through experience and feedback. Again, you need some specific knowledge to get started but, once you have some real experience, factual knowledge stops being a bottleneck.

    Once you've built up a strong foundation, the way you learn and retain facts changes too. Memorization might be a powerful tool to get you started but, once you've made some real progress, it becomes unnecessary if not counterproductive. You can pick bits of info up as you go along and slot them into your existing mental frameworks.

    My theory is that the folks who hate memorization are the ones who were able to force their way through the beginner stages of whatever they were doing without dull rote memorization, and then, once there, really do not need it any more. Which would at least partly explain why there are such vehement disagreements about whether memorization is crucial or not.

    • bad_username 2 hours ago

      > More important than specific factual knowledge, you need two things: strong conceptual models for whatever you're doing and tacit knowledge

      And the more experience with computers I get, the more I realize that there are actually not that many pure, unique, and mutually orthogonal _concepts_ in computer science and software engineering. Yes, a competent engineer must know, feel, live these concepts, and it takes a lot of work and exposure to crystallize them in the brain from all the libraries, books, programs, and architectures one has seen. But there are not a lot of them! And once you are intimate with all of them, you can grok anything computer-related quickly and efficiently, because your brain will just quickly find the "coordinates" of that thing in the concept space, and that's all you'll have to understand and recall later.

  • keiferski 6 hours ago

    I am sympathetic to memory-focused tools like Anki and Zettelkasten (haven't used the latter myself, though) but I think this post is a bit oversimplified.

    I think there are at least two models of work that require knowledge:

    1. Work when you need to be able to refer to everything instantly. I don't know if this is actually necessary for most scenarios other than live debates, or some form of hyper-productivity in which you need to have extremely high-quality results near-instantaneously.

    (HN comments are, amusingly, also an example – comments that are in-depth but come days later aren't relevant. So if you want to make a comment that references a wide variety of knowledge, you'll probably need to already know it, in toto.)

    2. Work when you need to "know a small piece of what you don't remember as a whole", or in other terms, know the map, but not necessarily the entire territory. This is essentially most knowledge work: research, writing, and other tasks that require you to create output, but that output doesn't need to be right now, like in a debate.

    For example, you can know that person X said something important about topic Y without needing to know precisely what it was – you can just look it up later. However, you do still need to know what you're looking for, which is a kind of reference knowledge.

    --

    What is actually new lately, in my experience, is that AI tools are a huge help for situations where you don't have either Type 1 or Type 2 knowledge of something, and only have a kind of vague sense of the thing you're looking for.

    Google and traditional search engines are functionally useless for this, but you can ask ChatGPT a question like, "I am looking for people who said something like XYZ." Previously this required someone to have asked the exact same question on Reddit or a forum; now you can get a pretty good answer from AI.

    • throwway120385 6 hours ago

      The AI can also give you pretty good examples of "kind" that you can then evaluate. I've had it find companies that "do X" and then used those companies to understand enough about what I am or am not looking for to research it myself using a search engine. The last time I did this I didn't end up surfacing any of what the AI provided. It's more like talking to the guy in the next cubicle, hearing some suggestions from them, and using those suggestions to form my own opinion about what's important and digging in on that. You do still have to do the work of forming an opinion. The ML model is just much better at recognizing relationships between different words and between features of a category of statements, and in my case they were statements that companies in a particular field tended to make on their websites.

    • rzzzt 6 hours ago

      Pilots have checklists that they can follow without memorizing, but also memory items that have to be performed almost instinctively when they encounter the precondition events.

    • skybrian 4 hours ago

      Live performance (like conversation or playing music) often relies on memory to do it well.

      That might be a good criterion for how much to memorize: do you want to be able to do it live?

  • Etheryte 7 hours ago

    > If you can’t produce a comprehensive answer with confidence and on the whim the second you read the question, you don’t have the sufficient background knowledge.

    While the article makes some reasonable points, this one goes too far. You don't need to know how to "weigh each minute spend on flexibility against the minutes spent on aerobic capacity and strength" to put together a reasonable workout plan. Sure, your workouts might not be as minmaxed as they possibly could be, but that really doesn't matter. As long as the plan is not downright bad, the main thing is that you keep at it regularly. The same idea extends to nearly every other domain: you don't need to be a deep expert to get reasonably good results.

    • cyanydeez 6 hours ago

      The US is, however, learning exactly what happens when rationality is not part of the equation. This is all a dance around what is a "fact" and how to string facts into a reasoning model that lets you predict or confirm other potential facts, etc...

      It's simply different people we're talking about. Certain personalities are always going to gravitate to the "search for reason" model in life rather than "reason about facts".

      • nradov 6 hours ago

        At least in the field of sports science and exercise physiology, we have very little in the way of facts. Much of what we once thought were facts have been disproven or at least called into question by later research. So we need to be humble, and very circumspect in what we label as a "fact".

        • tibbar 6 hours ago

          Yeah, so much of this is not about facts as much as judgment: knowing which themes and factors matter most to making a system work well. What are the parts that, if you get them right, everything else will follow from? Knowing what areas it's okay, even beneficial, to rediscover instead of trying to plan ahead of time.

        • j_bum 6 hours ago

          Do you have any go-to examples of “facts” that were disproven in this field?

        • throwway120385 6 hours ago

          Like what? There are some basics that have been studied and represent useful approximations. The general statement that your body "makes specific adaptations to imposed demands" seems to hold no matter what you do. There seems to always be some debate about what demands to impose to get a specific adaptation. For example, people have a very diverse array of opinions about how and when to stretch to achieve certain kinds of flexibility and if you do some reading you will find that these opinions follow from a body of work in a specific activity and that they don't always translate well to other activities.

  • vjvjvjvjghv 7 hours ago

    It’s the same with math. A lot of people say they don’t need to be able to do basic arithmetic because they can use a calculator. But I think that you can process the world much better and faster if at a minimum you have some intuition about numbers and arithmetic.

    It’s the same with a lot of other things. AI and search engines help a lot but you are at an advantage if at least you have some ability to gauge what should be possible and how to do it.

    • hennell 6 hours ago

      I used to find it weird how many people would write an Excel formula over data they couldn't intuitively check. Even at a basic level, like "what percentage increase is a8 from a7": they enter a formula and then don't know if it's correct. I always wrote formulas against numbers I could reason with. If a8 is 120 and a7 is 100, you can immediately tell if you've gone wrong. Then you swap in 1,387 and 1,252 and know it's going to be accurate.

      People do the same with AI, ask it about something they know little about then assume it is correct, rather than checking their ideas with known values or concepts they might be able to error check.
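
      The sanity-check habit described above can be sketched in a few lines (an illustrative example; the formula and numbers come from the comment):

```python
# Verify a percent-increase formula on numbers you can reason about
# before trusting it on real data: 100 -> 120 is obviously +20%.
def pct_increase(old, new):
    return (new - old) / old * 100

# Known values first: easy to verify in your head.
assert abs(pct_increase(100, 120) - 20.0) < 1e-9

# Only then apply it to the real, harder-to-eyeball numbers.
real = pct_increase(1252, 1387)
print(f"{real:.1f}%")  # prints 10.8%
```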

    • RicoElectrico 6 hours ago

      With or without a calculator, some people have an aversion to calculation, and that's the problem, in my opinion. It's remarkable how much bullshit you can refute with back-of-the-envelope calculations.

      This, and knowing by heart all the simple formulas/rules for area/volume/density and energy measurements.

      The classic example being pizza diameter.
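
      The pizza example, worked out (a quick illustrative sketch; area grows with the square of the diameter):

```python
import math

def pizza_area(diameter):
    # Circle area: pi * r^2, so doubling the diameter quadruples the pizza.
    return math.pi * (diameter / 2) ** 2

# The classic surprise: one 18" pizza has more area than two 12" pizzas.
one_18 = pizza_area(18)      # ~254.5 sq in
two_12 = 2 * pizza_area(12)  # ~226.2 sq in
print(one_18 > two_12)  # prints True
```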

  • Ethee 5 hours ago

    I've been having conversations about this topic with friends recently, and I keep coming back to this idea that most engineering work, which I will define as work that begins with a question and without a clear solution, requires a lot of foundational understanding of the previous layer of abstraction. If you imagine knowledge as a pyramid, you can work at the top of the pyramid as long as you understand the foundation that makes up your level; to jump a level above or below would require building that foundation yet again.

    Computer science fits this model well: you have people at many layers of abstraction who all work very well within their layer but might not understand much about the other layers. Regardless of where you are in the pyramid, though, understanding ALL the layers underneath will lead to better intuition about the problems of your own layer. Farming out that understanding will obviously have a negative impact not just on overall critical thinking, but on the way we intuit how the world works.

  • ergonaught 4 hours ago

    The actual central point is that the brain requires conditioning via experience. That shouldn't be controversial, and I can't decide if the general replies here are an extended and ironic elaboration of his point or not.

    If you never memorize anything, but are highly adept at searching for that information, your brain has only learned how to search for things. Any work it needs to do in the absence of searching will be compromised due to the lack of conditioning/experience. Maybe that works for you, or maybe that works in the world that's being built currently, but it doesn't change the basic premise at all.

  • crims0n 6 hours ago

    I agree with the point being made, even if it is taken to an extreme. I would say you don't need to remember everything, but you do need to have been exposed to it. Not knowing what you don't know is a huge handicap in knowledge work.

    “Try to learn something about everything and everything about something.”

  • bwfan123 5 hours ago

    Descartes' brief rules for the direction of the mind [1] is pertinent here, as it articulates beautifully what it means to do "thinking" and how that relates to "memory".

    Concepts have to be "internalized" into intuition for much of our thinking, and if they are externalized, we become a meme-copy machine as opposed to a thinking machine.

    [1] https://en.wikipedia.org/wiki/Rules_for_the_Direction_of_the...

  • tolerance 6 hours ago

    The author makes a lot of bold claims, and I don't take his main one seriously re: remembering everything. I think he's being intentionally hyperbolic. But the gist is sound to me, if you can piece one together. He needs an editor.

    > To find what you need online, you require a solid general education and, above all, prior knowledge in the area related to your search.
    >
    > [...]
    >
    > If you can’t produce a comprehensive answer with confidence and on the whim [...] you don’t have the sufficient background knowledge.
    >
    > [...]
    >
    > This drives us to one of the most important conclusions of the entire field of note-taking, knowledge work, critical thinking and alike: You, not AI, not your PKM or whatever need to build the knowledge because only then it is in your brain and you can go the next step.
    >
    > [...]
    >
    > The advertised benefits of all these tools come with a specific hidden cost: Your ability to think. [This passage actually appears ahead of the previous one. –ed.]

    This is best read alongside: https://news.ycombinator.com/item?id=45154088

  • _bramses an hour ago ago

    A lot of good ideas in this comment section.

    I’ll say this: between store, search, synthesize and share, store and synthesize are consistently the most difficult to nail down.

    A society that wishes to succeed in creating an activated and knowledgeable populace should be interested in how to train people to notice better, and to create insightful follows.

    In the words of David Deutsch (paraphrasing): knowledge consists of conjecture and error correction

  • AndyNemmity 5 hours ago ago

    Before the internet we asked people around us in our sphere. If we wanted to know the answer to a question, we asked, they made up an answer, and we believed it and moved on.

    Then the internet came, and we asked the internet. The internet wasn't correct, but it was a far higher % correct than asking a random person who was near you.

    Now AI comes. It isn't correct, but it's far higher % correct than asking a random person near you, and often asking the internet which is a random blog page which is another random person who may or may not have done any research to come up with an answer.

    The idea that any of this needs to be 100% correct is weird to me. I lived a long period in my life where everyone accepted what a random person near them said, and we all believed it.

    • Gormo 3 hours ago ago

      How is an LLM making stochastic inferences based on aggregations of random blog pages more likely to be correct than looking things up on decidedly non-random blog pages written by people with relevant domain knowledge?

      • xpe 2 hours ago ago

        Is the above comment a genuine question? I’m concerned it’s a rhetorical question that isn’t really getting to the heart of the matter; namely, what is the empirical performance? One’s ability to explain said performance doesn’t always keep up.

        How about we pick an LLM evaluation and get specific? They have strengths and weaknesses. Some do outperform humans in certain areas.

        Often I see people latching on to some reason that “proves” to them “LLMs cannot do X”. Stop and think about how powerful such a claim has to be. Such claims are masquerading as impossibility proofs.

        Cognitive dissonance is a powerful force. Hold your claims lightly.

        There are often misunderstandings here on HN about the kinds of things transformer based models can learn. Many people use the phrase “stochastic parrots” derisively; most of the time I think these folks are getting it badly wrong. A careful reading of the original paper is essential, not to mention follow up work.

    • buellerbueller 5 hours ago ago

      If you are asking random people, then your approach is incorrect. You should be asking the domain experts. Not gonna ask my wife about video games. Not gonna ask my dad about computer programming.

      There, I've shaved a ton of the spread off of your argument. Possibly enough to moot the value of the AI, depending on the domain.

      • skybrian 4 hours ago ago

        This all assumes you have experts that you can talk to. But they might be difficult to find or expensive to hire. You wouldn't want to waste your lawyer's time on trivia.

        • skydhash 4 hours ago ago

          That is why experts often publish books and articles, which are then corrected by other experts (or by random people if it’s a typo). I’ve read a lot of books and I haven’t met any of their authors. But I’ve still learned stuff.

          • skybrian 3 hours ago ago

            Yep. At that point you're doing research, and becoming familiar enough with the literature to know what's right is work.

            Much like with Wikipedia, using AI to start on this journey (rather than blindly using quick answers) makes a lot of sense.

      • AndyNemmity 5 hours ago ago

        Before the internet, I didn't have the phone number of domain experts to just call and ask these questions. Perhaps you did. For a lot of us, it was an entirely foreign experience to have domain experts at your fingertips.

        • skydhash 4 hours ago ago

          Didn’t you have books? And teachers?

  • flerchin 6 hours ago ago

    Before the internet, I offloaded knowledge that I could look up to books. Don't worry, the kids will be ok.

    • defanor 6 hours ago ago

      Indeed, I thought that "decades old" sounds like an underestimate there: Socrates is said to have criticized writing for letting people not train their memory, so that would be millennia by now. Though of course it is possible that the article's author would not agree with that, and would have a beef only with more easily searchable content, like the people who criticized tables of contents. I do not mean that they were all wrong, though: probably the degree to which knowledge is outsourced matters, maybe some transitions were more worthwhile than others, and possibly something was indeed lost with those.

    • mallowdram 6 hours ago ago

      Sorry, kids lack the foundational ability to remember, reason, and imagine because their phones cauterize their basic intelligence foundations in sharp wave ripples: navigation, adventurous short-cuts, vicarious trial and error. These are the basis for memory consolidation, and we build this developmentally until we are 16 or so. Once we offload this development to phones, we are essentially unintelligent buffoons, lacking the basis for knowledge. The kids are DOA.

      • bccdee 6 hours ago ago

        No, you're just saying that. There's no evidence that using phones makes teenagers stupid.

        • mallowdram 5 hours ago ago

          Anyone who understands that the development of intelligence and creativity is directly linked to the allocortex's ability to navigate freely, use vicarious trial and error, and invent novel short-cuts, built from both egocentric (landmark memories) and allocentric (extra-body mapping) up until around the age of 16 in order to develop the basics of memory consolidation, can take anecdotal evidence of kids that can't take a walk without a cellphone's help and extrapolate that these kids lack critical thinking skills.

          It's elementary deduction from basic learning practices we've known since O'Keefe in 1973.

          • bccdee 2 hours ago ago

            Given that the current generation of kids missed out on several crucial years of socializing due to Covid and were forced to find community online, I'm skeptical of arguments that point to poorly-socialized kids and say, "it must be the phones." Even if this was based on real data and not a hodgepodge of anecdotes, the phones themselves would not be my #1 suspect.

            > allocortex's ability to navigate freely

            If your allocortex is navigating freely, something has gone badly wrong. Put it back under the neocortex where it belongs and seek immediate help from a neurologist.

        • steezeburger 5 hours ago ago

          There is no evidence that using phones makes teenagers stupid? I see several studies. I feel like you're the one just saying things.

          • bccdee 3 hours ago ago

            Which studies? Could you attach them?

  • tkiolp4 2 hours ago ago

    I use AI at work, and certainly I’m doing less deep thinking over time. But at home, on side projects, I still do it the traditional way. This is because I enjoy the process of thinking rather than shipping (I’ve actually never shipped a side project).

    I guess I’m lucky, deep thinking (on interesting things at home) is a hobby so I feel less encouraged to automate that away. I never cared about my jobs, so as long as I bring home money, it’s fine.

  • low_tech_punk 6 hours ago ago

    This piece reminds me of another article musing on the necessity of manual memory: https://numinous.productions/ttft/#how-important-is-memory.

    That article articulated the reason slightly differently, arguing you need to hold multiple concepts in your head at the same time in order to develop original ideas.

    Still, I'm not sure you have to remember everything, but I agree you have to remember the foundational things at the right abstraction layer, upon which you are trying to synthesize something new.

  • BinaryIgor 5 hours ago ago

    A bit too extreme, but there definitely is something to it; trivially, you need to challenge your mind all the time and regularly work at the edge of your current abilities to progress further. I like this part a lot:

    "In knowledge work the bottleneck is not the external availability of information. It is the internal bandwidth of processing power which is determined by your innate abilities and the training status of your mind."

  • dghlsakjg 6 hours ago ago

    The irony here is using fitness as an example of knowable things.

    Fitness guidelines are very much not settled science, and are highly variable per individual beyond the very basics (to lose weight, eat fewer calories than you burn; to build muscle, lift heavy things).

    For every study saying that 8-12 reps x3 is the optimal muscle growth strategy there is another saying that 20x2 is better, and a third saying that 5x5 is better. If you want to know how much protein you should eat to gain muscle mass, good luck; most studies have settled on 1.6g/kg per day as the maximum amount that will have an effect, but you can find many reputable fitness sources suggesting double that.

    You can memorize "facts", but they will change as the state of the art changes... or is Pluto still a planet?

    The ability to parse information and sources, as well as knowing the limits of your knowledge is far more important than memorizing things.

    • procaryote 6 hours ago ago

      They're very knowable, it's just that there's a lot more money in making things up

      • bluGill 5 hours ago ago

        They are not very knowable. It is expensive to design a study that would work. All too often a real-world attempt to figure this out concludes "despite our best efforts we couldn't get people to behave in the needed way". So we have proxy studies that we hope mean something, but might not. Mixed in are lots of people making data fit their conclusions, and then selling it as fact.

        Few people have the time to figure things out and so it isn't knowable even though all the steps are easy to lay out.

  • js8 6 hours ago ago

    While I agree with the gist of the article, I think the AI example is poor, because we know AI can make stuff up and it's a problem. So this failure of AI to be reasonably correct weakens the argument. In the old days, you would rely on an expert (through say a book, like encyclopedia) to tell you this. The issue then becomes who you trust.

    I would say your own knowledge is like a memory cache. If you know stuff, then the relevant work becomes orders of magnitude faster. But you can always do some research and get other stuff in the cache.

    (Human mind is actually more than a cache because you also create mental models, which typically stay with you. So it's easier to pickup details after they get evicted, because the mental model is kept. I think the goal of memorising stuff in school should be exactly that - forget all the details, but in the learning process build a good mental model that you have for life.)

  • Animats 5 hours ago ago

    "To find what you need online, you require a solid general education and, above all, prior knowledge in the area related to your search." And get off my lawn. That's more a criticism of search engines than AI, anyway.

    Insisting that people know exercise physiology to work out is a bit much. That's what trainers and coaches are for. Now drop and give me twenty.

    The real problem with LLMs remains that they can't really do the job of thinking yet, because they're wrong too often. They can both hallucinate and get lost. What the AI situation will be in five years, we don't know.

  • firefoxd 6 hours ago ago

    One thing that I like is that things are much easier in person. When someone shows me an AI overview they just googled on their phone, I can say "I don't think that's true." Then we can discuss. The more we talk about the subject, the more we develop our knowledge. It's not black and white.

    But online? @grok is this true?

  • qwertytyyuu 6 hours ago ago

    You definitely do not need to remember everything; it’s not worth the effort to try. Famously, in programming even the best look up things they have looked up before.

    Memory is helpful but brains aren’t hard drives, they aren’t designed to store information perfectly.

  • wolttam 6 hours ago ago

    I remember things just fine, just not in sufficient detail to recall all aspects at the drop of a hat. What I hold on to are the core concepts that allow me to hit the ground running when I have to interact with the subject-matter again.

    • NitpickLawyer 6 hours ago ago

      > What I hold on to are the core concepts that allow me to hit the ground running when I have to interact with the subject-matter again.

      Exactly. And those also come with doing the thing, or watching the thing being done, or reading about the thing, or thinking about the thing. You often don't have to actively try to kastle / cram / grok / etc the information to remember it. Just being exposed to it will make you remember some of it. Knowing where to get more accurate info is often a greater skill / benefit than knowing the details yourself. Especially in fields where knowing all the details yourself is almost impossible.

      Weak article.

  • birdman3131 6 hours ago ago

    AI is not a replacement for a greybeard.

    That said AI, Search and the like can be quite useful and helpful.

    • wduquette 5 hours ago ago

      I remember the dotCom bubble. After the bubble burst, people got on with putting storefronts and other kinds of business on-line in a more sober fashion.

      I predict the same thing will happen with the current AI tools: the bubble will burst, a bunch of folks will lose their shirts, and the world in general will come to a more realistic and sober understanding of what they are good for. We will figure out how to provide the useful parts without massive data centers and it will become natural. (I remember when things a graphics card can do trivially required a supercomputer with supporting staff.)

  • procaryote 6 hours ago ago

    > The reduced engagement with the material reduces the emotional weight of the whole line of action. You mind is an engine that is fuelled by emotion. Without any emotion, you don’t think. Rather, you try to imitate thinking efficiently.

    This doesn't sound true and they don't seem to offer any support for the claim.

    There's a whole host of emotion-driven cognitive biases, where an effective counter is to reduce the emotional weight of the whole line of action.

    Of course, to their credit, it's only by remembering those biases that I could see their error

    • OmarShehata 6 hours ago ago

      The first thing that happened in your mind when you read that sentence was (1) a bad feeling. That then triggered (2) a rational, conscious thought that interpreted that bad feeling: "this feels bad because it's not true, here are the reasons why it is not true."

      There is ALWAYS an "emotional/intuitive" response that precedes the rational, conscious thought. There's a ton of research on this (see system 1 vs system 2 thinking etc).

      There is no way to stop the emotional "thought" from happening before the "rational thought". What you can do is build a loop that self reflects to understand why that emotion was triggered (sometimes, instead of "this feels bad because it's wrong", it's "this feels bad because it points to an inconvenient truth" or "I am hungry and everything I am reading feels bad")

      • procaryote 6 hours ago ago

        That's very hard to know without being in an fMRI machine while reading, which I wasn't, sadly.

        Just functionally, it seems reasonable that something happened before that bad feeling to trigger it, e.g. "trying to fit this with already known things, and finding it doesn't".

    • jbreckmckye 6 hours ago ago

      Isn't your argument a support of his claim?

      If emotions did not weigh on recall, surely there would be no "emotion-driven cognitive biases"

      • procaryote 6 hours ago ago

        If the claim was "reducing the emotional weight of the action makes your thinking worse" – No.

  • darepublic 6 hours ago ago

    At my first software dev internship my manager asked me to code in languages I was not trained in. I told him I would need some time to study up on these. He scoffed and said to just look up what you need on Google. Initially I resisted; I felt like it was too shallow, akin to copying answers I didn't really comprehend. However, it didn't take long to pick up the habit. Learning is like going to the gym now: it's self-enforced discipline.

    • bluGill 5 hours ago ago

      That is because learning a computer language is not hard. Learning to program in the first place is hard, but once you know that, the language itself is not hard to learn. However, it is easy to verify that someone knows the syntax of some specific language and hard to check whether they actually know how to program.

  • rambambram 5 hours ago ago

    Off topic: This site has an input somewhere that takes focus on page load, but I can't see it. When I used the arrow down key some previously used suggestions jumped into view in a dropdown menu.

    • ctietze 3 hours ago ago

      Which browser?

  • DiscourseFan 6 hours ago ago

    Isn’t the irony of Phaedrus (the dialogue where Socrates speaks against writing) that it’s a written work, in a dramatic setting? Like, yes, writing and technology can make us stupid, but this article was written and transmitted to us via social media.

  • ripped_britches 6 hours ago ago

    I use chatgpt to learn a ridiculous amount of knowledge that was not possible in 2020.

    If you are using it to only decrease cognitive load (instead of keeping cognitive load constant while doing MORE), then you’re using it wrong

  • lordnacho 6 hours ago ago

    It's like a cache level issue.

    In the old world, you had your wet brain memory. You needed to fill it with the order of the alphabet, so that you could make use of paper reference works. You also needed some arithmetic and some English style notes, in case you wanted to express yourself. You had to remember things like there/their/they're and that kind of thing. You needed a small encyclopedia so that you could have a clue about where general knowledge could be found.

    On top of this, you were expected to layer on your professional knowledge. If you were a doctor, a huge number of Latin terms. A more detailed understanding of how the body works that you got from the base installation. Something about how the profession works. You would get a sense for what was likely through experience. A BS detector, in some ways.

    All because your brain ain't gonna get bigger, and paper information technology was what it was: found in a library, limited in size, hard to search, slow to update.

    Nowadays, the brain cache is no different in capability. But the external memory system you are accessing is completely different. It's massive, it can update in real time, and it's very searchable.

    So you need to keep different things in your brain cache to take advantage of this.

    But what do you need?

    The only component that really matters is the BS detector.

    Not only do I not need to be able to do long division, I don't even need to know that a calculator exists for calculating my tax, that a calculator containing the Haversine formula is out there somewhere, and so on. I just assume something like epochconverter exists, and that I can plug a timestamp into it and get something readable out.

    I just assume that I will be able to find things that I want, the tradeoff being that I don't have sit there and refresh my math knowledge to calculate distances on a sphere, type out a program, and run it for a single output. The other side of this coin is of course, I don't know whether the guy whose work I am borrowing did it correctly, whether he has some sort of interest he's not disclosed, and whether it's safe. I also have to compromise on any variation between what he built and what I wanted.
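    (As an aside, the Haversine calculator being assumed above really is only a few lines. A minimal Python sketch — the coordinates and the 6371 km mean Earth radius are illustrative assumptions, not a definitive implementation:)

    ```python
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in km between two (lat, lon) points given in degrees."""
        r = 6371.0  # mean Earth radius in km (a common approximation)
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)    # difference in latitude, in radians
        dlam = math.radians(lon2 - lon1)    # difference in longitude, in radians
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # e.g. London to Paris comes out around 340 km
    print(haversine_km(51.5074, -0.1278, 48.8566, 2.3522))
    ```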

    I do this with almost everything now. I can't help it, having grown up and been educated before the internet exploded, and having started work just as the explosion was happening.

    So I'd say you actually don't have to remember everything, but you do have to use your judgement in everything.

  • hammock 6 hours ago ago

    As someone who can answer all of those questions about the workout plan in depth, it’s not a bad plan. It’s actually quite good. Missing a little detail but that’s OK.

    Wasn’t a good example for me.

  • alphazard 7 hours ago ago

    > Rowlands et al. wrote about the so called “digital natives” that they lack the critical and analytical thinking skills to evaluate the information they find on the internet.

    This doesn't match the cultural shift in the last 20 years. A generation of people grew up with chat rooms and immediately discovered the ability to misrepresent oneself on the internet. "On the internet, no one knows you're a dog", as they say. That whole demographic assumes that media is lying by default. Compare that to previous generations that trusted certain media institutions like cable news, newspapers, radio shows, etc. because the production value and scarcity of media instilled trust.

    Trust in media institutions is at an all time low, and will likely never recover. That has to be attributed to the newer generations. They are more skeptical of propaganda than ever before. To them, the high production value media outlets are just a quaint legacy variety of content slop.

    • scottLobster 6 hours ago ago

      Well it doesn't help that the media, even when it doesn't lie, often simply refuses to report on various issues depending on the whims of producers.

      I'm an older millennial, probably one of the last generations who was formally taught that organizations like the New York Times and CNN were authoritative, bibliography-worthy sources of information due to their reputation and standards. I haven't cared much about what either outlet has produced in years. For every good investigative piece there's a mountain of obvious propaganda or refusal to cover topics they find uncomfortable with any objectivity.

      The signal to noise ratio is so low, why pay attention? There's a lot of bad takes on twitter and non-mainstream media (to put it mildly) but it at least makes me aware of more things.

    • neonrider 4 hours ago ago

      > That has to be attributed to the newer generations. They are more skeptical of propaganda than ever before. To them, the high production value media outlets are just a quaint legacy variety of content slop.

      Right. The skeptical newer generation knows better. It's the generation that is immune to influence. They're so resistant to it that they've finally driven advertisers to realize that spamming YouTube, IG, TikTok, with ads peddling some new hype every week is pointless.

      Sarcasm aside, the newer generation, in every generation, is always as naive as they're said to be. You're not born with wisdom, and your parents can't save you from the candle flame, no matter how much they try. Sooner or later, you'll have to burn that finger to learn. Life is an experience game. No way around it.

    • dragontamer 7 hours ago ago

      > That whole demographic assumes that media is lying by default.

      Yes. And then they turn around and trust that the Boston Marathon Bomber was some random kid because Reddit said so.

      The new generation of netizens distrusts classic media and then suddenly trusts Reddit and Google searches and random blogs.

      Bad Twitter arguments citing YouTube videos talking about a Redditor's problem with Microsoft updates and SSDs just broke through a week or two ago, and nearly everyone involved in the discussion is utterly wrong.

    • blackbear_ 6 hours ago ago

      > They are more skeptical of propaganda than ever before.

      Of boomer propaganda. But don't worry, as voters evolve, so does propaganda.

      • micromacrofoot 6 hours ago ago

        indeed... new generations don't believe the news, but they believe whoever they're currently enamored with on social media

    • RicoElectrico 7 hours ago ago

      > If anything the newer generation is more skeptical of propaganda than ever before.

      Laughs in Polish GenZ voting for Konfederacja (alt-right)

      • recursive 6 hours ago ago

        Don't know anything about this particular party, but all the alt-right stuff I've seen leans heavily into skepticism of authority. Not exactly the same thing as propaganda, but might be a meaningful connection.

        • StefanBatory 6 hours ago ago

          Their slogan was "Nie chcemy Żydów, homoseksualistów, aborcji, podatków i Unii Europejskiej".

          "We don't want Jews, gays, abortion, taxes and EU."

      • StefanBatory 6 hours ago ago

        Yup, they don't trust mainstream, only to fall into niches. But they don't see it.

        Kanał Zero is the best example.

  • leslielurker 6 hours ago ago

    We can remember it for you wholesale specifically

  • johongo 5 hours ago ago

    It was a typical attitude among my physicist and mathematician friends that memorizing was for suckers, even though it was often required to reproduce long proofs or derivations. These were people to whom mathematics came naturally; their understanding, memory, curiosity, and experience just compressed that knowledge until it was trivial to memorize. Unfortunately, many walked away with a sense of not needing to know things until they need them, but good luck with that in a systems design interview.

  • wtbdbrrr 38 minutes ago ago

    Another one of those things that will separate humans on different levels of consciousness even more.

    Enough people simply don't want to train their minds. For a lot of people, IT is just a job. Programmers, consultants, ... CEOs and CTOs, people want money and that's fine. Some just don't want a boss.

    Some of these people will replace the training of their minds with other stuff, pure experience and laughter. Somewhere else all the time. Others will only train specific parts of their minds, actively or passively.

    The big question is, IMO: what will the young face when they enter school, when they enter university, the job market. Will humanity be able to keep the variety it is currently maintaining or is the reduction one of the first steps towards singularity?

    All of the above is rather obvious, but the side effects of outsourcing much of everything systematic and structural to AI will build up. Will people care less or more about law and justice? Or just the same? Privacy? Safety? Morals and ethics?

    MITM already create a rift between customers and companies. AI will make this rift bigger.

    So it's not just the training of the mind that we can't circumvent; it's (heartwarming, I know) the heart as well. How do you care for your creation, product, and customer if you are completely alienated from almost everything except the balance sheet? Ah, wait, there are a couple of million people like that already. So we'll just have more of those. But don't they read a lot and cooperate and mentor and fiddle with politics and investments, and aren't they lifelong learners and stuff ... I wonder.

  • constantcrying 4 hours ago ago

    But in how many cases is the "why" more important than the "how"?

    People can drive a car, without understanding any of the mechanical and electrical systems of a car. In fact understanding these does essentially nothing for what most people use a car for.

    >If you can’t produce a comprehensive answer with confidence and on the whim the second you read the question, you don’t have the sufficient background knowledge.

    And then what? Why would this matter at all? You can successfully use a workout schedule without understanding what its strengths and weaknesses are and how they align with your goals.

  • shadowgovt 5 hours ago ago

    Possibly worth considering the source: the author coaches on a mechanism for collecting and collating information. They have a vested interest in the notion you have to do the work (instead of letting the machine do it for you); indeed, they hope you do the work using this method they coach on...

    (That having been said: I've used zettelkasten myself a bit and I'd say it's worth a try. Probably not for everyone but the underlying idea of "building out an artifact to supplement your memory and understanding of what you've seen" is an intriguing approach).

  • weekendvampire 6 hours ago ago
  • hn_throw_250910 6 hours ago ago

    A great deal of anti-AI posts as of late seem like milquetoast pearl clutching to me. They don’t want to outright say they feel threatened/devalued but the arguments they put forward are not only unconvincing, but in this case among many others actively work against them.

    nb. I tried really hard to not point out the smugness of Zettelkasten which I suspect emboldens this feeling of superiority, because I’d rather sit this one out and see how it goes. Something tells me the AI will win by a landslide.

    • nancyminusone 5 hours ago ago

      Fine, but I ask you what does it mean to "win" in this context?

    • add-sub-mul-div 6 hours ago ago

      What's the difference between milquetoast pearl clutching and a take you disagree with about a currently hot topic?

      How do you tell when someone who disagrees with you is coming from a place of feeling threatened or devalued vs. a place you'd consider legitimate?

      Hoping your methods are reliable and transferable, you're applying them to "a great deal" of posts so maybe they'd be helpful to me too.

  • ajuc 6 hours ago ago

    It's like with math. You could theoretically only memorize the axioms and rederive everything else on the fly.

    But in practice, you don't have enough working memory or processing power to do that, so you'd be stuck with the math a few derivation steps above the axioms only.

    To actually use math for problem solving, you need to memorize everything up to the bleeding edge, and to train yourself to operate on intermediate-level abstractions intuitively.

  • mock-possum 6 hours ago ago

    > Looks good alright? Or does it? How do you know? You can’t if you don’t have sufficient background knowledge … If you can’t produce a comprehensive answer with confidence and on the whim the second you read the question, you don’t have the sufficient background knowledge.

    > “I just ask ChatGPT for that, too!”, the AI generation might ask. Ok, and then what? How can you assess the answers … you are taking on an impossible task, because you can’t use enough of your brain for your cognitive operations.

    So it’s Zeno’s paradox of knowing stuff?

    It can’t be impossible to know things; you’ve just got to decide when you know enough to get going. Otherwise you’re mired in analysis paralysis and you never get anything done.

    I do agree that deep knowledge of the foundations of a subject - particularly a skilled practice or craft - is a path to proficiency and certainly a requirement for mastery. But there are plenty of times when you can get away with ‘just reading the documentation’ and doing as instructed.

    You do not first need to invent the universe in order to begin exercising; you can just start taking a 20-minute walk after lunch.

  • SpaceManNabs 5 hours ago ago

    you can hoard anything, including knowledge.

  • zach_miller 6 hours ago ago

    Surprised this is on top. Human beings have been using tools to augment memory since writing. Lots of these tools are faulty but then so too is memory.

    If you want to remember everything, good luck, but I am not convinced.

    • fullshark 6 hours ago ago

      I've reached my limit and I'm going to tap out of reading the blogosphere. It almost exclusively contains half-baked musings and overly reductive conclusions of little value.

    • micromacrofoot 6 hours ago ago

      even before writing we used song and story

  • komali2 6 hours ago ago

    I am either too stupid or too lazy to remember everything. Or my version of ADHD specifically restricts my ability to remember things, or to have sufficient energy to engage daily with the process needed to memorize information (e.g. build and review anki decks).

    I gave up on Anki. I abandoned org-roam and switched to Trilium with very simple summaries of handwritten notes. I don't summarize articles I read; I simply save them to my bookmarks with good tags.

    I might be stupider now, I don't know. My Mandarin improves at the same pace it did before, when I was doing daily flashcards, according to my Mandarin teacher. Except now I don't have the torturous todo item of "do Mandarin flashcards" weighing on me every day, or the huge catch-up if I miss even two days (again, I am stupid - I usually miss 20% of my cards, daily. Often the same cards. I have forgotten the same word over 100 times.)

    But, the other day I went to trivia in Cambridge UK, where I was probably the dumbest person in the town and almost certainly in the pub! And everyone was answering very interesting questions with correct answers, about geography, history, science, even the geography of the Americas, which as an American should at least have been something I knew better than my British colleagues! Nope. It made me regret that I grew up in suburban America and didn't get a nice bit of background education like Europeans do, or maybe it's me being dumb or having ADHD. I always admired smart people and I always wanted to count myself amongst them - it's why I brute-forced my way into engineering by picking the lowest-barrier-to-entry bit (frontend) and gritting my teeth and smashing pencils through hour after hour of tutorial and project until I finally had a portfolio I could get hired off of. And still, almost always, the dumbest person in the company (there have been people less productive than me, though), and no idea how to change it. I'm wondering again if there's something I can do to be smarter.

    I am fascinated by the promise of zettelkasten, spaced repetition, fish oil, whatever, but none of it seems to deliver!

    • zozbot234 4 hours ago ago

      People should just stop worrying about their Anki backlog. The truth is, every little bit you do helps, even just one card a day. That "backlog" is just telling you that you are falling short of perfection wrt. keeping the whole content of your deck securely memorized. You'll still be able to recall the bulk of it as long as you keep a reasonable routine.

    • reddit_clone 4 hours ago ago

      As a fellow sufferer of same issues, I sympathize.

      You are too hard on yourself though.

      It is like running a race while carrying a 50 lb bag on our backs, while others run freely.

      At some point we need to accept it and not worry too much!

  • vxxzy 6 hours ago ago

    i will now no longer press F3 or /. instead i will read.

  • digitalbullshit 6 hours ago ago

    Reminds me of when I was debugging why my aquascape (I’m a beginner) keeps having algae outbreaks such as GSA and cyanobacteria.

    I have a lot of things correct: CO2, light, photoperiod. The LLM told me that I have too much kH and gH and too much phosphate.

    Followed the LLM advice, didn’t work. Apparently it's outdated advice. My kH and gH are high (not using RO/DI water, just NJ tap water) but not so high that plants struggle to get CO2, and my level of phosphate isn't necessarily what made the algae bloom.

    Turns out my nutrient dosing was the culprit. I bottomed out on nitrogen all the time. But when I told it my nitrate was 0, the LLM didn’t say a thing and instead said “you got everything correct”.

    Digital bullshit.

  • Avicebron 7 hours ago ago

    TLDR: firing and forgetting the first result off of a single Google search isn't the best long-term approach. But this guy had a bri..coaching to sell you that apparently makes you a better human being

    • tolerance 6 hours ago ago

      > But this guy had a bri..coaching to sell you that apparently makes you a better human being

      To the credit of the author and this individual post, you would have to go out of your way to come to that conclusion. That is, there isn’t anything present in the actual article to suggest that what you’re alleging is valid.