AI reminds me of listening to anyone on YouTube who seems like an intellectual authority on a given subject. They seem very intelligent and knowledgeable until they actually talk about something you know about.
In other words, I try to learn from it whenever it does something I can't do, but when it does something I can do, or something I'm really good at, I find myself wanting to correct it because it doesn't do it that well.
It just seems like a quick-thinking, fast-executing but, ultimately, mid-skilled/novice person.
> Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3
So the smart get smarter and the dumb get dumber?
Well, not exactly, but at least for now, with AI being "highly jagged" and unreliable, it pays to know enough NOT to trust it, and indeed to be mentally capable enough that you don't need to surrender to it and can spot the failures.
I think the potential problems come later, when AI is more capable/reliable, and even the intelligentsia perhaps stop questioning its output and stop exercising/developing their own reasoning skills. Maybe AI accelerates us towards some version of "Idiocracy", where human intelligence is even less relevant to evolutionary success (i.e. having/supporting lots of kids) than it is today, and gets bred out of the human species? Maybe this is the inevitable trajectory: a species gets smarter when it develops language and tool creation, then peaks, and gets dumber after creating tools that do the thinking for it?
Pre-AI, a long time ago, I used to think/joke we might go in the other direction and evolve into a pulsating brain, eyes, genitalia, and vestigial limbs as mental work took over from physical, but maybe I got that reversed!
I think everyone who believes that they can personally resist the detrimental psychological effects of exposure to LLMs by "remaining aware" or "being careful", because they have cultivated an understanding of how language models work, is falling into precisely the same fallacy as people who think they can't be conned or that marketing doesn't work on them.
Don't kid yourself. If you use this junk, it's making you dumber and damaging your critical thinking skills, full stop. This is delegation of a core competency. You may feel smarter, or that you're learning faster, or that you're more productive, but to people who aren't addicted to LLMs it sounds exactly like gamblers insisting they have a foolproof system for slots, or alcoholics insisting that a few beers make them better drivers. Nobody outside the bubble is impressed with the results.
I fully agree that it's close to impossible not to eventually fall into the trap of over-relying on them. However, it's also true that I was able to do things with them that I would never have done otherwise for lack of time or skill (all sorts of small personal apps, tools, and scripts for my hobbies). Maybe it's a bit like only reading the comment section of a newspaper instead of the news? It will introduce you to new perspectives, but if you stop reading the underlying news you'll harm your own critical thinking. So maybe it's a bit more grey than black & white?
I mean... I don't really check calculations made by a computer (e.g. by my own programs) all that often either, and I think I'm completely fine :). But I guess the difference is that we kind of know how computers work, and that they're generally super accurate and make mistakes incredibly rarely. The "AI" (although I disagree with the "I" part) is wrong incredibly often, and I don't think people appreciate that the difference from the "traditional" approach isn't just significant, it's astronomical: LLMs make things up at least 5% of the time, whereas CPUs make mistakes maybe (10^-12)% of the time or less. That's 12 orders of magnitude or so.
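A quick back-of-envelope check, sketched in Python (my 5% and (10^-12)% figures are rough guesses, not measured rates):

    import math

    # Rough guesses from above -- assumptions, not measured rates.
    llm_error_rate = 0.05           # LLMs fabricate ~5% of the time
    cpu_error_rate = 1e-12 / 100    # "(10^-12)%" expressed as a fraction

    # How many orders of magnitude separate the two failure rates.
    gap = math.log10(llm_error_rate / cpu_error_rate)
    print(f"{gap:.1f} orders of magnitude")  # prints ~12.7

So "12 orders of magnitude or so" holds up, give or take, under those assumptions.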
The main problem with "System 3" is that it has its own kind of "cognitive biases", like System 1, but these new cognitive biases are designed by marketing, politics, culture, and whatever censors or surfaces the original training data. And that's even if the process, the processing, and everything else around it were perfect (which they're not, e.g. hallucinations).
But we still have System 1, and we survived and reached this stage because of it, because even a bad guess is better than the slowness of doing things right. It has its problems, but sometimes you must reach a compromise.
I suppose the publishing process has always existed as a System 3. It's just that now we have a new way to read and write with an abstract "rest of the world".
Contrary to general opinion, I feel that AI has IMPROVED my cognitive skills. I find myself discovering solutions to problems I've always struggled with (without asking AI about them, of course). I also find myself becoming much better at thinking on my feet during regular conversations. I believe I'm spending more time deep thinking than ever before because I can leave the boring cognitive stuff to AI, and that's giving my mind tougher workouts and making it stronger. But I could be completely wrong.
Without an empirical methodology it's hard to know how true this is. There are known and well-documented human biases (e.g., placebo effect) that could easily be involved here. And besides that, there's a convincing (but often overlooked on HN) argument to be made that modern LLMs are optimized in the same manner as other attention economy technologies. That is to say, they're addictive in the same general way that the YouTube/TikTok/Facebook/etc. feed algorithms are. They may be useful, but they also manipulate your attention, and it's difficult to disentangle those when the person evaluating the claims is the same person (potentially) being manipulated.
I'd love to see an empirical study that actually dives into this and attempts to show one way or another how true it is. Otherwise it's just all anecdotes.
I don't understand how the placebo effect is a human bias. Is it?
At least in some instances you could frame it that way: you believe that doctors and medicine are effective at treating disease, so when you are sick and a doctor gives you a bottle of sugar pills and you take them, you now interpret your state through the lens that you should feel better. A bias in how you perceive your condition.
That's not all the placebo effect is, but it's probably the aspect that best fits the framing as a bias.
I keep asking it questions, and as I dialogue about the problem, I walk right into the conclusion myself, classic rubber duck. Or occasionally it will say something back, and it’s like “of course! That’s exactly what I’ve been circling without realizing it!”
This mostly happens with things I’ve already had long cognitive loops on myself, and I’m feeling stuck for some reason. The conversation with the model is usually multiple iterations of explaining to the model what I’m working through.
You are not wrong. AI is an amplifier. You chose to amplify something in particular and it works for you. That's good enough. (Give this as a prompt to your AI, as I sense self-doubt here.)
It's so fascinating. I feel the same, but at the same time I feel like most people are getting dumber than they were before AI (and most seem to struggle to adapt to AI).
Because most people either don't know how to use it (for multiple reasons, which AI itself can help them solve) or don't have the right mindset going into it (deeper work needed).
When humans have an easy way to do something that is almost as good, we choose that easy way. Call it laziness, energy conservation, coddling, etc. The hard thing then becomes hard to do even when the easy thing isn't available, because the cognitive muscle and the discipline atrophy.
Like kids who are never taught to do things for themselves.
Do you refuse to use a calculator or spreadsheet because doing longhand division helps you exercise your mental muscle? Do you refuse to use a database because it will make your memory weaker? Or do you refuse to use a car because it makes you less able to walk when the car is unavailable? No. Because the car empowers you to do something that, at the very least, takes a lot longer on foot.
People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.
The car seems like a great example of a technology with a lot of problematic side effects. Places that adopted it in a more measured way ended up a lot better off than those that replaced all public transit with cars and routinely demolished neighborhoods to make space for bigger highways.
Cars are an essential part of modern life, but the sweet spot for car adoption isn't at either of the extremes.
Tragedy of the commons, perhaps? Good for the individual, bad for society, and the challenge is finding solutions that can balance both.
I'd call it bad on both levels. The costs imposed by car infrastructure are a tragedy of the commons. But even if you were the only person with a modern car, you'd still be hit with the social effects of traveling in the isolation of your private metal box and the health effects of walking or biking less.
On the other hand, there are also big positives on both the societal and the individual level. That's where the balance comes in. You want some individual travel and part of your logistics to run on cars, but not all of it. And probably a lot less of it than most people from the '60s to the '90s thought.
> Do you refuse to use a calculator or spreadsheet because doing longhand division helps you exercise your mental muscle
Yeah, when I was learning in school we weren't allowed electronics for division, and I think I absolutely would be dumber if I had never done that.
> People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.
If you're posting this from America, you're living in a society that is fatter than ever thanks to cars. So there's surely some nuance here: not every technology upgrade is strictly better with no downsides.
Damn. I came up with a hypothetical "System 3" last year! I didn't find AI very helpful in that regard though.
Current status: partially solved.
Problem: System 2 is supposed to be rational, but I found this to be far from the case. Massive unnecessary suffering.
Solution (WIP): Ask: What is the goal? What are my assumptions? Is there anything I am missing?
--
So, I repeatedly found myself getting into lots of trouble due to unquestioned assumptions. System 2 is supposed to be rational, but I found this to be far from the case.
So I tried inventing an "actually rational system" that I could "operate manually", or with a little help. I called it System 3, a system where you use a Thinking Tool to help you think more effectively.
My initial attempt was a "rational LLM prompt", but these mostly devolved into unhelpful nitpicking. (Maybe it's solvable, but I didn't get very far.)
Then I realized: wouldn't you get better results with a bunch of questions on pen and paper? Guided writing exercises?
So here are my attempts so far:
reflect.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...
unstuck.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...
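(A minimal hypothetical sketch of the idea, assuming the script just walks through the three questions from the "Solution (WIP)" above and logs the answers; the actual gists may differ:)

    #!/usr/bin/env python3
    # Hypothetical sketch, not the actual gist: ask each guiding question,
    # then append the answers to a log so past sessions can be reviewed.
    from datetime import datetime

    QUESTIONS = [
        "What is the goal?",
        "What are my assumptions?",
        "Is there anything I am missing?",
    ]

    answers = [(q, input(q + "\n> ")) for q in QUESTIONS]

    with open("reflections.txt", "a") as f:
        f.write("\n## " + datetime.now().isoformat(timespec="seconds") + "\n")
        for q, a in answers:
            f.write(q + "\n  " + a + "\n")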
--
I'm not sure what's a good way to get yourself "out of a rut" in terms of thinking about a problem. It seems like the longer you've thought about it, the less likely you are to explore beyond the confines of the "known" (i.e. your probably dodgy/incomplete assumptions).
I haven't solved System 3 yet, but a few months later I found myself in an even more harrowing situation, which could have been avoided if I'd had a System 3.
The solution turned out to be trivial, but I missed it for weeks... In this case, I had incorrectly named the project, and thus doomed it to limbo. Turns out naming things is just as important in real life as it is in programming!
So I joked "if being pedantic didn't solve the problem, you weren't being pedantic enough." But it's not a joke! It's about clear thinking. (The negative aspect of pedantry is inappropriate communication. But the positive aspect is "seeing the situation clearly", which is obviously the part you want to keep!)