That’s because of several problems in the problem definition.
First off, a lot of research talks about consciousness, but doesn’t define it.
Do you mean attention?
Do you mean memory?
Do you mean awareness?
Do you mean self-awareness?
Do you mean the ability to report on a phenomenon?
Do you mean a combination of these things?
Then they’re looking into the physical or neurological basis. Why would there need to be one?
Are you going to look at the physical basis of a software application, when there are multiple chip architectures that run it? When the same computation could be done mechanically, or with pen and paper?
What part of the physical basis are you looking for? It’s neurons. We know how they work. (Well, those and all the other cells in our nervous system and brain, which for some reason we just ignore.)
I think the problem with consciousness isn’t consciousness itself, it’s our preconceived notions of what does and doesn’t constitute consciousness, and our inability thus far to accurately define and constrain both the subject and the research question.
right: "Understanding the biophysical basis of consciousness remains a substantial challenge" precisely because there is no such thing. according to the Upaniṣads, consciousness is the Absolute. "How can one know that by which everything is known? How can one know the Knower?" — Bṛhadāraṇyakopaniṣad
There is no theory of consciousness, and no one is anywhere close to forming one.
In place of a theory, the paper supplies a circular set of references to attributes of consciousness associated with human activity. These references are coined as if exercising editorial control over some jargon, such as "perceptual self-awareness", sandbagged by secondary, vaguer terms such as "wakefulness", could pass for a cogent alternative to the total absence of any formal approach to understanding consciousness.
Using bloated prose, which necessitates a disclaimer that it wasn't pooped out of a gen-AI, the paper surfs heavy waves of lamentation about the "complexity" of "phenomenological" and "clinical methods" to reach a shore of intelligibility that Descartes colonized centuries ago with the maxim: There is nothing a man comprehends more self-evidently than his own existence.
Intellectually there's precious little at stake in this paper, so what's its purpose? The answer can be found through an analogy with the resounding words of JFK announcing the Apollo program: "We set sail on this new sea because there is new knowledge to be gained, and new rights to be won..."; whoever can take control of a sciencey vernacular for humanistic traits applied to today's AI will gain a seat at any roundtable on industrial policy, and that seat could prove very valuable.
In conclusion, this work needs funding, lots more funding!
Well said. When I read stuff like this implying 'consciousness studies' can be a hard science, the undefined and ill-defined terms just keep piling up until I nope out. I'm lucky if I make it past the abstract to maybe the third paragraph. No reproducible science can be built on so many subjective vagaries.
I think consciousness studies can be interesting, but they need to stay in the philosophy department between Searle's Chinese room and the P-Zombie lounge until they're ready to experimentally test falsifiable hypotheses with neuroscientists. And until they have a rigorous, consistent definition of what human consciousness is and is not, they really need to stop pretending AI has any relevance to human consciousness. There's no evidence AIs are conscious, and even if there someday is reproducible evidence, there's no reason to think it would be similar enough to human consciousness to make useful predictions about either (and that's assuming humans are conscious, which is still a matter of some debate in the field).