

Everything you said is right, but you’re only proving that LLM weights are a severely simplified version of neurons. That neither proves they lack consciousness nor shows that being a mathematical model precludes having consciousness at all.
In my opinion, the current models don’t express any consciousness, but I’m against denying it on the grounds that they’re a mathematical model rather than on the basis of results we can measure. The fact that we can’t theoretically prove consciousness in the human brain also means we can’t theoretically disprove consciousness in an LLM. They aren’t conscious because they haven’t expressed enough to be considered conscious, and that’s the extent of what we should claim to know.





I don’t know what you’re even arguing. Your analogy breaks down because in this case, we can’t even see whether the raven is black. No one can theoretically prove consciousness. The rest of your comments seem to argue that current AI has no consciousness, which is exactly what I said, so I guess this is just an attempt at supporting my point?