What does it mean to be conscious?

Discussion in 'Off Topic Area' started by Socrastein, Jan 20, 2011.

  1. AZeitung

    AZeitung The power of Grayskull

    I don't see how it could be. What's the definition of consciousness you're using if it's not "the thing that humans have"? That is, how are you defining consciousness? If we're defining it in a way such that nothing actually possesses it, where did that definition come from?

    Maybe this thread merits a link to a thread I started about 3 years ago: http://martialartsplanet.com/forums/showthread.php?t=70993
     
  2. Nathaniel Cooke

    Nathaniel Cooke Valued Member

    What I mean is, given that we can only view the subject from inside our own experience, i.e. we are humans looking at humans, our act of observing and trying to measure consciousness comes from the act of consciousness (or whatever stands for consciousness) itself. Therefore whatever we view is influenced by our own machinations and internal processes, and the act of 'consciousness' itself.

    Also, subjective in that if we create a computer that perfectly mimics consciousness to the observer then, to that observer, it is conscious - yet to the creator or the computer, it is just a mass of wires and circuits. Thus the notion of consciousness is subjective depending on who is making the observation, but it does not change the 'truth' of the object being observed.

    I guess in the end I feel that what we call consciousness is a metaphor, or story we tell ourselves, to rationalise a process that currently is beyond our comprehension. I don't think it's as base and simple as a series of evolved and instinctual responses to stimuli, but nor do I see it as a mysterious 'entity' separate to the rest of our bodies.
     
  3. Socrastein

    Socrastein The Boxing Philosopher

    Your summary of my view on the purpose of language and thought was spot on. I would say, however, that consciousness arises from the communication of information between various regions of the brain which process different functions and stimuli, not that it's what allows that to happen. I would say that early humans and our ancestors had consciousness, as in conscious experience, before they had much in the way of thought or language, which I think are deeply intertwined. This laid a foundation of function, so to speak, that allowed language and abstract thinking to later occur over time, which further improved the capacity for plasticity in the brain.

    In other words, a lot of parallel processes occurring in different brain regions are making judgments of various sorts and when this information is shared and co-processed to a sufficient complexity and breadth, then you have the software of consciousness. Then this slowly over time allows even MORE areas of the brain to 'get in on the action' of sharing and processing data.

    I'd say that brain sections become mutually incompatible because nature has no end goal in mind with evolution. Every adaptation has to have some sort of immediate benefit to the species; evolution can't lay the groundwork for a future brain that is a unified machine of great intricacy and co-opting of functions. She got close, but nature ended up adding some data processing here, some algorithmic functions there, a bit of reflex wiring here and there, and after billions of years we have in our brain fish parts, rodent parts, ape parts, and unique human parts. If you peel each layer back in order of how they came on, you still have a functioning brain underneath, just with a little less function. That's why things ended up a bit jumbled, because they just got thrown in there as needed wherever they could fit.

    Animal behavior before consciousness is more robotic and predictable, I'd say. There aren't yet enough competing parts of the brain screaming their own unique commands, wants, and judgments for there to be any need for a unifying software.

    Nathaniel Cooke

    I'd say the reason we can be confident that anyone or anything that has all the physical properties of a conscious being is in fact conscious has less to do with any affirmative argument for why this must be the case, and more to do with how silly the notion of dualism and philosophical zombies is.

    Ultimately if you can have two bodies with the same brain structure and behavior patterns, for all intents and purposes physically identical in their functional capacities, but only one of them is conscious, even conceivably, then you must appeal to dualism of one sort or another.

    Unfortunately, no form of dualism holds up to scrutiny, and therefore through a reduction to absurdity of dualism we can conclude that we needn't worry about zombies.
     
  4. AZeitung

    AZeitung The power of Grayskull

    Did you read the thread I linked to? I'm curious what you think about it.
     
  5. Socrastein

    Socrastein The Boxing Philosopher

    I'd answer no to all three questions. I also question the notion of reducing human thought to serial functions that a human/machine could calculate one at a time.

    Even if consciousness could be reduced in such a way, which I don't believe it can, you seem to be appealing to "possible in theory, impossible in practice" thought experiments that I don't see as being helpful for understanding consciousness. I think the tacit assumptions of your argument are the same sorts of confusions that have muddied the subject for thousands of years. Namely, assuming consciousness works the same way we feel and experience it working, as a uniform serial process of one thought or action to the next, and then assuming that it can be modeled in a serial manner as well, as in crunching one calculation at a time and drawing outputs on a piece of paper.
     
  6. Fish Of Doom

    Fish Of Doom Will : Mind : Motion Supporter

    an idle thought, given AZ's thread and reactions to it: consciousness is a human-made concept, so perhaps it doesn't really exist, and is actually only a pattern of different phenomena that we perceive as being a single cohesive thing? this would allow a new definition of consciousness as simply the presence of these phenomena (self-awareness, external senses, different cognitive abilities, maybe a form of communication or something else that can turn abstract ideas into coherent concepts?) and their interaction in a specific way.

    going back to AZ's thread, i think what complicates matters here is actually the definition of a being/entity that each commenter has, as some will not consider the computer as an entity, and thus it will be fundamentally incompatible with concepts such as consciousness or the self, and some may not accept that the computer AND the human operator would comprise a single entity, with the same result.
     
  7. AZeitung

    AZeitung The power of Grayskull

    If consciousness is a purely natural phenomenon -- as in, it arises from some process in our physical brain (which I assume you believe) -- how can that NOT be reducible to something that can be modeled by crunching numbers? The behavior of everything in the universe can be modeled by number crunching.

    For example, in my lab, we do calculations of small biological systems, like proteins, using ab initio methods (i.e. we optimize the geometry and calculate the electronic wave function and energy from first-principles physics). In principle, although not in practice, because there's not anywhere near enough computing power, we could model an entire brain--every individual atom--via these same ab initio methods.

    (In practice, however, the function of the brain probably doesn't even depend so much on the quantum physical processes that we model, and you could probably model its functions pretty accurately with classical physics. And if we're only interested in AI, then most likely we could do away with modeling a lot of other things, like the atoms in cell walls, etc.)

    In principle, you could take that algorithm that's used to model the brain, print it out on a sheet of paper, and work through it by hand, and come up with the exact same results, although in practice you would never get very far in your lifetime.
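
    To make the "work through it by hand" idea concrete, here's a toy sketch in Python. The dynamics are invented (a real ab initio model would be astronomically bigger), but the structure is the same: every step is plain arithmetic that a person with paper and patience could, in principle, do.

        # Toy deterministic simulation: every line is simple arithmetic
        # that could, in principle, be done with pencil and paper.

        def step(state, dt=0.01):
            """Advance an invented two-variable system one time step (Euler)."""
            x, v = state
            force = -x                     # stand-in for first-principles physics
            return (x + v * dt, v + force * dt)

        state = (1.0, 0.0)                 # initial conditions
        for t in range(1000):              # simulated time: t=1, t=2, ...
            state = step(state)            # same answer whether a CPU or a human crunches it

        print(state)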

    So, what do you think would be lacking from this model that we cannot put into the algorithm? I assume you're denying that it would be possible to create an algorithm that perfectly replicates human behavior, since you rejected the notion of a philosophical zombie as absurd a while ago in the thread (IIRC).
     
    Last edited: Feb 4, 2011
  8. Socrastein

    Socrastein The Boxing Philosopher

    Close, but this is the fundamental distinction I'm making: consciousness arises from thousands of processes that are going on somewhat simultaneously (there's a bit of temporal smearing of conscious events) within our mind.

    If you change your thought experiment from one Turing machine calculating a single function at a time to a few million Turing machines, all calculating different relevant functions in parallel, with some sharing of data between machines in a way that models our brain, then I would be willing to say yes, that network is probably conscious.

    Or at the very least, it's an easier foundation from which to argue for a model of consciousness, in my opinion.

    To be clear, I agree the mind is purely physical, I agree that consciousness is nothing more than a lot of fancy data processing, but I disagree it can be modeled appropriately in a serial fashion (one function at a time) because consciousness is the serial software that runs on our parallel hardware, so you'd need vast, fast parallel processing to have anything close to a human brain.
     
  9. AZeitung

    AZeitung The power of Grayskull

    But anything you can do in parallel you can do in serial, only slower. We actually have six eight-core machines here in the lab, by the way, as well as several older dual-processor computers and computers on various clusters that we can use to do calculations in parallel (where we can decide how many cores to use per calculation), but that's beside the point.

    Don't forget that in our simulation, time is also simulated. So, say I want to calculate the state of the brain at t=1: I do all of the calculations, in serial, to figure out what the state of the brain is, then I have my program store "t=1, brain state", and then I move on to calculate t=2, etc.
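
    Here's a rough Python sketch of that (the "units" and update rule are invented for illustration): compute every unit's next state from a frozen snapshot of the current state, then commit them all at once. Crunching them one at a time in serial gives exactly what simultaneous parallel updates would give, only slower.

        # Serial emulation of N "parallel" units via double buffering.

        N = 8
        state = [float(i) for i in range(N)]       # invented initial states

        def next_value(i, snapshot):
            """Invented rule: each unit averages with its ring neighbours."""
            return (snapshot[i - 1] + snapshot[i] + snapshot[(i + 1) % N]) / 3

        for t in range(100):                        # simulated time t=1, t=2, ...
            snapshot = list(state)                  # freeze the state at time t
            state = [next_value(i, snapshot) for i in range(N)]  # one at a time, in serial

        print(state)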
     
  10. Socrastein

    Socrastein The Boxing Philosopher

    Yes, so you could model the output in the same way you could model what a hurricane is going to do over the next 3 days, but you no more have a consciousness with that model than you have a big storm.

    There's an experiment that tests what I believe is called the color phi phenomenon, wherein a green light is lit in front of a subject and then a fraction of a second later a red light is lit a bit to the right of the green one. So it's green light on, green light off, some time passes, red light on, red light off.

    What every subject reports, however, is that a light turns green then starts moving to the right as it transitions into red, and then there's a red light on the right that goes out shortly after.

    Does the subject actually consciously see the separate lights, and then the brain erases this memory and then implants a memory of what it assumed happened?

    Or does the brain see what happened and then send us a different experience, the one it determined happened?

    Where does that change between what the eyes see and what the conscious observer experiences occur? When?

    The truth is, it's impossible to answer this question, not only in practice but also in theory. The very question assumes that there is some line in the brain before which information is not consciously experienced, and after which we experience things. Of course there is no such division or center within the brain, so the question is irrelevant and both answers are, for all intents and purposes, the same thing.

    So while I understand that parallel processes can be modeled in a serial fashion accounting for the temporal properties of events, you can't place an exact time stamp on conscious events because they are smeared across a few hundred milliseconds worth of processing in the brain.

    Trying to determine "When did the brain become aware of the light change?" so you can know the precise order in which to crunch your calculations correctly to make an accurate model is impossible. If you want to try to model the calculations the brain makes in a way that makes a conscious mind, you have to model them the way the brain does them, all in parallel, all in different but close regions and all in different but close times. Once you try to determine the "order" in which everything is processed, you've already set out on an impossible task.

    Remember, we're talking about a processing machine with billions and billions of moving parts. How do you propose to determine in which order to calculate thousands upon thousands of parallel functions that we can't possibly sort out in any accurate way from a temporal perspective?

    Anyway, I think the point of your thread was to explore whether or not a perfectly modeled human consciousness would be conscious, and I say yes. What constitutes a perfectly modeled human consciousness is up for debate obviously, but it doesn't make or break the answer to the question as far as I can see.
     
  11. AZeitung

    AZeitung The power of Grayskull

    Sight itself is merely an interpretation of sensory input. I don't think it even makes sense to talk about "seeing things as they really are", since sight is just us making the best sense we can out of a certain type of information that we're being fed. But I'm not trying to define consciousness in terms of how accurately we're able to interpret our surroundings. We should be able to discuss consciousness as it is experienced by someone who is blind and deaf, or in a sensory deprivation chamber.

    That's fine, but I'm not asking you to put an exact time stamp on conscious events. The question is whether or not the program has consciousness, not whether it's conscious at any particular point in time.

    Although, that's another interesting question. It's sort of like asking "if we take an infinitesimal slice of time, and look at a moving object in that slice of time, does it have a velocity?" A layperson wrote a paper that was something along those lines around 10 years ago or so, and somehow got it published in a physics journal. It really wasn't very good. But the answer to that question is sort of yes (I think the guy who wrote the paper said no?). So, whether or not consciousness can "exist" at one particular point in time could be an interesting question.

    It's actually totally irrelevant for making a physical model. Physics behaves in a deterministic fashion (even quantum mechanics has deterministic aspects to it), so if I model every atom in the brain computationally, then model some photons interacting with the eyes, the laws of physics will determine the rest.

    It doesn't matter in which order I calculate them, I can deterministically figure out what they should be at any given point in time. Now, I'm not trying to restrict my definition of consciousness to individual points in time, merely pointing out that I could in theory calculate exact states of the brain at individual points in time, which determine the behavior of the ENTIRE brain as a function of time.

    Yes, that was the point, although the point was also, "if the answer is yes, how do we overcome what I perceive as a difficulty here". Now, obviously, you don't think that's an issue because of what you think needs to be done to model a human brain, but I think you've made a mistake in your reasoning, so if you DO change your mind, it will be interesting to hear what you have to say, then.
     
    Last edited: Feb 4, 2011
  12. Socrastein

    Socrastein The Boxing Philosopher

    Actually you're quite right, and I realized while rereading my post (after posting it, when proofreading is the most effective of course ;) ) that I was making an erroneous point that wasn't actually relevant to your statements. My bad!

    In the same way you can build a ladder to the moon, or design a computer that can predict every future permutation the universe will be in at a particular moment: it's logically possible, but physically impossible. Therefore I don't see how it helps us understand consciousness.

    Would you mind articulating the difficulty you see with having a perfectly modeled brain being conscious? I think I understand your point, but I want to be sure before I address it again.
     
  13. AZeitung

    AZeitung The power of Grayskull

    I suppose the question boils down to, if we have an algorithm for modeling a brain perfectly, and rather than using a computer to run it, we run it by doing some abstract number crunching in our head, with a little help from a calculator and pencil and paper (in which case, the algorithm will still perfectly replicate the behavior of a human brain), "where" does the consciousness lie, or in what sense is there consciousness? To me, doing the calculations, everything is abstract. The calculator is only performing simple functions, and the paper is just a reminder of how the algorithm works. So, what exactly is going on that could be producing consciousness?
     
  14. Socrastein

    Socrastein The Boxing Philosopher

    Thank you for clarifying, AZ. I will have to wait until I am at home with some time to pore over some materials before I can do your very insightful question any justice, so I'll get back to you.

    As an aside, it's really great to have a scientific/philosophical discussion with you after all these years :) You're even more smart than I remember you being.
     
  15. Socrastein

    Socrastein The Boxing Philosopher

    The reason I've avoided trying to directly answer this question isn't because I can't, it's because the answer is unlikely to be satisfying to someone this early in a discussion on consciousness. A lot of things have to be thought about and understood before this question can be satisfactorily addressed, but I suppose many of these points will surface in your reply and the replies of others and we shall address them as they come, although that's a bit like putting the cart before the horse if you ask me.

    I mentioned a couple times that I don't think your thought experiment helps anyone better imagine or understand consciousness, and that's my biggest problem with it. Philosophical thought experiments have the potential to open our minds to new possibilities and imagine new ways of looking at things, but they can also reinforce poor assumptions, false intuitions, and common misunderstandings, which only further obscures the issue rather than illuminating it.

    All that being said, let's get to the fun part.

    IF you were to perfectly model a serial virtual machine running on a parallel processing computer with over a million channels using a serial Turing machine, that system would be conscious.

    While that seems to fly in the face of all our powerful intuitions about your thought experiment, that only reveals the dubious assumptions that your question instills in most people.

    Of course the fact that you have a human inserted arbitrarily into the system only obscures the matter. I wouldn't be surprised if Searle's intention with the Chinese Room (I'm assuming that's the classical philosophical problem on which you base your slightly different scenario) was to simply provide a stark contrast between the intelligent agency of himself and the mindless operations of the system he was participating in, to bolster the 'obviousness' of his conclusion.

    First it must be agreed that an algorithm is not medium-dependent. What I mean by that is, it doesn't matter through what medium you implement an algorithm, so long as the logical structure is in place it changes nothing but aesthetics and perhaps the time it takes for information to be processed.

    What I mean is, the game of Tic Tac Toe can be played on paper with pencil, in the sand with sticks, or on a digital display while running electronically in a computer; you could even assign locations to 9 different friends on Facebook and take turns poking the one where you wished to place an O or X. So long as the logical structure is in place, it's all still Tic Tac Toe.
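
    A quick sketch of that in Python (all the details invented): one chunk of Tic Tac Toe logic, with the "medium" swapped out underneath it and the game unchanged.

        # One Tic Tac Toe "algorithm", two interchangeable media. The logic
        # never changes; only the way a move is physically realized does.

        WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

        def winner(board):
            for a, b, c in WINS:
                if board[a] != ' ' and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        def draw_on_paper(square, mark):          # one medium...
            print(f"pencil a {mark} in square {square}")

        def poke_facebook_friend(square, mark):   # ...another medium entirely
            print(f"poke friend #{square} to record {mark}")

        board = [' '] * 9
        moves = [(0, 'X', draw_on_paper), (4, 'O', poke_facebook_friend),
                 (1, 'X', draw_on_paper), (8, 'O', poke_facebook_friend),
                 (2, 'X', draw_on_paper)]
        for square, mark, medium in moves:
            board[square] = mark                  # the logical structure lives here
            medium(square, mark)

        print(winner(board))                      # 'X' either way: same game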

    That being said, as I believe you would already agree, having a human process the information, having a Dell process the information, having 3 highly trained and infallible monkeys process the information... none of it makes a difference to the logical structure of the program in question. If you found a way to simulate a human brain by bouncing billiard balls within a humongous cube suspended in space (let's pretend we've somehow made a perpetual motion machine, since we're using physically impossible scenarios anyhow) then it would be no more or less of an algorithm for human behavior than if you implemented it into a few billion interconnected neurons inside a human skull.

    That's the first important point. We don't have consciousness because there is something magical about organic neurons. The "magic" is in the logical structure of the system, not in the medium on which this structure is imposed.

    Even if this doesn't "feel" true, we can safely acknowledge this point and then try to find out why it doesn't "feel" true with your example.

    One problem I see with the OP in your thread is you don't propose the possibility that the entire system in question is conscious.

    I would say no to both questions as well. They're obvious answers, and one is tempted to extrapolate a more significant 'no' to the overall question "Well is any of it conscious?", which is the allure of the question.

    Does the drawing possess consciousness? No more than any random discharge of neural impulses in X region of our brain possesses our consciousness.

    Do you (the number cruncher) possess an extra consciousness inside your brain? No more than any of my individual neurons are themselves conscious: they're just mindless robots doing their job, and not a single one of them knows who I am or cares.

    And herein we come to what I believe is the real tacit assumption of your thought experiment. It's tempting to conclude:

    Well if this part doesn't have consciousness, and this other part isn't conscious, then the whole thing can't possibly be conscious!

    It doesn't matter how you divide the mechanical jobs that our neurons and brain structures perform. You can outsource those jobs to some poor sap who has to just trace circuit patterns his whole life, you can outsource those jobs to the entire nation of China (The Chinese Nation philosophical problem, closely related to this) and all its people, or you could outsource those jobs to some as-of-yet undiscovered assortment of silicon processors and chips.

    You'll always be left with the same question:

    How can more of the same, more mindless, mechanical number crunching, ever add up to anything not mindless and mechanical?

    I believe this is the ultimate crux of your thought experiment AZ, and I will argue that this is NOT an obvious proof that there must be some magical soul jelly that bridges the gap between mechanistic and conscious. Rather, I will argue that this thought experiment reveals nothing more than the poor powers of imagination that most people have.

    I can't IMAGINE how a hugely complex machine with billions of interacting parts, capable of higher order thoughts about its own informational states, and states about its states, could be conscious...

    Therefore it can't.


    I would say, imagine harder!

    Since it's been established that the medium is irrelevant to the algorithm, we can center the discussion on the algorithm itself.

    You propose that this program has all the logical structure and information processing power of a human brain, capable of communicating with humans, like a human, coming up with novel ideas not hard wired into it, etc.

    To pull off this feat, this program must have very complex informational states about itself, its own patterns of processing, its dispositions, what it does and doesn't know about its inner workings, and more. It must also have very complex information states about whomever it's speaking to, including vast amounts of information about humor, social mores, the disposition of the speaker, motivations for what is said and not said. It will have very similar information processing capacity with regard to itself (why it said this, why it said that, why this was offensive, why this was hilarious, etc.).

    Let's not forget that this machine can not only process information (Red light is registering in my visual field), but it can process information ABOUT that information (that's the same red light I see on Stop signs), and information ABOUT THAT information (I'm in a grass field, should I expect to see a traffic sign?) and information ABOUT THAT information (Perhaps there are dusty roads yet unseen and I'm in a rural area) and information ABOUT THAT....
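
    If it helps, here's a toy Python sketch of the recursive shape of that stacking (everything about it invented, and obviously brains don't run on dicts): each level is information ABOUT the level below it.

        percept = {"content": "red light in visual field", "about": None}

        def reflect(state, comment):
            """Wrap a state in a new state that is about it."""
            return {"content": comment, "about": state}

        s1 = reflect(percept, "that's the same red I see on stop signs")
        s2 = reflect(s1, "I'm in a grass field -- should I expect a sign?")
        s3 = reflect(s2, "maybe there's a dusty road I haven't seen yet")

        # Walk back down the levels:
        level, s = 0, s3
        while s is not None:
            print(level, s["content"])
            level, s = level + 1, s["about"]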

    This program, if it is as powerful as you have contended, is privy to an infinite level of informational states about what it is processing, what it has processed before, what it is not processing, what it will process, etc. All of these different information states weave what Daniel Dennett calls a "Center of Narrative Gravity", a self that emerges as a user illusion within the virtual machine.

    The "I" in consciousness is a result of countless information states and discernment happening in the brain that all seem to be about the same entity. That entity is no more real and physical than a center of gravity is, but they are both very useful abstractions for dealing with single things. The truth is, the human brain is a myriad assortment of likes, dislikes, dispositions, fears, goals, regrets, etc.

    The self emerges as the liker, the dis-liker, the dis-positioned, the fearful, the goal-oriented, the regretful, etc.

    It's too hard to calculate the effect of every single piece of mass in a piano, so we sort of average everything out and find the center of gravity, which is useful for talking of the object as a single entity with a single center. It's especially useful for calculating how the piano will react if moved this way, tipped that way, hit with this much force, etc.

    It's too hard to calculate the various incompatible and erroneous behaviors, decisions, speech acts, opinions, etc. of a human, so we sort of average everything out and find the center of narrative gravity, which is useful for talking of the organism as a single mind with a single self nestled inside it. It's especially useful for predicting what someone will say, what they will do, how they will react to this or that statement, etc.
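
    The center-of-gravity half of that analogy is literally just a weighted average. A sketch in Python, with numbers picked out of thin air:

        # A center of mass is a fiction -- no single particle sits there --
        # but it's the useful fiction for treating the piano as one thing.

        masses = [120.0, 80.0, 45.0, 5.0]     # invented chunks of a piano, kg
        positions = [0.2, 0.9, 1.4, 1.6]      # their positions along one axis, m

        total = sum(masses)
        center = sum(m * x for m, x in zip(masses, positions)) / total
        print(center)   # one abstract point that predicts how the whole tips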

    When we take this abstraction too literally, the Chinese Room, the Chinese Nation, and your OP from the other thread all seem to obviously prove that zombies are possible. However, as I have attempted to show, this strong intuition stems from our inability to fully conceive of just how complex a perfectly modeled human brain would be, and our inability to see the "self", the "understander" as nothing more than millions of homunculi and quasi-understanding daemons.

    I'd say your example of someone crunching input-output by hand would create a system that was conscious but just thinking really slowly. That sounds crazy and hard to imagine, but picture someone right outside the event horizon of a black hole (somehow not completely destroyed, of course): from our perspective, they could be processing information about as fast as the dude with a GIANT circuit board and a billion pencils. So in 10,000 years the guy at the black hole might have enough processing time to think "Oh my God, I'm going to die!", just as after 10,000 years of number crunching (probably much, much, much more, but that needn't be accurate for the point to stand) your system might think "Hmmm.... why can't I move my arm?".
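
    For what it's worth, the slowdown in that analogy has an actual formula: for someone hovering at radius r outside a non-rotating black hole with Schwarzschild radius r_s, local clocks run slow by a factor of sqrt(1 - r_s/r) relative to a far-away observer. A toy Python calculation, with the numbers made up:

        import math

        # Gravitational time dilation for a hoverer near a non-rotating
        # black hole: dt_local = dt_far * sqrt(1 - r_s / r).

        r_s = 1.0                       # Schwarzschild radius (arbitrary units)
        for r in [10.0, 2.0, 1.1, 1.001]:
            factor = math.sqrt(1 - r_s / r)
            print(f"r = {r:>6}: local clock runs at {factor:.4f} of far-away rate")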

    I hope at least some of that made a little sense ;)
     
    Last edited: Feb 5, 2011
  16. AZeitung

    AZeitung The power of Grayskull

    It's pronounced "more smarter," actually :D, but thank you. It's good to hear from you again too. I think I remember hoping that you'd show up when I posted that other thread for the first time, so I'm glad you got a chance to respond to it now.

    Actually, no. I had never heard of that before, but thank you for bringing it to my attention. I looked it up on Wikipedia just now. It sounds like what I wrote wasn't quite as original as I had hoped.

    Good, we agree on this, then.

    There's actually nothing too unphysical about billiard balls perpetually bouncing around. The more interesting question is why billiard balls DON'T behave that way, but that's totally irrelevant to this discussion, so I won't say any more about it.

    Agreed.

    I think Strafio may have mentioned something to that effect in one of the later pages.

    Well, suppose I remember the algorithm well enough that I don't need to write anything down on paper and I'm good enough at mental math that I don't need to use a calculator. Then, I presume, you'd say that yes, I do contain the consciousness in my own mind, in some abstract form, since I don't personally know exactly how everything translates into behaviors; I only understand the 1's and 0's that I should be outputting.

    Indeed.

    And I'm almost tempted to agree with you here, but I guess I have a problem with everything on a more fundamental level, which was kind of the point of breaking up the problem the way I did:

    Modeling a physical brain is a matter of solving a particular set of physical equations that determine the brain's state at any given point in time.

    But suppose that rather than solve the equations, I simply write down all of the equations and label them f(x), where f is a set of functions and x is some vector containing all the necessary input for the brain equations. Then I say that f(x) = y (where y represents whatever the output of this equation is supposed to be with x as the inputs).

    I can't wrap my head around how the act of actually solving the equation f, to find the actual values of y, could produce consciousness any more than simply writing down the equations f and then writing down the placeholder y, where y represents the true solution. After all, my neurons firing in my brain only form a representation of the solution to the equation, just as whatever digits I write down also only form a representation of the solution.
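
    In code, the distinction I'm gesturing at looks like this (Python, with f invented): a placeholder that denotes the answer versus the answer actually worked out. Both are just representations of the same number.

        def f(x):                  # invented stand-in for the brain equations
            return 3 * x * x + 1

        x = 2.0
        y_unsolved = lambda: f(x)  # a symbol that *denotes* the solution
        y_solved = f(x)            # the solution written in decimal notation

        print(y_unsolved())        # 13.0 -- the same information either way
        print(y_solved)            # 13.0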

    This is why I tried to break f up into multiple parts. But perhaps we can think of it another way. Let's say that I actually break the solution of f up into multiple steps.

    I'll perform step one, then send it to you, and you perform step two, which requires the answer found in step one to complete. Now say that while I send my solution to step one, it gets intercepted and replaced with a set of numbers produced by a random number generator that happen to match exactly the numbers that I've calculated. You then perform your calculations on those random numbers.

    Do we still have consciousness? Nothing has actually changed in our performance of steps one and two, except for the source of your numbers for step two.

    Or, let's suppose that I break the program into multiple pieces and give it to random people to solve, who have no idea what they're solving. Now, let's suppose that a bunch of people just happen to be solving those same equations for totally unrelated reasons somewhere else. Do we have consciousness in one case and not the other? Does the act of solving equations, however abstract, somehow produce elements of consciousness?
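
    Here are both variations as one Python sketch (the steps invented): step two only ever sees values, so it can't tell whether they came from my step one, a lucky random generator, or strangers solving the same equations for their own reasons.

        def step_one(x):
            return x * x + 1               # invented

        def step_two(y):
            return 2 * y - 3               # invented

        honest = step_two(step_one(5))     # value I actually computed
        intercepted = step_two(26)         # "random" number that happens to equal step_one(5)

        print(honest, intercepted)         # identical: 49 49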

    Yes, but since this could all be modeled abstractly through our modeling of the physical system of the brain, we don't have to worry about, or necessarily know how many "levels" of thought are occurring.

    Or perhaps just very large?

    I find the language of this statement confusing, because the idea of an illusion usually requires a conscious entity whose perception is being distorted. But you seem to be saying that perception itself is the illusion. But then, what is perceiving the illusion of perception?

    I can see this as a useful way of describing our perception of other people, but I don't exactly understand how we can apply it to self-perception without being circular.

    So it seems the answer SHOULD be, but I'm still not convinced that it works.
     
    Last edited: Feb 5, 2011
  17. Socrastein

    Socrastein The Boxing Philosopher

    Rather than address your many eccentric versions of the Chinese Room, we need to try to focus on the underlying issue here, which is what does our inability to imagine how something could work tell us?

    I can imagine a present day vitalist arguing that he can picture a cat with all the cells, proteins, and DNA that supposedly constitute a living being, but it still isn't alive. Is this person actually stating a useful or important proposition?

    Does it even deserve a response, or is the onus on the vitalist to explain why a physical replica of a live cat could possibly be a life zombie? (to contrast the mind zombies we are discussing)

    I say the onus would be on the vitalist, and I say the onus is equally on you to explain why a perfectly replicated human brain wouldn't be conscious. I've never heard a compelling account for dualism, and I'd be curious to hear what you have to say in support of it.

    I think the more reasonable response is: if I can't wrap my head around how it could be, it's likely an indication of my inability to imagine such a complex thing in all the detail and intricacy it requires, and I should trust that the more parsimonious, scientific, and logical conclusion to draw is that I simply don't have the powers of imagination I thought I did. That's a more reasonable assumption than assuming there must be some "extra mind stuff" there to explain my inability to imagine otherwise.

    I think this is another source of confusion here. I think, likely because of your background and knowledge, you focus on the physical level to the exclusion of other relevant levels of abstraction. If you only ever look at the circuits and electrons and physical reactions, you're going to miss the forest as they say. Your argument is a form of "There is nothing but trees here!" because you are looking for consciousness on the wrong level, so to speak.

    Couldn't I use a form of the reasoning you are employing to show how obvious it is that the Mona Lisa is not a painting of a beautiful woman, but a panel covered with a bunch of random oils?

    Moby Dick isn't a classic novel, it's just a bunch of wood pulp covered in splotches of ink!

    The Mona Lisa is at the same time 1) a panel covered in oil spots 2) an assortment of lines, colors, and shapes 3) a skilled representation of a human female smiling coyly. There are different levels at which to look at many things.

    If you're trying to understand consciousness by only looking at the physical level, you'll get about as far as a computer scientist trying to compare the pros and cons of Microsoft Word and OpenOffice Word Processor by comparing their individual voltage readings in memory.

    I agree that it can seem confusing, but I don't blame that on the idea itself but rather on our perception of it, having grown up in a world where Descartes and his damn Cartesian Theatre dominate our social and even philosophical mindset, right down to the very way we speak and express our most common ideas. (For instance, saying "I see" when we come to understand an idea, as though some entity in our head is viewing things as they come in and trying to make sense of them...)

    This is what makes it so hard to understand consciousness, in much the same way evolution was just an absolutely impossible concept to those who grew up in the time of "Intelligence first" and the "top down" view was absolutely pervasive in philosophy and common discourse.

    The illusion is thinking that we are a separate entity. It feels that way, so we assume it must be so. This is a result of us having very limited access to the goings on behind the scenes (this is what the user illusion is in computer science, the presentation to the user that things are colorful, assorted, and clean, when of course the actual programming and wiring does not represent what is on the screen in the slightest) and that is what I mean by user illusion. We are privy to little, and thus we assume much erroneously.
     
  18. AZeitung

    AZeitung The power of Grayskull

    Well, that's called a dead cat, and dead cats do exist.

    I think it's actually interesting to explore the differences between living and dead things. Sometimes there's obvious damage to a particular system, but I do wonder how much has to change physically when certain types of death occur.

    But that's not exactly the same thing, either. In this case, it's not just "I'm going to picture something that acts like a human but has no consciousness" with no qualification--it's saying "consciousness happens in people who have certain types of physical processes occurring, but what if entirely different processes occur?"

    I haven't actually made a claim either way, what I want to know is how writing a function f, writing down an input vector, x, and then using the symbol y as a representation of the output is different from actually having someone solve the equation. All that happens when you solve the equation is that you change the representation of the solution from one form to another.

    Additionally, why should the equations suddenly take on consciousness when they're solved for a *purpose* rather than being solved for unrelated reasons?

    I phrased it that way deliberately in an attempt to be conciliatory, so let me pose the question once more that I've been trying to get at:

    1) How does merely changing the representation of information from one form to another produce consciousness:

    e.g. if I write f1(x1) = y1, f2(x1) = y2, etc. and when you ask me questions, respond simply "y1," "y2," "y3", then leave it up to you to decode the message, why is it only when the equations f1, f2, f3, etc. are solved that consciousness is produced, since y1, y2, and y3 contain the same information (which only needs to be decoded with those functions) as the actual numerical solutions.

    2) If it does, can solving equations in general produce some form of consciousness, and if not, why does the specific purpose for which they're being solved matter?


    I can also write equations that describe a gravitational field and write equations that describe the motion of a particle in a gravitational field, but that's still different than having an actual particle moving in a gravitational field.

    Why should we be content, then, with our limited imagination, rather than actually trying to find the actual answer?

    And I have no problem with using this to describe our experience of other people's consciousness, but I don't think it's helpful at all for understanding perception itself.

    Yes, but in the case of the Mona Lisa, we understand that there's a correspondence between the location and color of pigments on the panel and the object that we see when we look at it. We can understand the correspondence between oils, pigments, and the object that they represent, and quite frankly, if we couldn't, that would be something I would want to explore.

    And by the same token, although the Mona Lisa is a painting of a woman, it's not an actual woman.

    This is sort of like if you asked me "why does entropy always increase or remain constant, when the laws of physics are time-reversible?" and I said "well, the fact that you don't know just shows that you can't imagine how it could be so". The fact that it's a difficult problem definitely doesn't mean that it should be ignored, treated as trivial, or dismissed as simply a lack of imagination. It means that it needs to be investigated and that we either have to:

    1) find a way that time-reversible laws can give rise to irreversible behavior

    or

    2) admit that there's some aspect to physics that isn't time-reversible,

    and simply chalking it up to our lack of imagination isn't satisfactory.

    But it seems to me like your attempts to explain consciousness have all been muddied in a Cartesian Theatre framework, such as consciousness being an "illusion". And it's not merely the language of statements like that that's confusing; the actual idea itself seems Cartesian-Theatre-esque.

    Who thinks that? Our thought itself has to be an illusion, also, so what is that being that thinks of itself as a separate entity? And separate from what?

    And again, this presumes that there is some sort of user that things can be presented to.
     
    Last edited: Feb 7, 2011
  19. Socrastein

    Socrastein The Boxing Philosopher

    I'm having trouble understanding exactly where you're trying to take this discussion, AZ.

    I have no desire to address any more of your thought experiments specifically, since every time I do you just fall back on another, even more obscure version without actually addressing the fundamental issue, which is who cares if you can't imagine how these systems are conscious? How is it any more interesting than a vitalist saying he can imagine an animated organism that isn't alive? I would tell him "No, you can't, and if you think you can, you're not imagining hard enough". I've told you the same thing, with no satisfactory response other than "But what about THIS scenario, how could there POSSIBLY be consciousness there?" which is more of the same, and the answer is the same.

    You're tacitly appealing to dualism without actually defending it. Until you offer an account of why there should be anything more to a conscious system, the onus sits with you. If you do not intend to argue for dualism, then again I don't see where you intend to take the conversation.

    If we accept that algorithms are medium independent, and acknowledge that humans have biological computers in our heads and are conscious, then why should we doubt that any other system that employs these same algorithms would be conscious? What happened to Occam's razor?

    We have the answer. It just doesn't feel intuitively correct. That is NOT any indication that it is wrong.

    Just because it feels like time should be universal, doesn't change what we know about relativity.

    Just because it's nearly impossible to imagine how a photon could sometimes, RANDOMLY, tunnel through a barrier and just appear on the other side doesn't mean QM is missing something.

    We should just accept that nature is full of crazy things that we're nowhere near intelligent enough to fully grasp.

    Now if you have a good reason why there must be some "mind stuff" to explain what you find hard to imagine, THEN we actually progress the discussion.

    Can you give me a good reason why the user must be a separate entity? You're the one making this assumption, not me. I'm perfectly content with a user that, when scrutinized in detail, is incorporated into the system itself. No, I haven't the time or patience to get into that detail, but this isn't necessary anyway until you give an account of how the user could possibly be a separate entity, and how a physical system can interact with a nonphysical system.

    To sum this up more succinctly since I imagine my point will get lost in your line by line dissection of my post again:

    If you can't/won't defend dualism, accept that zombies are impossible whether that intuitively makes sense to you or not.

    If you won't defend dualism, then what are you doing other than derailing this topic?
     
  20. AZeitung

    AZeitung The power of Grayskull

    Then, for the third time, I'll repost the two fundamental issues:

    1) How does merely changing the representation of information from one form to another produce consciousness:

    e.g. if I write f1(x1) = y1, f2(x1) = y2, etc. and when you ask me questions, respond simply "y1," "y2," "y3", then leave it up to you to decode the message, why is it only when the equations f1, f2, f3, etc. are solved that consciousness is produced, since y1, y2, and y3 contain the same information (which only needs to be decoded with those functions) as the actual numerical solutions.

    2) If it does, can solving equations in general produce some form of consciousness, and if not, why does the specific purpose for which they're being solved matter?

    These are the two things that every thought experiment I've posted is meant to explore, and I keep changing the thought experiments because those issues never get addressed in the replies.

    I'm not specifically taking a side, but I think that all of your explanations that supposedly do away with dualism are implicitly invoking it in order for them to have any meaning, so I want either: 1) a good explanation that doesn't implicitly invoke dualism, or 2) an admission that what we know about consciousness right now actually doesn't explain this.

    You refer to "illusions". Illusions, as typically defined, require a consciousness to perceive them. So, you're using illusion in the sense that it doesn't require someone to perceive it. Now you've just used illusion in a way that is completely meaningless. I have no idea what you mean by the word, except that it seems to be a word that's defined to mean whatever it needs to be for your explanation to work.

    But that's NOT an answer. It's like if you asked me what causes gravity, and I answered "it's an attractive force between two bodies". Ok, that's true--but it doesn't explain why or how gravity works, or give any useful method of making predictions of what it does. That answer is basically just "well, it just is this way". And if "well, it just is this way" is the best argument you have, then you're really appealing to nothing more than personal belief--which is fine, but don't try to dress it up as some fundamental, logical truth.

    And once again, I can make a nice geometric picture that explains exactly why time behaves relativistically, rather than appealing to "lengths just get shorter and time slows down because that's how physics works".

    And again, that's very understandable when you actually understand QM. However, in the early days, it wasn't understood that well and I think Einstein was perfectly justified in the thought experiments that he posed to Heisenberg--the last of which, by the way, Heisenberg may not even have made a very valid argument against.

    We now know for sure that Heisenberg was right and Einstein was wrong, but ONLY because we have a better and deeper understanding of QM than Heisenberg had at the time.

    No, we really shouldn't.

    My objection is that your "answers" aren't really answers or valid objections to anything I've written.
     
    Last edited: Feb 7, 2011
