daniel_the_smith wrote:I'm assuming a successful reanimation requires nanotechnology; I'm expecting the reanimation machine to actually look at brains on the molecular level and reconnect any severed synapses that go together, etc. Obviously there's some level of damage past which it won't be possible. Even if it were as bad as a stroke, I'd be perfectly happy to take that chance for 1,000 extra years of life. I expect lots of damage to be done during the dying/freezing process, and some of that to be repaired prior to/during the reanimation process.
Reanimation would be extremely difficult. The only sort of society that will do it is one in which god-like feats of engineering cost not much more than running a refrigerator. I think a lot of the objections are based on a failure to imagine how profoundly different that would make society. Did I mention I give this < 5% chance of working?
So just to get the numbers right: we have pr(A) = pr(broadly useful, functional nanotechnology), pr(B) = pr(reanimation is an application of nanotechnology), and pr(C) = pr(a good chunk of the brain damage can be repaired). You're suggesting that
.01 < pr(A)*pr(B)*pr(C) < .05
How would you break down the component probabilities?
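One way to get a feel for the constraint (my illustration, not any poster's actual breakdown): if the joint probability has to land between .01 and .05, and we assume for the sake of argument that the three components are roughly equal, each one is pinned to the cube root of the joint probability.

```python
# Illustrative sketch only: suppose pr(A), pr(B), pr(C) are roughly equal
# and their product must fall in the stipulated range [.01, .05].
lo, hi = 0.01, 0.05

equal_lo = lo ** (1 / 3)  # each component if the product is .01
equal_hi = hi ** (1 / 3)  # each component if the product is .05

print(f"each component must be roughly {equal_lo:.2f} to {equal_hi:.2f}")
```

So even the pessimistic end of the range requires each component probability to be around one in five, which is itself a fairly strong claim about nanotechnology.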
daniel_the_smith wrote:the control group is not doing well at all.
Oh? Compared to whom? I would say we're actually doing pretty dang well, and the mountains we have left to climb are more in quality of life than quantity of life.
daniel_the_smith wrote:
No biological processes at all take place while the brain is at liquid nitrogen temperature, so memories won't fade while you're suspended. This is nothing like being in a coma. And once you're awakened, how will things be any different than they are now? I only have a few memories from when I was < 5 years old. If anything, I expect memory to be improved upon awakening (for any new memories, that is, and I would not expect old ones to deteriorate like they do now).
I think you're conflating two different suggestions which I should have clearly distinguished. (i) By the time you are resurrected radiant and incorruptible, significant damage will have been done to the brain that you call yours. (ii) After resurrection, you will continue to forget things like you do now (or perhaps much faster, because of the lack of reinforcing stimuli), but for a much longer period of time. Right now, you barely remember being 5, but presumably those memories were never central to your identity; would you still think the resurrected individual was you if/when he remembered half of his pre-resurrection life? 1/4? None?
(You've already answered the question, of course, but I believe you've answered it both ways. Two bodies could share a brain and not have any of the same memories, just as two bodies could share a heart and not have the same pulse or blood pressure.)
daniel_the_smith wrote: jts wrote:If you gain/lose one friend, perhaps not; but it's quite plausible to me that if you lose all of your friends, relatives, etc., you've acquired a (partially) new identity. And that might affect how much you care about what happens to the person who has your brain.
Since this happens already in life without people getting frozen, I don't see how it's relevant?
Have you ever heard the phrase "You're dead to me"?
Remember our original question. Several people in this thread seem to think that immortality is an important goal because it's like wanting to wake up tomorrow (which is, indeed, an important goal), times infinity. The implicit assumption: jts, jts tomorrow morning, and jts in 10^10 years stand in a relation of "being the same person", where "being the same person" entails some set of attitudes (self-preservation, for example). We're trying to figure out what needs to be true of this relation for it to entail the attitudes you have in mind.
daniel_the_smith wrote:
I said "informational content"-- the exact molecules that compose my brain, and even the particular hardware running it (neurons) are not important.
Perhaps I misunderstood what you meant. The molecules can't be important, since the body is constantly rebuilding cell structures and flushing out molecules. And, while I think people who talk about neurons as interchangeable with other kinds of circuits frequently don't understand neurons, that's fine: I'll grant to you that we don't care about the exact components of A's brain or B's brain.
But when you say that we're identifying the brain with its informational content, I think you're begging the question. The content of the brain is exactly what changes. We've assumed, by hypothesis, that A and B have arbitrarily different memories, desires, attitudes, reactions, and so on. You can say they share an organ, but you can't say they share informational content. That's precisely what we stipulated was different.
daniel_the_smith wrote: I am my brain. You are yours.
Are you sure? So if someone poked the left hemisphere of my brain, should I describe what happened as "He poked my left hemisphere," or "He made me see a patch of blue, by poking the left hemisphere of my brain?" I wouldn't say "He shined a bright light into me", I would say "He made me see stars, by shining a bright light into my eyes." I wouldn't say "He made me release endorphins," I would say "He made me feel happy, by making my hypothalamus release endorphins."
The semantics of how we describe the relationship between me and my more important organs may be insignificant (although I think semantic confusions often point to conceptual confusions). But if we say that sharing my brain is the sufficient condition for two people being me, we've merely reworded the question: which brains count as my brain?