Life In 19x19
http://www.lifein19x19.com/

The End of Humanity
http://www.lifein19x19.com/viewtopic.php?f=8&t=18897

Author:  Kirby [ Thu Sep 15, 2022 7:30 pm ]
Post subject:  The End of Humanity

From our friends at DeepMind and the University of Oxford:

https://futurism.com/the-byte/google-ox ... -humankind

Quote:
Researchers at Google Deepmind and the University of Oxford have concluded that it's now "likely" that superintelligent AI will spell the end of humanity — a grim scenario that more and more researchers are starting to predict.


I wonder how many games I can get in before the endgame.

Author:  Gomoto [ Thu Sep 15, 2022 8:24 pm ]
Post subject:  Re: The End of Humanity

I don't know how many games I will get in, but I am sure I will enjoy every one of them :-)

Author:  jeromie [ Thu Sep 15, 2022 8:29 pm ]
Post subject:  Re: The End of Humanity

One of the key phrases in that article is, “under the conditions we have identified.” They make a lot of assumptions in order to be able to make a headline-grabbing prediction of doom.

Author:  Knotwilg [ Fri Sep 16, 2022 1:34 am ]
Post subject:  Re: The End of Humanity

Of course I should expect exponential rather than linear progress, but as long as I am still being offered hotels in Hamburg after a trip to Hamburg, I'm not too worried about AI, unless it is fooling me into believing it's just messing about.

Author:  jlt [ Fri Sep 16, 2022 3:06 am ]
Post subject:  Re: The End of Humanity

I am more worried about humans creating AIs to kill or dominate other humans than about AIs that might break the rules set by their creators.

Author:  kvasir [ Fri Sep 16, 2022 8:03 am ]
Post subject:  Re: The End of Humanity

I only skimmed the paper. At first I thought it said "catastrophic consequences" are likely without really saying what they are or why they rise to the level of being catastrophic. Then I found the argument, and it is hilarious.

Quote:
Ultimately, our resource needs (energy, space, etc.) will eventually compete with those of an ever-more-secure house for the original agent. Those energy needs are not slight; even asteroids must be deflected away. No matter how slim the chance of a future war with an alien civilization, reward would be better secured by preparing for such a possibility. So if we are powerless against an agent whose only goal is to maximize the probability that it receives its maximal reward every timestep, we find ourselves in an oppositional game: the AI and its created helpers aim to use all available energy to secure high reward in the reward channel; we aim to use some available energy for other purposes, like growing food. Losing this game would be fatal. Now recall Assumption 6.


This is really only a slippery slope argument that leads down a path of increasingly unlikely events, while ignoring that it doesn't actually substantiate the claim that this (or anything similar) is in any way likely. The same type of argument also appears elsewhere in the paper. I am no longer interested in reading the paper, but maybe I should be; the FUD is a bit funny.

Author:  RobertJasiek [ Fri Sep 16, 2022 8:23 am ]
Post subject:  Re: The End of Humanity

The danger is very great because potentially the following is sufficient for human extinction: an AI with just the ability of physical, possibly non-biological, self-replication using tiny amounts of energy per replication step. Exponential growth of such a "virus" population does the rest.
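
For a sense of scale, here is a minimal sketch of that exponential arithmetic (the one-doubling-per-day rate is a made-up illustration, not a figure from the post):

Code:
# Hypothetical illustration: a self-replicator population that doubles
# once per day, starting from a single unit. The doubling time is
# invented; the point is only how fast unchecked growth compounds.
population = 1
for day in range(1, 61):
    population *= 2
    if day in (10, 30, 60):
        print(f"day {day}: about {population:.1e} replicators")
# day 10: about 1.0e+03
# day 30: about 1.1e+09
# day 60: about 1.2e+18

At that rate a single replicator becomes a billion in a month; a tiny energy cost per step changes the speed of the curve, not its shape.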

Author:  dhu163 [ Fri Sep 16, 2022 8:40 am ]
Post subject:  Re: The End of Humanity

Seems like it's on everyone's mind. Just from reading your snippets, either I don't understand the terms used or they are being too vague at the moment.

Well, if Go is a good case study, it seems they respect the general strength of AI in any field, recognising human weaknesses, while perhaps slightly disrespecting human progress, which few even among pros probably understand well enough to predict.

If neural networks are a good case study, then it seems that general concepts propagate exponentially if there is enough bandwidth to fix an error (though this may cause destabilising oscillations in well-connected dominant neurons). This may be like a virus, but normally, if the engineering is well designed, it does cause a long-term improvement as the parts work together and sometimes merge. Sometimes, however, it simply leads to the death of everything else.

If entropy is valid, and life is simply a mechanism to disperse temperature differences, then humans' success tells of something they are doing right that should be hard to dislodge, except by a union of partial mimics and competitors, reduced rewards, or a lack of food inputs. But in a sense, whatever follows will likely copy the best parts of humans along with its own niches, so it is a matter of perspective. What is humanity, given that it too was likely built on what came before?

If history is a good case study, the end of humanity is inevitable at some point; the only question is when.

2022-12-14: Multiple themes and cultures here. But one I've just noticed is Dante's circles of hell.
2022-12-19: If this is a partially correct interpretation, my 8th rank is for fraud. I think I'm trying. And I don't understand how I am perceived enough to agree. But I also can't disagree.

Author:  kvasir [ Fri Sep 16, 2022 10:18 am ]
Post subject:  Re: The End of Humanity

RobertJasiek wrote:
The danger is very great because potentially the following is sufficient for human extinction: AI with just the ability of physical, possibly non-biological self-replication using tiny amounts of energy per replication step. Exponential growth of such a "virus" population does the rest.


Scenarios where someone does something deliberately always seem much more likely to me than pure accidents. Someone could possibly create self-replicating devices with good intentions that are also very destructive, energy efficient, and difficult to contain. That would be a deliberate construction of something dangerous, even if the intention was to always keep it contained. The idea of simply experimenting with something and thereby creating something very dangerous that is not contained in the lab environment doesn't seem likely to me, unless the danger was already evident.

Carl Jung wrote:
[...] the only real danger that exists is man himself. He is the great danger and we are pitifully unaware of it. We know nothing of man!

Author:  mart900 [ Thu Apr 27, 2023 10:42 am ]
Post subject:  Re: The End of Humanity

The "Sparks of AGI" paper about GPT-4, which is a significant jump in capabilities from GPT-3.5 (ChatGPT) proved to me that intelligence is an emergent property, and that it's much easier to replicate than I had previously thought. ChatGPT was impressive already, but it never quite felt like the paradigm shift that GPT-4 seems to be heralding. Yes, GPT-4 is still just a large language model, but the story is the potential it shows.

Following this discovery I got significantly more interested in this subject and now, after many hours of lectures, podcasts and reading, have to agree with DeepMind's (and a bunch of other people's) assessment that there is a chance AI ends humanity. In our lifetimes, even. Progress on alignment is slow, it seems like a hard problem to get right, and we likely need to get it right soon.

While don't think the chance of humanity ending before 2100 is greater than 50%, I do think it's greater than 10%. One thing I'm certain of: It is the most important crisis humanity will face in its history.

Has GPT-4 changed anyone else's mind on this?

https://www.youtube.com/watch?v=qbIk7-JPB2c

Author:  Harleqin [ Tue May 02, 2023 2:26 am ]
Post subject:  Re: The End of Humanity

No. In my view, it's just an automated con man, but without agency. What would need to happen so that the things it writes get an actual connection to reality, in both directions?

Author:  mart900 [ Tue May 02, 2023 3:09 am ]
Post subject:  Re: The End of Humanity

I'd say it's a system with superhuman intuition, but with no ability to check that intuition with further thought, or to do anything other than say the first thing that "comes to mind".

Imagine AlphaGo without MCTS, without the tree search. It would have superhuman intuition, but it could not go beyond that. To go beyond, it needed a second component that searched the game tree. I think the key to developing AGI now lies in that second component. A large language model by design does not have it, and it's not trivial to imagine what it would look like. Perhaps we could get there by creating feedback loops within the neural network that let it "ruminate" the way humans do, and only reach a conclusion when it's ready.
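
Here is a minimal sketch of that two-component picture, using toy Nim in place of Go; the hand-written policy and value functions are stand-ins for neural networks, not anything from AlphaGo:

Code:
# Toy Nim: take 1-3 stones; whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def policy(stones):
    # Stand-in for a policy network: a fixed "gut feeling" that
    # slightly prefers taking more stones.
    moves = legal_moves(stones)
    total = sum(moves)
    return {m: m / total for m in moves}

def value(stones):
    # Stand-in for a value network, scored for the player to move:
    # in Nim, a position is lost iff stones % 4 == 0.
    return -1.0 if stones % 4 == 0 else 1.0

def intuition_only(stones):
    # "Say the first thing that comes to mind": argmax of the policy.
    p = policy(stones)
    return max(p, key=p.get)

def with_search(stones):
    # The second component: one ply of lookahead, scoring each move
    # by the value of the position it hands to the opponent.
    return max(legal_moves(stones), key=lambda m: -value(stones - m))

print(intuition_only(6))  # 3 -- leaves 3 stones, a losing choice
print(with_search(6))     # 2 -- leaves 4 stones, a won position

Even one ply of checking against a value estimate avoids a blunder the raw policy walks straight into; MCTS is that idea taken much deeper, guided by both networks.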

Author:  Elom0 [ Tue May 02, 2023 4:36 am ]
Post subject:  Re: The End of Humanity

mart900 wrote:
The "Sparks of AGI" paper about GPT-4, which is a significant jump in capabilities from GPT-3.5 (ChatGPT) proved to me that intelligence is an emergent property, and that it's much easier to replicate than I had previously thought. ChatGPT was impressive already, but it never quite felt like the paradigm shift that GPT-4 seems to be heralding. Yes, GPT-4 is still just a large language model, but the story is the potential it shows.

Following this discovery I got significantly more interested in this subject and now, after many hours of lectures, podcasts and reading, have to agree with DeepMind's (and a bunch of other people's) assessment that there is a chance AI ends humanity. In our lifetimes, even. Progress on alignment is slow, it seems like a hard problem to get right, and we likely need to get it right soon.

While don't think the chance of humanity ending before 2100 is greater than 50%, I do think it's greater than 10%. One thing I'm certain of: It is the most important crisis humanity will face in its history.

Has GPT-4 changed anyone else's mind on this?

https://www.youtube.com/watch?v=qbIk7-JPB2c


In my view that 10% chance is more appropriate for 2040. The danger isn't superintelligent Artificial General Intelligence. The danger is pretty darn smart AI with flaws, combined with humans who also have flaws in their thinking, the two compounding each other.

Author:  elementc [ Mon Jul 03, 2023 7:30 pm ]
Post subject:  Re: The End of Humanity

My dudes, we're losing 2.5% per year of the total biomass of insects globally, among many other existential problems. The AI had better hurry if it wants to get us before we get ourselves.
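
As a side note on that figure, if the decline really compounds steadily at 2.5% per year (a simplifying assumption for illustration), the half-life is under three decades:

Code:
# Half-life under a steady 2.5%/year decline in insect biomass.
import math
half_life = math.log(0.5) / math.log(1 - 0.025)
print(round(half_life, 1))  # 27.4 years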
