The End of Humanity

All non-Go discussions should go here.
Kirby
Honinbo
Posts: 9553
Joined: Wed Feb 24, 2010 6:04 pm
GD Posts: 0
KGS: Kirby
Tygem: 커비라고해
Has thanked: 1583 times
Been thanked: 1707 times

The End of Humanity

Post by Kirby »

From our friends at DeepMind and the University of Oxford:

https://futurism.com/the-byte/google-ox ... -humankind
Researchers at Google Deepmind and the University of Oxford have concluded that it's now "likely" that superintelligent AI will spell the end of humanity — a grim scenario that more and more researchers are starting to predict.
I wonder how many games I can get in before the endgame.
be immersed
Gomoto
Gosei
Posts: 1733
Joined: Sun Nov 06, 2016 6:56 am
GD Posts: 0
Location: Earth
Has thanked: 621 times
Been thanked: 310 times

Re: The End of Humanity

Post by Gomoto »

I don't know how many games I will get in, but I am sure I will enjoy every one of them :-)
jeromie
Lives in sente
Posts: 902
Joined: Fri Jan 31, 2014 7:12 pm
Rank: AGA 3k
GD Posts: 0
Universal go server handle: jeromie
Location: Fort Collins, CO
Has thanked: 319 times
Been thanked: 287 times

Re: The End of Humanity

Post by jeromie »

One of the key phrases in that article is, “under the conditions we have identified.” They make a lot of assumptions in order to be able to make a headline-grabbing prediction of doom.
Knotwilg
Oza
Posts: 2432
Joined: Fri Jan 14, 2011 6:53 am
Rank: KGS 2d OGS 1d Fox 4d
GD Posts: 0
KGS: Artevelde
OGS: Knotwilg
Online playing schedule: UTC 18:00 - 22:00
Location: Ghent, Belgium
Has thanked: 360 times
Been thanked: 1021 times

Re: The End of Humanity

Post by Knotwilg »

Of course I should not expect linear progress but rather exponential. Still, as long as I am being offered hotels in Hamburg after a trip to Hamburg, I'm not too worried about AI, unless they are fooling me into believing they're just messing about.
jlt
Gosei
Posts: 1786
Joined: Wed Dec 14, 2016 3:59 am
GD Posts: 0
Has thanked: 185 times
Been thanked: 495 times

Re: The End of Humanity

Post by jlt »

I am more worried about humans creating AIs to kill or dominate other humans, than about AIs which might break the rules set by their creators.
kvasir
Lives in sente
Posts: 1040
Joined: Sat Jul 28, 2012 12:29 am
Rank: panda 5 dan
GD Posts: 0
IGS: kvasir
Has thanked: 25 times
Been thanked: 187 times

Re: The End of Humanity

Post by kvasir »

I only skimmed the paper. At first I thought it said "catastrophic consequences" are likely without really saying what they are or why they rise to the level of being catastrophic. Then I found the argument, and it is hilarious.
Ultimately, our resource needs (energy, space, etc.) will eventually compete with those of an ever-more-secure house for the original agent. Those energy needs are not slight; even asteroids must be deflected away. No matter how slim the chance of a future war with an alien civilization, reward would be better secured by preparing for such a possibility. So if we are powerless against an agent whose only goal is to maximize the probability that it receives its maximal reward every timestep, we find ourselves in an oppositional game: the AI and its created helpers aim to use all available energy to secure high reward in the reward channel; we aim to use some available energy for other purposes, like growing food. Losing this game would be fatal. Now recall Assumption 6.
This is really just a slippery slope argument that leads down a path of increasingly unlikely events, while ignoring that it doesn't actually substantiate the claim that this (or anything similar) is in any way likely. The same type of argument appears to be used elsewhere in the paper. I am no longer interested in reading it, but maybe I should be; the FUD is a bit funny.
RobertJasiek
Judan
Posts: 6273
Joined: Tue Apr 27, 2010 8:54 pm
GD Posts: 0
Been thanked: 797 times

Re: The End of Humanity

Post by RobertJasiek »

The danger is very great because potentially the following is sufficient for human extinction: AI with just the ability of physical, possibly non-biological self-replication using tiny amounts of energy per replication step. Exponential growth of such a "virus" population does the rest.
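The exponential-growth worry above can be made concrete with a back-of-the-envelope calculation. All numbers below are hypothetical assumptions for illustration (a 1 g replicator, a one-day doubling time, Earth's biomass taken as roughly 550 gigatonnes of carbon), not claims from the post:

```python
import math

# Hypothetical starting point: a 1 g self-replicating machine that
# doubles once per day (both figures are assumptions).
replicator_mass_g = 1.0
doubling_time_days = 1.0

# Rough total biomass of Earth: ~550 Gt of carbon, i.e. about 5.5e17 g.
earth_biomass_g = 5.5e17

# Doublings until the replicator population outweighs the biosphere.
doublings = math.ceil(math.log2(earth_biomass_g / replicator_mass_g))

print(doublings)                       # 59 doublings
print(doublings * doubling_time_days)  # ~59 days at one doubling per day
```

The point of the sketch is only that exponential growth needs very few doublings: even against a biosphere seventeen orders of magnitude heavier, fewer than sixty suffice.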
dhu163
Lives in gote
Posts: 474
Joined: Tue Jan 05, 2016 6:36 am
Rank: UK 2d Dec15
GD Posts: 0
KGS: mathmo 4d
IGS: mathmo 4d
Has thanked: 62 times
Been thanked: 278 times

Re: The End of Humanity

Post by dhu163 »

Seems like it's on everyone's mind. Just from reading your snippets, either I don't understand the terms used or they are being too vague at the moment.

Well, if Go is a good case study, it seems they respect the general strength of AI in any field and recognise human weaknesses, while perhaps slightly underrating human progress, which few even among pros probably understand well enough to predict.

If neural networks are a good case study, then it seems that general concepts propagate exponentially if there is enough bandwidth to fix an error (though this may cause destabilising oscillations in well-connected dominant neurons). This may be like a virus, but normally, if the engineering is well designed, it causes a long-term improvement as the parts work together and sometimes merge. Sometimes, however, it simply leads to the death of everything else.

If entropy is valid, and life is simply a mechanism to disperse temperature differences, then humans' success tells of something they are doing right that should be hard to dislodge, except by a union of partial mimics and competitors, reduced rewards, or a lack of food inputs. But in a sense, whatever follows will likely copy the best parts of humans along with its own niches, so it is a matter of perspective. What is humanity, given that it too was likely built on what was here before?

If history is a good case study, the end of humanity is inevitable at some point; the only question is when.

20221214
Multiple themes and cultures here. But one I've just noticed is Dante's circles of hell.
20221219
If this is a partially correct interpretation, my 8th rank is for fraud. I think I'm trying. And I don't understand how I am perceived enough to agree. But I also can't disagree.
Last edited by dhu163 on Sun Dec 18, 2022 5:17 pm, edited 3 times in total.
kvasir
Lives in sente
Posts: 1040
Joined: Sat Jul 28, 2012 12:29 am
Rank: panda 5 dan
GD Posts: 0
IGS: kvasir
Has thanked: 25 times
Been thanked: 187 times

Re: The End of Humanity

Post by kvasir »

RobertJasiek wrote:The danger is very great because potentially the following is sufficient for human extinction: AI with just the ability of physical, possibly non-biological self-replication using tiny amounts of energy per replication step. Exponential growth of such a "virus" population does the rest.
Scenarios where someone does something deliberately always seem much more likely to me than pure accidents. Someone could possibly create self-replicating devices with good intentions that are also very destructive, energy-efficient, and difficult to contain. That would be a deliberate construction of something dangerous, even if the intention was always to keep it contained. The idea of simply experimenting with something and thereby creating something very dangerous that is not contained in the lab environment doesn't seem likely to me, unless the danger was evident anyway.
Carl Jung wrote: [...] the only real danger that exists is man himself. He is the great danger and we are pitifully unaware of it. We know nothing of man!
mart900
Dies in gote
Posts: 24
Joined: Wed Jun 29, 2022 1:12 pm
Rank: OGS 3k
GD Posts: 0
Been thanked: 4 times

Re: The End of Humanity

Post by mart900 »

The "Sparks of AGI" paper about GPT-4, which is a significant jump in capabilities from GPT-3.5 (ChatGPT) proved to me that intelligence is an emergent property, and that it's much easier to replicate than I had previously thought. ChatGPT was impressive already, but it never quite felt like the paradigm shift that GPT-4 seems to be heralding. Yes, GPT-4 is still just a large language model, but the story is the potential it shows.

Following this discovery I got significantly more interested in this subject and now, after many hours of lectures, podcasts and reading, have to agree with DeepMind's (and a bunch of other people's) assessment that there is a chance AI ends humanity. In our lifetimes, even. Progress on alignment is slow, it seems like a hard problem to get right, and we likely need to get it right soon.

While I don't think the chance of humanity ending before 2100 is greater than 50%, I do think it's greater than 10%. One thing I'm certain of: it is the most important crisis humanity will face in its history.

Has GPT-4 changed anyone else's mind on this?

https://www.youtube.com/watch?v=qbIk7-JPB2c
Harleqin
Lives in sente
Posts: 921
Joined: Sat Mar 06, 2010 10:31 am
Rank: German 2 dan
GD Posts: 0
Has thanked: 401 times
Been thanked: 164 times

Re: The End of Humanity

Post by Harleqin »

No. In my view, it's just an automated con man, but without agency. What would need to happen so that the things it writes get an actual connection to reality, in both directions?
A good system naturally covers all corner cases without further effort.
mart900
Dies in gote
Posts: 24
Joined: Wed Jun 29, 2022 1:12 pm
Rank: OGS 3k
GD Posts: 0
Been thanked: 4 times

Re: The End of Humanity

Post by mart900 »

I'd say it's a system with superhuman intuition, but no ability to check its intuition with further thought, or do anything other than say the first thing that "comes to mind".

Imagine AlphaGo without MCTS, without the tree search. It would have superhuman intuition but could not go beyond it. To go beyond, it needed a second component that searched the game tree. I think the key to developing AGI now lies in that second component. A large language model by design does not have one, and it's not trivial to imagine what it would look like. Perhaps we could get there by creating feedback loops within the neural network that let it "ruminate" as humans do, only reaching a conclusion when it's ready.
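The intuition-versus-search contrast above can be sketched in a toy example. Everything here is made up for illustration: the moves, the policy prior, and the value numbers are not from any real engine. "Intuition only" picks the move with the highest prior; adding even one step of lookahead lets an evaluation overturn the prior:

```python
# The "policy network": a prior over candidate moves (the intuition).
policy_prior = {"A": 0.6, "B": 0.3, "C": 0.1}

# The "value network": how good the position turns out to be after each
# move. Only a search step ever consults this.
value_after_move = {"A": 0.40, "B": 0.85, "C": 0.20}

def intuition_only(prior):
    """Say the first thing that comes to mind: the highest-prior move."""
    return max(prior, key=prior.get)

def intuition_plus_search(prior, value):
    """One step of lookahead: re-rank moves by prior times evaluated value."""
    return max(prior, key=lambda m: prior[m] * value[m])

print(intuition_only(policy_prior))                           # "A"
print(intuition_plus_search(policy_prior, value_after_move))  # "B"
```

Here the lookahead prefers B because 0.3 × 0.85 beats 0.6 × 0.40: the search component corrects the intuition, which is exactly what a pure feed-forward pass cannot do.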
Elom0
Lives in sente
Posts: 732
Joined: Sun Feb 20, 2022 9:03 pm
Rank: BGA 3 kyu
GD Posts: 0
KGS: Elom, Windnwater
OGS: Elom, Elom0
Online playing schedule: The OGS data looks pretty so I'll pause for now before I change it.
Has thanked: 1028 times
Been thanked: 32 times

Re: The End of Humanity

Post by Elom0 »

mart900 wrote:The "Sparks of AGI" paper about GPT-4, which is a significant jump in capabilities from GPT-3.5 (ChatGPT) proved to me that intelligence is an emergent property, and that it's much easier to replicate than I had previously thought. ChatGPT was impressive already, but it never quite felt like the paradigm shift that GPT-4 seems to be heralding. Yes, GPT-4 is still just a large language model, but the story is the potential it shows.

Following this discovery I got significantly more interested in this subject and now, after many hours of lectures, podcasts and reading, have to agree with DeepMind's (and a bunch of other people's) assessment that there is a chance AI ends humanity. In our lifetimes, even. Progress on alignment is slow, it seems like a hard problem to get right, and we likely need to get it right soon.

While I don't think the chance of humanity ending before 2100 is greater than 50%, I do think it's greater than 10%. One thing I'm certain of: it is the most important crisis humanity will face in its history.

Has GPT-4 changed anyone else's mind on this?

https://www.youtube.com/watch?v=qbIk7-JPB2c
In my view, that 10% chance is more appropriate for 2040. The danger isn't superintelligent Artificial General Intelligence. The danger is pretty darn smart AI with flaws, combined with humans who also have flaws in their thinking, the two compounding upon each other.
elementc
Beginner
Posts: 7
Joined: Mon Jul 23, 2012 6:14 am
GD Posts: 0
KGS: elementc
Has thanked: 1 time
Been thanked: 4 times

Re: The End of Humanity

Post by elementc »

My dudes, we're losing 2.5% per year of total biomass of insects globally, among many other existential problems. The AI had better hurry if it wants to get us first before we get ourselves.
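Taking the post's 2.5%/year figure at face value, compounding makes the timescale easy to work out (the half-life formula below is simple arithmetic, not a claim from the post):

```python
import math

# Fraction of insect biomass surviving each year at a 2.5% annual loss.
annual_loss = 0.025
remaining = 1.0 - annual_loss

# Years until half of the original biomass is gone:
# remaining ** t = 0.5  =>  t = ln(0.5) / ln(remaining)
half_life_years = math.log(0.5) / math.log(remaining)

print(round(half_life_years, 1))  # ~27.4 years
```

So a steady 2.5% annual decline halves the total in under three decades, which is the force of the "we get ourselves first" remark.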