All times are UTC - 8 hours [ DST ]




 Post subject: The End of Humanity
Post #1 Posted: Thu Sep 15, 2022 7:30 pm 
Honinbo

Posts: 9545
Liked others: 1600
Was liked: 1711
KGS: Kirby
Tygem: 커비라고해
From our friends at DeepMind and the University of Oxford:

https://futurism.com/the-byte/google-ox ... -humankind

Quote:
Researchers at Google Deepmind and the University of Oxford have concluded that it's now "likely" that superintelligent AI will spell the end of humanity — a grim scenario that more and more researchers are starting to predict.


I wonder how many games I can get in before the endgame.

_________________
be immersed


This post by Kirby was liked by 2 people: Elom0, Gomoto
 
 Post subject: Re: The End of Humanity
Post #2 Posted: Thu Sep 15, 2022 8:24 pm 
Gosei

Posts: 1733
Location: Earth
Liked others: 621
Was liked: 310
I don't know how many games I will get in, but I am sure I will enjoy every one of them :-)

 
 Post subject: Re: The End of Humanity
Post #3 Posted: Thu Sep 15, 2022 8:29 pm 
Lives in sente

Posts: 902
Location: Fort Collins, CO
Liked others: 319
Was liked: 287
Rank: AGA 3k
Universal go server handle: jeromie
One of the key phrases in that article is, “under the conditions we have identified.” They make a lot of assumptions in order to be able to make a headline-grabbing prediction of doom.

 
 Post subject: Re: The End of Humanity
Post #4 Posted: Fri Sep 16, 2022 1:34 am 
Oza

Posts: 2408
Location: Ghent, Belgium
Liked others: 359
Was liked: 1019
Rank: KGS 2d OGS 1d Fox 4d
KGS: Artevelde
OGS: Knotwilg
Online playing schedule: UTC 18:00 - 22:00
Of course I should expect exponential progress rather than linear, but as long as I keep being offered hotels in Hamburg after I have already made the trip to Hamburg, I'm not too worried about AI, unless they are fooling me into believing they're just messing about.


This post by Knotwilg was liked by: Gomoto
 
 Post subject: Re: The End of Humanity
Post #5 Posted: Fri Sep 16, 2022 3:06 am 
Gosei

Posts: 1753
Liked others: 177
Was liked: 491
I am more worried about humans creating AIs to kill or dominate other humans, than about AIs which might break the rules set by their creators.

 
 Post subject: Re: The End of Humanity
Post #6 Posted: Fri Sep 16, 2022 8:03 am 
Lives in sente

Posts: 905
Liked others: 22
Was liked: 168
Rank: panda 5 dan
IGS: kvasir
I only skimmed the paper. At first I thought it said "catastrophic consequences" are likely without really saying what they are or why they rise to the level of being catastrophic. Then I found the argument, and it is hilarious.

Quote:
Ultimately, our resource needs (energy, space, etc.) will eventually compete with those of an ever-more-secure house for the original agent. Those energy needs are not slight; even asteroids must be deflected away. No matter how slim the chance of a future war with an alien civilization, reward would be better secured by preparing for such a possibility. So if we are powerless against an agent whose only goal is to maximize the probability that it receives its maximal reward every timestep, we find ourselves in an oppositional game: the AI and its created helpers aim to use all available energy to secure high reward in the reward channel; we aim to use some available energy for other purposes, like growing food. Losing this game would be fatal. Now recall Assumption 6.


This is really only a slippery-slope argument that leads down a path of increasingly unlikely events, while ignoring that it doesn't actually substantiate the claim that this (or anything similar) is in any way likely. This type of argument also appears elsewhere in the paper. I am no longer interested in reading the paper, though maybe I should be; the FUD is a bit funny.

 
 Post subject: Re: The End of Humanity
Post #7 Posted: Fri Sep 16, 2022 8:23 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
The danger is very great because potentially the following is sufficient for human extinction: AI with just the ability of physical, possibly non-biological self-replication using tiny amounts of energy per replication step. Exponential growth of such a "virus" population does the rest.
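
To make the arithmetic concrete, here is a toy back-of-the-envelope sketch in Python; the one-copy-per-step doubling rate is an arbitrary assumption, chosen purely for illustration.

Code:
# Toy illustration of exponential growth of a self-replicating device.
# ASSUMPTION: each device makes one copy of itself per step; the real
# rate is unknown, this only shows how quickly doubling runs away.
population = 1
for step in range(1, 41):
    population *= 2
    if step % 10 == 0:
        print(step, population)
# 10 -> 1024, 20 -> ~1.0e6, 30 -> ~1.1e9, 40 -> ~1.1e12

After only forty doubling steps a single device would already outnumber the human population by roughly two orders of magnitude.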

 
 Post subject: Re: The End of Humanity
Post #8 Posted: Fri Sep 16, 2022 8:40 am 
Lives in gote

Posts: 470
Liked others: 62
Was liked: 278
Rank: UK 2d Dec15
KGS: mathmo 4d
IGS: mathmo 4d
Seems like it's on everyone's mind. Just from reading your snippets, either I don't understand the terms used or they are being too vague at the moment.

Well, if Go is a good case study, it seems people respect the general strength of AI in any field and recognise human weaknesses, while perhaps slightly underrating human progress, which few even among pros probably understand well enough to predict.

If neural networks are a good case study, then it seems that general concepts and the like propagate exponentially if there is enough bandwidth for them to fix an error (though this may cause destabilising oscillations in well-connected dominant neurons). This may be like a virus, but normally, if the engineering is well designed, it causes a long-term improvement as the parts work together and sometimes merge. Sometimes, however, it simply leads to the death of everything else.

If entropy is valid, and life is simply a mechanism for dispersing temperature differences, then humans' success tells of something they are doing right that should be hard to dislodge, except by a union of partial mimics and competitors, diminishing rewards, or a lack of food inputs. But in a sense, whatever follows will likely copy the best parts of humans along with its own niches, so it is a matter of perspective. What is humanity, given that it also likely built on what was here before?

If history is a good case study, the end of humanity is inevitable at some point; the only question is when.

20221214
Multiple themes and cultures here. But one I've just noticed is Dante's circles of hell.
20221219
If this is a partially correct interpretation, my 8th rank is for fraud. I think I'm trying. And I don't understand how I am perceived enough to agree. But I also can't disagree.


Last edited by dhu163 on Sun Dec 18, 2022 5:17 pm, edited 3 times in total.

This post by dhu163 was liked by: Elom0
 
 Post subject: Re: The End of Humanity
Post #9 Posted: Fri Sep 16, 2022 10:18 am 
Lives in sente

Posts: 905
Liked others: 22
Was liked: 168
Rank: panda 5 dan
IGS: kvasir
RobertJasiek wrote:
The danger is very great because potentially the following is sufficient for human extinction: AI with just the ability of physical, possibly non-biological self-replication using tiny amounts of energy per replication step. Exponential growth of such a "virus" population does the rest.


Scenarios where someone does something deliberately always seem much more likely to me than pure accidents. Someone could possibly create, with good intentions, self-replicating devices that are also very destructive, energy-efficient and difficult to contain. That would be a deliberate construction of something dangerous, even if the intention was always to keep it contained. The idea of simply experimenting with something and thereby creating something very dangerous that is not contained in the lab environment doesn't seem likely to me, unless the danger was evident anyway.

Carl Jung wrote:
[...]the only real danger that exists is man himself. He is the great danger and we are pitifully unaware of it. We know nothing of man!

 
 Post subject: Re: The End of Humanity
Post #10 Posted: Thu Apr 27, 2023 10:42 am 
Dies in gote

Posts: 22
Liked others: 0
Was liked: 4
Rank: OGS 3k
The "Sparks of AGI" paper about GPT-4, which is a significant jump in capabilities from GPT-3.5 (ChatGPT) proved to me that intelligence is an emergent property, and that it's much easier to replicate than I had previously thought. ChatGPT was impressive already, but it never quite felt like the paradigm shift that GPT-4 seems to be heralding. Yes, GPT-4 is still just a large language model, but the story is the potential it shows.

Following this discovery I got significantly more interested in this subject and now, after many hours of lectures, podcasts and reading, have to agree with DeepMind's (and a bunch of other people's) assessment that there is a chance AI ends humanity. In our lifetimes, even. Progress on alignment is slow, it seems like a hard problem to get right, and we likely need to get it right soon.

While I don't think the chance of humanity ending before 2100 is greater than 50%, I do think it's greater than 10%. One thing I'm certain of: it is the most important crisis humanity will face in its history.

Has GPT-4 changed anyone else's mind on this?

https://www.youtube.com/watch?v=qbIk7-JPB2c

 
 Post subject: Re: The End of Humanity
Post #11 Posted: Tue May 02, 2023 2:26 am 
Lives in sente

Posts: 914
Liked others: 391
Was liked: 162
Rank: German 2 dan
No. In my view, it's just an automated con man, but without agency. What would need to happen so that the things it writes get an actual connection to reality, in both directions?

_________________
A good system naturally covers all corner cases without further effort.

 
 Post subject: Re: The End of Humanity
Post #12 Posted: Tue May 02, 2023 3:09 am 
Dies in gote

Posts: 22
Liked others: 0
Was liked: 4
Rank: OGS 3k
I'd say it's a system with superhuman intuition, but no ability to check its intuition with further thought, or do anything other than say the first thing that "comes to mind".

Imagine AlphaGo without MCTS, without the tree search. It would have superhuman intuition, but it could not go beyond that. To go beyond, it needed a second component that searched the game tree. I think the key to developing AGI now lies in that second component. A large language model by design does not have it, and it's not trivial to imagine what it would look like. Perhaps we could get there by creating feedback loops within the neural network that let it "ruminate" like humans do, and only reach a conclusion when it's ready.
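
For what it's worth, here is a minimal sketch in Python of what that second component looks like in AlphaGo-style systems: a loop that keeps consulting the network's intuition and revises its first impression using statistics from simulated continuations. All names here (legal_moves, play, policy_prior, value) are hypothetical stand-ins for a real game interface and a real network, and the search is deliberately truncated to depth one.

Code:
import math

def search(state, legal_moves, play, policy_prior, value, n_sims=800, c_puct=1.5):
    moves = legal_moves(state)              # candidate moves in this position
    prior = policy_prior(state)             # "intuition": dict of move -> probability
    visits = {m: 0 for m in moves}
    total = {m: 0.0 for m in moves}

    for _ in range(n_sims):
        n_all = sum(visits.values())

        # PUCT-style selection: balance the prior against results seen so far.
        def score(m):
            q = total[m] / visits[m] if visits[m] else 0.0
            u = c_puct * prior.get(m, 0.0) * math.sqrt(n_all + 1) / (1 + visits[m])
            return q + u

        move = max(moves, key=score)
        v = value(play(state, move))        # "intuition" again: evaluate the outcome
        visits[move] += 1
        total[move] += v

    # After "thinking", answer with the most-examined move,
    # not the first thing that came to mind.
    return max(moves, key=lambda m: visits[m])

The point is just that the loop around the network, not the network itself, is what turns a first impression into a considered answer.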


This post by mart900 was liked by: yoyoma
 
 Post subject: Re: The End of Humanity
Post #13 Posted: Tue May 02, 2023 4:36 am 
Lives in sente

Posts: 724
Liked others: 1023
Was liked: 30
Rank: BGA 3 kyu
KGS: Elom, Windnwater
OGS: Elom, Elom0
Online playing schedule: The OGS data looks pretty so I'll pause for now before I change it.
mart900 wrote:
The "Sparks of AGI" paper about GPT-4, which is a significant jump in capabilities from GPT-3.5 (ChatGPT) proved to me that intelligence is an emergent property, and that it's much easier to replicate than I had previously thought. ChatGPT was impressive already, but it never quite felt like the paradigm shift that GPT-4 seems to be heralding. Yes, GPT-4 is still just a large language model, but the story is the potential it shows.

Following this discovery I got significantly more interested in this subject and now, after many hours of lectures, podcasts and reading, have to agree with DeepMind's (and a bunch of other people's) assessment that there is a chance AI ends humanity. In our lifetimes, even. Progress on alignment is slow, it seems like a hard problem to get right, and we likely need to get it right soon.

While don't think the chance of humanity ending before 2100 is greater than 50%, I do think it's greater than 10%. One thing I'm certain of: It is the most important crisis humanity will face in its history.

Has GPT-4 changed anyone else's mind on this?

https://www.youtube.com/watch?v=qbIk7-JPB2c


In my view that 10% chance is more appropriate for 2040. The danger isn't superintelligent Artificial General Intelligence. The danger is pretty darn smart AI with flaws, combined with humans who also have flaws in their thinking, with the two compounding each other.

 
 Post subject: Re: The End of Humanity
Post #14 Posted: Mon Jul 03, 2023 7:30 pm 
Beginner

Posts: 7
Liked others: 1
Was liked: 4
KGS: elementc
My dudes, we're losing 2.5% per year of total biomass of insects globally, among many other existential problems. The AI had better hurry if it wants to get us first before we get ourselves.


This post by elementc was liked by: ArsenLapin