[V 1.1] Supporting the development of friendly superintelligent AI should be humanity's highest priority

1. OBJECTIONS

1.1. SOLVABILITY Ways to make progress in the development of friendly superintelligent AI are highly speculative.

1.1.1. BECAUSE

1.1.1.1. Because the largest potential risks are probably still at least a couple of decades away, a substantial risk of working in this area is that, regardless of what we do today, the most important work will be done by others once the risks become more imminent and better understood, making early efforts to prepare for the problem redundant or comparatively inefficient.

1.1.1.2. Supporting the development of friendly AI might lead to unwarranted or premature regulation, which could inadvertently increase the risk of catastrophic AI.

1.1.1.2.1. BECAUSE

1.1.1.3. UNCERTAIN RFMF It is unclear how much room for more funding (RFMF) the field of AI safety research has. It is possible that there is very little that could effectively be done with additional funding at this point, due to other bottlenecks.

1.1.2. REBUTTALS

1.1.2.1. INCREASING AWARENESS

2. BECAUSE

2.1. URGENCY Humanity needs to start directing its efforts towards developing friendly superintelligent AI immediately.

2.1.1. BECAUSE

2.1.1.1. RACE FOR INTELLIGENCE

2.1.1.2. ON THE HORIZON The advent of superintelligence might be within the lifetime of people alive today.

2.1.1.2.1. OBJECTIONS

2.1.1.2.2. BECAUSE

2.1.1.3. INEVITABILITY Barring any other global catastrophes occurring, we are almost certain to develop AGI eventually.

2.1.1.3.1. OBJECTIONS

2.1.1.3.2. BECAUSE

2.2. NEGLECTEDNESS The risks involved with advanced AI are severely neglected.

2.2.1. BECAUSE

2.2.1.1. FUNDING IS SCARCE

2.2.1.2. PUBLIC INTEREST IS LOW

2.2.1.2.1. BECAUSE

2.3. TECHNOVOLATILE FUTURE Advanced AI will be either the best or the worst thing to ever happen to humanity.

2.3.1. BECAUSE

2.3.1.1. EXISTENTIAL RISK Catastrophic superintelligence is the most serious existential risk to humanity.

2.3.1.1.1. BECAUSE

2.3.1.2. ONLY THE EXTREMES It is unlikely that the advent of superintelligence will be anything but the most disruptive event in Earth's history, whether positive or negative.

2.3.1.2.1. BECAUSE

2.3.1.3. OUR FINAL INVENTION [Copremise A] The first superintelligent machine is the last invention that humanity ever needs to make.

2.3.1.3.1. ALL-POWERFUL FRIEND [Copremise B] A friendly superintelligent agent would solve most of humanity's problems, including poverty, war, and global warming.