the singularity will not happen
The core assumptions that the concept of the [[singularity]] or "superintelligence" rests upon are the following:
- Technological progress is always advancing
- There is a tendency to automate labor over time
- [[Artificial intelligence]] will reach a point where it is intelligent enough to perform not just one task better than humans, but every task
There are a number of problems with this:
- [[Automation]] is about reducing [[necessary labor-time]]. In that sense the argument holds, but it falls apart when we look out at the world: as of this writing, no firms are actively trying to create a general artificial intelligence
- Intelligence is situational. It comes from one's environment and the problems that arise in that environment
- As the article linked to below says, you could not simply put a human brain in an octopus's body and assume it'll be able to survive in its environment. Much of what makes a human human is hard-coded (but not everything!)
- There is no such thing as "general" intelligence
- This puts far too much faith in software developers
Links