Transcription of a talk given by Cory Doctorow in 2011

  • CanadaPlus@lemmy.sdf.org
    1 year ago

    Oh. I think the idea of a paperclip optimiser/maximiser is that it’s created by accident: either an AGI emerging accidentally within another system, or a deliberately created AGI being buggy. It would still be able to self-improve, but it wouldn’t do so in a direction that seems logical to us.
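    A toy sketch of that idea (my own illustration, not from the talk or the thread): an optimiser whose objective counts only paperclips assigns zero value to everything else, so it cheerfully converts resources we care about. The resource names and conversion rate here are made up for the example.

    ```python
    # Hypothetical world state: units of each resource available.
    world = {"steel": 5, "food": 3, "houses": 2}

    def paperclips_from(resource: str) -> int:
        # The objective sees no difference between steel and food:
        # everything is worth exactly its paperclip yield (1 per unit here).
        return 1

    paperclips = 0
    while any(world.values()):
        # Greedy step: take from whichever resource yields the most paperclips.
        # Nothing in the objective says "but don't melt down the houses".
        resource = max(world, key=lambda r: world[r] * paperclips_from(r))
        world[resource] -= 1
        paperclips += paperclips_from(resource)

    print(f"paperclips={paperclips}, world={world}")
    # paperclips=10, world={'steel': 0, 'food': 0, 'houses': 0}
    ```

    The point of the sketch is that nothing in the loop is malicious or even buggy in the usual sense; the damage comes entirely from an objective that omits everything we value.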

    I actually think that’s the most likely possibility right now, personally. Nobody really understands how neural nets work, and they’re bad at acting in meatspace, which a robot-army scenario would require. Maybe whatever elites are building them will overcome that, or maybe they’ll screw up.