The purpose of the profile isn't to argue that the risk exists. We largely defer to the people we take to be experts on the issue, especially Nick Bostrom, and we think he presents compelling arguments in Superintelligence. It's hard to say anything decisive in this area, but if you think there's even modest uncertainty about whether AGI will be good or bad, the risks are worth researching further.
If you haven't read Bostrom's book yet, we'd really recommend it. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...