The other day, I went to a meeting of London Futurists with the title *When linearity met exponential – a summer at Singularity University*. As expected, the conversation focused on various technological aspects of our future. Towards the end I raised my hand to introduce a different, techno-communitarian perspective. More people expressed their resonance with my point of view than I thought there would be.
After the end of the formal meeting, we continued the conversation in a nearby pub, and decided to meet again online. Using the message board of the London Futurists’ Meet-up site, I opened the Techno-progressive Pub.
Somebody wrote there:
> I’ve also read a bit about Principia Cybernetica and the Global Brain Project
It’s good to know. I presented a paper at the first Global Brain workshop in 2001 on *Designing for the Emergence of a Global-scale Collective Intelligence: Invitation to a Research Collaboration*.
As you will see, I’m a bit more concerned with liberating the Collective Intelligence (CI) of human communities and institutions than with AI alone, or even Artificial General Intelligence (AGI) or friendly AI. That’s because I believe that some AI breakthroughs will be used for evil, oppressive purposes (in fact, they already are), and the best antidote or insurance policy is augmenting the CI of the ecosystem of social innovation initiatives.
> The cybernetic-techno-progressive-global-brain antithesis seems to be that humanity and technology will naturally co-evolve or self-organise to ever greater integration and harmony, unless some negative force counteracts that. Can these two worldviews be combined into one synthesis?
I define CI as “the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as variation-feedback-selection, differentiation-integration-transformation, and competition-cooperation-coopetition.” However, evolution is not linear; huge detours do happen and humankind can become an aborted experience if we don’t pay attention.
Whether the singularity will be friendly or unfriendly is less a question of clashing worldviews than of (differently educated) guessing. For me, the issue of worldviews comes up around the choice of where we should invest the most attention/energy/resources. One worldview bets on AI and AGI; the other bets on positively addressing the Einsteinian challenge: “No problem can be solved from the same level of consciousness that created it.” Raising our CI, at every scale, is not the same as augmenting our level of consciousness, but it is a critical condition for it.
The two worldviews can be combined into one synthesis, but not by a cognitive mash-up; rather, in a transcend-and-include way. A CI-focused path would both include AI and AGI and transcend them, by putting them in service of empowering human communities and institutions to make wiser decisions.
If Effective Altruism is prioritizing projects focused on humanity’s long-term future, I’m wondering whether there is anybody in that movement who would consider supporting the augmentation of humankind’s intelligence that Doug Engelbart so eloquently spoke of, 50 years ago.
Thank you for your reflections and question that triggered mine. Can this be the beginning of a small-scale experiment in our collective intelligence? 🙂