Heard in the Techno-progressive Pub…

The other day, I went to a meeting of London Futurists titled When linearity met exponential – a summer at Singularity University. As expected, the conversation was focused on various technological aspects of our future. Towards the end I raised my hand to introduce a different, techno-communitarian perspective. More people expressed their resonance with my point of view than I thought there would be.

After the end of the formal meeting, we continued the conversation in a nearby pub, and decided to meet again online. Using the message board of the London Futurists’ Meet-up site, I opened the Techno-progressive Pub.

Somebody wrote there:

> I’ve also read a bit about Principia Cybernetica and the Global Brain Project

I replied:

It’s good to know. I presented a paper at the first Global Brain workshop in 2001, “Designing for the Emergence of a Global-scale Collective Intelligence: Invitation to a Research Collaboration.”

As you will see, I’m a bit more concerned with liberating the Collective Intelligence (CI) of human communities and institutions than with AI alone, or even Artificial General Intelligence (AGI) or friendly AI. That’s because I believe that some of the AI breakthroughs will be used for evil, oppressive purposes (in fact, they already are), and the best antidote or insurance policy is augmenting the CI of the ecosystem of social innovation initiatives.

> The cybernetic-techno-progressive-global-brain antithesis seems to be that humanity and technology will naturally co-evolve or self-organise to ever greater integration and harmony, unless some negative force counteracts that. Can these two worldviews be combined into one synthesis?

I define CI as “the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as variation-feedback-selection, differentiation-integration-transformation, and competition-cooperation-coopetition.” However, evolution is not linear; huge detours do happen, and humankind can become an aborted experiment if we don’t pay attention.

Whether the singularity will be friendly or unfriendly is less a question of clashing worldviews than of (differently educated) guessing. For me, the issue of worldviews comes up around the choice of where we should invest the most attention/energy/resources. One path is betting on AI and AGI; the other is positively addressing the Einsteinian challenge: “No problem can be solved from the same level of consciousness that created it.” Raising our CI, at every scale, is not the same as, but a critical condition for, augmenting our level of consciousness.

The two worldviews can be combined into one synthesis, but not by a cognitive mash-up; rather, in a transcend-and-include way. A CI-focused path would both include AI and AGI and transcend them, by putting them in service of empowering human communities and institutions to make wiser decisions.

If Effective Altruism is prioritizing projects focused on humanity’s long-term future, I’m wondering whether there is anybody in that movement who would consider supporting the augmentation of humankind’s intelligence that Doug Engelbart spoke of so eloquently, 50 years ago.

Thank you for your reflections and for the question that triggered mine. Can this be the beginning of a small-scale experiment in our collective intelligence? 🙂
