Evolution’s engine picking up steam

 


Evolution’s engine is picking up steam.

Those on the edge are entertained by a dazzling firework of opportunities.

To choose well, engaging the flow for the highest impact, the collective mind has to access collective wisdom, swiftly.

Practice, practice, practice!

Posted in Autonomy, Communion, and CI, Collaborative Sense-Making, Collective Wisdom, Evolutionary Movement | 1 Comment

2015 Collective Intelligence Conference


The annual interdisciplinary conference brings together researchers from academia, business, nonprofits, governments, and the world at large to share insights and ideas from a variety of fields relevant to understanding and designing collective intelligence in its many forms.

Location

Marriott Hotel, Santa Clara CA
2700 Mission College Blvd, Santa Clara, CA 95054
(408) 988-1500

The topic areas for collective intelligence include:

  • The evolution of collective intelligence
  • Crowdsourcing
  • Human and social computing
  • The emergence and intelligence of social movements
  • Collective response to environmental constraints
  • The spread and containment of rumors
  • Collective robustness, resilience, and stability
  • The evolution of scientific intelligence
  • Collective intelligence in plants and non-humans
  • The Wisdom of Crowds & prediction markets
  • Collective search and problem solving
  • Collective memory
  • Emergent organizational forms
  • The intelligence of markets and democracies
  • Technology and software that make groups smarter
  • Collective Intelligence in the new journalism
  • Crowd solutions to policy problems and crises
Conference Organizer: Scott E. Page, University of Michigan
Program Chairs: Deborah M. Gordon, Stanford University
Lada Adamic, Facebook

Important Dates

Conference Dates  |  May 31 – June 2, 2015
Deadline for Abstracts  |  February 1, 2015, Midnight PST
Program Announcement  |   February 23, 2015
Hotel Reservation Deadline  |   May 1, 2015

Posted in Academic Research in CI, Uncategorized | Leave a comment

Europe’s big chance

Some of you read the transcript of my keynote address at the 57th conference of the International Society for Systems Sciences last year, on Augmenting the Collective Intelligence of the Ecosystem of Systems Communities. The title of the conference was “Curating the Conditions for a Thrivable Planet: Systemic Leverage Points for Emerging a Global Eco-Civilization.” The leader of the conference’s program design team was Violeta Bulc, a brilliant systems scientist from Slovenia.

The systems and behavior scientists among you may have also seen an extended version of my keynote in the current issue of Systems Research and Behavioral Science (Volume 31, Issue 5, pages 595–605). What you may not know is that Violeta Bulc helped me shape the ideas I presented in my keynote address and the subsequent academic publication. She is not only an eminent systems thinker and business and social innovator; she also has an intimate understanding of the intricate issues of collective intelligence and collective impact.

The good news is that the Prime Minister of Slovenia nominated her for Slovenia’s slot in the new EU government, the College of Commissioners. The European Parliament will hold its confirmation vote on October 22nd.

I see Bulc’s professional focus and recent article on mass participation in innovation, combined with her capacity to drive scalable innovation projects, which earned her her present job as vice-president of the Slovenian government, as key assets for a more prosperous Europe. If you read the article I referenced here and think that enabling mass participation in innovation would make a big difference for Europe and the world, please spread the word about Violeta and this blog.

Posted in Uncategorized | 4 Comments

A Project for a New Humanism: an interview with Pierre Lévy


Pierre Lévy is a philosopher and a pioneer in the study of the impact of the Internet on human knowledge and culture. In Collective Intelligence: Mankind’s Emerging World in Cyberspace, published in French in 1994 (English translation in 1999), he describes a kind of collective intelligence that extends everywhere and is constantly evaluated and coordinated in real time, a collective human intelligence augmented by new information technologies and the Internet. Since then, he has been working on a major undertaking: the creation of IEML (Information Economy Meta Language), a tool for the augmentation of collective intelligence by means of the algorithmic medium. IEML, which already has its own grammar, is a metalanguage that includes the semantic dimension, making it computable. This in turn allows a reflexive representation of collective intelligence processes.

In the book Semantic Sphere I. Computation, Cognition, and Information Economy, Pierre Lévy describes IEML as a new tool that works with the ocean of data of participatory digital memory, which is common to all humanity, and systematically turns it into knowledge. A system for encoding meaning that adds transparency, interoperability and computability to the operations that take place in digital memory.

By formalising meaning, this metalanguage adds a human dimension to the analysis and exploitation of the data deluge that is the backdrop of our lives in the digital society. And it also offers a new standard for the human sciences with the potential to accommodate maximum diversity and interoperability.

In “The Technologies of Intelligence” and “Collective Intelligence”, you argue that the Internet and related media are new intelligence technologies that augment the intellectual processes of human beings. And that they create a new space of collaboratively produced, dynamic, quantitative knowledge. What are the characteristics of this augmented collective intelligence?

The first thing to understand is that collective intelligence already exists. It is not something that has to be built. Collective intelligence exists at the level of animal societies: it exists in all animal societies, especially insect societies and mammal societies, and of course the human species is a marvellous example of collective intelligence. In addition to the means of communication used by animals, human beings also use language, technology, complex social institutions and so on, which, taken together, create culture. Bees have collective intelligence but without this cultural dimension. In addition, human beings have personal reflexive intelligence that augments the capacity of global collective intelligence. This is not true for animals but only for humans.

Now the point is to augment human collective intelligence. The main way to achieve this is by means of media and symbolic systems. Human collective intelligence is based on language and technology and we can act on these in order to augment it. The first leap forward in the augmentation of human collective intelligence was the invention of writing. Then we invented more complex, subtle and efficient media like paper, the alphabet and positional systems to represent numbers using ten numerals including zero. All of these things led to a considerable increase in collective intelligence. Then there was the invention of the printing press and electronic media. Now we are in a new stage of the augmentation of human collective intelligence: the digital or – as I call it – algorithmic stage. Our new technical structure has given us ubiquitous communication, interconnection of information, and – most importantly – automata that are able to transform symbols. With these three elements we have an extraordinary opportunity to augment human collective intelligence.

You have suggested that there are three stages in the progress of the algorithmic medium prior to the semantic sphere: the addressing of information in the memory of computers (operating systems), the addressing of computers on the Internet, and finally the Web, the addressing of all data within a global network, where all information can be considered part of an interconnected whole. This externalisation of the collective human memory and intellectual processes has increased individual autonomy and the self-organisation of human communities. How has this led to a global, hypermediated public sphere and to the democratisation of knowledge?

This democratisation of knowledge is already happening. If you have ubiquitous communication, it means that you have access to any kind of information almost for free: the best example is Wikipedia. We can also speak about blogs, social media, and the growing open data movement. When you have access to all this information, when you can participate in social networks that support collaborative learning, and when you have algorithms at your fingertips that can help you to do a lot of things, there is a genuine augmentation of collective human intelligence, an augmentation that implies the democratisation of knowledge.

What role do cultural institutions play in this democratisation of knowledge?

Cultural institutions are publishing data in an open way; they are participating in broad conversations on social media, taking advantage of the possibilities of crowdsourcing, and so on. They also have the opportunity to grow an open, bottom-up knowledge management strategy.


A Model of Collective Intelligence in the Service of Human Development (Pierre Lévy, in The Semantic Sphere, 2011). S = sign, B = being, T = thing.

We are now in the midst of what the media have branded the ‘big data’ phenomenon. Our species is producing and storing data in volumes that surpass our powers of perception and analysis. How is this phenomenon connected to the algorithmic medium?

First let’s say that what is happening now, the availability of big flows of data, is just an actualisation of the Internet’s potential. It was always there; it is just that we now have more data and more people able to access and analyse it. There has been a huge increase in the amount of information generated from the second half of the twentieth century to the beginning of the twenty-first century. At the beginning only a few people used the Internet, and now almost half of the human population is connected.

At first the Internet was a way to send and receive messages. We were happy because we could send messages to the whole planet and receive messages from the entire planet. But the biggest potential of the algorithmic medium is not the transmission of information: it is the automatic transformation of data (through software).

We could say that the big data available on the Internet is currently analysed, transformed and exploited by big governments, big scientific laboratories and big corporations. That’s what we call big data today. In the future there will be a democratisation of the processing of big data. It will be a new revolution. If you think about the situation of computers in the early days, only big companies, big governments and big laboratories had access to computing power. But nowadays we have the revolution of social computing and decentralized communication by means of the Internet. I look forward to the same kind of revolution regarding the processing and analysis of big data.

Communications giants like Google and Facebook are promoting the use of artificial intelligence to exploit and analyse data. This means that logic and computing tend to prevail in the way we understand reality. IEML, however, incorporates the semantic dimension. How will this new model be able to describe the way we create and transform meaning, and make it computable?

Today we have something called the “semantic web”, but it is not semantic at all! It is based on logical links between data and on algebraic models of logic. There is no model of semantics there. So in fact there is currently no model that sets out to automate the creation of semantic links in a general and universal way. IEML will enable the simulation of ecosystems of ideas based on people’s activities, and it will reflect collective intelligence. This will completely change the meaning of “big data” because we will be able to transform this data into knowledge.

We have very powerful tools at our disposal, we have enormous, almost unlimited computing power, and we have a medium where communication is ubiquitous. You can communicate everywhere, all the time, and all documents are interconnected. Now the question is: how will we use all these tools in a meaningful way to augment human collective intelligence?

This is why I have invented a language that automatically computes internal semantic relations. When you write a sentence in IEML it automatically creates the semantic network between the words in the sentence, and shows the semantic networks between the words in the dictionary. When you write a text in IEML, it creates the semantic relations between the different sentences that make up the text. Moreover, when you select a text, IEML automatically creates the semantic relations between this text and the other texts in a library. So you have a kind of automatic semantic hypertextualisation. The IEML code programs semantic networks and it can easily be manipulated by algorithms (it is a “regular language”). Plus, IEML self-translates automatically into natural languages, so that users will not be obliged to learn this code.

The most important thing is that if you categorize data in IEML it will automatically create a network of semantic relations between the data. You can have automatically-generated semantic relations inside any kind of data set. This is the point that connects IEML and Big Data.
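As a toy sketch of that idea (this is not IEML itself, whose grammar and dictionary are far richer; the item names and terms below are invented for illustration): once items are categorised with terms from a shared dictionary, the links between items that share a term can be derived automatically, rather than authored one by one.

```python
from itertools import combinations

# Hypothetical categorised data set: item -> set of dictionary terms.
catalog = {
    "text_a": {"collective", "intelligence", "memory"},
    "text_b": {"memory", "network"},
    "text_c": {"collective", "network"},
}

def semantic_links(catalog):
    """Derive an edge between every pair of items that share a term."""
    edges = {}
    for (a, terms_a), (b, terms_b) in combinations(sorted(catalog.items()), 2):
        shared = terms_a & terms_b
        if shared:
            edges[(a, b)] = shared  # the shared terms label the relation
    return edges

links = semantic_links(catalog)
```

Here every pair of texts ends up linked through at least one shared term; the point is only that the network of relations is a by-product of categorisation, which is the property Lévy attributes to IEML at far greater semantic depth.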

So IEML provides a system of computable metadata that makes it possible to automate semantic relationships. Do you think it could become a new common language for human sciences and contribute to their renewal and future development?

Everyone will be able to categorise data however they want. Any discipline, any culture, any theory will be able to categorise data in its own way, to allow diversity, using a single metalanguage, to ensure interoperability. This will automatically generate ecosystems of ideas that will be navigable with all their semantic relations. You will be able to compare different ecosystems of ideas according to their data and the different ways of categorising them. You will be able to choose different perspectives and approaches: for example, the same people interpreting different sets of data, or different people interpreting the same set of data. IEML ensures the interoperability of all ecosystems of ideas. On one hand you have the greatest possible diversity, and on the other you have computability and semantic interoperability. I think that it will be a big improvement for the human sciences, because today the human sciences can use statistics, but that is a purely quantitative method. They can also use automatic reasoning, but that is a purely logical method. With IEML we can compute using semantic relations, and it is only through semantics (in conjunction with logic and statistics) that we can understand what is happening in the human realm. We will be able to analyse and manipulate meaning, and there lies the essence of the human sciences.

Let’s talk about the current stage of development of IEML: I know it’s early days, but can you outline some of the applications or tools that may be developed with this metalanguage?

It is still too early; perhaps the first application will be a kind of collective intelligence game in which people work together to build the best ecosystem of ideas for their own goals.

I published The Semantic Sphere in 2011, and six months ago I finished the grammar, with all its mathematical and algorithmic dimensions. I am writing a second book entitled Algorithmic Intelligence, in which I explain all these things about reflexivity and intelligence. The IEML dictionary will be published online in the coming months. It will be the first kernel, because the dictionary has to be augmented progressively, and not just by me. I hope other people will contribute.

This IEML interlinguistic dictionary ensures that semantic networks can be translated from one natural language to another. Could you explain how it works, and how it incorporates the complexity and pragmatics of natural languages?

The basis of IEML is a simple commutative algebra (a regular language) that makes it computable. A special coding of the algebra (called Script) allows for recursivity, self-referential processes and the programming of rhizomatic graphs. The algorithmic grammar transforms the code into fractally complex networks that represent the semantic structure of texts. The dictionary, made up of terms organized according to symmetric systems of relations (paradigms), gives content to the rhizomatic graphs and creates a kind of common coordinate system of ideas. Working together, the Script, the algorithmic grammar and the dictionary create a symmetric correspondence between individual algebraic operations and different semantic networks (expressed in natural languages). The semantic sphere brings together all possible texts in the language, translated into natural languages, including the semantic relations between all the texts. On the playing field of the semantic sphere, dialogue, intersubjectivity and pragmatic complexity arise, and open games allow free regulation of the categorisation and the evaluation of data. Ultimately, all kinds of ecosystems of ideas – representing collective cognitive processes – will be cultivated in an interoperable environment.


Schema from the START – IEML / English Dictionary by Prof. Pierre Lévy FRSC CRC, University of Ottawa, 25th August 2010 (Copyright Pierre Lévy 2010, license Apache 2.0)

Since IEML automatically creates very complex graphs of semantic relations, one of the development tasks that is still pending is to transform these complex graphs into visualisations that make them usable and navigable.

How do you envisage these big graphs? Can you give us an idea of what the visualisation could look like?

The idea is to project these very complex graphs onto a 3D interactive structure. These could be spheres, for example, so you will be able to go inside the sphere corresponding to one particular idea and you will have all the other ideas of its ecosystem around you, arranged according to the different semantic relations. You will also be able to manipulate the spheres from the outside and look at them as if they were on a geographical map. And you will be able to zoom in and out of fractal levels of complexity. Ecosystems of ideas will be displayed as interactive holograms in virtual reality on the Web (through tablets) and as augmented reality experienced in the 3D physical world (through Google Glass, for example).

I’m also curious about your thoughts on the social alarm generated by the Internet’s enormous capacity to retrieve data, and the potential exploitation of this data. There are social concerns about possible abuses and privacy infringement. Some big companies are starting to consider drafting codes of ethics to regulate and prevent the abuse of data. Do you think a fixed set of rules can effectively regulate the changing environment of the algorithmic medium? How can IEML contribute to improving the transparency and regulation of this medium?

IEML does not only allow transparency, it allows symmetrical transparency. Everybody participating in the semantic sphere will be transparent to others, but all the others will also be transparent to him or her. The problem with hyper-surveillance is that transparency is currently not symmetrical. What I mean is that ordinary people are transparent to big governments and big companies, but these big companies and big governments are not transparent to ordinary people. There is no symmetry. Power differences between big governments and little governments or between big companies and individuals will probably continue to exist. But we can create a new public space where this asymmetry is suspended, and where powerful players are treated exactly like ordinary players.

And to finish up: last month the CCCB Lab began a series of workshops related to the Internet Universe project, which explore the issue of education in the digital environment. As you have published numerous works on this subject, could you summarise a few key points in regard to educating ‘digital natives’ about responsibility and participation in the algorithmic medium?

People have to accept their personal and collective responsibility. Because every time we create a link, every time we “like” something, every time we create a hashtag, every time we buy a book on Amazon, and so on, we transform the relational structure of the common memory. So we have a great deal of responsibility for what happens online. Whatever is happening is the result of what all the people are doing together; the Internet is an expression of human collective intelligence.

Therefore, we also have to develop critical thinking. Everything that you find on the Internet is the expression of particular points of view that are neither neutral nor objective, but an expression of active subjectivities. Where does the money come from? Where do the ideas come from? What is the author’s pragmatic context? And so on. The more we know the answers to these questions, the greater the transparency of the source… and the more it can be trusted. This notion of making the source of information transparent is very close to the scientific mindset, because scientific knowledge has to be able to answer questions such as: Where did the data come from? Where does the theory come from? Where do the grants come from? Transparency is the new objectivity.


Posted in Academic Research in CI, Methodologies associated with CI, Technologies That Support CI | 4 Comments

Issue paper for the workshop on Collective Intelligence for the Common Good (CI4CG)

“The notion of the common good is a denial that society is and should be composed of atomized individuals living in isolation from one another.” (Encyclopedia Britannica)

The workshop, hosted by the Open University on September 29–30, 2014, is aimed at establishing an Open Research and Action Community Network to research CI4CG.

In support of the workshop’s objectives, I’m going to present the following issue paper:

Augmenting the Collective Intelligence of the Ecosystem of CI4CG Initiatives

Motivation

The developmental stage of collective intelligence used by the ecosystem of the various CI4CG-type initiatives, and the vitality of that ecosystem, have an impact on their effectiveness.

That stage and that vitality will shape the initiatives’ capacity to assist decision-makers, communities, and social movements in defining, mapping, and addressing critical local and global problems. Their enhanced capacity will help identify options for wise collective action and anticipate their outcomes.

Boosting the CI of the ecosystem of CI4CG projects is a pivotal task that our conscious evolution may hinge on. Given its convening intention, and the caliber of researchers it has attracted, the now-forming CI4CG Open Research and Action Community Network is well placed to prototype the augmentation of CI through collaborative efforts, using Generative Action Research.

Framework

Augmentation theories go back to a seminal essay by Doug Engelbart, whom the author of this issue paper had the good fortune to have as his mentor. There, Doug laid the foundation for augmentation as “a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully coexist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.”[1]

Later, he went on to describe CI not as a thing but as a process of “sharing among a community of humans the distributed nuclei of human resources represented by individuals with special knowledge, judgment, intuition, imagination, conceptual skills, etc. This human-resource sharing has explosive potential — I look to it with a biological metaphor as providing a new evolutionary stage for the nervous system of social organisms, from which much more highly developed institutional forms may evolve…”[2]

Those two quotes define the first two distinctions of the framework for the suggested participatory action research into augmenting the CI of the ecosystem of CI4CG projects. The third is the concept of “innovation architecture”,[3] comprising the social, electronic, cognitive, and inner technologies and processes that we need to skillfully integrate for augmentation.

Potential research questions

The actual questions of the action research will need to be jointly defined by the researchers who feel called to this inquiry. The set of questions presented below serves only as a conversation starter; their exploration would take place in different phases of the research.

  • What are the mission-critical conditions for using our own medicine and enhancing the CI of our own community?
  • What role does CI play in enhancing collective wisdom, and vice versa?[4]
  • What are the implications of the “neurons that fire together, wire together” process of memory formation for the design of system features and functions that support communal memory formation?
  • What evidence do we see that today’s CI researchers and practitioners may be the tip of an evolutionary wave, of an idea movement that may significantly broaden in the coming years and decades? How can we accelerate that emergence and the learning of all who will be involved in it, including ourselves?
  • What uses by a social movement could benefit from any combination of such socio-technological systems as collective sensing organs and participatory sensory networks, pattern language, collective awareness platforms, web-enabled U Process, community asset mapping, Dynamic Knowledge Repositories, knowledge gardening, knowledge federation[5], global learning games, and social learning? (This list can be narrowed down or expanded depending on the needs and aspirations of the research’s principal stakeholders.)
  • What progress has systems biology made in explaining biological ecosystems that could be exploited in designing IT platforms for CI augmentation?[6]
  • How may second-order cross-fertilization of cheap cloud storage, increasingly high-bandwidth transmission, rapidly growing processing power in hand-held devices, and intelligent software agents affect the evolution of the augmentation of our collective intelligence at a massive scale? (This question can be explored using a Delphi study method.)
  • What is the cutting edge of research in combining semantic and social networks with powerful visualization tools, represented by the work of Simon Buckingham Shum and other researchers?
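One of the questions above invokes the Hebbian principle of memory formation. As a minimal, purely illustrative sketch (not a proposal for how a communal memory system would actually be built), the rule says that the connection between two units strengthens in proportion to their co-activation:

```python
def hebbian_update(weights, activations, rate=0.1):
    """Return a new weight matrix, strengthening w[i][j] when units i and j co-fire."""
    n = len(activations)
    return [
        [0.0 if i == j  # no self-connection
         else weights[i][j] + rate * activations[i] * activations[j]
         for j in range(n)]
        for i in range(n)
    ]

w = [[0.0, 0.0], [0.0, 0.0]]
w = hebbian_update(w, [1.0, 1.0])  # both units fire: the link strengthens
w = hebbian_update(w, [1.0, 0.0])  # only one fires: the link is unchanged
```

Translated to the communal setting, links that are repeatedly activated together (cited, revisited, discussed in the same conversations) would accrue weight, which is one way a system feature could support communal memory formation.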

Methodology

The suggested methodology is based on the integration of the U Process and Generative Action Research (GAR), which belongs to the family of participatory action research methods. GAR is built on the disciplines of generative learning, action research, and appreciative inquiry. It is designed to mobilize and augment the collective intelligence of teams, organisations, communities and social movements, in increasing and cumulative circles of involvement. Its key characteristics are:

  • Cyclic — Action and understanding go through cycles of deliberate and spiraling intervention and reflection. Cycle 1 starts with discovering the questions that are the most compelling to the main stakeholders of the research.
  • Emergent — The design is not fully detailed in advance, allowing each cycle to respond to relevant knowledge emerging from the previous one. Thus, when specific outcomes cannot be predicted, the process remains flexible and is allowed to develop on its own.
  • Participative — Key stakeholders of the project are actively involved in advising the process and in reviewing and commenting on its purpose and design.

The suggested 3 cycles of the research could involve A. the research team (1/2 year); B. the Open Research and Action Community Network (1/2 year); and C. the knowledge commons of one of the Global Solutions Networks.

U Process combined with GAR

Schematic illustration of how the U Process combines with the Generative Action Research

[1] A Conceptual Framework for the Augmentation of Man’s Intellect, 1963, Douglas Engelbart

[2] Coordinated Information Services for a Discipline- Or Mission-Oriented Community, 1972, Douglas Engelbart

[3] Liberating the innovation value of communities of practice, 2005, George Pór

[4] Collective Intelligence and Collective Leadership: Twin Paths to Beyond Chaos, 2008, George Pór

[5] Towards a Federated Framework for Self-evolving Educational Experience Design on Massive Scale, 2010, George Pór

[6] Framework for Awakening Collective Intelligence in the Ecosystem of Commons Initiatives, 2011, George Pór

 

Posted in Academic Research in CI, Democracy and CI, Movement Cartography | 1 Comment

From Right Mindfulness to Collective Intelligence to Collective Sentience

Abstract of a paper invited to Spanda Journal’s special issue on Collective Intelligence

 

Without an ethical foundation grounded in the common good and an integral, evolutionary worldview, the currently trending mindfulness practices and trainings risk reducing a radical, ancient wisdom tradition of self-knowledge and self-transformation to a self-help technique or psychological state readily co-optable by the defenders of the institutional status quo.

“Mindfulness is not merely a compartmentalized tool for enhancing attention but is informed and influenced by many other factors—our view of reality; the nature of our thoughts, speech, and actions; our way of making a living; and our effort in avoiding unwholesome and unskillful states while developing those that are skillful and conducive to health and harmony.”[1]

Ethically grounded collective intelligence (CI) is built on right mindfulness. In this essay, we will use both the functional and the evolutionary definitions of CI.

The term “collective sentience” needs a bit more explanation. It is neither the swarm intelligence of murmuring starlings, nor the coordinated behavior of other social animals, nor the romantic notion of all humans getting enlightened at once. The collective sentience of a social organism, at any scale, implies the capacity to care for and foster the well-being of its parts and the whole, as well as of its larger, encompassing whole.

The aspiration to achieve collective sentience in small and large groups is an integral part of an evolutionary ethos, but given the dominance of today’s individualist culture, its realization is only one of the possible futures. In this essay, we intend to contribute to understanding the conditions for such realization, by noticing, observing and interpreting the signposts in the social field pointing to it.

[1] Purser, R. E.  & Milillo, J. (2014) Mindfulness Revisited: A Buddhist-Based Conceptualization. Journal of Management Inquiry, May 2014

 

Posted in Collective Wisdom, Global Brain, Shared Mindfulness | Leave a comment

The Evidence Hub: Harnessing the Collective Intelligence of Communities to Build Evidence-Based Knowledge

Conventional document and discussion websites provide users with no help in assessing the quality or quantity of evidence behind any given idea. Besides, the very meaning of what evidence is may not be unequivocally defined within a community, and may require deep understanding, common ground and debate. An Evidence Hub is a tool for pooling a community’s collective intelligence on what counts as evidence for an idea. It provides an infrastructure for debating and building evidence-based knowledge and practice. An Evidence Hub is best thought of as a filter onto other websites — a map that distills the most important issues, ideas and evidence from the noise by making clear why ideas and web resources may be worth further investigation. This paper describes the Evidence Hub concept and rationale, the breadth of user engagement, and the evolution of specific features, derived from our work with different community groups in the healthcare and educational sectors.

“The Evidence Hub is a contested collective intelligence tool for communities to gather and debate evidence for ideas and solutions to specific community issues. By aggregating and connecting single contributions, the Evidence Hub provides a collective picture of the evidence for different ideas that have been shared by an online community.”

Read the full paper here.

Posted in Academic Research in CI, Technologies That Support CI | 1 Comment