I woke up this morning with some insights about the relationship between mental modeling and collective intelligence. They seem new, but one never knows; I could have already thought of them years ago, or somebody else may have done so. What interests me is not whether they are new but how they may relate to older expressions of the same “source idea”. Do they improve on the older ones, thought by others or by myself? What new meaning becomes visible when they are overlaid on top of the older ones?
My first instinct for checking what connects this morning’s insights with other thoughts floating in the noosphere is to google “mental modeling” AND “collective intelligence.” A surprisingly low number of hits: only 5 or 7, depending on whether I spell it with “modeling” or “modelling”. One of them is a page where I find an intriguing definition of CI, built on the relationship of local and global cognition.
My next step in finding out where this morning’s insights come from, and what would be the most responsible way to take care of them, is to “spotlight” my hard disk. (Spotlight is the fantastic search tool, part of the Tiger operating system that came with my new G4 laptop.) Spotlight found a file of my notes from a conversation I had with Peter Senge in the late 80’s, while visiting with him at MIT. Before going into the past, you may want to read, first, the summary of this morning’s thoughts:
Our mental models hold together various representations of the world, making it appear coherent. The success of a model in guiding effective action (should we say, its accuracy?) depends on, among other things, how well it is integrated in and supported by a network of adjunct and higher-level models.
The more complexity we can open our thinking to, the more accurate those models can become, and vice versa, in an interlocking, positive feedback loop between better models and higher capacity to absorb complexity.
(Meta-process comment: As I write this entry, I notice that the simple act of paying attention to my experience of an early-morning insight turns it into a mental model, an emerging pod of meaning, which can be projected onto my screen and into your mind.)
There seems to be not much new so far; Doug Engelbart, Stafford Beer, and John Morecroft all talked about that. What caught my attention as new is my understanding of how the urgent and huge task of growing a global collective intelligence can be enabled by two “minor” ones: better mental models and better modeling processes. Here they are:
1. Complex thoughts are composed of simpler ones that can be held and spoken only sequentially, one by one. When they are new, not coming from well-travelled neural pathways, simply watching them can nourish them.
Seeking to relate with their surrounding ecosystem of mental models, the new thoughts call for representation, for being expressed in forms comparable with what precedes and surrounds them. It is a race between the speed with which new thoughts and mental models emerge, on the one hand, and the speed with which they can be recorded, accessed, compared, and studied, on the other. Will the 21st century see the emergence of a new kind of “recording industry”? ;-)
2. The other task is an even more powerful enabler of growing a global collective intelligence. It’s about growing better connectivity with the experience, insights, and inspirations of peers, coaches and mentors.
Continually succeeding in that seems essential to passing our evolutionary test, as individuals and as collectives. These two tasks are interrelated, and I’m already thinking about prototyping an action-research course at the now-planned Better World University, which I would offer as a facilitated learning expedition into the relationship between those two enablers.