Nugget from “As We May Think”

I read Dr. Vannevar Bush’s 1945 paper tonight, and was struck by how much of it consists of descriptions and analogies suggesting how certain things might be implemented in the future, and how succinct the portions are in which he describes what (to me) are the most critical ideas. Given what we now know of electronics and semiconductors, his concepts for the use of photography and lever controls seem quaint and obsolete. Yet the key concepts and problems he described have stood the test of time and are still unsolved!!

I believe the first critical portion is in section 1, paragraphs 3-5. The next is the first sentence of section 2. Next the first three sentences of section 4, and the fourth paragraph of section 4. The second and third paragraphs of section 5 are also key. Then the first two paragraphs of section 6, and the third sentence of the third paragraph: “Selection by association, rather than indexing, may yet be mechanized.” Section 7 describes key aspects of usage, and a couple paragraphs in section 8, perhaps the third and ninth, sum up. I should extract these, present them together, and comment further another night.

For tonight, I will make my nugget a subset of sentences from section 1, paragraphs 3-5:

There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers — conclusions which he cannot find time to grasp, much less remember, as they appear. … Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. … Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential. The difficulty seems to be … that publication has been extended far beyond our present ability to make real use of the record…. The means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.

This, I think, expresses a core concept of Bush’s paper: our current knowledge is so vast and increasing so fast that our continued manual use of ancient memory aids (books and writing, libraries and indexes) cripples our human ability to take advantage of that knowledge and build effectively upon it. He therefore conceives of the memex, a device to aid INDIVIDUALS in storing, accessing, and building upon an increasingly large body of knowledge.

I have heard that meme before, as a child in school: our knowledge is vast and growing fast, and we should build upon it effectively. I don’t know if what I heard was an echo of Vannevar Bush’s paper, or of other people noticing the same thing and repeating it to me. I hear it again when I read about how many more books are published each year than the year before, or how many kilo/mega/peta/exabytes of storage it takes to capture current human knowledge or the Library of Congress. And I concur with Bush’s thesis: our methods of transmitting and reviewing current knowledge are totally inadequate. We need better (semi-automated) methods of getting key information to the particular individuals who can take advantage of it.
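To make that last point concrete, here is a minimal sketch (a toy of my own, not anything Bush or any existing tool specifies) of routing new items to the particular readers whose declared interests overlap them:

```python
# Toy sketch: route new items to readers whose interest profiles overlap them.
# All names and data here are hypothetical illustrations, not a real system.

def tokenize(text):
    """Lowercase and split text into a set of words."""
    return set(text.lower().split())

def route_item(item_text, reader_profiles, min_overlap=2):
    """Return readers whose interest keywords overlap the item by at least min_overlap terms."""
    item_terms = tokenize(item_text)
    matches = []
    for reader, interests in reader_profiles.items():
        overlap = item_terms & {w.lower() for w in interests}
        if len(overlap) >= min_overlap:
            matches.append((reader, sorted(overlap)))
    return matches

profiles = {
    "geneticist": {"genetics", "heredity", "mendel"},
    "archivist": {"indexing", "classification", "records"},
}
print(route_item("Mendel's paper on heredity and genetics went unread", profiles))
# -> [('geneticist', ['genetics', 'heredity'])]
```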

I have worked at several jobs involving computer science and engineering, and in particular several that involved database work where the data itself was knowledge and information, not just cost accounts or sales figures. I can attest to the difficulty of designing GUIs and access mechanisms that allow users to a) enter their information, b) find and retrieve what was relevant to the problem at hand, and c) update and maintain the stored information as new facts and inferences were discovered. The mechanisms we implemented were as automated as we could make them, but they still involved extensive and tedious manual effort, and the users hated the tools.
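As a rough illustration of those three access operations, here is a minimal sketch of a knowledge store with enter, find, and update; it is a hypothetical toy, not the systems I actually built:

```python
# Toy knowledge store illustrating the three access operations discussed above:
# a) enter, b) find/retrieve, c) update. Purely illustrative; not a real system.

class KnowledgeStore:
    def __init__(self):
        self.items = {}     # item_id -> text
        self.next_id = 1

    def enter(self, text):
        """a) Enter a new piece of information; return its id."""
        item_id = self.next_id
        self.items[item_id] = text
        self.next_id += 1
        return item_id

    def find(self, keyword):
        """b) Retrieve items whose text mentions the keyword (case-insensitive)."""
        kw = keyword.lower()
        return {i: t for i, t in self.items.items() if kw in t.lower()}

    def update(self, item_id, new_text):
        """c) Revise an existing item as new facts or inferences are discovered."""
        if item_id not in self.items:
            raise KeyError(f"no item {item_id}")
        self.items[item_id] = new_text

store = KnowledgeStore()
i = store.enter("Mendel's results on heredity were ignored for decades")
print(store.find("heredity"))
store.update(i, "Mendel's results on heredity were rediscovered around 1900")
```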

The MAINTENANCE and UPDATE of the information was a key issue: entering new data, and the manual effort required to create associations between items. Vannevar Bush’s description in the second paragraph of section 7 sends chills down my spine. That’s too much work and takes too much time! The normal case needs to be better automated!! Neil Larson (owner and author of MaxThink; see also, for now, The Story of MaxThink) had some very valuable things to say about semi-automated linking and cross-referencing of hypertexts, and practical experience carrying it out.
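To show the kind of semi-automation I have in mind, here is a minimal sketch that proposes a cross-reference whenever one note mentions another note’s title; it illustrates the general idea only, and is not MaxThink’s or Larson’s actual method:

```python
# Toy sketch of semi-automated cross-referencing: whenever one note's text
# mentions another note's title, generate a link between them automatically,
# so the author does not have to create every association by hand.
# Illustrative only; not how MaxThink or any real tool works.

import re

def auto_links(notes):
    """Return a dict mapping each note title to the titles it mentions."""
    links = {title: [] for title in notes}
    for title, text in notes.items():
        for other in notes:
            if other == title:
                continue
            # Whole-word, case-insensitive match of the other note's title.
            if re.search(r"\b" + re.escape(other) + r"\b", text, re.IGNORECASE):
                links[title].append(other)
    return links

notes = {
    "memex": "Bush's memex stores documents and trails on microfilm",
    "trails": "A trail ties documents together, much as the memex describes",
    "microfilm": "Dense photographic storage used before digital media",
}
print(auto_links(notes))
# {'memex': ['trails', 'microfilm'], 'trails': ['memex'], 'microfilm': []}
```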

I believe that hypertext technology, and related writing/database/retrieval tools, are a key piece of the puzzle (but only a piece: there’s more required). The current World Wide Web implements but a pale shadow of what hypertext can be. In the ’80s and ’90s there were several hypertext research systems that explored key concepts which the WWW later oversimplified. For instance, in addition to the one-directional links that the HTML anchor tag implements, there were bi-directional links, typed links, and one-to-many, many-to-one, and many-to-many links. Some of these types are present in current markup languages (VRML? SMIL?), but the whole suite of hypertext link types is not popularly known and understood.
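As a rough sketch of that richer link model, here is a toy typed, bi-directional, many-to-many link structure; the names and fields are hypothetical choices of mine, not drawn from any particular research system:

```python
# Toy sketch of a richer hypertext link model than the HTML anchor:
# each link has a type and connects sets of source and target nodes,
# so it can be one-to-many, many-to-one, or many-to-many, and can be
# followed in either direction. Field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Link:
    link_type: str              # e.g. "cites", "refutes", "elaborates"
    sources: set = field(default_factory=set)
    targets: set = field(default_factory=set)

class LinkBase:
    def __init__(self):
        self.links = []

    def add(self, link_type, sources, targets):
        self.links.append(Link(link_type, set(sources), set(targets)))

    def outgoing(self, node):
        """Links followed forward from a node (like an HTML anchor)."""
        return [l for l in self.links if node in l.sources]

    def incoming(self, node):
        """Links followed backward to a node; HTML has no native equivalent."""
        return [l for l in self.links if node in l.targets]

lb = LinkBase()
lb.add("elaborates", sources={"memex-note"}, targets={"as-we-may-think", "nls-augment"})
print([l.link_type for l in lb.incoming("as-we-may-think")])   # ['elaborates']
```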

Eastgate Systems has some very interesting hypertext tools, Arbortext (now part of PTC?) used to have some really powerful SGML and XML tools, and there are a slew of other companies and tools available. However, I don’t know of ANYONE who has put together an effective system that sufficiently implements Vannevar Bush’s memex concept. Engelbart came close, but NLS/Augment seems to have been (at least partially) left behind, without an effective replacement. I’ve heard that Microsoft’s OneNote does a very good job in many ways, including for note-taking, but I haven’t used it, and don’t yet see how it can be used effectively for teamwork and collaboration. Ray Ozzie’s Groove was an awesome teamwork tool, but Microsoft has since subsumed it somehow within SharePoint, and what to me were key concepts were lost.

I’m out of time for tonight, but there’s lots more to say…

For my future follow up:

  • Create a blog post consisting of what I think are the critical sentences, then comment further.
  • Correlate key ideas in Bush’s paper with key concepts in Engelbart’s NLS/Augment implementation.
  • Add to Tom Woodward’s Google Doc, or (less desirably) create my own version, and comment on other aspects which struck me, especially creating hypertext links (associations) to other related materials of which I am aware.
  • Blog about the idea that the memex is a device for INDIVIDUALS, not groups, and contrast it with NLS/Augment, which was designed with special features for teamwork and collaboration.
  • Blog further about Neil Larson’s ideas and tools involving automated linking.
  • Identify and link to other hypertext research tools from the ’80s and ’90s, and current tools which may implement some of the memex concepts.

2 thoughts on “Nugget from ‘As We May Think’”

  1. To paraphrase the Bard’s Brutus in Julius Caesar, perhaps “the fault, dear scientist, is not in our information plethora, but in ourselves, that we are beings of finite capacity.” We are subject to an inherent trade-off between the scope of the knowledge we wield and the depth of our understanding. This will always be the case, even in some trans-human future of artificially enhanced and expanded intelligence. Perhaps something quantifiable that could be called a consciousness quotient (indeed, in a different context this has already been done by some researchers) can be defined as a ratio of understanding to scope, which would roughly delineate the patch of concept space within which one could effectively operate.

    Anyway, the growth of knowledge has certainly engendered a new kind of specialization which can be considered a meta-field: the science encompassing the totality of information use, and not simply its organization and retrieval, nor its application to a specific real-world problem. How can one facilitate collegiality among specialists in an increasingly balkanized sphere of human knowledge while minimizing the overhead that comes from cobbling together a system (a team or community of humans) from disparate parts? Such a system is minimally something like [Specialist A -- Specialist F (Facilitator or interfacing system) -- Specialist B], and none of the components is competent beyond its own purview.

    What if the design of machines to assist us can be informed by human psychology, a language developed to express the necessary interactions, along with a culture of interaction amongst the components of these cognitive systems or communities? I am not talking about truly conscious and intelligent machines and the singularity, although they might well come, but rather a non-physical (traditional) synergy of humans and machine systems which are designed on a basic logical level to conform to human cognitive functioning, a synergy that would exist within a self-conscious and purpose-built culture. Computational systems and methods would not be externalities to the problem, operated and designed by specialists who are not necessarily conversant with the given problem under solution. When one considers that a computer made with gears or air valves is no less a computer than a Mac or a mainframe, the possibility that such a synergy could exist as a real entity becomes less counterintuitive than at first blush. One realizes that physicality is necessary but not essential, and that the human mind is always the core (or CPU) of any system it builds. Bush and Babbage do not look quite so primitive in this light, and perhaps a synergistic culture of ordinary humans will be the manner in which the singularity arrives. We were always computationally linked.

    An interesting tool from the Macintosh world that I have played with is DEVONthink, from DEVONtechnologies. It alleviates the task of manually associating bits of relevant data by (from what I have gathered) analyzing the type of data and its source, as well as its similarity to previous data, from which it can generate a classification. This is, conceptually, the kind of tool I was talking about above.
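To make the similarity idea above concrete, here is a minimal sketch that places a new note into the existing group whose notes share the most vocabulary with it; this is a generic illustration, not DEVONthink’s actual algorithm:

```python
# Toy sketch of similarity-based classification: place a new note into the
# existing group whose notes share the most vocabulary with it.
# Generic illustration only; not DEVONthink's actual algorithm.

def terms(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity of two term sets: size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(new_text, groups):
    """Return the name of the group most similar to the new text."""
    new_terms = terms(new_text)
    def group_score(name):
        return max(jaccard(new_terms, terms(t)) for t in groups[name])
    return max(groups, key=group_score)

groups = {
    "hypertext": ["bidirectional links between documents", "typed hypertext links"],
    "hardware": ["microfilm storage density", "vacuum tube switching speed"],
}
print(classify("new note on links between hypertext documents", groups))  # hypertext
```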

  2. This is an amazingly in-depth analysis of Bush’s essay and how his ideas about storing knowledge are still relevant today. We, as a human race, have come full circle since this essay was first published. We have created myriad new technologies and ways of storing and processing information, and now have reached a point of requiring another technological leap so that we may store the new knowledge we have documented.
