Input and Output Devices

I’ve been pondering the implementation of Dr. Engelbart’s NLS/Augment, and in particular the innovations he made in using new input and output devices.  I haven’t yet tried to generate an exhaustive list of all the ones he tried, but several obvious ones stand out.  The mouse page at dougengelbart.org shows examples of several: the mouse, a (standard) keyboard, a chording keyboard or keyset, joystick, light pen, and CRT display.

In the context of devices (tools) to augment a human’s ability to capture and interact with information, what options exist (so far), in what situations is each useful, and what makes sense for (either me, or most people) to use?

The mouse, light pen, and joystick could all be considered types of pointing devices.  We can contemplate other types of pointing devices as well: tablets (e.g., Wacom) using either fingers or styli to point, touch tablets or touch screens (e.g., iPad or touchscreen laptop display) typically using fingers to point, trackballs and other flavors of mice, and trackpads and touchpads.  Some of the newer touch devices now allow gestures and multi-touch actions: for instance, the iPad/iPod/iPhone interface has two-finger pinch and expand motions, as well as others I’m not familiar with.  I believe Android and Windows platforms have similar multi-touch capabilities.

Each of these (pointing) devices has particular strengths and weaknesses, there is significant variation in which ones particular individuals find easiest to use, and there are some subtle distinctions in capabilities which need to be explored.  (Two-dimensional vs three-dimensional pointing, gross gesture vs very-fine-motion tracking, handwriting recognition, etc.)

Other input devices which easily come to mind include microphones, cameras, (image) scanners, fingerprint readers, head and eye trackers, and various types of proximity sensors.  Instrumented gloves or exoskeletons can be used for control as well as motion capture.  There are also devices such as the Kinect which can somehow sense positions of people and/or objects nearby.  That also suggests possible use of radar, sonar, infrared, and microwave sensors of various types.

For keyboards we normally think of a “standard” QWERTY keyboard.  There are other letter layouts such as Dvorak, and various “ergonomic” configurations and variations in key style and feel (chiclet, etc.).  There are a handful of chording keyboards including the Twiddler and the Frog2, as well as a variety of DIY and Maker configurations.  As far as I’m aware, no one is currently marketing a 5-key chording keyset such as Engelbart used; a USB or MIDI keyboard with a small number of keys (usually 25) is the smallest I’ve seen.

I’ve been trying to figure out how to buy or build a chording keyset such as the one Engelbart used, so far with little success.  While there are several websites which describe various chording keyboards and how to build them, I’m not handy enough to fabricate one which looks and feels good enough to satisfy me.  I thought about buying a toy music keyboard for kids and hacking the electronics to interface it to a computer, but found the prices were such that it’s better to buy a purpose-built music keyboard with a USB and/or MIDI interface.  Why buy a toy when you can buy a proper tool?

My current plan is to purchase a 25-key MIDI controller; these are available at prices ranging from $60 to $150.  I tried several last weekend at a nearby music store, but didn’t like the feel of the keys on any of them. I’ll keep shopping until either I find one I like, or come up with a better idea…
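If the MIDI route works out, the software side looks tractable.  Here’s a rough, untested sketch using the Python mido library (the chord table, port handling, and “emit on full release” rule are my own invention, not Engelbart’s actual encoding): accumulate the notes currently held down, and emit a character once all keys are released.

```python
import mido  # requires: pip install mido python-rtmidi

# Hypothetical chord table: sets of MIDI note numbers -> characters.
# Engelbart's 5-key keyset gave 31 one-handed chords; extend as needed.
CHORDS = {
    frozenset({60}): "a",
    frozenset({62}): "b",
    frozenset({60, 62}): "c",
    frozenset({60, 62, 64}): "d",
}

def keyset_loop(port_name):
    held = set()    # notes currently held down
    chord = set()   # all notes seen since the last full release
    with mido.open_input(port_name) as port:
        for msg in port:
            if msg.type == "note_on" and msg.velocity > 0:
                held.add(msg.note)
                chord.add(msg.note)
            elif msg.type in ("note_off", "note_on"):
                # note_on with velocity 0 is MIDI shorthand for note_off
                held.discard(msg.note)
                if not held and chord:
                    # all keys up: the accumulated set is the chord
                    print(CHORDS.get(frozenset(chord), "?"), end="", flush=True)
                    chord.clear()

keyset_loop(mido.get_input_names()[0])  # use the first MIDI input found
```

Emitting on full release is just one design choice; as I understand it, Engelbart’s users also combined keyset chords with mouse buttons acting as shift-like modifiers, which a scheme like this could emulate with extra table entries.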

Formal Logic – DiscussAmongstYourselves

On 26 July 2014, Jon Becker posted to Twitter: “I think a course in formal logic should be a core/general education requirement on all college campuses.”  I was watching for the discussion, but only two others [and Jon] have responded in that forum.  [If there was further discussion elsewhere, please point me to it…]  I have been pondering his statement since, and have some things to say.  [I’m finally pressing “Publish” after writing most of this in February 2015.  Wow!  Another year has gone by!]  There is much more than will fit in a Tweet, so here goes…

In general I strongly agree that learning key portions of formal logic should be part of the core/general education requirement.  Where I might disagree is whether [and when] it should be within an explicit [separate] course in logic, or whether it should be one or several units within another course having a more broadly-scoped focus.  I also think that in addition to being a college-level requirement, some elements should also be required at the high-school level [and perhaps even earlier.]

I derived much benefit from the Logic course I took as an undergraduate, and have applied elements of it in multiple contexts.  I was an Electrical Engineer with a Computer Science minor, so there were many EE and Comp-Sci applications of the formal logic material.  In particular, it prepared me for using Karnaugh maps in Digital Circuits, and there were a couple of constructions which were absolutely critical to memorize and have correct when writing computer software: De Morgan’s laws, NOT (A AND B) == (NOT A) OR (NOT B), and NOT (A OR B) == (NOT A) AND (NOT B).  But what I learned about syllogisms and logic proofs and other topics has paid off handsomely in other areas as well, most especially in areas of systematic or structured analysis, and in reading and writing skills.  While I seldom have to write out statements in logical algebra, the thinking skills I learned in that Logic course are applied almost unconsciously every day in my work, and in all the reading and writing I do.
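(For fellow programmer-types: those two rules are cheap to verify by brute force.  A quick Python check over every combination of truth values, just as a refresher:)

```python
from itertools import product

# Verify De Morgan's laws for every assignment of truth values to A and B.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))  # NOT (A AND B) == (NOT A) OR (NOT B)
    assert (not (a or b)) == ((not a) and (not b))  # NOT (A OR B) == (NOT A) AND (NOT B)

print("Both rules hold for all four truth-value combinations.")
```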

I think there are some undergraduate majors, such as Mathematics, Computer Science, several of the Engineering disciplines, and likely certain others [Lawyers? Communications majors?], for which a formal course in Logic should be required.  [Indeed, some Mathematicians or Scientist/Engineers may need SEVERAL courses of progressively more esoteric Logic and Logical Calculus.]  Others should be encouraged [but I think not required] to take a course focused upon Logic [perhaps especially Communications students and pre-Law students who need to prepare for (and use it during) their Rhetoric and Argument courses.]  But I think for many students, learning the basics of Formal Logic should be more at an “application level”, in bite-size chunks within other required courses.

A writing course [in particular] should have a unit [a week or two long] in which the students study syllogisms. Another short subunit should include some simple logical algebra, which includes at least those two “NOT” rules (NOT (A AND B) = (NOT A) OR (NOT B), and NOT (A OR B) = (NOT A) AND (NOT B)), along with any other key rules which are typically applied in “normal” sentence structures.  The concept of a [logical] tautology is also important to convey.  In either the writing course or a reading course, students should have a short unit with some exercises in breaking down sentences and phrases into logical notation, and determining the validity of the logical argument being made; a worked example follows below.  [This could perhaps be most constructively and easily done during the unit(s) on syllogisms.]
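To illustrate the kind of exercise I have in mind (my own example, using the classic “Barbara” syllogism): the argument “All men are mortal; Socrates is a man; therefore Socrates is mortal” breaks down into notation as

$$\forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr),\quad \mathrm{Man}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})$$

and the student’s task is to confirm that the conclusion actually follows from the premises (here it does, by universal instantiation and modus ponens).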

A history course [especially a history of writing course] could have a reading or two on the history of argumentation and syllogisms and the use of logic and logical fallacies.   A course on Rhetoric or Speech should also include a unit on syllogisms, perhaps in a bit more depth than was covered in the writing course above.

From an administrative perspective, the easiest way to ensure that all undergraduates get an appropriate intro to formal logic is to require they take a particular course covering it, and then verify that course is in their transcript.  But I don’t think that’s the best approach: it would be much better to weave the most important and useful concepts into the other required core curriculum courses, distributed among them as appropriate for each major.

I think it’s the APPLICATION of the concepts which is most important, and the significant applications vary by major and topic area.  In addition, many students will be “turned off” by a somewhat-sterile mathematical approach to logic.  Teaching the material within a writing and speech/rhetorical context, within a reading and analysis context, and within a computer literacy/(simple computer programming)/spreadsheets course will communicate the key ideas and concepts in context(s) which appeal to students, helping them both latch onto and grasp the important ideas, as well as learn how to apply them effectively.  A key part of the appeal is understanding why they are learning logic, and what it is good for.  A pure-mathematics approach to formal logic will effectively communicate the rules, but not their wide and varied applications.

That’s my contribution to the discussion.  🙂   (So far…  Who’s next?)

 

P.S.  Regarding syllogisms, I remember seeing a reference on Tom Van Vleck’s page to The Figures of the Syllogism.  There are probably better and more informative references, but that provides a good starting point for further study…

P.P.S.  For any “language lawyers” out there: Jon used the hashtag DiscussAmongstYourselves.  There’s also a DiscussAmongYourselves.  What’s the difference (usage and connotation) between Amongst and Among?  http://grammarist.com/usage/among-amongst/ seems to have a decent description, basically saying both are correct, but I suspect I’m still missing something important or interesting about the difference…

My Goals for ThoughtVectors / Why I signed up

[Drafted in mid-2014 during the first part of the Summer 2014 UNIV 200 course, just publishing now…]

When I first heard there would be a summertime MOOC which would study the works of Engelbart, Bush, Licklider, and others, I was very excited and searched for more details. I had heard of “As We May Think”, and have been following (for many years!) several related efforts to rehost NLS/Augment and/or reimplement portions of it using current computer technology. I am particularly interested in software which facilitates and enhances my ability to perform “knowledge work”, and which facilitates my ability to collaborate with others in that work. I am also very interested in building tools which help others perform knowledge work and collaborate with each other.  So the opportunity to study the writings of these visionaries (especially Dr. Engelbart), to learn in depth about their key concepts, and to get my head around where they were pointing and why we haven’t yet fully implemented their visions, was too good to pass up. So I followed pointers from Dr. Gardner Campbell’s blog to the other early “conspirators”, to determine details of the course and how to join in.

At the same time, I took a new 5×7 paper journal off my shelf, and began making notes of ideas and questions to explore, things I want to say and share, people and software and ideas which I associate with knowledge work and teamwork, and concepts and writings I’ve come across over the years which have been very valuable to me in this area. (I’ve made 40 pages of outlines so far, and am really just getting started.)

When I discovered that this course wouldn’t be a focused deep-dive into the technical concepts, but rather a 200-level “writing course”, I was somewhat disappointed. However, I realized that a) practicing and learning more about writing and argument won’t do me any harm, and b) I can focus my Inquiry Project on the aspects I care about, and accomplish my learning goals that way. Also, I noted from the “DS106” links from Tom Woodward, Alan Levine (CogDog), and others, that it was likely I would be learning to include graphics and multimedia in my compositions. That’s going to be a huge stretch for me, but a very valuable one.

[Added February 2016.] The goals above are still good.  I’m leaning much more strongly toward the aspects of deep-dive study and discussion of the source materials (Engelbart, Kay, etc.), with less emphasis (but still some) on the pedagogic and multimedia presentation aspects of UNIV 200.  I also plan to work in (where appropriate) some semi-related material from other topics I’ve been studying over the past year and a half.

Life Happens / Continuing

My last ThoughtVectors post was in late June 2014, and I have been silent since.  Family activities and vacation trips, work, summer and fall activities, children’s homework and band/sports activities, etc. have taken priority over “finishing” the course.  This is just as it should be: it is very appropriate for my family life and work to take priority over my (open) participation in the summer 2014 UNIV 200 course.  But my interest and purpose in participating have not ended: I still have lots to learn and contribute, and I haven’t yet fully grasped the knowledge and material I wanted to explore.

Many times since Fall 2014 I’ve considered restarting, yet I have hesitated and failed to resume posting, in part because I fear I’ll be interrupted again by higher priorities, or not be able to post “often enough”, or want to explore and post on some other topic instead for a bit.  It’s time I simply pick up where I left off, post what I can when I can, and recognize that in large part I’m in this on my own at my own pace: “often enough” is whenever works [which is much better than never or delayed for years].  I’m not (and don’t have to be) on a college semester-synchronized schedule as are VCU UNIV 200 students.  I suspect that instead, I’m on the “#ThoughtVectors4Life” plan: my participation will of necessity span several semester-iterations of the UNIV 200 course, and my interest in key parts of the material will likely continue for decades.

There have been several ThoughtVectors-related topics I’ve drafted lately, about which I want to post.  So I plan to get those out of my way and off my mind in the next couple posts.  That will help me prepare to dive back into exploring the specific areas which interest me, as well as “finishing up” the UNIV 200 learning exercises.

— John

P.S. Re “#ThoughtVectors4Life”: see http://ds106.us/ and (for instance) http://bavatuesdays.com/ds106-4life/ and http://rowanpeter.com/my-ds106/.  However, note that what I’m talking about is Thought Vectors and Bush’s memex and Engelbart and Nelson and Kay and so forth, NOT (just) a 200-level (sophomore) “UNIV 200 [writing course]”-4Life.

Nugget #3 on Augmenting Human Intellect

The Augmenting Human Intellect paper by Dr Douglas Engelbart is chock full of interesting and valuable ideas.  Most significantly, it in many ways blew away my initial concepts for my inquiry project.  I’ve been thinking about how features of NLS/Augment have or have not made it into current collaboration technology. What I neglected to take into account was that NLS/Augment was a research system, perhaps never intended to be a final system but only a stepping-stone to even more advanced artifacts for human augmentation.  It was only a (small!) part of the overall augmentation system, not even necessarily the key piece.

In the AHI paper, Engelbart describes the H-LAM/T system: the Human augmented by the Language, Artifacts, and Methodology in which he is Trained.  In this view, NLS/Augment is but an artifact, which provides a small (command-line) language and methods which the trained human can use to increase his effectiveness.  Engelbart also states that the human and the artifacts are the only physical components of the system.  Hmmm… Where does software fit?  I think it’s an artifact, but there are languages and methods somehow involved, and I’m not yet sure where the separating lines are.

The nugget I wish to discuss is as follows:

Individuals who operate effectively in our culture have already been considerably “augmented.” Basic human capabilities for sensing stimuli, performing numerous mental operations, and for communicating with the outside world, are put to work in our society within a system–an H-LAM/T system–the individual augmented by the language, artifacts, and methodology in which he is trained. Furthermore, we suspect that improving the effectiveness of the individual as he operates in our society should be approached as a system-engineering problem–that is, the H-LAM/T system should be studied as an interacting whole from a synthesis-oriented approach.

Of particular interest and importance is that Engelbart is suggesting that the ENTIRE augmentation process be approached as a system-engineering problem. The definition of the system is particularly important: it’s not just a human, it’s not a particular tool or computer program, it’s not a language or method. It’s the COMBINATION of a human with language(s) and artifact(s) and method(s) (and methodologies), and the human is TRAINED in the use of all of them.

The concept of language is quite important.  It isn’t necessarily limited to human languages (English, Spanish, Russian, etc.) or computer languages (binary, ASCII, Unicode, Fortran, C, JavaScript, Eiffel, Ruby, PostScript, SGML, etc., etc.), but includes ideas of the imperative commands we can provide software (including pointing and gestures and perhaps voice or thoughts), and the means by which a computer can respond (via a display or sounds or tactile feedback or other means.)  I need to study further to determine exactly what Engelbart conceived of as ‘language’, and how he intended his use of that term to be interpreted.

One of the very significant concepts within Engelbart’s formulation of the augmentation problem is what he termed the ABC model.  A activities are those we use to carry out our business.  B activities are those which improve the A activities.  And C activities are those which improve the B activities: they improve our ability to improve.  (Somewhere here there’s got to be a good analogy to math, which I’ll poorly phrase thus: the B activities are like the slope or derivative or velocity; C activities are like a second derivative or acceleration.  So A activities to which are applied both positive B and positive C activities will improve at a rapidly increasing pace.)  I’ve been thinking of NLS/Augment as the result of a B activity, which facilitates A activities.  What I neglected is that there must be C activities which will continue to build upon, extend, and eventually replace NLS/Augment and its capabilities.
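To phrase that analogy slightly less poorly (this framing is mine, not Engelbart’s): if $a(t)$ measures our A-level capability over time, then

$$\text{B activities:}\ \frac{da}{dt} > 0, \qquad \text{C activities:}\ \frac{d^{2}a}{dt^{2}} > 0,$$

so with both in force, capability doesn’t just grow, it compounds.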

Another major aspect I neglected to take into account in considering my Inquiry Project is that there are MULTIPLE artifacts in simultaneous use.  Paper.  Pencil and Pen. Computers. Telephones. Cars and Roads. Money.  Etc.  Engelbart defined the H-LAM/T system much more broadly than I originally considered.  I was focused on the NLS/Augment software and related computer hardware (display, mouse, chording keyboard, etc.); that computer system is but one (significant but) minor component of the overall H-LAM/T system.  Even the training is important!  Both at work and at home I am constantly reminded how training and practice are both required to make effective use of the computers and software available to us. (I’ve had extensive training and experience with tools my family and certain co-workers do not know how to use effectively.)

There are many more ideas within the AHI paper than I can fully grasp and map to features in NLS/Augment, within the scope of the UNIV 200 course or Inquiry Project.  I’m going to need to plan for a much smaller scope.  I still think deep study of the features of NLS/Augment and the documented concepts within the AHI paper are very worthwhile, but an appropriately scaled project scope is not yet obvious to me.

For future follow up:

  • Study more fully/deeply Engelbart’s concept and use of the term language.
  • Study more fully the concept of artifact, especially with respect to how software fits in.
  • Find and study more about how Engelbart set up the augmentation problem as a systems-engineering problem.
  • Define a smaller scope for my Inquiry Project.

Definitions within Jenny Stout’s Intro Video

A couple weeks ago I watched Jenny Stout’s video about “Thought Vectors in Concept Space” which is found on the syllabus page. I was pleased and intrigued with the differences in her definition and descriptions, as compared with my initial concepts. In particular, it was very interesting to me that her description suggested that the “Concept Space” is always a shared space. In my concepts, my own mind is itself a concept space. But the idea that it needs to be shared to be a concept space is intriguing.

To enable my own deeper thoughts on Jenny’s statements, I’ve transcribed her words for easier reference:

0:00 Hi everyone, my name is Jenny Stout and I’m one of the teaching and learning librarians here at VCU. In University 200, “Living the Dreams”, you all are learning about Thought Vectors and Concept Space, and I just wanted to add my two cents to the conversation about what Thought Vectors are, and what Concept Space is. So for me, a Thought is something that you hold in your head, so a Thought Vector is a thought that’s going somewhere, that it has a greater purpose.
0:30 Now when we go through life, we have lots of thoughts, we collect lots of wisdom, and that benefits us personally, we don’t need to share it in order for it to benefit us. However, there’s a special kind of magic that happens when you launch your Thought Vectors into Concept Space. And what is Concept Space? Well, for me, Concept Space is anything that is outside of your own head. So, Twitter is a concept space, a library is a concept space, a classroom (either in person or online) is a concept space, and even just a conversation between two people can be a concept space. Anywhere where you share your ideas and your thoughts, and you can inspire someone to share their ideas, and together kind of create new ideas, and have this sort of mutual riffing going on.
1:20 So think about this example: so let’s say you’re home alone, and you decide to watch a really cheesy 1950’s science fiction movie, like Godzilla vs Mothman, or something crazy, and it’s a really terrible movie. You’re watching it by yourself. You might be a little amused, but generally speaking it’s going to be pretty boring. Now imagine you’re watching the same movie with your group of your funniest friends. Now you probably can guess what would happen: you would be making fun of the movie, you would be joking about it, you would be riffing on it, and before long this terrible movie is actually really fun, and you’re having a great time with your friends.
1:50 So take that example and apply it to this class. If you have ideas, questions, thoughts, jokes, and you keep it to yourself, you’re not going to have half as much fun as if you actually share it with other people in the class, right? So we really encourage everybody to share their ideas, as crazy or as half-baked as they may be in this class, because we don’t really know what could happen, like amazing things could happen. We could have projects, we could have great conversations, we could learn new things, ask big questions that don’t even have answers to them, but we’re not going to know that unless people actually share their ideas.
2:27 I’ve been at VCU for a couple of years now, and I know that a lot of students are very hesitant to share ideas, or even ask questions in a classroom. And I think it’s this fear that people might laugh at us, or people might think our ideas are dumb, or the professor might think our ideas are dumb, right? So I really try to encourage students to break through that fear, and be willing to raise your hand and share ideas. In this classroom you won’t be sitting in a classroom raising your hand, but you will be asked to participate, and I really hope that you do it without fear. Because this is going to be a class where creativity, and crazy ideas, and crazy questions are encouraged.
3:07 So, as you’re going through this class, I hope that you will share your Thought Vectors, that you will launch them into Concept Space, right? And I think that if you do, you’re going to have a lot more fun, you might inspire some people, and other people might inspire you, than if you kind of just sit back and don’t really participate. So, please share your Thought Vectors, and once again, I’m Jenny Stout, I work at the library, and if you want to come by and ask me any questions at any time, feel free my door is always open. So have a really great time this semester taking University 200, Living the Dreams.

Jenny’s definition of a Thought Vector is at about 27 seconds: it’s a Thought which is going somewhere, which may have a greater purpose. Her description of a Concept Space is summarized by “Anything that’s outside of your own head.” And her REALLY KEY statement is that a Concept Space exists “anywhere where you share your ideas and your thoughts, and you can inspire someone to share their ideas, and together kind of create new ideas, and have this sort of mutual riffing going on.” That’s a fantastic idea!!

I’m still pondering this idea, and I’m currently still of the opinion that my own mind (and obviously everyone else’s mind also) is a concept space (which can hold thoughts and thought vectors), even though it isn’t as fun a concept space as Twitter or a blog or shared conversation. The thought vectors in my head ARE going somewhere: they directly and indirectly affect my actions. The thought vectors in my head are what prompt me to voice or write the same or similar ones, launching them into a shared concept space.

Another oddball thought is whether my mind potentially contains SEVERAL concept spaces. I partition my thoughts among various domains including work, family, the books I’m reading, etc. While there is some cross-flow of ideas among those areas, I find that the thoughts are strongly grouped. While in a sense they all exist within a single concept space, in another sense there are several concept spaces, within each of which the thoughts interact robustly, while between which fewer thoughts cross. I find I consciously note when a thought from one group is related to or applies to thoughts in another group.

Thank you Jenny Stout, for providing such rich food for thought! I’m still chewing, trying to think this through and fully digest the ideas. There’s a lot more yet to grasp!

Another useful angle to explore is exactly what Dr Douglas Engelbart meant when he voiced the “thought vectors in concept space” phrase. I haven’t yet tried to search for that phrase or those words in any of his writings. If anyone already knows where to find it, or can point me to where he defines his use of those terms, please let me know!

Purple Numbers

In Christina Engelbart‘s “Tips for blogging about Doug Engelbart and his work” post, she mentioned granular addressability and the “Purple Numbers” which were inspired by the NLS/Augment implementation.  For anyone simply commenting upon aspects of the “Augmenting Human Intellect” paper, her instructions in the “Tips for Blogging…” post are excellent and sufficient.

But for programmer-types, and anyone interested in more discussion of these concepts, there’s lots more information available to read and study.  I first learned about purple numbers from reading entries in Eugene Eric Kim‘s blog, and tonight am finding links to much more discussion by him and others.  A search on his blog for “purple” turns up a bunch of entries about it.  One of the earlier ones, “A Brief History of Purple Numbers” from August 2003, corrects and expands upon “Why is it Purple?” by Chris Dent in late July 2003.  Those two pages summarize most of the history.  There’s also a bit on “The History of Purple Numbers” written by Christina Engelbart in February 2005.

In 2004 Eugene Eric Kim posted “Tim Bray on Purple Numbers“, in response to Tim’s post “Purple Number Signs” in which Tim implemented, then removed, then put back the purple numbers in his post.  Tim Bray discusses several concerns, then points to postings by Simon Willison, Mark Nottingham, and the Chris Dent history post above.  In the same timeframe Chris Dent posted “Big Day for Purple Numbers” in response to Tim Bray and Jonas M Luster.  (Jonas Luster’s old site seems to have moved; I haven’t tried to use archive.org to find the original posts Chris referenced.)

Chris Dent has a series of entries from 2005, “Fundamentally Purple“,  “Purple Response“,  and “Purple Identification” which discuss some concerns and goals of these persistent identifiers.  A Google search for “purple site:burningchrome.com” yields a bunch more of Chris Dent’s posts about this topic.  Eugene Eric Kim refers to more of his discussion with Chris Dent in “Purple Numbers: Optimized for Synthesis” from May 2005.

Tonight my search also turned up three entries at Boris Mann’s site with some interesting comments, such as the fact that he learned the paragraph sign is called a ‘pilcrow’ from Tim Bray.

Eugene Eric Kim has purple-0.4 on his wiki, but his blog post indicates it was up to at least version 0.9 as of February 2006.  The wiki page for v0.4 points to “An Introduction to Purple” from August 2001, which I think is the earliest write-up I’ve found.
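The core mechanism is simple enough to sketch.  Here’s a toy Python illustration of the idea (mine, not Kim’s actual purple library): assign each paragraph a stable node ID and render a small self-referencing anchor beside it, so any paragraph can be linked to directly.

```python
# Toy purple numbers: give each paragraph a stable ID plus a visible,
# self-referencing anchor so individual paragraphs can be linked to.
paragraphs = [
    "First paragraph of some document.",
    "Second paragraph, which a reader may want to cite precisely.",
]

html = []
for n, text in enumerate(paragraphs, start=1):
    nid = f"nid{n:03d}"  # real systems persist these IDs across edits
    html.append(
        f'<p id="{nid}">{text} '
        f'<a class="purple" href="#{nid}" title="link to this paragraph">({nid})</a></p>'
    )

print("\n".join(html))
```

The hard part, and much of what the posts above discuss, is keeping those IDs stable as the document is edited; that persistence is what makes the addressability “granular” in the NLS sense.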

The Boris Mann comment reminds me that some other time we’ll have to follow some thought vectors starting with other posts on SGML and HTML and XML and other markup languages by Tim Bray, as well as some regarding typesetting and typography and computer-based tools like TeX and Metafont developed by Donald Knuth, extensions such as LaTeX from Leslie Lamport, and other such things…

Thinking and Paper

Yesterday as I was reading Dr Engelbart’s “Augmenting Human Intellect” papers, I was reminded by his use of the H-LAM/T acronym (Human using Language, Artifacts, Methodology, in which he is Trained), and his definition of the term Artifact, that I very much wanted to write about Dr Edward Tufte‘s “Thinking and Paper” forum entry.  It is at http://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=00008c.

Dr Tufte’s entry begins with a reference to and comments upon Malcolm Gladwell‘s article “The Social Life of Paper“, which was originally published in the March 25, 2002 issue of the New Yorker.  That article is also available on Gladwell’s site gladwell.com.   (As an aside, Gladwell has an archive of articles he has written which are ALL very interesting and worth reading.  His article “Six Degrees of Lois Weisberg” is another particularly interesting one to read from a #ThoughtVectors perspective.)

Following Dr Tufte’s remarks, many readers have posted interesting and useful comments.  It’s worth quite a bit of time to read down through them, ponder in turn their content, and use them as launching points for further study and exploration.  The forum comments explore the idea that the use of paper is very valuable in facilitating thinking, and for some purposes is better than any other technique or tool.  Dr Tufte explicitly states “For some cognitive tasks, paper outperforms computers…”, and others provide many examples and suggestions along those lines.

It has been many years since I read Gladwell’s article and Tufte’s forum comments, but the key concept I gleaned is that sometimes paper IS the best tool to use.  While many things have been automated using computers and other tools, there are still lots of tasks for which paper is the most effective and efficient approach.

Of additional interest, the forum entry includes multiple comments by  (and comments in response to) a gentleman named Martin Ternouth, describing the paper-based organization system he developed.  At one point he had a job for which “My desk system had to have all operational information immediately to hand, but in such a form that it could be cleared instantly: either to receive a hostile visitor complaining of a mispayment, or to substitute the paperwork for another complex problem totally unrelated to the last.”  These entries are HIGHLY worth reading.

Entries related to Ternouth’s system have been extracted and summarized at http://drauh.typepad.com/Ternouth/, and described in several places.  Also, there have been a couple of successful attempts at merging the Getting Things Done concepts with Ternouth’s, such as at 43folders, based on an article by Ishbadiddle called “The Anxiety of Getting Things Done”.  (I’m having trouble finding the original article tonight; the link to http://triptronix.net/ishbadiddle/archives/2005/06/19/01.53.20/ seems broken. Perhaps we need to use the WayBack machine at archive.org…)  Note also that the donationcoder.com site has a PDF description from Mr Ternouth attached with permission.

So with respect to Dr Engelbart’s augmentation concepts, we must not neglect nor denigrate the value of paper.  And with the ideas in Tufte’s forum 00008c, Gladwell’s article archive, Getting Things Done, and the various descriptions of Ternouth’s system, there is no lack of mind fodder for, and distractions from, our #ThoughtVectors explorations.

Reflections at the Second Week Milestone

First, I wish to compliment Asa at Anonymous Octopus for an awesome blog post title which is both creative and descriptive: A Fortnight in a Flash. I strongly agree that the first two weeks of this course have gone by extremely rapidly.

Due to my work and home/family schedules, I’ve been having trouble keeping up with the scheduled assignments from the syllabus.  As I’m an open participant, I don’t need to worry about that from a grading perspective, but I do want to participate fully and contribute appropriately to what should be a highly synergistic interaction among all participants.  To do that I need to keep up as best I can.  Also, I know from many previous experiences that I will benefit from this course in proportion to the effort I put in.  I want to learn and benefit greatly, so I will need to work hard and carve out the time needed as best I can.

Thus far I’ve focused most of my effort on the nugget and concept experiences, and not emphasized extensive commenting on other participants’ posts.  I’ve made some comments, but not the 5-10 scheduled for some days.  In the big scheme of things, this is probably OK: I think it will be of more value to the students for me to comment well but less frequently, than frequently but with less care and thought.

As I’ve partly described in my previous post, and will discuss more in future posts, my goals for this course are likely a bit different from those of most other participants.  While I expect to learn much about writing and multimedia presentation and how to construct and present effective arguments using more than just words, that’s not my initial or primary goal.  My main interest is a deep inquiry into the concepts and ideas of visionaries such as Drs Vannevar Bush and Douglas Engelbart, and an examination of how those concepts have and have not manifested themselves in current computer-aided knowledge-work tools.  It also would be nice to include the formulation of a plan or concept as to what “missing pieces” are needed and (perhaps) how to bring those “missing pieces” into existence through integration of existing tools or creation of new components, but that may be stretching a bit too far (at least within the duration of this course.)

There are also several items I wish very much to contribute and launch into the #thoughtvectors mix.  In the weeks before the course started, I made notes in a handwritten journal of a whole bunch of ideas which are either directly or tangentially related to course topics, about which I plan to post as food for thought.  These include many pointers to various useful concepts and resources I’ve found over the years, which I hope the students (and instructors) can benefit from in many ways.  Writing these up will take additional time from my schedule; I expect to have to prioritize the writeups, and continue posting them to my blog well after the summer course is complete.

I suspect very little of what I want to write about is unknown (especially to the instructors), but perhaps my writing will prompt several to discuss and describe the ideas in more depth, with more usefulness and examples of application.  For instance, one specific area I plan to write about is outlining, including the concept of “mind maps”; I see that Karen Richardson has already created an excellent example of a mindmap in describing her associative trail.  Likewise, another topic involves various tools useful for brainstorming and writing and authoring various types of documents; Suzan has already described the start of an excellent list.

In summary, these first two weeks have been interesting and valuable and a great start to the course.  I’m looking forward eagerly to the next couple weeks, especially the Douglas Engelbart readings and the archive of Friday’s interview with Alan Kay.

Circling ’round to the Inquiry Project

For well over a decade (more like two or three) I’ve been studying and learning about various computer-based tools which were designed and intended to help humans think and communicate. Over the last several years I’ve been reading sporadically about Dr Douglas Engelbart’s NLS/Augment (see my Concept Exercise #2 post for links to more details), and various efforts to rehost or re-implement key components.  I’m particularly interested in exploring how the key concepts from Dr Engelbart and others have or can be implemented effectively. Most importantly, I currently believe that many important and valuable concepts have not made it into current-day software available for our (my!) use, and I would like to identify those, and both ponder and discuss how that can be corrected.

So an initial concept for my Inquiry Project is to investigate the concepts Dr Engelbart developed, and then examine how (and which of) these were implemented in NLS/Augment and other software since. Also worthy of examination is the reverse: what were the features and capabilities of NLS/Augment, how do these relate to Dr Engelbart’s concepts, and how have these features and capabilities informed or migrated to current-day software systems?

Related and worthwhile is to widen the scope of the study to include concepts from Drs Vannevar Bush, J.C.R. Licklider, Theodor (Ted) Nelson, Alan Kay, and others who have investigated aspects of computer-assisted human productivity, especially in the realm of assisted thought and knowledge work. Covering that scope well is years of work, not weeks, so a much more limited problem-statement will be required for this summer’s Inquiry Project.

One approach to narrowing the study would be to focus on identifying and describing the key concepts from the various authors we are studying during the UNIV 200 course. It could be narrowed even further to the particular papers we are reading, but I’d prefer to explore a bit more widely, especially among other papers by Dr Engelbart and his team.

Another approach is to focus on NLS/Augment itself, identify the features and capabilities it provided, and map those to current-day software to identify which are available through new combinations of tools, and which are either not available or are particularly difficult to replicate. This idea has significant promise, because I’m currently very interested in the question of “what’s missing?” from the current set of tools available to me. However, given that it currently appears that there is no existing operational version of NLS/Augment available for me to study, the source material would have to be Dr Engelbart’s written materials, perhaps some artifacts of the efforts to preserve and re-implement NLS/Augment (OpenAugment, Hyperscope, the Open Hyperdocument System, and perhaps others), and perhaps the source code of NLS/Augment itself if it is or can be made available.

Yet another approach is to identify a particular subset of concepts or features from either Dr Engelbart’s writings or the NLS/Augment implementation, and discuss how these could be implemented more effectively in today’s environment. Perhaps some key ideas from Theodor Nelson can also be worked in: transclusion, intertwingling, and so forth. This could also be approached from the direction of “what’s missing?” from today’s environment, and what did Dr Engelbart or others recommend to fill those gaps?

A very likely major problem with all of the above ideas is that the UNIV 200 course is more of a “writing” course than a “deep study” course, and that it appears to me at the moment that few of the participants have deep interest in computer science topics. Therefore it’s likely that my Inquiry Project would be a solo effort for the most part, which is mostly fine with me, but fails to accommodate course goals of interaction and cross-collaboration among course participants. So I should likely try to define an “Inquiry Project” which others in the course have some interest in, such that I will benefit from their inquiry and explorations, and such that they can also benefit from mine.

This week I read in a couple locations that the course is supposed to have a “New Media” focus. So perhaps investigating something concerning media types, both new and old, and the capabilities of current tools to create and manipulate these media to capture and present ideas would be of interest. My knowledge of hypertext and markup languages and various drawing and image manipulation and text processing formats and tools could be of interest and use in a project relating to multimedia authoring and collaboration. But I’m not really excited about digging deeply into that. I’d really rather study what Engelbart and others wrote and figure out how that applies today.

From a writing, authoring, composition, and argumentation perspective, I think perhaps the act of authoring an effective multimedia presentation is a key part of the goals of the course. I know I’m not very good at visual design or graphic arts; practicing use of media other than the written word will be good for me (but difficult and time-consuming!) It currently appears to me that the “Inquiry Project” is more-or-less a directed exercise in preparing an effective multimedia presentation. The benefit of having a topic I’m interested and passionate about is that it makes it much easier to spend the time and effort to create and refine the presentation.

My primary current interest is in learning what Engelbart, Bush, Nelson, Kay, and others have said, digging deep into their concepts, pondering how these are manifested or not in current software, and identifying what “missing pieces” are needed to enable more effective computer-augmentation of human thought and collaboration. The “Inquiry Project” would help force me to DOCUMENT what I learn along the way, in a manner which could be extended and built upon in the future.

I think my next steps are to read Dr Engelbart’s “Augmenting Human Intellect” thoroughly, and along the way watch for particular aspects or subtopics which both interest me highly and are likely to interest others in the course as well.

Formulative/Formulated Exercise around NLS/Augment

In Dr J.C.R. Licklider’s “Man-Computer Symbiosis” paper, he describes two different kinds of thinking processes which he terms “Formulative” and “Formulated”. In his abstract he says “The main aims [of man-computer symbiosis] are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs.” Later in section two of the paper, he states “One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems.”

“Formulative” thinking involves developing questions, hypotheses, identifying models and procedures which could be used, etc. “Formulated” thinking means following defined procedures and algorithms to work out the details, to verify or refute the hypotheses, to carry out the tasks which can be reduced to a pre-defined routine. “Formulative” thinking sets up the problem; “Formulated” thinking “turns the crank”, doing the necessary detail work which is required to make the problem solution clear or obvious. Licklider describes this most effectively in section four of his paper.

In the various instructions for “Concept Experience #2”, such as the ones at Team Zoetrope and Team Innov8, students were instructed to choose an obvious statement to analyze (“Analyze the obvious”) or one related to digital media, perhaps related to possible topics for their Inquiry Project. I’m going to take a similar but slightly different approach. I’m very interested in various aspects of Dr Engelbart’s NLS/Augment system, so I’m going to formulate some questions concerning it, and then use the computer to help me answer those questions and gather information which will help me formulate new/additional/better questions. Since I want to limit the exercise to a little over an hour, I’ll do what I can within a reasonable time limit rather than try to answer the questions completely. Since the purpose of the exercise is to observe and experience the difference between the two types of thinking, I’ll focus attention and time on observing my thinking and how the information presented by the computer assists, rather than solely on the questions and answers related to NLS/Augment. (For expansion later, I consider this one form of “Focused Inquiry”.)

OK, time-check: Starting at about ten minutes before the hour.

Initial questions:

  • “What is NLS/Augment?”
  • “What key features did it implement?”
  • “Where can key information about it be found?”
  • “What key artifacts (documentation, examples, demos, papers, videos, etc.) are available which provide details?”

Let’s start with a Google search for “NLS/Augment”, and expect to follow a link to Wikipedia very early on.

Sure enough, Google shows Wikipedia as the first hit. Opening each useful-looking hit in a new tab, and numbering them in order, we have:

Now it’s time to pick a tab, and see what it contains and where it leads…

Link 2 (Wikipedia) contains a lot of useful info. It provides a brief description of NLS, discusses its development with mentions of people and equipment involved, has a very good outline summary of “Firsts”, and has a section on “Decline and Succession”. There are also several (around 6) links to external references. Much to digest here, and the names and terminology would also be useful to enter into Google to perform a search very much like this one, to gather information on several of the subtopics. This page does a decent job of answering my first two questions (What is NLS/Augment? What key features did it implement?) with a top-level summary.

There are about 14 source references, and a BUNCH of references within Wikipedia. The external references include

Link 3, “About NLS/Augment” contains six meaty paragraphs which describe the development of NLS and Augment, point to multiple papers and references, and point ahead to information on the OHS (Open Hyperdocument System) and HyperScope. It points to two key papers on NLS/Augment, five key papers on OHS and Hyperscope, and also a complete bibliography of works by Douglas Engelbart and his staff. These will go a long way to answering my questions “Where can key information about it be found?” “What key artifacts (documentation, examples, demos, papers, videos, etc.) are available which provide details?” Of particular interest among the links are:

There’s also a reference to Link 2, http://en.wikipedia.org/wiki/NLS_(computer_system), which is the first cycle (circular reference) I have found so far during this search.

Link 4, “NLS/Augment Index”, contains a treasure-trove of references. Among other items, it has a section discussing a system clone and the source code, and efforts to make them public. There is also an extensive bibliography of papers by Engelbart and others. This bibliography should be compared to the one at the dougengelbart.org site (link 17 above) to identify any unique items to be read. There is also a section on photographs and films. This page is another key resource which answers “What key artifacts (documentation, examples, demos, papers, videos, etc.) are available which provide details?”

Link 5 describes the 1968 demo as a series of 35 video clips in flash format, each annotated with a brief description of what that clip contains. There is also a 100-minute flash video of the whole thing. It also has a jpg image of the original announcement of the demo.

This link therefore provides a partial answer to the artifacts question: video clips and annotations.

From link 6, the vimeo posting was by “Brad Neuberg”, and cites the blog location

Not much more here; I open that link in a new tab and plan to explore it further.

Link 7 provides about a page of description of NLS, points to a biography of Douglas Engelbart and his invention of the mouse, mentions Vannevar Bush and “As We May Think”, links to Engelbart’s paper “Augmenting Human Intellect”, mentions the 1968 demo and associated papers (capture these!), and mentions the relationship to the ARPANET.

I followed the link to Roberts/Arpanet, because “Roberts” is MY name! The page discusses Larry Roberts, who is considered the “father of the internet”. The page also talks about Licklider and others involved at MIT and elsewhere. Interesting! Slightly off topic, so I’d best not pursue this branch further right now. (I’m “pruning the search”.) Save the link for future exploration for fun.

On link 8, there is a link to “Table of Contents” which leads to

That page shows the entire document concerns the development of hypertext and GUI systems, lists many other systems, and seems to contain a good overview of the field. This is worth exploring in more detail later; I decide to save the link for exploration at a later time.

On link 9, there is a description with links to all 9 videos of the demo, and further information at other sites. There are also 330 comments, some of which may contain useful pointers to other resources.
The links outbound include:

These are all worth exploring later: in answer to my question about where more information can be found, I now have pointers to the entire demo in video form, a site with an annotated version, and a pointer to a site with more information. I also note that the last link is again to “www.dougengelbart.org”. That seems (somewhat obviously!) to be a key resource site; I should probably do an extensive exploration of that site later on.

So now I’ve explored one-deep across the first page of Google’s search results, including the Wikipedia entry. I have preliminary answers to all my questions, and several pages of pointers to lots more details. A time-check shows I’ve spent one hour and twenty minutes so far, so it’s a good time to cut this exercise off and think about what happened.

I was expecting to formulate some new and additional questions during my exploration, but my breadth-first search strategy across the first page of links I found used up all my time. Aha! That’s (in part) why the course instructors suggested following a link, seeing where that led, and in essence performing a depth-first search. That approach is more likely to trigger a need to re-formulate the questions being explored within the timeframe of the exercise. My re-formulation is happening now, after the (artificial) time limit for the search is up.
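To make the two strategies concrete, here’s a toy Python sketch (the link graph is invented for illustration, not my actual search results): breadth-first exhausts each level of links before going deeper, while depth-first follows one chain as far as it goes before backtracking.

```python
from collections import deque

# Toy link graph: page -> pages it links to (names invented for illustration).
links = {
    "results": ["wikipedia", "dougengelbart.org", "1968-demo"],
    "wikipedia": ["dougengelbart.org", "arpanet"],
    "dougengelbart.org": ["bibliography"],
    "1968-demo": [],
    "arpanet": [],
    "bibliography": [],
}

def breadth_first(start):
    """Visit all links one level at a time (what I did in this exercise)."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        page = queue.popleft()
        order.append(page)
        for nxt in links[page]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def depth_first(start, seen=None):
    """Follow each chain of links to its end before backtracking."""
    if seen is None:
        seen = set()
    seen.add(start)
    order = [start]
    for nxt in links[start]:
        if nxt not in seen:
            order.extend(depth_first(nxt, seen))
    return order

print("BFS:", breadth_first("results"))
print("DFS:", depth_first("results"))
```

Within a fixed time budget, the depth-first strategy drives you into unfamiliar territory sooner, which is exactly what forces the question re-formulation the exercise was after.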

Given what I found and observed, I now can formulate several new questions. It’s valuable to notice that my initial questions were mostly answered: as I search more in the future I will continue to accumulate additional pointers to artifacts to study, but I already have a pretty good list. So given what I’ve learned, what new questions come to mind?

  • What key concepts were Engelbart and others consciously intending to incorporate into NLS/Augment?
  • What is mentioned in the audio or video (1968 demo especially) which clarifies or provides additional insight into the written descriptions of the concepts?
  • How does Doug’s 1968 demo differ from the 2005-timeframe screencasts?
  • What particular points do the modern screencasts highlight, in contrast to the older documentation and videos?
  • The screencasts are informed by 1990 and 2000-era computer technology; what modern systems and techniques and software are mentioned, which didn’t exist at the time of the earlier videos and papers?

I could generate another dozen questions, but it’s already pretty clear what my next steps should be.  First, I need to read and study Douglas Engelbart’s 1962 and 1968 papers, to grasp the key concepts he had at that time.  Then I need to watch the video of the 1968 demo, also referencing the annotations from the page at link 5.  After that I’ll have a bunch more questions and topics to explore, and will be able to generate an even better set of questions for further inquiry.

Time-check: 110 minutes, and I still need to paste this into WordPress and fix the formatting.
Time-check: 130 minutes. I’m still not happy with the formatting, but I’m going to call it good enough for right now.

For future follow-up:

  • Write a blog post concerning “Focused Inquiry”
  • If anyone asks, write a blog post concerning search strategies (breadth-first, depth-first, and various hybrid strategies)
  • Compare the bibliographies, and generate a prioritized list of papers to read
  • Go back and thoroughly study the pages at links 2-9 to actually grasp their content, rather than just seeing what’s there and how it addresses my original questions

Nugget #2 on Man-Computer Symbiosis

My initial reading of J.C.R. Licklider’s 1960 paper “Man-Computer Symbiosis” was somewhat disappointing, because it seemed to me that this visionary paper did not hold up quite as well as Vannevar Bush’s “As We May Think”. I’m trying to identify exactly what caused this effect in my mind, but I think in part it’s due to a contrast in how the two men presented their ideas. Dr Bush tried to describe various future concepts by painting a picture with intermediate steps, saying “it’s sort of like this, but done differently in a way we don’t yet know”, while Dr Licklider painted a picture but didn’t emphasize going beyond his description. In addition, Dr Bush’s descriptions are accessible to most readers, while Dr Licklider’s are a bit more obscure, relying more heavily on his readers’ background knowledge.

One particular example from Licklider’s paper is his description of the “trie” data structure as a memory mechanism, in section V, subsection C. I am familiar with that data structure from my “Data Structures and Algorithms” courses, so I could somewhat follow the verbal description he provides. But I would not expect “normal” people to grasp it easily, especially because no graphic was included. The Wikipedia article at http://en.wikipedia.org/wiki/Trie contains a useful graphic, and a Google image search for trie provides many other examples. (As an aside, another problem I have with the trie is that it only provides rapid access if you know how to spell the index or key term. It does NOT of itself effectively implement associative memory, although it could be used as one component of such a scheme.)
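Since Licklider’s verbal description is hard to follow without a picture, here is a minimal trie in Python (my own sketch of the standard data structure, not Licklider’s proposed memory hardware):

```python
# Minimal trie: each node maps a character to a child node; a sentinel
# key marks where a complete stored word ends.
class Trie:
    END = "$"  # sentinel marking the end of a stored word

    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node[self.END] = True

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return self.END in node

t = Trie()
for w in ["man", "machine", "memex"]:
    t.insert(w)
print(t.contains("machine"))  # True
print(t.contains("mach"))     # False: a prefix only, not a stored word
```

Note how contains() must walk the exact spelling character by character, which is precisely the limitation in my aside above: fast lookup, but only if you already know the key.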

I also noted several oblique references to items which I know of, but which I would expect most people don’t.  These include BBN, a key Boston-area company in the development of networking and other defense-related technologies; the “SAGE System”, by which I think he meant the SAGE Semi-Automatic Ground Environment air defense network; the IBM 704, the first mass-produced computer with floating-point capabilities, on which FORTRAN and LISP were originally implemented (and which was small enough to be sometimes used as a ‘single user’ machine); and various researchers and organizations such as Gelernter, Lincoln Labs, Bell Telephone Laboratories, and so on.

The nugget I chose to comment upon consists of the second paragraph of the paper:

“Man-computer symbiosis” is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses…. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.

The first aspect I wish to discuss is that of a “symbiosis”. The second is the number of people involved.

The definition of symbiosis given by Licklider in the first paragraph is quite important. A key phrase within the definition is the “living together…of two dissimilar organisms.” Note the words “living” and “organisms”. A key issue I have with Licklider’s concept is the idea of the computer being a living organism. I do not deny that it COULD be, but the descriptions of the computer’s possible functions later in the paper do not quite require that the computer be alive. The Wikipedia description of Organism contains two interesting sentences: “In biology, an organism is any contiguous living system…”, and “There is a long tradition of defining organisms as self-organizing beings.” (Which raises the question of ‘what is a being?’) Also, “many sources propose definitions that exclude viruses and theoretically possible man-made non-organic life forms.” So I’m not alone in questioning the idea of a computer being a living organism.

Dr Licklider began to address my issue in section I.B by introducing and describing the ideas of a “mechanically extended man” and a “humanly extended machine”. He states that these are NOT symbiotic systems, but rather semi-automatic systems: systems that started out to be fully automatic but fell short of the goal. He also indicates that man-computer symbiosis is probably not the final state: later on electronic or chemical “machines” will outdo the human brain. (I might call those “pure” Artificial Intelligences, especially if they are self-aware.)

My hangup with the use of the term “symbiosis” in this case is that the term suggests that both parties to the relationship are separately alive. Clearly the human is, but what about the computer? In section III.B, his description of the strengths and capabilities of the computer glosses over that point, and he ascribes attributes of independence to the computer without explicitly supporting them. “Symbiotic cooperation” is only cooperation if either party can choose NOT to cooperate. And going back to section I.B, he explicitly says a “mechanically extended man” (in which the computer is an extension of the man and does not have a choice) is not what he is discussing.

Now I am muddying the discussion by introducing “choice” as an aspect to help me evaluate whether the computer is alive, which is clearly not valid. The fig tree (from the initial paragraph of Licklider’s paper) doesn’t have a choice in whether the insect larva inhabits it, but we agree that both tree and insect are alive, and are organisms. The networked SAGE computer exhibited many aspects of a multi-cellular organism, but no one would assert it was alive, and most would assert it is not a real organism. Can we prove it is not an organism as a theorem, rather than assert it as a postulate? I’m not quite sure how, as it depends heavily on the definition of “organism” we choose…

There’s more to discuss about whether the computer is alive, whether it is an organism at all, and whether either is required for symbiosis. But let’s move on to the next point: how many humans are involved?

Licklider’s initial terms suggest there is one man and one machine in the symbiotic relationship, and most of the paper can be read cleanly with that concept in mind. However, note that in section V.A, he says “Any present-day large-scale computer is too fast and too costly for real-time cooperative thinking with one man. Clearly, for the sake of efficiency and economy, the computer must divide its time among many users.” In the next paragraph he describes hypothetical “thinking centers”, perhaps even networked together. So included but not explicitly stated in his concept is the idea that through the (networked) computers, multiple MEN interact in a symbiotic relationship. If person A is in symbiosis with machine M, as is person B, then indirectly persons A and B are also linked. If machine M is linked to machine N, then all symbiotes of M and N are symbiotes of each other.
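One way to make that “indirectly linked” claim concrete is to treat the links as an equivalence relation and compute connected groups. Here’s a tiny union-find sketch in Python (my own illustration, not anything Licklider proposed): people and machines are nodes, each symbiosis or network link merges their groups, and everyone attached to M and N ends up connected.

```python
# Tiny union-find sketch: link() merges groups; find() returns a
# group representative, so two nodes are (indirectly) linked exactly
# when their representatives match.

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving for speed
        x = parent[x]
    return x

def link(a, b):
    parent[find(a)] = find(b)

link("person A", "machine M")
link("person B", "machine M")
link("machine M", "machine N")
link("person C", "machine N")

print(find("person A") == find("person C"))   # True: indirectly linked
```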

This brings to mind aspects of Joe Haldeman’s Forever War series, especially as described within Forever Peace, in which people remotely operating battle robots via electronic “jacks” implanted in their heads become inextricably linked with, and sympathetic (or empathetic?) to, the other persons within the computer-assisted (I’ll call it) “mind meld”.

The development of semiconductor computers, and the rapid reduction in size, weight, power, and cost that Moore’s law has brought, means that Licklider’s assumption that computers are too big and expensive not to share is no longer valid. We now are blessed with the situation in which there are multiple computers per person, rather than multiple persons per computer. Therefore “symbiosis” between one man and one (or more) computers is more practical than that of multiple people to one computer. If the symbiote computers are then linked through networking, that provides additional capability and possibilities that Licklider did not extrapolate in detail.

Further, hark back to Dr Bush’s memex concept: the memex was a tool for an individual, so now consider a person/memex pair. Instead of the memex, substitute a computer symbiote. Instead of exchanging photographic trails between memexes, network the symbiotes. Now we have a worldwide network of augmented humans. Very exciting!! (And I suggest keeping that thought in mind as we read Engelbart’s paper next week. I don’t know how close he comes to that idea, but I do know he emphasized aspects of using computers to facilitate collaboration and teamwork…)

P.S. Most of my links here are to Wikipedia, because I’m mostly trying to point at definitions for people not already familiar with what I mention. Wikipedia then serves as a convenient jumping-off point for further exploration of those words and topics for anyone with an interest in learning more…

Ideas for future followup:

  • Dig deeper into what it takes for a computer to be an organism.
  • Dig deeper into whether a computer can be a symbiote.
  • Watch for and discuss Engelbart’s ideas for networking multiple instantiations of NLS/Augment, and for transferring data among instances.
  • See if other readers felt the same way about Dr Licklider’s paper not being quite as “visionary” as Dr Bush’s, and why.

Key Thread from “As We May Think”

When I wrote my first Nugget post, I identified what I there termed “critical portions”. I later realized that because of my bias and focus on the “tool” aspect of the paper, I was seeing just one thread within the paper. There simultaneously exist additional threads, each focused differently, each with its own thesis and supporting statements. Different phrases and sentences comprise the “critical portions” of each of those separate threads. I now see (almost?) ALL the words in the paper as critical and significant, but arranged in different subsets according to the threads they support.

Nonetheless, the portions I originally identified are still particularly significant to me, as they comprise the majority of the tool-requirements thread. I planned to extract and post these together, and comment further. So here goes…

I originally outlined:

I believe the first critical portion is in section 1, paragraphs 3-5. The next is the first sentence of section 2. Next the first three sentences of section 4, and the fourth paragraph of section 4. The second and third paragraphs of section 5 are also key. Then the first two paragraphs of section 6, and the third sentence of the third paragraph: “Selection by association, rather than indexing, may yet be mechanized.” Section 7 describes key aspects of usage, and a couple paragraphs in section 8, perhaps the third and ninth, sum up.

Substituting the sentences from the paper, this yields:

[1] There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.

Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling. Those who conscientiously attempt to keep abreast of current thought, even in restricted fields, by close and continuous reading might well shy away from an examination calculated to show how much of the previous month’s efforts could be produced on call. Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.

[2] A record if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted.

[4] The repetitive processes of thought are not confined however, to matters of arithmetic and statistics. In fact, every time one combines and records facts in accordance with established logical processes, the creative aspect of thinking is concerned only with the selection of the data and the process to be employed and the manipulation thereafter is repetitive in nature and hence a fit matter to be relegated to the machine. Not so much has been done along these lines, beyond the bounds of arithmetic, as might be done, primarily because of the economics of the situation.

It is a far cry from the abacus to the modern keyboard accounting machine. It will be an equal step to the arithmetical machine of the future. But even this new machine will not take the scientist where he needs to go. Relief must be secured from laborious detailed manipulation of higher mathematics as well, if the users of it are to free their brains for something more than repetitive detailed transformations in accordance with established rules. A mathematician is not a man who can readily manipulate figures; often he cannot. He is not even a man who can readily perform the transformations of equations by the use of calculus. He is primarily an individual who is skilled in the use of symbolic logic on a high plane, and especially he is a man of intuitive judgment in the choice of the manipulative processes he employs.

[5] Logic can become enormously difficult, and it would undoubtedly be well to produce more assurance in its use. The machines for higher analysis have usually been equation solvers. Ideas are beginning to appear for equation transformers, which will rearrange the relationship expressed by an equation in accordance with strict and rather advanced logic. Progress is inhibited by the exceedingly crude way in which mathematicians express their relationships. They employ a symbolism which grew like Topsy and has little consistency; a strange fact in that most logical field.

A new symbolism, probably positional, must apparently precede the reduction of mathematical transformations to machine processes. Then, on beyond the strict logic of the mathematician, lies the application of logic in everyday affairs. We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

[6] The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path.

The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.

Selection by association, rather than indexing, may yet be mechanized.

[7] All this is conventional, except for the projection forward of present-day mechanisms and gadgetry. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing.

When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions. At the bottom of each there are a number of blank code spaces, and a pointer is set to indicate one of these on each item. The user taps a single key, and the items are permanently joined. In each code space appears the code word. Out of view, but also in the code space, is inserted a set of dots for photocell viewing; and on each item these dots by their positions designate the index number of the other item.

Thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space. Moreover, when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book. It is exactly as though the physical items had been gathered together from widely separated sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails.

The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

[8] Thus science may implement the ways in which man produces, stores, and consults the record of the race. It might be striking to outline the instrumentalities of the future more spectacularly, rather than to stick closely to methods and elements now known and undergoing rapid development, as has been done here. Technical difficulties of all sorts have been ignored, certainly, but also ignored are means as yet unknown which may come any day to accelerate technical progress as violently as did the advent of the thermionic tube. In order that the picture may not be too commonplace, by reason of sticking to present-day patterns, it may be well to mention one such possibility, not to prophesy but merely to suggest, for prophecy based on extension of the known has substance, while prophecy founded on the unknown is only a doubly involved guess.

Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.

In going back to copy/paste those sections into this post, I note that I’ve left out some closely-related ideas which are significant to the tool-aspect, and also (especially from section 7) included some that could perhaps be left off. So I see again that Dr Bush’s composition is more complex and subtly crafted than I first perceived!

I think I need to go back and repeat this exercise in a word processor, highlighting the words to indicate the main tool-thread, but using different colors to indicate the core ideas as distinct from related ones.

There’s too much to comment on all at once, so I’ll pick a couple which jump out at me tonight.

First, the paragraph from section 5 beginning “A new symbolism” reminds me that the storage format is significant. Dr Bush is describing what’s necessary to (partially) automate certain aspects of logical analysis (thinking). If we are able to store our knowledge/information/thoughts in an appropriate manner, then machines can be programmed to process (reason with) that information to infer and derive new information and conclusions. Douglas Lenat (among others) has done significant research along these lines, and implemented several systems including Cyc which can represent facts, manipulate them logically, and derive new conclusions. But the representation is important: the Word docs, PPTs, GIFs and JPEGs, PDFs, MP3s, MP4s, and other files and formats we use today on the web are not (in themselves) suitable representations of facts and knowledge, with which computers can be programmed to reason.
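To make that concrete, here’s a toy forward-chaining sketch in Python, with facts stored as triples and a single hard-coded rule. This is purely my own illustration; it bears no resemblance to Cyc’s actual machinery:

```python
# Toy forward-chaining inference: facts are (subject, relation, object)
# triples; we repeatedly apply one rule until no new facts appear.

facts = {("Socrates", "is_a", "human"),
         ("human", "subclass_of", "mortal")}

def infer(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (x, r1, y) in derived:
            for (a, r2, b) in derived:
                # Rule: if x is_a y, and y subclass_of b, then x is_a b.
                if r1 == "is_a" and r2 == "subclass_of" and y == a:
                    new.add((x, "is_a", b))
        if not new <= derived:      # any genuinely new facts?
            derived |= new
            changed = True
    return derived

print(("Socrates", "is_a", "mortal") in infer(facts))   # True
```

The point is that the representation (clean triples) is what makes the mechanical reasoning possible; a JPEG or PDF of the same sentence gives a machine nothing to chain on.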

Several persons who have studied the human mind and our thinking processes have described the key operation as symbol processing. We are able to process abstractions, somehow manipulating symbols in our brains, as we reason and think. About five years ago I read a fantastic book describing this, whose name and author currently escape me. I’m fascinated by the fact that Dr Bush specifically (and presciently) used the term “a new symbolism” to describe what is needed.

Second, the second paragraph of section 6 states that “The human mind … operates by association.” This is very important! How we build and represent associations, or relationships, between and among the facts/thoughts/mental symbols stored in our brains is critical. Somehow our brains are able to make inferences and create relationships within and among our memories, and those relationships are themselves stored and used as part of the associational index with which we retrieve our memories and thoughts. For computers to help us think, this process must be better understood, so that it can be automated within the computers. (Which is exactly what Dr Bush said in the next paragraph: “Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it.”)

The process of building a link (described in section 7) needs to be semi-automated: perhaps the computer can process the data which is being referenced by the user, and propose a bunch of possible associations, from which the user selects a subset to be saved.
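As a rough sketch of what I mean, the computer could score candidate items by shared vocabulary with the item currently being read, and present the top few for the user to accept or reject. All the names below are my own hypothetical illustration:

```python
# Sketch of semi-automated link suggestion: rank library items by
# keyword overlap with the current text; the user picks which of the
# suggested associations to actually save.

def keywords(text):
    stop = {"the", "a", "of", "and", "to", "in", "is", "was", "why"}
    return {w for w in text.lower().split() if w not in stop}

def suggest_links(current, library, top_n=3):
    cur = keywords(current)
    scored = [(len(cur & keywords(doc)), title)
              for title, doc in library.items()]
    return [t for score, t in sorted(scored, reverse=True) if score > 0][:top_n]

library = {
    "Turkish bow": "properties of the short turkish bow and arrow",
    "Elasticity":  "elastic properties of materials used in the bow",
    "Genetics":    "mendel and the laws of genetics",
}
print(suggest_links("why was the turkish bow superior", library))
# ['Turkish bow', 'Elasticity'] -- candidates offered, not auto-linked
```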

I’m going to stop here for tonight, but there’s lots more to ponder!

Notes for future follow-up:

  • What was that book on the brain I read?
  • Highlight the thread in a word processor, and refine using colors.
  • Comment more fully on key concepts within this thread.
  • Who else besides Lenat made major progress on automated inferencing?
  • Post a blog entry on why I’m including these notes for follow-up.
  • Comment on the use of section referencing, and how much better it would be to have more finely-grained references such as used for religious texts, and especially “purple numbers”. Point at Christina Engelbart’s recent post, as well as Eugene Eric Kim’s implementation and other documentation.

“As We May Think” as a complex, multi-level argument

It occurred to me today that Dr. Vannevar Bush’s “As We May Think” article involves multiple theses, and simultaneously makes several interleaved arguments supporting those theses. It has several layers, brilliantly composed and interwoven, to effectively accomplish multiple purposes. Perception of the layers depends in part on each reader’s biases and motivations: particular readers will perceive one or more of the layers which particularly appeal to them, and ignore or overlook the others.

I was (and am!) reading the paper with respect to its description of what kind of tools are required to enable humans to capture, store, recall, and build upon our store of knowledge. Therefore I strongly perceived the sentences and ideas which pertained to that thread.

Another thread involves war and peace: given the end of World War II, and especially the horrific atomic blasts which helped end it, how can (and should) we build upon our new knowledge peacefully, and avoid or survive future conflicts?

Yet another thread involves hope, and a view that many fantastic things are possible. He describes certain aspects of technology, projects reasonable extensions which are potentially achievable, and then projects beyond that to indicate even better and more fantastic things may lie ahead.

I have a mental visualization of the paper in which different words and sentences appear in different layers (some in more than one layer at a time), where each layer describes one of the theses and its supporting statements. Each layer is not only at a different z-coordinate (depth), it is also rendered in a different color. Looking down through the layers, one can see how the entire paper is a brilliant interweaving of those different layers: they all fit together, and some words and sentences support several of the layers at once. I wish I had the graphic arts skill to actually draw this (or create it in a 3-D modeling program such as Google Sketchup). (Perhaps you do, dear reader?)

An alternative crude implementation is to use a word processor to highlight the words in different colors, depending on which layer they support.

Given that UNIV 200 is in part about “the Craft of Argument”, perhaps interested participants can together dig further into this, tease out the various threads and theses which Dr Bush incorporated into his paper, identify how each word and sentence supports one or more of those threads, and analyze how he interwove them so skillfully. The results of this analysis will then serve as an excellent example of a brilliantly crafted (set of!) argument(s)!

How does it feel when I think?

This post is out of order, as I prioritized writing the Wednesday “nugget” post ahead of catching up on Tuesday’s assignment.

Unfortunately, I usually feel frustrated when I think. There are a variety of reasons for this, not all of which are present at any particular time.

I read and think MUCH faster than I can speak, type, or listen. So typically, I am frustrated that I’m thinking a bunch of good stuff, which I can’t capture or record fast enough. Also, I’m typically in a location or situation where I can’t record my current thoughts, so I’m frustrated that I’ll have to take time later on to capture these ideas, and will invariably lose many of them. Or, I’ll recall that I’ve thought these same thoughts before, and STILL haven’t been able to record them effectively.

Another frustration is that I often think of several aspects (thought vectors?) at almost the same time, but they go in significantly different directions and I can’t follow them all at once. I have to pick one and write (or think) linearly about it, then go back to pick up the next one to document, then go back for the next one, and in the meantime I’ve thought of multiple branches off each of the intermediate thoughts along each path…

Sometimes, especially when I’m writing (code or documentation), I can get into a mental flow state, and my sense of time (and frustration!) disappears temporarily. (Mihaly Csikszentmihalyi has written and talked on flow, for anyone interested in learning more about this…) But when I come out of flow, I’m again frustrated, either because it took so long and I’m still not done, or because there are so many more aspects and threads remaining to capture, or because there’s something else I have to do before I can continue.

Finally, I’m sometimes frustrated because I have the feeling that my thinking is incomplete, or otherwise inaccurate or missing something critical. Sometimes external input is required, but I find it difficult to get because often people (including me!!) will take the ideas off in other directions before we’ve fully explored and fleshed out the entirety of what I was trying to capture and describe.

For follow-up:

  • Itemize some methods and techniques useful to rapidly capture multiple thoughts, such as outlines (traditional and bubble/mind-map), dictation, stenography?, lists, and possible use of visual representations. Also, memory techniques such as mnemonics and visualization.

Nugget from “As We May Think”

I read Dr Vannevar Bush’s 1945 paper tonight, and was struck by how much of it consisted of descriptions or analogies suggesting how certain things might be implemented in the future, and how succinct were the portions in which he described what (to me) are the most critical ideas. Given what we now know of electronics and semiconductors, his concepts for use of photography and lever-controls seem quaint and obsolete. Yet the key concepts and problems described have stood up to time and are yet unsolved!!

I believe the first critical portion is in section 1, paragraphs 3-5. The next is the first sentence of section 2. Next the first three sentences of section 4, and the fourth paragraph of section 4. The second and third paragraphs of section 5 are also key. Then the first two paragraphs of section 6, and the third sentence of the third paragraph: “Selection by association, rather than indexing, may yet be mechanized.” Section 7 describes key aspects of usage, and a couple paragraphs in section 8, perhaps the third and ninth, sum up. I should extract these, present them together, and comment further another night.

For tonight, I will make my nugget a subset of sentences from section 1, paragraphs 3-5:

There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers — conclusions which he cannot find time to grasp, much less to remember, as they appear. … Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. … Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential. The difficulty seems to be … that publication has been extended far beyond our present ability to make real use of the record…. The means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.

This I think expresses a core concept in Bush’s paper: our current knowledge is so vast and increasing so fast, that our continued manual use of ancient memory aids (books and writing, libraries and indexes) cripples our human ability to take advantage of that knowledge and build effectively upon it. He therefore conceives of the memex, which is a device to aid INDIVIDUALS to store, access, and build upon an increasingly large body of knowledge.

I have heard that meme before, as a child in school: our knowledge is vast and growing fast, and we should build upon it effectively. I don’t know if what I heard was an echo from Vannevar Bush’s paper, or from other people noticing the same thing and repeating it to me. I hear it again when I read about how many more books are published each year than the year before, or how many kilo/mega/peta/exabytes of storage it takes to capture current human knowledge or the Library of Congress. And I concur with Bush’s thesis: our methods of transmitting and reviewing current knowledge are totally inadequate. We need better (semi-automated) methods of getting key information to the particular individuals who can take advantage of it.

I have worked at several jobs involving computer science and engineering, and in particular several which involved database work where the data itself was knowledge and information, not just cost accounts or sales figures. I can attest to the difficulty of designing GUIs and access mechanisms to allow the users to a) enter their information, b) find and retrieve what was relevant to the problem at hand, and c) update and maintain the stored information as new facts and inferences were discovered. The mechanisms we implemented were as automated as we could make them, but still involved extensive and tedious manual effort, and the users hated the tools.

The MAINTENANCE and UPDATE of the information was a key issue: entering new data, and the manual effort required to create associations between items. Vannevar Bush’s description in the second paragraph of section 7 brings chills to my spine. That’s too much work and takes too much time! The normal case needs to be better automated!! Neil Larson (owner and author of MaxThink; see also, for now, “The story of MaxThink”) had some very valuable things to say (and practical experience carrying it out!) about semi-automated linking and cross-referencing of hypertexts.

I believe that hypertext technology, and related writing/database/retrieval tools, are a key piece of the puzzle (but only a piece: there’s more required). The current World Wide Web implements but a pale shadow of what hypertext can be. In the ’80s and ’90s there were several hypertext research systems which explored many key concepts that the WWW oversimplified. For instance, in addition to the one-directional links that the HTML anchor tag implements, there were bi-directional links, links of different types, one-to-many, many-to-one, and many-to-many links. Some of these types are present in current markup languages (VRML? SMIL?) but the whole suite of hypertext link types is not popularly known and understood.
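As an illustration of what the anchor tag leaves out, here’s a sketch of a link store supporting typed, bi-directional, many-to-many links. This is my own hypothetical structure, not any particular research system’s design:

```python
# Sketch of a typed, bi-directional link store. A plain HTML <a href>
# gives only one-way traversal; here every link is queryable from
# either end, and many items can link to many others, by type.

from collections import defaultdict

class LinkStore:
    def __init__(self):
        self.forward = defaultdict(set)   # (source, type) -> targets
        self.backward = defaultdict(set)  # (target, type) -> sources

    def add(self, source, link_type, target):
        self.forward[(source, link_type)].add(target)
        self.backward[(target, link_type)].add(source)

    def outgoing(self, node, link_type):
        return self.forward[(node, link_type)]

    def incoming(self, node, link_type):
        # the reverse traversal a one-way anchor cannot give you
        return self.backward[(node, link_type)]

store = LinkStore()
store.add("memex-note-1", "supports", "thesis-A")
store.add("memex-note-2", "supports", "thesis-A")
print(store.incoming("thesis-A", "supports"))
# {'memex-note-1', 'memex-note-2'} -- many-to-one, traversed backwards
```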

Eastgate Systems has some very interesting hypertext tools, Arbortext (now part of PTC?) used to have some really powerful SGML and XML tools, and there are a slew of other companies and tools available. However, I don’t know of ANYONE who has put together an effective system to sufficiently implement Vannevar Bush’s memex concept. Engelbart came close, but NLS/Augment seems to have been (at least partially) left behind, without effective replacement. I’ve heard that Microsoft’s OneNote does a very good job in many ways including for note-taking, but I haven’t used it, and don’t yet see how it can be used effectively for teamwork and collaboration. Ray Ozzie‘s Groove was an awesome teamwork tool, but now Microsoft has subsumed it somehow within Sharepoint, and what to me were key concepts were lost.

I’m out of time for tonight, but there’s lots more to say…

For my future follow up:

  • Create a blog post consisting of what I think are the critical sentences, then comment further.
  • Correlate key ideas in Bush’s paper with key concepts in Engelbart’s NLS/Augment implementation.
  • Add to Tom Woodward’s Google Doc, or (less desirably) create my own version, and comment on other aspects which struck me, especially creating hypertext links (associations) to other related materials of which I am aware.
  • Blog about the idea that the memex is a device for INDIVIDUALs, not groups, and contrast with NLS/Augment which was designed with special features for teamwork and collaboration.
  • Blog further about Neil Larson’s ideas and tools involving automated linking.
  • Identify and link to other hypertext research tools from the 80’s and 90’s, and current tools which may implement some of the memex concepts.

Thought Vectors

What does “thought vectors in concept space” mean to me?

It’s a phrase Doug Engelbart voiced, which I believe alludes to several aspects of his concepts of how individuals and groups think and can collaborate.  In my mind it’s closely related to a concept and title of one of Dr. Engelbart’s papers, “Augmenting Human Intellect”.  I suspect (and expect to research in the next couple weeks) that figuring out how to capture, store, and cooperatively manipulate “thought vectors in concept space” is a key part of Dr. Engelbart’s effort to “Augment Human Intellect”.

When I focus simply on the words “thought vectors in concept space”, it brings to my mind the concept of a Vector from Physics and Mathematics (as distinct from a scalar).  In physics, a vector has magnitude and direction (while a scalar has only magnitude).  So a “thought vector” is an idea (of some size, big or small), which is going in a particular direction.   “Concept space” is therefore the multidimensional space within which the idea exists and is described and developed, against whose axes (dimensions) the thought vector’s direction is defined.

The above definition breaks down in several areas.  First, in my mind, a particular idea may go in several different directions (at once).  Perhaps an idea can be the “magnitude” of several different thought vectors, which each take it in a different direction.  Second, I’m not sure “concept space” has a fixed number of dimensions, nor whether it makes enough sense to define a thought vector’s direction with respect to “axes” defining a concept space, nor whether a particular thought vector’s direction needs to be defined with respect to ALL the dimensions of the concept space.  Third, is there just one “concept space”, or do we define new ones as needed (to describe particular problems or work areas), and therefore there exist a multiplicity of “concept spaces”?  It will be fun to ponder this more in the next few weeks…
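Playing with the physics metaphor a little further, here’s a toy Python sketch of a “thought vector” against a few made-up concept-space dimensions, with a magnitude and a direction (unit vector). Purely a playful illustration, with invented dimensions:

```python
# Toy "thought vector": components against named concept-space
# dimensions, split into magnitude (size of the idea) and direction
# (where it is heading).

import math

dimensions = ("tools", "collaboration", "memory")
thought = {"tools": 3.0, "collaboration": 4.0, "memory": 0.0}

magnitude = math.sqrt(sum(v * v for v in thought.values()))
direction = {d: thought[d] / magnitude for d in dimensions}

print(round(magnitude, 2))   # 5.0: the "size" of the idea
print(direction)             # unit vector: the idea's direction
```

Of course this only dramatizes my objections above: real ideas may not decompose against a fixed set of axes, and the dimensions themselves are up for grabs.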

In “Our Summer cMOOC: Living the Dreams,” Dr. Gardner Campbell said:

Why “thought vectors in concept space”? Because that’s how Doug Engelbart envisioned the mental environment that personal, interactive, networked computing would make possible, an environment in which our “collective IQ” could realize itself and rise to its full and necessary potential. For me, “thought vectors” are the lines of inquiry, wonder, puzzlement, and creative desire emerging from individual minds. We launch our thought vectors into “concept space,” the grand commons of human invention and communication, the space in which we build our symbols and work toward mutual intelligibility, mutual hope, mutual inspiration. If the thought vectors are weak or stunted, the concept space will be too, and vice-versa.

There are several very interesting threads to explore in Dr. Campbell’s definition, and I’m sure there are more in the various definitions and explications from other MOOC participants.  I hope to read and explore (and comment upon) some of those soon.


Questions and notes-to-self for future follow-up:

  • Where and when (in which papers, talks, etc.) did Dr Engelbart use that phrase?
  • Did he describe the phrase in detail? More than once? Did the definitions agree or evolve over time?
  • Who else has used that phrase? How did they define it?
  • I said I believe it alludes to SEVERAL aspects of his concepts. First, I should outline what I think it alludes to. Then, go study Dr. Engelbart’s writings, figure out what he meant, and compare to my list.
  • Can my definition be further developed and refined?  What are additional ways it breaks down, and can those be repaired?
  • Who else (MOOC participants) described “thought vectors in concept space”?  Ponder those and create additional blog entries commenting on what their definitions evoke in my mind.