This section outlines a scenario for the World Wide Web: what could the Web be like if ideas from other existing hypertext systems were incorporated?
The outline is arranged around three interrelated topics: hypertext concepts, the user interface of Web clients, and integration with the operating system.
It is rather simple to keep hyperlinks valid if the hypertext is limited in size and a single author is editing at one terminal only. In this case the entire hypertext is stored on one machine, and one program can update the links if any resource is moved or renamed. The global hypertext system World Wide Web is a different case: the data is distributed over millions of servers, and no mechanism reports broken links to keep the Web free of hyperlink errors.
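Because no such mechanism exists, broken links can only be discovered from the outside, by crawling pages and probing their targets. The following sketch shows the first half of such a checker, extracting link targets from a page; the markup in the test below is an invented example:

```python
# Sketch of an external link checker -- the Web provides no built-in
# mechanism to report broken links, so they must be found by crawling.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets of all <a> elements in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list:
    """Return every hyperlink target found in the given HTML text."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Each extracted target would then be fetched (or probed with an HTTP HEAD request) to decide whether the link is still valid; the point is that this validity check runs entirely outside the Web's own architecture.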
What can be done to achieve robust identification of hypertext nodes for the Web? Tim Berners-Lee's original concept speaks of Universal Resource Identifiers (URIs) instead of Uniform Resource Locators (URLs) [Berners-Lee 99, p. 62]. URIs are meant to stay constant as long as the designated resource exists. But they are vulnerable to changes of the file's location in the server's file hierarchy. Hence the weaker concept of the Uniform Resource Locator, which contains a path to the resource that might change. URLs abandon any claim to being persistent and universal. This alone might be no problem, but the Web offers no compensating methods to regain stability in addressing Web pages.
Two different solutions have been proposed: the definition of truly persistent identifiers like URIs, and the introduction of a functional layer that translates persistent IDs to real addresses. The first direction is taken by Xanadu, which introduces a global address scheme that guarantees a unique and persistent address for every character. «The central […] secret this all relied on [is] the freezing of content addresses into permanent universal ids» [Nelson 99a, p. 9]. Documents become sequences of pointers into the global address space. HES has actually implemented this data model.
The second approach has been taken by Microcosm and Hyper-G. Linking information is stored in link databases external to the files. The Hyper-G protocol allows update information to be broadcast between Hyper-G servers world-wide in order to keep the data synchronized.
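The principle can be sketched as follows. This toy model is not the actual Microcosm or Hyper-G schema, but it illustrates why external link storage makes renaming cheap: one record changes instead of every referring page.

```python
# Sketch of an external link database in the spirit of Microcosm and
# Hyper-G: links live outside the documents themselves. The data model
# is invented for illustration, not taken from either system.

class LinkDatabase:
    def __init__(self):
        self.locations = {}   # persistent document id -> current path
        self.links = []       # (source_id, target_id) pairs

    def register(self, doc_id, path):
        """Record where a document currently lives."""
        self.locations[doc_id] = path

    def add_link(self, source_id, target_id):
        """Links connect persistent ids, never raw paths."""
        self.links.append((source_id, target_id))

    def move(self, doc_id, new_path):
        """One update keeps every link to this document valid."""
        self.locations[doc_id] = new_path

    def resolve(self, doc_id):
        """Translate a persistent id into the current real address."""
        return self.locations[doc_id]
```

Because the stored links never mention file paths, moving a document cannot break them; only the single id-to-path record needs updating.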
The Web lacks the power of expression to refer to a group of nodes. For good reason, many hypertext systems have implemented means to deal with higher-order entities of nodes. NoteCards has filebox and browser cards, and Frank Halasz in his “7 Issues” calls for composites to augment the basic node-and-link model. Hyper-G has collections, clusters and sequences that are well integrated into the user interface of the Hyper-G client Harmony. The Dexter Hypertext Reference Model addresses the need by defining composite components.
The purposes of groups for the Web are manifold. For example, they could turn the implicit notion of a Web site into an explicit concept. The assumption that a Web site is equivalent to a set of files residing on the same Web server does not hold under all conditions. Given knowledge of the site structure, Web clients can offer navigational aids to the user.
In recent years the W3C has agreed on a formal language to encode semantic data. The Extensible Markup Language (XML) can be used to adopt the concept of aggregations of nodes for the Web.
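A hypothetical XML encoding of a Hyper-G-style collection might look like the following sketch. The element names are invented for illustration and are not an actual W3C vocabulary:

```python
# Sketch: encoding an aggregation of nodes ("collection") in XML and
# reading it back, so a client could offer site-wide navigation aids.
# The vocabulary (collection, sequence, node) is invented here.
import xml.etree.ElementTree as ET

SITE = """
<collection title="Example Site">
  <node href="index.html" title="Home"/>
  <sequence title="Tutorial">
    <node href="part1.html" title="Part 1"/>
    <node href="part2.html" title="Part 2"/>
  </sequence>
</collection>
"""

def list_nodes(xml_text):
    """Return the href of every node, regardless of nesting depth."""
    root = ET.fromstring(xml_text)
    return [n.get("href") for n in root.iter("node")]
```

With such an explicit structure, a browser no longer has to guess that the tutorial pages belong together; the grouping is part of the data.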
Hyperlinks can be either text links or basic links, in the terminology of Storyspace: either the starting point is attached to a piece of content, or the entire node serves as the starting point. Hyperlinks can point to a node, or to a position or span within the node. Hyperlinks can be unidirectional or bidirectional. Furthermore, according to the Dexter model, links can connect more than two nodes with each other.
The Web supports unidirectional text links only, but the W3C has a working group developing a generalization of links. XPointer is based on XML and incorporates the ideas presented here; its syntax also allows for link types.
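A generalized link of this kind can be modeled as a small data structure. The sketch below is a simplification for illustration, not the actual XPointer data model: it combines a link type, multiple endpoints, and optional bidirectionality.

```python
# Sketch of a generalized link in the spirit of the Dexter model:
# typed, possibly bidirectional, and connecting more than two nodes.
# This is an invented simplification, not an actual W3C data model.

class Link:
    def __init__(self, link_type, endpoints, bidirectional=True):
        self.link_type = link_type        # e.g. "annotation", "citation"
        self.endpoints = list(endpoints)  # node ids or node anchors
        self.bidirectional = bidirectional

    def targets_from(self, endpoint):
        """All endpoints reachable when following the link from here.

        A unidirectional link can only be traversed from its first
        endpoint; a bidirectional one from any endpoint.
        """
        if not self.bidirectional and endpoint != self.endpoints[0]:
            return []
        return [e for e in self.endpoints if e != endpoint]
```

The Web's `<a href>` corresponds to the degenerate case: untyped, unidirectional, exactly two endpoints.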
Vannevar Bush and Ted Nelson emphasize the importance of parallel visualization of documents: at least two independent areas on screen are necessary to handle text online successfully. Windows were developed by Doug Engelbart in the 1960s at SRI and by Alan Kay in the early 1970s at Xerox PARC. They offer the user a flexibility that also introduces additional complexity to the interface. Windows usually overlap and largely cover each other, because the desktop metaphor mimics working with a single sheet of paper. A designated secondary window is not part of the WIMP interface; it is left to the user to arrange the windows so that the content of two windows is visible side by side.
It is also Ted Nelson who suggests the use of animation for the interface. He argues [Nelson 74, p. dm 53]:
The text moves on the screen! […] Note that we do not refer here to jerky line-by-line jumps, but to smooth screen motion, which is essential in a high-performance system. If the text does not move, you can’t tell where it came from.
This sort of behavior could support orientation in hypertext. It would make obvious the distinction between local links and links to a different page, and whether a local link points upwards or downwards in the current page.
Finally, consider the relation between browser and Web page. Today an HTML page has no access to user interface elements outside the content area of the browser window. But consider a Web site with control over the menu bar: it could provide information to the browser to add a standardized menu with landmark pages such as GOTO HOMEPAGE OF THIS SITE or SHOW IMPRINT.
Another effective point of control for a Web site is the cursor shape. The Guide example has illustrated how user interaction can take advantage of different mouse cursors (cf. Fig. 2.9).
Tim Berners-Lee recalls [Berners-Lee 99, p. 157],
I have always imagined the information space as something to which everyone has immediate and intuitive access, and not just to browse, but to create.
Consequently, the first implementation of a Web client, WorldWideWeb/Nexus, was capable of browsing and editing HTML pages in WYSIWYG mode. This unified approach got lost as the print publishing industry discovered the Web. NCSA, which developed the early Web browser Mosaic, showed no interest in building an editor for HTML. Netscape and Microsoft did not shift the focus back to editing, and «the Web became another consumer medium with many readers but relatively few publishers» [Gillies/Cailliau 2000, p. 243].
Nearly all hypertext systems presented in this chapter strive for an environment that integrates reading and writing: no artificial borders should hamper the user from editing and commenting on existing content. Only Symbolics’ hypertext system deliberately separates these tasks between the application programs Document Examiner and Concordia. But it became apparent that annotation capabilities are desirable even in the context of online documentation.
If the Web is to support creative knowledge workers, flexible and easy-to-use editing capabilities are necessary. A single application program has the advantage of offering a consistent user interface. Existing WYSIWYG Web authoring tools like Adobe GoLive can at least ease the process of editing HTML markup source code.
The new WebDAV protocol, short for Web-based Distributed Authoring and Versioning, offers many features that can improve the current generation of Web authoring tools. Consistent implementation of WebDAV can bring the user experience close to the original vision.
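As an illustration, WebDAV extends HTTP with new methods such as PROPFIND, MKCOL, LOCK and MOVE. The sketch below shows how a PROPFIND request, which lists a resource and its properties, could be issued; the host and path passed to `propfind` are hypothetical:

```python
# Sketch: a WebDAV PROPFIND request on the wire. WebDAV layers
# authoring operations on top of HTTP; the server addressed here
# is hypothetical and the sketch omits authentication.
import http.client

PROPFIND_BODY = """<?xml version="1.0" encoding="utf-8"?>
<propfind xmlns="DAV:"><allprop/></propfind>"""

def build_propfind_headers(depth=1):
    """Headers for listing a collection's members and properties."""
    return {
        "Depth": str(depth),   # 0 = the resource itself, 1 = its children
        "Content-Type": "application/xml; charset=utf-8",
    }

def propfind(host, path, depth=1):
    """Issue a PROPFIND request and return the raw HTTP response."""
    conn = http.client.HTTPConnection(host)
    conn.request("PROPFIND", path, body=PROPFIND_BODY,
                 headers=build_propfind_headers(depth))
    return conn.getresponse()
```

An authoring client built on such requests can browse, lock, edit and move remote resources through the same protocol family it uses for reading, which is what brings editing back into the browsing environment.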
The Web requires pages to be HTML-encoded. Files in other formats, for example plain text or images, can be nodes of the Web but cannot serve as starting points for hyperlinks. HTML files contain the content, linking information, structural information, and definitions of how to display the page in the browser.
The current approach to fighting the overload of HTML files is the separation of content and appearance. A first step is the external definition of styles for HTML files using Cascading Style Sheets (CSS); only content and structural information remain in the HTML file. The next step is the abstract and formal representation of content and structure in XML-encoded form. The Extensible Stylesheet Language (XSL) and CSS are used to transform XML back into a form that can be displayed on screen. Linking structure is going to be encoded as XPointers, likewise a form based on XML syntax.
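The division of labor can be sketched as follows. The standard library used here has no XSLT engine, so the transformation step that an XSL stylesheet would normally perform is written by hand; the document and its element names are invented for illustration:

```python
# Sketch of separating content from appearance: the document is stored
# as abstract XML and rendered to display HTML in a separate step.
# An XSL stylesheet would normally drive this transformation; it is
# hand-coded here only to illustrate the principle.
import xml.etree.ElementTree as ET

ARTICLE = "<article><title>On Links</title><para>Links can break.</para></article>"

def render(xml_text):
    """Transform abstract content markup into presentational HTML."""
    root = ET.fromstring(xml_text)
    html = ["<html><body>"]
    for child in root:
        if child.tag == "title":
            html.append(f"<h1>{child.text}</h1>")   # presentation decision
        elif child.tag == "para":
            html.append(f"<p>{child.text}</p>")
    html.append("</body></html>")
    return "".join(html)
```

The XML source says only *what* the pieces are; every decision about *how* they look lives in the transformation, and could be swapped without touching the content.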
The discussion of Open Hypermedia Systems has shown that a different approach is possible. The separation between content and linking structure has many advantages in the OHS model: no markup is imposed on the files, any file can contain links, and the system takes care of link consistency.
Ted Nelson goes even further. In Embedded Markup Considered Harmful [Nelson 97b] he argues against any inherent form of hierarchical structure for content. Any markup, if based on SGML, curtails the flow of thought. Text and markup for structure have to be kept separate; markup for style forms a third layer. Nelson summarizes the three layers [Ibid.]:
A content layer to facilitate editing, content linking, and transclusion management.
A structure layer, declarable separately. Users should be able to specify entities, connections and co-presence logic, defined independently of appearance or size or contents; as well as overlay correspondence, links, transclusions, and “hoses” for movable content.
Finally, a special-effects-and-primping layer should allow the declaration of ever-so-many fonts, format blocks, fanfares, and whizbangs, and their assignment to what’s in the content and structure layers.
Nelson’s model solves the following problem. A quote, like the three paragraphs above, is taken by copy & paste from a journal’s Web site; this breaks the software-based connection to the original text. You can look up the reference for ’[Nelson 97b]’ in the appendix, find in this case a URL, and with some luck the corresponding page on the Web still exists. Nelson’s concept of transclusion would maintain the link to the cited piece of text at all times. Moreover, the separate layers would make it possible to integrate the quote into its new context, and appropriate formatting could be assigned.
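A toy model of this content layer might look as follows. It is a sketch of the idea of transclusion, not of Xanadu itself: the quoting document stores only a pointer into the source, and the quoted span is fetched from the original at rendering time, so the connection is never broken.

```python
# Sketch of transclusion: a document stores pointers into source texts
# instead of pasted copies. A toy model of Nelson's content layer;
# the source id used in the test is an invented example.

class ContentStore:
    """Holds published source texts under stable identifiers."""
    def __init__(self):
        self.texts = {}   # source id -> published text

    def publish(self, source_id, text):
        self.texts[source_id] = text

    def resolve(self, source_id, start, end):
        """Fetch a span from the original rather than a copy of it."""
        return self.texts[source_id][start:end]

class Document:
    """A document is a sequence of pointers into the content store."""
    def __init__(self, store):
        self.store = store
        self.spans = []   # (source_id, start, end) triples

    def transclude(self, source_id, start, end):
        self.spans.append((source_id, start, end))

    def render(self):
        return "".join(self.store.resolve(*span) for span in self.spans)
```

Because `render` dereferences the pointers each time, the quotation always reflects its origin, and the reverse question, who quotes this text, becomes answerable by scanning the stored spans.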
Early hypertext systems like NLS exploit the resources of mainframe computers in such a way that they interact with the hardware quite directly, but hardly any hypertext system thereafter has penetrated to the level of the operating system. One exception is Sun’s Link Service, which provided an operating-system-level service for Sun workstations. It is described by Amy Pearl in Sun’s Link Service: A Protocol for Open Linking [Pearl 89]. The objective behind its development during the late 1980s was «that if a link service was a standard feature of the operating environment, then all serious applications would be written to make use of this feature» [Davis et al. 92, p. 185].
Open Hypermedia Systems define the foundation for such a service. But to achieve a robust and consistent user interface, integrated into the whole environment, support by the operating system is indispensable: renaming or moving files has to trigger the update operations that keep the link data consistent.
This chapter shall close with a quote from Tim Berners-Lee [Gillies/Cailliau 2000, p. 195]:
’Picture a scenario in which any note I write on my computer I can “publish” just by giving it a name. […] In that note I can make references to any other article anywhere in the world in such a way that when reading my note you can click with your mouse and bring the referenced article up on your machine. Suppose, moreover, that everyone has this capability.’ That was the original dream behind the Web.