mastouille.fr is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastouille is a sustainable, open Mastodon instance hosted in France.

#hypertext


These late 1980s and early 1990s papers reviewed the state of hypertext research and applications, covering systems such as NoteCards, GNU Info, Intermedia, CREF by @kentpitman, HyperCard, and more. They capture the intense activity and exploration around a still young and rapidly evolving field.

A Survey of Hypertext
csis.pace.edu/~marchese/CS835/

State of the Art Review on Hypermedia Issues And Applications
academia.edu/download/11333046

Replied in a thread

@screwtape

That's distinguished from the CREF editor that I wrote in 1984, while on leave from my work on the Programmer's Apprentice to do a summer's work at the Open University.

CREF (the Cross-Referenced Editing Facility) was basically made out of spare parts from the Zwei/Zmacs substrate but did not use the editor buffer structure of Zmacs per se. If you were in Zmacs you could not see any of CREF's structure, for example. And the structure that CREF used was not arranged linearly, but existed as a bunch of disconnected text fragments that were dynamically assembled into something that looked like an editor buffer and could be operated on using the same kinds of command sets as Zmacs for things like cursor motion, but not for arbitrary actions.

It was, in sum, a hypertext editor though I did not know that when I made it. The term hypertext was something I ran into as I tried to write up my work upon return to MIT from that summer. I researched similar efforts and it seemed to describe what I had made, so I wrote it up that way.

In the context of the summer, it was just "that editor substrate Kent cobbled together that seemed to do something useful for the work we were doing". So hypertext captured its spirit in a way that was properly descriptive.

This was easy to throw together quickly in a summer because other applications already existed that did this same thing. I drew a lot from Converse ("CON-verse"), which was the conversational tool that offered a back-and-forth of linearly chunked segments like you'd get in any chat program (even including MOO), where you type at the bottom and the text above that is a record of prior actions, but where within the part where you type you had a set of Emacs-like operations that could edit the not-yet-sent text.

In CREF, you could edit any of the already-sent texts, so it was different in that way, and in CREF the text was only instantaneously linear as you were editing a series of chunks, but some commands would rearrange the chunks, giving a new linearization that could again be edited. While no tool on the LispM did that specific kind of trick, it was close enough to what other tools did that I was able to bend things without rewriting the Zwei substrate. I just had to be careful about communicating the bounds of the region that could be edited, and about maintaining the markers that separated the chunks as un-editable, so that I could at any moment turn the seamed-together text back into chunks.
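The seam-and-split trick described above can be sketched in a few lines. This is a hypothetical Python illustration with an invented separator marker, not the actual Zwei-based code:

```python
# Hypothetical sketch: seam chunks into one editable text with
# protected separator markers, then split back into chunks.
# The SEP marker is invented for this illustration.
SEP = "\n\x00--chunk-boundary--\x00\n"

def assemble(chunks):
    """Present disconnected chunks as one linear, editable text."""
    return SEP.join(chunks)

def disassemble(text):
    """Recover the chunks, assuming the markers were kept un-editable."""
    return text.split(SEP)

chunks = ["First fragment.", "Second fragment.", "Third fragment."]
buffer_text = assemble(chunks)
# Edits may happen anywhere in buffer_text except inside the SEP markers,
# so the round trip back to chunks always succeeds:
assert disassemble(buffer_text) == chunks
```

The whole design hinges on the markers being un-editable, which is exactly the care Kent describes having to take.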

Inside CREF, the fundamental pieces were segments, not whole editor buffers. Their appearance as a buffer was a momentary illusion. A segment consisted of a block of text represented in a way that was natural to Zwei, and a set of other annotations, which included specifically a set of keywords (to make the segments easier to find than just searching them all for text matches) and some typed links that allowed them to be connected together.
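The segment structure described above (a text block plus keyword annotations plus typed links) might be modeled like this. The names `Segment`, `keywords`, `links`, and `find_by_keyword` are assumptions for this sketch, not CREF's actual Lisp structures:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """Hypothetical model of a CREF-style segment."""
    text: str                                    # the text block (Zwei-native in the original)
    keywords: set = field(default_factory=set)   # annotations that make segments findable
    links: dict = field(default_factory=dict)    # link type -> list of target segments

    def link(self, link_type, *targets):
        """Attach a typed link from this segment to one or more targets."""
        self.links.setdefault(link_type, []).extend(targets)

def find_by_keyword(segments, kw):
    """Keyword lookup, cheaper than full-text search over every segment."""
    return [s for s in segments if kw in s.keywords]
```

A buffer, in this model, is just a momentary ordering over some subset of segments rather than a fundamental object.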

Regarding links: For example, you could have a SUMMARIZES link from one segment to a list of 3 other segments, and then a SUMMARIZED-BY link back from each of those segments to the summary segment. Or if the segments contained code, you could have a link that established a requirement that one segment be executed before another in some for-execution/evaluation ordering that might need to be conjured out of such partial-order information. And that linkage could be distinct from any of several possible reading orders that might be represented as links or might just be called up dynamically for editing.
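Conjuring a concrete execution order out of such partial-order link information is what a topological sort does. A minimal sketch with invented segment names, using the Python stdlib `graphlib` as a stand-in for whatever CREF actually did:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Execution-ordering links, invented for illustration:
# each key segment must run before the segments it points to.
must_precede = {
    "setup": ["compute", "report"],
    "compute": ["report"],
}

# TopologicalSorter takes predecessor sets, so invert the edges.
deps = {}
for before, afters in must_precede.items():
    deps.setdefault(before, set())
    for after in afters:
        deps.setdefault(after, set()).add(before)

order = list(TopologicalSorter(deps).static_order())
# Any valid linearization runs "setup" first and "report" last;
# a reading order could be a completely different arrangement
# of the same segments.
```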

In both cases, the code I developed continued to be used by the research teams I developed it for after I left the respective teams. So I can't speak to that in detail other than to say it happened. In neither case did the tool end up being used more broadly.

I probably still have the code for CREF from the time I worked on it, though it's been a long time since I tried to boot my MacIvory so who knows if it still loads. Such magnetic media was never expected to have this kind of lifetime, I think.

But I also have a demo of CREF where I took screenshots at intervals, hardcopied them, and saved the hardcopy, and then much later scanned the hardcopy. That is not yet publicly available, though I have it in Google Slides. I'll hopefully make a video of it sometime, just for the historical record.

3/n

#CREF #LispM #hypertext

The NoteCards hypermedia system was developed in Interlisp at Xerox PARC by @fghalasz Frank Halasz, Tom Moran, and Randy Trigg. In this 1985 videotape Moran introduced the main concepts of NoteCards, and Halasz demonstrated how to use the system to organize notes and sources for writing a research paper.

archive.org/details/Xerox_PARC

I did more work on XTXT, adding some very simple code examples in other languages: C, Assembler (Z80 and 8086), Pascal, Go, Python, Ruby, Swift.
github.com/ha1tch/xtxt/tree/ma
(the old and dirty Ruby samples are still available under the ruby.old directory, until we have newer and better ones)

Notably, the Z80 Assembler version of the most basic example compiles to a binary of just under 70 bytes. I used PASMO to assemble it. (if you need to compile Pasmo itself from source like I did on Apple M1/arm64, ensure that you add -std=c++17 to the CXXFLAGS variable in the generated Makefile)


For the first time in several years I've updated the XTXT project. Hopefully more will follow sometime soon.

github.com/ha1tch/xtxt/tree/ma

XTXT is a simple yet powerful format that addresses many challenges in modern data representation. Its ability to separate content, metadata, and structure into independent streams makes it a strong candidate for diverse applications, from hypertext systems to programming tools and cloud workflows.
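The separation of streams can be pictured with a toy model. This is a hypothetical illustration of the idea only, with an invented field layout; it is not the real XTXT format (see the repo for the actual spec):

```python
# Toy parallel-streams model: content, metadata, and structure kept
# side by side but independently extractable. The tuple layout is
# invented for this sketch and is NOT the XTXT wire format.
doc = [
    # (content,          metadata,  structure)
    ("Hello world",      "lang=en", "heading"),
    ("First paragraph.", "lang=en", "para"),
]

CONTENT, METADATA, STRUCTURE = 0, 1, 2

def stream(doc, index):
    """Pull out one stream without parsing the others."""
    return [fields[index] for fields in doc]
```

A consumer that only wants plain text reads `stream(doc, CONTENT)` and never has to understand the metadata or structure streams at all.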

#xtxt #txt #hypertext

Spreading out and organizing a stack of index cards to outline and write a paper. It's the metaphor behind this 1986 demonstration of NoteCards, the pioneering hypermedia system developed in Interlisp at Xerox PARC. At the workstation is NoteCards co-creator Frank Halasz @fghalasz who now contributes to the Medley Interlisp project.

archive.org/details/Xerox_Note

For an overview of NoteCards see:

dl.acm.org/doi/10.1145/29933.3

💡 A rich and wonderful program at the Computational Humanities Research #CHR2024! Kudos to the organizers @comphumresearch, #AarhusUniversity! Many facets of computational approaches in the Humanities are being discussed. Looking forward to the Lightning Talk session shortly and excited to contribute myself: "Second-Order Observation through AI: Towards a Humanistic Approach to Augmenting Human Intellect."
👉 2024.computational-humanities-
#Hypertext #AI #ComputationalHumanities #DigitalHumanities

The history of Xanadu is fascinating, even though only a small part of the utopian hypertext system was ever implemented and what shipped didn't accomplish much. In this old post Jason Crawford shared some thoughts on the project itself and the management and design lessons we can learn from it.

jasoncrawford.org/the-lessons-

Jason Crawford · The lessons of Xanadu: "The longest-running vaporware story in the history of the computer industry"

Replied in a thread

@CordeliaBeattie @histodons @histodon 🧵 #Timelines

Dynamicland is “a nonprofit research lab creating a humane dynamic medium” that contains some of the most amazing timeline and other visualizations I’ve seen.

It’s very enjoyable to explore the lab’s own history — documented using @dynamicland of course!

#Dynamicland #hypertext #ux

dynamicland.org


Long live #hypertext

Links, connections between ideas, are the magic of the Internet.

They power the open #web, enriching online writing. Generative #AI is the parasitic dark-magic counterpart to links.

Digital-age #Fascism has only one unifying goal: #FalseConsensus.

Memes, #ConspiracyTheories, and AI-generated slop are meant to be inescapable. They create the feeling that something is true or real because it's everywhere, as we've seen with the #Springfield, Ohio story.

tracydurnell.com/2024/09/19/lo

tracydurnell.com · Long live hypertext! – Tracy Durnell's Mind Garden