This is the original announcement letter (with personal email addresses mangled) for usnatch, sent out about 19 Sep 2007:
I'm sending this to several public forums (and several
private individuals), but not by CC:, to avoid crosspost
message bounce hell.
As a courtesy, to let everyone know where I'm sending this:
Rexx Language Association:
(for nonmembers perhaps, news://comp.lang.rexx)
(Los Angeles) LAMP SIG:
Feel free to forward this to any other place you want.
The purpose of this message is to announce a Rexx
script, 'usnatch' downloadable from:
This announcement is online at:
This script is used in conjunction with the Lynx text-mode
browser as an 'EXTERNAL', and with mplayer it allows the user
to play videos from sites such as www.youtube.com and
video.google.com, all as a convenience to the user.
If mplayer is configured to use drivers
such as svga, vesa, aalib, the nvidia console driver, or even null
(in some cases the most important one),
the user can play the videos without a GUI.
It can be run from the command line, and though I haven't tested it,
I see no reason it can't be run from other browsers.
(Let me know about any successes or problems with that.)
It can probably be adapted to use other video players
with little trouble if desired.
The user must choose to run the EXTERNAL program,
deciding whether he trusts the media source, and activating either the
default EXTERNAL (if only one is defined) or the EXTERNAL menu
(if more than one EXTERNAL program is configured for the URL type),
typically by pressing the ',' (for the current page) or '.'
(for the current link) key.
From the command line the program might typically be run by:
$usnatch 'http://www.youtube.com/watch?v=zAGylaoBt6M' -i -p
or inside the Lynx config file:
EXTERNAL:http:usnatch %s -i -a:TRUE
The program started out as a late night hack to scrape
the actual flash video (.flv) link from the http://videodownloader.net
page in the backend and then play the video.
It has since been expanded to scrape directly from
www.youtube.com when appropriate, using the algorithm from
the youtube-dl program (http://www.arrakis.es/~rggi3/youtube-dl/),
or from http://KeepVid.com,
or even try a few guesses that have sometimes worked in the past
if nothing else works.
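That fallback chain can be sketched roughly as follows (in Python rather
than Rexx, purely as an illustration; the extractor functions here are
hypothetical stand-ins, not usnatch's actual scrapers):

```python
# Sketch of the "try each strategy in order" idea: run through the
# extraction strategies and take the first one that yields a URL.
# All three extractors below are hypothetical placeholders.

def scrape_videodownloader(page_url):
    """Hypothetical: ask a download-helper site for the .flv link."""
    return None  # pretend the helper site failed this time

def scrape_youtube_direct(page_url):
    """Hypothetical: a youtube-dl-style direct scrape."""
    if "youtube.com/watch" in page_url:
        return page_url + "&fmt=flv"  # placeholder, not a real scheme
    return None

def guess_urls(page_url):
    """Hypothetical: guesses that have sometimes worked in the past."""
    return page_url.replace("watch?v=", "get_video?video_id=")

def find_video_url(page_url):
    # Walk the fallback chain; stop at the first strategy that works.
    for strategy in (scrape_videodownloader,
                     scrape_youtube_direct,
                     guess_urls):
        result = strategy(page_url)
        if result:
            return result
    return None
```

The order matters: the cheapest or most reliable source is tried first,
and the blind guesses only run when everything else has come up empty.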
The main point of the program is to handle the
interactions between URLs found using Lynx,
the retrieval of video URLs from the download-helper sites,
and mplayer, all as a convenience to the user.
I first used some of the screen scraping techniques in it
when fixing a program to email my L.A. public library
account summaries nightly.
Two weeks after writing this script, I ran into a fellow Lynx user
at a local LUG meeting. As this gentleman is totally blind and
a news junkie, and there is so much content on sites such as YouTube
that he had no access to, usnatch quite fits the bill.
There are many ways to render a web page. Leaving aside considerations
like tactile or audio presentation of the data, a few, with possible
gradations between them, are:
1. a simple dump of the html etc.
2. text mode rendering, as in a simple text mode browser
3. text mode rendering with links to images and multimedia content
4. graphical mode rendering that includes images and possibly other media
5. as a checksum for detecting changes in the content.
6. as a list of links, as Lynx does with '-dump -listonly' or when
'L' (LIST command typically bound to the 'L' key) is pressed.
This view I like to think of as the 'Google' view of a page.
My understanding is that they based their search algorithms on the idea
that this was typically the most important information on a web page,
and this mode instantly renders it.
7. as a listing of the header metadata in a page. I personally know of no
software that shows this, and am thinking about writing some.
This last item starts to drift off into semantic web concerns,
where they have probably dealt with it.
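As a rough illustration of what such a header-metadata lister might
look like (a hypothetical sketch using only the Python standard
library, not an existing tool):

```python
# List the <meta> tags from a page's head using only the standard
# library's html.parser module.
from html.parser import HTMLParser

class MetaLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.metas = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "meta":
            self.metas.append(dict(attrs))

p = MetaLister()
p.feed('<html><head><title>t</title>'
       '<meta name="author" content="D. Legan">'
       '<meta name="keywords" content="lynx,mplayer">'
       '</head></html>')
for m in p.metas:
    print(m.get("name"), "=", m.get("content"))
```

A real version would of course fetch the page first and probably also
report the HTTP headers, but the page-side half really is this small.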
End of rant.
The Lynx builds for some 'Red Hat'-derived Linux systems don't seem to have
the '-listonly' switch by default (this is easily worked around,
but awkward). Odd, because some people think of Lynx first
when trying to extract URLs from HTML.
Apparently the default Lynx build for some BSD systems
does not have EXTERNAL capability.
It would be nice for some people if wget had an odometer style download
progress report mode, like Lynx's, with capability somewhere between no
progress report and the 'dot' character graphic report.
Perhaps a switch could be added to Lynx, '-odometer', for use
with '-source' or '-dump', to send a progress report to standard error?
I'm not sure how practical that would be; it's just an idea.
It would also be good if the Lynx EXTERNAL menu were brought
into line with the download and print menus, allowing a
descriptive comment to be shown instead of just the literal commands
of the menu.
I respectfully dedicate the program usnatch to the memories
of H.G. Wells, and Jorge Luis Borges.
And don't forget all the people who don't know how or for some
reason can't use simple HTML links for their media content,
without whom the program wouldn't be needed.
Dallas E. Legan II / email@example.com / firstname.lastname@example.org / email@example.com