On Sat, 28 Jun 1997, Christian Schwarz wrote:

> Why? The files are called ".html.gz" in the file system. Thus, these links
> are valid. We only have to implement on-the-fly decompression on some web
> servers. (This functionality could be useful for others, too, so we could
> forward our patches to the upstream maintainers of the web servers as
> well.)

 So..

---------------------------------------
GET http://localhost/hello.html.gz
[...]
Content-Type: text/html

[uncompressed HTML]
---------------------------------------

 This is non-standard... the file on the disk exists, so httpd is supposed to
send it as-is, and using the suffix `.html.gz' for every piece of uncompressed
HTML documentation would be strange, or even annoying for a user trying to
`save as' the file in Win95.

 I think that Christoph's idea is the elegant way of doing this. The www
server could even be just something like...

----------------------------------------------------
#!/bin/bash
# Tiny test server, meant to be run from inetd (stdin/stdout are the socket).
read req
read                            # discard the rest of the request
req=${req#GET }
req=${req% HTTP*}
if [ -r "$req" ]; then
        echo "HTTP/1.0 200 OK"
        echo "Content-type: text/html"
        echo
        cat "$req"
elif [ -r "$req.gz" ]; then
        # Only the compressed copy exists: decompress it on the fly.
        echo "HTTP/1.0 200 OK"
        echo "Content-type: text/html"
        echo
        zcat "$req.gz"
else
        echo "HTTP/1.0 404 Not found"
        echo "Content-type: text/html"
        echo
        echo "<H1>Can't find $req here!</H1>"
fi
-----------------------------------------------------
 (with `debdoc   stream  tcp     nowait  nobody  /usr/sbin/tcpd
/usr/sbin/in.debdoc')
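 (The `debdoc' service name would also need a matching line in /etc/services,
e.g. `debdoc 8088/tcp'; the port number there is just an example.)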

 This is only for testing, but it works, and fast! A VERY small C program can
do this safely...
 And connections to that service could be restricted by default to the
local machine...
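
 Just to sketch what I mean by "a very small C program": something like the
draft below, again run from inetd so that stdin/stdout are the client socket.
The hard-coded doc root /usr/doc, the file name in.debdoc.c and the call to
`gzip -dc' for decompression are only assumptions for illustration, not a
proposal for the final interface.

-----------------------------------------------------
/* in.debdoc.c -- rough sketch only.  Serve /usr/doc/<path>, or decompress
 * <path>.gz on the fly if only the compressed copy exists. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define DOCROOT "/usr/doc"              /* assumed document root */

static void header(const char *status)
{
        printf("HTTP/1.0 %s\r\nContent-type: text/html\r\n\r\n", status);
        fflush(stdout);
}

static void send_plain(const char *file)
{
        FILE *f = fopen(file, "r");
        char buf[4096];
        size_t n;

        if (f == NULL) {
                header("404 Not found");
                return;
        }
        header("200 OK");
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
                fwrite(buf, 1, n, stdout);
        fclose(f);
}

static void send_gzipped(const char *file)
{
        header("200 OK");
        /* Our stdout is the socket, so gzip writes straight to the client. */
        execlp("gzip", "gzip", "-dc", file, (char *)NULL);
        _exit(1);                       /* only reached if the exec fails */
}

int main(void)
{
        char line[1024], path[1024], file[1100];

        if (fgets(line, sizeof line, stdin) == NULL)
                return 1;
        /* Expect "GET /some/file.html HTTP/1.0"; take the second word,
         * and refuse anything that could escape the doc root. */
        if (sscanf(line, "GET %1000s", path) != 1
            || path[0] != '/' || strstr(path, "..") != NULL) {
                header("400 Bad request");
                return 0;
        }
        sprintf(file, "%s%s", DOCROOT, path);
        if (access(file, R_OK) == 0) {
                send_plain(file);
        } else {
                strcat(file, ".gz");
                if (access(file, R_OK) == 0)
                        send_gzipped(file);
                else
                        header("404 Not found");
        }
        return 0;
}
-----------------------------------------------------
 It refuses anything containing `..', so requests can't escape the doc root,
and nothing from the request ever goes through a shell.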

-- 
Nicolás Lichtmaier.-

