On Tue, Nov 1, 2011 at 12:20 PM, Samium Gromoff <skosyrev@common-lisp.net> wrote:
> On Fri, 28 Oct 2011 11:29:38 -0400, Faré <fahree@gmail.com> wrote:
> > Dear Christian,
> >
> > I'm interested in your web scraping technology in CL.
> >
> > I'd like to build a distributed web proxy that persistently records
> > everything one views, so that you can always read and share the pages
> > you like even when the author dies, the servers are taken off-line,
> > the domain name is bought by someone else, and the new owner puts up a
> > new robots.txt that tells archive.org not to display the pages
> > anymore.
> >
> > I don't know if this adventure tempts you, but I think the time is
> > ripe for end-user-controlled, peer-to-peer, distributed archival and
> > sharing of information. An obvious application, beyond archival, is a
> > distributed facebook/g+ replacement.
>
> I cannot add anything but express my emphatic agreement.
>
> One important thing, IMO, would be mathematically sound, peer-to-peer
> co-verification of archive authenticity -- perhaps in the same sense
> that git manages to do it.

I agree. It's becoming pretty obvious to me that the web is in a state of
constant rot and regrowth (sites go down, other sites go up). Unfortunately,
the rot takes some really valuable pieces of information with it.

An interesting definition of a website might be: literally a git repository,
where hyperlinks carry both a file and the hash of a changeset at which that
file was valid; a 'certified' website might carry gpg signatures on its
commits as well.
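
To make the git analogy (and the co-verification point above) concrete,
something like the following is roughly what I have in mind. It's an
untested sketch, the function names and store layout are made up, and it
assumes the drakma HTTP client and the ironclad crypto library from
Quicklisp: a snapshot is stored under the SHA-256 of its own bytes, so a
hyperlink can pin that exact version, and any peer can re-hash the bytes
to verify the archive, just as git does with blob objects.

  (ql:quickload '(:drakma :ironclad))

  (defun content-address (octets)
    "Return the hex SHA-256 digest of OCTETS; snapshots are stored
  and linked to under this name."
    (ironclad:byte-array-to-hex-string
     (ironclad:digest-sequence
      :sha256 (coerce octets '(simple-array (unsigned-byte 8) (*))))))

  (defun archive-url (url store)
    "Fetch URL and save its body under the body's own hash inside the
  STORE directory; return the hash, ready to embed in a hyperlink."
    (let* ((body (drakma:http-request url :force-binary t))
           (hash (content-address body)))
      (with-open-file (out (merge-pathnames hash store)
                           :direction :output
                           :element-type '(unsigned-byte 8)
                           :if-exists :supersede)
        (write-sequence body out))
      hash))

  (defun verify-snapshot (hash store)
    "Co-verification: re-hash the stored bytes and compare."
    (with-open-file (in (merge-pathnames hash store)
                        :element-type '(unsigned-byte 8))
      (let ((octets (make-array (file-length in)
                                :element-type '(unsigned-byte 8))))
        (read-sequence octets in)
        (string= hash (content-address octets)))))

A versioned hyperlink would then pair a URL with the hash returned by
archive-url, much as git pairs a path with a commit.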

One interesting application might be an 'archiving browser', which caches
all or most of the sites you visit. Instead of rummaging through google
trying to reconstruct the search terms that once led to that one site (if
it's still indexed by google, and if it's still up), you could run the
query against your local archive.
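
The local-query side could start as small as a toy inverted index over the
cached pages, keyed by their content hashes as above. Again a hypothetical
sketch with made-up names, assuming the split-sequence library from
Quicklisp:

  (ql:quickload :split-sequence)

  (defvar *index* (make-hash-table :test 'equal)
    "Maps a lower-cased word to the hashes of cached pages containing it.")

  (defun index-page (hash text)
    "Add one cached page, named by its content hash, to the index."
    (dolist (word (split-sequence:split-sequence-if-not
                   #'alphanumericp (string-downcase text)
                   :remove-empty-subseqs t))
      (pushnew hash (gethash word *index*) :test #'string=)))

  (defun lookup (word &rest more-words)
    "Return the hashes of cached pages mentioning all the given words."
    (reduce (lambda (hashes w)
              (intersection hashes (gethash (string-downcase w) *index*)
                            :test #'string=))
            more-words
            :initial-value (gethash (string-downcase word) *index*)))

(lookup "distributed" "archival") would then return the hashes of every
cached page mentioning both words, with no dependence on a search engine
or on the site still being up.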

As a personal project, I have been contemplating putting together a web
spider and index for better web searching; it would be nice to contribute
components from that to a larger project on web storage and archiving.
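
For what it's worth, the spider part can also start very small. One more
hypothetical sketch, using drakma plus cl-ppcre for crude link extraction
(a real crawler would use a proper HTML parser and resolve relative URLs):

  (ql:quickload '(:drakma :cl-ppcre))

  (defun extract-links (html)
    "Crudely pull href targets out of HTML with a regex."
    (let (links)
      (cl-ppcre:do-register-groups (url) ("href=\"([^\"]+)\"" html)
        (push url links))
      (nreverse links)))

  (defun crawl (start-url visit &key (limit 100))
    "Breadth-first crawl from START-URL, calling VISIT on each URL and
  its body -- e.g. to archive and index the page as sketched above."
    (let ((queue (list start-url))
          (seen (make-hash-table :test 'equal)))
      (loop until (or (null queue) (zerop limit))
            do (let ((url (pop queue)))
                 (unless (gethash url seen)
                   (setf (gethash url seen) t)
                   (decf limit)
                   (let ((body (ignore-errors (drakma:http-request url))))
                     (when (stringp body) ; text pages only
                       (funcall visit url body)
                       (setf queue
                             (nconc queue (extract-links body))))))))))

Wiring VISIT to archive-url and index-page from the earlier sketches would
turn the spider into the front end of the personal archive.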

Regards,
Paul