[PHP] Accessing Files Outside the Web Root
Let me preface my question by noting that I am virtually a PHP novice. Although I am a long-time webmaster, and have used PHP for some years to give visitors access to information in my SQL database, this is my first attempt to use it for another purpose. I have browsed the mailing list archives and have searched online but have not yet succeeded in teaching myself how to do what I want to do. This need not provoke a lengthy discussion or involve extensive hand-holding - a pointer to an appropriate code sample or online tutorial might do the trick.

I am the author of a number of PDF files that serve as genealogical reference works. My problem is that a number of sites which pose as search engines display my PDF files in their entirety on their own sites. These pirate sites are not simply opening a window that displays my files as they appear on my site - they are using Google Docs to display copies of my files that are cached or stored elsewhere online. The proof is that I can modify one of my files and upload it to my site: the file, as seen on my site, immediately displays the modification, while the same file, as displayed on the pirate sites, remains unmodified and may stay that way for weeks. It is obvious that my files, which are stored under public_html, are being spidered and then stored or cached.

This displeases me greatly. I want my files, some of which have cost an enormous amount of work over many years, to be available only on my site. Legitimate search engines, such as Google, may display a snippet, but they do not display the entire file - they link to my site so the visitor can get the file from me.

A little study has indicated that if I store those files in a folder outside the web root and use PHP to provide access they will not be spidered. Writing a PHP script to provide access to the files in that folder is what I need help with.
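For the record, the kind of script being asked about is usually a small "gatekeeper" that streams the PDF from outside public_html. A minimal sketch, assuming the files live in a hypothetical /home/user/private_pdfs directory (the directory name and filenames below are placeholders, not anything from the actual site):

```php
<?php
// Gatekeeper sketch: serve a PDF stored outside the web root.
// The storage directory and the whitelist entries are hypothetical.

// Resolve a requested name to a full path, or return null if it is
// not whitelisted. basename() strips any "../" path components.
function resolveRequest($name, array $allowed, $storageDir)
{
    $file = basename($name);
    if (!in_array($file, $allowed, true)) {
        return null;
    }
    return $storageDir . '/' . $file;
}

// Only runs when called via the web with ?file=... in the URL.
if (isset($_GET['file'])) {
    $storageDir = '/home/user/private_pdfs';       // outside public_html
    $allowed    = array('smith.pdf', 'jones.pdf'); // files we will serve

    $path = resolveRequest($_GET['file'], $allowed, $storageDir);

    if ($path === null || !is_file($path)) {
        header('HTTP/1.1 404 Not Found');
        exit('File not found.');
    }

    header('Content-Type: application/pdf');
    header('Content-Length: ' . filesize($path));
    readfile($path);                               // stream the PDF to the visitor
}
```

The whitelist is the important part: without it, a script that blindly concatenates user input into a path will happily serve any file the web server can read.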
I have experimented with a number of code samples but have not been able to make things work. Could any of you point to code samples or tutorials that might help? Remember that, aside from the code I have written to handle my SQL database, I am a PHP novice.

Dale H. Cook, Member, NEHGS and MA Society of Mayflower Descendants; Plymouth Co. MA Coordinator for the USGenWeb Project; Administrator of http://plymouthcolony.net

-- PHP General Mailing List (http://www.php.net/) To unsubscribe, visit: http://www.php.net/unsub.php
Re: FW: [PHP] Accessing Files Outside the Web Root
At 04:58 PM 3/13/2013, Jen Rasmussen wrote:

>Have you tried keeping all of your documents in one directory and blocking
>that directory via a robots.txt file?

A spider used by a pirate site does not have to honor robots.txt, just as a non-Adobe PDF utility does not have to honor security settings imposed by Acrobat Pro. The use of robots.txt would succeed mainly in blocking major search engines, which are not the problem.
Re: FW: [PHP] Accessing Files Outside the Web Root
At 05:04 PM 3/13/2013, Dan McCullough wrote:

>Web bots can ignore the robots.txt file, most scrapers would.

and at 05:06 PM 3/13/2013, Marc Guay wrote:

>These don't sound like robots that would respect a txt file to me.

Dan and Marc are correct. Although I used the terms "spiders" and "pirates" I believe that the correct term, as employed by Dan, is "scrapers," and that term might be applied to either the robot or the site which displays its results. One blogger has called scrapers "the arterial plaque of the Internet."

I need to implement a solution that allows humans to access my files but prevents scrapers from accessing them. I will undoubtedly have to implement some type of challenge-and-response in the system (such as a captcha), but as long as those files are stored below the web root a scraper that has a valid URL can probably grab them. That is part of what the "public" in public_html implies.

One of the reasons why this irks me is that the scrapers are all commercial sites, but they haven't offered me a piece of the action for the use of my files. My domain is entirely non-commercial, and I provide free hosting for other non-commercial genealogical works, primarily pages that are part of the USGenWeb Project, which is perhaps the largest of all non-commercial genealogical projects.
Re: FW: [PHP] Accessing Files Outside the Web Root
At 04:06 AM 3/14/2013, tamouse mailing lists wrote:

>If the files are delivered via the web, by php or some other means, even if
>located outside webroot, they'd still be scrapeable.

Bots, however, being "mechanical" (i.e., hard-wired or programmed) behave in different ways than humans, and that difference can be exploited in a script. Part of the rationale in putting the files outside the root is that they have no URLs, eliminating one vulnerability (you can't scrape the URL of a file if it has no URL).

Late last night I figured out why I was having trouble accessing those external files from my script, and now I'm working out the parsing details that enable one script to access multiple external files. My approach probably won't defeat all bad bots, but it will likely defeat most of them. You can't make code bulletproof, but you can wrap it in Kevlar.
Re: [PHP] Re: Accessing Files Outside the Web Root
At 11:20 AM 3/14/2013, Jim Giner wrote:

>And use a captcha (which I personally can never read!) to keep the robots at
>bay.

I dislike CAPTCHAs, and some bots are pretty good at beating them. I'm exploring alternatives that exploit the differences between the ways that bots deal with pages and the ways that humans deal with pages.
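One frequently suggested alternative of that kind, sketched here purely as an illustration (the field name is an arbitrary placeholder, not anything from this thread): a "honeypot" form field hidden from humans by CSS, which naive form-filling bots tend to populate anyway.

```php
<?php
// Honeypot sketch: a human never sees the "website" field, so it
// arrives empty; a bot that fills every field gives itself away.
// The field name is an illustrative assumption.

function looksLikeBot(array $post)
{
    return isset($post['website']) && $post['website'] !== '';
}

// In the HTML form, hidden from human visitors:
//   <input type="text" name="website" style="display:none">
```

This catches only unsophisticated bots, but it costs human visitors nothing, which matters for the audience described later in the thread.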
[PHP] Accessing Files Outside the Web Root - Progress Report 1
I have made some progress. It occurred to me that the problem that I had in accessing files outside the web root could be a pathing problem, and that was the case. I finally ran phpinfo() and examined $_SERVER["DOCUMENT_ROOT"] to see what the correct path should be. I then figured that for portability my script should build the paths for desired files beginning with a truncated $_SERVER["DOCUMENT_ROOT"] and concatenating the external folder and the filename, and that is working fine. I now have a script that will give the visitor access to a PDF file stored outside the web root and whose filename is hard-coded in the script.

The next step is to create the mechanism that lets one of my HTML pages pass the desired filename to the script and have the script retrieve that file for the visitor. That should be simple enough since it is just string manipulation (once I get the hang of some additional PHP string manipulation functions). Then I can move on to making my script bot-resistant before implementing it on my site.
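The truncate-and-concatenate step described above can be sketched as follows; the directory layout is a made-up example, since the real paths depend on the host:

```php
<?php
// Sketch of building a path to a file outside the web root by
// truncating DOCUMENT_ROOT and concatenating the external folder.
// Directory and file names here are hypothetical examples.

function buildExternalPath($docRoot, $folder, $file)
{
    $base = dirname(rtrim($docRoot, '/'));   // drop the public_html segment
    return $base . '/' . $folder . '/' . basename($file);
}

echo buildExternalPath('/home/user/public_html', 'private', 'mayflower.pdf');
// /home/user/private/mayflower.pdf
```

Using dirname() rather than string slicing keeps the truncation portable across hosts whose document roots have different depths, and basename() on the filename guards the concatenation against "../" tricks.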
Re: [PHP] Accessing Files Outside the Web Root
At 12:40 PM 3/14/2013, Ravi Gehlot wrote:

>In order to address the problem, you should create a "Members Restricted Area"

You need to understand my target demographic to understand my approach. Much of my target audience for those files is elderly and not very computer savvy. Having to register for access would discourage some of them. I prefer to keep their access as simple as possible and as close as possible to the way in which I have always provided it. Some of those files have been available (and have been updated quarterly) for nearly a decade, and many visitors are used to downloading and perusing some of those files quarterly.

Registration poses its own problems, as some miscreants will attempt to register in order to usurp my resources. It probably wouldn't be as much of a nuisance as it is when running a phpBB (I run two of those) but I'd rather avoid dealing with registration. All in all, I'd rather use a server-side approach incorporating methods to differentiate between human visitors and bad bots.
Re: [PHP] Re: Accessing Files Outside the Web Root - Progress Report 1
At 01:26 PM 3/14/2013, Jim Giner wrote:

>I don't think you ever want the filename buried in the web page.

Why not? The file itself is outside the root. In any case the finished product will use tokens instead of filenames; while I am working out the bugs it is easier to use filenames. I use an incremental approach to programming (and have used that approach ever since I moved from batch programming to time-shared terminals over 40 years ago).

>Plus - easy way to specify a path outside of your webroot:

That is precisely what I indicated that I was doing, using $_SERVER["DOCUMENT_ROOT"] and then using truncation and concatenation.
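The token idea mentioned above could look something like this; the tokens and filenames are placeholders invented for the example:

```php
<?php
// Token sketch: pages link to something like ?t=a1b2c3 rather than a
// filename, and only this server-side map knows which file a token
// refers to. Tokens and filenames are illustrative placeholders.

$tokenMap = array(
    'a1b2c3' => 'mayflower-descendants.pdf',
    'd4e5f6' => 'plymouth-vital-records.pdf',
);

function fileForToken($token, array $map)
{
    return isset($map[$token]) ? $map[$token] : null;   // null = unknown token
}
```

Since the map never leaves the server, a scraper that harvests the HTML learns only opaque tokens, and the map can be reshuffled at any time to invalidate harvested links.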
Re: [PHP] Re: Accessing Files Outside the Web Root
At 02:27 PM 3/14/2013, Marc Guay wrote:

>Assuming humans will continue to behave as I've seen them behave so
>far, even this "simple" system will guarantee you angry emails from
>people who'll want to argue that "five" is a colour in some
>circumstances.

Marc has a very valid point. My goal is to weed out the scrapers without angering humans.

>Please see discussions on this list about top-posting as a reference.

I don't care about top-posters vs. bottom-posters. What irks me are those who do not edit their quotes, and they are one of the reasons why I do not sub any list in digest mode. :-)
Re: FW: [PHP] Accessing Files Outside the Web Root
At 09:44 PM 3/14/2013, tamouse mailing lists wrote:

>If you are delivering files to a (human) user via their browser, by whatever
>mechanism, that means someone can write a script to scrape them.

That script, however, would have to be running on my host system in order to access the script which actually delivers the file, as the latter script is located outside of the web root.

Dale H. Cook, Market Chief Engineer, Centennial Broadcasting, Roanoke/Lynchburg, VA http://plymouthcolony.net/starcityeng/index.html
Re: [PHP] Accessing Files Outside the Web Root
At 09:27 AM 3/15/2013, Stuart Dallas wrote:

>You'll have to pursue legal avenues to prevent it being made available, and
>that's usually prohibitively expensive.

Not necessarily. Most of the host systems for the scraper sites are responsive to my complaints. Even if a site owner will not respond to a DMCA takedown notice the host system will often honor that notice, and other site owners and hosts will back down when notified of my royalty rates for the use of my files by a commercial site.

>At the end of the day the question is this: would you rather control access to
>your creation (in which case charge a nominal fee for it), or would you prefer
>that it (and your name/cause) gets in to as many hands as possible.

I merely wish to prevent commercial sites from profiting from my work without my permission. I am in the process of registering the copyright for my files with the LOC, as my attorneys have advised. That will give my attorneys ammunition.