Weird JS / canvas issue - code example doesn't work locally - javascript

Here is the best JS graph drawing library - http://thejit.org/static/v20/Jit/Examples/Other/example2.html
It is so impressive that I wanted to play with it a bit, but after downloading all the files, it isn't working.
So I put this HTML page (with all the other files) on a web server (Jetty), but still no result.
Opening the file in Chrome with the --allow-file-access-from-files flag properly added doesn't fix the issue either.
Obviously I am doing something wrong, but I have no idea what, so I will be very grateful for any input.

Somehow the HTML file provided in the .zip from the download page is different from the HTML file saved directly from the online example page (the latter uses an obfuscated (sic!) JavaScript file named jit-yc.js in place of the 'normal' and WELL DOCUMENTED jit.js :) ).
So, if anyone would like to try JIT (there isn't much competition in good graph drawing tools - http://arborjs.org/ seems to be the only alternative) - using the official download link is the only valid way ;)
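In other words, the fix is just pointing the example page at the documented build instead of the obfuscated one; something like this in the page's head (the exact paths depend on where the files sit in the zip):

<!-- <script src="jit-yc.js"></script>  obfuscated build used by the live example -->
<script src="jit.js"></script>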

Related

How can I prevent a PDF file from being downloaded or printed with PHP or JavaScript?

I am looking for ways to present a PDF file in the browser but make it not downloadable or printable.
If someone really goes through all the trouble to disable the JavaScript library or anything like that, that is fine. This is more for the reason that the content within the PDF will be updated regularly, so if you download it, it will be out of date by the next day.
I also cannot rely on marking the PDF as protected or on encryption as a reasonable way to accomplish this.
If you have any library recommendations or anything else, it would be appreciated. I am currently exploring whether it is feasible using PDF.js and ViewerJS.
I was able to find a solution using ViewerJS and this CSS. The CSS shows a blank page when you try to print (ViewerJS already distorts it to a non-printable state), and ViewerJS prevents you from downloading as a PDF file and instead tries to save as an HTML file.
This meets the requirements of making it just inconvenient enough to discourage users from trying to download the file, since the file is always easily accessible on almost any page of the site.
https://gist.github.com/ActuallyConnor/2a80403c7827dd1f78077fb2b5b7e785
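For reference, the linked gist boils down to a print media query; a minimal sketch of the idea (the selector is an assumption, not the gist's exact contents):

@media print {
  /* render nothing when the browser tries to print */
  body { display: none; }
}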

Save HTML As Standalone Page: Exporting Tool?

I need to regularly send HTML pages to a client as standalone .html files with no external dependencies. The original pages are done with Node.js and Express, and they contain several libraries such as Highcharts.
I have done the preparation manually until now; this includes:
Transform all images into blobs (data URIs)
Copy all external .js and .css inside the page
Minify where possible (standard libraries such as jQuery or Bootstrap...)
The result is a single .html file that can be opened without an internet connection and looks just like the original; a rough sketch of automating these steps follows below.
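For reference, a rough Node.js sketch of those three steps (cheerio, the file names, and the MIME table are illustrative assumptions, not a finished tool):

// npm install cheerio
const fs = require('fs');
const path = require('path');
const cheerio = require('cheerio');

const $ = cheerio.load(fs.readFileSync('page.html', 'utf8'));

// 1. Transform all local images into data URIs
$('img[src]').each((i, el) => {
  const src = $(el).attr('src');
  if (/^(https?:|data:)/.test(src)) return; // skip remote or already-inlined images
  const mime = { '.png': 'image/png', '.jpg': 'image/jpeg', '.gif': 'image/gif' }[path.extname(src)] || 'application/octet-stream';
  $(el).attr('src', 'data:' + mime + ';base64,' + fs.readFileSync(src).toString('base64'));
});

// 2. Copy all local .js and .css inside the page
$('script[src]').each((i, el) => {
  const src = $(el).attr('src');
  if (!/^https?:/.test(src)) $(el).replaceWith('<script>' + fs.readFileSync(src, 'utf8') + '</script>');
});
$('link[rel="stylesheet"][href]').each((i, el) => {
  const href = $(el).attr('href');
  if (!/^https?:/.test(href)) $(el).replaceWith('<style>' + fs.readFileSync(href, 'utf8') + '</style>');
});

// 3. Minification could be hooked in here before writing the result
fs.writeFileSync('standalone.html', $.html());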
Is there any tool to do this automatically? If not, maybe I'll code it myself in Python. Do you have any recommendation around that?
Thanks
Monolith is a CLI tool for saving complete web pages as a single HTML file.
See https://github.com/Y2Z/monolith
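Typical usage looks like this (the URL and output name are placeholders):

monolith https://example.com -o page.html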
With apologies to the OP, as this answer is probably far too late for him, I'm posting it to help anyone with a similar problem:
HTTrack is an open-source project that does almost exactly what you described, though it doesn't work perfectly on some of the more peculiar JS.
It saves the page with most of the JS, the major images, and everything the page needs to appear complete. It can be configured to include or exclude all or part of the JS, images, and CSS.
This does not import all of the JS and other content into the HTML file, but neatly organizes all of the content into one folder and corrects all of the paths to make the folder portable.
It also seems to have trouble grabbing some external sources that are protected, but if it is your local site and it simply uses common scripts like jQuery, you should be fine. When I tested it, it correctly downloaded all of my local CSS and any valid external CSS library that I incorporated, the jQuery and derivative scripts that I was using, and the embedded images.
Just to save everyone a question, the program by default saves the downloaded websites to C:\My Web Sites.
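For completeness, a typical command-line invocation looks something like this (the URL and output folder are placeholders; -O sets the destination):

httrack http://example.com/ -O "C:\My Web Sites\example"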

HTML Source-Code rip-save?

I came across a JS library (jsMovie) and wanted to see the example files, but its usage is really badly documented, so I tried to download the author's page to look at the source code. But when trying to do that, I noticed that "view-source" wasn't giving the full code (almost 80% of the code did not appear). (Tried in Chrome and Firefox.)
So my question is, how can this be? Firebug is displaying everything properly. At that moment I thought that this could also be a good way to prevent kiddies from ripping sites.
here the page: http://konsultaner.de/entwickler#Konsultaner
Hints are welcome
Generate the current source code, as interpreted by the browser. This can be done using an XMLSerializer on document.
var generatedSource = new XMLSerializer().serializeToString(document);
From there, if you want to open a page just showing the source, you could do
window.open('data:text/plain,'+encodeURIComponent(generatedSource), '_blank');
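One caveat worth adding (not part of the original answer): current browsers block top-frame navigation to data: URLs, so a variant that writes into the new window is more reliable:

var w = window.open('', '_blank');
// escape '<' so the markup is displayed rather than rendered
w.document.write('<pre>' + generatedSource.replace(/</g, '&lt;') + '</pre>');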
They are using AngularJS, a front-end JavaScript framework. That means almost all parts of the page are generated dynamically using JavaScript. Therefore, you can't see the page without JavaScript running (using view-source), but you can see the generated HTML via the inspector.
If it is a static website (the JavaScript files and templates are all there), you can still 'rip' it. But not if it is a dynamic website, since all data and logic are 'fed' by the server.

Why do some PHP or JS files get uploaded as one-line rather than keep formatting?

I am offering zip files of a plugin I wrote, with JS, PHP and CSS files for the user to upload to their server. However, in some cases the JS file gets uploaded as one line, obviously causing a massive FAIL and a complaint from users. To get it working again, I just open the file and copy/paste from my properly formatted version onto theirs. Presto! So, can someone explain what is going wrong here and how I can prevent this easy-to-fix but time-consuming problem? I am using Notepad++ on Windows; is there some kind of setting I should be using to save my files? Or is it a remote server problem that I just can't prevent?
Most likely this is caused by different line endings and their interpretation on various operating systems. I would have thought that nowadays these problems are over; apparently not.
Ask your customer for any file created on the target system and see what line ending is natively used there. Then simply give them a file for the target platform (AFAIR Notepad++ allows you to save a file with any EOL).
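If you would rather fix the files programmatically, a quick Node.js sketch that normalizes everything to Windows CRLF endings (the file name is a placeholder):

const fs = require('fs');
const text = fs.readFileSync('plugin.js', 'utf8');
// collapse any mix of \r\n, \r and \n, then re-emit CRLF for Windows editors
fs.writeFileSync('plugin.js', text.replace(/\r\n|\r|\n/g, '\r\n'));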

Automatically refresh and download Asirra images

If you're unfamiliar with Asirra, it's a CAPTCHA technique developed by Microsoft that uses the identification of cats and dogs rather than a string of text for human verification.
I'd like to use their database of millions of pictures of cats and dogs for some machine learning experiments, and so I'm trying to write a script that will automatically refresh their site and download 12 images at a regular interval. Unfortunately, I'm a novice when it comes to JavaScript.
The problem is, for very obvious security reasons, it's hard to find the actual URL of the image because it's all behind obfuscated JavaScript. I tried using curl to see what HTML was returned using a terminal app, and it's the same deal - just JavaScript. So, using a script, how do I access the actual images? Obviously the images are being transferred to my computer since they're showing up on my screen, but I don't know how to capture those images using a script.
Also a problem is that I don't want the smaller images that first load; I need the larger ones that only show up when you mouse over them, so I guess I need to override that JavaScript function to give the larger images to me via the script as well.
I'd prefer something in Python or C#, but I'll take anything - thanks!
Edit: Their public corpus doesn't have near enough images for my uses, so that won't work. Also, I'm not asking necessarily for you to write me my script, just some guidance on how to access the full-size images using a script.
Try using their public corpus: http://research.microsoft.com/en-us/projects/asirra/corpus.aspx
While waiting for an answer here I kept digging and eventually figured out a sort of hacked way of getting done what I wanted.
First off, the reason this is a somewhat complicated problem (at least to a JavaScript novice like me) is that the images from Asirra are loaded onto the webpage via JavaScript, which is a client-side technology. This is a problem when you download the webpage using something like wget or curl, because they don't actually run the JavaScript; they just download the source HTML. Therefore, you don't get the images.
However, I realized that using Firefox's "Save Page As..." did exactly what I needed. It ran the JavaScript which loaded the images, and then it saved it all into the well-known directory structure on my hard drive. That's exactly what I wanted to automate. So... I found a Firefox add-on called "iMacros" and wrote this macro:
VERSION BUILD=6240709 RECORDER=FX
' run in the first browser tab
TAB T=1
' load the example page, which runs the JS that fetches a fresh set of images
URL GOTO=http://www.asirra.com/examples/ExampleService.html
' save the complete page (TYPE=CPL), images included, into the download folder
SAVEAS TYPE=CPL FOLDER=C:\Cat-Dog\Downloads FILE=*
Set to loop 10,000 times, it worked perfectly. In fact, since it was always saving to the same folder, duplicate images were overwritten (which is what I wanted).
