I'm trying to do some image processing on external images using Pixastic. I know that I need to use $.getImageData to do operations on external images because otherwise you'll get DOM Exception 18 due to the canvas being "tainted by cross-origin data". Unfortunately, the appspot service "img-to-json" that $.getImageData uses is down with a "503 Over Quota" error, and has been for several days. I found another service called "img2json" that was actually working yesterday for a bit, but I'm not entirely sure it does the same thing (I think img2json only gives you the basic metadata as opposed to actual pixel data). So, my actual questions are:
Are "img2json" and "img-to-json" the same service? If so, maybe I can just
modify the $.getImageData code to use img2json.
Is it common to see appspot applications being down like this? What
are the chances of img-to-json coming back up in the near future?
If not, how easy would it be to temporarily download these external images to the server, do the image processing on them, and then delete them? Is it possible to do that with only JavaScript?
I am making an on-line shop for selling magazines, and I need to show the image of the magazine. For that, I would like to show the same image that is shown in the website of the company that distributes the magazines.
For that, it would be easy with an absolute path, like this:
<img src="http://www.remotewebsite.com/image.jpg" />
But, it is not possible in my case, because the name of the image changes every time there is a new magazine.
In Javascript, it is possible to get the path of an image with this code:
var strImage = document.getElementById('Image').src;
But, is it possible to use something similar to get the path of an image if it is in another HTML page?
Assuming that you know how to find the correct image in the magazine website's DOM (otherwise, forget it):
the magazine website must explicitly allow browsers that are showing your website to fetch its content, by enabling CORS
you fetch their HTML -> gets you a stream of text
parse it with DOMParser -> gets you a Document
using your knowledge of their layout (or good heuristics, if you're feeling lucky), use regular DOM navigation to find the image and get its src attribute
I'm not going to detail any of those steps (there are already lots of SO answers around), especially since you haven't described a specific issue you may have with the technical part.
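That said, a rough sketch of those steps (assuming the remote site actually sends CORS headers, and that a simple selector like '.cover img' - a placeholder, not their real markup - finds the right image) might look like this:

// Hypothetical sketch: fetch the remote page, parse it, pull one image's src out of it.
fetch('http://www.remotewebsite.com/magazines.html')            // placeholder URL
  .then(function (response) { return response.text(); })        // step 2: a stream of text
  .then(function (html) {
    var doc = new DOMParser().parseFromString(html, 'text/html'); // step 3: a Document
    var img = doc.querySelector('.cover img');                   // step 4: your layout knowledge goes here
    if (img) {
      // resolve a possibly relative src against the remote site, then reuse it locally
      var src = new URL(img.getAttribute('src'), 'http://www.remotewebsite.com/').href;
      document.getElementById('Image').src = src;
    }
  })
  .catch(function (err) { console.error('Could not fetch or parse the remote page', err); });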
You can, but it is inefficient. You would have to do a request to load all the HTML of that other page and then in that HTML find the image you are looking for.
It can be achieved (using XMLHttpRequest or fetch), but I would maybe try to find a more efficient way.
What you are asking for is technically possible, and other answers have already gone into the details about how you could accomplish this.
What I'd like to go over in this answer is how you probably should architect this given the requirements that you described. Keep in mind that what I am describing is one way to do this, there are certainly other correct methods as well.
Create a database on the server where your app will live. A simple MySQL DB will work, but you could use anything. Create a table called magazine, with a column url. Your code would pull the url from this DB. Whenever the magazine URL changes, just update the DB and the code itself won't need to be changed.
Your front-end code needs some sort of way to access the DB. One possible solution is a REST API. This code would query the DB for the latest values (in your case magazine URLs) and make them accessible to your web page. This could be done in a myriad of different languages/frameworks; here's a good tutorial on doing something like this in Node.js and Express (which is what I'd personally use).
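A minimal sketch of such an endpoint (assuming Node.js with express and mysql2, the magazine table from step 1 plus an auto-increment id column, and placeholder credentials) could look like this:

// Hypothetical Express endpoint that returns the latest magazine URL from the DB.
var express = require('express');
var mysql = require('mysql2/promise');

var app = express();
var pool = mysql.createPool({ host: 'localhost', user: 'app', password: 'secret', database: 'shop' });

app.get('/api/magazine/latest', async function (req, res) {
  var [rows] = await pool.query('SELECT url FROM magazine ORDER BY id DESC LIMIT 1');
  res.json({ url: rows.length ? rows[0].url : null });
});

app.listen(3000);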
Finally, your front-end code needs to call your REST API to get the updated URLs. This needs to be done with some kind of JavaScript based language. jQuery would make this really easy, something like this:
$(document).ready(function() {
    $.get("http://uri_to_your_rest_api", function(data) {
        $("#myImage").attr("src", data.url);
    });
});
Assuming you had HTML like this:
<img id="myImage" src="">
And there you go - You have a webpage that pulls the image sources dynamically from your database.
Now if you're just dipping your toes into web development, this may seem a bit overwhelming. But I promise you, in the long run it'll be easier than trying to parse code from an HTML page :)
I am writing a Media-HUD that runs totally on local notecard files stored within the prim's inventory, with notecards named like index.html, style.css, icon.svg etc.
My hope is to use the LSL HttpServer functions, and the script's URL to create a totally self-contained media based HUD that is easy to edit like editing any web page.
This is completely possible on its own; however, there is a limitation in that the pages must fit into the memory allocated to the LSL script. Under Mono this is only 64 KB.
I want to remove this limitation by somehow, perhaps from JavaScript, reading in each 'file' from a notecard line by line in the user's browser itself (thus getting around the memory limit by only bringing one notecard line into memory at a time).
Is there a way to do this? Generate an entire file in JavaScript procedurally by loading in the strings making it up line by line, and then serve it as though it were a whole file? I'm not sure how feasible this is.
Any ideas/guidance greatly appreciated!
You could do this through JavaScript using XMLHttpRequest (jQuery's wrapper for this is $.ajax()). You could request each line individually, which would be slightly slower, or read in a number of lines at a time, at the script's leisure. http_request is not throttled, so either works. Note that the loader has to be sent in a single response, because the LSL server has no way of pushing data "piecemeal" like an actual server does.
Notes:
llGetNotecardLine only returns the first 255 bytes per line.
llHTTPResponse must be called within ~20 seconds of the request, so you can't feasibly read more than 20 lines from a notecard at a time.
I'm not sure how this would work for non-DOM filetypes. All files would need to be embeddable in the HTML using the JavaScript DOM. To my knowledge, JavaScript can't arbitrarily create an external file and serve it to itself. Obviously it would not work for non-text filetypes, but you could certainly load in the rest of the HTML, CSS, and HTML5 SVG. Basically, if it can go in a single HTML file, you could load it in through JavaScript.
I have no experience with React but it gives a good example of what is possible on the UI side with loading things in purely through Javascript.
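A rough browser-side sketch, assuming your LSL script answers a hypothetical ?file=...&line=... query with one notecard line per request (that protocol is yours to define in the script; it is not something LSL provides out of the box), might look like this:

// Hypothetical loader: pull a notecard 'file' line by line from the prim's URL
// and assemble it in the browser, so the LSL script never holds the whole file.
function loadNotecard(name, onDone) {
  var lines = [];
  function next(i) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '?file=' + encodeURIComponent(name) + '&line=' + i);
    xhr.onload = function () {
      if (xhr.responseText === 'EOF') {      // sentinel the LSL script would send at end of notecard
        onDone(lines.join('\n'));
      } else {
        lines.push(xhr.responseText);
        next(i + 1);
      }
    };
    xhr.send();
  }
  next(0);
}

// e.g. inject the assembled stylesheet into the loader page
loadNotecard('style.css', function (css) {
  var style = document.createElement('style');
  style.textContent = css;
  document.head.appendChild(style);
});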
So, less than 64 thousand characters in memory at most per script. I will give you some advice that might make your task feasible:
External resources
Minimize the amount of code you have to keep in your notecards by sourcing popular libraries (Bootstrap, React, and such) from the web.
You will have to rely on their mirrors' availability, though. But it will greatly reduce the amount of memory needed to provide pretty and functional pages.
Minify your code
Use tools like UglifyJS or Closure Compiler to make your JavaScript lighter. You must be careful, though, as these tools will fit all your code onto a single long line by default, and you can't read lines longer than 255 characters with LSL. Luckily, you can customize the options in these tools to limit the number of characters per line.
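For example, with the uglify-js Node API you can cap the output line length so every line stays readable by llGetNotecardLine (a sketch; double-check the option names against the version you install):

// Minify but keep output lines under the 255-byte notecard limit.
var UglifyJS = require('uglify-js');
var fs = require('fs');

var result = UglifyJS.minify(fs.readFileSync('script.js', 'utf8'), {
  output: { max_line_len: 240 }   // leave headroom under 255
});
fs.writeFileSync('script.min.js', result.code);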
Divide and conquer
Since a single script can't handle much memory, make dedicated scripts. One could serve resources (act as a file server, providing your html and js) while the other would receive API request calls, to handle the application's logic.
I want to use SCORM (ver. 2004) with multiple HTML pages, and I need to switch between them using location.href.
When I'm using only one HTML file, it works as intended.
When using multiple files and switching between them with location.href, we get no connection on the new page and cannot initialize a new connection because it is already initialized.
Thank you very much for your help.
So the connection already being initialized isn't a big deal. But each page loading and trying to initialize just generates a SCORM warning/error. That technically is a non-actionable error.
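If you do go multi-page, each page can at least guard the call and treat "already initialized" as a no-op. A rough sketch against the SCORM 2004 API object, assuming you already have your own findAPI routine that walks parent/opener windows:

// Hypothetical guard: call Initialize, but don't treat "103 Already Initialized" as fatal.
var api = findAPI(window);   // your own API discovery routine, not shown here
if (api) {
  var ok = api.Initialize('');
  if (ok === 'false' && api.GetLastError() === '103') {
    // the session was already started by a previous page - just keep using it
  }
}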
Cons of this approach
JavaScript has to instantiate on each page - each time. This means it has to pull back down (depending on the features you're using) bookmarking, suspend data, etc...
So this is where mitigating all this becomes problematic.
When do you terminate?
How can you bookmark or support bookmarking?
What happens if curriculum adds or removes a page later?
Can I limit the number of times I try to initialize?
Will the LMS even allow this (since sometimes they salt and pepper values in the query string)?
The shareability barometer on doing this, I'd say, is ripe with failure, and I'd caution against it. Some LMS systems even detect the unload. Can you overcome some of the above - sure. But will you be overtaken by the rest... absolutely.
SCO = Shareable Content Object. And anything that diminishes the shareable part will hurt downstream.
Alternative
Use a single page SCO collection defined in a imsmanifest.xml. See https://github.com/cybercussion/SCOBot/wiki/Single-Pages-Managed-by-LMS-Navigation
Comment
Hope that helps. I was involved with a project a very long time ago where an architect wanted to do things simply like this, and it really requires some added elbow grease to either support single pages managed by the LMS, or an AJAX or IFRAME approach, to do it right.
I have an HTML file with some JavaScript and CSS applied to it.
I would like to duplicate that file, making something like file1.html, file2.html, file3.html, ...
All of that using JavaScript, jQuery, or something like that!
The idea is to create a different page (from that kind of template) that will be printed afterwards with different data in it (from an XML file).
I hope it is possible!
Feel free to ask for more details if you want!
Thank you all in advance.
Note: I do not want to copy only the content, but the entire file.
Edit: I know I should use a server-side language, I just don't have the option ):
There are a couple ways you could go about implementing something similar to what you are describing. Which implementation you should use would depend on exactly what your goals are.
First of all, I would recommend some sort of template system such as VueJS, AngularJS or React. However, given that you say you don't have the option of using a server side language, I suspect you won't have the option to implement one of these systems.
My next suggestion, would be to build your own 'templating system'. A simple implementation that may suit your needs could be something mirroring the following:
In your primary file (root file) which you want to route or copy the other files through to, you could use JS to include the correct HTML files. For example, you could have JS conditionally load a file depending on certain circumstances by putting something like the following after a conditional statement:
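For instance (hypothetical file names, using jQuery's .load() to pull a fragment into a wrapper element):

// Sketch: pick which HTML fragment to pull in, based on some condition of yours.
if (magazineType === 'monthly') {            // hypothetical condition
    $('#content').load('templates/monthly.html');
} else {
    $('#content').load('templates/special.html');
}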
Note that while doing this could optimize your server's data usage (as it would only serve required files and not everything all the time), it would also probably increase loading times. Your site would need to wait for the additional HTTP request to come through and for whatever requested content to load & render on the client. While this sounds very slow it has the potential of not being that bad if you don't have too many discrete requests, and none of your code is unusually large or computationally expensive.
If using vanilla JS, the following snippet will illustrate the above:
In a script that comes loaded with your routing file:
function read(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = show;
    xhr.send();
}
function show() {
    var text = this.response;
    document.body.innerHTML = text; // you can replace document.body with whatever element you want to wrap your imported HTML
}
read('path/to/file/on/server');
Note a couple of things about the above code. If you are testing on your computer (i.e. opening your HTML file in a browser with a path like file://__) without a local server, you will get some sort of cross-origin request error when trying to make an XMLHttpRequest. To bypass this error, either test your code on an actual server (not ideal, constantly pushing code, I know) or, preferably, set up a local testing server. If this is something you would want to explore, it's not that difficult to do; let me know and I'd be happy to walk you through the process.
Alternately, you could implement the above loading system with jQuery and the .load() function. http://api.jquery.com/load/
If none of the above solutions work for you, let me know more specifically what it is that you need, and I'll be happy to give a more useful/ relevant answer!
My script adds some annotations to each page on a site, and it needs a few MBs of static JSON data to know what kind of annotations to put where.
Right now I'm including it with just var data = { ... } as part of the script but that's really awkward to maintain and edit.
Are there any better ways to do it?
I can only think of two choices:
Keep it embedded in your script, but to keep it maintainable (a few megabytes means your editor might not like it much), put it in another file. Then add a compilation step to your workflow to concatenate it. Since you are adding a compilation step, you can also uglify your script so it might be slightly faster to download the first time.
Get it dynamically using JSONP (a minimal sketch follows below). Put it on your web server, Amazon S3 or, even better, a CDN. Make sure it will be server-cacheable and gzipped so it won't slow down the client network by getting downloaded on every page! This solution will work better if you want to update your data regularly, but not your script (I think Tampermonkey doesn't support auto updates).
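A minimal JSONP sketch (assuming the userscript runs with @grant none so it shares the page's JavaScript context; the file name, CDN URL, and annotate function are placeholders):

// The hosted file data.jsonp.js would contain: annotationData({ ... your few MBs of JSON ... });
window.annotationData = function (data) {
  annotate(data);   // hypothetical function that places your annotations
};
var s = document.createElement('script');
s.src = 'https://cdn.example.com/data.jsonp.js';   // placeholder URL
document.head.appendChild(s);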
My bet would definitely be to use the special storage functions provided by Tampermonkey: GM_getValue, GM_setValue, GM_deleteValue. You can store your objects there as long as needed.
Just download the data from your server once, at the first run. If it's just for your own use, you can even simply insert all the data directly into a variable from the console or use a temporary textarea, and have the script save that value with GM_setValue.
This way you can even optimize the speed of your script by having unrelated objects stored in different GM variables.
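A sketch of that caching pattern (assuming a Tampermonkey script granted GM_getValue, GM_setValue, and GM_xmlhttpRequest in its header, and a placeholder data URL):

// ==UserScript== header would need:
// @grant GM_getValue
// @grant GM_setValue
// @grant GM_xmlhttpRequest

var cached = GM_getValue('annotationData', null);
if (cached) {
  run(JSON.parse(cached));                       // later runs: use the stored copy
} else {
  GM_xmlhttpRequest({
    method: 'GET',
    url: 'https://example.com/annotations.json', // placeholder URL
    onload: function (response) {
      GM_setValue('annotationData', response.responseText);   // stored once, reused afterwards
      run(JSON.parse(response.responseText));
    }
  });
}

function run(data) {
  // hypothetical: place the annotations on the page using the cached data
}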