Non-iframe HTML transition when using SCORM - JavaScript

I want to use SCORM (ver. 2004) with multiple HTML pages, and I need to switch between them using location.href.
When I'm using only one HTML file it works as intended.
When using multiple files and switching between them with location.href, we get no connection on the new page and cannot initialize a new connection because it's already initialized.
Thank you very much for your help.

So the connection already being initialized isn't a big deal. But each page loading and trying to initialize just generates a SCORM warning/error. That is technically a non-actionable error.
Cons of this approach
JavaScript has to instantiate on each page, every time. This means it has to pull back down (depending on the features you're using) bookmarking, suspend data, etc. (a sketch of that per-page bootstrap appears below).
So this is where mitigating all this becomes problematic.
When do you terminate?
How can you bookmark or support bookmarking?
What happens if curriculum adds or removes a page later?
Can I limit the number of times I try to initialize?
Will the LMS even allow this (since sometimes they salt and pepper values in the query string)?
On the shareability front, I'd say this approach is ripe for failure and I'd caution against it. Some LMS systems even detect the unload. Can you overcome some of the above? Sure. But will you be overtaken by the rest? Absolutely.
SCO = Sharable Content Object. Anything that diminishes the sharable part will hurt downstream.
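If you do attempt it anyway, each page needs a guarded bootstrap along these lines (a minimal sketch; the findAPI helper and the guard are my assumptions, not part of the original answer):

function findAPI(win) {
    // Walk up the frame hierarchy looking for the SCORM 2004 API object.
    while (!win.API_1484_11 && win.parent && win.parent !== win) {
        win = win.parent;
    }
    return win.API_1484_11 || null;
}

var api = findAPI(window) || (window.opener ? findAPI(window.opener) : null);
if (api) {
    // Initialize returns the string "false" (error 103) when a session is already
    // open, so check the result instead of assuming a fresh connection each page.
    api.Initialize("");
    // Every page then has to pull its state back down again, e.g.:
    var bookmark = api.GetValue("cmi.location");
    var suspend = api.GetValue("cmi.suspend_data");
}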
Alternative
Use a single-page SCO collection defined in an imsmanifest.xml. See https://github.com/cybercussion/SCOBot/wiki/Single-Pages-Managed-by-LMS-Navigation
Comment
Hope that helps. I was involved with a project a very long time ago where an architect wanted to do things simply like this, and doing it right really requires some added elbow grease, whether that's single pages managed by the LMS, an AJAX approach, or an IFRAME approach.

Related

Waiting until the page loads completely in Selenium for Node.js

How can I make my code wait until the page loads completely in Node.js?
I use selenium-webdriver version 4.0.0
const { Builder } = require("selenium-webdriver");

const driver = new Builder().forBrowser("firefox").build();
await driver.get("http://www.tsetmc.com/Loader.aspx?ParTree=15131F");
// here we should wait but how?
There are two concepts here:
Page load time - the time for the page and its content to download (this includes downloading the scripts), then...
Script load time - the time for those scripts to run, fetch more data and populate the page.
Page load time is relatively easy: Selenium waits for it by default, so it might not help much if you're already experiencing issues.
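If you do need more headroom there, selenium-webdriver lets you raise the page-load timeout; a small sketch (the 60-second value is just an assumption, tune as needed):

// Raise the built-in page-load timeout to 60s before navigating.
await driver.manage().setTimeouts({ pageLoad: 60000 });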
For script load time, you need an object to synchronise on.
The selenium docs give a good outline of different wait strategies and provide code examples for different languages.
Looking at your page, you will most likely want to try an explicit wait. In JavaScript this is:
// `By` and `until` are imported from "selenium-webdriver"
await driver.wait(until.elementLocated(By.id('foo')), 30000);
You can update the By identifier to target the LAST item on the page, or use this approach for each item you need to interact with.
The same page also lists other expected conditions you can use, depending on the state you need.
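For instance, you can chain conditions: wait for an element to exist, then for it to become visible (the selector below is hypothetical, not taken from your page):

// Wait for a row to be added to the DOM, then for it to be displayed.
const row = await driver.wait(until.elementLocated(By.css("#main-table tr")), 30000);
await driver.wait(until.elementIsVisible(row), 10000);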
A quick final note: if you're not aware, selenium 4 is still in alpha. You can use it if you need the new features, but they're still being tested and are subject to frequent change. If you don't need selenium 4's new features, you might want to roll back to the latest stable version.

How to attach large amounts of data with a Tampermonkey script?

My script adds some annotations to each page on a site, and it needs a few MBs of static JSON data to know what kind of annotations to put where.
Right now I'm including it with just var data = { ... } as part of the script but that's really awkward to maintain and edit.
Are there any better ways to do it?
I can only think of two choices:
Keep it embedded in your script, but to keep it maintainable (a few megabytes means your editor might not like it much), put it in another file and add a compilation step to your workflow that concatenates the two. Since you are adding a compilation step anyway, you can also uglify your script so it downloads slightly faster the first time.
Get it dynamically using JSONP. Put it on your web server, Amazon S3, or even better, a CDN. Make sure it is served cacheable and gzipped so it won't slow down the client by being downloaded on every page! This solution works better if you want to update your data regularly but not your script (I think Tampermonkey doesn't support auto-updates).
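A rough sketch of what that JSONP option could look like (the URL and callback name are hypothetical):

// The server must wrap the JSON in a call to the named callback.
function loadAnnotations(url) {
    window.handleAnnotations = function (data) {
        // ... build the annotations from `data` here ...
    };
    var s = document.createElement("script");
    s.src = url + "?callback=handleAnnotations";
    document.head.appendChild(s);
}
loadAnnotations("https://cdn.example.com/annotations.js");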
My bet would definitely be to use the special storage functions provided by Tampermonkey: GM_getValue, GM_setValue, GM_deleteValue. You can store your objects there as long as needed.
Just download the data from your server once, at the first run. If it's just for your own use you can even insert all the data directly into a variable from the console, or use a temporary textarea, and have the script save that value with GM_setValue.
This way you can even optimize the speed of your script by keeping unrelated objects in different GM variables.
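A rough sketch of that approach (the URL, key name, and use of fetch are assumptions; cross-origin data may need GM_xmlhttpRequest instead of fetch):

// ==UserScript==
// @grant GM_getValue
// @grant GM_setValue
// ==/UserScript==

// Fetch the JSON once on first run, then serve it from Tampermonkey storage.
var DATA_URL = "https://example.com/annotations.json"; // hypothetical

function withData(callback) {
    var cached = GM_getValue("annotations", null);
    if (cached !== null) {
        callback(JSON.parse(cached));
        return;
    }
    fetch(DATA_URL)
        .then(function (res) { return res.text(); })
        .then(function (text) {
            GM_setValue("annotations", text); // stored as a string; survives reloads
            callback(JSON.parse(text));
        });
}

withData(function (data) {
    // ... place the annotations on the page using `data` ...
});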

$.getImageData not working because img-to-json is down?

I'm trying to do some image processing on external images using Pixastic. I know that I need to use $.getImageData for external images, because otherwise you get DOM Exception 18 due to the canvas being "tainted by cross-origin data". Unfortunately, the appspot service "img-to-json" that $.getImageData uses is down with a "503 Over Quota" error, and has been for several days. I found another service called "img2json" that was actually working yesterday for a bit, but I'm not entirely sure it does the same thing (I think img2json only gives you basic metadata as opposed to actual pixel data). So, my actual questions are:
Are "img2json" and "img-to-json" the same service? If so, maybe I can just
modify the $.getImageData code to use img2json.
Is it common to see appspot applications being down like this? What
are the chances of img-to-json coming back up in the recent future?
If not, how easy would it be to temporarily download these external images to the server, do the image processing on them, then delete them? Is it possible to do that with only javascript?
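On the last point: client-side JavaScript alone can't read pixels from an arbitrary cross-origin image, which is why services like img-to-json exist. The usual workaround is a small image proxy on your own server (a rough Node/Express sketch; this is not the img-to-json implementation, and every name here is assumed):

// Re-serve a remote image from our own origin so a <canvas> that draws it
// is no longer tainted by cross-origin data. Requires Node 18+ for global fetch.
const express = require("express");
const app = express();

app.get("/proxy", async (req, res) => {
    // NOTE: a real proxy must validate req.query.url to avoid being an open relay.
    const upstream = await fetch(req.query.url);
    res.set("Content-Type", upstream.headers.get("content-type"));
    res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(3000);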

How can I manage MSI session state within Javascript Custom Actions?

I have an ISAPI DLL, an add-on to IIS. I build the installer for it using WiX 3.0.
In the installer project, I have a number of custom actions implemented in Javascript. One of them, run at the initiation of the install, stops any IIS websites that are running. Another starts the IIS websites at the end of the install.
This stuff works, the CAs get invoked at the right times and under the right conditions, but the logic is naive. It stops all websites at the beginning (even if they are already stopped) and starts all websites at the end (even if they were previously stopped). This is obviously wrong.
What I'd like to do is keep track in the session of which websites required a stop at the beginning, and then, at the end, only try to restart those websites. Getting the state of a website is easy using the ServerState property on the CIM object. The question I have is, how should I store this information in the MSI session?
It's easy to stuff a single piece of information into a session Property, but what's the best way to store a set of N pieces of information, one for each website? In some cases there can be 1 website, in some cases, 51 websites.
I suppose I could use each distinct website name to create a distinct property name. I'm just not sure that is the best, most efficient, most efficacious way to do things. Also, is it legal to use slashes in the name of an MSI session property? (The website names will have slashes in them.)
Suggestions?
You might want to check out:
VBScript (and Jscript) MSI CustomActions suck
C++ or C# is a much better choice. If your application already has dependencies on the .NET Framework, then adding that dependency to your installer is a logical choice. WiX has Deployment Tools Foundation (DTF), which has a custom action pattern that feels a lot like jscript. You could then create a dictionary of websites and their run state and serialize it out to a single property. On the back side you could reconstitute that collection and then act upon it.
Not to mention the debugging story is MUCH better in DTF.
There's a simple solution. I was having a brain cramp.
All of the items I needed to store were strings - the names of websites that had been stopped during the installation. I just used the JScript Array.join method to create a single string, and then stuffed that into the session property. Like this:
Session.Property("CA_STOPPEDSITES") = sitesThatWereStopped.join(",");
Then, to retrieve that information later in another custom action, I do:
var stoppedSites = Session.Property("CA_STOPPEDSITES");
if (stoppedSites != "") { // unset MSI properties come back as empty strings, not null
    var sitesToStart = stoppedSites.split(",");
    ....
}
Simple, easy.
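For completeness, the custom action at the start of the install could build that list the same way (a sketch using the IIS 6 WMI provider; the names and the ServerState check are assumptions, not from the original answer):

// JScript custom action: stop running sites, remembering which ones we stopped.
var sitesThatWereStopped = [];
var iis = GetObject("winmgmts:{authenticationLevel=pktPrivacy}!root/MicrosoftIISv2");
var sites = new Enumerator(iis.InstancesOf("IIsWebServer"));
for (; !sites.atEnd(); sites.moveNext()) {
    var site = sites.item();
    if (site.ServerState == 2) { // 2 = started
        site.Stop();
        sitesThatWereStopped.push(site.Name); // e.g. "W3SVC/1"
    }
}
Session.Property("CA_STOPPEDSITES") = sitesThatWereStopped.join(",");

Note that site names like "W3SVC/1" contain slashes, which is fine inside a property value; the earlier concern about slashes only applies to property names.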

What are benefits of serving static HTML and generating content with AJAX/JSON?

https://urbantastic-blog.tumblr.com/post/81336210/tech-tuesday-the-fiddly-bits/amp
Heath from Urbantastic writes about his HTML generation system:
All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript. Put another way, the server software for Urbantastic produces and consumes JSON exclusively. HTML, CSS, Javascript, and images are all sent via a different service (a vanilla Nginx server).
I think this is an interesting model, as it physically separates presentation from data. I am not an expert in architecture, but it seems like there would be a jump in efficiency and stability.
However, the following concerns me:
[subjective] Clojure is extremely powerful; JavaScript is not. Writing all the content generation in a language created for other goals will create some pain (imagine writing JavaScript-style code in CSS). Unless he has a macro system for generating JavaScript, Heath probably faces constant switching between JavaScript and Clojure. He'll also have a lot of JS code, probably a lot more than Clojure. That might not be good in terms of power, rapid development, succinctness and all the things we look for when switching to LISP-based languages.
[performance] I am not sure about this, but rendering everything on the user's machine might lag.
[accessibility] If you have JS disabled you can't use the site at all.
[accessibility#2] I suspect that a lot of dynamic data filling with JavaScript will create cross-browser issues.
Can anyone comment? I'd be interested in reading your opinions on this type of architecture.
References:
Link to discussion on HN.
Link to discussion on /r/programming.
"All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript."
I think that's the standard model of an RIA. The emphasized word seems to be 'All' here, because on many websites a lot of the dynamic content is still not obtained through Ajax; only key features are.
I don't think rendering would be a major bottleneck unless you have a huge page with a lot of elements.
JS accessibility is indeed a problem. But then, users who want to experience AJAX must have JS enabled. Have you done a survey of how many of YOUR users don't have it enabled?
The advantage is that you can serve 99% (by weight) of the content through a CDN (like Akamai) or even put it on external storage (e.g. S3). Serving only the JSON, it's almost impossible for a site to get slashdotted.
When AJAX began to hit it big in late 2005, I wrote a client-side template engine and basically turned my Blogger template into a fully fledged AJAX experience.
The thing is, that template stuff was really easy to implement, and it eliminated a lot of the grunt work.
Here's how it was done.
<div id="blogger-post-template">
<h1><span id="blogger-post-header"/></h1>
<p><span id="blogger-post-body"/><p>
<div>
And then in JavaScript:
var template; // cached master copy of the markup, kept across calls

var response = // <- AJAX response
var container = document.getElementById("blogger-post-template");
if (!template) { // first run: capture a pristine copy before we clear anything
  template = container.cloneNode(true); // deep clone
}
// clear container (note: remove from `container`, not `template`)
while (container.firstChild) {
  container.removeChild(container.firstChild);
}
container.appendChild(instantiate(template, response));
The instantiate function makes a deep clone of the template, then searches the clone for identifiers to replace with data found in the response. The end result is a populated DOM tree that was originally defined in HTML. If I had more than one result, I just looped through the above code.
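The instantiate function itself isn't shown; here is a sketch of what it might look like (assuming the response keys match the span ids, e.g. "blogger-post-header"):

function instantiate(template, response) {
    var clone = template.cloneNode(true); // deep clone; leave the master untouched
    for (var key in response) {
        var slot = clone.querySelector("#" + key);
        if (slot) {
            slot.textContent = response[key]; // swap the placeholder for the data
        }
    }
    return clone;
}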
