I am currently getting into web development. I have a set of views (different versions of the site that the user will be able to see), and many of those allow some interaction that is JS/Ajax based. This is just the context of this question:
Where can I put the request URLs of the various ajax requests?
I know this question seems a little stupid, so let me explain a bit. I assume jQuery, but the question is not strictly related to it. I will try to give very minimalistic snippets to show the idea; these are of course not 1:1 correct/finished/good.
Typically such a site has not just one single type of request but a whole bunch of them. Think of a site where the user sees his personal data like name, mail, address, phone, etc. On clicking one such entry, a minimal form should be displayed to allow modification of the entry. Of course you need minor changes in the replacements (e.g. to distinguish between changing the name and changing the phone number).
My first approach was to write Ajax code for each and every possible entry separately in a JS file. I mean that each entry gets its own HTML id and I just replace the content of the element with that id with the new content. Writing code for each id explicitly in JS causes quite some redundancy (although a well designed set of functions will help here):
$("#name").click(function(){ /* replace #name, hardcode url */});
$("#phone").click(function(){ /* replace #phone, hardcode url */});
One other way was to put an <a> tag with the href set to the URL of the Ajax request. The developer can then define some classes that need to follow a defined and fixed scheme. The JS code gets smaller in size, as only a single event handler must be registered, but I need to follow the convention throughout the site.
<div class='foo'>... <a href="ajax.php?first" class="ajax"></div>
<div class='foo'>... <a href="ajax.php?second" class="ajax"></div>
and the simplified JS:
$(".foo a.ajax").click(function(ev){ /* do something and use source of ev to fetch the url */ });
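To make the idea a bit more concrete, a fuller (still simplified) version of that single handler could look roughly like this; the replacement logic is just a placeholder:

$(".foo a.ajax").click(function(ev){
    ev.preventDefault();                  // don't follow the link normally
    var url = $(this).attr("href");       // the request URL lives in the markup
    var box = $(this).closest(".foo");    // the block whose content gets replaced
    $.get(url, function(html){
        box.html(html);                   // swap in whatever the server returned
    });
});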
This second method could be done even worse: you could put the URL in any HTML tag and hide it from the user (scary).
Ideally one should write the page such that all interaction that is Ajax-enabled is also doable with JS disabled. Thus I think the way of putting the URLs in <a> tags is not good. However, I think hardcoding them is not ideal either.
So did I miss a useful/typical way of doing this? Is there even some consensus on where such data is best located?
If your website is big enough, you should separate your URLs based on modules such as banking, finance, user, etc. But if you do not have that many URLs, you can store all of them in a single JavaScript file.
You should store the BASE url in a single JavaScript file that everything else imports (in case your domain changes or you switch between development and production mode).
//base_url.js
var BASE_URL_PROD = "www......com"; // production server url
var BASE_URL_DEV = "localhost:3000"; // local server url
var BASE_URL = BASE_URL_DEV; // change this if you are on dev or prod mode.
// urls.js
var FETCH_USER = BASE_URL + "/user/fetch";
var SAVE_USER = BASE_URL + "/user/save";
// in some javascript class
$("#clickMe").click(function(){ $.ajax({url: FETCH_USER /* ... */}); });
The question here is: do you want to offer a way to access the information, if javascript is turned off or not loaded yet?
You already answered yourself: If javascript is disabled or not loaded yet, the user will directly go to the given url.
If you want to offer a non-JavaScript way, change your controller and check whether the request is an Ajax request, or just use the JavaScript way, as Abdullah already described.
I am creating a firefox extension that lets the operator perform various actions that modify the content of the HTML document. The operator does not edit HTML, they take other actions and my extension modifies the document by inserting elements, adding attributes, and so forth.
When the operator is finished, they need to be able to save the HTML document as a file (or have my extension send it to an internet destination, but this is not required since they can email the saved file).
I thought maybe the changes made by the javascript code in my extension would be reflected in the HTML document, but when I ask the firefox browser to "view source" after making modifications, it displays the original HTML text.
My questions are:
#1: What is the easiest way for the operator to save the HTML document with all the changes my extension has made?
#2: What is the easiest way for the javascript code in my extension to process the HTML document contents and write to an HTML file on the local disk?
#3: Is any valid HTML content incapable of accurate representation in the saved file?
#4: Is the TreeWalker part of the solution (see below)?
A couple observations from my research so far:
I've read about the TreeWalker object, which seems to provide a fairly painless way for an extension to walk through everything (?or almost everything?) in the HTML document. But does it expose everything so everything in the original (and my modifications) can be saved without losing anything of importance?
Does the TreeWalker walk through the HTML document in the "correct order" --- the order necessary for my extension to generate the original and/or modified HTML document?
Anything obscure or tricky about these problems?
OK, so I am assuming here that you have access to the page DOM. What you need to do is basically make your changes to the DOM and then get all the DOM code and save it as a file. Here is how you can download the page's HTML code. This will create an a tag which the user needs to click for the file to download.
var a = document.createElement('a'), code = document.querySelectorAll('html')[0].innerHTML;
a.setAttribute('download', 'filename.html');
a.setAttribute('href', 'data:text/html,' + encodeURIComponent(code)); // encode so characters like # don't cut the data URL short
Now you can insert this a tag anywhere in the DOM and the file will download when the user clicks it.
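For example, one (made-up) way to attach it:

a.textContent = 'Download modified page';   // the link text is just a placeholder
document.body.appendChild(a);               // e.g. stick it at the end of the body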
Note: This is sort of a hack; it injects the entire HTML of the page into the a tag's href. It should in theory work in any up-to-date browser (except, surprise, IE). There are more stable and less hacky ways of doing it, like storing the content in a File System API file and then downloading that file instead.
Edit: The document.querySelectorAll line accesses the page DOM. For it to work, the document must be accessible. You say you are modifying the DOM, so access should already be there. Make sure you are adding the code on the page and not in your extension code: this code belongs in the same place as your DOM modification code, not in your extension pages that can't access the DOM.
As for the a tag, it will be inserted in the page. I skipped those steps since I assumed you already know how to manipulate the DOM, and also because I don't know where you would like to add the link. You can also skip the user action of clicking the link, but it's a hack and only works in modern browsers: insert the a tag somewhere in the original page where the user won't see it and then call a.click() to simulate a click event on the link. This is not a legit way, though, and I personally only use it on my practice projects to trigger click event listeners.
I can only test this on Chrome, not on FF, but try this code; it will not even require you to add the a link to the DOM. You need to add it next to the DOM manipulation code. It will work if luck is on your side :)
var a = document.createElement('a'), code = document.querySelectorAll('html')[0].innerHTML;
a.setAttribute('download', 'filename.html');
a.setAttribute('href', 'data:text/html,' + encodeURIComponent(code)); // encode so characters like # don't cut the data URL short
a.click();
There is no easy way to do this with the web API only, at least if you want a result that does not omit things like the doctype or comments. You could still write a serializer yourself that goes through document.childNodes and serializes according to the node type (Element.outerHTML, Comment.data and so on).
Luckily, you're writing a Firefox add-on, so you have access to a lot more (powerful) stuff.
While still not 100% perfect, the nsIDocumentEncoder implementations will produce pretty decent results that should differ only in some whitespace and an explicit charset declaration at most (everything else is a bug).
Here is an example on how one might use this component:
function serializeDocument(document) {
  const {
    classes: Cc,
    interfaces: Ci,
    utils: Cu
  } = Components;
  let encoder = Cc['@mozilla.org/layout/documentEncoder;1?type=text/html'].createInstance(Ci.nsIDocumentEncoder);
  encoder.init(document, 'text/html', Ci.nsIDocumentEncoder.OutputLFLineBreak | Ci.nsIDocumentEncoder.OutputRaw);
  encoder.setCharset("utf-8");
  return encoder.encodeToString();
}
If you're writing an SDK add-on, things get more complicated, as the SDK abstracts some important pieces away. You'll need to go through the chrome module, and also figure out the active window and tab yourself. Something like Services.wm.getMostRecentWindow("navigator:browser").content.document (Services.jsm) should do the trick.
In XUL overlay add-ons, content.document should suffice to get the document of the currently active tab, and you have Components access already.
Still, you need to let the user choose a file destination, usually through nsIFilePicker, and then actually write the file, using something like a file stream or the fully async OS.File API.
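A rough sketch of how those two steps could fit together, assuming chrome-privileged code (the function and variable names are made up, and these legacy APIs may differ between Firefox versions):

Components.utils.import("resource://gre/modules/osfile.jsm");

function saveHtmlToFile(parentWindow, html) {
  const { classes: Cc, interfaces: Ci } = Components;
  let fp = Cc["@mozilla.org/filepicker;1"].createInstance(Ci.nsIFilePicker);
  fp.init(parentWindow, "Save modified page", Ci.nsIFilePicker.modeSave);
  fp.appendFilters(Ci.nsIFilePicker.filterHTML);
  let rv = fp.show(); // synchronous picker; an async open() variant also exists
  if (rv == Ci.nsIFilePicker.returnOK || rv == Ci.nsIFilePicker.returnReplace) {
    OS.File.writeAtomic(fp.file.path, html, { encoding: "utf-8" }); // async write
  }
}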
Looks like I get to answer my own question, thanks to someone in mozilla #extdev IRC.
I got totally faked out by "view source". When I didn't see my modifications in the window displayed by "view source", I assumed the browser would not provide the information.
However, guess what? When I "file" ===>> "save page as...", then examine the page contents with a plain text editor... sure enough, that contained the modifications made by my firefox extension! Surprise!
A browser has no direct write access to the local filesystem. The only read access it has is when you explicitly provide a file:// URL (see note 1 below).
In your case, we are explicitly talking about javascript - which can read and write cookies and local storage. It can also send stuff back to the server and retrieve it, e.g. using AJAX.
Stuff you put in local storage/cookies is effectively not accessible to other programs (such as email clients).
It is possible to create very long mailto: URLs (see note 2), but mailto: only handles inline content in the email, and you're going to run into all sorts of encoding issues that you're not ready to deal with.
Hence I'd recommend pursuing server-side storage via AJAX - and look at local storage once you've got that sorted/working.
Note 1: this is not strictly true. A trusted, signed JavaScript has access to additional functions, which may include direct file access.
Note 2: the limit depends on the browser and the email client - Lotus Notes truncates the content rather a lot.
Is there a way to force the clients of a webpage to reload the cache (i.e. images, javascript, etc.) after an update to the code base has been pushed to the server? We get a lot of help desk calls asking why certain functionality no longer works. A simple hard refresh fixes the problems, as it downloads the newly updated javascript file.
For specifics we are using Glassfish 3.x. and JSF 2.1.x. This would apply to more than just JSF of course.
To describe what behavior I hope is possible:
Website A has two images and two javascript files. A user visits the site and the 4 files get cached. As far as I'm concerned, no need to "re-download" said files unless user specifically forces a "hard" refresh or clears their cache. Once a site is pushed an update to one of the files, the server could have some sort of metadata in the header informing the client of said update. If the client chooses, the new files would be downloaded.
What I don't want to do is put meta-tag in the header of a page to force nothing from ever being cached...I just want something that tells the client an update has occurred and it should get the latest once something has been updated. I suppose this would just be some sort of versioning on the client side.
Thanks for your time!
The correct way to handle this is with changing the URL convention for your resources. For example, we have it as:
/resources/js/fileName.js
To get the browser to still cache the file, but to do it the proper way with versioning, you add something to the URL. Adding a value to the querystring doesn't allow caching, so the place to put it is after /resources/.
A reference for querystring caching: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.9
So for example, your URLs would look like:
/resources/1234/js/fileName.js
So what you could do is use the project's version number (or some value in a properties/config file that you manually change when you want cached files to be reloaded) since this number should change only when the project is modified. So your URL could look like:
/resources/cacheholder${project.version}/js/fileName.js
That should be easy enough.
The problem now is with mapping the URL, since that value in the middle is dynamic. The way we overcame that is with a URL rewriting module that allowed us to filter URLs before they got to our application. The rewrite watched for URLs that looked like:
/resources/cacheholder______/whatever
And removed the cacheholder_______/ part. After the rewrite, it looked like a normal request, and the server would respond with the correct file, without any other specific mapping/logic...the point is that the browser thought it was a new file (even though it really wasn't), so it requested it, and the server figures it out and serves the correct file (even though it's a "weird" URL).
Of course, another option is to add this dynamic string to the filename itself, and then use the rewrite tool to remove it. Either way, the same thing is done - targeting a string of text during rewrite, and removing it. This allows you to fool the browser, but not the server :)
UPDATE:
An alternative that I really like is to set the filename based on the contents, and cache that. For example, that could be done with a hash. Of course, this type of thing isn't something you'd manually do and save to your project (hopefully); it's something your application/framework should handle. For example, in Grails, there's a plugin that "hashes and caches" resources, so that the following occurs:
Every resource is checked
A new file (or mapping to this file) is created, with a name that is the hash of its contents
When adding <script>/<link> tags to your page, the hashed name is used
When the hash-named file is requested, it serves the original resource
The hash-named file is cached "forever"
What's cool about this setup is that you don't have to worry about caching correctly - just set the files to cache forever, and the hashing should take care of files/mappings being available based on content. It also provides the ability for rollbacks/undos to already be cached and loaded quickly.
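As an illustration only (this is not the Grails plugin; the file handling and naming are hypothetical), a small Node.js build step could produce such hash-based names:

var fs = require('fs');
var path = require('path');
var crypto = require('crypto');

// Copy a resource to a name derived from a hash of its contents and
// return the new name, to be referenced from <script>/<link> tags.
function hashAndCopy(srcFile, outDir) {
  var contents = fs.readFileSync(srcFile);
  var hash = crypto.createHash('md5').update(contents).digest('hex').slice(0, 12);
  var ext = path.extname(srcFile);
  var hashedName = path.basename(srcFile, ext) + '-' + hash + ext;
  fs.writeFileSync(path.join(outDir, hashedName), contents);
  return hashedName;
}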
I use a no-cache parameter for these situations...
I have a string constant value (from a config file) like
$no_cache = "v11";
and in my pages, I use assets like
<img src="a.jpg?nc=$no_cache">
and when I update my code, I just change the $no_cache value, and it works like a charm.
I know it's impossible to hide source code but, for example, if I have to link a JavaScript file from my CDN to a web page and I don't want the people to know the location and/or content of this script, is this possible?
For example, to link a script from a website, we use:
<script type="text/javascript" src="http://somedomain.example/scriptxyz.js">
</script>
Now, is it possible to hide from the user where the script comes from, or to hide the script content and still use it on a web page?
For example, would saving it on my private CDN that needs a password to access files work? If not, what would work to get what I want?
Good question with a simple answer: you can't!
JavaScript is a client-side programming language, therefore it works on the client's machine, so you can't actually hide anything from the client.
Obfuscating your code is a good solution, but it's not enough, because, although it is hard, someone could decipher your code and "steal" your script.
There are a few ways of making your code hard to be stolen, but as I said nothing is bullet-proof.
Off the top of my head, one idea is to restrict access to your external js files from outside the page you embed your code in. In that case, if you have
<script type="text/javascript" src="myJs.js"></script>
and someone tries to access the myJs.js file in browser, he shouldn't be granted any access to the script source.
For example, if your page is written in PHP, you can include the script via the include function and let the script decide if it's "safe" to return its source.
In this example, you'll need the external "js" (written in PHP) file myJs.php:
<?php
$URL = $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
if ($URL != "my-domain.example/my-page.php")
die("/* sorry, no access rights */");
?>
// your obfuscated script goes here
that would be included in your main page my-page.php:
<script type="text/javascript">
<?php include "myJs.php"; ?>;
</script>
This way, only the browser could see the js file contents.
Another interesting idea is that at the end of your script, you delete the contents of your dom script element, so that after the browser evaluates your code, the code disappears:
<script id="erasable" type="text/javascript">
//your code goes here
document.getElementById('erasable').innerHTML = "";
</script>
These are all just simple hacks that cannot, and I can't stress this enough: cannot, fully protect your js code, but they can sure piss off someone who is trying to "steal" your code.
Update:
I recently came across a very interesting article written by Patrick Weid on how to hide your js code, and he reveals a different approach: you can encode your source code into an image! Sure, that's not bullet proof either, but it's another fence that you could build around your code.
The idea behind this approach is that most browsers can use the canvas element to do pixel manipulation on images. And since a canvas pixel is represented by 4 values (rgba), each in the range 0-255, you can store a character (actually its ASCII code) in every pixel. The rest of the encoding/decoding is trivial.
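Just to sketch the decoding half (this assumes the encoder stored one ASCII code per pixel in the red channel with a zero value as an end marker, and a lossless image format - the article itself may encode things differently):

function decodeScriptFromImage(img) {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  var data = ctx.getImageData(0, 0, img.width, img.height).data; // rgba: 4 values per pixel
  var source = '';
  for (var i = 0; i < data.length; i += 4) {
    if (data[i] === 0) break;            // stop marker
    source += String.fromCharCode(data[i]);
  }
  return source;                         // inject into a <script> tag or eval it
}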
The only thing you can do is obfuscate your code to make it more difficult to read. No matter what you do, if you want the javascript to execute in their browser they'll have to have the code.
Just off the top of my head, you could do something like this (if you can create server-side scripts, which it sounds like you can):
Instead of loading the script like normal, send an AJAX request to a PHP page (it could be anything; I just use PHP myself). Have the PHP locate the file (maybe on a non-public part of the server), open it with file_get_contents, and return (read: echo) the contents as a string.
When this string returns to the JavaScript, have it create a new script tag, populate its innerHTML with the code you just received, and attach the tag to the page. (You might have trouble with this; innerHTML may not be what you need, but you can experiment.)
If you do this a lot, you might even want to set up a PHP page that accepts a GET variable with the script's name, so that you can dynamically grab different scripts using the same PHP. (Maybe you could use POST instead, to make it just a little harder for other people to see what you're doing. I don't know.)
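For what it's worth, the browser side of that could look roughly like this (the endpoint name and parameter are made up; note that a script element takes its code via the .text property rather than innerHTML):

$.get('get_script.php', { name: 'scriptxyz' }, function (code) {
  var tag = document.createElement('script');
  tag.text = code;                 // assigning the code and attaching the tag runs it
  document.head.appendChild(tag);
});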
EDIT: I thought you were only trying to hide the location of the script. This obviously wouldn't help much if you're trying to hide the script itself.
Google Closure Compiler, YUI Compressor, Minify, /packer/... etc. are options for compressing/obfuscating your JS code. But none of them can help you hide your code from users.
Anyone with decent knowledge can easily decode/de-obfuscate your code using tools like JS Beautifier. You name it.
So the answer is, you can always make your code harder to read/decode, but for sure there is no way to hide.
Forget it, this is not doable.
No matter what you try, it will not work. All a user needs to do to discover your code and its location is to look in the Net tab in Firebug, or use Fiddler to see what requests are being made.
From my knowledge, this is not possible.
Your browser has to have access to JS files to be able to execute them. If the browser has access, then browser's user also has access.
If you password protect your JS files, then the browser won't be able to access them, defeating the purpose of having JS in the first place.
I think the only way is to put the required data on the server and allow only a logged-in user to access it as required (you can also do some calculations server-side). This won't protect your JavaScript code, but it makes it inoperable without the server-side code.
I agree with everyone else here: With JS on the client, the cat is out of the bag and there is nothing completely foolproof that can be done.
Having said that; in some cases I do this to put some hurdles in the way of those who want to take a look at the code. This is how the algorithm works (roughly)
The server creates 3 hashed and salted values: one for the current timestamp, and the other two for each of the next 2 seconds. These values are sent to the client as a comma-delimited string via Ajax, from my PHP module. In some cases, I think you can hard-bake these values into a script section of the HTML when the page is formed, and delete that script tag once the hashes have been used. The server is CORS protected and does all the usual SERVER_NAME etc. checks (which is not much of a protection, but at least provides some modicum of resistance to script kiddies).
It would also be nice if the server checks that it was indeed an authenticated user's client doing this.
The client then sends the same 3 hashed values back to the server through an Ajax call to fetch the actual JS that I need. The server checks the hashes against the current timestamp there... The three values ensure that the data is sent within the 3-second window, to account for latency between the browser and the server.
The server needs to be convinced that one of the hashes is matched correctly; and if so it would send over the crucial JS back to the client. This is a simple, crude "one time use password" without the need for any database at the back end.
This means that any hacker has only the 3-second window since the generation of the first set of hashes to get to the actual JS code.
The entire client code can be inside an IIFE, so some of the variables inside the client are even harder to read from the Inspector console.
This is not any deep solution: A determined hacker can register, get an account and then ask the server to generate the first three hashes; by doing tricks to go around Ajax and CORS; and then make the client perform the second call to get to the actual code -- but it is a reasonable amount of work.
Moreover, if the salt used by the server is based on the login credentials, the server may be able to detect which user tried to retrieve the sensitive JS (the server needs to do some additional work regarding the behaviour of the user AFTER the sensitive JS was retrieved, and block the person if they, for example, did not do some other activity which was expected).
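A minimal client-side sketch of that two-request handshake (the endpoint names are invented, and all of the server-side checks described above are assumed):

(function () {                                     // IIFE, as mentioned above
  $.get('/get_hashes.php', function (hashes) {     // e.g. "h1,h2,h3"
    $.get('/get_script.php', { h: hashes }, function (code) {
      var tag = document.createElement('script');
      tag.text = code;                             // run the sensitive JS once received
      document.head.appendChild(tag);
    });
  });
})();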
An old, crude version of this was done for a hackathon here: http://planwithin.com/demo/tadr.html That will not work if the server detects too much latency and the exchange goes beyond the 3-second window.
As I said in the comment I left on gion_13 answer before (please read), you really can't. Not with javascript.
If you don't want the code to be available client-side (= stealable without great effort), my suggestion would be to use PHP (or ASP, Python, Perl, Ruby, JSP + Java Servlets), i.e. something that is processed server-side so that only the results of the computation/code execution are served to the user. Or, if you prefer, even Flash or a Java applet that does the computation/code execution client-side but is compiled and thus harder to reverse-engineer (not impossible, though).
Just my 2 cents.
You can also set up a mime type for application/JavaScript to run as PHP, .NET, Java, or whatever language you're using. I've done this for dynamic CSS files in the past.
I know this is the wrong time to be answering this question, but I just thought of something.
I know it might be stressful, but at least it might still work.
The trick is to create a lot of server-side encoding scripts. They have to be decodable (for example, a script that replaces all vowels with numbers and adds the letter 'a' to every consonant, so that the word 'bat' becomes 'ba1ta'). Then create a script that randomizes between the encoding scripts and creates a cookie with the name of the encoding script being used (quick tip: try not to use the actual name of the encoding script as the cookie value; for example, if our cookie is named 'encoding_script_being_used' and the randomizing script chooses an encoding script named MD10, do not use MD10 as the value of the cookie but something like 'encoding_script4567656', just to prevent guessing). After the cookie has been created, another script will check for the cookie named 'encoding_script_being_used', get its value, and determine which encoding script is being used.
The reason for randomizing between the encoding scripts is that the server-side language will randomize which script it uses to encode your javascript.js, and then create a session or cookie recording which encoding script was used.
The server-side language will then also encode your javascript.js and put it in a cookie.
So let me summarize with an example:
PHP randomizes between a list of encoding scripts and encrypts javascript.js, then it creates a cookie telling the client-side code which encoding script was used; the client-side code then decodes the javascript.js cookie (which is obviously encoded),
so people can't steal your code.
But I would not advise this, because:
it is a long process;
it is too stressful.
Use NW.js - I think it is helpful. It can compile your code to a binary, which you can then use to make Windows, Mac, and Linux applications.
This method partially works if you do not want to expose the most sensitive part of your algorithm.
Create WebAssembly modules (.wasm), import them, and expose only your JS workflow, etc. In this way the algorithm is protected, since it is extremely difficult to turn assembly code back into a more human-readable format.
After having produced the wasm module and imported it correctly, you can use your code as you normally do:
<body id="wasm-example">
  <script type="module">
    import init from "./pkg/glue_code.js";
    init().then(() => {
      console.log("WASM Loaded");
    });
  </script>
</body>
I currently have the following JavaScript function that will take current URL and concatenate it to another site URL to route it to the appropriate feedback group:
function sendFeedback() {
  var url = window.location.href;
  var newwin = window.open('http://www.anothersite.com/home/feedback/?s=' + url, 'Feedback');
}
Not sure if this is the proper terminology, but I want to mask the URL in the window.open statement to use the URL from the current window.
How would I be able to mask the window.open URL with the original in JavaScript?
Things you could do:
1- Mask the external site in an HTML frame inside a document from your site
(for example www.mysite.com/shortUrl/).
2- Send a Location HTTP header (the real URL will eventually be displayed).
Keep in mind that browsers do their best to show the real address due to phishing concerns.
I wouldn't use JavaScript if I wanted to mask a URL, even though it would work with JavaScript. You wouldn't get many benefits in that scenario.
The reason is simple:
javascript/jQuery = functions belonging to the client side (browser/your PC/DOM)
links, URLs, HTTP, and headers = functions belonging to Apache.
Apache always sits above the client side. Whenever a link is fired to SampleLink.html, Apache wakes up and reads the file, but links/URLs are already owned before JavaScript could claim them. So it is kind of pointless to try to manipulate links in your JavaScript scripts; it works, but it is weak.
I'd point you to this awesome approach: .htaccess - you will be surprised how powerful it is. If a .htaccess file is present in the parent folder of SampleLink.html, Apache keeps the DOM engine (your browser) from reading files until Apache has finished reading .htaccess.
With your scenario, .htaccess can do some work for you by rewriting links and sending "decoy" links to the DOM engine, meanwhile keeping the original links/URLs behind the curtain; visitors would reach a 404 page if they tried to break the app or whatever you are concerned about.
This is a bit complicated, but it has never failed me. I use this as my "bible": http://corz.org/serv/tricks/htaccess2.php.
I want to make an app that can display on any webpage, just like how Disqus or IntenseDebate render on articles & web pages.
It will display a mini-ecommerce store front.
I'm not sure how to get started.
Is there any sample code, framework, or design pattern for these "widgets"?
For example, I'd like to display products.
Should I first create a webservice or RSS that lists all of them?
Or can one of these Ajax Scripts simply digest an XHTML webpage and display that?
thanks for any tips, I really appreciate it.
Basically you have two options - to use iframes to wrap your content or to use DOM injection style.
IFRAMES
Iframes are the easy ones - the host site includes an iframe whose URL includes all the necessary params.
<p>Check out this cool webstore:</p>
<iframe src="http://yourdomain.com/store?site_id=123"></iframe>
But this comes with a cost - there's no easy way to resize the iframe to fit its content. You're pretty much stuck with the initial dimensions. You can come up with some kind of cross-frame script that measures the size of the iframe contents and forwards it to the host site, which resizes the iframe based on the numbers from the script, but this is really hacky.
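For example, such a cross-frame script could be based on postMessage (just a sketch; the element id and the '*' target origin are placeholders you would want to lock down):

// inside the iframe document:
window.parent.postMessage({ height: document.body.scrollHeight }, '*');

// on the host page:
window.addEventListener('message', function (ev) {
  var frame = document.getElementById('store-frame'); // hypothetical iframe id
  if (ev.data && ev.data.height) {
    frame.style.height = ev.data.height + 'px';
  }
});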
DOM injection
Second approach is to "inject" your own HTML directly into the host page. The host site loads a <script> tag from your server, and the script includes all the information needed to add HTML to the page. There are two approaches - the first one is to generate all the HTML on your server and use document.write to inject it.
<p>Check out this cool webstore:</p>
<script src="http://yourdomain.com/store?site_id=123"></script>
And the script source would be something like
document.write('<h1>Amazing products</h1>');
document.write('<ul>');
document.write('<li>green car</li>');
document.write('<li>blue van</li>');
document.write('</ul>');
This replaces the original <script> tag with the HTML inside document.write calls and the injected HTML comes part of the original page - so no resizing etc problems like with iframes.
<p>Check out this cool webstore:</p>
<h1>Amazing products</h1>
<ul>
<li>green car</li>
<li>blue van</li>
</ul>
Another approach to the same thing is separating the data from the HTML. The included script would consist of two parts - the drawing logic and the data in serialized form (i.e. JSON). This gives the script a lot more flexibility compared to the rather rigid document.write approach. Instead of outputting HTML directly to the page, you generate the needed DOM nodes on the fly and attach them to a specific element.
<p>Check out this cool webstore:</p>
<div id="store"></div>
<script src="http://yourdomain.com/store_data?site_id=123"></script>
<script src="http://yourdomain.com/generate_store"></script>
The first script consists of the data and the second one the drawing logic.
var store_data = [
{title: "green car", id:1},
{title: "blue van", id:2}
];
The script would be something like this
var store_elm = document.getElementById("store");
for(var i = 0; i < store_data.length; i++){
  var link = document.createElement("a");
  link.href = "http://yourdomain.com/?id=" + store_data[i].id;
  link.innerHTML = store_data[i].title;
  store_elm.appendChild(link);
}
Though a bit more complicated than document.write, this approach is the most flexible of them all.
Sending data
If you want to send some kind of data back to your server, then you can use script injection (you can't use AJAX because of the same-origin policy, but there are no such restrictions on script injection). This consists of putting all the data into the script URL (remember, IE has a limit of 4kB for the URL length) and having the server respond with the needed data.
var message = "message to the server";
var header = document.getElementsByTagName('head')[0];
var script_tag = document.createElement("script");
var url = "http://yourserver.com/msg";
script_tag.src = url+"?msg="+encodeURIComponent(message)+"&callback=alert";
header.appendChild(script_tag);
Now your server gets the request with GET params msg=message to the server and callback=alert does something with it, and responds with
<?php
$l = strlen($_GET["msg"]);
echo $_GET["callback"].'("request was '.$l.' chars");';
?>
Which would make up
alert("request was 21 chars");
If you change the alert for some kind of your own function then you can pass messages around between the webpage and the server.
I haven't done much with either Disqus or IntenseDebate, but I do know how I would approach making such a widget. The actual displaying portion of the widget would be generated with JavaScript. So, say you had a div tag with an id of commerce_store. Your JavaScript code would search the document when it is first loaded (or when an ajax request alters the page) and check whether a commerce_store div exists. Upon finding such a container, it would auto-generate all the necessary HTML. If you don't already know how to do this, you can google 'dynamically adding elements in javascript'. I recommend making a custom JavaScript library for your widget. It doesn't need to be anything too crazy. Something like this:
window.onload = function(){
  widget.load();
};
var widget = new function(){
this.load = function(){
//search for the commerce_store div
//get some data from the sql database
var dat = ajax('actions/getData.php',{type:'get',params:{page:123}});
//display HTML for data
for (var i in dat){
this.addDatNode(dat[i]);
}
}
this.addDatNode = function(stuff){
//make a node
var n = document.createElement('div');
//style the node, and add content
n.innerHTML = stuff;
document.getElementById('commerce_store').appendChild(n);
}
}
Of course, you'll need to set up some type of AJAX framework to get database info and things. But that shouldn't be too hard.
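For what it's worth, the ajax() helper used in the sketch above could be as simple as something like this (hypothetical, and synchronous only for brevity - a real version would be asynchronous or use a library):

function ajax(url, options) {
  var xhr = new XMLHttpRequest();
  var query = [];
  for (var key in options.params) {
    query.push(encodeURIComponent(key) + '=' + encodeURIComponent(options.params[key]));
  }
  xhr.open(options.type || 'get', url + '?' + query.join('&'), false); // false = synchronous
  xhr.send(null);
  return JSON.parse(xhr.responseText); // assumes the PHP side echoes JSON
}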
For Disqus and IntenseDebate, I believe the comment forms and everything are all just HTML (generated through JavaScript). The actual 'plugin' portion of the script would be a background framework of either ASP, PHP, SQL, etc. The simplest way to do this, would probably just be some PHP and SQL code. The SQL would be used to store all the comments or sales info into a database, and the PHP would be used to manipulate the data. Something like this:
function addSale(){ //MySQL code here };
function deleteSale(){ //MySQL code here };
function editSale(){ //MySQL code here };
//...
And your big PHP file would have all of the actions your widget would ever need to do (in regards to altering the database). But, even with this big PHP file, you'll still need some way of calling individual functions with your AJAX framework. Look back at the actions/getData.php request in the example JavaScript framework. Actions refers to a folder with a bunch of PHP files, one for each method. For example, addSale.php:
include("../db_connect.php");
db_connect();
//make sure the user is logged in
include("../authenticate.php");
authenticate();
//Get any data that AJAX sent to us
$dat = $_GET['sale_num'];
//Run the method
include("../PHP_methods.php");
addSale($dat);
The reason you would want separate files for PHP_methods and the run files is that you could potentially have more than one PHP_methods file. You could have three method APIs: one for displaying content, one for requesting content, and one for altering content. Once you start reusing your methods more and more, it's best to have them all in one place. Rewritten once, rewritten everywhere.
So, really, that's all you'd need for the widget. Of course, you would want to have a install script that sets up the commerce database and all. But the actual widget would just be a folder with the aforementioned script files:
install.php: gets the database set up
JavaScript library: to load the HTML content and forms and conduct ajax requests
CSS file: for styling the HTML content and forms
db_connect: a generic php script used connect to the database
authenticate: a php script to check if a user is logged in; this could vary, depending on whether you have your own user system, or are using gravitars/facebook/twitter/etc.
PHP_methods: a big php file with all the database manipulation methods you'd need
actions folder: a bunch of individual php files that call the necessary PHP methods; call each of these php files with AJAX
In theory, all you'd have to do would be copy that folder over to any website, and run the install.php to get it set up. Any page you want the widget to run on, you would simply include the .js file, and it will do all the work.
Of course, that's just how I would set it up. I assume that changes in programming languages, or setup specifics will vary. But, the basic idea holds similar for most website plugins.
Oh, and one more thing. If you were intending to sell the widget, it would be extremely difficult to secure all of those scripts from redistribution. Your best bet would be to have the PHP files on your own server. The client would need to have their own db_connect.php that connects to their own database and all, but the actual AJAX requests would need to refer to the files on your remote server. The request would need to send the URL of the valid db_connect, with some type of password or something. Actually, come to think of it, I don't think it's possible to do remote server file sharing. You'd have to research it a bit more, 'cuz I certainly don't know how you'd do it.
I like Azmisov's solution, but it has some disadvantages.
Sites might not want your code on their servers. It'd be much better if you switched from AJAX to loading scripts (e.g. jQuery's getJSON).
So I suggest:
Include jQuery hosted on Google and a short jQuery script from your domain on the client sites. Nothing more.
Write the script with cross-domain calls to your server (through getJSON or getScript) so that everything is fetched directly and nothing has to be installed on the client's server. See here for examples; I won't write anything better here. Adding content to the page is easy enough with jQuery that I don't need to cover it here :) - just one command (see the sketch after this list).
Distribute easily by providing two lines of <script src= ... ></script>
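A minimal sketch of what that fetch-and-render script could look like (the URL, the JSONP callback parameter, and the #store element are invented for the example):

$.getJSON('http://yourdomain.com/store?site_id=123&callback=?', function (products) {
  var $store = $('#store');
  $.each(products, function (i, p) {
    $store.append(
      $('<a>').attr('href', 'http://yourdomain.com/?id=' + p.id).text(p.title)
    );
  });
});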