URI that rapidly refreshes dynamic contents from a server script - javascript

I hope questions from beginners are acceptable. I don't mind studying, but based on my research so far, I'm not sure where to even begin.
I'm in need of a script that can fit into a small URI space (around 1,000 characters or less) that will rapidly update itself with information parsed by a server script (lsl/mono served by a meta-verse object over http).
The target browser is the built-in media viewer of the Second Life viewer (Mozilla-based). The parsed HTML will come from an LSL/Mono script. I am attempting to display the resulting HTML on a primitive inside the Second Life meta-verse (which basically just turns the prim face into a kind of UV-projected browser window) at an update resolution of 0.2 to 0.5 seconds.
I gather that I need something like AJAX to constantly poll a serving object for the refreshed dynamic information and update a section of the initial URI? I'm at a loss as to how to set this up.
Already Tried:
I've tried simply putting my small bit of HTML in the URI itself and having the meta-verse's Mono/LSL script force a browser update. This works to some degree, but forcing a media refresh via such a script is throttled to a reliable refresh resolution of around 2 seconds. I really need the refresh to be fully client-side instead, at a resolution of more like 0.2 seconds, as the information is used to update a moving vehicle's digital dash instruments.
Already Tried:
Just using a meta based refresh in the URI. Either I did it wrong, or it just doesn't work. Would such a method support a resolution of less than a second anyway?
I've also tried AJAX script examples from this site, and while they give good example code, they don't show how to set up the headers and such in the browser to use whatever libraries they're speaking of (at the level of those threads, the reader is assumed to know which libraries are meant and how to set them up), so none of that works for me at this point.
The server script parses a simple bit of dynamically refreshed html formatted text. It could all be dumped into a single div area on each refresh pass.
Example of html that requires rapid parsing:
<body bgcolor="black">
<font size="7" color="cyan"><center>
Throttle: 50%<br>
Speed: 40<br>
Bearing: 100, 100, 1000<hr color="red">
HP: 200 - Kills: 3<br>
Damage Dealt: 1000
</center></body>
Or, if it's more efficient, it could be dumped as simple variable updates for a more advanced script that only changes 'the numbers' in a table? But I have no idea how to do that either.
I think I have a pretty good understanding of how to get the server-side scripts to produce the required HTML I wish to display. I'm just at a total loss on how to set up a URI that will ask for it every 0.2 seconds from the client side, and avoid pulling that information from a cache rather than from the actual target URL.

If I interpret the question correctly, try utilizing XMLHttpRequest:
var js = 'data:text/html;charset=utf-8,<html><script>(function r(){var x=new XMLHttpRequest();x.open("GET","https://gist.githubusercontent.com/anonymous/27e432abdb3c506aaa04/raw/109eb3da644a4bbc4aaa4d10ed286471a31b9655/update.html",true);x.onload=function(){document.write(x.responseText);setTimeout(function(){console.log(r)},200)};x.send()}())</script></html>';
(431 characters)
// Note: `console.log(r)` is called in `x.onload` instead of `r()` here
// to prevent a recursive call to `r` (and multiple requests) when run at Stack Snippets.
var js = 'data:text/html;charset=utf-8,%3Chtml%3E%3Cscript%3E(function%20r()%7Bvar%20x%3Dnew%20XMLHttpRequest()%3Bx.open(%22GET%22%2C%22https%3A%2F%2Fgist.githubusercontent.com%2Fanonymous%2F27e432abdb3c506aaa04%2Fraw%2F109eb3da644a4bbc4aaa4d10ed286471a31b9655%2Fupdate.html%22%2Ctrue)%3Bx.onload%3Dfunction()%7Bdocument.write(x.responseText)%3BsetTimeout(function()%7Bconsole.log(r)%7D%2C200)%7D%3Bx.send()%7D())%3C%2Fscript%3E%3C%2Fhtml%3E';
location.href = js;
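For reference, here is the same polling loop in uncompressed form as a minimal sketch. The endpoint URL and the dash element are placeholders for your in-world object's HTTP-in URL and a container in the initial document; the timestamp query parameter is one way to keep the viewer from answering from its cache:
(function poll() {
  var x = new XMLHttpRequest();
  // cache-busting timestamp; replace the URL with your object's HTTP-in endpoint
  x.open('GET', 'http://example.com/dash.html?ts=' + new Date().getTime(), true);
  x.onload = function () {
    // assumes the initial document contains <div id="dash"></div>
    document.getElementById('dash').innerHTML = x.responseText;
    setTimeout(poll, 200); // ~0.2 second refresh
  };
  x.send();
}());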

Related

ESP8266 serving HTML+js

I'm trying to host an HTML file on an ESP8266 access point. I can properly show an .html file. Unfortunately, when accessing the HTML page, my browser cannot display the JavaScript content. Strangely, when I work locally on my machine, it works perfectly fine. When I access the page on the ESP8266, I receive the error
"Not found: dygraph.min.js."
Obviously, the browser does not find the JavaScript source, and I wonder why. I have tried several ways of naming and referencing, but I've had no luck so far.
I upload the files with the ESP8266 Sketch Data Upload tool to SPIFFS. In the HTML file I reference the JS as <script type="text/javascript" src="dygraph.min.js"></script>.
Did anybody experience anything like this before? The whole code can be found here:
https://github.com/JohnnyMoonlight/esp8266-AccessPoint-Logger-OfflineVisualisation
I'm looking forward to your input!
Thanks and best!
Take a read through your code, and imagine the requests that will be made of your web server.
Your code is written to handle requests for two URLs: / and /temp.csv - that's it.
When /temp.csv is accessed, you serve the contents of index.html. When the browser interprets that file, it will try to load /dygraph.min.js from your ESP. You don't have a handler for that file, so the load fails.
You need to add a handler for it and then serve the file. So you'll need to add a line like:
server.on("/dygraph.min.js", handleJS);
and define function void handleJS() that does what handleFile() does.
You'll need to do the same thing for the /dygraph.css; you don't have a handler for it either.
I would do it this way:
void handleHTML() {
  handleFile("index.html");
}

void handleJS() {
  handleFile("dygraph.min.js");
}

void handleCSS() {
  handleFile("dygraph.css");
}

// const char * lets string literals be passed without a compiler warning
void handleFile(const char *filename) {
  File f = SPIFFS.open(filename, "r");
  // the rest of your handleFile() code here
}
and in your setup():
server.on("/", handleRoot);
server.on("/temp.csv", handleHTML);
server.on("/dygraph.css", handleCSS);
server.on("/dygraph.min.js", handleJS);
Separately:
Your URL-to-file mappings are messed up. The code I shared above is consistent with what you have now, but normally you'd want / to serve index.html; you have it serving a fragment of HTML.
Normally /temp.csv would serve a comma-separated-value file. I see you have one in the repo, and you have code to add data to it; you're just not serving it. Right now /temp.csv serves index.html. Once you start successfully loading the JavaScript, that will cause problems.
You'll need to sort those out to get this working right.
Also, in loop() you should move server.handleClient(); to be the first thing in the loop. The way you have it written, you only check for a web request when it's time to take another temperature reading. You should always check for web requests; otherwise you're unnecessarily slowing down web service.
One last thing, completely separate from the web server code, and I wouldn't worry about this till you get the rest of your code working: your code is writing to SPIFFS roughly every 5 seconds. SPIFFS is stored in flash memory on the ESP8266. ESP8266 boards use cheap flash memory that doesn't last a long time - it wears out after maybe 10,000 to 100,000 write cycles (this is a little complicated; it's broken into "pages" and the individual cells in the pages wear out, but you have to write the entire page at the same time).
It's hard to say for sure what its lifetime will be; it depends on the specific ESP8266 boards and flash chips involved. 10,000 write cycles means the flash memory on your board might start failing after 50,000 seconds; 100,000 write cycles would give you about 500,000 seconds -- if you keep writing to the same spot. It depends on how often the same place in flash is getting written to. If that's a problem for you, you might want to increase the delay between writes or do something else with your data.
You might not run into this because you're appending to a file - you'll still rewrite the same blocks of flash memory many times, but not 10,000 times - unless you often remove the CSV file and start over. So this might be a problem for you long term or might not.
You can read more about these problems at https://design.goeszen.com/mitigating-flash-wear-on-the-esp8266-or-any-other-microcontroller-with-flash.html
Good luck!

Unable to reload same gif image, if used twice in a page [duplicate]

I know there are many ways to prevent image caching (such as via META tags), as well as a few nice tricks to ensure that the current version of an image is shown with every page load (such as image.jpg?x=timestamp), but is there any way to actually clear or replace an image in the browsers cache so that neither of the methods above are necessary?
As an example, let's say there are 100 images on a page and that these images are named "01.jpg", "02.jpg", "03.jpg", etc. If image "42.jpg" is replaced, is there any way to replace it in the cache so that "42.jpg" will automatically display the new image on successive page loads? I can't use the META tag method, because I need everything that ISN'T replaced to remain cached, and I can't use the timestamp method, because I don't want ALL of the images to be reloaded every time the page loads.
I've racked my brain and scoured the Internet for a way to do this (preferably via JavaScript), but no luck. Any suggestions?
If you're writing the page dynamically, you can add the last-modified timestamp to the URL:
<img src="image.jpg?lastmod=12345678" ...
<meta> is absolutely irrelevant. In fact, you shouldn't try to use it for controlling caching at all (by the time anything reads the content of the document, it's already cached).
In HTTP each URL is independent. Whatever you do to the HTML document, it won't apply to images.
To control caching you could change URLs each time their content changes. If you update images from time to time, allow them to be cached forever and use a new filename (with a version, hash or a date) for the new image — it's the best solution for long-lived files.
If your image changes very often (every few minutes, or even on each request), then send Cache-control: no-cache or Cache-control: max-age=xx where xx is the number of seconds that image is "fresh".
Random URL for short-lived files is bad idea. It pollutes caches with useless files and forces useful files to be purged sooner.
If you have Apache and mod_headers or mod_expires then create .htaccess file with appropriate rules.
<Files ~ "-nocache\.jpg">
Header set Cache-control "no-cache"
</Files>
Above will make *-nocache.jpg files non-cacheable.
You could also serve images via a PHP script (they have awful cacheability by default ;)
Contrary to what some of the other answers have said, there IS a way for client-side javascript to replace a cached image. The trick is to create a hidden <iframe>, set its src attribute to the image URL, wait for it to load, then forcibly reload it by calling location.reload(true). That will update the cached copy of the image. You may then replace the <img> elements on your page (or reload your page) to see the updated version of the image.
(Small caveat: if updating individual <img> elements, and if there are more than one having the image that was updated, you've got to clear or remove them ALL, and then replace or reset them. If you do it one-by-one, some browsers will copy the in-memory version of the image from other tags, and the result is you might not see your updated image, despite its being in the cache).
I posted some code to do this kind of update here.
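For reference, a minimal sketch of that trick (note that the boolean forceGet argument to reload() is nonstandard and honored mainly by Firefox, so treat this as browser-specific):
function forceImageRefresh(url, onDone) {
  var iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  document.body.appendChild(iframe);
  var reloaded = false;
  iframe.onload = function () {
    if (!reloaded) {
      reloaded = true;
      // second pass: bypass the cache, overwriting the cached copy
      iframe.contentWindow.location.reload(true);
    } else {
      document.body.removeChild(iframe);
      if (onDone) onDone(); // now replace or reset the <img> elements
    }
  };
  iframe.src = url; // first pass: load the image into the hidden frame
}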
Change the image URL like this, adding a random string to the query string:
"image1.jpg?" + DateTime.Now.ToString("ddMMyyyyhhmmsstt");
I'm sure most browsers respect the Last-Modified HTTP header. Send those out and request a new image. It will be cached by the browser if the Last-Modified line doesn't change.
You can append a random number to the image URL, which is like giving it a new version. I have implemented similar logic and it works perfectly.
<script>
var num = Math.random();
var imgSrc = "image.png?v=" + num;
$(function() {
  $('#imgID').attr("src", imgSrc);
});
</script>
I found this article on how to cache-bust any file.
There are many ways to force a cache bust in that article, but this is the way I did it for my image:
fetch('/thing/stuck/in/cache', {method:'POST', credentials:'include'});
The reason the ?x=timestamp trick is used is that it's the only way to do it on a per-image basis. That, or dynamically generate image names and point to an application that outputs the image.
I suggest you figure out, server side, whether the image has been changed/updated, and if so, output your tag with the ?x=timestamp trick to force the new image.
No, there is no way to force a file in a browser cache to be deleted, either by the web server or by anything that you can put into the files it sends. The browser cache is owned by the browser, and controlled by the user.
Hence, you should treat each file and each URL as a precious resource that should be managed carefully.
Therefore, porneL's suggestion of versioning the image files seems to be the best long-term answer. The ETAG is used under normal circumstances, but maybe your efforts have nullified it? Try changing the ETAG, as suggested.
Change the ETAG for the image.
See http://en.wikipedia.org/wiki/URI_scheme
Notice that you can provide a unique username:password@ combo as a prefix to the domain portion of the URI. In my experimentation, I've found that including this with a fake ID (or password, I assume) results in the resource being treated as unique, thus breaking the caching as you desire.
Simply use a timestamp as the username; as far as I can tell, the server ignores this portion of the URI as long as authentication is not turned on.
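As a rough sketch of what's being described (example.com and imgID are placeholders, and note that some browsers reject embedded credentials in subresource URLs, so test against your target browsers):
var img = document.getElementById('imgID');
// a throwaway timestamp "username" makes the URL unique as a cache key
img.src = 'http://' + new Date().getTime() + '@example.com/image.jpg';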
Btw, I also couldn't use the tricks above with a Google Maps marker-icon caching problem I was having, where the ?param=timestamp trick worked but caused issues with disappearing overlays. I never could figure out why that was happening, but so far so good using this method. What I'm unsure of is whether passing fake credentials has any adverse server performance effects. If anyone knows, I'd be interested, as I'm not yet in high-volume production.
Please report back your results.
Since most, if not all, answers and comments here are copies of parts of the question, or close enough, I shall throw my 2 cents in.
I just want to point out that even if there is a way, it is going to be difficult to implement. The logic of it traps us: from a logical stance, telling the browser to replace its cached images for each changed image on a list since a certain date is ideal, BUT... when would you take the list down, and how would you know whether everyone who might visit again has the latest version?
So my 1st "suggestion", as the OP asked for, is this list theory.
How I see doing this:
A.) Keep a list where dynamically and manually changed image URLs can be stored.
B.) Set a dead date when the cache will be reset and the list truncated regardless.
C.0) On site entrance, check the list against the browser via an iframe, which could be run in the background with a shorter cache header set, to re-cache everything against the farthest date on the list, or something of that nature.
C.1) Using the iframe or an AJAX/XHR request, loop through each image on the list, refreshing the page to show a different image and checking the cache against its own modified date. On each image's onload, use the server side to decide whether it is the last image; if it is not, go on to the next image.
C.1a) This means our list may need more information per image; the obvious addition is some server-side script to adjust the headers as required by each image, to minimize the footprint of re-caching changed site images.
My 2nd "suggestion" would be to notify users of changes and direct them to clear their cache. (Carefully: remove only images and files when possible, or warn them of data removal due to the process.)
P.S. This is just an educated ideation, a quick theory. If/when I make it, I will post the final version, probably not here because it will require server-side scripting. At least this is a suggestion not mentioned in the OP's question as something he has already tried.
It sounds like the base of your question is how to get the old version of the image out of the cache. I've had success just making a new call and specifying in the header not to pull from cache. You're just throwing this away once you fetch it, but the browser's cache should have the updated image at that point.
var headers = new Headers()
headers.append('pragma', 'no-cache')
headers.append('cache-control', 'no-cache')
var init = {
method: 'GET',
headers: headers,
mode: 'no-cors',
cache: 'no-cache',
}
fetch(new Request('path/to.file'), init)
However, it's important to recognize that this only affects the browser this is called from. If you want a new version of the file for any browser once the image is replaced, that will need to be accomplished via server configuration.
Here is a solution using the PHP function filemtime():
<?php
$addthis = filemtime('myimg.jpg');
?>
<img src="myimg.jpg?<?= $addthis; ?>">
Using the file's modified time as a parameter causes the browser to read from its cached version until the file changes. This approach is better than using e.g. a random number, as caching will still work when the file has not changed.
In the event that an image is re-uploaded, is there a way to CLEAR or REPLACE the previously cached image client-side? In my example above, the goal is to make the browser forget what "42.jpg" is
You're running firefox right?
Find the Tools Menu
Select Clear Private Data
Untick all the checkboxes, making sure only Cache is checked
Press OK
:-)
In all seriousness, I've never heard of such a thing existing, and I doubt there is an API for it. I can't imagine it'd be a good idea on part of browser developers to let you go poking around in their cache, and there's no motivation that I can see for them to ever implement such a feature.
I CANNOT use the META tag method OR the timestamp method, because I want all of the images cached under normal circumstances.
Why can't you use a timestamp (or etag, which amounts to the same thing)? Remember you should be using the timestamp of the image file itself, not just Time.Now.
I hate to be the bearer of bad news, but you don't have any other options.
If the images don't change, neither will the timestamp, so everything will be cached "under normal circumstances". If the images do change, they'll get a new timestamp (which they'll need to for caching reasons), but then that timestamp will remain valid forever until someone replaces the image again.
When changing the image filename is not an option, use a server-side session variable and the JavaScript window.location.reload() function, as follows:
After Upload Complete:
Session("reload") = "yes"
On page_load:
If Session("reload") = "yes" Then
    Session("reload") = Nothing
    ClientScript.RegisterStartupScript(Me.GetType(), "ReloadImages", "window.location.reload();", True)
End If
This allows the client browser to refresh only once, because the session variable is reset after one occurrence.
Hope this helps.
To replace the cache for a picture, you can store a version value server-side and send that value instead of a timestamp when you load the picture. When your image changes, change its version.
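A minimal sketch of that idea, assuming a hypothetical /image-version endpoint that returns the current version number as plain text, and an existing <img id="imgID">:
var x = new XMLHttpRequest();
x.open('GET', '/image-version', true);
x.onload = function () {
  // the URL only changes when the server-side version changes,
  // so the image stays cached until it is actually replaced
  document.getElementById('imgID').src = 'image.png?v=' + x.responseText;
};
x.send();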
Try this code snippet:
var url = imgUrl + '?' + Math.random();
This will make sure that each request is unique, so you will always get the latest image.
After much testing, here is the solution I found:
1- Create a temporary folder and copy the images into it, prefixing each name with time() (if the folder exists, delete its contents).
2- Load the images from that temporary local folder.
This way I always make sure the browser never caches the images, and it works 100% correctly.
if (!is_dir(getcwd().'/articulostemp')) {
    $oldmask = umask(0);
    mkdir(getcwd().'/articulostemp', 0775);
    umask($oldmask);
} else {
    rrmfiles(getcwd().'/articulostemp'); // rrmfiles(): helper that empties the folder
}
foreach ($images as $image) {
    $tmpname = time().'-'.$image;
    $srcimage = getcwd().'/articulos/'.$image;
    $tmpimage = getcwd().'/articulostemp/'.$tmpname;
    copy($srcimage, $tmpimage);
    $urlimage = 'articulostemp/'.$tmpname;
    echo '<img loading="lazy" src="'.$urlimage.'"/>';
}
Try the solution below:
myImg.src = "http://localhost/image.jpg?" + new Date().getTime();
The above solution works for me :)
I usually do the same as @Greg told us, and I have a function for that:
function addMagicRefresh(url)
{
    var symbol = url.indexOf('?') == -1 ? '?' : '&';
    var magic = Math.random() * 999999;
    return url + symbol + 'magic=' + magic;
}
This will work as long as your server accepts it and you don't use the "magic" parameter any other way.
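For example (assuming an existing <img id="imgID"> element):
var img = document.getElementById('imgID');
img.src = addMagicRefresh('image.png'); // e.g. image.png?magic=123456.789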
I hope it helps.
I have tried something ridiculously simple:
Go to the FTP folder of the website and rename the IMG folder to IMG2. Refresh your website and you will see the images go missing. Then rename the folder IMG2 back to IMG and it's done; at least it worked for me in Safari.

Preventing 'content-sniffing' type vulnerabilities when handling user-uploaded images?

The problem:
I work on an internal tool that allows users to upload images - and then displays those images back to them and others.
It's a Java/Spring application. I have the benefit of only needing to worry about exactly IE11 and Firefox v38+ (Chrome v43+ would be a nice-to-have).
After first developing the feature, it seems that users can just create a text file like:
<script>alert("malicious code here!")</script>
and save it as "maliciousImage.jpg" and upload it.
Later, when that image is displayed inside image tags like:
<img src="blah?imgName=foobar" id="someImageID">
actualImage.jpg displays normally, and maliciousImage.jpg displays as a broken link - and most importantly no malicious content is interpreted!
However If the user right-clicks on this broken link, and clicks 'view image'... bad things happen.
The browser does 'content-sniffing', a concept which was new to me: it detects that 'maliciousImage.jpg' is actually a text file and very kindly renders it as HTML without hesitation. Any script tags are passed to the JavaScript interpreter and, as you can imagine, we don't want this.
What I've tried so far
In short, every possible combination of response headers I can think of to prevent the browser from content-sniffing. All the answers I've found here on Stack Overflow, and other docs, imply that setting the Content-Type header should prevent most browsers from content-sniffing, and that setting X-Content-Type-Options should prevent some versions of IE.
I'm setting X-Content-Type-Options to nosniff, and I'm setting the response content type. The docs I've read lead me to believe this should stop content-sniffing:
response.setHeader("X-Content-Type-Options", "nosniff");
response.setContentType("image/jpg");
I'm intercepting the response and these headers are present, but seem to have no effect on how the malicious content is processed...
I've also tried detecting which images are and are not malicious at the point of upload, but I'm quickly realizing this is very much non-trivial...
End goal:
Naturally - any output at all for images that aren't really images (garbled nonsense, an unhandled exception, etc) would be better than executing the text-file as HTML/javascript in the clear, but displaying any malicious HTML as escaped/CDATA'd plain-text would be ideal... though maybe a bit impractical.
So I ended up fixing this problem but forgot to answer my own question:
Step 1: blocking invalid images
To get a quick fix out, I simply added some fairly blunt code that checked if an image was actually an image - during upload and before serving it, using the imageio lib:
import javax.imageio.ImageIO;
//......
Image img = attBO.getImage(imgId);
InputStream x = new ByteArrayInputStream(img.getData());
BufferedImage s;
try {
    // ImageIO.read() returns null (or throws) when the bytes aren't a decodable image,
    // so the getWidth() call below fails for anything that isn't a real image
    s = ImageIO.read(x);
    s.getWidth();
} catch (Exception e) {
    throw new myCustomException("Invalid image");
}
Now, initially I'd hoped that would fix my problem, but in reality it wasn't that simple; it just made generating a payload more difficult.
While this would block:
<script>alert("malicious code here!")</script>
It's very possible to generate a valid image that's also an XSS payload; it just takes a little more effort...
Step 2: framework silliness
It turned out there was an entire post-processing workflow that I'd never touched, that did things such as append tokens to response bodies and use additional frameworks to decorate responses with CSS, headers, footers etc.
This meant that, although the controller was explicitly returning image/png, post-processing was grabbing that byte stream and wrapping it in a header and footer to form a fully qualified 'view'; this view would always have the content-type text/html and thus was never displayed correctly.
The crux of this problem was that my controller was directly returning an image, in a RESTful fashion, when the rest of the framework was built to handle controllers returning full fledged views.
So I had to step through this workflow and create exceptions for the controllers in my code that returned something other than views, i.e. that worked in a RESTful fashion.
For example, with SiteMesh it was just an exclude (as always, a simple fix once I understood the problem...):
<decorators defaultdir="/WEB-INF/decorators">
    <excludes>
        <pattern>*blah.ctl*</pattern>
    </excludes>
    <decorator name="foo" page="myDecorator.jsp">
        <pattern>*</pattern>
    </decorator>
</decorators>
and then some other bespoke post-invocation interceptors.
Step 3: Content negotiation
Now, I finally got to the stage where only image bytecode was being served and no view was being specified or explicitly generated.
A Spring feature called 'content negotiation' kicked in. It tries to reconcile the 'Accept' header of the request with the 'message converters' it has on hand to produce such responses.
Because Spring by default doesn't have a message converter to produce image/png responses, it was falling back to text/html, and I was still seeing problems.
Now, were I using Spring 4, I could've simply added the annotation:
@Produces("image/png")
to my controller - simple fix...
Step 4: Legacy dependencies
but because I only had Spring 3.0.5 (and couldn't upgrade it), I had to try other things.
I tried registering new message converters, but that was a headache. I also tried adding a new post-method interceptor to simply change the content type back to image/png, but that was a hacky headache too.
In the end I just exposed the request/response in the controller and wrote my image directly to the response body, circumventing Spring's content negotiation altogether.
....and finally my image was served as an image and displayed as an image - and no injected code was executed!
That sounds odd, because it works perfectly elsewhere. Are you sure the X-Content-Type-Options header is present in the responses?
Here is a demo I built a while back, where I have a file that's a valid html, gif and javascript. As you can see it first loads as an HTML, but then loads itself as an image and as a script (which executes):
http://research.insecurelabs.org/content-sniffing/gifjs.html
However if you load it using the "X-Content-Type-Options: nosniff" header, the script no longer executes:
http://research.insecurelabs.org/content-sniffing/nosniff/gifjs.html
Btw, the image renders properly in FF/IE, but not in Chrome.
Here is a demo, where I attempted what you described:
http://research.insecurelabs.org/content-sniffing/stackexchange.html
The first image is without nosniff and the second is with it, and it seems to work as intended: the second one does not run the script when opened with "view image".
Edit:
Firefox doesn't seem to support X-Content-Type-Options: nosniff.
So you should also add "Content-Disposition: attachment; filename=image.gif" or similar to the images. The image will load normally if loaded through an image tag, but if you open the URL directly, you will force a download instead of showing the image directly in the browser.
Example: http://research.insecurelabs.org/content-sniffing/attachment/
adeneo is pretty much spot-on. You should use whatever image library you want to check if the uploaded file is a valid file for the type it claims to be. Anything the client sends can be manipulated.

How do I protect JavaScript files?

I know it's impossible to hide source code, but, for example, if I have to link a JavaScript file from my CDN to a web page and I don't want people to know the location and/or content of this script, is this possible?
For example, to link a script from a website, we use:
<script type="text/javascript" src="http://somedomain.example/scriptxyz.js">
</script>
Now, is it possible to hide from the user where the script comes from, or hide the script content and still use it on a web page?
For example, by saving it in my private CDN that needs password to access files, would that work? If not, what would work to get what I want?
Good question with a simple answer: you can't!
JavaScript is a client-side programming language, therefore it works on the client's machine, so you can't actually hide anything from the client.
Obfuscating your code is a good solution, but it's not enough, because, although it is hard, someone could decipher your code and "steal" your script.
There are a few ways of making your code hard to be stolen, but as I said nothing is bullet-proof.
Off the top of my head, one idea is to restrict access to your external js files from outside the page you embed your code in. In that case, if you have
<script type="text/javascript" src="myJs.js"></script>
and someone tries to access the myJs.js file in the browser, they shouldn't be granted any access to the script source.
For example, if your page is written in PHP, you can include the script via the include function and let the script decide whether it's "safe" to return its source.
In this example, you'll need the external "js" (written in PHP) file myJs.php:
<?php
$URL = $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
if ($URL != "my-domain.example/my-page.php")
    die("/* sorry, no access rights */");
?>
// your obfuscated script goes here
that would be included in your main page my-page.php:
<script type="text/javascript">
<?php include "myJs.php"; ?>;
</script>
This way, only the browser could see the js file contents.
Another interesting idea is that at the end of your script, you delete the contents of your DOM script element, so that after the browser evaluates your code, the code disappears:
<script id="erasable" type="text/javascript">
//your code goes here
document.getElementById('erasable').innerHTML = "";
</script>
These are all just simple hacks that cannot, and I can't stress this enough: cannot, fully protect your js code, but they can sure piss off someone who is trying to "steal" your code.
Update:
I recently came across a very interesting article written by Patrick Weid on how to hide your JS code, and he reveals a different approach: you can encode your source code into an image! Sure, that's not bulletproof either, but it's another fence you could build around your code.
The idea behind this approach is that most browsers can use the canvas element to do pixel manipulation on images. And since a canvas pixel is represented by 4 values (RGBA), each channel can hold a value in the range 0-255. That means you can store a character (actually its ASCII code) in every pixel. The rest of the encoding/decoding is trivial.
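A rough sketch of the idea (my own illustration, not Patrick Weid's exact code): each character's code is written into the red channel of a 1-pixel-high canvas, with alpha kept at 255 so alpha premultiplication doesn't corrupt the stored values:
// encode: one character per pixel, char code in the red channel
function encodeToCanvas(source) {
  var canvas = document.createElement('canvas');
  canvas.width = source.length;
  canvas.height = 1;
  var ctx = canvas.getContext('2d');
  var img = ctx.createImageData(source.length, 1);
  for (var i = 0; i < source.length; i++) {
    img.data[i * 4] = source.charCodeAt(i); // R = character code
    img.data[i * 4 + 3] = 255;              // opaque alpha avoids rounding
  }
  ctx.putImageData(img, 0, 0);
  return canvas;
}

// decode: read the red channel back into a string
function decodeFromCanvas(canvas) {
  var data = canvas.getContext('2d').getImageData(0, 0, canvas.width, 1).data;
  var out = '';
  for (var i = 0; i < canvas.width; i++) {
    out += String.fromCharCode(data[i * 4]);
  }
  return out;
}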
The only thing you can do is obfuscate your code to make it more difficult to read. No matter what you do, if you want the javascript to execute in their browser they'll have to have the code.
Just off the top of my head, you could do something like this (if you can create server-side scripts, which it sounds like you can):
Instead of loading the script like normal, send an AJAX request to a PHP page (it could be any server-side language; I just use PHP myself). Have the PHP locate the file (maybe on a non-public part of the server), open it with file_get_contents, and return (read: echo) the contents as a string.
When this string returns to the JavaScript, have it create a new script tag, populate its innerHTML with the code you just received, and attach the tag to the page. (You might have trouble with this; innerHTML may not be what you need, but you can experiment.)
If you do this a lot, you might even want to set up a PHP page that accepts a GET variable with the script's name, so that you can dynamically grab different scripts using the same PHP. (Maybe you could use POST instead, to make it just a little harder for other people to see what you're doing. I don't know.)
EDIT: I thought you were only trying to hide the location of the script. This obviously wouldn't help much if you're trying to hide the script itself.
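As a rough sketch of that flow (getScript.php is a hypothetical endpoint that echoes the requested file's contents):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'getScript.php?name=myScript', true);
xhr.onload = function () {
  var tag = document.createElement('script');
  // textContent sidesteps the innerHTML quirks mentioned above
  tag.textContent = xhr.responseText;
  document.head.appendChild(tag); // the browser executes it on insertion
};
xhr.send();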
Google Closure Compiler, YUI compressor, Minify, /Packer/... etc, are options for compressing/obfuscating your JS codes. But none of them can help you from hiding your code from the users.
Anyone with decent knowledge can easily decode/de-obfuscate your code using tools like JS Beautifier. You name it.
So the answer is, you can always make your code harder to read/decode, but for sure there is no way to hide.
Forget it, this is not doable.
No matter what you try, it will not work. All a user needs to do to discover your code and its location is to look in the Net tab in Firebug, or use Fiddler, to see what requests are being made.
From my knowledge, this is not possible.
Your browser has to have access to JS files to be able to execute them. If the browser has access, then browser's user also has access.
If you password protect your JS files, then the browser won't be able to access them, defeating the purpose of having JS in the first place.
I think the only way is to put the required data on the server and allow only logged-in users to access it as needed (you can also do some calculations server-side). This won't protect your JavaScript code, but it makes it inoperable without the server-side code.
I agree with everyone else here: With JS on the client, the cat is out of the bag and there is nothing completely foolproof that can be done.
Having said that, in some cases I do this to put some hurdles in the way of those who want to take a look at the code. This is how the algorithm works (roughly):
The server creates 3 hashed and salted values: one for the current timestamp, and one for each of the next 2 seconds. These values are sent to the client via AJAX as a comma-delimited string, from my PHP module. In some cases, I think you could hard-bake these values into a script section of the HTML when the page is formed, and delete that script tag once the hashes have been used. The server is CORS-protected and does all the usual SERVER_NAME etc. checks (which is not much protection, but at least provides some modicum of resistance to script kiddies).
It would also be nice if the server checked that it was indeed an authenticated user's client doing this.
The client then sends the same 3 hashed values back to the server through an AJAX call to fetch the actual JS that I need. The server checks the hashes against the current timestamp there; the three values ensure that the data is sent within the 3-second window, to account for latency between the browser and the server.
The server needs to be convinced that one of the hashes is matched correctly; if so, it sends the crucial JS back to the client. This is a simple, crude "one-time-use password" without the need for any database at the back end.
This means that any hacker has only the 3-second window since the generation of the first set of hashes to get at the actual JS code.
The entire client code can live inside an IIFE, so some of the variables inside the client are even harder to read from the inspector console.
This is not any deep solution: A determined hacker can register, get an account and then ask the server to generate the first three hashes; by doing tricks to go around Ajax and CORS; and then make the client perform the second call to get to the actual code -- but it is a reasonable amount of work.
Moreover, if the salt used by the server is based on the login credentials, the server may be able to detect which user tried to retrieve the sensitive JS. (The server needs to do some additional work regarding the user's behaviour AFTER the sensitive JS was retrieved, and block the person if, for example, they did not perform some other expected activity.)
An old, crude version of this was done for a hackathon here: http://planwithin.com/demo/tadr.html That will not work if the server detects too much latency and the request goes beyond the 3-second window.
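A minimal client-side sketch of the handshake, with hypothetical /token and /script endpoints standing in for the real server module:
// step 1: fetch the three time-window hashes, e.g. "h1,h2,h3"
fetch('/token', { credentials: 'include' })
  .then(function (r) { return r.text(); })
  .then(function (hashes) {
    // step 2: echo them straight back; the server only honors them
    // for roughly 3 seconds after issuing them
    return fetch('/script?hashes=' + encodeURIComponent(hashes), {
      credentials: 'include'
    });
  })
  .then(function (r) { return r.text(); })
  .then(function (code) {
    (new Function(code))(); // run the delivered JS without a <script> tag
  });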
As I said in the comment I left on gion_13 answer before (please read), you really can't. Not with javascript.
If you don't want the code to be available client-side (i.e., stealable without great effort),
my suggestion would be to use PHP (or ASP, Python, Perl, Ruby, JSP + Java servlets), which is processed server-side so that only the results of the computation/code execution are served to the user. Or, if you prefer, even Flash or a Java applet, which allow client-side computation/code execution but are compiled and thus harder to reverse-engineer (though not impossible).
Just my 2 cents.
You can also set up a mime type for application/JavaScript to run as PHP, .NET, Java, or whatever language you're using. I've done this for dynamic CSS files in the past.
I know this is a late time to be answering this question, but I just thought of something. It might be laborious, but at least it could work.
The trick is to create a lot of server-side encoding scripts that are decodable (for example, a script that replaces all vowels with numbers and adds the letter 'a' to every consonant, so that the word 'bat' becomes 'ba1ta'). Then create a script that picks randomly among the encoding scripts and sets a cookie recording which one is in use. (Quick tip: try not to use the encoding script's actual name as the cookie value; if the cookie is named 'encoding_script_being_used' and the randomizing script chooses an encoding script named MD10, store something like 'encoding_script4567656' rather than 'MD10', just to prevent guessing.) After the cookie has been created, another script checks for the cookie named 'encoding_script_being_used', reads its value, and determines which encoding script is being used.
The reason for randomizing among the encoding scripts is that the server-side language randomizes which script is used to encode your javascript.js, and then creates a session or cookie recording which encoding script was used.
The server-side language then also encodes your javascript.js and delivers it as a cookie.
So, to summarize with an example: PHP picks randomly from a list of encoding scripts and encrypts javascript.js, then creates a cookie telling the client side which encoding script was used; the client side then decodes the javascript.js cookie (which is obviously encoded),
so people can't steal your code.
But I would not advise this, because:
it is a long process
it is too stressful
Use NW.js; I think it can help. It can compile your code to a binary, which you can then use to make Windows, Mac, and Linux applications.
This method partially works if you do not want to expose the most sensitive part of your algorithm.
Create WebAssembly modules (.wasm), import them, and expose only your JS glue code/workflow. This way the algorithm is protected, since it is extremely difficult to revert assembly code into a more human-readable format.
After having produced the wasm module and imported it correctly, you can use your code as you normally do:
<body id="wasm-example">
  <script type="module">
    import init from "./pkg/glue_code.js";
    init().then(() => {
      console.log("WASM Loaded");
    });
  </script>
</body>

are there any negative implications of sourcing a javascript file that does not actually exist?

If you do <script src="/path/to/nonexistent/file.js"></script> in an HTML file and call that in a browser, and there are no dependencies or resources anywhere else in the HTML file that expect the file or code therein to actually exist, is there anything inherently bad-practice about doing this?
Yes, it is an odd question. The rationale is the developer is dealing with a CMS that allows custom (self-contained) javascript files to be provided in certain circumstances. The problem is the CMS is not very flexible when it comes to creating conditional includes for javascript. Therefore it is easier to just make references to the self-contained js files regardless of whether they are actually at the specified path.
Since no errors are displayed to the user, should this practice be considered a viable option?
Well, the major drawback is performance: the browser will try (hard) to download the file, and your server will look for it. The browser may even download the 404 page instead, thus slowing down the page load.
If you have the script referenced in the <head> tag (not recommended for starters), it will slow down the initial page-render time somewhat too.
If instead of quickly returning a 404, your site just accepts the connection and then never responds, this can cause the page to take an indefinite amount of time to load, and in some cases, lock up the entire user interface.
( At least that was the case with one revision of FireFox, I hope they've fixed it since I saw that happen ~2 years ago.* )
You should at least put the tags as low in the page order as you can afford to, to remedy this problem.
Your best bet by far is to have one consistent no-op URL that is used as a fill-in for all "doesn't exist" JavaScript files, and that returns a 0-byte response with HTTP headers telling the UA to cache it till the cows come home. That should negate most of your server<->client load penalties beyond the first hit (and should hardly hurt people even on ye olde dial-up).
*Lesson learned: don't put script-src references in head, especially for 3rd-party scripts hosted outside your machine, because then you can have the joy of having clients be able to access your website, but run the risk of the page being inoperable because of a bit of advertising JS that was inaccessible due to some internet weirdness. Even if they're a reputable-ish 3rd-party.
If your web server is configured to do work on a 404 error ("you might be looking for this", etc) then you're also causing unnecessary load on the server.
You should ask yourself why you were too lazy to test this yourself :)
I tested 1000 randomized JavaScript filenames and it took several nanoseconds to load, so no, it doesn't make a difference. Example:
<script src="/7701992spolsky.js"></script>
This was on my local machine however, so it should take N * roundTripTime for the browser to figure out for remote servers, where N is the number of bad scripts.
If, however, you have random domain names that don't exist, like
<script src="http://www.randomsite7701992.com/spolsky.js"></script>
then it will take a long f-in time.
If you choose to implement it this way, you could tune the web server so that when the referenced JS file is not found, instead of a 404 it returns a redirect (301) to an empty/default JS file.
If you are using asp.net you can look into using custom handlers (ASHX files).
Here's an example:
public class JavascriptHandler : IHttpHandler {
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        // Some code to check if the javascript code exists
        string js = "";
        if (JavascriptExists())
        {
            js = GetJavascript();
        }
        context.Response.Write(js);
    }

    // required by the IHttpHandler interface
    public bool IsReusable { get { return false; } }
}
Then in your html header you could declare a file pointing to the custom handler:
src="/js/javascripthandler.ashx"
