Here is an image of my recent WebPageTest run:
Is there any way to start HTTP requests as early as possible? For example, the Google Fonts file requests start very late. Similarly, I want to move the jQuery request to as early as the script.min.js request, which is hosted on my own domain. Basically, I am looking for any way to make these requests more efficient.
First of all, make sure the requests are made straight from the HTML whenever possible, and not dynamically via JavaScript. Put your JS and CSS requests into <head> if possible. That way, the browser's preload scanner will request those files as soon as possible.
Be careful though about where the CSS is placed - see https://csswizardry.com/2018/11/css-and-network-performance/ for more.
If you can't put everything into <head> of the HTML, you're looking for <link rel="preload">, which was created exactly for that purpose; it tells the browser to download some resources, but not execute them yet.
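For example (a minimal sketch; the font and script URLs below are placeholders for whatever your own waterfall shows):
<head>
  <!-- Let the browser open connections to third-party origins early -->
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <!-- Fetch these early, before the parser reaches the tags that use them -->
  <link rel="preload" href="https://fonts.gstatic.com/s/yourfont.woff2" as="font" type="font/woff2" crossorigin>
  <link rel="preload" href="https://code.jquery.com/jquery-3.6.0.min.js" as="script">
  <link rel="stylesheet" href="/css/style.min.css">
  <script src="/js/script.min.js" defer></script>
</head>
The preloaded files still need to be referenced normally later in the page; preload only moves the download earlier.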
Some literature:
https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content
https://developers.google.com/web/updates/2016/03/link-rel-preload
https://medium.com/reloading/preload-prefetch-and-priorities-in-chrome-776165961bbf
https://caniuse.com/#feat=link-rel-preload
Note though that as of late 2018, preload was disabled in Firefox, with no ETA for when it would ship.
Also, be careful when using preload, as the preloaded requests get the highest priority, which sometimes might mean that other critical but not preloaded resources wrongly get deprioritized.
In some cases (depending on browser, order of entries in HTML etc.), preload can lead to double fetches (particularly for webfonts). Make sure to check for that.
I found this script on a web page; it has an external PHP file linked in a script tag. I googled it but found nothing useful. Can anyone help me understand this?
<script type="text/javascript" src="file.php"></script>
Below is the content of file.php
Bootstrapper._serverTime = '2015-08-01 11:50:50';
Bootstrapper._clientIP = '103.39.117.145';
Bootstrapper.callOnPageSpecificCompletion();
Bootstrapper.setPageSpecificDataDefinitionIds([])
Yes, it's not an unknown practice; many CMSs as well as other projects use an "endpoint" or a "script" dedicated to handling the frontend resources.
They can perform tasks such as minification of resources into one single chunk, or just plain concatenation of resource files into one.
As long as the script or endpoint outputs JavaScript, or in this case echoes JavaScript, it can be referenced in a script tag.
Note - this requires the script or endpoint to handle output headers as well, which is commonly done by sending a Content-Type header along with the response.
In PHP it's usually done using header('Content-Type: application/javascript');
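A minimal sketch of such a file, producing output like the file.php in the question (the variable values are just placeholders):
<?php
// file.php - emits JavaScript, so tell the browser that is what it is getting
header('Content-Type: application/javascript');

// Values generated server-side and baked into the script output
$serverTime = date('Y-m-d H:i:s');
$clientIp   = $_SERVER['REMOTE_ADDR'];
?>
Bootstrapper._serverTime = '<?php echo $serverTime; ?>';
Bootstrapper._clientIP = '<?php echo $clientIp; ?>';
Bootstrapper.callOnPageSpecificCompletion();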
In my view, in the given case file.php may simply generate JavaScript code dynamically (e.g. load some JS from a database and so on).
I wonder if anyone has found a way to flush the head tag mid-render, so CSS and JavaScript start loading before the page render has finished? Our page takes about 523 ms to render, and resources aren't loaded until the page is received. I've done a lot of PHP, and there it is possible to flush the buffer before the end of the script. I've tried adding a Response.Flush() at the end of the master page's Page_Load, but the page layout is horribly broken afterward. I've seen a lot of people using an UpdatePanel to send the content via AJAX afterward, but I don't quite know what impact that would have on SEO.
If I don't find a solution, I guess I'll have to go the reverse-proxy route and find a way to invalidate the proxy cache when the page's content changes.
Do not place the Flush in code-behind but in your HTML page, like this:
</head>
<%Response.Flush();%>
<body >
This can cause something like a flickering effect on the page, so you can try to move the flush a little lower in the page.
See also "Flush the Buffer Early" on Yahoo's performance rules page:
http://developer.yahoo.com/performance/rules.html
Cache on static content
Additionally, you can add client caching on static content like CSS and JavaScript. This page lists the ways to do it for all IIS versions:
http://www.iis.net/ConfigReference/system.webServer/staticContent/clientCache
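For example, something like this in web.config (the 7-day max-age is just an illustrative value):
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Send a Cache-Control max-age header for static files (CSS, JS, images) -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>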
Follow up
One more thing I suggest you do, after having seen your pages, is to place all CSS in one file and all JavaScript in another, and also minify them.
I use this minifier, http://www.asp.net/ajaxlibrary/Download.ashx, with very good results and real-time minification.
Consider using a content delivery network (CDN) to host your images, CSS and JS files. Browsers only allow a limited number of concurrent connections per domain (historically around four to eight), so once you use those up the browser has to wait for connections to be freed.
By hosting some files on the CDN you get another set of connections to use concurrently, allowing everything to load faster.
Also consider enabling GZIP on your server if you haven't already. This compresses files on the fly, resulting in smaller transfers.
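How you enable it depends on the server; as a rough sketch, on Apache with mod_deflate it can be as simple as:
<IfModule mod_deflate.c>
    # Compress text-based responses on the fly
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
IIS and nginx have equivalent compression settings.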
You could use jQuery to execute your js as soon as it is loaded.
$.fn.ready(function(){
//Your code here
})
Or you could just use a standalone ready function, a $(document).ready equivalent without jQuery, as sketched below.
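A plain-JavaScript version (a minimal sketch):
document.addEventListener('DOMContentLoaded', function () {
    // Your code here - runs as soon as the DOM is parsed,
    // without waiting for images and other resources
});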
You could do a fade-in or show once the document has been loaded. Just set body display:none;
I know it's impossible to completely hide source code but, for example, if I have to link a JavaScript file from my CDN to a web page and I don't want people to know the location and/or content of this script, is this possible?
For example, to link a script from a website, we use:
<script type="text/javascript" src="http://somedomain.example/scriptxyz.js">
</script>
Now, is it possible to hide from the user where the script comes from, or to hide the script content and still use it on a web page?
For example, by saving it in my private CDN that needs a password to access files, would that work? If not, what would work to get what I want?
Good question with a simple answer: you can't!
JavaScript is a client-side programming language, therefore it works on the client's machine, so you can't actually hide anything from the client.
Obfuscating your code is a good solution, but it's not enough, because, although it is hard, someone could decipher your code and "steal" your script.
There are a few ways of making your code hard to be stolen, but as I said nothing is bullet-proof.
Off the top of my head, one idea is to restrict access to your external js files from outside the page you embed your code in. In that case, if you have
<script type="text/javascript" src="myJs.js"></script>
and someone tries to access the myJs.js file in browser, he shouldn't be granted any access to the script source.
For example, if your page is written in PHP, you can include the script via the include function and let the script decide if it's "safe" to return its source.
In this example, you'll need the external "js" (written in PHP) file myJs.php:
<?php
$URL = $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
if ($URL != "my-domain.example/my-page.php")
    die("/* sorry, no access rights */");
?>
// your obfuscated script goes here
that would be included in your main page my-page.php:
<script type="text/javascript">
<?php include "myJs.php"; ?>;
</script>
This way, only the browser could see the js file contents.
Another interesting idea is that at the end of your script, you delete the contents of your dom script element, so that after the browser evaluates your code, the code disappears:
<script id="erasable" type="text/javascript">
//your code goes here
document.getElementById('erasable').innerHTML = "";
</script>
These are all just simple hacks that cannot, and I can't stress this enough: cannot, fully protect your js code, but they can sure piss off someone who is trying to "steal" your code.
Update:
I recently came across a very interesting article written by Patrick Weid on how to hide your JS code, and he reveals a different approach: you can encode your source code into an image! Sure, that's not bulletproof either, but it's another fence that you could build around your code.
The idea behind this approach is that most browsers can use the canvas element to do pixel manipulation on images. And since a canvas pixel is represented by 4 values (RGBA), each channel can hold a value in the range 0-255. That means that you can store a character (actually its ASCII code) in every channel. The rest of the encoding/decoding is trivial.
The only thing you can do is obfuscate your code to make it more difficult to read. No matter what you do, if you want the javascript to execute in their browser they'll have to have the code.
Just off the top of my head, you could do something like this (if you can create server-side scripts, which it sounds like you can):
Instead of loading the script like normal, send an AJAX request to a PHP page (it could be anything; I just use PHP myself). Have the PHP locate the file (maybe on a non-public part of the server), open it with file_get_contents, and return (read: echo) the contents as a string.
When this string returns to the JavaScript, have it create a new script tag, populate its innerHTML with the code you just received, and attach the tag to the page. (You might have trouble with this; innerHTML may not be what you need, but you can experiment.)
If you do this a lot, you might even want to set up a PHP page that accepts a GET variable with the script's name, so that you can dynamically grab different scripts using the same PHP. (Maybe you could use POST instead, to make it just a little harder for other people to see what you're doing. I don't know.)
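A rough sketch of that flow (get-script.php is a hypothetical endpoint that echoes the requested file's contents):
fetch('/get-script.php?name=main')
    .then(function (response) { return response.text(); })
    .then(function (code) {
        var tag = document.createElement('script');
        tag.textContent = code; // textContent tends to be more reliable than innerHTML for script elements
        document.head.appendChild(tag); // the browser executes the code once the tag is attached
    });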
EDIT: I thought you were only trying to hide the location of the script. This obviously wouldn't help much if you're trying to hide the script itself.
Google Closure Compiler, YUI Compressor, Minify, /packer/... etc., are options for compressing/obfuscating your JS code. But none of them can actually hide your code from users.
Anyone with decent knowledge can easily decode/de-obfuscate your code using tools like JS Beautifier. You name it.
So the answer is, you can always make your code harder to read/decode, but for sure there is no way to hide.
Forget it, this is not doable.
No matter what you try, it will not work. All a user needs to do to discover your code and its location is to look in the Net tab in Firebug, or use Fiddler to see what requests are being made.
From my knowledge, this is not possible.
Your browser has to have access to JS files to be able to execute them. If the browser has access, then browser's user also has access.
If you password protect your JS files, then the browser won't be able to access them, defeating the purpose of having JS in the first place.
I think the only way is to put the required data on the server and allow only a logged-in user to access it as needed (you can also do some calculations server-side). This won't protect your JavaScript code, but it will make it inoperable without the server-side code.
I agree with everyone else here: With JS on the client, the cat is out of the bag and there is nothing completely foolproof that can be done.
Having said that, in some cases I do this to put some hurdles in the way of those who want to take a look at the code. This is how the algorithm works (roughly):
The server creates 3 hashed and salted values: one for the current timestamp, and the other two for each of the next 2 seconds. These values are sent over to the client via Ajax as a comma-delimited string, from my PHP module. In some cases, I think you can hard-bake these values into a script section of the HTML when the page is formed, and delete that script tag once the hashes have been used. The server is CORS-protected and does all the usual SERVER_NAME etc. checks (which is not much of a protection, but at least provides some modicum of resistance to script kiddies).
It would also be nice if the server checked that it was indeed an authenticated user's client doing this.
The client then sends the same 3 hashed values back to the server through an Ajax call to fetch the actual JS that I need. The server checks the hashes against the current timestamp there... The three values ensure that the data is being sent within the 3-second window, to account for latency between the browser and the server.
The server needs to be convinced that one of the hashes matches correctly; if so, it sends the crucial JS back to the client. This is a simple, crude "one-time-use password" without the need for any database at the back end.
This means that any hacker has only the 3-second window from the generation of the first set of hashes to get to the actual JS code.
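A rough PHP sketch of the idea (the function names and salt handling are illustrative, not the actual module from this answer; the salt would normally be derived from the logged-in user's credentials):
<?php
// Step 1: generate hashes for "now" and the next two seconds; send them to the client
function makeHashes($salt) {
    $now = time();
    $hashes = array();
    foreach (array(0, 1, 2) as $offset) {
        $hashes[] = hash('sha256', $salt . ($now + $offset));
    }
    return implode(',', $hashes); // returned to the client via Ajax
}

// Step 2: when the client sends the hashes back, accept them only if one
// matches the hash of the *current* second, i.e. within the 3-second window
function hashesAreValid(array $clientHashes, $salt) {
    $expected = hash('sha256', $salt . time());
    foreach ($clientHashes as $h) {
        if (hash_equals($expected, $h)) {
            return true;
        }
    }
    return false; // too late (or forged): do not serve the sensitive JS
}
?>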
The entire client code can be inside an IIFE, so some of the variables inside the client are even harder to read from the inspector console.
This is not a deep solution: a determined hacker can register, get an account, and then ask the server to generate the first three hashes (by doing tricks to get around Ajax and CORS), and then make the client perform the second call to get to the actual code; but it is a reasonable amount of work.
Moreover, if the salt used by the server is based on the login credentials, the server may be able to detect which user tried to retrieve the sensitive JS. (The server needs to do some additional work regarding the behaviour of the user AFTER the sensitive JS was retrieved, and block the person if, for example, they did not do some other activity that was expected.)
An old, crude version of this was done for a hackathon here: http://planwithin.com/demo/tadr.html That will not work if the server detects too much latency and the request falls outside the 3-second window.
As I said in the comment I left on gion_13's answer before (please read), you really can't. Not with JavaScript.
If you don't want the code to be available client-side (i.e. stealable without great effort), my suggestion would be to use PHP (or ASP, Python, Perl, Ruby, JSP + Java servlets), which is processed server-side so that only the results of the computation/code execution are served to the user. Or, if you prefer, even Flash or a Java applet, which allow client-side computation/code execution but are compiled and thus harder to reverse-engineer (though not impossible).
Just my 2 cents.
You can also set up a MIME type mapping so that files served as application/javascript are run through PHP, .NET, Java, or whatever language you're using. I've done this for dynamic CSS files in the past.
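For example, with Apache and mod_php (a sketch, assuming that setup; the directory path is just an example, and IIS or other servers have their own handler mappings) you can have .js files in a given folder parsed by PHP before being served:
<Directory "/var/www/html/dynamic-js">
    # Run *.js files in this folder through the PHP interpreter
    AddType application/x-httpd-php .js
</Directory>
The PHP code in those files would then set its own Content-Type header, as described in the earlier answer.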
I know this is a late answer to the question, but I just thought of something.
I know it might be laborious, but at least it might still work.
The trick is to create a lot of server-side encoding scripts. They have to be decodable (for example, a script that replaces all vowels with numbers and adds the letter 'a' after every consonant, so that the word 'bat' becomes 'ba1ta'). Then create a script that randomly picks one of the encoding scripts and creates a cookie with the name of the encoding script being used. (Quick tip: try not to use the actual name of the encoding script as the cookie value. For example, if our cookie is named 'encoding_script_being_used' and the randomizing script chooses an encoding script named MD10, don't use MD10 as the value but something like 'encoding_script4567656', just to prevent guessing.) After the cookie has been created, another script will check for the cookie named 'encoding_script_being_used', get its value, and determine which encoding script is being used.
The reason for randomizing between the encoding scripts is that the server-side language randomizes which script will be used to encode your javascript.js, and then creates a session or cookie so it knows which encoding script was used.
Then the server-side language also encodes your javascript.js and puts it in a cookie.
So, to summarize with an example:
PHP randomizes between a list of encoding scripts and encodes javascript.js, then it creates a cookie telling the client-side language which encoding script was used; the client-side language then decodes the javascript.js cookie (which is obviously encoded),
so people can't steal your code.
But I would not advise this, because:
it is a long process
it is too stressful
You could use NW.js; I think it is helpful since it can compile your code to a binary, which you can then use to build Windows, Mac, and Linux applications.
This method partially works if you do not want to expose the most sensitive part of your algorithm.
Create WebAssembly modules (.wasm), import them, and expose only your JS glue code in your workflow. In this way the algorithm is protected, since it is extremely difficult to revert the compiled module into a more human-readable format.
After having produced the wasm module and imported it correctly, you can use your code as you normally do:
<body id="wasm-example">
  <script type="module">
    import init from "./pkg/glue_code.js";
    init().then(() => {
      console.log("WASM Loaded");
    });
  </script>
</body>
If you do script src="/path/to/nonexistent/file.js" in an HTML file and call that in a browser, and there are no dependencies or resources anywhere else in the HTML file that expect the file or code therein to actually exist, is there anything inherently bad-practice about doing this?
Yes, it is an odd question. The rationale is the developer is dealing with a CMS that allows custom (self-contained) javascript files to be provided in certain circumstances. The problem is the CMS is not very flexible when it comes to creating conditional includes for javascript. Therefore it is easier to just make references to the self-contained js files regardless of whether they are actually at the specified path.
Since no errors are displayed to the user, should this practice be considered a viable option?
Well, the major drawback is performance, since the browser will try (hard) to download the file and your server will look for it. The browser may end up downloading the 404 page instead, thus slowing down the page load.
If you have the script referred to in the <head> tag (not recommended for starters), it will slow down the initial page-render time somewhat too.
If instead of quickly returning a 404, your site just accepts the connection and then never responds, this can cause the page to take an indefinite amount of time to load and, in some cases, lock up the entire user interface.
(At least that was the case with one revision of Firefox; I hope they've fixed it since I saw that happen ~2 years ago.*)
You should at least put the tags as low in the page order as you can afford to, to remedy this problem.
Your best bet by far is to have one consistent no-op URL that is used as a fill-in for all "doesn't exist" JavaScript files, and that returns a 0-byte response with HTTP headers telling the UA to cache it till the cows come home. That should negate most of your server<->client load penalties beyond the first hit (and it should hardly hurt people even on ye olde dial-up).
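The response for that no-op fill-in URL only needs to look something like this (header values are illustrative):
HTTP/1.1 200 OK
Content-Type: application/javascript
Content-Length: 0
Cache-Control: public, max-age=31536000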
*Lesson learned: don't put script-src references in head, especially for 3rd-party scripts hosted outside your machine, because then you can have the joy of having clients be able to access your website, but run the risk of the page being inoperable because of a bit of advertising JS that was inaccessible due to some internet weirdness. Even if they're a reputable-ish 3rd-party.
If your web server is configured to do work on a 404 error ("you might be looking for this", etc) then you're also causing unnecessary load on the server.
You should ask yourself why you were too lazy to test this yourself. :)
I tested 1000 randomized JavaScript filenames and it added a negligible amount of load time, so no, it doesn't make a difference. Example:
<script src="/7701992spolsky.js"></script>
This was on my local machine however, so it should take N * roundTripTime for the browser to figure out for remote servers, where N is the number of bad scripts.
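If you want to reproduce the test, a quick sketch (the filenames are randomized junk that is very unlikely to exist):
// Inject 1000 script tags pointing at nonexistent local files
for (var i = 0; i < 1000; i++) {
    var s = document.createElement('script');
    s.src = '/' + Math.floor(Math.random() * 1e7) + 'spolsky.js';
    document.head.appendChild(s);
}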
If however, you have random domain names that don't exist, like
<script src="http://www.randomsite7701992.com/spolsky.js"></script>
then it will take a long f-in time.
If you choose to implement it this way, you could tune the web server so that, if the referenced JS file is not found, instead of a 404 it returns a redirect (301) to an empty/default JS file.
If you are using asp.net you can look into using custom handlers (ASHX files).
Here's an example:
public class JavascriptHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";

        // Some code to check if the JavaScript exists
        // (JavascriptExists and GetJavascript are placeholders for your own logic)
        string js = "";
        if (JavascriptExists())
        {
            js = GetJavascript();
        }
        context.Response.Write(js);
    }

    // Required by IHttpHandler
    public bool IsReusable
    {
        get { return true; }
    }
}
Then in your HTML header you could declare a script tag pointing to the custom handler:
<script src="/js/javascripthandler.ashx"></script>