What's the difference between static and non-static resources? - javascript

I am primarily a front-end developer/designer; however, recently I've been exploring end-to-end solutions. Yesterday I finished a TODO application using the MEAN stack and would like to start exploring deployment options for my VPS.
That being said, I've been advised to use nginx as a reverse proxy for serving up static resources. Unfortunately, I'm getting stuck on simple questions.
What are some examples of static resources?
What factors define static resources?
What are examples of non-static resources?
Lastly, are there any weird edge-cases I should be aware of?
Sorry about the noobness of this question.

In this case, a static resource refers to one that is not generated with code on the fly, meaning that its contents won't change from request to request.
Images, JavaScript, CSS, etc., are all candidates for this. Basically, you set a large cache time for these resources, and your Nginx servers can keep a copy on disk (or in Redis or something similar) so that they are ready to return to the client without hitting your application servers.
It's important to remember to use versioned file names when setting large cache times. Using header-image-20140608.png, for example, means you can deploy a later version without worrying about the old one still being in the cache.
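A minimal nginx sketch of this setup (paths, ports, and cache times are illustrative assumptions, not a drop-in config):

```nginx
# Serve versioned static assets straight from disk with a long cache time.
location /static/ {
    root /var/www/myapp;               # hypothetical document root
    expires 1y;                        # safe because file names are versioned
    add_header Cache-Control "public";
}

# Everything else is proxied to the application server.
location / {
    proxy_pass http://127.0.0.1:3000;  # hypothetical Node app
}
```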

A static resource is something that isn't generated dynamically.
An example of a static resource is an image. It's the same for each and every request. It's a file on the filesystem that doesn't require any processing - you simply tell nginx to send this file as-is to the user.
An example of a dynamic resource is JSON data specific to the user requesting it (it has to be generated specifically for that user).
With a dynamic resource there is also often your own domain-specific code being executed, a request to the database, etc.
The reason nginx should serve static content is that it excels at serving such content concurrently - it was designed exactly for this.
If you are using Ruby/Python/Node.js/Java etc., you can also serve static resources through these processes (just call File.open() and start streaming the data) - however, it would be much slower, and it would also lower the number of simultaneous dynamic requests you could serve.
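To make the distinction concrete, a rough Node.js sketch of both kinds of resource (file names and routes are made up):

```javascript
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  if (req.url === '/logo.png') {
    // Static: the bytes on disk are the response, no computation needed.
    // nginx can do this far more efficiently than the app process.
    res.writeHead(200, { 'Content-Type': 'image/png' });
    fs.createReadStream('./public/logo.png').pipe(res);
  } else if (req.url === '/api/me') {
    // Dynamic: the body depends on who is asking, so it must be generated.
    const user = { id: 42, name: 'Alice' }; // would come from a DB lookup
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(user));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
```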

A static resource is a resource that does not change frequently. It can be cached on the client's browser, which reduces load on the web server and makes the site load faster on the client's end.
Some examples are: images, JavaScript, CSS.
A dynamic resource is content that changes per request - mainly data that is specific to a particular user or item.
To make sure your static data reduces the load on your server and performs well on the client's end, you need to take care of various server-specific configurations, such as enabling compression of JS files and setting the cache headers for images properly.
When you change a file's content, make sure the browser does not pick up the stale version from its cache: attach a timestamp or version string to the URLs of these static resources so the updated resource is loaded when needed.
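For example, a version or timestamp query string in the URL is enough to make the browser treat the updated file as new (file names here are hypothetical):

```html
<!-- Bump the version whenever the file's content changes -->
<link rel="stylesheet" href="/css/site.css?v=20140608">
<script src="/js/app.js?v=20140608"></script>
```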

Static resources are resources that don't change and do not involve server-side code.
This typically means images, CSS, and sometimes client-side JavaScript.

Related

Difference between WebApp that connects to API vs backend rendering

Sometimes when I create basic web tools, I will start with a Node.js backend, typically creating an API server with ExpressJS. When certain routes are hit, the server responds by rendering HTML from EJS using the live state of the connection and then sends it over to the browser.
This app will typically expose a directory for the public static resources and will serve those as well. I imagine this creates a lot of overhead for this form of web app, but I'm not sure.
Other times I will start with an API (which could be the exact same nodeJS structure, with no HTML rendering, just state management and API exposure) and I will build an Angular2 or other HTML web page that will connect to the API, load in information on load, and populate the data in the page.
These pages tend to rely on a lot of AJAX calls and jQuery in order to refresh Angular components after a bunch of async callbacks get triggered. In this structure, I'll use a web server like Apache to serve all the files and define the routes, and the JS in the web pages will do the rest.
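Roughly, the two styles look like this - a minimal Express sketch with made-up names, where loadState stands in for however the app gathers its live state:

```javascript
const express = require('express');
const app = express();
app.set('view engine', 'ejs');

// Stand-in for however the app gathers its live state.
const loadState = async (req) => ({ user: req.ip, todos: [] });

// Style 1: backend rendering - the server builds the HTML with EJS.
app.get('/dashboard', async (req, res) => {
  res.render('dashboard', { state: await loadState(req) });
});

// Style 2: API only - the client fetches JSON and renders it itself.
app.get('/api/state', async (req, res) => {
  res.json(await loadState(req));
});

app.use(express.static('public')); // static assets exposed in both styles
app.listen(3000);
```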
What are the overall strengths and weaknesses of both? And why should I use one strategy versus the other? Are they both viable and dependent upon scale and use? I imagine horizontal scaling with load balancers could work in both situations.
There is no universally good or bad approach you could choose. Each of the approaches you described above has some advantages, and you need to decide which one suits your project best.
Some points that you might consider:
Server-side processing
Security - You don't have to expose sensitive information (API tokens, logins, etc.).
More control - You will have more control over what you do with your resources.
"Better" client support - Some clients (IE) do not support the same things as others. Rendering HTML on the server rather than manipulating it on the client will give you broader client support.
It can be simpler to pre-render your resources on the server than to deal with an asynchronous approach on the client.
SEO, social sharing, etc. - How your server sends resources is how bots see them. If you pre-render everything on the server, a bot will be able to scrape your site, tag it, etc. If you do it on the client, it will just see a non-processed page. That being said, there are ways to work around that.
Client-side processing
Waiting times. Doing work on the client side can improve your load times. But be careful not to do too much, since JS is single-threaded and heavy work will block your UI.
CDN - you can serve static resources (HTML, CSS, JS etc) from CDN which will be much faster than serving them from your server app directly
Testing - It is easy to mock backend server when testing your UI.
The client is a front-end for a particular application/device, etc. The more logic you put into the client, the more code you will have to replicate across different clients. Therefore, if you plan to have a mobile app, it will be better to have a collection of APIs to call rather than including your logic in the client.
Security - Whatever runs on the client can be fully read by the client. No matter how much you minify, compress, or encrypt everything, a resourceful person will always be able to do whatever they want with your code.
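A bare-bones example of the client-side approach - and a reminder of that last point, since anyone can read this code and call the endpoint directly (the endpoint name is made up):

```javascript
// Fetch data from the API and render it on the client.
// This code is fully visible to the user, so it must not contain
// secrets, and the server must never trust what it sends back.
fetch('/api/items')
  .then((res) => res.json())
  .then((items) => {
    const list = document.getElementById('items');
    for (const item of items) {
      const li = document.createElement('li');
      li.textContent = item.name;
      list.appendChild(li);
    }
  })
  .catch((err) => console.error('Failed to load items', err));
```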
I did not mark pro/con on each point on purpose because it is up to you to decide which it is.
This list could go on and on; I didn't add more points because it is very subjective, and in the end it depends on the developer and the application.
I personally tend to choose "client making ajax requests" approach or blend of both - pre-render something on the server and client takes care of rest. Be careful with the latter though as it will break your automated tests, IDE integration etc. if not implemented correctly.
Last note - You should always do crucial validations on the server. Never rely on data from client.
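As a tiny illustration of that last point, a hedged Express sketch (route and fields are hypothetical):

```javascript
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/items', (req, res) => {
  const { name, price } = req.body;
  // Re-validate on the server even if the client already validated:
  // the client can be bypassed entirely (curl, modified JS, etc.).
  if (typeof name !== 'string' || name.length === 0 || name.length > 100) {
    return res.status(400).json({ error: 'invalid name' });
  }
  if (typeof price !== 'number' || price < 0) {
    return res.status(400).json({ error: 'invalid price' });
  }
  res.status(201).json({ ok: true }); // would normally persist to a DB here
});

app.listen(3000);
```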

Proper way to inject javascript code?

I would like to create a site with functionality similar to what translate.google.com or hypothes.is has: users can enter any address and the site opens with an additional menu. I guess this is done with some middleware/proxy solution where JavaScript is injected into the response, but I'm not sure. Do you have any idea how to implement the same feature? How can it work with secured (https) sites?
Many Thanks
The entire site is fetched by the server, the source code is parsed, code injected and then sent back to the requesting client.
It works with SSL just fine, because it's two separate requests - the request that gets sent to the endpoint is not seen by the user.
The certificate is valid because the page is being served under Google's domain.
Actually implementing something like this could be quite complicated (see the sketch after this list), because:
The HTML you are parsing won't necessarily conform to your expectations, or even be valid
The content you're forwarding to the client will likely reference resources with a relative URI. This means that you also need to intercept these requests and pull the resources (images, external css, js, etc) and serve them back to the client - and also rewrite the URLs.
It's very easy to break content by injecting arbitrary javascript. You need to be careful that your injected code is contained and won't interfere with any existing code on the site.
It's very common for an implementation such as this to have non-obvious security concerns, often resulting in XSS attacks being possible.
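A very rough Node sketch of the fetch-rewrite-inject idea (it assumes Node 18+ for the built-in fetch, handles none of the edge cases above, and all names are made up):

```javascript
const express = require('express');
const app = express();

app.get('/proxy', async (req, res) => {
  const target = new URL(req.query.url); // e.g. /proxy?url=https://example.com
  const response = await fetch(target);  // server-side request; the user never sees it
  let html = await response.text();

  // Naive fix for relative URIs: a <base> tag pointing at the original origin.
  html = html.replace('<head>', `<head><base href="${target.origin}/">`);

  // Inject our own script before the closing body tag.
  html = html.replace('</body>', '<script src="/overlay.js"></script></body>');

  res.send(html);
});

app.listen(3000);
```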

How to load a javascript file from the best source

I have a few web servers in different locations and would like to load my javascript files from the fastest (nearest??) server. For example in Location A, I would expect the users to get their files from servers in that Location, but users from Location B would get their files from other servers, hopefully servers from location B, but that is not necessary.
I have found how to load javascript files conditionally, and I think that is a good start. I just need a way to find which is the best source(faster response).
Thanks,
Just use a CDN if you want that minimal performance advantage. The difference would be a few milliseconds.
There is a list of CDNs at http://jquery.com/download/#using-jquery-with-a-cdn
The only advantage of using a CDN is that the user may have downloaded the jQuery library earlier from another website, so the jQuery library is reused from its cache.
If you are encountering performance problems, try profiling the website and check the amount of time that a resource takes to run or load.
This isn't really a problem the client should solve. You should put your server behind a proxy that balances the load. If the proxy's bandwidth isn't enough, then I think you're out of luck. A quick and dirty solution is to do a Math.random() in the client side and choose the server based on that. It should balance the load pretty evenly.
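If you do go the quick-and-dirty route, it is only a few lines (the mirror URLs are hypothetical):

```javascript
// Pick a mirror at random and load the script from it.
const mirrors = [
  'https://eu.example.com/js/app.js',
  'https://us.example.com/js/app.js',
];
const src = mirrors[Math.floor(Math.random() * mirrors.length)];

const script = document.createElement('script');
script.src = src;
script.onerror = () => {
  // Crude fallback: try the other mirror if this one is down.
  script.remove();
  const fallback = document.createElement('script');
  fallback.src = mirrors.find((m) => m !== src);
  document.head.appendChild(fallback);
};
document.head.appendChild(script);
```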
If you were to measure the response time from the mirror servers, you would just introduce more load. Let's say we have a way to determine the response time. You would either request the file from all servers, meaning you just made everything worse, or you would wait for server1 and, if it didn't respond in time, move to server2. But by doing this you introduced load on server1.
Also, pinging the server isn't a real indicator of the available performance of that server. The server might be able to respond quickly, since a ping response is short and requires no real IO, but requesting a file could mean reading from the disk.

How can I ambidextrously handle different hosts when Django is running Gunicorn behind Apache?

I have a Django installation that I would like to run multiple variations of the same site: same data, different static content, with an ultimate goal of demonstrating XYZ as implemented with various JavaScript frameworks. I would like to have different home pages load, and those pull their own distinct static content. (All intended projects are SPAs.)
I tried the solution at How can I get the domain name of my site within a Django template?, but on my system the incumbent site doesn't give a hostname of 'pragmatometer.com'; it gives a hostname of 'localhost:8000', because Django / Gunicorn is serving up pages as localhost. I tried specifying in /etc/hosts that pragmatometer.com is 127.0.0.1 and having Apache proxy to pragmatometer.com, but that resulted in an error. That leaves open the prospect of running separate hosts on different ports, which should be proxied as distinct, or making the homepage redirect to a URL-specific landing page, a solution which would sacrifice the clean URL of xyz.pragmatometer.com to demonstrate the XYZ framework implementation. I'm seeing multiple ways of duct taping it with JavaScript, only one or two of which I would want a future boss to see...
I would ideally like to have multiple (sub)domains' root URL's pulling a subdomain-specific homepage and the /load/*, /save/* etc. consistent across them. I would also like to have the root URL's pulling their own CSS and JavaScript, but that's easy enough if I can get the root URL working appropriately.
The best solution I am seeing so far is having separate server processes listening on the same IP, but having isomorphic servers running on different ports and proxied by different Apache VirtualHosts. Either that or having JavaScript detect the URL and overwrite the page with the "real" index for the domain, which has a bit of a smell.
Comments about a better solution or how to execute the above intent well?
--EDIT--
Or another approach which might be a little cleaner:
Have a home page that loads the contents of /framework/ for each framework, and then document.write()s it after the page is loaded enough for a document.write() to clobber existing page contents.
If I used jQuery to clobber and load a page in this fashion, would it leave behind any pollution that would interfere with frameworks working appropriately?
Your stack looks kinda crazy.
You want one webserver with Django which can be accessed by multiple domains, and each domain causes the Django application to serve different content. Did I understand you correctly?
If yes, you might have more success replacing Apache with Nginx. It can resolve the requested hostname and decide how to route the request:
What's the difference of $host and $http_host in Nginx
Multiple Domain Hosting With One Django Project
Update
Relevant nginx documentation for distinguishing between different hostnames:
http://nginx.org/en/docs/http/request_processing.html
http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name
Relevant nginx documentation for adding request headers:
http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header
Also see this answer:
Adding and using header (HTTP) in nginx
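For illustration, a sketch of what that can look like in nginx - two server blocks resolving different hostnames and proxying to the same Gunicorn instance, passing the Host header through so Django can tell them apart (domains and port are assumptions):

```nginx
server {
    listen 80;
    server_name xyz.pragmatometer.com;

    location / {
        proxy_pass http://127.0.0.1:8000;   # the shared Gunicorn instance
        proxy_set_header Host $host;        # let Django see the real hostname
    }
}

server {
    listen 80;
    server_name abc.pragmatometer.com;      # hypothetical second framework demo

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
```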

Download one file, with pieces stored on more than one server (HTTP)

I am working on a file upload system which will store individual parts of large files on more than one server. So the distribution of a 1GB file will look something like this:
Server 1: 0-128MB
Server 2: 128MB-256MB
Server 3: 256MB-384MB
... etc
The intention of this is to allow for redundancy (each part will exist on more than one server), security (no one server has access to the entire file), and cost (bandwidth expenses are distributed).
I am curious if anyone has an opinion on how I might be able to "trick" web browsers into downloading the various parts all in one link.
What I had in mind was something like:
Browser is linked to Server 1, which provides a content-size of the full file
Once 128MB is served, Server 1 will intentionally close the connection
Hopefully, the browser will try to restart the download, requesting Server 1
Server 1 provides a 3XX redirect to Server 2
Browser continues downloading from Server 2
I don't know for certain that my example works, as I haven't tested it yet. I was curious if there were other solutions someone might have?
I'd like to make the whole process as easy as possible (ideally requiring no work beyond a simple download). I don't want the users to have to use another program (i.e. cat'ing the files together). I'd also like to not use a proxy server, since it would incur extra bandwidth costs.
As far as I'm aware, there is no JavaScript solution for writing a file; if there were one, that would be great.
AFAIK this is not possible by using the HTTP protocol. You can probably use a custom browser extension but it would depend on the browser. Another alternative is to create a Java applet that would download the file from different servers. The applet can accept the URLs to the different servers as parameters.
To save the generated file:
https://stackoverflow.com/a/4551467/329062
That solution stores the file in memory though, so it won't work with very large files.
You can download the partial files into a JS variable using JSONP. That will also let you get around the same-origin policy.
JavaScript's security model will only allow you to access data from the same origin the JavaScript came from - i.e. not multiple servers.
If you are going to have the file bits on multiple servers, you will need the user to load the web page, fetch the bits, and then finally stick them together in the correct order. If you can manage to get all your users to do this (correctly), you are a better man than I.
It's possible to do in modern browsers over standard HTTP.
You can use XHR2 with CORS to download file chunks as ArrayBuffers, merge them with the Blob constructor, and use createObjectURL to send the merged file to the user.
However, I suspect that browsers will store these objects in RAM, so it's probably a bad idea to use it for large files.
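For illustration, the same idea with the newer fetch API instead of raw XHR2 (chunk URLs are made up; the merged file lives in RAM, hence the caveat above):

```javascript
async function downloadInParts(partUrls, filename) {
  // Fetch each chunk; CORS must be enabled on every server.
  const buffers = [];
  for (const url of partUrls) {
    const res = await fetch(url);
    buffers.push(await res.arrayBuffer());
  }

  // Merge the chunks and hand the result to the user as a download.
  const blob = new Blob(buffers);
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();
  URL.revokeObjectURL(link.href);
}

downloadInParts(
  ['https://s1.example.com/file.part1', 'https://s2.example.com/file.part2'],
  'bigfile.bin'
);
```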
