So, let me explain this better, since I'm pretty sure the title doesn't say much.
I'm in charge of making some improvements to the company's system. Currently, the system saves each image as a file and inserts the path to that image into the database.
Then what's the problem, if it works? Well, if the server gets a lot of images being inserted all the time, it will need some upgrades, so my first thought was: why don't I save the image data to the database and load it through an AJAX call? That seemed nice: it wouldn't overload the server with gigabytes of image files and there wouldn't be problems with duplicates and such. But when I call the AJAX method from jQuery, the data is sent to the server side via the query string, and if the data is longer than about 6000 characters, as it usually is once I use base64 encoding, I can't call the method at all.
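To illustrate the size problem: with a GET, the base64 string ends up in the query string and hits the length limit, whereas a POST would carry it in the request body. This is only a rough sketch; the URL and field names are placeholders, not my actual method:

```javascript
// Sketch only: '/SaveImage' and 'imageData' are placeholder names.
var base64Data = '...';  // the base64-encoded image string (placeholder)

// A GET appends the data to the query string and breaks past a few thousand characters:
//   $.ajax({ url: '/SaveImage?imageData=' + base64Data });
// A POST sends the same data in the request body instead, so the URL stays short:
$.ajax({
    url: '/SaveImage',
    type: 'POST',
    data: { imageData: base64Data },  // goes in the body, not the query string
    success: function (response) {
        console.log('image saved', response);
    }
});
```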
My other thought was to send the image to the server side and generate the data to store in the database there; when the user wants to see the image, the server would transform the data back into an image and send it to the client side. But the problem is that there are lots of users on this system, and sending/receiving that many images would give the server memory problems because of the high number of requests.
So, to make a long story short: what is the best way to send and receive images between client and server without overloading the server?
P.S.: Sorry for the long post, I wanted to make the question clear.
Thanks in advance.
The server will be overloaded even more by storing the images in the DB. Most web servers serve static files (like images) very efficiently.
You have to think more about the problem before trying to solve it:
Is your problem bandwidth overload? Try resizing the images client side before upload (you can use tools like Plupload; see the sketch after this list). If you can't, at least resize them server side so you serve them smaller and save on download bandwidth.
Is your problem hard drive space overload? Again, resize the images client or server side.
Is your problem CPU overload? Try to find the code that is the root cause of the spikes. Be aware that lots of requests will cause high CPU, and you might need a beefier server (or another web server).
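As a rough illustration of the client-side resize idea, here is a minimal sketch using a plain canvas (no library; the maximum dimensions and JPEG quality are arbitrary examples):

```javascript
// Sketch: downscale an image file on the client before uploading it.
// The max dimensions and the 0.8 JPEG quality are arbitrary examples.
function resizeImage(file, maxWidth, maxHeight, callback) {
    var img = new Image();
    img.onload = function () {
        var scale = Math.min(maxWidth / img.width, maxHeight / img.height, 1);
        var canvas = document.createElement('canvas');
        canvas.width = Math.round(img.width * scale);
        canvas.height = Math.round(img.height * scale);
        canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
        canvas.toBlob(callback, 'image/jpeg', 0.8);  // resized blob, ready to upload
    };
    img.src = URL.createObjectURL(file);
}

// usage: resizeImage(fileInput.files[0], 1024, 1024, function (blob) { /* upload blob */ });
```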
Related
I am trying to create an input form that allows for multiple files to be uploaded to my server. The way I currently have the system set up is shown in this diagram:
Essentially:
The website runs locally on each client's machine. The website makes XMLHttpRequests to the Node.js webhost. If a request requires data from the Teradata database, the webhost in turn makes a request to a local server through the Node http module. The local server sends back the requested data, and the webhost then passes that data on to the client website.
The issue I am currently running into is when trying to make a POST XMLHttpRequest from the Website to the Webhost. With a small amount of data it works without issue, but when I try to pass a larger amount, such as the binary of an image in string form, the connection either ends or the end of the request message somehow gets lost; I don't really know how to tell which is happening. Since I want to push the image from the Website all the way to the Local Server and insert it into the database, I need to make sure all of it reaches the webhost correctly. I ran into this problem going the other direction, from the Local Server to the Website, but I fixed that by "chunking" the data and sending it in smaller packages. I can't seem to find a way to do that with XMLHttpRequests.
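For reference, the kind of chunking I have in mind on the upload side looks roughly like this (the chunk size, endpoint and field names are made up, and the webhost would have to reassemble the pieces in order):

```javascript
// Sketch only: endpoint, chunk size and field names are placeholders.
// Split the string into pieces and POST them one at a time; the webhost
// (and ultimately the local server) would need to reassemble them in order.
function sendInChunks(data, chunkSize, url) {
    var total = Math.ceil(data.length / chunkSize);
    function sendChunk(index) {
        if (index >= total) return;
        var xhr = new XMLHttpRequest();
        xhr.open('POST', url, true);
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.onload = function () { sendChunk(index + 1); };  // send the next chunk once this one succeeds
        xhr.send(JSON.stringify({
            index: index,
            total: total,
            payload: data.slice(index * chunkSize, (index + 1) * chunkSize)
        }));
    }
    sendChunk(0);
}

// usage (placeholder variable): sendInChunks(imageAsString, 50000, '/upload-chunk');
```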
A few things to note: I cannot change the type of database, and I cannot change the overall structure of the network.
If anyone has any insight into how to troubleshoot this, or suggestions for methods other than XMLHttpRequest that can send larger amounts of data, I would much appreciate it.
I have an idea to make something similar to Workflowy but with some new features.
(Workflowy is basically a note-taking app which beautifully organises all your notes as an endless tree)
At first, I implemented the logic in Python. It works in a terminal by printing notes line by line and then waiting for a command.
Is it a good idea to keep all the logic on the server and use JS only to render items and to send commands to the server?
For instance, if I want to move the entire folder into another folder, there are two ways of doing this:
Way 1: With Python, which receives a command from JS ('move folder x to folder y'), processes it, and sends back a result to render (see the sketch after these two options).
Way 2: With JS, which then has to understand the whole folder structure and logic. In this case, the app would use the server only for storing data.
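Roughly, way 1 would mean the client just sends a command and redraws whatever comes back, something like this (the endpoint and payload shape are only illustrative):

```javascript
// Sketch: the client knows nothing about the tree logic; it just sends a command
// and re-renders whatever the server returns. Endpoint and payload are made up.
function renderTree(tree) {
    console.log('redraw the outline here', tree);  // placeholder renderer
}

fetch('/api/command', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: 'move', folder: 'x', target: 'y' })
})
    .then(function (response) { return response.json(); })
    .then(function (result) { renderTree(result.tree); });
```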
I have a feeling that way 2 (using JS for all the logic and Python only for saving data) is more appropriate, but it means I would have to rewrite everything from scratch.
Is way 1 also reasonable?
Many thanks in advance!
It depends on the application you are making.
For example, say you want to display thousands of records in an HTML page and the data is stored in a JSON file. If you send the HTML file and the JSON file to the client and then run a script on the client that reads the JSON and renders it into the HTML, it will be slower, because the client device may not be as powerful as the server.
So for performance, do the heavy tasks on the server side. This may cause a little more network usage, because the client has no data in a formatted form, so whenever a new operation on the data is needed, you have to request the server again.
In the opposite case, you can save bandwidth at the cost of slightly lower performance by doing some of the heavy tasks on the client side.
It also depends on what kind of device is used on the client side.
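As a rough sketch of the trade-off (the URLs, markup and element ids are only examples), the client either downloads raw JSON and formats it itself, or asks the server for an already formatted fragment:

```javascript
// Option A: client does the heavy work - download raw JSON and build the markup itself.
fetch('/data.json')
    .then(function (res) { return res.json(); })
    .then(function (rows) {
        document.getElementById('list').innerHTML =
            rows.map(function (row) { return '<li>' + row.name + '</li>'; }).join('');
    });

// Option B: server does the heavy work - the client just inserts pre-rendered HTML,
// but must ask the server again for every new view of the data.
fetch('/data.html?page=1')
    .then(function (res) { return res.text(); })
    .then(function (html) {
        document.getElementById('list').innerHTML = html;
    });
```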
I have a few web servers in different locations and would like to load my JavaScript files from the fastest (nearest?) server. For example, I would expect users in Location A to get their files from servers in that location, while users in Location B would get their files from other servers, ideally servers in Location B, but that is not strictly necessary.
I have found how to load JavaScript files conditionally, and I think that is a good start. I just need a way to find out which source is best (fastest response).
Thanks,
Just use a CDN if you want that minimal performance advantage; the difference will only be a few milliseconds.
There is a list of CDNs at http://jquery.com/download/#using-jquery-with-a-cdn
The only advantage of using a CDN is that the user may have already downloaded the jQuery library from another website, so the library is reused from their cache.
If you are encountering performance problems, try profiling the website and check the amount of time each resource takes to run or load.
This isn't really a problem the client should solve. You should put your servers behind a proxy that balances the load. If the proxy's bandwidth isn't enough, then I think you're out of luck. A quick and dirty alternative is to use Math.random() on the client side and choose a server based on that; it should balance the load pretty evenly.
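A rough sketch of that quick-and-dirty approach (the host names and script path are placeholders):

```javascript
// Sketch: pick one of the mirrors at random and load the script from it.
// Over many users this spreads requests roughly evenly across the servers.
var hosts = ['https://serverA.example.com', 'https://serverB.example.com'];  // placeholder hosts
var host = hosts[Math.floor(Math.random() * hosts.length)];

var script = document.createElement('script');
script.src = host + '/js/app.js';  // placeholder path
document.head.appendChild(script);
```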
If you were to measure the response time of the mirror servers, you would just introduce more load. Let's say we have a way to determine the response time. You would either request the file from all servers, in which case you've just made everything worse, or you would wait for server1 and, if it didn't respond in time, move on to server2. But by doing that you've still added load to server1.
Also, pinging the server isn't a real indicator of the available performance of that server. The server might respond to a ping quickly because the response is short and requires no real I/O, whereas requesting a file may mean reading from disk.
I'm making a JavaScript web app that must pull hundreds of fragments of JSON off the server, and I'm concerned about the hit from so many HTTP GETs. There's no opportunity to concatenate the fragments (so as to have one GET) because neither the server nor the client knows in advance which fragments will be needed until runtime.
My question is: would it be insanity to use something like WebSocket or WebRTC as the transport, with the client firing requests down the socket and the server grabbing each file as requested and firing it back down the socket?
Assuming the server is fast enough at loading the files, this would be way more responsive than HTTP, right?
I guess I lose out on caching, but I can live with that.
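To make the idea concrete, something along these lines is what I'm picturing (the URL and message format are invented):

```javascript
// Sketch: one socket, many small request/response pairs matched by an id.
// The URL and the message shape are invented for illustration.
var socket = new WebSocket('wss://example.com/fragments');
var pending = {};
var nextId = 0;

socket.onmessage = function (event) {
    var msg = JSON.parse(event.data);   // expected shape: { id: ..., body: ... }
    pending[msg.id](msg.body);          // hand the fragment to whoever asked for it
    delete pending[msg.id];
};

function getFragment(name, callback) {
    var id = nextId++;
    pending[id] = callback;
    socket.send(JSON.stringify({ id: id, fragment: name }));
}

// usage: getFragment('user/42/settings', function (body) { /* use the fragment */ });
```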
What is the best way to handle pagination? Server side, or dynamically using JavaScript?
I'm working on a project that is heavy on AJAX and pulls in data dynamically, so I've been working on a JavaScript pagination system that uses the DOM, but I'm starting to think it would be better to handle it all server side.
What are everyone's thoughts?
The right answer depends on your priorities and the size of the data set to be paginated.
Server side pagination is best for:
Large data set
Faster initial page load
Accessibility for those not running javascript
Client side pagination is best for:
Small data set
Faster subsequent page loads
So if you're paginating for primarily cosmetic reasons, it makes more sense to handle it client side. And if you're paginating to reduce initial load time, server side is the obvious choice.
Of course, client side's advantage on subsequent page load times diminishes if you utilize Ajax to load subsequent pages.
Doing it on the client side makes the user download all the data up front, which might not be needed, and removes the primary benefit of pagination.
The best approach for this kind of AJAX app is to make an AJAX call to the server for the next page and update the current page with a client-side script.
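For example, a page change can be one small request that the client-side script uses to update the list (the endpoint, parameters and response fields are only illustrative):

```javascript
// Sketch: ask the server for just the page the user wants and swap it into the DOM.
// Endpoint, parameter names and response fields are only examples.
function loadPage(page) {
    $.getJSON('/items', { page: page, pageSize: 25 }, function (data) {
        $('#results').html(
            data.items.map(function (item) { return '<li>' + item.title + '</li>'; }).join('')
        );
        $('#pageInfo').text('Page ' + page + ' of ' + data.totalPages);
    });
}
```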
If you have large pages and a large number of pages, you are better off requesting pages in chunks from the server via AJAX. So let the server do the pagination, based on your request URL.
You can also pre-fetch the next few pages the user will likely view to make the interface seem more responsive.
If there are only a few pages, grabbing everything up front and paginating on the client may be the better choice.
Even with small data sizes, server-side pagination is usually the better choice: you will not have to worry later if your web application scales further.
And for larger data sizes the answer is obvious.
Server side - send to the client just enough content for the current view.
In a practical world of limits, I would page on the server side to conserve all the resources involved in sending the data. Also, the server needs to protect itself from a malicious or malfunctioning client asking for a HUGE page.
Once that code is happily chugging along, I would add "smarts" to the client to fetch the "next" and "previous" pages and hold them in memory. When the user pages to the next page, update your cache.
If the client software does this sort of page caching, do consider how fast your data ages (how likely it is to change) and whether you should check that your cached page of data is still valid. Maybe re-request it if it is more than 2 minutes old, or keep a "dirty" flag on it, something like that. Hope you find this helpful. :)
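A minimal sketch of that kind of client-side page cache (the endpoint, the rendering stub and the two-minute age limit are just examples, following the idea above):

```javascript
// Sketch: keep recently fetched pages in memory and re-request them once they go stale.
// Endpoint, page numbering and the renderPage stub are placeholders.
var cache = {};
var MAX_AGE_MS = 2 * 60 * 1000;  // treat a cached page as stale after two minutes

function getPage(page, callback) {
    var entry = cache[page];
    if (entry && Date.now() - entry.fetchedAt < MAX_AGE_MS) {
        callback(entry.data);  // fresh enough, serve from memory
    } else {
        $.getJSON('/items', { page: page }, function (data) {
            cache[page] = { data: data, fetchedAt: Date.now() };
            callback(data);
        });
    }
}

function showPage(page) {
    getPage(page, renderPage);                         // display the requested page
    getPage(page + 1, function () {});                 // warm the cache for the likely "next"
    if (page > 1) getPage(page - 1, function () {});   // and the "previous"
}

function renderPage(data) {
    console.log('render this page of results', data);  // placeholder renderer
}
```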
Do you mean that your JavaScript has all the data in memory and shows one page at a time? Or that it downloads each page from the server as it's needed, using AJAX?
If it's the latter, you may also need to think about sorting. If you sort using JavaScript, you'll only be able to sort one page at a time, which doesn't make much sense. So your sorting should be done on the server.
I prefer server-side pagination. However, when implementing it, you need to make sure you're optimizing your SQL properly. For instance, I believe in MySQL, if you use LIMIT with a large OFFSET, the server still has to read and discard all the skipped rows, so you may need to rewrite your SQL to make proper use of the index.
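To illustrate one common way around that, keyset ("seek") pagination instead of a large OFFSET (the table and column names are made up):

```javascript
// Sketch: table and column names are invented. With a large OFFSET, MySQL still has to
// walk through and throw away all the skipped rows before returning the page:
var slowSql = 'SELECT id, title FROM items ORDER BY id LIMIT 25 OFFSET 100000';

// Keyset ("seek") pagination remembers the last id of the previous page instead,
// so the index on id can be used to jump straight to the next page:
var fastSql = 'SELECT id, title FROM items WHERE id > ? ORDER BY id LIMIT 25';
// ...execute fastSql with the last id from the previous page bound to the placeholder.
```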
One other thing to point out is that you will very rarely be limited to simply paging through a raw dataset.
You might have to search for certain terms in one or more of the columns you are displaying, then sort on a few columns, and then give users the ability to page through this filtered dataset.
In a situation like this, you have to consider whether that search and/or sort logic is better done client side or server side.
Another thing to consider is that Amazon's CloudSearch API gives you some very powerful search abilities, and you'll obviously want to let CloudSearch handle searching and sorting for you if you happen to have your data hosted there.