Display result (image) of computation in website - javascript

I have a Python script that generates a heightmap from parameters supplied via an HTML form. How do I display the resulting image on the website? I suppose the form's submit button will hit an endpoint with the given parameters, and the script that computes the heightmap then runs, but how do I get the resulting image back and display it on the page? Also, the computation takes a few seconds, so I suppose I need some type of task queue so the server doesn't hang in the meantime. Tell me if I'm wrong.
It's a bit of a general question because I myself don't know the specifics of what I need to use to accomplish this. I'm using Flask in the backend but it's a framework-agnostic question.

Save the image to a file. Return a webpage that contains an <IMG SRC=...> element. The SRC should be a URL pointing at the file.
For example, suppose you save the image to a file called "temp2.png" in a subdirectory called "scratch" under your document root. Then the IMG element would be <IMG SRC="/scratch/temp2.png">.
If you create and save the image in the same program that generates the webpage that refers to it, your server won't return the page until the image has been saved. If that only takes a few seconds, the server is unlikely to hang. Many applications would take that long to calculate a result, so the people who coded the server would make sure it can handle such delays. I've done this under Apache, Tomcat, and GoServe (an OS/2 server), and never had a problem.
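A minimal sketch of this in Flask (the framework the asker mentions). The generate_heightmap() function and the "seed" form field are placeholders for the asker's own script and parameters:

```python
import os
import uuid

from flask import Flask, request, url_for

app = Flask(__name__)
SCRATCH_DIR = os.path.join(app.root_path, "static", "scratch")
os.makedirs(SCRATCH_DIR, exist_ok=True)

def generate_heightmap(seed, path):
    # Placeholder: the real script would render a PNG here.
    with open(path, "wb") as f:
        f.write(b"\x89PNG placeholder")

@app.route("/heightmap", methods=["POST"])
def heightmap():
    seed = request.form.get("seed", "0")
    # A unique filename keeps two users from overwriting each other's image.
    filename = f"{uuid.uuid4().hex}.png"
    generate_heightmap(seed, os.path.join(SCRATCH_DIR, filename))
    # The returned page only refers to the saved file; the browser fetches
    # it with a second request once the page renders.
    src = url_for("static", filename=f"scratch/{filename}")
    return f'<html><body><img src="{src}"></body></html>'
```

The response isn't sent until generate_heightmap() returns, which is exactly the "server waits a few seconds" behavior described above.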
This method does have the disadvantage that you'll need to arrange for each temporary file to be deleted after an expiry period such as 12 hours or whenever you think the user won't need it any more. On the webpage you return, if the image is something serious that the user might want to keep, you could warn them that this will happen. They can always download it.
To delete the old files, write a script that checks when they were last updated, compares that with the current date and time, and deletes those files that are older than your expiry period.
You'll need a way to automatically run it repeatedly. On Unix systems, if you have shell access, the "cron" command is one way to do this. Googling "cron job to delete files older than 1 hour on web server" finds a lot of discussion of methods.
Be very careful when coding any automatic-deletion script, and test it thoroughly to make sure it deletes the right files! If you make your expiry period a variable, you can set it to e.g. 1 minute or 5 minutes when testing, so that you don't need to wait for ages.
There are ways to stream your image back without saving it to a file, but what I'm recommending is (apart possibly from the file deleter) easy to code and debug. I've used it in many different projects.

Related

Data synchronisation between webpage clients

First I'll describe my problem:
I have a web page that continuously writes data as JSON to a file. Everything is fine while only one instance of this page is open, but as soon as a second client connects and changes values, both instances write to the same file, and my C program that reads this file (it's used as a pipe) goes crazy, showing data from one page and then from the other (predictably).
Sorry if there has already been a question like this, but I don't know how to describe it in a few words for Google. :)
My first idea was to have the page read that file first to refresh its data, and then save the changes. But refreshing the data twice a second would make it impossible to change anything on the page (you start typing, but the page refreshes and wipes your changes).
Is there a way to synchronize data between clients of the same web page while still allowing them to make changes?

Upload files asynchronously then save data about it

I am building a way for users to upload tracks with information about that track but I would like to do this asynchronously much like YouTube does.
At the moment there is an API endpoint of tracks that accepts a POST request with the uploaded file and all the meta data. It processes the track, validates everything and will then save the path to the track and all of its meta data in the database. This works perfectly but I am having trouble thinking of ways to do this asynchronously.
The user flow will be:
1) User selects a track and it starts uploading
2) A form to fill in meta data shows and user fills it in
3) Track is uploaded with its metadata to the endpoint
The problem is that the metadata form and the file upload are now two separate entities, and the file can finish uploading before the metadata is saved, or vice versa. Ideally, to overcome this, both the track and the metadata would be saved in the browser, as a cookie or something, until both were complete. At that point both would be sent to the endpoint and no changes would be required on the back end. As far as I am aware there is no way of saving files client side like this. Oh, apart from the FileSystem API, which is pretty much deprecated.
If anyone has any good suggestions about how to do this it would be much appreciated. In a perfect world I would like there to be no changes to the back end at all but little changes are probably going to be required. Preferably no database alterations though.
Oh by the way I'm using laravel and ember.js just in case anyone knows of any packages already doing this.
I thought about this a lot a few months ago.
The closest solution I managed to put together is to upload the file and store its filename, size, upload time (this is crucial) and other attributes in the DB (as usual). Additionally, I added a column temporary (more like a flag) which is initially set to TRUE and is only cleared once the metadata has been sent.
Separately, I set up a cron job (I used Symfony2, but in Laravel it's all the same) that runs every 15-30 minutes and deletes those files (and the corresponding database records) that have temporary = TRUE and have exceeded the time window. In my case the window was 15 minutes, but you could make it coarser (every hour or so).
Hope this helps a bit :)

How to get the number of seconds a page loads after all data are shown on the page?

Is it possible to get the total number of seconds a page takes to fetch and display its data?
Like, from the moment I click a link to the moment all data are displayed on the page, across OnInit, Render, Page_Load, OnPreRender and so on... is it possible?
Thanks
Yes, it's possible. You just need to add the following to your web.config file and run the application; it will show you the loading time right below the page output after rendering. Scroll down to see the details.
<system.web>
<trace pageOutput="true" requestLimit="10" enabled="true" localOnly="true" traceMode="SortByTime" mostRecent="true"/>
</system.web>
Note: you need to place the trace element inside the system.web section that already exists in your web.config file.
You can easily check the time PHP needs to run the script: start to finish.
Simply store the time at start and end. Look here:
http://nl.php.net/manual/en/function.microtime.php
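The same start-to-finish timing, sketched in Python for comparison; PHP's microtime() maps roughly to time.perf_counter() here, and build_page() is a stand-in for whatever produces the HTML:

```python
import time

def build_page():
    # Stand-in for the real page-generation work.
    return "<html>" + "x" * 1000 + "</html>"

start = time.perf_counter()
html = build_page()
elapsed = time.perf_counter() - start
# `elapsed` is only the server-side generation time, not what the user sees.
```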
However, this is not the time the user experiences.
For example: if PHP needs 0.1 sec to produce the HTML, and the HTML contains 100 big images, the actual page load takes a lot longer than 0.1 sec.
The actual time the end user experiences depends on a lot of factors, like the web server in between (which needs to invoke PHP), network speed, caching, etc.
I think your best bet is to approach this via Javascript, and use the onLoad event handler that can be attached to body.
Use some external window to do the timing between clicking and the firing of onload.
Also, keep in mind that the result might differ for other visitors with different cache settings, different network speeds, etc. So you can only get an approximation.
It's possible, but kinda complicated, because the load time from click to full load consists of so many things:
request to the server (connection roundtrip, dns lookup sometimes etc)
request processing server side (this you can measure inside your ASP.NET code)
request load till any of the events fire
etc
Long story short, it would be impossible to measure this with any single method, and combining many would be a pain and would still not cover all the parts to be measured.
In this particular case the best thing you could do is: on click (on the link), send an Ajax request with the current timestamp (millisecond precision), then send a second request with the current timestamp on load, and subtract the two.
Send a variable from the server containing the current time before displaying the page.
On the HTML page, run a JavaScript function on onload(); this function is called after the page has loaded. Get the current time again in this function.
Compare the two time variables, the one sent from the server and the one from the onload() function, and you will get the number of seconds.

Monitoring User Sessions to Prevent Editing Conflict

I'm working on something similar to a pastebin (yeah, it's that generic) but allowing for multiple-user editing. The obvious problem is that of multiple users attempting to edit the same file. I'm thinking along the lines of locking down the file when one user is working on it (it's not the best solution, but I don't need anything too complex), but to prevent/warn the user I'd obviously need a system for monitoring each user's edit sessions. Working with a database and Ajax, I'm thinking of two solutions.
The first would be to have the edit page ping the server at an arbitrary interval, say a minute, and update the edit-session entry in the DB. Then the next time a script requests to edit, it checks for the most recent ping, and if the most recent was another arbitrary time ago, say five minutes, then we assume the previous user has quit and the file can be edited again. Of course, the problem with this method is that it is only an assumption that the previous user has quit. He could have a flaky Wi-Fi connection and simply drop out for ten minutes, all the while with the window still open.
Of course, to deal with this problem, we'd have to have the server respond to new requests from previously closed sessions with an error, telling the client side to point out to the user that his session has ended, and then deal with it by, say, saving his copy as another file on the server and asking the user to merge it manually, etc. It goes without saying that this is rather horrible for the end user.
So I've come around to thinking of another solution. It may also be possible to fire an unload event when the user's session ends, but I cannot be sure whether this will work reliably.
Does anybody has any other, more elegant solution to this problem?
If you expect the number of concurrent edits to the file to be minor, you could just store a version number for the file in the db, and when the user downloads the file into their browser they also get the version number. They are only allowed to upload their changes if the version number matches. First one to upload wins. When a conflict is detected you should send back the latest file and the user's changes so that the user can manually merge in the changes. The advantage is that this works even if it's the same user making two simultaneous edits. If this feature ends up being frequently used you could add client-side merging similar to what a diff tool uses (but you might need to keep the old revisions in that case).
You're probably better off going for a "merge" solution. Using this approach you only need to check for changes when the user posts their document to the server.
The basic approach would be:
1. User A gets the document for editing, document is at version 1
2. User B gets the document for editing, document is at version 1
3. User B posts some changes, including the base version number of 1
4. Server updates document, document now at version 2
5. User B posts some changes, including the base version number of 1
6. Server responds saying the document has changed since the user started editing, and sends the user the new document along with their own version - the user will then need to merge their changes into document version 2 and post back to the server. The user is essentially now editing document version 2
7. User A posts some changes, including the version number of 2
8. Server updates the document, which is now at version 3
You can still do a "ping" every minute, to get the current version number - you already know what version they're editing, so if a new version is available you can let them know and let them download the latest version to make their changes into.
The main benefit of this approach is that users never lock files, so you don't need any arbitrary "time-outs".
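The version check at the heart of the steps above can be sketched like this (an in-memory dict stands in for the real document store, and the names are hypothetical):

```python
class Conflict(Exception):
    """Raised when a client posts changes against a stale base version."""

documents = {"doc1": {"version": 1, "text": "hello"}}

def save(doc_id, base_version, new_text):
    doc = documents[doc_id]
    if base_version != doc["version"]:
        # The client edited a stale copy: reject, so the user can fetch
        # the current version, merge their changes, and retry.
        raise Conflict(doc)
    doc["text"] = new_text
    doc["version"] += 1
    return doc["version"]
```

User B's second post in step 5 is exactly the `base_version != doc["version"]` case: it arrives with base version 1 while the document is already at version 2.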
I would say you are on the right track. I would probably implement a hybrid solution:
Have a single table called "active_edits" or something like that, with columns for the document_id, the user, and the last_update_time. Let's say your ping time is 1 minute and your timeout is 5 minutes. A use case would look like this:
Bob opens a document. It checks the last_update_time. If it is over 5 minutes ago, update the table with Bob and the current time. If it is not, someone else is working on the document, so give an error message. Assuming it is not being edited, Bob works on the document for a while and the client pings an update time every minute.
I would say do include a "finish editing" button and an onunload handler. onunload, from what I understand, can be flaky, but you might as well add it. Both of these would send a single send-only post to the server saying that Bob is done. Even if Bob doesn't hit "finish editing" and onunload flakes out, the worst case is that another user has to wait 5 more minutes to edit. The advantage is that if these normally work (a fair assumption), the system works a bit better.
In the case you described where a Bob is on a bad wireless connection or takes a break: I would say this isn't a big deal. Your ping function should make sure that the document hasn't been taken over by someone else since Bob's last ping. If it has, just give Bob a message saying "someone else has started working on the document" and give them the option to reload.
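A sketch of that hybrid lock-with-timeout in Python, using an in-memory dict where the real app would use the "active_edits" table (function names are hypothetical):

```python
import time

active_edits = {}  # doc_id -> {"user": ..., "last_update_time": ...}
TIMEOUT = 5 * 60   # five minutes, per the answer above

def try_acquire(doc_id, user, now=None):
    """Open a document for editing; fail if someone else holds a live lock."""
    now = time.time() if now is None else now
    entry = active_edits.get(doc_id)
    if entry and entry["user"] != user and now - entry["last_update_time"] < TIMEOUT:
        return False  # someone else is editing and hasn't timed out yet
    active_edits[doc_id] = {"user": user, "last_update_time": now}
    return True

def ping(doc_id, user, now=None):
    """Called every minute by the editor; refresh the lock or report a takeover."""
    now = time.time() if now is None else now
    entry = active_edits.get(doc_id)
    if entry is None or entry["user"] != user:
        return False  # lock was lost (timed out and taken by someone else)
    entry["last_update_time"] = now
    return True
```

A False from ping() is the "someone else has started working on the document" case: the client would show that message and offer a reload.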
EDIT: Also, I would look into window.onbeforeunload, not onunload. I believe it executes earlier. I believe this is the function websites (Slashdot included) use to let you confirm that you actually want to leave the page. I think it works in the major browsers except Opera.
As with this SO question How do you manage concurrent access to forms?, I would not try to implement pessimistic locking. It is simply too difficult to get working reliably in a stateless environment. Instead, I would use optimistic locking. However, in this case I used something like a SHA hash of the file to determine if the file had changed since the user last read from it. For each request to change the file, you would run a SHA hash of the file bytes and compare it with the version you pulled when you first read the data. If it had changed, you reject the change and either force the user to do their edits again (pulling a fresh copy of the file contents) or you provide fancier conflict resolution.

Ajax /jQuery finding if user completed the download

Here is what I am trying to do: I am making a custom text file containing a test. This test is unique to the user, and I don't want my server to pile up all those text files.
Is there a way to use Ajax/JavaScript/jQuery to find out whether the user has finished the download and, if so, get a return value (1 if finished) so the response can be sent back to the PHP file and it can delete that file from the server (in near real time)?
I know there are plenty of ways to do this using PHP. Sort of like run clean up upon user log out and so on but I wanted to try using the method above since it can have many other applications that might be cool to use. I tried most search engines but they have nothing close to what I need.
Why do you need to store them in a file at all? Just use a PHP script or such that creates the test and outputs it directly to the user. That way there is nothing left to delete once the download is complete.
If it's important you may want the user to return to your server with the hash of the downloaded file. If the hash matches you know two things:
1. The user downloaded the file successfully
2. It's now ok to delete the file
Well, it is very simple: I don't know how to make a PHP page send itself to the user other than to have PHP create a text file and force-send that to the user. This creates the problem of having so many text files in a temporary folder.
Now, if the test requires, say, 15 chapters, each a text or HTML file, then the script neatly zips all those files and sends them to the user. Again I run into the same problem: once the user has finished downloading, I am trying to get some kind of script to delete the temporary zip or text file from the temporary directory in near real time.
If I could MD5 a downloaded file using JavaScript, I would welcome that as a hack solution to the problem, but how would the JavaScript gain access to the user's download folder? There are security issues there if I am not mistaken. Hope this helps round out the question a bit more.
I have a good solution for you here using the jQuery File Download plugin I created. It lets you get the behavior of an Ajax file download (not actually possible) complete with Success and Failure callbacks. In a nutshell, you can just use the Success callback (which indicates the file download was successful) to perform an Ajax post back to the server to delete the file. Take a look at the blog post for an example of how to use the Success callback option, or a demo which uses those callbacks in the context of showing modals to inform the user of what is going on.
