I am writing a web-based GUI for a large Python script used to plumb a few activities on a server. While the application itself is very lightweight, I am using Django to create the web GUI because I can reuse many already built-in features, such as user management, among others.
I have a function that performs a number of logical steps: checks, copying of files, and so on. In the vanilla program we used a log file to capture all statuses. The customer wants a page that shows each step, with status changes, in real time.
What is the right strategy, and what steps should I take? A progress bar is not what I need; it's more of a progress report in real time. Can you advise on the steps to take, or point me to a tutorial?
You'll need to poll the backend from the browser periodically to see if there are any updates. This does assume that the long-running script is running asynchronously. The periodic update is akin to a log file tail, once in a while you check if new information has been added to the log.
There are several existing jQuery plugins that'll help you build this; PeriodicalUpdater for jQuery is a nice one, in that it'll adjust the poll interval if the server response doesn't change in a while.
Basically, with such a plugin, you'll need a Django view that returns the current status, the log file output of your process so to speak, and have PeriodicalUpdater poll that view. In the callback function for PeriodicalUpdater you'll need to add a check that the process is complete, of course; perhaps your server view could end with an easy to detect "Process complete" line at the end of the "log", or return a response that only consists of the final status.
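As a rough sketch of what that view could boil down to (framework plumbing omitted; the `tail_log` name and the "Process complete" marker are assumptions taken from the description above), the server side can remember how much of the log the client has already seen and ship only the new part:

```python
def tail_log(path, offset):
    """Return the log text appended since `offset`, the new offset for the
    next poll, and whether the end-of-run marker has appeared."""
    with open(path, "r") as f:
        f.seek(offset)          # skip what the client already has
        new_text = f.read()
        new_offset = f.tell()
    done = "Process complete" in new_text
    return new_text, new_offset, done
```

A Django view would JSON-encode this tuple and take `offset` as a query parameter, so PeriodicalUpdater's callback can append `new_text` to the page and stop polling once `done` is true.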
Related
I've created an ASPX page with options for configuring the final operation. That operation creates backups of a number of folders, which can take a while.
Upon clicking the start button, it currently starts a timer that reads a log file (updated by the backup processes) and periodically outputs its contents to a multi-line text box on the web page. However, as the backup process can take a while, this text box doesn't get updated until the backup has completed.
I just want to know the best way to approach this. I've been considering the following:
A web service: as it's on the same server, is this worth it? Also, will it run the longer processes, allowing the timer to continue working?
JavaScript to read the log file and keep the text box updated?
This is all on one server; it's just accessed/managed via this web page.
Moving the long-running operation outside the web application is good practice; it frees up threads for processing web requests. The web service is worth it: the web method that triggers the operation can run it on a new thread and return immediately (fire and forget). The web application can then query a DB table for the status (progress, result, error message) of the job.
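The fire-and-forget pattern described above can be sketched like this (Python for brevity; an in-memory dict stands in for the DB status table, and all the names are made up):

```python
import threading
import time

# Stand-in for the DB status table; a real web service would use an
# actual table keyed by job id.
job_status = {}

def run_backup(job_id):
    """The long-running operation: updates its status row as it goes."""
    job_status[job_id] = {"state": "running", "progress": 0}
    for step in range(1, 4):        # pretend each step does real work
        time.sleep(0.01)
        job_status[job_id]["progress"] = step
    job_status[job_id] = {"state": "done", "progress": 3}

def start_job(job_id):
    """Fire and forget: start the work on a new thread and return at once,
    the way the web method would."""
    t = threading.Thread(target=run_backup, args=(job_id,), daemon=True)
    t.start()
    return t
```

The page's timer then reads `job_status` (the status table) instead of the log file, so it keeps updating while the backup runs.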
Scenario:
I have a PHP server in a Vagrant VirtualBox VM and a runnable jar that takes up to a few minutes to run.
Ideally, when the user goes to a certain route in my application, the server will start running said jar in the background so the user won't have to wait looking at a blank page, and, when the jar finishes executing, the page will automatically refresh (or the data in it will), giving the user the information it got from the output of the jar.
Problem:
So far, I managed to run the jar in the background using exec() function. I can even get the process id and check if it is still running or not.
The problem I have is: how can the PHP (so to speak) be notified when the jar stops running, without blocking normal execution?
Is there even any way to do this? I've searched everywhere, from Laravel queues to forking the running process, but nothing (as I understood it) suits my needs.
EDIT:
I think I may have found a solution, but I would like to ask the community anyway. Say I use a JavaScript loop (with setInterval) after the page loads, and, inside the loop, I make an async AJAX request to the server asking if the process has ended and, if so, bring back the output of the jar and update the web page with the new data.
This would make for a very "pull" solution instead of the preferable "push", but I don't even know if the push approach is possible.
I'm still learning to code for web apps, so this might be a silly question, and for that I apologize. Please feel free to set me on the right path if this solution is completely off the rails.
Thanks in advance,
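The setInterval pull described in the edit above is entirely workable. On the server side, each AJAX request needs only a non-blocking check on the child process; a sketch in Python (standing in for the PHP exec() side, with a dummy child process in place of the jar):

```python
import subprocess
import sys

# Stand-in for `java -jar ...`: a child that sleeps briefly, then prints
# some output, just like the real jar would.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.1); print('jar output')"],
    stdout=subprocess.PIPE,
    text=True,
)

def poll_once(proc):
    """One iteration of the setInterval/AJAX pull: a non-blocking check.
    Returns (finished, output); output stays None until the child exits."""
    if proc.poll() is None:     # poll() returns None while still running
        return False, None
    return True, proc.stdout.read()
```

Each AJAX hit maps to one `poll_once` call; once it reports finished, the response carries the jar's output and the page updates itself.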
You should take a look at JS libraries already made for such a purpose, e.g. Node.js.
It keeps a pipe open between the user's browser and your web server, and you can send a notice from the server to a specific user or a group of users.
Really love it myself.
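If you'd rather not add a Node.js service, the same push idea can also be had with Server-Sent Events (a different technique from the one this answer names): the server keeps one response open and writes messages in a simple text format that the browser's EventSource API understands. A sketch of just the wire format, with the streaming endpoint itself omitted:

```python
def format_sse(data, event=None):
    """Format one Server-Sent Events message. An endpoint that streams
    these (with Content-Type: text/event-stream) lets the server push
    notices to the browser over a single long-lived response."""
    msg = ""
    if event is not None:
        msg += f"event: {event}\n"
    for line in data.splitlines():   # multi-line payloads need one data: field per line
        msg += f"data: {line}\n"
    return msg + "\n"                # blank line terminates the message
```

On the client, `new EventSource(url)` plus an event listener replaces the setInterval loop entirely.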
I have a web application in PHP. One component of the application submits data to a backend pipeline (also written in PHP). The pipeline is an external script, which my application calls using PHP's exec() function. It is a multistep pipeline that executes several programs, each taking input from the program run before it. I want to display a message on the application page that submits to the pipeline as each step in the pipeline completes, such as:
Step 1 completed... Now doing step 2.
Step 2 completed... Now doing step 3.
and so on and so forth.
I am open to using JavaScript/AJAX to do this, and also any other language compatible with PHP.
Any help is greatly appreciated.
Well, working on the assumption that you have some kind of database backing your PHP front end and pipeline, you don't specifically need something compatible with PHP, but rather something that can interface with your database.
Without having any further details on what you've set up/tried, etc. I can only offer an overview of the workflow I would use in this situation.
Front-end script is submitted and pushes the request into a processing queue.
User is shown a "processing, please wait" type page. This page makes a long-polling AJAX request or a Websocket connection to the front-end site to a script which polls the database for updates on the pipeline processing.
The pipeline scripts chain off each other and push into the database the details of their completion which are read off by the Websocket/long-polling front-end script and returned to the user via Javascript to display on the page.
Using the database as a go-between would be the easiest and most flexible approach. You could also use other languages if you're more comfortable with them so long as they're compatible with your database used on the PHP/pipeline side.
EDIT
While I don't have any links to a tutorial on exactly what you want to do, there are some basics behind it that you can use to piece together your solution:
I would start by getting your pipeline (processing) script to run in the background at intervals using cron. Alternatively, you could daemonize the pipeline using something like this PHP-Daemon framework so that it runs in the background, perhaps with a cron task to check whether it's running and restart it if needed.
From there, you can build a table in your database that contains status updates on the processing tasks at hand, and build a PHP script that checks the status of a given task and outputs JSON data about it. This JSON data can easily be read using AJAX requests, the simplest of which would probably be jQuery's .ajax() method. It could be called at intervals on the client-side "status" page using a setTimeout call at the end of each "loop" to poll for status changes every X seconds. This is the easiest implementation of what you're after, although not the best-performing or most optimal way to do it.
So the general workflow here would change to this:
Front-end script is submitted and pushes the request into the processing queue with status set to pending.
User is shown the processing page which pings status.php?task=12345 every X seconds.
Back-end daemon/cron script picks up the request and begins processing, pushing status updates into the database at interval.
status.php script begins to return different status information in the JSON code and it is displayed to the user.
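A sketch of what such a status endpoint boils down to, in Python with an in-memory SQLite table standing in for the real status table (the table layout and field names here are assumptions, not anything from the question):

```python
import json
import sqlite3

# Stand-in for the status table the pipeline scripts push updates into.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, status TEXT, detail TEXT)")
conn.execute(
    "INSERT INTO tasks VALUES (12345, 'processing', 'Step 2 completed... Now doing step 3.')"
)

def status_json(task_id):
    """What a status.php?task=... style endpoint would return for one task."""
    row = conn.execute(
        "SELECT status, detail FROM tasks WHERE id = ?", (task_id,)
    ).fetchone()
    if row is None:
        return json.dumps({"error": "unknown task"})
    return json.dumps({"task": task_id, "status": row[0], "detail": row[1]})
```

The client's polling loop just parses this JSON and writes `detail` into the page until `status` reports completion.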
I have a CQRS application with eventual consistency between the event store and the read model. In it I have a list of items and under the list a "Create new" button. When a user successfully creates a new item he is directed back to the list but since the read model has not been updated yet (eventual consistency) the item is missing in the list.
I want to fake the entry in the list until the read model has been updated.
How do I best do that and how do I remove it when the new item is present in the actual list? I expect delays of about 60 seconds for the read model to catch up.
I do realize that there are simpler ways to achieve this behavior without CQRS but the rest of the application really benefits from CQRS.
If it matters, the application is a C# MVC 4 application. I've been thinking of solutions involving HTML5 Web Storage, but want to know what the best practice is for solving this kind of problem.
In this situation, you can present the result in the UI with total confidence. There is no difference between presenting this information directly and reading it from the read model.
Your domain objects are up to date with the UI, and that's what really matters here. Moreover, if you correctly validate your AR (aggregate root) state in every operation and keep track of concurrency with the AR's version, then you're safe and your model is protected against invalid operations.
In the end, what is the probability of your UI going out of sync? This can happen if there are many users modifying the information you're displaying at the same time. It can be avoided by creating a task-based UI and following the rule of one command/operation on the AR per request.
The read model can be out of sync until the denormalizers do their job.
On the other hand, if the command will generate a conversation (a long-running operation) between a saga and ARs, then you cannot do this and must warn the user about it.
It doesn't matter that it's an ASP.NET MVC app. The only solution I see, besides just telling the user to wait a bit, is to have another, but this time synchronous, event handler that generates the same model (of course, the actual model generation should be encapsulated in a service) and sends it to a memory cache.
With everything in memory it's very fast, and being synchronous means it's automatically executed before the request ends. I'm assuming the command is executed synchronously too.
Then, in your query repository, you also consider results from the cache, removing a cached entry once that result is already returned by the db.
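That merge in the query repository can be as simple as dropping cached entries the db already returns (a language-neutral sketch in Python; using `id` as the dedup key is an assumption):

```python
def merged_results(db_rows, cache_rows):
    """Combine read-model (db) results with the synchronously cached writes.
    A cached item is dropped as soon as the db returns it, so the user sees
    new items immediately without ever seeing duplicates."""
    seen = {row["id"] for row in db_rows}
    still_pending = [row for row in cache_rows if row["id"] not in seen]
    return db_rows + still_pending
```

Once the denormalizers catch up, every cached row is shadowed by its db copy and the cache entries can be evicted.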
Personally, for things that I know I want to be available to the user and where the read model generation is trivial, I would use only synchronous event handlers. The user doesn't mind waiting a few seconds when submitting something and if updating a read model takes a few seconds, you know you have a backend problem.
As I see it, eventual consistency is applicable only if the application environment has multiple front-end servers hosting the application, each with its own copy of the read model. All servers use the same event store.
When something is written to the event store, the read-model copy used to serve the result back to that user must be updated in sync with the event store. The rest of the servers, and the read models they manage, can be updated with eventual consistency.
This way the result for the user (the list of items) can be read from the local read-model copy, because it has already been updated in sync. There is no need for special, complex fake updates/rollbacks.
The only case where the user can see an incomplete list is if the user hits F5 to refresh the list after the change and load balancing directs the request to a front-end server whose read model is not yet updated (the 60-second delay). But this can be avoided by making sure load balancing does not switch the user's server in the middle of a session.
So, if the application has only one front-end server, eventual consistency is not very useful, or it does not give any benefits, without some special fake updates/rollbacks in the read model...
All my research so far suggests this can't be done, but I'm hoping someone here has some cunning ideas.
I have a form on a website which allows users to bulk upload lots of URLs to add to a list on the server. There's quite a lot of server-side processing to do on each URL, so to avoid timeouts and to display progress, I've implemented the upload using jQuery to submit the URLs one at a time using ajax.
This is all working nicely. However, part of the processing on each URL is deduplicating it against the complete list. The ajax call returns a status indicating either a successful upload or a rejection due to duplication. As the upload progresses, I tell the user how many URLs have been rejected as duplicates (along with overall progress and ETA).
The problem now is how to give the user a complete list of the failed duplicate URLs. I've kept them in an array in my jQuery, and would like the user to be able to click on a link on the form to download a text file containing those URLs. Is this possible just using client-side processing?
The server-side processing basically handles a single keyword at a time. I'd rather not store the duplicates in a database table with some kind of session key that gets sent with every AJAX call and is then used at the end to generate the text file server-side (and gets cleaned up some time later). I can see how to do this, but it seems very clunky and a bit 20th century.
I haven't used it myself yet, but Downloadify was built for exactly this purpose I think.
Downloadify is a tiny JavaScript + Flash library that enables the generation and saving of files on the fly, in the browser, without server interaction.
It was created by Doug Neiner who is also pretty active on Stack Overflow.
It needs Flash 10 to work.