I'm attempting to query a MySQL database from a webpage. Within my R script, I have 4 different "query" functions along with multiple calculations, all dependent on an "N" variable, which display statistical graphs on my webpage. I'm using PHP (via shell_exec) to call R and pass in "N". I'm using the RMySQL and ggplot2 libraries in R.
Running my R script with just 1 basic query function (which includes dbConnect(), dbGetQuery(), and on.exit(dbDisconnect())), then using png(), plot(), and dev.off(), takes ~15 seconds to display the graph on my website.
With 2 functions and 2 plots, I haven't had the patience to wait it out and see if it works, since the load time is so long. The queries themselves are rather lengthy (they could probably be made simpler through looping), but I've tested that they work in MySQL, and I'm not sure how to avoid loop errors with SQL.
Could the long loading time be due to having dbConnect()/dbDisconnect() in each individual function? Should I connect only once per script (i.e. create a single "connect" function and call the other functions from there)?
Or is it the fact that I'm running multiple lengthy queries? If so, would it be better to split each "query function" into its own R script, shell_exec each one, and let the user select which graphs to display (i.e. checkboxes in HTML/PHP that trigger each desired script/graph)?
Through testing, I know that my logic is there, but I might be totally missing something. I would like to speed up the process so the website user doesn't have to stare at a loading screen forever and I can actually get some tangible results.
Sorry for the lengthy request, I appreciate any help you can give! If you'd like to see the webpage or any of my code to get a better idea, I can upload that and share.
Thanks!
EDIT: It should also be noted that I'm using a while loop (x < 100) for some of the calculations; I know loops in R are typically expensive, but the whole vectorization thing (I think that's the name?) is over my head.
Your requests are probably very demanding and cannot be executed synchronously. You could instead use a queue system: when a request is made, it is sent to a queue, and the results are produced asynchronously once the server is ready. In the meantime, you can redirect your user to another page and notify them when the results are available.
Here are some places to look for a queue implementation: PHP's native facilities, a PHP framework's queue component, or an open-source library on GitHub.
In short, we have a massive database and need to provide results as the user types in the search box. It is impossible for us to preload queries and attempt to match that way.
Currently we send a request to the server with the new query string every 2 characters or every 5 seconds. This is a bit of a mess, however, so I'm looking to see if there is a faster/better way of doing it.
Previous solutions I've seen require pre-fetching, which in our case is not possible and, considering the size of the results, too costly.
I would recommend using debounce for this. It makes the function wait a certain amount of time after being called before running; additional calls reset the timer. That way, the function will not run until the user has finished (or paused) typing.
This will prevent unnecessary load on your database while still providing a good user experience (as long as you choose a reasonable debounce time). Examples of how to debounce in React can be found here
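Framework aside, the core idea fits in a few lines of plain JavaScript. This is a minimal sketch; `fetchSuggestions` is a made-up stand-in for your real search request:

```javascript
// Minimal debounce: returns a wrapper that runs `fn` only after
// `wait` ms have passed with no further calls.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer); // a new call cancels the pending one
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Hypothetical usage: `fetchSuggestions` stands in for the real search call.
function fetchSuggestions(q) {
  console.log('querying for:', q);
}
const debouncedSearch = debounce(fetchSuggestions, 300);

// Three rapid keystrokes: only the last one triggers a request, 300 ms later.
debouncedSearch('f');
debouncedSearch('fo');
debouncedSearch('foo');
```

Wired to the search box's input event, this turns a burst of keystrokes into a single query.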
I have run into a bit of an issue in my Apex controller. I am looping through events between a start and end date (the maximum window is one month), but this can mean thousands of events. So I use a SOQL for loop, which chunks them into batches of 200 events, then loop through each event in the chunk to build custom event objects to return to my Visualforce page.
However, inside this second for loop I need more for loops to find invitees and so on, so the runtime is roughly O(n^3). The first two loops generally act as one (get a chunk of 200, loop through it, get the next chunk, etc.), so it is closer to O(n^2), but in a test org with 2777 events in one month I hit the CPU governor limit.
I want to move this into a batch job, as I think that is the only way to handle these large numbers of events (I cannot reduce my for loops any further).
I want the Lightning loading spinner to run until the batch job finishes. However, I am not sure how to communicate between the batch job's finish() method and the JavaScript in my Visualforce page. The plan is: a remote action call unhides the spinner, that method starts the batch job, and when the batch job ends, the spinner stops and the page refreshes with data.
But I don't quite know how to connect finish() to my JavaScript to detect when the batch job has finished.
So you have a VF page that you want to react to a batch job that could take any amount of time. If you want that page to be updated, I would recommend looking into the Streaming API, though I'm not sure even that would solve your situation. Batch jobs are asynchronous, obviously, so I don't think your requirements are realistic. The bigger question is what you are trying to solve for; if your requirement is to build a dynamic page directly off an async job, that isn't realistic.
You can check a Salesforce batch job's status by providing the job ID.
Below is a REST example:
curl https://instance.salesforce.com/services/async/39.0/job/jobId/batch/batchId -H "X-SFDC-Session: sessionId"
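If the page needs to react when the job finishes, one option is to poll that status check from JavaScript until it reports a terminal state. A rough sketch, where `fetchStatus` is an assumption: it would wrap the REST call above (or a remote action that queries AsyncApexJob) and report the batch state back:

```javascript
// Poll a status source until the batch job reports a terminal state.
// `fetchStatus(cb)` is injected: it should fetch the current job state
// (e.g. via the REST call above) and pass it to `cb`.
// `onDone` receives the final state, e.g. to hide a spinner and refresh.
function pollUntilDone(fetchStatus, intervalMs, onDone) {
  fetchStatus(function (state) {
    if (state === 'Completed' || state === 'Failed') {
      onDone(state);
    } else {
      // Not finished yet: check again after a pause.
      setTimeout(function () {
        pollUntilDone(fetchStatus, intervalMs, onDone);
      }, intervalMs);
    }
  });
}
```

With a few seconds between polls, this keeps the spinner honest without hammering the server.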
Currently, I'm using setTimeout() to pause a for loop over a huge list so that I can add some styling to the page. For instance,
Eg: http://imdbnator.com/process?id=wtf&redirect=false
What I use setTimeout() for:
I use setTimeout() to add images, text, and a CSS progress bar (see "Why doesn't Progress Bar dynamically change unlike Text?").
Clearly, as you can see, it is quite painful for a user to just browse the page and hover over a few images; it gets extremely laggy. Is there any workaround for this?
My FOR Loop:
Each iteration of the for loop makes an AJAX request in the background to a PHP API. It definitely costs me some efficiency there, but how do other websites pull it off with such elegance? I've seen websites show a nice loading image, with no user interference, while making API requests. While I try to do something like that, I have to set a timeout every time.
Is it that they use better server-client interaction technologies, like the Node.js I've heard of?
Also, I've thought of a few alternatives but run into other complications. I would greatly appreciate your help on each of these possible alternatives.
Method 1:
Instead of making an AJAX call to my PHP API through jQuery, I could do everything in a server-side script. But then the problem is that I cannot build a good client-side page (like my current one) that updates the progress bar and adds images as each item in the list is processed. Or is this possible?
Method 2: (Edited)
Like one of the useful answers below, I think the biggest problem is the server API and client interaction. WebSockets, as suggested there, look promising to me. Will they necessarily be a better fix than setTimeout? Is there a significant time difference if, let's say, I replace my current 1000 AJAX requests with a WebSocket?
Also, I would appreciate hearing about anything other than WebSockets that is better than an AJAX call.
How do professional websites achieve such fluid server and client side interactions?
Edit 1: Please explain how professional websites (such as http://www.cleartrip.com when you are requesting for flight details) provide a smooth client side while processing the server side.
Edit 2: As @Syd suggested, that is something I'm looking for. I think there is a lot of delay in my current client-server interaction, and WebSockets seem to be a fix for that. What are the other/best ways of improving server-client interaction apart from standard AJAX?
Your first link doesn't work for me but I'll try to explain a couple of things that might help you if I understand your overall problem.
First of all, it is bad to make synchronous calls with large amounts of data that require processing in your main UI thread, because the user experience might suffer a lot. For reference you might want to take a look at "Is it feasible to do an AJAX request from a Web Worker?"
If I understand correctly you want to load some data on demand based on an event.
Here you might want to sit back and think about what the best event for your need is; making an AJAX request every once in a while is quite different, especially when you have a lot of traffic. Also, you might want to check that your previous request has completed before you initialize the next one (though this may not be needed in some cases). Have a look at async.js if you want to create chained asynchronous code without facing JavaScript's "pyramid of doom" and messy code.
Moreover, you might want to "validate and halt" the event before making the actual request. For example, let's assume a user triggers a "mouseenter": you should not just fire an AJAX call. Hold your breath, use setTimeout, and check that the user didn't fire another "mouseenter" for the next 250 ms; this will allow your server to breathe. The same goes for implementations that load content on scroll: you should not fire an event if the user scrolls like a maniac. So validate the events.
Also, loops and iterations: we all know that if a loop is too long and does heavy lifting, you might experience unwanted results. To overcome this, look into timed loops (take a look at the snippet below): loops that break after x amount of time and continue after a while. Here are some references that helped me with a three.js project: "optimizing-three-dot-js-performance-simulating-tens-of-thousands-of-independent-moving-objects" and "Timed array processing in JavaScript".
//Copyright 2009 Nicholas C. Zakas. All rights reserved.
//MIT Licensed
function timedChunk(items, process, context, callback) {
    var todo = items.concat(); // create a clone of the original array

    setTimeout(function runChunk() { // named so it can re-schedule itself (arguments.callee is forbidden in strict mode)
        var start = +new Date();

        // Process items for at most 50 ms, then yield back to the browser.
        do {
            process.call(context, todo.shift());
        } while (todo.length > 0 && (+new Date() - start < 50));

        if (todo.length > 0) {
            setTimeout(runChunk, 25); // more to do: resume after a 25 ms breather
        } else {
            callback(items); // done: hand the full list back
        }
    }, 25);
}
cleartrip.com probably uses some of these techniques. From what I've seen, it fetches a chunk of data when you visit the page and then fetches further chunks as you scroll. The trick is to fire the request a little before the user reaches the bottom of the page, to provide a smooth experience. The left-side filters only filter data already in the browser; no more requests are made. So you fetch and keep something like a cache (though in other scenarios caching might be unwanted, e.g. for live data feeds).
Finally If you are interested for further reading and smaller overhead in data transactions you might want to take a look into "WebSockets".
You must use async AJAX calls. Right now, user interaction is blocked while the HTTP AJAX request is being made.
Q: "how professional websites (such as cleartrip.com) provide a smooth client side while processing the server side."
A: By using async AJAX calls
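For illustration, here is a minimal sketch with the fetch API. `/api/flights` is a made-up endpoint, and the fetch implementation is injectable purely so the function is easy to test:

```javascript
// Non-blocking request: the call returns a promise immediately, the UI
// thread stays free, and rendering happens whenever the response arrives.
// `/api/flights` is a hypothetical endpoint standing in for the real API.
function loadFlights(query, fetchImpl) {
  const doFetch = fetchImpl || fetch; // default to the browser's fetch
  return doFetch('/api/flights?q=' + encodeURIComponent(query))
    .then((res) => res.json());
}

// Usage: nothing blocks while the server is working.
// loadFlights('DEL-BOM').then((flights) => renderResults(flights));
```

The user can keep scrolling and clicking while the request is in flight; only the `.then` callback touches the page once data arrives.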
I'm using the setTimeout() function in JavaScript to show a "loading" popup while I parse some XML data. I found that at small delay values (below 10 ms) the popup doesn't have time to appear before the browser freezes for a moment to do the actual work.
At 50 ms it has plenty of time, but I don't know how well this will translate to other systems. Is there a rule of thumb for the amount of delay necessary to ensure a visual update without adding unnecessary delay?
Obviously, it'll depend on the machine on which the code is running etc., but I just wanted to know if there was anything out there that would give a little more insight than my guesswork.
The basic code structure is:
showLoadPopup();
var t = setTimeout(function()
{
parseXML(); // real work
hideLoadPopup();
}, delayTime);
Thanks!
UPDATE:
Turns out that parsing XML is not something Web Workers can usually do, since they don't have access to the DOM or the document. To accomplish this, I found a different Stack Overflow article about parsing XML inside a Web Worker. Check out the page here.
By serializing my XML object into a string, I can pass it to the Web Worker through a message post; then, using the JavaScript-only XML parser from the aforementioned link, I turn it back into an XML object inside the Web Worker, do the parsing, and pass the desired text back as a string, without making the browser hang at all.
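A sketch of that split. The worker file name and the `extractTitles` helper are made up; the helper is a crude regex stand-in for the JavaScript-only parser mentioned above:

```javascript
// Crude stand-in for a pure-JavaScript XML parser: pulls the text out of
// every <title> element in an XML string, with no DOM access required.
function extractTitles(xmlString) {
  const titles = [];
  const re = /<title>([^<]*)<\/title>/g;
  let match;
  while ((match = re.exec(xmlString)) !== null) {
    titles.push(match[1]);
  }
  return titles;
}

// Main-thread side (browser): serialize the document and post it as a
// string, since the worker cannot touch the DOM.
// 'parse-worker.js' is a hypothetical file name.
//   const worker = new Worker('parse-worker.js');
//   worker.postMessage(new XMLSerializer().serializeToString(xmlDoc));
//   worker.onmessage = (e) => { hideLoadPopup(); useResults(e.data); };
//
// Worker side (parse-worker.js): parse off the main thread, post back.
//   onmessage = (e) => postMessage(extractTitles(e.data));
```

The browser stays responsive because the heavy string work happens entirely in the worker thread.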
Ideally you would never parse something on the client side that actually causes the browser to hang. I would look into moving this to an AJAX request that pulls part of the parsed XML (child nodes as JSON), or look at using Web Workers or another client-side asynchronous option.
There appears to be no rule of thumb for this question, simply because setTimeout() was not the best solution for the problem. Using alternative methods to do the real meat of the work was the real solution, not using a setTimeout() call to allow a visual update to the page.
Given options were:
HTML 5's new Web Worker option (alternative information)
Using an AJAX request
Thanks for the advice, all.
I have numerous AJAX calls on page load, and those calls bring in the resources needed to start the application (JSON arrays and functions).
The problem is that those resources are loaded not strictly one after another but asynchronously, and before being used they are formatted by success functions.
To start the application I have to check that all resources have loaded and been formatted properly. The best way I've found is to run a loop with a timeout, checking all of them until all are ready. This doesn't seem right to me, because it takes a while to start the application, so I thought there might be another approach.
I thought about a setInterval function with a small timeout and a bunch of nested ifs; at the end of those ifs I could cancel the interval and start the application.
Maybe someone is familiar with the right approach to such things and could share some code?
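Roughly what I have in mind, as a sketch. The resource names are made up, and the two setTimeout calls are stand-ins for the real ajax success handlers:

```javascript
// Poll until every resource flag is set, then stop the interval and start.
var resourcesReady = { data: false, functions: false };
var started = false;

function startApplication() {
  started = true;
  console.log('all resources loaded, starting application');
}

// Stand-ins for the real ajax success handlers, which would set their
// resource's flag after formatting it.
setTimeout(function () { resourcesReady.data = true; }, 10);
setTimeout(function () { resourcesReady.functions = true; }, 30);

var readyCheck = setInterval(function () {
  if (resourcesReady.data && resourcesReady.functions) {
    clearInterval(readyCheck); // stop polling once everything is ready
    startApplication();
  }
}, 25);
```

This works, but it still burns a polling interval; is there a cleaner way to be told when the last resource finishes?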