I'm developing a Facebook app that searches for Facebook events near your position. The only way to do so is to search for all the place IDs in your zone and then, for each of those, check whether there is an event today. The problem I have is that the computation takes around 1 to 1.5 minutes, which is quite long. This is the code I use (it might not be the best, I know):
foreach (var item in allPlacesIds)
{
    RunOnUiThread (() => loading.Text = string.Format ("Loading {0} possible events out of {1}", count, allPlacesIds.Count));

    string query = string.Format ("{0}?&fields=id,name,events.fields(id,name,description,start_time,attending_count,declined_count,maybe_count,noreply_count).since({1}).until({2})", item, dateNow, dateTomorrow);
    JsonObject result = (JsonObject)fb.Get (query, null);

    try
    {
        JsonArray allEvents = (JsonArray)((JsonObject)result ["events"])["data"];
        foreach (var events in allEvents)
        {
            Events theEvent = new Events(((JsonObject)events) ["id"].ToString(),
                ((JsonObject)events) ["name"].ToString(),
                ((JsonObject)events) ["description"].ToString(),
                ((JsonObject)events) ["start_time"].ToString(),
                int.Parse(((JsonObject)events) ["attending_count"].ToString()),
                int.Parse(((JsonObject)events) ["declined_count"].ToString()),
                int.Parse(((JsonObject)events) ["maybe_count"].ToString()),
                int.Parse(((JsonObject)events) ["noreply_count"].ToString()));
            todaysEvents.Add(theEvent);
        }
    }
    catch (Exception ex)
    {
        // result["events"] is null when a place has no events, so just skip it
    }
    count++;
}
Where the try block starts I used to have an if check, but that made it take even longer, so I replaced it with a try/catch, since the result comes back as null when there are no events.
I know this isn't exactly a technical issue, but I thought you might know a faster and better implementation. My only other option is to create and host a web service and use that just to query the data. The problem with that is that I'd need to invest a lot of money in a server and a static IP, and then create a scheduled job to update the data daily.
Each API call takes some time; the only way to make this faster is to use Batch Requests. Here's the documentation about those: https://developers.facebook.com/docs/graph-api/making-multiple-requests
Keep in mind that a batch does not count as one API call; it still counts the same number of calls towards your limit, so be careful with API limits.
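For illustration, a rough sketch of a batch call using the JavaScript SDK's FB.api; the same idea carries over to the C# SDK, since a batch is just a single POST carrying a batch array of relative requests. The slice to 50 items and the reuse of allPlacesIds/dateNow/dateTomorrow below are placeholders standing in for the question's variables.

var batch = allPlacesIds.slice(0, 50).map(function (placeId) {
    // one GET per place, same fields/since/until as the original per-place query
    return {
        method: 'GET',
        relative_url: placeId + '?fields=id,name,events.fields(id,name,description,start_time,attending_count,declined_count,maybe_count,noreply_count).since(' + dateNow + ').until(' + dateTomorrow + ')'
    };
});

FB.api('/', 'POST', { batch: batch }, function (responses) {
    responses.forEach(function (res) {
        if (res && res.code === 200) {
            var place = JSON.parse(res.body);   // each batched response body is a JSON string
            // collect place.events.data into todaysEvents as before
        }
    });
});

A batch is limited to 50 requests, so a long list of places still needs several round trips, but far fewer than one call per place.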
I have code on a web worker, and because I can't post an object with methods (functions) to it, I don't know how to stop this code from blocking the UI:
if (data != 'null') {
    obj['backupData'] = obj.tbl.data().toArray();   // snapshot of the rows currently in the table
    obj['backupAllData'] = data[0];                 // full data set returned by the worker
}
obj.tbl.clear();
obj.tbl.rows.add(obj['backupAllData']);             // load the full data set for the export
var ext = config.extension.substring(1);
$.fn.dataTable.ext.buttons[ext + 'Html5'].action(e, dt, button, config);   // run the built-in export
obj.tbl.clear();
obj.tbl.rows.add(obj['backupData']);                // restore the rows the user was looking at
This code exports records from an HTML table. data is an array returned from a web worker and can sometimes have 50k or more objects.
Since obj and all the methods it contains are not transferable to the web worker, the UI blocks when the data length reaches 30k, 40k, 50k or more.
What is the best way to do this?
Thanks in advance.
You could try wrapping the heavy work in an asynchronous callback such as a timeout, which lets the engine queue the whole block of logic and process it as soon as it has time:
setTimeout(function () {
    if (data != 'null') {
        obj['backupData'] = obj.tbl.data().toArray();
        obj['backupAllData'] = data[0];
    }
    //heavy stuff
}, 0);
Or, if the work is extremely long, you can try to figure out a strategy to split your code into chunks of operations and execute each chunk in a separate asynchronous callback (timeout), as in the sketch below.
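A minimal sketch of that chunking strategy; the chunk size of 500 and the processItem/done callbacks are illustrative placeholders, not part of the original code.

function processInChunks(items, processItem, chunkSize, done) {
    var index = 0;
    function nextChunk() {
        // handle one slice, then yield back to the event loop so the UI can repaint
        var end = Math.min(index + chunkSize, items.length);
        for (; index < end; index++) {
            processItem(items[index]);
        }
        if (index < items.length) {
            setTimeout(nextChunk, 0);
        } else if (done) {
            done();
        }
    }
    nextChunk();
}

// hypothetical usage with the worker's data and the DataTable from the question
processInChunks(data[0], function (row) {
    obj.tbl.rows.add([row]);
}, 500, function () {
    obj.tbl.draw();
});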
Best way to iterate over an array without blocking the UI
Update:
Sadly, ImmutableJS doesn't currently work across web workers. You should be able to transfer the ArrayBuffer so you don't need to parse it back into an array (a sketch follows below). Also read this article. If your workload is that heavy, it would be best to actually send back one item at a time from the worker.
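A rough sketch of what transferring the buffer could look like; it assumes the worker's result can be packed into a typed array (Float64Array here), which is an assumption rather than something from the original code.

// inside the worker: hand the raw bytes over without copying them,
// by listing the ArrayBuffer as a transferable object
var buffer = Float64Array.from(results).buffer;   // results is a hypothetical numeric array
postMessage(buffer, [buffer]);

// on the page: wrap the received buffer in a typed-array view, no re-parsing needed
worker.onmessage = function (e) {
    var values = new Float64Array(e.data);
    console.log(values.length + ' values received');
};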
Previously:
The code is converting all the data into an array, which is immediately costly. Try returning an immutable data structure from the web worker if possible. This guarantees that the data doesn't change when the references change, and you can continue iterating over it slowly in batches.
The next thing you can do is use requestIdleCallback to schedule small batches of items to be processed, as in the sketch below.
This way you should be able to make the UI breathe a bit.
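A minimal sketch of that batching with requestIdleCallback; the processItem and done callbacks are illustrative placeholders. Note that requestIdleCallback is not available in every browser, so a setTimeout fallback may be needed.

function processWhenIdle(items, processItem, done) {
    var index = 0;
    function work(deadline) {
        // keep going only while the browser says there is idle time left
        while (index < items.length && deadline.timeRemaining() > 0) {
            processItem(items[index++]);
        }
        if (index < items.length) {
            requestIdleCallback(work);   // reschedule the rest for the next idle period
        } else if (done) {
            done();
        }
    }
    requestIdleCallback(work);
}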
I'm supposed to parse a very large JSON array in JavaScript. It looks like:
mydata = [
    {'a':5, 'b':7, ... },
    {'a':2, 'b':3, ... },
    .
    .
    .
]
Now the thing is, if I pass this entire object to my parsing function parseJSON(), then of course it works, but it blocks the tab's process for 30-40 seconds (in case of an array with 160000 objects).
During this entire process of requesting the JSON from a server and parsing it, I'm displaying a 'loading' gif to the user. Of course, after I call the parse function, the gif freezes too, leading to a bad user experience. I guess there's no way to avoid the processing time entirely, but is there a way to at least keep the loading gif from freezing?
Something like calling parseJSON() on chunks of my JSON every few milliseconds? I'm unable to implement that though, being a noob in JavaScript.
Thanks a lot, I'd really appreciate if you could help me out here.
You might want to check this link. It's about multithreading.
Basically:
var url = 'http://bigcontentprovider.com/hugejsonfile';

// worker source built as a string: it fetches the JSON via a JSONP-style
// callback ("send"), posts the result back to the page, then closes itself
var f = '(function() {' +
        '  send = function(e) {' +
        '    postMessage(e);' +
        '    self.close();' +
        '  };' +
        '  importScripts("' + url + '?format=json&callback=send");' +
        '})();';

var _blob = new Blob([f], { type: 'text/javascript' });
_worker = new Worker(window.URL.createObjectURL(_blob));
_worker.onmessage = function(e) {
    // Do what you want with your JSON
};
Haven't tried it myself to be honest...
EDIT about portability: Sebastien D. posted a comment with a link to MDN; I have added a reference to the compatibility section.
I have never encountered a complete page lockdown of 30-40 seconds; I'm almost impressed! Restructuring your data to be much smaller, or splitting it into many files on the server side, is the real answer. Do you actually need every little byte of the data?
Alternatively, if you can't change the file, @Cyrill_DD's answer of a worker thread will be able to parse the data for you and send it to your primary JS. This is not a perfect fix, as you would guess, though. Passing data between the two threads requires the information to be serialised and reinterpreted, so you could see a significant slowdown when the data is passed between the threads, and be back to square one if you try to pass it all across at once. Building a query system into your worker thread for requesting chunks of the data when you need them, and using the message callback, will prevent the slowdown from parsing on the main thread and still give you complete access to the data without loading it all into your main context. A sketch of that idea follows.
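A minimal sketch of such a query system; the message names ('load', 'getChunk', 'chunk'), the chunk size, and the renderRows/rawJsonString names are all hypothetical, not taken from the question.

// worker.js: parse the JSON once, then serve slices on demand
var parsed = [];
onmessage = function (e) {
    var msg = e.data;
    if (msg.type === 'load') {
        parsed = JSON.parse(msg.json);          // the heavy parse stays off the main thread
        postMessage({ type: 'ready', total: parsed.length });
    } else if (msg.type === 'getChunk') {
        postMessage({
            type: 'chunk',
            offset: msg.offset,
            items: parsed.slice(msg.offset, msg.offset + msg.size)
        });
    }
};

// main.js: ask for small chunks only when they are actually needed
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
    if (e.data.type === 'ready') {
        worker.postMessage({ type: 'getChunk', offset: 0, size: 500 });
    } else if (e.data.type === 'chunk') {
        renderRows(e.data.items);               // hypothetical rendering function
        // request the next chunk later, e.g. when the user scrolls or pages
    }
};
worker.postMessage({ type: 'load', json: rawJsonString });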
I should add that worker threads are relatively new; support in the main browsers is good, but mobile support is terrible... just a heads up!
I'm currently building a website which searches an external database and brings up records that match the given search string. The search is live, so results are brought up as the user types.
The first (and current) approach I took is that the page queries the MySQL server and retrieves content via AJAX with EVERY letter the user types in the search box.
Now I am starting to look at JSON objects (I only very recently started building websites), and was wondering whether it would be a good idea to load the entire database into a JSON object at the beginning and then search through that.
Is this a good idea? Would it be faster? Thanks in advance.
It totally depends on the size of the data and the complexity of the query. If you can reasonably send the data to the client in advance and then search it locally, then sure, that's useful because it's all local and you don't have the latency of querying the server. But if you have a large amount of data, or the query is complex, it may well make more sense to do the query on the server.
There's no one-size-fits-all solution, it's data-dependent.
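For the small-data case, a minimal sketch of searching a preloaded array on the client; the records variable and the name field are illustrative, not from the question.

// records is assumed to be the JSON array fetched once up front,
// e.g. [{ name: 'James', city: 'London' }, ...]
function searchLocal(records, term) {
    var needle = term.toLowerCase();
    return records.filter(function (record) {
        return record.name.toLowerCase().indexOf(needle) !== -1;
    });
}

// re-filter on every (debounced) keystroke without touching the server
var matches = searchLocal(records, 'jam');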
...and retrieves content via AJAX, with EVERY letter the user types in the search box.
That's usually overkill. Normally, you want to wait until there's a pause in the user's typing before firing off the ajax call, so that if they type "james" in rapid succession, you search for "james" rather than searching for "j", then "ja", then "jam", then "jame", and then "james".
For instance, let's say your search trigger is a keypress event. This would be a fairly common approach:
var keypressTimer = 0;

function handleKeypress() {
    if (keypressTimer) {
        clearTimeout(keypressTimer);
    }
    keypressTimer = setTimeout(doSearch, 100); // 100ms = 1/10th of a second
}

function doSearch() {
    var searchValue;
    keypressTimer = 0;
    searchValue = /*...get the search value...*/;
    doAjaxCallUsing(searchValue);
}
This is called "debouncing" the input (from hardware engineering, related to the mechanical and electrical "bouncing" of a key as it's pressed).
I've got a simple app that fetches a user's complete feed from the Facebook API in order to tally the number of words he or she has written total on the site.
After he or she authenticates, the page makes a Graph call to /me/feed?limit=100 and counts the number of responses and their dates. If there is a "next" cursor in the response, it then pings that next URL, which looks something like this:
https://graph.facebook.com/[UID]/feed?limit=100&until=1386553333
And so on, recursively, until we reach the time that the user joined Facebook. The function looks like this:
var words = 0;
var posts = function(callback, url) {
    url = url || '/me/posts?limit=100';
    FB.api(url, function(response) {
        if (response.data) {
            response.data.forEach(function(status) {
                if (status.message) {
                    words += status.message.split(/ /g).length;
                }
            });
        }
        if (response.paging && response.paging.next) {
            posts(callback, response.paging.next);
        } else {
            alert("You wrote " + words + " on Facebook!");
        }
    });
}
This works just fine for people who have posted up to about 4,000 statuses in total, but it really starts to crawl for power users with 10,000 lifetime updates or more. Each response from the API is only about 25 KB, but I cannot figure out what's straining the most.
After I've added the number of words in each status to my total word count, do I need to specifically destroy the response object so as not to overload memory?
Alternatively, is the recursion depth a problem? We're realistically talking about a total of 100 calls to the API for power users. I've experimented with upping the limit on each call to fetch larger chunks, but it doesn't seem to make a huge difference.
Thanks.
So you're doing this with the JS SDK, I guess, which means it runs in the browser... Did you try running it in Chrome and watching the network monitor to see the response times and so on?
With 100 requests, this also means the combined data/JSON must be about 2.5 MB in size, which could be quite challenging for some browsers/machines, I guess. It must also take quite a while to fetch all the data from FB. What does the user see in the meantime?
Did you think of implementing this in the backend on the server side and then just passing the results to the frontend?
For example, use Node.js together with Socket.IO to do it on the server side and dynamically update the word count, as sketched below.
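A rough server-side sketch of that suggestion; it assumes Node 18+ (built-in fetch), the socket.io package (v4-style API), and made-up event names ('start', 'progress', 'done', 'countError'):

const http = require('http');
const { Server } = require('socket.io');

const server = http.createServer();
const io = new Server(server);

// Page through /me/posts on the server and stream the running total back.
async function countWords(accessToken, socket) {
    let words = 0;
    let url = 'https://graph.facebook.com/me/posts?limit=100&access_token=' + accessToken;
    while (url) {
        const page = await (await fetch(url)).json();
        (page.data || []).forEach(function (status) {
            if (status.message) {
                words += status.message.split(/ /g).length;
            }
        });
        socket.emit('progress', words);            // live word-count update for the browser
        url = page.paging && page.paging.next;     // follow the cursor, if any
    }
    socket.emit('done', words);
}

io.on('connection', function (socket) {
    // The browser is assumed to send the user's access token after connecting.
    socket.on('start', function (accessToken) {
        countWords(accessToken, socket).catch(function (err) {
            socket.emit('countError', err.message);
        });
    });
});

server.listen(3000);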
I want to extract some data from the database without refreshing a page. What is the best possible way to do this?
I am using the following XMLHttpRequest function to get some data (shopping cart items) from the cart.php file. This file performs various functions based on the option value.
For example: option=1 means get all the shopping cart items; option=2 means delete all shopping cart items and return the string "Your shopping cart is empty."; option=3, 4... and so on.
My XHR function:
function getAllCartItems()
{
    if (window.XMLHttpRequest)
    {
        allCartItems = new XMLHttpRequest();
    }
    else
    {
        allCartItems = new ActiveXObject("Microsoft.XMLHTTP");
    }

    allCartItems.onreadystatechange = function()
    {
        if (allCartItems.readyState == 4 && allCartItems.status == 200)
        {
            document.getElementById("cartmain").innerHTML = allCartItems.responseText;
        }
        else if (allCartItems.readyState < 4)
        {
            //do nothing
        }
    }

    var linktoexecute = "cart.php?option=1";
    allCartItems.open("GET", linktoexecute, true);
    allCartItems.send();
}
cart.php file looks like:
$link = mysql_connect('localhost', 'user', '123456');
if (!$link)
{
    die('Could not connect: ' . mysql_error());
}
mysql_select_db('projectdatabase');

if ($option == 1) //get all cart items
{
    $sql = "select itemid from cart where cartid=".$_COOKIE['cart'].";";
    $result = mysql_query($sql);
    $num = mysql_num_rows($result);
    while ($row = mysql_fetch_array($result))
    {
        echo $row['itemid'];
    }
}
else if ($option == 2)
{
    //do something
}
else if ($option == 3)
{
    //do something
}
else if ($option == 4)
{
    //do something
}
My Questions:
1. Is there any other way I can get the data from the database without refreshing the page?
2. Are there any potential threats (hacking, server utilization, performance, etc.) in doing it this way? I believe a hacker could flood my server by sending unnecessary requests using option=1, 2, 3, etc.
I don't think a denial-of-service attack would be your main concern here. That concern would be just as valid if cart.php were to return HTML. No, exposing a public API for use via AJAX is pretty common practice.
One thing to keep in mind, though, is the ambiguity of both listing and deleting items via the same URL. It would be a good idea to (at the very least) separate those actions (or "methods") into distinct URLs (for example: /cart/list and /cart/clear).
If you're willing to go a step further, you should consider implementing a "RESTful" API. This would mean, among other things, that methods can only be called using the correct HTTP verb. You've possibly only heard of GET and POST, but there's also PUT and DELETE, amongst others. The reason behind this is to make the methods idempotent, meaning that they do the same thing again and again, no matter how many times you call them. For example, a GET call to /cart will always list the contents and a DELETE call to /cart will always delete all items in the cart.
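For instance, the client-side calls might look like the sketch below; the /cart URL and the idea of responding with ready-made HTML are assumptions for illustration.

// list the cart: a GET never changes anything on the server
var listReq = new XMLHttpRequest();
listReq.open("GET", "/cart", true);
listReq.onreadystatechange = function () {
    if (listReq.readyState == 4 && listReq.status == 200) {
        document.getElementById("cartmain").innerHTML = listReq.responseText;
    }
};
listReq.send();

// empty the cart: a DELETE always leaves the cart empty,
// no matter how many times it is repeated
var clearReq = new XMLHttpRequest();
clearReq.open("DELETE", "/cart", true);
clearReq.send();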
Although it is probably not practical to write a full REST API for your shopping cart, I'm sure some of the principles may help you build a more robust system.
Some reading material: A Brief Introduction to REST.
Ajax is the best option for this purpose.
Sending and receiving data with Ajax is best done with a structured format such as XML, so using a web service is what I would recommend. You can use a SOAP or REST web service to fetch data from the database on request.
You can use this link to understand more about web services.
Plenty of tutorial articles are available on the Internet.
You're using an XMLHttpRequest object, so you don't refresh your page (that's AJAX), or there's something you haven't told us.
If a hacker wants to DDoS your website or your database, he can use any of its pages... As long as you don't transfer strings between client and server that are used directly in your SQL requests, that should be OK.
I'd warn you about displaying the raw text response as-is. I encourage you to format your response as XML or JSON so you can correctly locate the objects that need to be inserted into the DOM, and to return an error tag so you can handle errors properly (the die("i'm your father luke") won't help any of your users) and display them in a dedicated area of your web page.
First, you should consider separating the different parts of your application. Having one general file that performs every task related to carts violates all sorts of software design principles.
Second, the first vulnerability is SQL injection. You should NEVER just concatenate input into your SQL.
Suppose I posted 1; TRUNCATE TABLE cart;. Then your SQL would look like:
select itemid from cart where cartid=1; TRUNCATE TABLE cart; which first selects the item in question, then ruins your database.
You should write something like this to escape the quotes before the value reaches the SQL:
$item = $_COOKIE['cart'];
$item = preg_replace("/(['\"])/", "\\\\$1", $item);   // prefix single and double quotes with a backslash
To avoid a full page refresh, you can put a link on your page that simply triggers the AJAX call again, something like a "Refresh" link.
In terms of security, it always pays to introduce a database layer concerned only with your data, regardless of your business logic, and then add a service layer that depends on the database layer and provides the facilities to perform business-layer actions.
You should also take note of @PPvG's recommendation and, using Apache's mod_rewrite or a similar facility, make your URLs more meaningful.
Another note: try to encapsulate your data in JSON or XML format. I'd recommend using json_encode() on the server side and JSON.parse() on the client side; this gives you a well-structured response that is easy to handle safely, as in the sketch below.
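A minimal client-side sketch of that approach; it assumes cart.php is changed to output json_encode() data shaped like {"items": [{"itemid": 1}, ...]}, which is an assumed format, not the current one.

allCartItems.onreadystatechange = function () {
    if (allCartItems.readyState == 4 && allCartItems.status == 200) {
        var cart = JSON.parse(allCartItems.responseText);   // structured data instead of raw markup
        var html = '';
        for (var i = 0; i < cart.items.length; i++) {
            html += '<div class="cart-item">Item #' + cart.items[i].itemid + '</div>';
        }
        document.getElementById("cartmain").innerHTML = html;
    }
};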