Angular Asynchronous Timing - javascript

I am working on an updating list that shows Twitch information about certain users. It should show their logo, name, and whether they are online. Currently, I am grabbing the channels and putting them into three arrays: one array for the Twitch channel information that contains information on all channels, one array that holds the display_name of the online users, and one array that holds the display_name of the offline users.
It currently works, but it only works if you press the beginning button. Why is the button necessary and how do I make the list populate without the button?
http://codepen.io/crosscris/pen/ZGEOor?editors=101
var app = angular.module('FCCTwitchChecker', []);
app.controller('FCCTwitchController', function() {
    var TwitchCheck = this;
    var FCCstreamers = ["freecodecamp", "storbeck", "terakilobyte", "habathcx", "notmichaelmcdonald", "RobotCaleb", "medrybw", "comster404", "brunofin", "thomasballinger", "joe_at_underflow", "noobs2ninjas", "mdwasp", "beohoff", "xenocomagain"];
    TwitchCheck.AllUsersChannelObjects = [];
    TwitchCheck.OfflineUsers = [];
    TwitchCheck.OnlineUsers = [];
    var BeginningURL = "https://api.twitch.tv/kraken/";
    FCCstreamers.forEach(function(streamer) {
        var streamURLstring = BeginningURL + "streams/" + streamer + "?client_id=1234chrisclientid4321&callback=?";
        var channelURLstring = BeginningURL + "channels/" + streamer + "?client_id=1234chrisclientid4321&callback=?";
        $.ajax({
            type: "GET",
            dataType: "json",
            url: streamURLstring,
            success: function(result) {
                if (result.stream === null || result.error === "Not Found") {
                    TwitchCheck.OfflineUsers.push(streamer);
                } else {
                    TwitchCheck.OnlineUsers.push(streamer);
                }
            },
            error: function() {
                console.log("It failed");
            }
        });
        $.ajax({
            type: "GET",
            dataType: "json",
            url: channelURLstring,
            success: function(result) {
                if (result.error !== "Not Found") {
                    TwitchCheck.AllUsersChannelObjects.push(result);
                    if (result.logo === null) {
                        // Note: this must be "=" (assignment), not "===" (comparison),
                        // or the placeholder logo is never set
                        TwitchCheck.AllUsersChannelObjects[TwitchCheck.AllUsersChannelObjects.length - 1].logo = "http://placehold.it/350x150";
                    }
                }
            },
            error: function() {
                console.log("It failed");
            }
        });
    });
    TwitchCheck.showArray = function() {
        console.log(TwitchCheck.AllUsersChannelObjects);
    };
    TwitchCheck.OnorOff = function(ChannelName) {
        if (TwitchCheck.OnlineUsers.indexOf(ChannelName.name) !== -1) {
            return "on";
        } else {
            return "off";
        }
    };
});
I was given a list of display_names, and some of the display names are not valid channels. I believe AJAX/Angular gets stuck trying to find information on the invalid channels, but I am not sure.

Since the jQuery HTTP call happens outside of Angular's digest cycle (it is an asynchronous call), you need to manually trigger a digest using $scope.$apply() from your controller in the success callback.
See example here: http://codepen.io/anon/pen/oXgqGM
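In code, a minimal sketch of that approach looks like this ($scope must be injected into the controller; FCCstreamers and BeginningURL are the variables from the question):
app.controller('FCCTwitchController', function($scope) {
    var TwitchCheck = this;
    TwitchCheck.OnlineUsers = [];
    TwitchCheck.OfflineUsers = [];
    FCCstreamers.forEach(function(streamer) {
        $.ajax({
            type: "GET",
            dataType: "json",
            url: BeginningURL + "streams/" + streamer + "?client_id=1234chrisclientid4321&callback=?",
            success: function(result) {
                // Wrap the model changes so Angular runs a digest and the view updates
                $scope.$apply(function() {
                    if (result.stream === null || result.error === "Not Found") {
                        TwitchCheck.OfflineUsers.push(streamer);
                    } else {
                        TwitchCheck.OnlineUsers.push(streamer);
                    }
                });
            }
        });
    });
});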
However, as mcranston18 suggested in a comment, it is better to use Angular's $http service rather than jQuery, as it takes care of the digest for you and you don't need to do it yourself.

This is because Angular updates all of its views, based on diffs, during its $digest cycle.
It uses a cycle of checking all of its children, and its children's children, and so on, so that it only does the work of checking when it knows something has changed.
...but the only way it knows something has changed, automagically, is if that change is because of an event in the Angular system.
Now, these changes might be so magical and so well hidden that you've never noticed or thought about them.
That's totally fine. We've all been in that place, and it means that the ng-team has done a great job of hiding those complexities, to make the experience seamless.
But reaching outside of Angular to grab or modify data, and then reaching into Angular's world from outside to poke at Angular values, isn't something that Angular knows how to check against.
The reason it works when you press the button is because it causes a hidden event, which tells Angular: "After this event has run (and all of its subroutines run), do a digest and apply all changes based on the diffs you find."
That (or similar) is the glue which keeps Angular (and React and ...library-X) magical.
To keep Angular happy, use $http.
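A minimal sketch of the $http version (an assumption of AngularJS 1.x, where $http.jsonp wants the literal JSON_CALLBACK placeholder in place of jQuery's "?"; FCCstreamers and BeginningURL are again taken from the question):
app.controller('FCCTwitchController', function($http) {
    var TwitchCheck = this;
    TwitchCheck.OnlineUsers = [];
    TwitchCheck.OfflineUsers = [];
    FCCstreamers.forEach(function(streamer) {
        var url = BeginningURL + "streams/" + streamer + "?client_id=1234chrisclientid4321&callback=JSON_CALLBACK";
        $http.jsonp(url).then(function(response) {
            // This callback runs inside Angular's world, so a digest happens automatically
            var result = response.data;
            if (result.stream === null || result.error === "Not Found") {
                TwitchCheck.OfflineUsers.push(streamer);
            } else {
                TwitchCheck.OnlineUsers.push(streamer);
            }
        });
    });
});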

Related

Laravel: Can I cache a query and then display results with jquery/javascript based on time and user key?

I want to trigger a flash message when a specific event (row) in a query satisfies two criteria:
The time ('PropTime') is within 5 minutes of expiring
The user ('userID') is associated with the row
Rather than querying the database constantly for each user, I would like to run a global query every hour that lists all upcoming events (with PropTime and userID) and cache it (using Laravel's built-in remember() method), then search this list of results (with jQuery or JavaScript) for qualifying events every 3 seconds or so (and trigger a flash message).
But I am not sure how to access the cached results with jQuery/JavaScript, nor how to write the script, or if this is even possible.
You'll do something like this. The Laravel side:
Route::get('events/query', function() {
    // Get the events that belong to the user,
    // where the PropTime is less than 5 minutes away,
    // and cache the returned events for 1 hour
    $events = \Cache::remember('events', 60, function() {
        return \App\Event::where('PropTime', '<', \Carbon\Carbon::now()->addMinutes(5))
            ->where('userID', \Auth::user()->id)
            ->get();
    });
    return response()->json($events);
});
The JS/jQuery side (using jQuery for $.ajax):
$.ajax({
    url: '/events/query',
    type: 'GET',
    dataType: 'json',
    success: function(data) {
        // Log the data so you can see how to
        // get the parts you want with JS
        console.log(data);
        // Loop through the events
        for (var i = 0; i < data.length; i++) {
            // The event name
            console.log(data[i].name);
        }
    }
});
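To check the cached results every 3 seconds as the question describes, one option is to wrap the request in setInterval. A sketch, assuming PropTime parses as a date and using a hypothetical showFlashMessage helper:
// Poll the cached endpoint every 3 seconds and flash when an event
// is within 5 minutes of expiring
setInterval(function() {
    $.getJSON('/events/query', function(data) {
        for (var i = 0; i < data.length; i++) {
            var minutesLeft = (new Date(data[i].PropTime) - Date.now()) / 60000;
            if (minutesLeft > 0 && minutesLeft <= 5) {
                showFlashMessage(data[i]); // hypothetical flash-message helper
            }
        }
    });
}, 3000);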

Angular dealing with incorrect cached data

Okay, this might be a long post, but please do not click away; you may know a simple answer.
The case:
Let's say you have built an Angular app where people log into the system, do some operations, and then might log out again. The application collects data from an API using a factory and service, and in order to make the application load even faster you save these data in variables, like so:
app.factory("divisionService", function (api, $http, $q) {
var division = {};
var divisionArray = [];
var mergedUserList = [];
return {
getList: function () {
var d = $q.defer();
if (divisionArray <= 0) {
$http.get(api.getUrl('divisionWithUsers', null))
.success(function (response) {
divisionArray = response;
d.resolve(divisionArray);
});
}
if (divisionArray.length > 0) {
d.resolve(divisionArray);
}
return d.promise;
},
This makes sure that if a controller uses the divisionService, it instantly gets the data once it has already been fetched.
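For illustration, a controller consuming this factory might look like the following sketch (the controller and scope property names are hypothetical):
app.controller('DivisionController', function($scope, divisionService) {
    // Resolves instantly from the cached array once the first call has completed
    divisionService.getList().then(function(divisions) {
        $scope.divisions = divisions;
    });
});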
The issue:
Now the user logs out and another user logs in (without refreshing/reloading the page). Once a controller calls this factory, it already thinks it has the correct list, meaning the return value would be the same as for the previous user; however, this data might be incorrect!
Since all Angular services are singletons, the service will not be destroyed upon logout, even though it should be.
The obvious answer
An answer to this question might be: "Well, then don't store the data in a variable." While that would work, an enormous amount of data might then make the content of the page load slowly.
So my question is: what do you do in the above situation? Do you really have to reload the data every time it is requested, or does Angular provide a smart way to solve this problem?
Create a clear() function
Add a clear() function to your divisionService factory which will be responsible for emptying the cached data structures (arrays, objects, ...):
app.factory("divisionService", function () {
var division = {};
var divisionArray = [];
var mergedUserList = [];
return {
clear: function(){
// Clear the cached data
for (var key in division)
{
delete division[key];
}
divisionArray.length = 0;
// ...
},
getList: ...
}
});
And call this function when you log out:
function logout() {
    divisionService.clear();
}
Refresh the application
You can also reload the entire application when you log out if you don't want to deal with clearing the cached data (e.g. calling divisionService.clear()):
function logout() {
    $window.location.reload();
}
This will cause the entire application to be reloaded, and all of the temporary (variable-based) cached data will be cleared.
Marc,
My first thought is to just run
divisionArray = [];
on logout. Let me know if that works. If not, I'll look into it further.
You can cache the user information as well and compare it to see if the user has changed before deciding to refresh the data.
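A sketch of that last idea, assuming the caller can pass the current user's id (the cachedForUserId bookkeeping is an addition, not part of the original factory):
app.factory("divisionService", function (api, $http, $q) {
    var divisionArray = [];
    var cachedForUserId = null; // which user the cache was built for
    return {
        getList: function (currentUserId) {
            var d = $q.defer();
            // Invalidate the cache if a different user is now logged in
            if (cachedForUserId !== currentUserId) {
                divisionArray = [];
                cachedForUserId = currentUserId;
            }
            if (divisionArray.length === 0) {
                $http.get(api.getUrl('divisionWithUsers', null))
                    .success(function (response) {
                        divisionArray = response;
                        d.resolve(divisionArray);
                    });
            } else {
                d.resolve(divisionArray);
            }
            return d.promise;
        }
    };
});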

How can I cleanly pull a Parse.Object relation's records when fetching the object?

In the Parse JavaScript guide, on the subject of Relational Data it is stated that
By default, when fetching an object, related Parse.Objects are not fetched. These objects' values cannot be retrieved until they have been fetched.
They also go on to state that when a relation field exists on a Parse.Object, one must use the relation's query().find() method. The example provided in the docs:
var user = Parse.User.current();
var relation = user.relation("likes");
relation.query().find({
    success: function(list) {
        // list contains the posts that the current user likes.
    }
});
I understand how this is a good thing, in terms of SDK design, because it prevents one from potentially grabbing hundreds of related records unnecessarily. Only get the data you need at the moment.
But, in my case, I know that there will never be a time when I'll have more than say ten related records that would be fetched. And I want those records to be fetched every time, because they will be rendered in a view.
Is there a cleaner way to encapsulate this functionality by extending Parse.Object?
Have you tried using include("likes")?
I'm not as familiar with the JavaScript API as the ObjC API, so in the example below I'm not sure if "objectId" is the actual key name you need to use...
var user = Parse.User.current();
var query = new Parse.Query(Parse.User);
// In the JS SDK the key is a string, and the object's id lives at user.id
query.equalTo("objectId", user.id);
query.include("likes");
query.find({
    success: function(users) {
        // find() returns an array; do stuff with users[0]
    }
});
In general, you want to think about reversing your relationship. I'm not sure it is a good idea to be adding custom values to the User object. Think about creating a Like type and having it point to the user instead.
Example from Parse docs:
https://parse.com/docs/js_guide#queries-relational
var query = new Parse.Query(Comment);
// Retrieve the most recent ones
query.descending("createdAt");
// Only retrieve the last ten
query.limit(10);
// Include the post data with each comment
query.include("post");
query.find({
    success: function(comments) {
        // comments now contains the last ten comments, and the "post" field
        // has been populated. For example:
        for (var i = 0; i < comments.length; i++) {
            // This does not require a network access.
            var post = comments[i].get("post");
        }
    }
});
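And a sketch of the reverse-relationship suggestion above, assuming a hypothetical Like class with pointer columns named "user" and "post":
var Like = Parse.Object.extend("Like");
var query = new Parse.Query(Like);
// Find the likes that point back at the current user
query.equalTo("user", Parse.User.current());
// Pull each liked post down in the same request
query.include("post");
query.find({
    success: function(likes) {
        for (var i = 0; i < likes.length; i++) {
            var post = likes[i].get("post"); // no extra network access
        }
    }
});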
Parse.Object's {Parse.Promise} fetch(options), combined with Parse.Promise's always(callback), is the key.
We can override the fetch method when extending Parse.Object so that it always retrieves the relation's objects.
For example, consider the following, where we want to retrieve a post and its comments (let's assume this is happening inside a view that wants to render the post and its comments):
var Post = Parse.Object.extend("Post"),
    postsQuery = new Parse.Query(Post),
    myPost;
postsQuery.get("xWMyZ4YEGZ", {
    success: function(post) {
        myPost = post;
    }
}).then(function(post) {
    post.relation("comments").query().find({
        success: function(comments) {
            myPost.comments = comments;
        }
    });
});
If we had to do this every time we wanted to get a post and its comments, it would get very repetitive and very tiresome. And we wouldn't be DRY, copying and pasting some 15 lines of code every time.
So, instead, let's encapsulate that by extending Parse.Object and overriding its fetch function, like so:
/*
models/post.js
*/
window.myApp = window.myApp || {};
window.myApp.Post = Parse.Object.extend("Post", {
    fetch: function(options) {
        var _arguments = arguments;
        this.commentsQuery = this.relation("comments").query();
        return this.commentsQuery.find({
            success: (function(_this) {
                return function(comments) {
                    return _this.comments = comments;
                };
            })(this)
        }).always((function(_this) {
            return function() {
                return _this.constructor.__super__.fetch.apply(_this, _arguments);
            };
        })(this));
    }
});
Disclaimer: you have to really understand how closures and IIFEs work in order to fully grok how the above works, but here's what will happen, at a descriptive level, when fetch is called on an existing Post:
Attempt to retrieve the post's comments and set them on the post's comments attribute
Regardless of whether that operation succeeds or fails, always perform the post's default fetch operation, and invoke all of that operation's callbacks
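With that override in place, callers can use fetch as usual and find the comments already attached. A sketch:
var post = new window.myApp.Post();
post.id = "xWMyZ4YEGZ"; // the post's objectId
post.fetch({
    success: function(fetchedPost) {
        // fetchedPost.comments was populated by the overridden fetch
        console.log(fetchedPost.comments);
    }
});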

Saving Breeze entities manually and updating keyMappings

I want to manually save entities in Breeze. We just don't have the option (as much as I try to fight for my opinion) to use SaveChanges(JObject saveBundle), and we need to hit a third-party Web API directly with a specific URL for POST/PUT requests.
So I am basically looping through EntityManager.getChanges() and then handling Modified, Added, and Deleted entities.
I can handle the "Modified" without any problems. However, on "Added", I know I need to update keyMappings when I add a new entity after successful save but cannot find any documentation on how to do that manually in JavaScript.
I also wanted to see if there any examples in returning any errors. Basically I want to hook into this call:
$http(params).then(
    function (response) { // success
        console.log(response);
        // update key mappings if it's an "Added" somehow
        // entityAspect.acceptChanges();
        dfd.resolve("something eventually");
    },
    function () { // error
        // add error object here and reject changes on this entity? or just show error message?
        dfd.reject("error");
    });
return dfd.promise;
In case anyone's wondering: I just check the entityAspect.entityState.isAdded() method, get the new identity returned from the third party, and update the id accordingly. Our system is a little bit nicer in that we have a set key for all of the entities.
Code-wise it looks something like this (dfd is a $q deferred):
$http(params).then(
    function (response) { // success
        // on add, update the instance id with the new instance id
        if (entityState.isAdded()) {
            var newId = response.data.changedInstance.newId;
            entity.Id = newId;
        }
        entityAspect.acceptChanges();
        dfd.resolve(response.data);
    },
    function (response) { // error
        dfd.reject(response.data);
    });

Saving only changed attributes in Backbone.js

I'm attempting to save only the changed attributes of my model by doing the following:
this.model.set("likes", this.model.get("likes") + 1);
this.model.save();
and extending the Backbone prototype like so:
var backbone_common_extension = {
    sync: function(method, model, options) {
        options.contentType = 'application/json';
        if (method == 'update') {
            options.data = JSON.stringify(model.changedAttributes() || {});
        }
        console.log(options.data);
        Backbone.sync.call(this, method, model, options);
    }
};
_.extend(Backbone.Model.prototype, backbone_common_extension);
The problem is that model.changedAttributes() is always empty. I've tried passing {silent: true} to the set method, but the result is the same. How can I keep Backbone from clearing out changedAttributes() before sync?
Relying on changedAttributes for this sort of thing tends not to work very well; Backbone doesn't really have a well-defined set/commit/save/rollback system, so you'll find that changedAttributes gets emptied when you don't expect it to.
Lucky for you, that doesn't matter; unlucky for you, it doesn't matter because your whole approach is wrong. What happens if you load your model from the server and then five likes come in from other browsers? Your approach would overwrite all those likes and data would get lost.
You should let the server manage the total number of likes; your model should simply send a little "there is one more like" message and let the server respond with the new total. I'd do it more like this:
// On the appropriate model...
add_like: function() {
    var _this = this;
    $.ajax({
        type: 'post',
        url: '/some/url/that/increments/the/likes',
        success: function(data) {
            _this.set('likes', data.likes);
        },
        error: function() {
            // Whatever you need...
        }
    });
}
Then the /some/url/... handler on the server would send a simple "add one" update to its database and send back the updated number of likes. That leaves the database responsible for managing the data and concurrency issues; databases are good at this sort of thing, client-side JavaScript not so much.
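For illustration, that server-side handler could be as small as the following Node/Express sketch (the route, the incrementLikes helper, and the response shape are assumptions, not part of the original answer):
var express = require('express');
var bodyParser = require('body-parser');
var app = express();
app.use(bodyParser.json());

app.post('/some/url/that/increments/the/likes', function(req, res) {
    // incrementLikes is a hypothetical data-access helper that runs an atomic
    // "UPDATE ... SET likes = likes + 1" and yields the new total
    incrementLikes(req.body.id, function(err, newTotal) {
        if (err) return res.status(500).end();
        res.json({ likes: newTotal });
    });
});

app.listen(3000);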
