I'm trying to write a Greasemonkey script for Facebook's Timeline log page.
I need to catch two kinds of events:
1) URL changes (Facebook navigates with AJAX, so the page isn't fully reloaded and the script doesn't run again).
2) The appearance of certain elements.
I tried to create two MutationObservers, one to catch URL changes and the other to catch the appearance of those elements.
But only one of them seems to trigger (the url_mutation_observer).
Here is some of my code:
function handling_url_change(mutations){
    mutations.forEach(function (mutation){
        if (check_timeline()){
            if (!buttons_added){
                var element = $(document).find(button_location);
                if (element && element.length > 0){
                    add_buttons();
                }
            }
        }else if (set){
            reset();
        }
    });
}

function handle_deletion(mutations){
    mutations.forEach(function (mutation){
        if (mutation.addedNodes){
            for (var i = 0; i < mutation.addedNodes.length; i++){
                if (isheader(mutation.addedNodes[i])){
                    mutation.addedNodes[i].remove();
                }else if (isactivity(mutation.addedNodes[i])){
                    delete_activity(mutation.addedNodes[i]);
                }
                return true;
            }
        }
    });
    return true;
}
/*
** Mutation observers :
*/
var url_mutation_observer = new MutationObserver(handling_url_change);
var delete_mutation_observer = new MutationObserver(handle_deletion);
I have some questions:
1) Isn't each MutationObserver catching every mutation?
2) Is the addition of a button (like my script does) counted as a mutation?
3) If I add an element from a callback that is itself triggered by the addition of such an element, isn't there a risk of recursion and an infinite loop?
4) Is it possible to catch only the URL change with a MutationObserver, and to catch the appearance of only certain kinds of elements with the other one?
(To make sure each observer catches only what it needs to catch, I didn't know how to do that, so I check the URL with a function that inspects the page, rather than with a function that checks whether the mutation is a "UrlChangeMutation" or something like that.)
5) What could I do?
NB:
If you want to, here is the full script with console.log() calls for debugging:
http://dpaste.com/3NWV08J
NB2:
If you also want this script without the console.log() calls:
http://dpaste.com/3AH8YF5
First off, note that your for loop is broken, because it contains an unconditional return true.
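One way to restructure that loop (a sketch based on the code above, keeping your helper names) so that every added node actually gets inspected:
function handle_deletion(mutations) {
    mutations.forEach(function (mutation) {
        for (var i = 0; i < mutation.addedNodes.length; i++) {
            var node = mutation.addedNodes[i];
            if (isheader(node)) {
                node.remove();
            } else if (isactivity(node)) {
                delete_activity(node);
            }
            // no early return here, so the loop visits every added node
        }
    });
}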
1) Isn't each MutationObserver catching every mutation?
Yes, because you're observing document.
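If you only care about one part of the page, a sketch like this narrows what an observer sees ('#timeline_container' is a hypothetical selector, not Facebook's real markup):
// Observe a specific container instead of the whole document.
var container = document.querySelector('#timeline_container');
if (container) {
    delete_mutation_observer.observe(container, {
        childList: true,  // report added/removed children
        subtree: true     // include all descendants of the container
    });
}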
2) Is the addition of a button (like my script does) counted as a mutation?
Of course.
3) If I add an element from a callback that is itself triggered by the addition of such an element, isn't there a risk of recursion and an infinite loop?
Absolutely.
(It will be a non-blocking loop though, meaning the page might not freeze.)
4) Is it possible to catch only the URL change with a MutationObserver [...]?
No, that's not what MutationObservers do. From MDN:
MutationObserver provides developers a way to react to changes in a DOM.
Note that "UrlChangeMutation or something like that" does not exist.
4) [...] and to catch the appearance of only certain kinds of elements with the other one?
You cannot specify a filter, but you can check whether the affected elements meet your criteria in your callback function.
Hell, you have access to all methods and attributes of every node, what more could you need?
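As a sketch (using a hypothetical '.some-activity' class for illustration), the callback can simply ignore nodes it isn't interested in:
function handle_deletion(mutations) {
    mutations.forEach(function (mutation) {
        Array.prototype.forEach.call(mutation.addedNodes, function (node) {
            // only element nodes that match the selector are handled
            if (node.nodeType === 1 && node.matches('.some-activity')) {
                delete_activity(node);
            }
        });
    });
}
(In older browsers, Element.matches may only be available under a vendor prefix such as mozMatchesSelector.)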
5) What could I do?
My suggestion? Google more.
For example, the first result for "javascript on url change" is:
How to detect url change
And I suggest you read up on the DOM, because it seems that you are simply unaware of many things that exist there.
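For instance, since there is no mutation type for URL changes, a common sketch is to remember the last URL you saw and compare it whenever something runs:
var last_url = location.href;
function check_url_change() {
    if (location.href !== last_url) {
        last_url = location.href;
        // the URL changed via AJAX navigation: re-run your setup here
    }
}
// call check_url_change() from your observer callback, or poll as a fallback:
setInterval(check_url_change, 500);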
I had an error: observer.observer() isn't a function, but observer.observe() is: http://dpaste.com/3AH8YF5#line-254
I have set up a Custom JavaScript variable that works intermittently. The function is simply designed to return true or false depending on whether a given text is present on the page.
The code below works fine when the page is loaded directly from the URL bar and when executed in the developer tools console; run in the console, the function indeed returns true. But when the code is executed within debug mode in GTM, the value returns false when a history change event occurs.
function() {
    var content = document.body.innerText;
    var query = "text to search";
    if (content.search(query) > -1) {
        return true;
    } else {
        return false;
    }
}
Any assistance/insight is very much appreciated!
To me this seems like expected behavior. Since you are talking about history changes, you are probably working with a single page application, or some other page where the DOM is changed after the initial page load.
Custom JavaScript variables evaluate a function and return the result each time you reference them. Here is how I imagine the flow of operations goes:
Page loads (target text is in the page body) -> Custom JS evaluates on page view and returns true -> User presses some button -> DOM is modified to display new content (target text is removed and no longer present) -> History change occurs -> Custom JS evaluates again, the text is no longer present, so it returns false.
If the target text is still present after the history change, then I can understand why we have some unexpected behavior. The history change trigger is based on the pushState API, so what could be happening is that pushState() is called before the DOM has finished being modified. In that case, the text isn't present at the time of the history change event even though it is shortly afterwards.
You could change the page so pushState() is only called after the DOM is done being modified, use a custom event as a trigger instead (again, pushing it only after the DOM has been updated), or use a different trigger like the Element Visibility trigger, which fires only after the new DOM elements you want to target appear on-screen.
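For example, a sketch of the custom-event approach, assuming GTM's dataLayer is on the page and using a hypothetical 'spa_content_ready' event name and '#app' container:
function navigate(newHtml, newUrl) {
    document.getElementById('app').innerHTML = newHtml;    // update the DOM first
    history.pushState({}, '', newUrl);                     // then change the URL
    window.dataLayer = window.dataLayer || [];
    window.dataLayer.push({ event: 'spa_content_ready' }); // fire the GTM trigger last
}
You would then use 'spa_content_ready' as a Custom Event trigger in GTM instead of the built-in History Change trigger.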
I have JS code which resets the values of elements on a JSP page by type, in an AJAX response handler. The code runs a for loop over all elements and fetches elements by name for nearly 800-900 elements. IE 8 shows its annoying popup about an unresponsive script. I have been through a lot of articles about it, but none has helped so far, or I couldn't apply them to fix the issue.
Below is the code which is causing the popup.
function clearField(ele, eleType)
{
    if (eleType == "checkbox")
    {
        ele.prop('checked', false);
    }
    else if (eleType == "text")
    {
        ele.val("");
        ele.removeAttr('disabled');
        ele.removeAttr('readonly');
    }
    else if (eleType == "radio")
    {
        if (ele.is(':checked'))
        {
            ele.prop('checked', false);
        }
    }
    else if (eleType == "multiple")
    {
        if (ele.data('echMultiselect') !== undefined)
        {
            ele.multiselect("uncheckAll");
            ele.multiselect("refresh");
        }
    }
    else if (eleType == "hidden")
    {
        ele.val("");
    }
    else
    {
        ele.val("Select");
        ele.removeAttr('disabled');
        ele.removeAttr('readonly');
    }
}
The function above gets called in each iteration of the for loop. ele is fetched as below and passed to the function.
var ele = $("input[name='"+ elementName+"']");
Kindly suggest if any improvement can be done or any other approach can be used to implement the same.
The unresponsive-script popup is caused by a long-running function. It blocks JavaScript's single-threaded event loop and renders the page unresponsive until the function exits. If you run the clearing asynchronously, the event loop won't get blocked; for example, you can replace clearField(ele,eleType) with setImmediate(clearField.bind(null,ele,eleType)). (Note that setImmediate and Function.prototype.bind don't exist in IE 8 itself, so there you would fall back to setTimeout with a plain wrapper function.) This is a quick hack to free up the event loop and prevent the popup from appearing, but it does not address the underlying performance issues.
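As a sketch of that idea in a form that also works in IE 8, you can process the fields in small batches and yield between them with setTimeout (here fields is assumed to be an array of { ele, eleType } pairs built by your existing loop):
function clearFieldsInBatches(fields, batchSize) {
    var index = 0;
    function runBatch() {
        var end = Math.min(index + batchSize, fields.length);
        for (; index < end; index++) {
            clearField(fields[index].ele, fields[index].eleType);
        }
        if (index < fields.length) {
            setTimeout(runBatch, 0); // yield to the event loop, then continue
        }
    }
    runBatch();
}
Each batch returns control to the browser, so IE's script watchdog should not trip.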
DOM access is an expensive operation, and accessing hundreds of elements should be avoided if possible. If your use case is to always reset all the fields to a default state, I'd suggest keeping one looooong HTML string of all your elements and just calling $(parent).html(htmlString) to set them. This way you only need one DOM access and the effect is instantaneous.
http://api.jquery.com/html/
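A sketch of that approach, using a hypothetical #formContainer element and capturing its default markup once at page load:
var defaultFormHtml = $('#formContainer').html(); // captured once, while fields are in their default state
function resetForm() {
    $('#formContainer').html(defaultFormHtml);    // one DOM write instead of hundreds
}
Note that event handlers bound directly to the replaced elements would need to be re-bound, or attached via delegation to the container instead.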
So, basically, all my events (there are at least 360 of them) have "team1 vs. team2", "- vs. team2" or "team1 vs. -" placeholders.
On the initial render the events are colored depending on whether the event has one team or two:
orange for one team, green for two. Also, an event changes color on click.
But mostly, I'm interested in improving the performance of rendering the events.
Rendering performance in fullCalendar is really bad, and I couldn't find any solution to this problem.
So here's my code:
eventRender: function (event, element) {
    $(element).append((event.teams[0] != null ? event.teams[0] : '-') + '<br> vs. <br>' + (event.teams[1] != null ? event.teams[1] : '-'));
    if (event.teams.length === 1) {
        $(element).css('background', 'orange');
    }
    else if (event.teams.length > 1) {
        $(element).css('background', 'green');
    }
}
My main issue is that when I click an event to change its color, the script goes into eventRender or eventAfterRender and behaves exactly like a for statement: it iterates over all the events and only does what I want with the individual event once the loop lands on the clicked one.
Also, in eventClick I've called $('#myCalendar').fullCalendar('updateEvent', event), and I think there is a bug, because it goes into eventAfterRender or eventRender again, iterating over the whole events collection,
even though the 'updateEvent' parameter should instruct fullCalendar to update/render only that specific event.
Does anyone have any advice on this subject?
Fullcalendar now supports the renderEvents method: https://fullcalendar.io/docs/renderEvents.
Simply build your events list and send them all at once:
$("#calendar").fullCalendar('renderEvents', events, true);
I know this is an old question, but I solved the same performance problem in v5 of FullCalendar with this configuration option:
https://fullcalendar.io/docs/rerenderDelay
It basically adds a delay after each operation that would trigger a render.
If the framework detects another operation within that delay, it renders those events in one operation and thereby increases performance.
Setting the value to 1 (a 1-millisecond delay) did the trick for me. I simply added it to the configuration in my Angular component:
calendarOptions: CalendarOptions = {
...,
rerenderDelay: 1,
}
In fullcalendar's source code (at least in my version of it) there is the renderEvent handler, which calls the reportEvents function, and that is the performance bottleneck. I worked around the issue by adding handling for mass-rendering events to the source code.
I wrote a short function:
function massRenderEvents(events, stick) {
    var i;
    for (i = 0; i < events.length; i += 1) {
        normalizeEvent(events[i]);
        if (!events[i].source) {
            if (stick) {
                stickySource.events.push(events[i]);
                events[i].source = stickySource;
            }
            cache.push(events[i]);
        }
    }
    reportEvents(cache);
}
Under "EventManager" -function, and added it to EventManagers exports, like:
t.massRenderEvents = massRenderEvents;
Now, for every batch of rendered events, the heavy and slow reportEvents is called just once. Note that the massRenderEvents function is very similar to the original renderEvent function.
I have changed
$("#calendar").fullCalendar('renderEvent', eventData1, true);
to
$("#calendar").fullCalendar('addEventSource', eventData1, true);
and that worked for me. I have read about the issue on several related websites, and I did this as per their suggestions.
The main difference between renderEvent and addEventSource is that the first one touches the calendar every time a single event is created, which takes a lot of time because of the per-event callbacks, while the second one sends a whole bucket of JSON events to the calendar at once, requiring only a single callback, which improves performance and takes less time.
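In other words, a sketch of the change, with eventList standing in for your array of event objects:
// Before: one render pass per event
// for (var i = 0; i < eventList.length; i++) {
//     $('#calendar').fullCalendar('renderEvent', eventList[i], true);
// }

// After: the whole array as a single source, so the calendar re-renders once
$('#calendar').fullCalendar('addEventSource', eventList);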
This is a very simple use case: show an element (a loader), run some heavy calculations that tie up the thread, and hide the loader when done. I am unable to get the loader to actually show up before the long-running process starts. It ends up showing and hiding after the long-running process. Is adding CSS classes an async process?
See my jsbin here:
http://jsbin.com/voreximapewo/12/edit?html,css,js,output
To explain what a few others have pointed out: This is due to how the browser queues the things that it needs to do (i.e. run JS, respond to UI events, update/repaint how the page looks etc.). When a JS function runs, it prevents all those other things from happening until the function returns.
Take for example:
function work() {
var arr = [];
for (var i = 0; i < 10000; i++) {
arr.push(i);
arr.join(',');
}
document.getElementsByTagName('div')[0].innerHTML = "done";
}
document.getElementsByTagName('button')[0].onclick = function() {
document.getElementsByTagName('div')[0].innerHTML = "thinking...";
work();
};
(http://jsfiddle.net/7bpzuLmp/)
Clicking the button here will change the innerHTML of the div, and then call work, which should take a second or two. And although the div's innerHTML has changed, the browser doesn't get a chance to update how the actual page looks until the event handler has returned, which means waiting for work to finish. But by that time, the div's innerHTML has changed again, so when the browser does get a chance to repaint the page, it simply displays 'done' without ever displaying 'thinking...'.
We can, however, do this:
document.getElementsByTagName('button')[0].onclick = function() {
document.getElementsByTagName('div')[0].innerHTML = "thinking...";
setTimeout(work, 1);
};
(http://jsfiddle.net/7bpzuLmp/1/)
setTimeout works by putting a call to a given function at the back of the browser's queue after the given time has elapsed. The fact that it's placed at the back of the queue means that it'll be called after the browser has repainted the page (since the previous HTML changing statement would've queued up a repaint before setTimeout added work to the queue), and therefore the browser has had chance to display 'thinking...' before starting the time consuming work.
So, basically, use setTimeout.
Let the current frame render and start the process from a setTimeout with a 1 ms delay.
Alternatively, you could read a layout property such as element.clientWidth to force a synchronous reflow.
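A sketch combining both ideas (the loader element id and the calculation function are stand-in names); whether the browser actually paints before the work starts can vary, so test it in your target browsers:
var loader = document.getElementById('loader');  // hypothetical loader element
loader.style.display = 'block';
void loader.clientWidth;             // reading a layout property forces a synchronous reflow
setTimeout(runHeavyCalculation, 1);  // defer the heavy work to the next event loop turn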
More of a "what is possible" answer: you can run your calculations on a separate thread using HTML5 Web Workers.
This will not only make your loading icon appear, but also keep it animating.
More info about Web Workers: http://www.html5rocks.com/en/tutorials/workers/basics/
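A minimal sketch using an inline worker built from a Blob, with a busy loop standing in for the real calculation and a hypothetical #loader element:
var workerSource =
    'self.onmessage = function (e) {' +
    '    var total = 0;' +
    '    for (var i = 0; i < e.data; i++) { total += i; }' + // stand-in for the heavy work
    '    self.postMessage(total);' +
    '};';
var worker = new Worker(URL.createObjectURL(new Blob([workerSource], { type: 'application/javascript' })));

document.getElementById('loader').style.display = 'block';     // show the loader
worker.onmessage = function (e) {
    document.getElementById('loader').style.display = 'none';  // hide it when the worker is done
    console.log('result:', e.data);
};
worker.postMessage(1e8); // start the calculation off the main thread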
To see the problem in action, see this jsbin. Clicking on the button triggers the buttonHandler(), which looks like this:
function buttonHandler() {
var elm = document.getElementById("progress");
elm.innerHTML = "thinking";
longPrimeCalc();
}
You would expect that this code changes the text of the div to "thinking", and then runs longPrimeCalc(), an arithmetic function that takes a few seconds to complete. However, this is not what happens. Instead, "longPrimeCalc" completes first, and then the text is updated to "thinking" after it's done running, as if the order of the two lines of code were reversed.
It appears that the browser does not run "innerHTML" code synchronously, but instead creates a new thread for it that executes at its own leisure.
My questions:
What is happening under the hood that is leading to this behavior?
How can I get the browser to behave the way I would expect, that is, force it to update the "innerHTML" before it executes "longPrimeCalc()"?
I tested this in the latest version of chrome.
Your surmise is incorrect. The .innerHTML update does complete synchronously (and the browser most definitely does not create a new thread). The browser simply does not bother to update the window until your code is finished. If you were to interrogate the DOM in some way that required the view to be updated, then the browser would have no choice.
For example, right after you set the innerHTML, add this line:
var sz = elm.clientHeight; // whoops that's not it; hold on ...
edit — I might figure out a way to trick the browser, or it might be impossible; it's certainly true that launching your long computation in a separate event loop will make it work:
setTimeout(longPrimeCalc, 10); // not 0, at least not with Firefox!
A good lesson here is that browsers try hard not to do pointless re-flows of the page layout. If your code had gone off on a prime number vacation and then come back and updated the innerHTML again, the browser would have saved some pointless work. Even if it's not painting an updated layout, browsers still have to figure out what's happened to the DOM in order to provide consistent answers when things like element sizes and positions are interrogated.
I think the way it works is that the currently running code completes first, then all the page updates are done. In this case, calling longPrimeCalc causes more code to be executed, and only when it is done does the page update change.
To fix this you have to have the currently running code terminate, then start the calculation in another context. You can do that with setTimeout. I'm not sure if there's any other way besides that.
Here is a jsfiddle showing the behavior. You don't have to pass a callback to longPrimeCalc, you just have to create another function which does what you want with the return value. Essentially you want to defer the calculation to another "thread" of execution. Writing the code this way makes it obvious what you're doing (Updated again to make it potentially nicer):
function defer(f, callback) {
    var proc = function() {
        var result = f(); // keep the result local instead of leaking a global
        if (callback) {
            callback(result);
        }
    };
    setTimeout(proc, 50);
}

function buttonHandler() {
    var elm = document.getElementById("progress");
    elm.innerHTML = "thinking...";
    defer(longPrimeCalc, function (isPrime) {
        if (isPrime) {
            elm.innerHTML = "It was a prime!";
        } else {
            elm.innerHTML = "It was not a prime =(";
        }
    });
}