Continuous page refresh causes Firefox to increase memory consumption on Windows - javascript

I have an odd situation with a webapp that keeps running down memory on Firefox / Windows. Basically the app refreshes the data in the page using a POST call to the server made via jQuery. Every time the call is made, Firefox's memory consumption increases by an amount that's disproportionate to the size of the data returned from the server.
To see if this was specific to my app, I wrote a simple test app using Sinatra (Ruby 1.9.2-p318) and jQuery (1.7.1). The app sends a request to the server every 10 seconds and loads a 1MB html chunk to the page:
Server side:
require 'rubygems'
require 'sinatra'
require 'erb'
require 'json'

configure do
  set :static, true
end

post '/' do
  content_type :json
  # a simple html file containing ~ 1MB of data
  html = File.read(File.join(File.dirname(__FILE__), 'html.txt'))
  # convert to JSON and return to the client
  return { "html" => html }.to_json
end
Client side:
<!doctype html>
<html>
<head>
  <script type="text/javascript" src="/js/jquery-1.7.1.min.js"></script>
</head>
<body>
  <h1>Test Page</h1>
  <div id="results" style="display: none;"></div>
  <script type="text/javascript">
    $(function(){
      // refresh the data every 10 sec
      setInterval( function(){ doRefresh(); }, 10 * 1000 );
    });
    function doRefresh() {
      $.post('/', function(data){
        $('#results').html( data.html );
        // attempt to free some memory
        delete data;
      }, 'json');
    }
  </script>
</body>
</html>
Whatever I try, the memory consumption of the Firefox process (observed through Windows' Task Manager) keeps rising by tens of megabytes with each call. Even though the new data replaces the old data in the page, Firefox doesn't seem to dispose of the previously allocated memory. It turns out this runs the memory down completely if the page is left open overnight (on a plain 4GB machine).
Is this a JavaScript issue or something with Firefox? Can I somehow force garbage collection in either? Thanks.
EDIT: This memory issue wasn't observed with Google Chrome (13.0.782.112 on Win7).

Since jQuery hands 'data' to your callback as a plain argument, 'delete' cannot remove it: the delete operator only removes properties from objects, and applied to a variable or parameter it simply returns false.
…
$('#results').html( data.html );
delete data; // returns false: 'data' is a variable, not an object property
…
If deleting the data variable returns false, I think you can't do anything more from script; reclaiming the memory is up to the browser's garbage collector.

Related

Cucumber+Ruby+Capybara+Selenium: How to make the 'visit' method wait for dynamic content

Here is an issue that has been nagging me for weeks, and none of the solutions I found online seem to work, i.e. "wait for ajax", etc.
Gem versions:
capybara (2.10.1, 2.7.1)
selenium-webdriver (3.0.1, 3.0.0)
rspec (3.5.0)
Running Ruby 2.2.5:
ruby 2.2.5p319 (2016-04-26 revision 54774) [x64-mingw32]
In env.rb:
Capybara.register_driver :selenium do |app|
  browser = (ENV['browser'] || 'firefox').to_sym
  Capybara::Driver::Selenium.new(app, :browser => browser, :resynchronize => true)
end
Capybara.default_max_wait_time = 5
Here is my dynamicpage.feature
Given I visit page X
Then placeholder text appears
And the placeholder text is replaced by the content provided by the json service
and the step.rb
When(/^I visit page X$/) do
  visit('mysite.com/productx/')
end

When(/^placeholder text appears$/) do
  expect(page).to have_css(".text-replacer-pending")
end

Then(/^the placeholder text is replaced by the content provided by the json service$/) do
  expect(page).to have_css(".text-replacer-done")
end
The webpage in question, which I cannot include here as it is not publicly accessible, contains the following on page load:
1- <span class="text-replacer-pending">Placeholder Text</span>
after a call to an external service (which provides the Json data), the same span class gets refreshed/updated to the following;
2- <span class="text-replacer-done">Correct Data</span>
The problem I have with the "visit" method in Capybara + Selenium is that as soon as it visits the page, it considers everything loaded and freezes the browser, so it never lets the service be called to dynamically update the content.
I tried the following solutions, without success:
Capybara.default_max_wait_time = 5
Capybara::Driver::Selenium.new(app, :browser => browser.to_sym, :resynchronize => true)
adding sleep 5 after the visit method
the "wait for ajax" solution from several websites
adding After hooks
etc.
I am at a complete loss as to why "visit" can't wait, or at least provide a simple solution to an issue I am sure is very common.
I am aware of the Capybara methods that wait and those that don't (such as "visit"), but the issue is:
there is no content that goes from hidden to displayed;
there is no user interaction either, just the content getting updated.
I'm also unsure whether this is a Capybara issue, a Selenium issue, or both.
Does anyone have insight into a solution? I am fairly new to Ruby and Cucumber, so pointers on exactly what code goes in which file/folder would be much appreciated.
Mel
Restore the wait_until method (add it to your spec_helpers.rb):
require 'timeout'

# DEFAULT_WAIT_TIME no longer exists in Capybara 2.x, so default to the
# configured maximum wait time instead
def wait_until(timeout = Capybara.default_max_wait_time)
  Timeout.timeout(timeout) do
    sleep(0.1) until value = yield
    value
  end
end
And then:
# time in seconds
wait_until(20) { has_no_css?('.text-replacer-pending') }
expect(page).to have_css(".text-replacer-done")
@maxple and @nattf0dd:
Just to close the loop on our issue here: after looking at this problem from a different angle, we finally found out Cucumber/Capybara is not the problem at all :-)
The issue we are having lies with the Firefox browser driver (it is SSL related), since we have no issues when running the same test with the Chrome driver.
I do appreciate the replies and suggestions and will keep them in mind for the future.
Thanks again!

CQ: Why does jQuery add /ajax to the start of my web service URL?

I have written a little servlet that outputs data in the form of an RSS feed. It's running on my webserver at /services/rss.servlet and is returning data nicely.
In my webpage I am attempting to load data from the rss servlet like so:
$(document).ready(function() {
  $.get("/services/rss.servlet")
    .done(function(data) {
      console.log("Success: " + data);
    })
    .fail(function( jqxhr, textStatus, error ) {
      var err = textStatus + ", " + error;
      console.log( "Request Failed: " + err );
    });
});
MOST of the time this works fine and I get data. But every now and then the request fails, and I see the following request in my network debugging page:
Request GET /ajax/services/rss.servlet HTTP/1.1
Why am I seeing /ajax prepended to my URL? It seems completely undocumented in jQuery. In particular, I notice this behavior all the time in IE9 in quirks mode, but not in IE9 in standards mode.
I discovered that this problem was unique to a "feature" of Adobe CQ/WEM.
In a "normal" layout, CQ would have the following subdirectories publicly exposed:
www.example.com/apps
www.example.com/libs
www.example.com/etc
However, CQ contains code allowing it to be hosted in a relative URL deeper than the document root, in case your directory structure happened to be...
www.example.com/subdirectory/cqRoot/apps
www.example.com/subdirectory/cqRoot/etc
www.example.com/subdirectory/cqRoot/libs
The code that supports this is all client-side javascript which scans the <head> for these directories and figures out where CQ "ought to be".
In our case, one of the first items in our <head> was <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script>, and CQ thought this meant the application was stored under /ajax.
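CQ's actual detection code isn't shown here, but the failure mode can be illustrated with a hypothetical version of such a heuristic: scan each script URL for a known CQ directory and treat whatever precedes it as the context path. All names below are illustrative, not CQ's real API.

```javascript
// Hypothetical sketch (NOT CQ's real code) of a context-path heuristic that
// misfires: look for a known CQ directory in each <script src> and treat the
// path segment before it as the application's context path.
function guessContextPath(scriptSrcs, knownDirs) {
  for (var i = 0; i < scriptSrcs.length; i++) {
    // strip scheme and host, keep only the path
    var path = scriptSrcs[i].replace(/^https?:\/\/[^\/]+/, '');
    for (var j = 0; j < knownDirs.length; j++) {
      var idx = path.indexOf('/' + knownDirs[j] + '/');
      if (idx > 0) return path.slice(0, idx); // everything before the known dir
    }
  }
  return '';
}

// The Google CDN URL contains "/ajax/libs/...", so a heuristic like this
// concludes the app lives under "/ajax":
guessContextPath(
  ['http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js'],
  ['libs', 'apps', 'etc']
); // → '/ajax'
```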
There is a configuration to defeat this feature starting in CQ 5.6 from the Felix console. http://help-forums.adobe.com/content/adobeforums/en/experience-manager-forum/adobe-experience-manager.topic.685.html/forum__omci-we_are_on_cq54.html
You can also force it to be disabled in your <head>, so that it does not auto-detect.
<script>
  window.CQURLInfo = window.CQURLInfo || {};
  CQURLInfo.contextPath = "";
</script>

jQuery ajax get response not parsed or not seen

The problem is that jQuery does not receive the response from the server, and I can't figure out why. This is my setup (Windows 7 64-bit):
ext.js:
$('#button').click(function(){
  var string = $('#string').val();
  $.get('http://localhost:3000',
    { "input": string },
    function(data){
      alert(data);
      $('#feedback').text(data);
    });
});
099.html:
<!doctype html>
<html lang="en">
<head> <!-- charset title style -->
  <meta charset="utf-8"/>
  <title>jQuery 099</title>
</head>
<body><!-- tables, div's bad, html 5 is better: -->
  <input type="text" id="string" value=""/>
  <input type="button" id="button" value="ajax"/>
  <br/>
  <div id="feedback"></div>
  <script type="text/javascript" src="./js/jquery-2.1.1.min.js"></script>
  <script type="text/javascript" src="./js/ext.js"></script>
</body>
</html>
server.js:
var express = require('express');
var app = express();

app.get('/', function(req, res){
  console.log("got");
  console.log(req.query.input);
  var content = req.query.input;
  res.send(content);
});

app.listen(3000);
I run node 0.10 from the command line using
C:\dev\nodejs\0.10\servers\stackoverflow>node server.js
In my FF browser I type
http://localhost:3000/?input=hi
and I get a blank page containing hi, which is good. Node.js also prints got and then hi on the command line.
I run 099.html from Notepad++ > Run > Chrome, so it runs from a completely different drive, but surely it doesn't need to be in a server, right? When I type something, say XYZ, in the text field and click the ajax button, node prints XYZ on the console, which is good: the request is seen by node, so it should send a response, but I don't see the response in my html.
The expected behavior was an alert, and my div in the html getting filled and displaying XYZ.
What obvious point am I missing? I've been stuck for two hours now and couldn't find a similar question, perhaps because of my not knowing jQuery.
PS: the 099 example is from the newboston YouTube tutorial, and jQuery is from the jQuery site. I don't know the Express version; it's a fairly new one.
PS2: the jQuery $.get() API is too vague: http://api.jquery.com/jquery.get/ states "A callback function that is executed if the request succeeds." Well, can I conclude that the request succeeded because the node.js console reacted to it, and if so, why did the callback function not execute?
PS3: the last argument is dataType; perhaps node responds in a way $.get did not expect? Any dataType suggestions?
EDIT: Yesterday I dusted off my Tomcat, put the above files into it, and jQuery runs like a charm. How stupid of me, assuming that a file on a disk can communicate over HTTP with a server; what was I thinking. The essence of ajax is "listening for the asynchronous response" (for example http://msdn.microsoft.com/en-us/magazine/cc163479.aspx), so the file needs to reside in something that establishes an IP address of some kind, obviously. Sorry for polluting the Internet; case closed.
You cannot alter the port number when issuing an ajax request; this is a Same-Origin Policy restriction, and a page opened straight from disk has a file:// origin that matches no http origin at all. See what you can do with Socket.IO instead.
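The check browsers apply compares scheme, host, and port; a rough sketch (the URLs below are illustrative):

```javascript
// Rough sketch of the same-origin comparison browsers apply: scheme, host,
// and port must all match, so a file:// page never matches an http origin.
function sameOrigin(a, b) {
  var ua = new URL(a);
  var ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

sameOrigin('http://localhost:3000/', 'http://localhost:3000/?input=hi'); // true
sameOrigin('file:///C:/dev/099.html', 'http://localhost:3000/');         // false
```

Serving 099.html from the same Express app (for example via express.static) makes the origins match without any extra setup.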

Angularjs' $http.get only executed once in IE11

I'm learning AngularJS, and as a test project I'm polling a server that returns a list of active processes (their pids) and displaying them.
The client code looks like this:
<!DOCTYPE html>
<html>
<head>
  <script type="text/javascript" src="static/angular/angular.js"></script>
  <script>
    function ProcessCtrl($scope, $http, $interval) {
      $scope.ReloadData = function() {
        var result = $http.get("processdata", { timeout: 1000 });
        result.success(function(data, status, headers, config) {
          $scope.processes = data;
        });
      };
      $scope.ReloadData();
      var stop = $interval(function(){ $scope.ReloadData(); }, 1000);
    }
  </script>
</head>
<body ng-app>
  <div ng-controller="ProcessCtrl">
    Processes:
    <ul>
      <li ng-repeat="process in processes">
        {{process.pid}} is running
      </li>
    </ul>
  </div>
</body>
</html>
This works in Firefox and Chrome, but not quite in Internet Explorer 11.
All browsers execute the ReloadData method every second, but IE11 doesn't actually fetch the process data from the server; Firefox and Chrome do fetch it every second. I can see this in the output from my server as well, which logs every request.
All three browsers execute the code in result.success, but IE11 keeps reusing the old data it got the first time, whereas Firefox and Chrome use the newly fetched data.
I've checked the web console in IE11 for warnings or errors, but there are none.
Edit:
As the chosen answer suggested, it was a caching problem. I have made the server add a 'Cache-Control' header with the value 'no-cache' to the response; this has solved the problem.
It's possible that the request is cached, as it is valid to cache GET requests. FF and Chrome probably have their caches disabled, because dev tools are running or for other reasons. You could append a timestamp to the URL query string, "processdata?" + (new Date()).getTime(), to see whether it is a caching problem.
Prettier ways to prevent IE caching can be found here:
Angular IE Caching issue for $http
I prefer to use $http.post in order to prevent any cache.
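The timestamp trick from the answer above can be wrapped in a small helper (illustrative, not from the original answer):

```javascript
// Append a unique timestamp parameter so each GET has a distinct URL and
// cannot be served from the browser cache. Handles URLs with or without an
// existing query string.
function cacheBust(url) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + '_=' + Date.now();
}

// e.g. $http.get(cacheBust('processdata'), { timeout: 1000 });
```

jQuery's `cache: false` option applies the same idea, appending a `_=` timestamp to each request.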

How to access performance object of every resource in a web page?

In Chrome Developer Tools I can see the loading time, the time it took to get a particular resource from the server, and other info for all of the resources in a webpage.
I want to capture these stats using JavaScript. How is that possible?
There is a window.performance object available, but only for the requested page, not for the page's resources.
Is there any way to access a performance object for each of the page's resources?
You should be able to use window.performance.getEntries() to get resource-specific stats:
var resource = window.performance.getEntries()[0];
console.log(resource.entryType); // "resource"
console.log(resource.duration); // 39.00000000430737
console.log(resource.startTime); // 218.0000000007567
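A small helper (illustrative, not from the answer above) that shapes the getEntries() output into name/duration pairs; the entry fields used are part of the Resource Timing API:

```javascript
// Reduce a list of performance entries to the resource entries' name and
// duration. In the browser, pass window.performance.getEntries(); the array
// is taken as an argument so the function can be exercised anywhere.
function resourceDurations(entries) {
  return entries
    .filter(function (e) { return e.entryType === 'resource'; })
    .map(function (e) { return { name: e.name, duration: e.duration }; });
}
```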
There is still a bug in the latest version of Chrome (29.0.1547.76 m).
When you make an XMLHttpRequest, say while downloading an image, you can see in the Network tab that the resource is downloaded with status code 200 OK, but performance.getEntries() or performance.getEntriesByName(resourceUrl) doesn't list the resource entry. Once the page load is complete and you evaluate performance.getEntriesByName(resourceUrl) in the console, it lists it properly. So there is a lag in Chrome while populating the resource entries in the performance timeline.
In IE10, this works perfectly fine.
window.performance.getEntries() may not return all resources: the resource timing buffer has a size limit, and once it is full, records can be dropped. You need to capture the entries before that happens.
Head code part:
var storedEntries = [];
var updateStoredEntries = p => {
  // concat returns a new array, so the result must be assigned back
  storedEntries = storedEntries.concat(
    p.getEntries().filter(rec => /yourRegExp/.test(rec.name))
  );
};
performance.addEventListener('resourcetimingbufferfull', e => {
  updateStoredEntries(e.target);
});
...
Later part:
updateStoredEntries(performance);
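Besides capturing entries on 'resourcetimingbufferfull', the buffer itself can be enlarged where the browser supports it; a guarded sketch (the `perf` parameter is passed in purely so the helper is testable outside a browser):

```javascript
// Raise the resource timing buffer limit if the method is available, so
// fewer entries are dropped. Returns whether the limit was actually raised.
function raiseBufferLimit(perf, size) {
  if (perf && typeof perf.setResourceTimingBufferSize === 'function') {
    perf.setResourceTimingBufferSize(size);
    return true;
  }
  return false;
}

// In the page: raiseBufferLimit(window.performance, 500);
```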
