I'm writing a Firefox extension and I need to access items in Firefox's memory cache.
Here is the code I'm working with:
var nsICache = Components.interfaces.nsICache;
var cacheservice = Components.classes["@mozilla.org/network/cache-service;1"].getService(Components.interfaces.nsICacheService);
var cachesession = cacheservice.createSession("javascript", nsICache.STORE_IN_MEMORY, false);
cachesession.doomEntriesIfExpired = false;
// fileurl is captured from the nsIObserver and does print out correctly
var cachedescriptor = cachesession.openCacheEntry(fileurl, nsICache.ACCESS_READ, false);
// ERROR: NS_ERROR_CACHE_KEY_NOT_FOUND here
Since this is data fetched in the background, I have to use an nsIObserver to capture the request and snag its URI to be used as the cache key.
As I showed above, I get NS_ERROR_CACHE_KEY_NOT_FOUND, though a look through about:cache shows that the entry is clearly there. I also used a proxy to force caching to disk, but I got the same problem (with the code modified to look at the disk cache). I thought this might be because the cache item was still being written, so I made a recursive window.setTimeout to call the function repeatedly, but even after the download is finished I get the same error.
Is this, perhaps, an issue with the nsICacheSession? Maybe I'm not using the correct clientId. If so, what clientId should I be using?
I'm really at a loss here, so I'm hoping you guys can help me out.
Problem was the clientId. I used "javascript" because I saw it in an example. Turns out I needed to use "HTTP" instead.
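For reference, the createSession call from the question then becomes:

var cachesession = cacheservice.createSession("HTTP", nsICache.STORE_IN_MEMORY, false);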
I have been sitting here for almost an hour trying to test the website I'm building. Since I wanted to see the new changes from my code I reloaded, but it kept loading the old one. I opened the devtools to hard reload and empty-cache hard reload; they both load my old code. I went to incognito mode and it did the same thing. I went to devtools again to disable the cache from the settings and checked "disable cache" in the network tab; it still caches my old code. Add-ons to clear the cache didn't work either. Man, I haven't had this problem before; it only started last night and it's worse today.
I'm so lost now, since Chrome doesn't load the new changes from my JavaScript file. Is there a solution for this?
One solution for this problem is to force reloading the resource in order to bypass the cache. You can achieve this by modifying the URL with HTTP GET parameters:
Change:
<script src="myscripts.js"></script>
to:
<script src="myscripts.js?newversion"></script>
Where newversion can be any string, as it will be ignored by the server. A useful option is to use the date, or the version, of your code.
I found this workaround particularly useful when I came across this same problem and wanted to ensure that all clients (not just my own browser!) would run the new version of the code.
I think there's an even better way:
You can use PHP to add the last modification date of your JavaScript file to the URI of that file.
<script src="js/my-script.js?<?php echo filemtime('js/my-script.js'); ?>">
</script>
The browser will receive:
<script src="js/my-script.js?1524155368"></script>
The URI of the file will automatically change if the file is updated.
This way the browser can still cache unchanged files while recognizing changes instantly.
Are you using any type of compilation tools (like gulp or grunt)? It's possible that there is an error in your code, and the tool is not compiling the updated code.
Otherwise, the solution #airos suggested should work. Appending any unique query string to the reference of your JS will always serve a fresh copy on first reload (since the browser will be caching a new URL).
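If you would rather generate that unique query string from JavaScript than edit the HTML by hand, here is a minimal sketch (myscripts.js is the file name from the answer above; using Date.now() is just one possible scheme):

var s = document.createElement('script');
// A fresh timestamp makes the URL unique on every load, so the cached copy is skipped
s.src = 'myscripts.js?' + Date.now();
document.head.appendChild(s);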
I am trying to refresh a particular area of my PHP page, which will load the updated information from the database. My code works on localhost, but when I try to execute the same code on my domain it is not refreshing and not showing the updated information, and I don't know why... Anybody have any idea?
setInterval(updateShouts, 10000);
function updateShouts(){
    $('#refresh').load('ajax/check.php');
}
This is the code which I'm using for refreshing the #refresh element.
I'd check that the URL is correct:
You can use Firebug (or another JavaScript debugger) to watch the request going out, and see whether it returned a 404 error or worked.
Also, in the Console, just type $('#refresh') and make sure it returns an actual object. If it just displays [] or undefined, then the selector is wrong.
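You can also detect a failed request from the code itself: jQuery's .load() accepts a completion callback. A minimal sketch (the logging is only for illustration):

$('#refresh').load('ajax/check.php', function(response, status, xhr) {
    if (status === 'error') {
        // e.g. 404 if the ajax/check.php path does not resolve on the server
        console.log('Load failed: ' + xhr.status + ' ' + xhr.statusText);
    }
});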
Try:
function updateShouts()
{
    $('#refresh').load('ajax/check.php');
}
setInterval(function(){ updateShouts(); }, 10000);
The problem is that with most localhost dev servers the configuration, security, etc. are usually at the lax end of the scale compared to a host elsewhere. So that may or may not be part of the issue; I couldn't say for sure though.
Edit: I agree with the notion of checking that the path ajax/check.php is valid too. And Firebug is a very handy tool to have when developing with jQuery (or standalone JavaScript).
I have an application which updates an image from time to time. The update interval is not predictable. The image itself is updated atomically on the web server via rename(). That is all this application does and there shall be no change on the Apache side such that the webserver can continue to only serve static files.
There is some AJAX script which displays the content and updates this image when it is changed. This is done using polling. The naive JavaScript version used a counter and pulled the image each second or so by adding a query timestamp. However, 99% of the time this pulled the image unchanged.
The current, not so naive, version uses XMLHttpRequest (a.k.a. AJAX) to check the If-Modified-Since header, and if a change is detected the update is invoked.
The question now is: is there a better way to achieve this effect? Perhaps look at the last paragraph of this text before you dive into this ;)
Here are the core code snippets of the current version. Please note that the code is edited for brevity, so var initialization is left out and some lines which are not of interest are removed.
First the usual, slightly extended AJAX binding:
// partly stolen at http://snippets.dzone.com/posts/show/2025
function $(e){if(typeof e=='string')e=document.getElementById(e);return e};
ajax={};
ajax.collect=function(a,f){var n=[];for(var i=0;i<a.length;i++){var v=f(a[i]);if(v!=null)n.push(v)}return n};
ajax.x=function(){try{return new XMLHttpRequest()}catch(e){try{return new ActiveXObject('Msxml2.XMLHTTP')}catch(e){return new ActiveXObject('Microsoft.XMLHTTP')}}};
ajax.send=function(u,f,m,a,h){var x=ajax.x();x.open(m,u,true);x.onreadystatechange=function(){if(x.readyState==4)f(x.responseText,x,x.status==0?200:x.status,x.getResponseHeader("Last-Modified"))};if(m=='POST')x.setRequestHeader('Content-type','application/x-www-form-urlencoded');if(h)h(x);x.send(a)};
ajax.get=function(url,func){ajax.send(url,func,'GET')};
ajax.update=function(u,f,lm){ajax.send(u,f,'GET',null,lm?function(x){x.setRequestHeader("If-Modified-Since",lm)}:lm)};
ajax.head=function(u,f,lm){ajax.send(u,f,'HEAD',null,lm?function(x){x.setRequestHeader("If-Modified-Since",lm)}:lm)};
The basic HTML part; it includes two images which are flipped after loading, and a third one (not referenced in the code snippets) to display archived versions etc., which suppresses the update flipping while it is shown:
</head><body onload="init()">
<div id="shower"><img id="show0"/><img id="show1"/><img id="show2"/></div>
The init part includes the timer. There is a bit more to it, to compensate for network delays on slow links, reduce the polling rate, etc. (globals such as waiti, sleeps, sleeper and loadn are among the var initializations omitted for brevity):
function init()
{
window.setInterval(timer,500);
for (var a=2; --a>=0; )
{
var o=$("show"+a);
o.onload = loadi;
}
disp(0);
}
function disp(n)
{
shown=n;
window.setTimeout(disp2,10);
}
function disp2()
{
hide("show0");
hide("show1");
hide("show2");
show("show"+shown);
}
function hide(s)
{
$(s).style.display="none";
}
function show(s)
{
$(s).style.display="inline";
}
function timer(e)
{
if (waiti && !--waiti)
dorefresh();
nextrefresh();
}
function nextrefresh()
{
if (sleeps<0)
sleeps = sleeper;
if (!--sleeps)
pendi = true;
if (pendi && !waiti)
dorefresh();
}
From time to time dorefresh() is called to pull the HEAD, tracking If-Modified-Since:
function dorefresh()
{
waiti = 100; // allow 50s for this request to take
ajax.head("test.jpg",checkrefresh,lm);
}
function checkrefresh(e,x,s,l)
{
if(!l)
{
// not modified
lmc++;
waiti = 0;
}
else
{
lmc=0;
lm=l;
$("show"+loadn).src = "test.jpg?"+stamp();
waiti=100;
}
pendi=false;
sleeper++;
if (sleeper>maxsleep)
sleeper = maxsleep;
sleeps=0;
nextrefresh();
}
function stamp()
{
return new Date().getTime();
}
When the image is loaded it is flipped into view. shown usually is 0 or 1:
function loadi()
{
waiti=0;
$("show"+loadn).style.opacity=1;
if (shown<2)
disp(loadn);
loadn=1-loadn;
}
Please note that I only tested this code with Webkit based browsers yet.
Sorry, I cannot provide a working example, as my update source is non-public.
Also please excuse that the code is somewhat quick-n-dirty quality.
Strictly speaking, HEAD alone is enough; we could just look at the Last-Modified header, of course.
But this recipe here also works for GET requests in a non-image situation.
AJAX GET in combination with images makes less sense, as this pulls the image as binary data.
I could convert that into some inline image, of course, but on bigger images (like mine) this will exceed the maximum URL length.
One thing which possibly can be done is using the browser cache.
That is pull the image using an ajax.update and then re-display the image from the cache.
However this depends on the cache strategy of a browser. On mobile devices the image might be too big to be cached, in that case it is transferred twice. This is wrong as usually mobile devices have slow and more important expensive data links.
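A sketch of that cache-priming idea, using the ajax.update() helper and the globals defined above (treating any status other than 304 as "changed" is my assumption; my own checkrefresh() keys off the Last-Modified header instead, and as said, whether the re-assigned src is really served from the cache depends on the browser):

ajax.update("test.jpg", function(t, x, status, lastmod) {
    if (status != 304) {                  // full response: the image changed and is now cached
        lm = lastmod;                     // remember for the next conditional request
        $("show"+loadn).src = "test.jpg"; // should now be served from the browser cache
    }
}, lm);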
We could use this method if the webserver would write a text file, like JSON or a JS snippet, which then is used to display the image.
However, the nice thing about this code here is that you do not need to provide additional information.
So there are no race conditions, no weird new states like in disk-full cases; it just works.
So one basic idea is to not alter the code on the webserver which generates the picture, just do it on the browser side.
This way all you need is a softlink from the web tree to the image, and to make sure the image is updated atomically.
The downside of AJAX is the same origin policy, so AJAX can only check the HEAD of resources from the host which provided the running JavaScript code.
Greasemonkey or Scriptlets can circumvent that, but these cannot be deployed to a broad audience.
So foreign resources (images) sadly cannot be efficiently checked for updates.
Luckily, at my side both the script and the image originate from the same host.
Having said this all, here are the problems with this code:
The code above adds to the delay. First the HEAD is checked and if this shows that something has changed the update is done.
It would be nice to do both in one request, such that the update of the image does not require an additional roundtrip.
GET can achieve that with If-Modified-Since, and it works on my side; however, I found no way to properly display the result as an inlined image. It might work for some browsers, but not for all.
The code also is way too complex for my taste. You have to deal with possible network timeouts, not overwhelming limited bandwidth, trying to be friendly to the webserver, being compatible to as many browsers as possible, and so on.
Also I would like to get rid of the hack to use a query parameter just to pull an updated image, as this slowly fills/flushes the cache.
Perhaps there is a way, unknown to me, to "re-trigger" an image refresh in the browser?
This way the browser could check with If-Modified-Since directly and update the image.
With JavaScript this could then trigger a load event or similar.
At my side I do not even need that; all I want is to keep the image somewhat current.
I did not experiment with CANVAS yet. Perhaps somebody has an idea using that.
So my question is just: is there any better way (another algorithm) than the one shown above, apart from improving the code quality?
From what I understand, you have 2 sources of information on the server: the image itself and time of last update. Your solution polls on both channels and you want to push, right?
Start here: http://en.wikipedia.org/wiki/Comet_(programming), there should be a simple way to let the server update the client on a new image url. In case server and client support websockets it's a shortcut.
However, the most simple solution assumes no image URL change and runs
image.src = "";
image.src = url;
via setInterval(), letting the browser deal with the cache and headers.
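Put together, a minimal sketch of that polling loop (the element id and image URL are taken from the question; the 10-second interval is an arbitrary choice):

var image = document.getElementById("show0");
setInterval(function() {
    image.src = "";          // detach the current source
    image.src = "test.jpg";  // re-assign; the browser revalidates using its cache headers
}, 10000);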
This is my first time working with localStorage, and I'm trying to store a json object which I am stringify(ing).
On my home page, I use the following code
localStorage.setItem("users",JSON.stringify(userList));
alert(localStorage.getItem("users"));
location.href="test2.html";
This alert spits out the userlist no problem.
Then on the test2.html, I have
var userList=JSON.parse(localStorage.getItem("users"));
alert(userList.toSource());
Firebug spits out a 'userlist is null' error.
I've tried using the string 'test' in place of the stringified JSON object, but no luck. I've also tried removing the JSON.parse and just getting the string returned. That didn't work either.
What am I doing wrong??
Modern browsers have decided that local files ("file://" URLs, in other words) are not all to be considered members of a common "domain", at least for some purposes. Chrome seems to be the strictest, but for the HTML5 features Firefox may be just as strict.
It kind-of makes sense: if somebody invents some kind of TiddlyWiki-like tool that uses HTML5 storage, you wouldn't want its stuff to be available to some random other HTML file you download for totally unrelated reasons.
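If you want to see this from the code, a minimal defensive sketch for the test2.html side (the key is the one from the question):

var stored = localStorage.getItem("users");
if (stored === null) {
    // Nothing stored from this page's point of view: most likely the two
    // file:// pages do not share an origin. Serving both pages from the
    // same http:// host gives them one shared localStorage.
    alert("'users' not found in this page's localStorage");
} else {
    var userList = JSON.parse(stored);
}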
For support reasons, I want a user to be able to take a screenshot of the current browser window as easily as possible and send it over to the server.
Any (crazy) ideas?
That would appear to be a pretty big security hole in JavaScript if you could do this. Imagine a malicious user installing that code on your site with an XSS attack and then screenshotting all of your daily work. Imagine that happening with your online banking...
However, it is possible to do this sort of thing outside of JavaScript. I developed a Swing application that used screen capture code like this which did a great job of sending an email to the helpdesk with an attached screenshot whenever the user encountered a RuntimeException.
I suppose you could experiment with a signed Java applet (shock! horror! noooooo!) that hung around in the corner. If executed with the appropriate security privileges given at installation it might be coerced into executing that kind of screenshot code.
For convenience, here is the code from the site I linked to:
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
...
public void captureScreen(String fileName) throws Exception {
Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
Rectangle screenRectangle = new Rectangle(screenSize);
Robot robot = new Robot();
BufferedImage image = robot.createScreenCapture(screenRectangle);
ImageIO.write(image, "png", new File(fileName));
}
...
Please see the answer shared here for a relatively successful implementation of this:
https://stackoverflow.com/a/6678156/291640
Utilizing:
https://github.com/niklasvh/html2canvas
You could try to render the whole page in canvas and save this image back to the server. Have fun :)
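A minimal sketch of that approach (recent html2canvas builds return a promise; the /upload-screenshot endpoint is hypothetical):

html2canvas(document.body).then(function(canvas) {
    // Serialize the rendered page and post it back to the server
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/upload-screenshot", true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send("image=" + encodeURIComponent(canvas.toDataURL("image/png")));
});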
A webpage can't do this (or at least, I would be very surprised if it could, in any browser) but a Firefox extension can. See https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas#Rendering_Web_Content_Into_A_Canvas -- when that page says "Chrome privileges" that means an extension can do it, but a web page can't.
Seems to me that support needs (at least) the answers for two questions:
What does the screen look like? and
Why does it look that way?
A screenshot -- a visual -- is very necessary and answers the first question, but it can't answer the second.
As a first attempt, I'd try to send the entire page up to support. The support tech could display that page in his browser (answers the first question); and could also see the current state of the customer's html (helps to answer the second question).
I'd try to send as much of the page as is available to the client JS by way of AJAX or as the payload of a form. I'd also send info not on the page: anything that affects the state of the page, like cookies or session IDs or whatever.
The cust might have a submit-like button to start the process.
I think that would work. Let's see: it needs some CGI somewhere on the server that catches the incoming user page and makes it available to support, maybe by writing a disk file. Then the support person can load (or have loaded automatically) that same page. All the other info (cookies and so on) can be put into the page that support sees.
PLUS: the client JS that handles the submit-button onclick( ) could also include any useful JS variable values!
Hey, this can work! I'm getting psyched :-)
HTH
-- pete
I've seen people either do this with two approaches:
setup a separate server for screenshotting and run a bunch of Firefox instances on there; check out these two gems if you're doing it in Ruby: selenium-webdriver and headless
use a hosted solution like http://url2png.com (way easier)
You can also do this with the Fireshot plugin. I use the following code (that I extracted from the API code so I don't need to include the API JS) to make a direct call to the Fireshot object:
var element = document.createElement("FireShotDataElement");
element.setAttribute("Entire", true);
element.setAttribute("Action", 1);
element.setAttribute("Key", "");
element.setAttribute("BASE64Content", "");
element.setAttribute("Data", "C:/Users/jagilber/Downloads/whatev.jpg");
if (typeof(CapturedFrameId) != "undefined")
element.setAttribute("CapturedFrameId", CapturedFrameId);
document.documentElement.appendChild(element);
var evt = document.createEvent("Events");
evt.initEvent("capturePageEvt", true, false);
element.dispatchEvent(evt);
Note: I don't know if this functionality is only available for the paid version or not.
Perhaps http://html2canvas.hertzen.com/ could be used. Then you can capture the display and then process it.
You might try PhantomJS, a headless browsing toolkit.
http://phantomjs.org/
The following JavaScript example demonstrates basic screenshot functionality:
var page = require('webpage').create();
page.settings.userAgent = 'UltimateBrowser/100';
page.viewportSize = { width: 1200, height: 1200 };
page.clipRect = { top: 0, left: 0, width: 1200, height: 1200 };
page.open('https://google.com/', function () {
page.render('output.png');
phantom.exit();
});
I understand this post is 5 years old, but for the sake of future visitors I'll add my own solution here, which I think solves the original question without any third-party libraries apart from jQuery.
var pageClone = $('html').clone();
// Make sure that CSS and images load correctly when opening this clone
pageClone.find('head').append("<base href='" + location.href + "' />");
// OPTIONAL: Remove potentially interfering scripts so the page is totally static
pageClone.find('script').remove();
var htmlString = pageClone.html();
You could remove other parts of the DOM you think are unnecessary, such as the support form if it is in a modal window. Or you could choose not to remove scripts if you prefer to maintain some interaction with dynamic controls.
Send that string to the server, either in a hidden field or by AJAX, and then on the server side just attach the whole lot as an HTML file to the support email.
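For example, the AJAX variant can be as simple as this (the /support-snapshot endpoint is hypothetical):

// POST the captured markup; the server side attaches it to the support email
$.post('/support-snapshot', { html: htmlString });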
The benefits of this are that you'll get not just a screenshot but the entire scrollable page in its current form, plus you can even inspect and debug the DOM.
Print Screen? Old school and a couple of keypresses, but it works!
This may not work for you, but on IE you can use the snapsie plugin. It doesn't seem to be in development anymore, but the last release is available from the linked site.
I think you need ActiveX controls; without them I can't imagine it working. You can force the user to install them first; after the installation, the ActiveX controls should work on the client side and you can capture.
We are temporarily collecting Ajax states, data in form fields and session information. Then we re-render it at the support desk. Since we test and integrate for all browsers, there are hardly any support cases for display reasons.
Have a look at the red button at the bottom on holidaycheck
Alternatively there is html2canvas of Google. But it is only applicable to newer browsers, and I've never tried it.
In JavaScript? No. I do work for a security company (sort of NetNanny type stuff) and the only effective way we've found to do screen captures of the user is with a hidden application.