Stop browser caching images to disk - javascript

I'm developing a local application that displays sensitive images from a secure local server and I need to ensure those images are only viewable inside my web application - so I don't want them stored on disk (via cache or anything like that).
Here are my response headers (nodejs express):
res.header("Cache-Control", "no-cache, no-store, must-revalidate");
res.header("Pragma", "no-cache");
res.header("Expires", "0");
res.send(image);
Here is my AngularJS code to get the image and render it. I'm using AJAX because I'm authenticating the user via JWT so I need to include the auth header.
$http.get(src, {
  responseType: 'arraybuffer'
})
.then(function (res) {
  var headers = res.headers();
  var blob = new Blob([res.data], {
    type: headers['content-type']
  });
  image.src = $window.URL.createObjectURL(blob);
});
It seems to work but I'm confused as to what the Chrome and Firefox developer tools are showing me. I need to be 100% sure these images aren't stored on disk.
But I see:
Status Code: 200 OK (from cache)
[Chrome dev tools screenshot]
I can't see the image in the chrome://cache tab but it worries me that I'm seeing (from cache) in the network monitor.
In the Firefox cache viewer I see the URL of the image, but the device column says "memory". Could someone inspecting memory view it after some period of time, especially after a user has logged out?
[Firefox cache viewer screenshot]
So I guess my real question is what are these two browsers doing when I create a blob? And can anyone access these images outside the browser? What does it do on a mobile/tablet device?
If I were to transfer the image using a base64 string, would that be safer or worse in terms of privacy and security?

You can decode the JPEG blob to raw pixel data with jpg.js and draw it onto a canvas instead of using image elements. In that case there are no image resources at all for the browser to load over HTTP.
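A minimal sketch of the canvas approach, using the browser's built-in createImageBitmap rather than a decoder library like jpg.js; it assumes the image bytes arrive as a Blob (as in the $http code above), and the function name is illustrative:

```javascript
// Decode an image Blob and paint it onto a canvas, with no <img> element
// and no image URL that the browser could cache as an image resource.
async function drawBlobToCanvas(blob, canvas) {
  const bitmap = await createImageBitmap(blob); // in-browser decode, no <img>
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext('2d').drawImage(bitmap, 0, 0);
  bitmap.close(); // release the decoded pixel data
}
```

Note the pixels still live in GPU/renderer memory while displayed, so this changes where the data sits, not whether it exists in memory.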

Related

Fetch download progress in a content script

I fetch some data and display it on a web page afterwards. That works OK.
I want to show the download progress on the web page, but the following code does not work properly in a content script:
(async () => {
  const response = await fetch("https://i.imgur.com/Rvvi2kq.mp4");
  const reader = response.body.getReader();
  const contentLength = +response.headers.get('Content-Length');
  alert(contentLength); // 0
  // other code...
})();
It works properly (shows 2886550, not 0) only if I run it in the context of the page in the same domain (i.imgur.com for this example).
Can it work properly in a content script, or at least in a background script? And does it work when I fetch data from a different domain?
Is there any way to fetch data (not just download it to the Downloads folder) so I can work with it afterwards and watch the download (fetch) progress?
Upd: The code above* works properly in the background script, but only in Firefox and Chromium 76+ based browsers. It was a Chromium bug that made the code show 0.
*It's a part of the code from here.
Imgur.com's server does not send an Access-Control-Expose-Headers header exposing Content-Length, so download progress indicators are not possible. It could be faster to serve static content with HTTP/2 from your own domain/server, since you would not be opening new socket connections to other CDNs. You could also use your server as a proxy to Imgur.com, but you run the risk of them blocking your server's IP.
The fetch-progress-indicators examples show various download progress indicators with Fetch.
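When the header is exposed, the reading loop itself can be sketched as a small generic helper (the name and callback shape here are assumed, not from any particular library):

```javascript
// Read a fetch Response chunk by chunk, reporting progress as bytes arrive.
// Requires the server to expose Content-Length (via Access-Control-Expose-Headers
// for cross-origin requests) for the total to be non-zero.
async function readWithProgress(response, onProgress) {
  const total = +response.headers.get('Content-Length') || 0;
  const reader = response.body.getReader();
  const chunks = [];
  let received = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    received += value.length;
    onProgress(received, total); // e.g. update a progress bar here
  }
  // Stitch the chunks back together into one Uint8Array.
  const body = new Uint8Array(received);
  let offset = 0;
  for (const chunk of chunks) {
    body.set(chunk, offset);
    offset += chunk.length;
  }
  return body;
}
```

Usage would be e.g. `const bytes = await readWithProgress(await fetch(url), (r, t) => bar.value = r / t);`.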

DropZone.js not working using iOS

File upload with Dropzone.js only works in desktop and Android browsers; it doesn't work on iOS. It looks as though the file is uploaded, but when I refresh the page it isn't there...
This is the code...
jQuery(function () {
  Dropzone.autoDiscover = false;
  Dropzone.options.imageUpload = {
    paramName: "file", // The name that will be used to transfer the file
    maxFilesize: 5, // MB
    parallelUploads: 2, // limits number of files processed to reduce stress on server
    addRemoveLinks: true,
    accept: function (file, done) {
      // TODO: Image upload validation
      done();
    },
    sending: function (file, xhr, formData) {
      // Pass the CSRF token. You can use the same method to pass any other values as well, such as an id to associate the image with.
      formData.append("_token", $('meta[name="csrf-token"]').attr('content')); // Laravel expects the token post value to be named _token by default
    },
    init: function () {
      this.on("success", function (file, response) {
        // On successful upload do whatever :-)
        console.log(response);
      });
    }
  };
  // Manually init dropzone on our element.
  var myDropzone = new Dropzone("#image-upload", {
    url: '/post-scheduling/add'
  });
});
I had the same issue as #Chad, and the cause was the value of acceptedFiles.
When I changed it from ".jpg,.png" to "image/jpeg,image/png" it started to work normally.
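For reference, a minimal options fragment with that fix applied (the values are illustrative; `acceptedFiles` and `maxFilesize` are standard Dropzone options):

```javascript
// iOS reports files from the photo picker by MIME type, so match on MIME
// types rather than file extensions.
const imageUploadOptions = {
  acceptedFiles: "image/jpeg,image/png", // not ".jpg,.png"
  maxFilesize: 20 // MB; modern phone photos easily exceed a 5 MB limit
};
```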
I'm not sure if this answer will solve your entire issue, but it will at least be a solid start. I have just encountered a similar issue, so I've had to troubleshoot similar conditions.
In iOS 12, I've found the following to be true in both Chrome and Safari:
- I am able to upload photos of either orientation using the Take Photo option
- I cannot upload a photo from my photo library, though I am able to see and interact with all the photos in this section
- Using the Browse option, I don't have any images to test the upload with, but I am able to browse through the different apps' files
The first place I looked was the privacy permissions of Chrome. I had previously granted access to the camera, so that showed up (allowing me to upload photos via Take Photo), but the Photos privacy section did not contain Chrome (or Safari, for that matter). I believe this to be a bug, as in theory if Chrome does not have access to the Photos app, it should not be able to even browse your Photo Library. But currently, as stated above, you can in fact browse through it despite a lack of permissions in the privacy settings.
Next up, I had to verify that my server was configured to accept a larger max filesize for upload files (some phones nowadays including iPhones create some pretty large file sizes). This will be a different process depending on whether you're running apache, nginx, or another server configuration.
Lastly, make sure you have increased your maxFilesize inside your Dropzone.options. Right now, your code is set to 5 MB, and this is most assuredly too low for basically any smartphone since probably 2009 if not sooner.
Well, my issue is still unresolved after these steps, as I do not know how to fix what appears to be a bug in the permissions & handling of iOS's Photo Library upload. If anyone else can chime in, please do.

Meteor android app not showing images

I am using the excellent file-collection package,
https://atmospherejs.com/vsivsi/file-collection
to store images in my Mongo database. Running the app on Android doesn't show the images (they appear as broken images). In the browser it is perfect.
I don't think the problem is unique to this package, as it is using Mongo's gridfs to store the images, and provides URL's to access them.
Here is a note from Vaughn in the documentation:
Cordova Android Bug with Meteor 1.2+
Due to a bug in the Cordova Android version that is used with Meteor
1.2, you will need to add the following to your mobile-config.js or you will have problems with this package on Android devices:
App.accessRule("blob:*");
Which I have done, but without success.
I also see the documentation references setting headers to deal with CORS issues, like this:
myFiles = new FileCollection('myFiles',
  { resumable: true, // Enable built-in resumable.js chunked upload support
    http: [ // Define HTTP route
      { method: 'get', // Enable a GET endpoint
        path: '/:md5', // this will be at route "/gridfs/myFiles/:md5"
        lookup: function (params, query) { // uses express style url params
          return { md5: params.md5 }; // a query mapping url to myFiles
        },
        handler: function (req, res, next) {
          if (req.headers && req.headers.origin) {
            res.setHeader('Access-Control-Allow-Origin', 'http://meteor.local'); // For Cordova
            res.setHeader('Access-Control-Allow-Credentials', true);
          }
          next();
        }
      }
    ]
  });
But again without success.
Looking at the network tab on the inspector, I can't even see requests for the images from the server, which suggests that it is being denied by something in the Cordova code, and it's not even trying to go out and get the images.
I have reproduced the problem using Vaughn's demo app, which I have forked and added the android platform, so it's ready to go if you care to try and help.
https://github.com/mikkelking/meteor-file-sample-app
If you do a meteor run android-device it should run on the Android. You will need to register and then upload an image to see the problem. From a browser it works fine.
Any help would be appreciated, this is a show stopper for my project. One option I have considered is to move the images to an S3 bucket, which I think should work, but I'd like to keep the images in the db if I can.
I had a similar issue once with gridfs. I believe the issue arises because the image source is relative, so your image sources resolve to localhost. That works in the web version because the browser is on the same machine as your server, but on the Android device it won't work because the images are not served from that device.
When I had this problem I just deployed to production and it worked on mobile devices because the image source pointed to a url that was on the internet and not relative to the device. This works for production but not for dev testing.
When I saw this question I cloned your code and got it working on an android device for local dev.
The first step is to set the ROOT_URL env variable and the mobile server to point to your local server. When you run Meteor locally, you can set these with a command like the following, using your computer's local IP address:
export ROOT_URL=http://192.168.1.255:3000 && meteor run android-device --mobile-server=http://192.168.1.255:3000
Next, in your sample.coffee Template.collTest.helpers link function, you need to use the absolute url instead of a relative one (so that on your mobile device it will look to your local server instead of localhost). To dynamically get this so that it works on different servers, you can use something like this
Meteor.absoluteUrl(myData.baseURL + "/md5/" + this.md5)
Then I had to add the computer's ip address http://192.168.1.255:3000 to the content security policies in the sample.jade file.
I almost forgot: at this point I was getting a 403 Forbidden error. I changed the myData.allow read function in sample.coffee to just return true and the 403 was gone; something was going wrong with the permissions there.
After that the image showed up on my android device.

Web Worker: How to prevent that file gets loaded from cache?

This is incredibly annoying.. I am wondering why the heck my changes aren't reflected as I notice that my JavaScript file for my Web Worker always gets loaded from cache:
I have disabled the Cache and hitting Ctrl + F5 does not work either.
How can I make sure that this file does not get loaded from cache?
_worker = new Worker('js/toy-cpu.js');
You could add a kind of version number, for example like this:
_worker = new Worker('js/toy-cpu.js?v=' + new Date().getTime());
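If you would rather not touch the URL, another approach is to fetch the worker script yourself with caching disabled and start the worker from a blob URL. A sketch (the helper name is assumed):

```javascript
// Fetch the worker source bypassing the HTTP cache entirely, then run it
// from a blob URL so the browser never requests the cached file.
async function createFreshWorker(url) {
  const response = await fetch(url, { cache: 'no-store' }); // skip HTTP cache
  const source = await response.text();
  const blobUrl = URL.createObjectURL(new Blob([source], { type: 'text/javascript' }));
  return new Worker(blobUrl);
}
```

Usage: `_worker = await createFreshWorker('js/toy-cpu.js');`. Note that importScripts paths inside the worker become relative to the blob URL, so this suits self-contained worker scripts.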
If you are looking at this for development purposes / the configuration of your personal machine, rather than needing every user's browser to load it fresh from the web server:
Chrome has an option to disable the cache.
Notice the "Disable cache" checkbox, which you can check as I have.
In the section below, each request is listed when you refresh the page, and it indicates whether a URL was loaded from the web server: if Chrome reports 200 and the Size column shows a number (rather than "from cache"), it was loaded from the server. And if you double-click a URL in the inspector, you can see its HTTP headers.

Can I get the data of a cross-site <img/> tag as a blob?

I am trying to save a couple of images that are linked to by a webpage to offline storage. I'm using IndexedDB on Firefox and FileSystem API on Chrome. My code is actually an extension, so on Firefox I'm running on Greasemonkey, and on Chrome as a content script. I want this to be automated.
I am running into problem when I retrieve the image file. I'm using example code from the article titled Storing images and files in IndexedDB, but I get an error: the images I'm trying to download are on a different subdomain and the XHR fails.
XMLHttpRequest cannot load http://...uxgk.JPG. Origin http://subdomain.domain.com is not allowed by Access-Control-Allow-Origin.
On Firefox I could probably use GM_xmlhttpRequest and it'd work (the code works on both browsers when I'm in same-origin URLs), but I still need to solve the problem for Chrome, in which other constraints (namely, needing to interact with frames on the host page) require me to incorporate my script in the page and forfeit my privileges.
So it comes back to that I'm trying to figure out a way to save images that are linked to (and may appear in) the page to IndexedDB and/or FileSystem API. I either need to realize how to solve the cross-origin problem in Chrome (and if it requires privileges, then I need to fix the way I'm interacting with jQuery) or some kind of reverse createObjectURL. At the end of the day I need a blob (File object, as far as I understand) to put into the IndexedDB (Firefox) or to write to FileSystem API (Chrome)
Help, anyone?
Edit: my question may actually really come down to how I can use jQuery the way I want without losing my content script privileges on Chrome. If I do, I could use cross-origin XHRs on Chrome as well. Though I'd much rather get a solution that doesn't rely on that. Specifically since I'd like this solution if I get the script incorporated into the webpage, and not require it to be a content script/userscript.
Edit: I realized that the question is only about cross-site requests. Right now I have one of three ways to get the image blob, with the help of #chris-sobolewski, these questions and some other pages (like this), which can be seen in this fiddle. However, all of these require special privileges in order to run. Alas, since I'm running on a page with frames, because of a known defect in Chrome, I can't access the frames. So I can load a script into each frame by using all_frames: true, but I really want to avoid loading the script with every frame load. Otherwise, according to this article, I need to escape the sandbox, but then it comes back to privileges.
Since you are running on Chrome and Firefox, the answer is, fortunately, yes (kind of).
function base64img(i) {
  var canvas = document.createElement('canvas');
  canvas.width = i.width;
  canvas.height = i.height;
  var context = canvas.getContext("2d");
  // Draw the image onto the canvas so we can read its pixels back out.
  context.drawImage(i, 0, 0);
  // toDataURL returns a data: URL string, not a Blob.
  var dataUrl = canvas.toDataURL("image/png");
  // Strip the "data:image/...;base64," prefix, leaving just the base64 payload.
  return dataUrl.replace(/^data:image\/(png|jpe?g);base64,/, "");
}
This returns the base64-encoded image data.
From there, you just call the function along these lines:
var image = document.getElementById('foo');
var imgBlob = base64img(image);
Then go ahead and store imgBlob.
Edit: As file size is a concern, you can also store the data as a canvasPixelArray, which is width*height*4 bytes in size.
imageArray = context.getImageData(0, 0, context.canvas.width, context.canvas.height);
Then JSONify the array and save that?
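If you go the JSON route, note that typed arrays don't round-trip through JSON.stringify cleanly (they serialize as index-keyed objects). A minimal sketch, with assumed helper names:

```javascript
// Serialize width/height plus the raw RGBA bytes (width * height * 4 of them)
// into a JSON string, converting the typed array to a plain array first.
function serializeImageData(width, height, data) {
  return JSON.stringify({ width: width, height: height, data: Array.from(data) });
}

// Rebuild the pieces needed to reconstruct an ImageData in the browser.
function deserializeImageData(json) {
  var obj = JSON.parse(json);
  return { width: obj.width, height: obj.height, data: Uint8ClampedArray.from(obj.data) };
}
```

Be aware this is bulky: every pixel byte becomes decimal text, so the JSON can be 3-4x the raw size.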
