I know the traditional way is to store image/video files in one place and then just save a reference (index) in a database table.
Now I am learning about GunDB. I can store key-value JSON-type data very easily, but since it is decentralized, if I want to make, say, a chatroom app, how should I handle image storage (e.g. a user's avatar)?
I am also wondering whether it is possible to make a movie-sharing app using GunDB.
#Retric, great question! I'm not sure why people are downvoting you, they must be haters.
You are right, it is better to store the image/video elsewhere and reference it via GUN. For videos specifically, WebTorrent/BitTorrent has done P2P video sharing for over a decade now, and at one point it handled 40% of the world's internet traffic!
However, WebTorrent/BitTorrent is not very good at discovering/sharing those URIs (magnet links, etc.), but GUN is. So I'd recommend that combination as one option.
For images, especially small ones like avatars/icons/profiles, I do often store them in GUN directly by Base64 encoding them (many websites inline images/icons/sprites/avatars into CSS files as base64 data URLs; now you can use GUN for this instead).
If you are interested in this, I wrote a small utility using jQuery that lets you drag & drop images into your website; it will auto-resize them (pass options to override this) and base64 encode them for you to then save to GUN:
https://github.com/amark/gun/blob/master/lib/upload.js
Here is a small example of how I use it:
$('#profile').upload(function resize(e, up){
  if(e.err){ return } // handle error
  $('#profile').addClass('pulse'); // CSS to indicate image processing
  if(up){ return up.shrink(e, resize, 64) } // pass it the drag&drop/upload event `e`, reuse this function (named resize) as the callback, and tell it to resize to 64px
  $('#profile').removeClass('pulse'); // CSS to indicate processing is done
  $("#profile img").attr('src', e.base64).removeClass('none'); // set photo in HTML!
  gun.user().get('who').get('face').get('small').put(e.base64); // save profile thumbnail to GUN
});
Finally, what about storing videos in GUN if you don't want to use BitTorrent?
I would highly recommend using the HLS format to store videos in GUN; this allows decentralized realtime video streaming. It is a beautifully simple format that lets video streaming work even from static files, because it stores the video in small chunks that can be streamed, which fits perfectly with GUN.
There already is a JS based video-player for the HLS format:
https://github.com/video-dev/hls.js/
Based on the demo page, you can see an example of how the video is stored, like here on GitHub:
https://github.com/video-dev/streams/tree/master/x36xhzz
(If you click on the .m3u8 file, you'll see it has metadata indicating that the 720p rendition is stored in the url_0 folder, which itself has sub-files.)
Rather than storing the HLS video files on BitTorrent or a centralized server, you could store them in GUN using the same folder structure, e.g. gun.get('videos').get('x36xhzz').get('url_0').get('url_496').get('193039199_mp4_h264_aac_hd_7.ts').once(function(video_chunk){ passToHLSplayer(video_chunk) }), so that it would be easy to integrate HLS.js with GUN.
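To make that concrete, here is a minimal sketch of writing one HLS segment into GUN and reading it back for a player. It assumes the segment is stored as a base64 string, and passToHLSplayer() stands in for whatever hook your player (e.g. a custom HLS.js loader) uses; both are assumptions for the example, not GUN or HLS.js APIs.
// assumes gun.js is loaded and a gun instance exists
var gun = Gun();

// write: store one transport-stream segment under the folder-like path
function storeSegment(base64Ts){
  gun.get('videos').get('x36xhzz').get('url_0').get('url_496')
     .get('193039199_mp4_h264_aac_hd_7.ts')
     .put(base64Ts);
}

// read: fetch the segment and hand it to the player as binary data
gun.get('videos').get('x36xhzz').get('url_0').get('url_496')
   .get('193039199_mp4_h264_aac_hd_7.ts')
   .once(function(base64Ts){
     if(!base64Ts){ return }
     var bytes = Uint8Array.from(atob(base64Ts), function(c){ return c.charCodeAt(0) });
     passToHLSplayer(bytes.buffer); // hypothetical hook, e.g. resolving a custom HLS.js loader callback
   });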
Now you'll have P2P decentralized video streaming!!!
And even cooler, you can combine it with GUN's lib/webrtc adapter and do this fully browser to browser!
I hope this helped.
The thing to understand here is the difference between content-addressed space (frozen space) and user space in GUN.
Let's say you have some media encoded as base64, and you know its content type (I've used text here to keep the example short, but you could use an image, video, etc.):
// Assumes Gun and SEA are loaded, e.g. in the browser:
// <script src="gun/gun.js"></script><script src="gun/sea.js"></script>
// (or in Node: const Gun = require('gun'); require('gun/sea'); const SEA = Gun.SEA;)
const gun = Gun();

// put avatar in frozen space:
let media = JSON.stringify({ b64: "U2hlIHdhcyBib3JuIGFuIGFkdmVudHVyZXIuLi4=", type: "text/plain" });
// get hash of the stringified media object using GUN's SEA lib:
let mediaID = await SEA.work(media, null, null, { name: "SHA-256" });
// put media in hash-addressed (content-addressed) GUN space:
gun.get('#').get(mediaID).put(media, (r) => console.log('Media put acknowledged?', r));
For a hypothetical chat app, you could use user space and put the media under the "avatar" name:
// put avatar in user space:
let user = await SEA.pair();
await gun.user().auth(user);
gun.get('~' + user.pub).get('avatar').put('#' + mediaID);

// retrieve a user's avatar
gun.get('~' + user.pub).get('avatar').once((hashid, k) => {
  // strip the leading '#' so we look the media up by its raw hash
  gun.get('#').get(hashid.slice(1)).once(media => {
    console.log("Got user's avatar :-)", media);
    // do something with media
  });
});
Related
I have a Pinterest-like application. Images and other related information are stored in MongoDB. Image sizes are generally about 1 MB. Images are displayed with infinite scroll. When a long script with base64 strings is loaded, the browser crashes or response time becomes really high (especially in Internet Explorer).
What is the best way to display images that are stored in MongoDB?
I think the best way to achieve this is to have your files physically stored in some public folder on your server. They should be accessible in a way that only requires something like
http://www.myhost.com/images/my/path/to/image.jpg
You can still keep your Base64 image in MongoDB as a backup; however, this is not the best way to retrieve your images, due to performance issues (as you have seen). I recommend you do the following:
Each time you store the image in Mongo, be sure to also store the image file itself in some public place on your server. Keep in mind that you should store the path to that file in the Mongo model you are using. The next time you fetch the object, rather than getting the base64 image, you only get the path to the image.
Let's say you have this model:
myModel = {
name: "some name",
image64: "someextralongstringveryveryveryweird......",
imageUrl: "/images/my/path/to/image/imagename-id.jpg"
}
The next time you query it, you can just ignore image64 using a Mongo projection (see the sketch below), and on the client side you just use an HTML tag that makes use of that URL:
<img src="/images/my/path/to/image/imagename-id.jpg">
This will help you a lot with performance.
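As a rough sketch of that projection (assuming a Mongoose model named MyModel; the model name is just for the example):
// exclude the heavy base64 field and only return what the client needs
// (with the plain driver, the projection object { image64: 0 } works the same way)
MyModel.find({}, { image64: 0 }, function(err, docs){
  if(err){ return console.error(err); }
  docs.forEach(function(doc){
    console.log(doc.name, doc.imageUrl); // image64 is not loaded
  });
});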
There are some libraries that can help you manage image file creation. ImageMagick is one that I have used, and it is very versatile.
I guess you have some server-side part of this application? Why don't you create a tiny API to retrieve images?
Your browser will have information about the image and can ask your server for it, something along the lines of http://your_server/api/image/imageID or http://your_server/images/imagename, and then your server would just stream the image; you don't need to store it in the file system.
On the client side (browser) you just need to implement 'lazy loading'.
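A minimal lazy-loading sketch (the data-src attribute and .lazy class are assumptions for the example; modern browsers also support <img loading="lazy"> natively):
// swap data-src into src only when the image scrolls near the viewport
var observer = new IntersectionObserver(function(entries){
  entries.forEach(function(entry){
    if(entry.isIntersecting){
      var img = entry.target;
      img.src = img.dataset.src; // e.g. "/api/image/imageID"
      observer.unobserve(img);
    }
  });
});
document.querySelectorAll('img.lazy[data-src]').forEach(function(img){
  observer.observe(img);
});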
If you're using MongoDB, you should be storing images in your database using GridFS (http://excellencenodejsblog.com/gridfs-using-mongoose-nodejs/), a feature which exposes something like a virtual filesystem for your application.
Using GridFS, you could write a controller method which streams a requested file from your MongoDB instance and pipes the file content to the response.
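A rough sketch of such a controller with the Node MongoDB driver's GridFSBucket and Express (the route, database name and bucket name are assumptions for the example):
var { MongoClient, GridFSBucket } = require('mongodb');
var express = require('express');
var app = express();

MongoClient.connect('mongodb://localhost:27017').then(function(client){
  var db = client.db('myapp'); // assumed database name
  var bucket = new GridFSBucket(db, { bucketName: 'images' });

  // stream a stored image straight to the response
  app.get('/images/:filename', function(req, res){
    res.type('image/jpeg'); // or look up the content type per file
    bucket.openDownloadStreamByName(req.params.filename)
      .on('error', function(){ res.sendStatus(404); })
      .pipe(res);
  });

  app.listen(3000);
});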
I would recommend storing the images on the filesystem and using your web server to handle serving them to clients.
For performance, I would put them on a CDN - that will be able to handle the traffic.
In your application storage (mongo), you can store a URL/location to the image and then use that when retrieving the image in your javascript code.
In addition, I would recommend using JavaScript to preload images before the user has scrolled to see them. There are some great tools out there that you can leverage for that.
On the off chance that you cannot change the storage and have to use Mongo the way it is, I would still look at preloading images with JavaScript.
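A tiny preloading sketch (the URL list is an assumption; in practice you would preload the next page of results before the user scrolls to it):
// warm the browser cache for images the user is about to see
function preload(urls){
  urls.forEach(function(url){
    var img = new Image();
    img.src = url; // the browser fetches and caches it; nothing is added to the DOM
  });
}

preload(['/images/next-1.jpg', '/images/next-2.jpg']);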
I was having the same issue, so I used MongoDB to store my images.
This is how I proceeded:
Define a logo schema:
var logoSchema = new mongoose.Schema({
    name: String,
    url: String
});
Compile the logo schema into a model:
var Logo = mongoose.model("Logo", logoSchema)
Creating a new logo:
var rectblack = new Logo({
    name: "rect&black",
    url: "public/image.jpg"
});
Saving it :
rectblack.save(function(err, logo){
    if(err){
        console.log("something went wrong");
    } else {
        console.log("logo saved");
        console.log(logo);
    }
});
Now, to use this logo or image, I just call it with an image tag (just the file name!):
<img src="/image.jpg">.
My program Precomp can be used to further compress already compressed file formats like GIF, PNG, PDF, ZIP and more. Roughly summarized, it does this by decompressing the compressed streams, recompressing them and storing the differences between the expected compressed stream and the actual compressed stream. As an example, this rotating earth picture from Wikipedia is compressed from 1429 KB to 755 KB. The process is lossless, so the original GIF file can be restored.
The algorithm for the GIF file format can be isolated and implemented relatively easily, so I was thinking about a proof-of-concept implementation in JavaScript. This would lead to the web server sending a compressed version of the GIF file (.pcf ending, essentially a bzip2-compressed file of the
GIF image contents) and the client decompressing the data, recompressing it to GIF, and displaying it. The following things would have to be done:
The website author would have to compress his GIF files using the standard version of Precomp and serve these instead of the GIF files, together with a JavaScript for the client-side recompression.
The client would decompress the bzip2-compressed file; this could be done using one of the existing bzip2 JavaScript implementations.
The client would recompress the image content into the original GIF file.
The process trades bandwidth for CPU usage on the client side.
Now my questions are the following:
Are there any general problems with the process of loading a different file and "converting" it to GIF?
What would you recommend to display before the client side finishes (image placeholder)?
What do I have to do to make sure the .pcf file is cached? Bandwidth savings are useless if it doesn't get cached.
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
Let's begin by answering your specific questions. Code example below.
Q&A
Are there any general problems with the process of loading a different file and "converting" it to GIF?
The main problem is complication. You are effectively writing a browser addon, like those for JPEG2000.
If you are writing real browser addons, each major browser does it differently, and they change addon formats occasionally, so you have to actively maintain them.
If you are writing a JS library, it will be easier to write and maintain, but it will be unprivileged and suffer from limitations such as cross-origin restrictions.
What would you recommend to display before the client side finishes (image placeholder)?
That depends on what your format can offer.
If you encode the image dimensions and a small thumbnail early, you can display an accurate placeholder pretty early.
It is your format, after all.
What do I have to do to make sure the .pcf file is cached? Bandwidth savings are useless if it doesn't get cached.
Nothing different from other files.
Configure the Expires and Cache-Control headers on the server side and the files will be cached.
An app cache manifest and prefetching can also be used.
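For example, a minimal Express sketch of those headers (the one-year max-age is an arbitrary choice for the example):
var express = require('express');
var app = express();

// serve .pcf files with long-lived caching headers
app.use('/pcf', express.static('pcf', {
  maxAge: '365d', // sets Cache-Control: max-age=31536000
  setHeaders: function(res){
    res.setHeader('Expires', new Date(Date.now() + 365 * 24 * 3600 * 1000).toUTCString());
  }
}));

app.listen(3000);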
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
This is tricky. When JavaScript is disabled, you can only replace elements, not attributes.
This means you cannot create an image somewhere that points to the .pcf file and ask the browser to rewrite the src attribute when JS is unavailable.
I think the best solution to support no-JS is to output the images with document.write, using noscript as a fallback:
<noscript>
<img src=demo.gif width=90>
</noscript><script>
loadPcf("demo.pcf","width=90")
</script>
(Some library or framework may make you consider <img src=demo.gif data-pcf=demo.pcf>.
This will not work for you, because browsers will preload 'demo.gif' before your script kicks in, causing additional data transfer.)
Alternatively, browser addons are unaffected by "disable JS" settings, so if you make addons instead then you don't need to worry about it.
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Perhaps. You can code a user interface and store the preference in a cookie or in localStorage.
Then you can detect the preference and switch the logic in server code (if you used a cookie) or in client code.
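A small sketch of the client-side half (the key name pcfMode and the two modes are assumptions for the example):
// remember the user's choice between saving bandwidth and saving CPU
function setPcfPreference(mode){ // 'bandwidth' or 'cpu'
  localStorage.setItem('pcfMode', mode);
}

function usePcf(){
  // default to the custom format unless the user asked to spare the CPU
  return localStorage.getItem('pcfMode') !== 'cpu';
}

// decide which loader to use on page load:
// if (usePcf()) { loadPcf("demo.pcf", "width=90"); }
// else          { document.write('<img src=demo.gif width=90>'); }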
If you are building addons, all browsers provide a reliable preference mechanism.
The problem is that, again, every browser does it differently.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
If you hand browsers a partial image, they may think the image is corrupted and refuse to show it.
In this case you have to implement your own GIF decoder AND encoder so that you can hand the browser a complete placeholder image, just to be safe.
(new) Can I decode image loaded from another site?
I must also repeat the warning that non-addon JS image decoding does not work with cross-origin images.
This means all .pcf files must be on the same server, same port, and same protocol as the site using them.
For example, you cannot share images across multiple sites or do optimisations like domain sharding.
Code Example
Here is a minimal example that creates an <img>, loads a GIF, halves its width, and puts it back into the <img>.
To support a placeholder or progressive loading, listen to onprogress instead of, or in addition to, onload.
<!DOCTYPE html><html><head><meta charset="UTF-8"><script>
function loadPcf( file, attr ) {
  var atr = attr || '', id = loadPcf.autoid = 1 + ~~loadPcf.autoid;
  document.write( '<img id=pcf'+id+' ' + atr + ' />' );
  var xhr = new XMLHttpRequest();
  xhr.responseType = 'arraybuffer'; // IE 10+ only, sorry.
  xhr.onload = function() { // I am loading a gif for the demo; you can load anything.
    var data = xhr.response, img = document.querySelector( '#pcf' + id );
    if ( ! img || ! ( data instanceof ArrayBuffer ) ) return; // note the parentheses: "!data instanceof ArrayBuffer" would always be false
    var buf = new DataView( data ), head = buf.getUint32(0), width = buf.getUint16(6,1);
    if ( head !== 1195984440 ) return console.log( 'Not a GIF: ' + file ); // 'GIF8' = 1195984440
    // Modify the data, the image width in this case, and push it to the <img> as a gif.
    buf.setInt16( 6, ~~(width/2), 1 );
    img.src = URL.createObjectURL( new Blob( [ buf.buffer ], { type: "image/gif" } ) );
  };
  xhr.open( 'GET', file );
  xhr.send();
}
</script>
<h1>Foo<noscript><img src=a.gif width=90></noscript><script>loadPcf("a.gif","width=90")</script>Bar</h1>
If you don't need <noscript> compatibility (and can accept that Facebook/Google+ won't see the images when the page is shared), you can put the .pcf file in the <img> src and use JS to handle them en masse, so that you don't need to call loadPcf for each image; this will also make the HTML much simpler.
How about <video>?
What you envisioned is mostly doable, in theory, but perhaps you should reconsider.
Judging from the questions you ask, it will be quite difficult for you to define and pull off your vision smoothly.
It is perhaps better to encode your animation in WebM and use <video> instead.
Better browser support: it works back to IE 9 - just add H.264 to make it a dual-format video. (You need IE 10+ to modify binary data.)
Size: movies have many, many options and tricks to minimise size, and what you learn can be reused in the future.
Progressive: <video> has had some technologies for adaptive streaming for a while, and hopefully they will stabilise soon.
JavaScript: <video> does not depend on JavaScript.
Future-proof: both WebM and H.264 will be supported by many programs long after you stop working on your special format.
Cost-effective: creating a low-bandwidth option using smaller or lower-quality media is easier and more reliable than a custom format. This is why Wikipedia and YouTube offer their media in different resolutions.
For non-animations, PNG can also be colour-indexed and 7z-optimised (keeping the PNG format).
An icon-sized indexed PNG is often smaller than the same GIF.
Or perhaps your vision (as described on the pcf website) is the capability to encode many different file types, not only GIF.
That would be more like supporting a new network protocol, and is likely beyond the scope of humble JavaScript. (For example, how are you going to handle PDF download or streaming?)
I have had trouble when researching or otherwise trying to figure out how (if it's even possible) to get binary image data using JavaScript/jQuery from an html input element of type file.
I'm using WebMatrix (C#), but it may not be necessary to know that, if the purposes of this question can be answered using JavaScript/jQuery alone.
I can take the image, save it in the database (as binary data), and then later show the pic on the page from that binary data after posting. This does, however, leave me without a pic preview before uploading, for which I am almost certain I must use AJAX.
Again, this may not even be possible, but as long as I can get the binary image data, I believe I can push it to the server with AJAX and process the image the same way I would if I were taking it from a database (note that I don't save the image files themselves using a GUID and all that, I just save the binary data).
If there is an easier way to show a pic preview using the input element, that would work fine too, of course, as the whole idea behind me trying to do this is to show a pic preview before they hit the submit button (or at least create that illusion).
**********UPDATE***********
I do not consider this a duplicate of another question because my real question is:
How can I get image data from an input type "file", with JavaScript/jQuery?
If I can just get the data (in the right format) back to the server, I should be able to work with it there, and then return it with AJAX (although, I am absolutely no AJAX expert).
According to the research I have done, there is NO WAY to get picture previews in all IE versions using only JavaScript (getting the full file path is seen by them as a potential security risk). I could ask my users to add the site to their trusted sites, but you don't usually ask users to tamper with those kinds of settings (not to mention that the quickest way to make your site look suspicious is to ask users to add it to their trusted sites list; that's like sending an email asking for a password: "Just trust me! I'm soooo safe!" :)
Short answer: use the jQuery Form plugin; it supports AJAX-like form submits even for file uploads.
tl;dr
Thumbnail preview on popular websites is usually done in a number of steps; basically the website does the following:
upload the RAW image
Resize and optimise the image for data storage
Generate a temporary link to that file (usually stored in a server maintained HTTP session)
Send it back to the user, to enable a 'preview'
Actually store the image after user confirms the image
A few bad solutions are:
Most modern browsers have options to enable script access to local files, but usually you don't ask your users to tinker with those low-level settings.
Earlier Internet Explorer (ah... yes, it's a shame) and ancient versions of modern browsers expose the full file path when you read the 'value' of the file input box, and you can use that value directly to generate an <img> tag. (Nowadays it is replaced by some c:/fakepath/... thing.)
Use Adobe Flash to mimic the file selection panel; it can properly read local files. But passing the data back into JavaScript is another topic...
Hope this helps. ;)
UPDATE
I actually came across a situation that requires a preview before uploading, so I'd like to also put it here. As far as I recall, there were no transitional versions of modern browsers that masked the real file path before implementing FileReader, but feel free to correct me if there were. This solution should cater to most browsers, as long as they are supported by jQuery.
// 1. Listen to the change event
$(':file').change(function() {
  // 2. Check if the browser exposes the File API (FileReader)
  if (!this.files) {
    // 2.1. Old enough to assume the value is a real path
    setPreview(this.value);
  }
  else {
    // 2.2. Read the file content.
    var reader = new FileReader();
    reader.onload = function() {
      setPreview(reader.result);
    };
    reader.readAsDataURL(this.files[0]); // the file argument was missing in the original
  }
});

function setPreview(url) {
  // Do preview things.
  $('.preview').attr('src', url);
}
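As a side note, a simpler sketch of the same preview in current browsers uses URL.createObjectURL, which avoids base64-encoding the whole file:
$(':file').change(function(){
  if (this.files && this.files.length) {
    $('.preview').attr('src', URL.createObjectURL(this.files[0]));
  }
});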
I'm developing a web app that works with video files. Specifically, I have the user 'select' their video file through a form input; I then construct a URL reference to that file and set the <video> source to that URL. This allows me to work with user-supplied content without having to upload the video, something that seems unnecessary and would lead to decreased performance.
Here's my very simple code for now:
// within a change event for a file input
var videoFile = e.currentTarget.files[0];
var fileURL = URL.createObjectURL(videoFile);
var videoNode = document.querySelector('video'); // grab the <video> element (however you reference it)
videoNode.src = fileURL;
This works great. The problem: it doesn't allow me to store a reference to this video between user sessions. I've tried saving the fileURL into a Mongo document and then later reloading that video file... and while this works sometimes, it often breaks, with no clear consistency.
Does anyone have a good solution for storing references to local files between user sessions? Do I have to use something like the HTML5 Filesystem API? localStorage?
I may have missed what you are getting at, but it sounds like you just need a cookie. http://www.w3schools.com/js/js_cookies.asp
You can save whatever file name you want in a simple cookie, and then the next time they visit the page you recall the video name they want.
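A small sketch of that idea (only the file name can be persisted this way; the user still has to re-select the file, and the cookie name lastVideo is an assumption for the example):
// remember the last selected video's name in a cookie
function rememberVideoName(name){
  document.cookie = 'lastVideo=' + encodeURIComponent(name) + '; max-age=' + (30 * 24 * 3600);
}

function recallVideoName(){
  var match = document.cookie.match(/(?:^|;\s*)lastVideo=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}

// in the file input's change handler:
// rememberVideoName(e.currentTarget.files[0].name);
// on the next visit:
// console.log('Last time you watched:', recallVideoName());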
Background
I'm working on an internal project that basically can generate a video on the client side, but since there are no JavaScript video encoders I'm aware of, I'm just exporting each frame individually. I need to avoid uploading to the server; this is all happening on the client side.
Implementation
I'm using this FileSaver.js (more specifically, Chrome's webkit FileSystem API) to save a large number of PNGs generated by an HTML5 canvas. I set Chrome to automatically download to a specific folder, so when I hit 'Save' it just takes off and saves something like 20 images per second. This works perfectly for my purposes.
If I could use JSZip to compress all these frames into one file before offering it to the client to save, I would, but I haven't even tried because there's just no way the browser will have enough memory to generate ~8000 640x480 PNGs and then compress them.
Problem
The problem is that after a certain number of images, every file downloaded is empty. Chrome even starts telling me in the download bar that the file is 0 bytes. Repeated on the same project with the same export settings, the empty saves start at exactly the same time. For example, with one project, I can save the first 5494 frames before it chokes. (I know this is an insanely large number, but I can't help that.) I tried setting a 10ms delay between saves, but that didn't have any effect. I haven't tried a larger delay because exporting takes a very long time as it is.
I checked the blob.size and it's never zero. I suspect it's exceeding some quota, but there are no errors thrown; it just silently fails to either write to the sandbox or copy the file to the user-specified location.
Questions
How can I detect these empty saves? Prevent them? Is there a better way to do this? Am I just screwed?
EDIT: Actually, after debugging FileSaver.js, I realized that it's not even using webkitRequestFileSystem; it returns when it gets here:
if (can_use_save_link) {
    object_url = get_object_url(blob);
    save_link.href = object_url;
    save_link.download = name;
    if (click(save_link)) {
        filesaver.readyState = filesaver.DONE;
        dispatch_all();
        return;
    }
}
So, it looks like it's not even using the FileSystem API, and therefore I have no idea how to empty the storage before it's full.
EDIT 2: I tried moving the "if (can_use_save_link)" block to inside the "writer.onwriteend" function, and changing it to this:
if (can_use_save_link) {
    save_link.href = file.toURL();
    save_link.download = name;
    click(save_link);
} else {
    target_view.location.href = file.toURL();
}
The result is that I'm able to save all 8260 files (about 1.5 GB total), since it's now using storage with a quota. Before, the files didn't show up in the HTML5 FileSystem because, I assume, there was no need to put them there when the anchor element supported the 'download' attribute.
I was also able to comment out the code that appends ".download" to the filename, and I had to provide an empty anonymous function as an argument to both instances of "file.remove()".
Use JSZip; it won't use too much memory if you disable compression (which is the default). To explicitly disable compression anyway, pass compression: "STORE" when calling zip.generate().
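A rough sketch of that approach, using JSZip's older synchronous generate() API as mentioned above (newer JSZip versions use generateAsync instead; the frame-naming scheme is an assumption for the example):
// collect canvas frames into a single uncompressed (STORE) zip
var zip = new JSZip();

function addFrame(index, canvas){
  // canvas.toDataURL gives "data:image/png;base64,...."; keep only the base64 payload
  var b64 = canvas.toDataURL('image/png').split(',')[1];
  zip.file('frame' + index + '.png', b64, { base64: true });
}

function downloadAll(){
  var blob = zip.generate({ type: 'blob', compression: 'STORE' }); // STORE = no compression
  saveAs(blob, 'frames.zip'); // FileSaver.js
}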
I ended up modifying FileSaver.js (see "EDIT 2" in the original post).