I am developing a web page containing JavaScript. This JS uses static string data (about 1-2 MB) which is stored in a flat file. I could compress it with gzip or any other algorithm to reduce the transfer load.
Would it be possible to fetch this binary file with Ajax and decompress it into a string (which I could split later) in the client browser? If yes, how can I achieve this? Does anyone have a code example?
Another library worth looking at is this one; although it has few examples, it has some thorough test cases that can be studied.
https://github.com/imaya/zlib.js
Here are some of the more complex test cases:
https://github.com/imaya/zlib.js/blob/master/test/browser-test.js
https://github.com/imaya/zlib.js/blob/master/test/browser-plain-test.js
The code example seems very compact. Just these two lines of code...
// compressed = Array.<number> or Uint8Array
var gunzip = new Zlib.Gunzip(compressed);
var plain = gunzip.decompress();
If you look here https://github.com/imaya/zlib.js/blob/master/bin/gunzip.min.js you see they have the packed js file you will need to include. You might need to include one or two of the others in https://github.com/imaya/zlib.js/blob/master/bin.
In any event, get those files into your page, then feed the Gunzip object your pre-gzipped data from the server and it will decompress as expected.
You will need to download the data and get it into an array yourself using other functions; I do not think the library includes that support.
So try the download examples from https://developer.mozilla.org/en-US/docs/DOM/XMLHttpRequest/Sending_and_Receiving_Binary_Data
function load_binary_resource(url) {
  var req = new XMLHttpRequest();
  req.open('GET', url, false); // synchronous, so it blocks until done
  // x-user-defined keeps the response bytes from being reinterpreted as text
  req.overrideMimeType('text/plain; charset=x-user-defined');
  req.send(null);
  if (req.status != 200) return '';
  return req.responseText;
}
// Each byte is now encoded in a 2-byte string character. Just AND it with 0xFF
// to get the actual byte, then feed that to Gunzip...
var filestream = load_binary_resource(url);
var abyte = filestream.charCodeAt(x) & 0xff; // throw away the high-order byte
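Putting the pieces together, here is a minimal sketch of the whole round trip. It is an assumption-laden example: "data.txt.gz" is a placeholder URL for the pre-gzipped flat file, gunzip.min.js is assumed to be included, and the server must not transparently re-encode the file.
var filestream = load_binary_resource('data.txt.gz'); // placeholder URL
var bytes = new Uint8Array(filestream.length);
for (var i = 0; i < filestream.length; i++) {
  bytes[i] = filestream.charCodeAt(i) & 0xff; // strip the high-order byte
}
var gunzip = new Zlib.Gunzip(bytes);
var plain = gunzip.decompress(); // Uint8Array of the original bytes
// Converting to a string this way is fine for modest sizes; multi-megabyte
// payloads may need chunked conversion to avoid argument-length limits.
var text = String.fromCharCode.apply(null, plain);
var parts = text.split('\n'); // split however the flat file is delimited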
=====================================
Also there is Node.js
The question is similar to
Simplest way to download and unzip files in Node.js cross-platform?
There is example code in the Node.js documentation. I do not know how much more specific it gets than that...
http://nodejs.org/api/zlib.html
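For instance, a minimal sketch along the lines of the documented API (compressedBuffer stands for a Buffer you have already read or received):
const zlib = require('zlib');

// Inflate a gzip-compressed Buffer; the callback receives the plain bytes.
zlib.gunzip(compressedBuffer, function (err, buffer) {
  if (err) throw err;
  console.log(buffer.toString());
});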
Just enable gzip compression on your Apache server and everything will be done automatically.
You will probably have to store the string as JSON in a .js file and enable gzip for the JS MIME type.
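For example, a minimal mod_deflate snippet along these lines (the exact MIME types depend on how you serve the file; treat this as a sketch, not a drop-in config):
<IfModule mod_deflate.c>
  # Compress script, JSON and plain-text responses on the fly
  AddOutputFilterByType DEFLATE application/javascript application/json text/plain
</IfModule>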
I remember that I used js-deflate for an offline JS app with large databases (needed due to the limitations of local storage) and it worked perfectly. It depends on js-base64.
Related
We have a scenario where we are sending multiple JSON files in a single response. These JSON files are stored as separate blobs in the backend (Aerospike blob store) and are fetched dynamically in response to a single request.
As long as we send these blobs uncompressed it works fine, i.e. we add a separator after each blob and use this separator to isolate each JSON blob, something like this -
{
// first json here
}
-- JSONEND--blobid1
{
// second json here
}
-- JSONEND--blobid2
and so on.
As long as the blobs come uncompressed from the source, i.e. the blob store, it works fine and we are able to isolate each JSON in JavaScript into a separate variable after parsing.
But our challenge is that these blobs are precompressed and saved into the blob store for various reasons (performance / reduced disk space), and we want to simply send these compressed blobs in one response to the client. Scripts on the client side would use these blobs and parse them into separate JSON object trees.
Is this possible? How? We need to support only Chrome and possibly Firefox.
If you are using gzip and you have a Node.js server, see here how to use gzjoin:
gzjoin.c
join gzip files without recalculating the crc or recompressing
- illustrates the use of the Z_BLOCK flush parameter for inflate()
- illustrates the use of crc32_combine()
https://github.com/nodejs/node/tree/master/deps/zlib/examples
Zlib documentation:
https://nodejs.org/api/zlib.html#zlib_zlib
This is the correct answer:
gzlog.c
gzlog.h
efficiently and robustly maintain a message log file in gzip format
- illustrates use of raw deflate, Z_PARTIAL_FLUSH, deflatePrime(),
and deflateSetDictionary()
- illustrates use of a gzip header extra field
OR
zpipe.c
reads and writes zlib streams from stdin to stdout
- illustrates the proper use of deflate() and inflate()
- deeply commented in zlib_how.html (see above)
OR
zran.c
index a zlib or gzip stream and randomly access it
- illustrates the use of Z_BLOCK, inflatePrime(), and
inflateSetDictionary() to provide random access
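As a hedged illustration of why joining works at all: the gzip format allows multiple members in one stream, so independently compressed blobs can be concatenated as-is and inflated in a single pass (recent Node versions decompress all members; note that member boundaries are not preserved, so a separator still has to live inside the compressed payloads):
const zlib = require('zlib');

// Two blobs compressed independently, as they would be in the blob store.
const blob1 = zlib.gzipSync(Buffer.from('{"a":1}\n--JSONEND--\n'));
const blob2 = zlib.gzipSync(Buffer.from('{"b":2}\n--JSONEND--\n'));

// Concatenated gzip members form a valid gzip stream...
const joined = Buffer.concat([blob1, blob2]);

// ...which inflates to the plaintexts back to back.
console.log(zlib.gunzipSync(joined).toString());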
We are implementing a client-side web application that communicates with the server exclusively via XMLHttpRequest (an AJAX engine).
The XHR responses usually are plain text with some XML in it, but in this case the server is sending compressed data as a .tgz file. We know for sure that the data the server is sending is correct, because if we use an HTTP command-line client such as curl, the file sent as the response is valid and contains the expected data.
However, when making an AJAX call and "blobbing" the response into a downloadable file, the file we obtain is larger than the correct one and is not recognized by the decompressor. It gives the following error:
gzip: stdin: not in gzip format
/bin/gtar: Child returned status 1
/bin/gtar: Error is not recoverable: exiting now
The code I'm using is the following:
$.ajax( /* request parameters elided */ ).done(function(data) {
  window.URL = window.webkitURL || window.URL;
  var contentType = 'application/x-compressed-tar';
  var file = new Blob([data], {type: contentType});
  var a = document.createElement('a'),
      ev = document.createEvent("MouseEvents");
  a.download = "browser_download2.tgz";
  a.href = window.URL.createObjectURL(file);
  ev.initMouseEvent("click", true, false, self, 0, 0, 0, 0, 0,
                    false, false, false, false, 0, null);
  a.dispatchEvent(ev);
});
I omitted the parameters used to make the AJAX call, but let's assume this is not the problem, as I correctly receive a response. I used this contentType because it is the same one reported by curl, but I have tried different ones. The code may look a little weird, so I'll break it down for you: I'm basically creating a link, attaching the download URL and the file name to it (a dirty way to be able to name the file), and finally virtually clicking the link.
I compared the correct tgz file and the one obtained via the browser in a hex viewer and observed repeated patterns in the corrupted one (EF, BF and BD, all along the file) that are not present in the correct one.
Therefore I think of some possible causes:
(a) The browser is adding extra characters, or maybe the response header is still in the downloaded file.
(b) The file has been partially decompressed, because when I inspect the request header I can see "Accept-Encoding: gzip, deflate", although I don't know if the browser (Firefox in my case) automatically decompresses data.
(c) The code that I'm using to blob the data is not correct, although it accomplished the aim well with a plain/text file on another occasion.
Edit
I also provide the links to the hex inspection:
(a) Corrupted file: http://en.webhex.net/view/278aac05820c34dfbdd2217c03970dd9/0
(b) (Presumably) correct file: http://en.webhex.net/view/4a01894b814c17d2ec71ba49ac48e683
I don't know if this thread will be helpful for somebody, but just in case: I figured out the cause of my problem and a possible solution.
The cause
By default, JavaScript strings store text in Unicode; they are not prepared to store binary data correctly, and this is why one easily sees wrongly interpreted characters (it also explains the repetitions of EF, BF and BD observed in the hex viewer: EF BF BD is the UTF-8 encoding of the Unicode replacement character that substitutes invalid bytes).
The solution
Recent browser versions implement so-called typed arrays: JavaScript arrays that can store data in different formats, including binary. If one specifies that the XMLHttpRequest response is in binary format, the data will be stored correctly and, when blobbed into a file, the file will not be corrupted. Check out the code I used:
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = 'arraybuffer';
Notice that the key point is to define the responseType as "arraybuffer". It may also be worth noting that I decided not to use jQuery for the AJAX anymore: it implements this feature poorly, and all my attempts to make it work through jQuery were in vain (the overrideMimeType approach described elsewhere didn't work in my case). Instead, plain old XMLHttpRequest worked nicely.
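For completeness, a hedged sketch of the whole working flow; the onload handler and Blob step are reconstructed from the question's code, not taken verbatim from this answer:
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = 'arraybuffer'; // keep the raw bytes, no text decoding
xhr.onload = function () {
  if (xhr.status !== 200) return;
  var file = new Blob([xhr.response], {type: 'application/x-compressed-tar'});
  var a = document.createElement('a');
  a.download = "browser_download2.tgz";
  a.href = window.URL.createObjectURL(file);
  a.click(); // simpler than synthesizing a MouseEvent
};
xhr.send();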
At work we are trying to upload files from a web page to a web service, using HTML5/JavaScript on the browser end and C# in the web service, but we are having some trouble with encoding of some sort.
As for the JavaScript, we get the file's binary data with help from a FileReader.
var file = ... // gets the file from an input
var fileReader = new FileReader();
fileReader.onload = dataRecieved;
fileReader.readAsBinaryString(file);

function dataRecieved() {
  // Here we do a normal jQuery ajax post with the file data (fileReader.result).
}
The reason we are posting the data manually, and not with help from XmlHttpRequest (or similar), is to make posting to our web service easier from different parts of the web page (it's wrapped in a function). But that doesn't seem to be the problem.
The code in the Web Service looks like this
[WebMethod]
public string SaveFileValueFieldValue(string value)
{
    System.Text.UnicodeEncoding encoder = new UnicodeEncoding();
    byte[] bytes = encoder.GetBytes(value);
    // Saves file from bytes here...
}
All works well, and the data seems normal, but when trying to open a file (an image, for example) it cannot be opened. Very basic text files seem to turn out okay. But if I upload a "binary" file like an image and then open both the original and the uploaded version in a plain text editor such as Notepad to see what differs, the difference seems to be only a few "invisible" characters and something that displays as a new line a few bytes in from the start.
So basically, the file seems to have just a few bytes encoded wrongly somewhere in the conversions.
I've also tried to create an int array in JavaScript from the data, and then again transformed it to a byte[] in the web service, with the exact same problem. If I try to convert with anything other than Unicode (like UTF-8), the data turns out completely different from the original, so I think I'm on the right track here, but with something slightly wrong.
The request itself is text, so binary data is lost if you send the wrong enc-type.
What you can do is encode the binary data to base64 and decode it on the other side.
To change the enc-type to multipart/mixed and set boundaries (just like an e-mail or something) you'd have to assemble the request yourself.
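A minimal sketch of the base64 route, assuming jQuery as in the question; the '/SaveFile' endpoint is a placeholder, and on the C# side Convert.FromBase64String(value) would replace the UnicodeEncoding step:
var fileReader = new FileReader();
fileReader.onload = function () {
  // result is "data:<mime>;base64,<payload>"; keep only the payload
  var base64 = fileReader.result.split(',')[1];
  $.post('/SaveFile', { value: base64 }); // placeholder endpoint
};
fileReader.readAsDataURL(file); // base64 survives a text-only request body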
I am new to HTML/Javascript, as well as coding in general so bear with me :). I am trying to create a "Spot the Difference" game in html5 using javascript. Everything is local (on my machine). I have two pictures, of the same size, one with differences. To generate data about the clickable fields, I have a java program that reads both of the images and outputs all of the positions in which pixels are different into a XML file. My plan was to then use this XML file with my javascript to define where the user could click. However, it appears (correct me if I'm wrong) that javascript cannot read local XML files for security reasons. I do not want to use an ActiveXObject because I plan on putting this onto mobile devices via phone gap or a webkit object. Does anyone have a better approach to this problem, or perhaps a way to read local XML files via javascript? Any help would be greatly appreciated, thanks.
If you are planning to put this onto smartphones (iOS and Android) and read local files, I have done similar things with JSON (yes, please don't use XML):
1. Convert your output to JSON.
2. Put it as part of your application package. For example, on Android I put it as part of the .apk in /appFiles/json.
3. Create a custom content provider that reads the local file. I create mine as content://, but you can create whatever scheme you want. You could leverage android.content.ContentProvider in order to achieve a custom URL scheme. iOS has its own way to create custom schemes as well. The implementation simply reads your local storage and returns the content.
4. To read it from JavaScript, I simply call Ajax with the custom scheme to get the JSON file. For example, content://myfile/theFile.json simply redirects me to a particular directory in local storage, with /myfile/theFile.json appended to it.
Below is the sample to override openFile() in the ContentProvider
public ParcelFileDescriptor openFile(Uri uri, String mode) {
    try {
        Context c = getContext();
        File cacheDir = c.getCacheDir();
        String uriString = uri.toString();
        String htmlFile = uriString.replaceAll(CUSTOM_CONTENT_URI, "");
        // Translate the uri into a pointer in the cache
        File htmlResource = new File(cacheDir.toString() + File.separator + htmlFile);
        File parentDir = htmlResource.getParentFile();
        if (!parentDir.exists()) {
            parentDir.mkdirs();
        }
        // get the file from one of the resources within the local storage
        InputStream in = WebViewContentProvider.class.getResourceAsStream(htmlFile);
        // copy the local storage to a cache file
        copy(in, new FileOutputStream(htmlResource));
        return ParcelFileDescriptor.open(htmlResource, ParcelFileDescriptor.MODE_READ_WRITE);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
I hope it helps
I would suggest modifying your Java program to output a JSON-formatted file instead of XML. JSON is native to JavaScript and will be much simpler for you to load.
As for actually loading the data, I'm not sure what the best option is, as you say you eventually want to run this on a mobile device. If you were just making a normal website, you could set up a web server using either Apache or IIS, depending on your OS, and put the files in the document root. Once you've done that you can load the JSON file via Ajax, as sketched below.
Not sure if this helps any.
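A minimal sketch of that Ajax load; "differences.json" is a placeholder name for the generated file, assumed to be served by the local web server:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'differences.json', true); // placeholder file name
xhr.onload = function () {
  // e.g. an array of {x, y} positions where the two images differ
  var differences = JSON.parse(xhr.responseText);
};
xhr.send();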
Since this is a local file, you can do this with jQuery
$.ajax({
  type: "GET",
  url: "your.xml",
  dataType: "xml",
  success: function(xml) {
    // do your thing
  }
});
http://api.jquery.com/jQuery.ajax/
I'm working on a Chrome app that uses the HTML5 Filesystem API, and allows users to import and sync files. One issue I'm having is that if the user tries to sync image files, the files get corrupted during the upload process to the server. I'm assuming it's because they're binary.
For uploading, I opted to just make an Ajax POST request (using MooTools) and put the file contents in the body of the request. I told MooTools to turn off urlEncoding and set the charset to "x-user-defined" (not sure if that's necessary; I just saw it on some websites).
Given that Chrome doesn't have support for xhr.sendAsBinary, does anyone have any sample code that would allow me to send binary files via Ajax?
FF's xhr.sendAsBinary() is not standard. XHR2 supports sending files (xhr.send(file)) and blobs (xhr.send(blob)):
function upload(blobOrFile) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/server', true);
  xhr.onload = function(e) { ... };

  // Listen to the upload progress.
  xhr.upload.onprogress = function(e) { ... };

  xhr.send(blobOrFile);
}
You can also send an ArrayBuffer.
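For example, a small usage sketch wiring this to a file input (the selector is an assumption; any reference to the chosen File works):
// Upload the chosen file as soon as the user picks it.
document.querySelector('input[type=file]').addEventListener('change', function () {
  upload(this.files[0]);
});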
If you're writing the server, then you can just transform the bytes that you read into plain text, send them to the server and then decode them back.
Here's the simplest way (not very efficient, but that's just to show the technique):
Translate each byte you read from the file into a string of two hexadecimal characters. If you read the byte 53 (in decimal), translate it into "35" (the hexadecimal representation of 53). Concatenate all these strings together and send the resulting string to the server.
On the server side, break the string at even positions and translate each pair of hex digits back into a byte.
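A minimal sketch of both directions in JavaScript (the decode half would normally live on the server in whatever language it uses; it is shown here in JS only to illustrate the technique):
// Encode: two hex characters per byte.
function bytesToHex(bytes) {
  var out = '';
  for (var i = 0; i < bytes.length; i++) {
    out += (bytes[i] < 16 ? '0' : '') + bytes[i].toString(16);
  }
  return out;
}

// Decode: split at even positions, parse each pair as a hex byte.
function hexToBytes(hex) {
  var bytes = new Uint8Array(hex.length / 2);
  for (var i = 0; i < hex.length; i += 2) {
    bytes[i / 2] = parseInt(hex.substr(i, 2), 16);
  }
  return bytes;
}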