Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
I'm writing a Web application that needs to store JSON data in a small, fixed-size server-side cache via AJAX (think: Opensocial quotas). I do not have control over the server.
I need to reduce the size of the stored data to stay within a server-side quota, and was hoping to be able to gzip the stringified JSON in the browser before sending it up to the server.
However, I cannot find much in the way of JavaScript implementations of Gzip. Any suggestions for how I can compress the data on the client side before sending it up?
Edit: There appears to be a better LZW solution that handles Unicode strings correctly at http://pieroxy.net/blog/pages/lz-string/index.html (thanks to pieroxy in the comments).
I don't know of any gzip implementations, but the jsolait library (the site seems to have gone away) has functions for LZW compression/decompression. The code is covered under the LGPL.
// LZW-compress a string
function lzw_encode(s) {
    // A null-prototype object avoids false hits on inherited keys
    // such as "constructor" when they occur in the input.
    var dict = Object.create(null);
    var data = (s + "").split("");
    var out = [];
    var currChar;
    var phrase = data[0];
    var code = 256;
    for (var i = 1; i < data.length; i++) {
        currChar = data[i];
        if (dict[phrase + currChar] != null) {
            phrase += currChar;
        } else {
            out.push(phrase.length > 1 ? dict[phrase] : phrase.charCodeAt(0));
            dict[phrase + currChar] = code;
            code++;
            phrase = currChar;
        }
    }
    out.push(phrase.length > 1 ? dict[phrase] : phrase.charCodeAt(0));
    for (var i = 0; i < out.length; i++) {
        out[i] = String.fromCharCode(out[i]);
    }
    return out.join("");
}
// Decompress an LZW-encoded string
function lzw_decode(s) {
    var dict = {};
    var data = (s + "").split("");
    var currChar = data[0];
    var oldPhrase = currChar;
    var out = [currChar];
    var code = 256;
    var phrase;
    for (var i = 1; i < data.length; i++) {
        var currCode = data[i].charCodeAt(0);
        if (currCode < 256) {
            phrase = data[i];
        } else {
            phrase = dict[currCode] ? dict[currCode] : (oldPhrase + currChar);
        }
        out.push(phrase);
        currChar = phrase.charAt(0);
        dict[code] = oldPhrase + currChar;
        code++;
        oldPhrase = phrase;
    }
    return out.join("");
}
I had the opposite problem: I did not need to gzip-encode data, but to decode gzipped data. Since I am running JavaScript code outside the browser, I needed to decode it in pure JavaScript. It took me some time, but I found that the JSXGraph library has a way to read gzipped data.
Here is where I found the library: http://jsxgraph.uni-bayreuth.de/wp/2009/09/29/jsxcompressor-zlib-compressed-javascript-code/
There is even a standalone utility that can do that, JSXCompressor, and the code is LGPL-licensed.
Just include the jsxcompressor.js file in your project and then you will be able to read base64-encoded gzipped data:
<!doctype html>
<html>
<head>
<title>Test gzip decompression page</title>
<script src="jsxcompressor.js"></script>
</head>
<body>
<script>
document.write(JXG.decompress('<?php
echo base64_encode(gzencode("Try not. Do, or do not. There is no try."));
?>'));
</script>
</body>
</html>
I understand it is not what you wanted but I still reply here because I suspect it will help some people.
We just released pako (https://github.com/nodeca/pako), a port of zlib to JavaScript. I think it's now the fastest JS implementation of deflate / inflate / gzip / ungzip, and it has a permissive MIT licence. Pako supports all zlib options, and its output is byte-for-byte identical to zlib's.
Example:
var inflate = require('pako/lib/inflate').inflate;
var text = inflate(zipped, {to: 'string'});
I ported an implementation of LZMA from a GWT module into standalone JavaScript. It's called LZMA-JS.
Here are some other compression algorithms implemented in JavaScript:
Huffman
LZ77
I have not tested it, but there's a JavaScript implementation of ZIP, called JSZip:
https://stuk.github.io/jszip/
I'd guess that a generic client-side JavaScript compression implementation would be a very expensive operation in terms of processing time, compared with the transfer time of a few more HTTP packets with uncompressed payload.
Have you done any testing that would give you an idea how much time there is to save? I mean, bandwidth savings can't be what you're after, can it?
Most browsers can decompress gzip on the fly. That might be a better option than a JavaScript implementation.
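A present-day footnote of my own: modern browsers (and Node.js 18+) also expose the standard CompressionStream API, so client-side gzip compression no longer needs a library at all. A sketch:

```javascript
// Gzip a string with the standard CompressionStream API (browsers, Node 18+).
async function gzipString(text) {
  const stream = new Blob([text]).stream().pipeThrough(new CompressionStream('gzip'));
  // Response is a convenient way to collect a ReadableStream into an ArrayBuffer.
  return new Uint8Array(await new Response(stream).arrayBuffer());
}

gzipString('{"hello":"world"}').then(bytes => {
  console.log(bytes[0].toString(16), bytes[1].toString(16)); // 1f 8b (gzip magic bytes)
});
```

This API did not exist when the question was asked, which is why the answers above reach for LZW and pako.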
You could embed a 1×1-pixel Java applet in the page and use it for compression.
It's not JavaScript, and clients will need a Java runtime, but it will do what you need.
Related
I'm working on a digital art project that involves gathering cookies from a set of websites that I visit. I'm dabbling in writing some code to help me with this but overall I'm just looking for the easiest/fastest way to gather all of the contents of the cookies dropped in a single visit into a text file for re-use later.
Right now - I'm using this script in a JavaScript bookmarklet which replaces the page I'm on with the contents of the cookies in an array (I'm later putting this array into a python script I wrote...).
The contents of the bookmarklet is below but the problem right now is it only returns the contents of the cookies from the single domain.
So for example - if I run this script on the NYTimes.com homepage I get approx 48 cookies dropped by the domain. But if I look in Chrome I see that all of the 3rd party tracking scripts have hundreds of cookies. How do I gather them all? Not just the NYtimes.com ones?
This is the current JavaScript code I'm running via a bookmarklet right now:
function get_cookies_array() {
    var cookies = {};
    if (document.cookie && document.cookie != '') {
        var split = document.cookie.split(';');
        for (var i = 0; i < split.length; i++) {
            var name_value = split[i].split("=");
            name_value[0] = name_value[0].replace(/^ /, '');
            cookies[decodeURIComponent(name_value[0])] = decodeURIComponent(name_value[1]);
        }
    }
    return cookies;
}
function quotationsanitize(cookie) {
    if (cookie.indexOf('"') === -1) {
        return cookie;
    } else {
        alert("found a quotation!");
        return encodeURIComponent(cookie);
    }
}

function sanitize(cookie) {
    if (cookie.indexOf(',') === -1) {
        return quotationsanitize(cookie);
    } else {
        alert("found a comma!");
        return quotationsanitize(encodeURIComponent(cookie));
    }
}
function appendCookies() {
    $("body").empty();
    var cookies = get_cookies_array();
    $("body").append("[");
    for (var name in cookies) {
        //$("body").append(name + " : " + cookies[name] + "<br />");
        var cookieinfo = sanitize(cookies[name]);
        $("body").append('"' + cookieinfo + '",<br />');
    }
    $("body").append("]");
}
var js = document.createElement('script');
js.src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js";
document.head.appendChild(js);
jqueryTimeout = window.setTimeout(appendCookies, 500);
I'm removing " and , from the output because I'm putting this data into an array in Python by copying and pasting it. I admit that it's a hack. If anyone has any better ideas I'm all ears!
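A possible simplification (my suggestion, not from the original bookmarklet): instead of hand-sanitizing quotes and commas, serialize the whole thing with JSON.stringify and parse the result in Python with json.loads; all escaping is then handled for you. A sketch with made-up sample data:

```javascript
// Emit valid JSON instead of stripping quotes/commas by hand.
var cookies = { session: 'abc"123', pref: 'a,b' }; // sample values with tricky characters

// JSON.stringify escapes quotes, commas, backslashes, etc. automatically.
var payload = JSON.stringify(Object.values(cookies));

console.log(payload); // ["abc\"123","a,b"]
```

On the Python side, `json.loads(payload)` gives you the list directly, with no manual cleanup.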
I'd write a simple little HTTP proxy. And then set your browser to use the proxy, and have it record all the cookies as they pass through.
There's a question about writing a simple proxy here, seriously simple python HTTP proxy?
which might get you started.
You'd need to extend it to read the headers and extract the cookies, but that's relatively easy, and if you're happy in Python, you'll find libraries that do most of what you want already. You would want to record the Referer header too, so you know which cookies came from which page request, but then you could record an entire browsing session quite simply.
I am trying out calculating MD5 in JavaScript. Looking at the "fastest MD5 implementation in JavaScript" post, the 'JKM' implementation is supposed to be one of the faster ones. I am using SparkMD5, which is based on the JKM implementation. However, the provided example (https://github.com/satazor/SparkMD5/blob/master/test/readme_example.html) takes about 10 seconds for a 13 MB file (~23 seconds with the debugger), while the same file takes only 0.03 seconds using the md5sum command on Linux. Are these results too slow for a JavaScript implementation, or is this poor performance expected?
It is expected.
First, I don't think I need to tell you that JAVASCRIPT IS SLOW. Yes, even with modern JIT optimization etc. JavaScript is still slow.
To show you that it is not your JS implementation's fault, I will do some comparisons with Node.js, so that the browser DOM stuff doesn't get in the way for benchmarking.
Test file generation:
$ dd if=/dev/zero of=file bs=6M count=1
(my server only has 512 MB of RAM and Node.js can't take anything higher than 6M)
Script:
//var md5 = require('crypto-js/md5')
var md5 = require('MD5')
//var md5 = require('spark-md5').hash
//var md5 = require('blueimp-md5').md5
require('fs').readFile('file', 'utf8', function(e, b) { // Using string here to be fair for all md5 engines
    console.log(md5(b))
})
(you can uncomment the contestants/benchmarkees)
The result is: (file reading overhead removed)
MD5: 5.250s - 0.072s = 5.178s
crypto-js/md5: 4.914s - 0.072s = 4.842s
Blueimp: 4.904s - 0.072s = 4.832s
MD5 with Node.js binary buffer instead of string: 1.143s - 0.063s = 1.080s
spark: 0.311s - 0.072s = 0.239s
md5sum: 0.023s - 0.003s = 0.020s
So no, spark-md5 is in reality not bad at all.
When looking at the example HTML page, I saw that they are using the incremental API. So I did another benchmark:
var md5 = require('spark-md5')
var md5obj = new md5()
var chunkNum = 0

require('fs').createReadStream('file')
    .on('data', function (b) {
        chunkNum++
        md5obj.append(b.toString())
    })
    .on('end', function () {
        console.log('total ' + chunkNum + ' chunks')
        console.log(md5obj.end())
    })
With 96 chunks, it is 0.313s.
So no, it is not the MD5 implementation's fault at all. Performance this poor is, to be honest, a little surprising, but not impossible either, given that you are running the code in a browser.
BTW, my server is a DigitalOcean VPS with SSD. The file reading overhead is about 0.072s:
require('fs').readFile('file', 'utf8', function() {})
while with native cat it's about 0.003s.
For MD5 with native Buffer, the overhead is about 0.063s:
require('fs').readFile('file', function() {})
I'm designing some code for a standard login in a web application, and I consider doing the password encryption on the client side to be better than having the Mongo server do it.
So, if I have this code ...
$("#btnSignUp").click(function () {
    var sign = {
        user: $("#signUser").val(),
        pass: $("#signPass").val()
    };
});
And then I'd do a post of sign with the password value already encrypted, how could I achieve this? Does JavaScript support AES?
You should submit the login page over HTTPS and make use of certificates to do the encryption. JavaScript is never a good idea for things that need security, since users can control and influence its execution using the developer tools built into most browsers.
There are many libraries available for JavaScript to encrypt your data. Check out http://crypto.stanford.edu/sjcl/
I'd recommend using AES encryption in your JavaScript code. See Javascript AES encryption for libraries and links. The trouble you'll have is picking a key that is only available on the client side. Perhaps you can prompt the user? Or hash together some client system information that's not sent to the server.
Kindly refer to this link:
http://point-at-infinity.org/jsaes/
AES_Init();

var block = new Array(16);
for (var i = 0; i < 16; i++)
    block[i] = 0x11 * i;

var key = new Array(32);
for (var i = 0; i < 32; i++)
    key[i] = i;

AES_ExpandKey(key);
AES_Encrypt(block, key);
AES_Done();
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I'm a member of a team with more than 20 developers. Each developer works on a separate module (something near 10 modules). In each module we might have at least 50 CRUD forms, which means that we currently have near 500 add buttons, save buttons, edit buttons, etc.
However, because we want to globalize our application, we need to be able to translate the texts in it. For example, everywhere, the word add should become ajouter for French users.
What we've done until now is that for each view in the UI or presentation layer, we have a dictionary of key/value translation pairs. While rendering the view, we translate the required texts and strings using this dictionary. However, with this approach we've ended up with something near 500 occurrences of add across 500 dictionaries. This means we've breached the DRY principle.
On the other hand, if we centralize common strings, like putting add in one place, and ask developers to use it everywhere, we encounter the problem of not being sure if a string is already defined in the centralized dictionary or not.
One other options might be to have no translation dictionary and use online translation services like Google Translate, Bing Translator, etc.
Another problem that we've encountered is that some developers under the stress of delivering the project on-time can't remember the translation keys. For example, for the text of the add button, a developer has used add while another developer has used new, etc.
What is the best practice, or most well-known method for globalization and localization of string resources of an application?
As far as I know, there's a good library called localeplanet for Localization and Internationalization in JavaScript. Furthermore, I think it's native and has no dependencies to other libraries (e.g. jQuery)
Here's the website of library: http://www.localeplanet.com/
Also look at this article by Mozilla, you can find very good method and algorithms for client-side translation: http://blog.mozilla.org/webdev/2011/10/06/i18njs-internationalize-your-javascript-with-a-little-help-from-json-and-the-server/
The common part of all those articles/libraries is that they use an i18n class and a get method (some also define a shorter alias such as _) for mapping a key to its value. Here the key means the string you want to translate and the value means the translated string.
Then you just need a JSON document to store the keys and values.
For example:
var _ = document.webL10n.get;
alert(_('test'));
And here the JSON:
{ "test": "blah blah" }
I believe using current popular libraries solutions is a good approach.
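To make the shared pattern concrete, here is a minimal sketch of my own (the names translations and i18n are hypothetical, not from any of the libraries above):

```javascript
// Minimal i18n lookup: one dictionary per language, falling back to the key itself.
var translations = {
  en: { add: 'Add', save: 'Save' },
  fr: { add: 'Ajouter', save: 'Enregistrer' }
};

function i18n(lang) {
  var dict = translations[lang] || {};
  // Return the translation, or the key unchanged when it is missing.
  return function (key) { return dict[key] !== undefined ? dict[key] : key; };
}

var _ = i18n('fr');
console.log(_('add'));     // Ajouter
console.log(_('missing')); // missing
```

The fall-back-to-key behavior is what keeps a missing translation from breaking the UI; real libraries add pluralization and interpolation on top of this core.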
When you’re faced with a problem to solve (and frankly, who isn’t
these days?), the basic strategy usually taken by we computer people
is called “divide and conquer.” It goes like this:
Conceptualize the specific problem as a set of smaller sub-problems.
Solve each smaller problem.
Combine the results into a solution of the specific problem.
But “divide and conquer” is not the only possible strategy. We can also take a more generalist approach:
Conceptualize the specific problem as a special case of a more general problem.
Somehow solve the general problem.
Adapt the solution of the general problem to the specific problem.
- Eric Lippert
I believe many solutions already exist for this problem in server-side languages such as ASP.Net/C#.
I've outlined some of the major aspects of the problem
Issue: We need to load data only for the desired language.
Solution: We save the data in a separate file for each language,
e.g. res.de.js, res.fr.js, res.en.js, res.js (for the default language).
Issue: Resource files for each page should be separated so we only get the data we need
Solution: We can use some tools that already exist like
https://github.com/rgrove/lazyload
Issue: We need a key/value pair structure to save our data.
Solution: I suggest a JavaScript object instead of a string/string pair.
We can then benefit from IntelliSense in an IDE.
Issue: General members should be stored in a public file, and all pages should access them.
Solution: For this purpose I make a folder in the root of the web application called Global_Resources, and in each subfolder a folder named Local_Resources to store that module's files.
Issue: Each subsystem/subfolder/module should be able to override the Global_Resources members in its own scope.
Solution: I use a resource file per module (and per page) that overrides the global entries after they are loaded.
Application Structure
root/
    Global_Resources/
        default.js
        default.fr.js
    UserManagementSystem/
        Local_Resources/
            default.js
            default.fr.js
            createUser.js
        Login.htm
        CreateUser.htm
The corresponding code for the files:
Global_Resources/default.js
var res = {
    Create: "Create",
    Update: "Save Changes",
    Delete: "Delete"
};
Global_Resources/default.fr.js
var res = {
    Create: "créer",
    Update: "Enregistrer les modifications",
    Delete: "effacer"
};
The resource file for the desired language should be loaded on the page from Global_Resources; this should be the first file loaded on every page.
UserManagementSystem/Local_Resources/default.js
res.Name = "Name";
res.UserName = "UserName";
res.Password = "Password";
UserManagementSystem/Local_Resources/default.fr.js
res.Name = "nom";
res.UserName = "Nom d'utilisateur";
res.Password = "Mot de passe";
UserManagementSystem/Local_Resources/createUser.js
// Override res.Create on Global_Resources/default.js
res.Create = "Create User";
UserManagementSystem/Local_Resources/createUser.fr.js
// Override Global_Resources/default.fr.js
res.Create = "Créer un utilisateur";
manager.js file (this file should be loaded last; the lookup logic below is pseudocode):
res.lang = "fr";
var globalResourcePath = "Global_Resources";
var resourceFiles = [];

var currentFile = globalResourcePath + "\\default." + res.lang + ".js";
if (!IsFileExist(currentFile))
    currentFile = globalResourcePath + "\\default.js";
if (!IsFileExist(currentFile)) throw new Exception("File Not Found");
resourceFiles.push(currentFile);

// Push one file per parent folder, from the root down to the current page
foreach (var folder in parent folders of current page)
{
    currentFile = folder + "\\Local_Resources\\default." + res.lang + ".js";
    if (!IsFileExist(currentFile))
        currentFile = folder + "\\Local_Resources\\default.js";
    if (!IsFileExist(currentFile)) throw new Exception("File Not Found");
    resourceFiles.push(currentFile);
}

for (var i = 0; i < resourceFiles.length; i++) { Load.js(resourceFiles[i]); }

// Get current page name
var pageNameWithoutExtension = "SomePage";

currentFile = currentPageFolderPath + pageNameWithoutExtension + "." + res.lang + ".js";
if (!IsFileExist(currentFile))
    currentFile = currentPageFolderPath + pageNameWithoutExtension + ".js";
if (!IsFileExist(currentFile)) throw new Exception("File Not Found");
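The language-fallback lookup in manager.js can be sketched as a small runnable function (my own illustration; the IsFileExist check is replaced by a membership test against a list of known files):

```javascript
// Resolve a resource file with language fallback: prefer "name.<lang>.js",
// fall back to "name.js", and fail loudly if neither exists.
function resolveResource(folder, name, lang, availableFiles) {
  var localized = folder + '/' + name + '.' + lang + '.js';
  var fallback = folder + '/' + name + '.js';
  if (availableFiles.indexOf(localized) !== -1) return localized;
  if (availableFiles.indexOf(fallback) !== -1) return fallback;
  throw new Error('File Not Found: ' + fallback);
}

var files = ['Global_Resources/default.js', 'Global_Resources/default.fr.js'];
console.log(resolveResource('Global_Resources', 'default', 'fr', files)); // Global_Resources/default.fr.js
console.log(resolveResource('Global_Resources', 'default', 'de', files)); // Global_Resources/default.js
```

Running this once per folder along the path from the root to the current page, then once for the page itself, reproduces the load order described above.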
Hope it helps :)
jQuery.i18n is a lightweight jQuery plugin for enabling internationalization in your web pages. It allows you to package custom resource strings in ‘.properties’ files, just like in Java Resource Bundles. It loads and parses resource bundles (.properties) based on provided language or language reported by browser.
To learn more, take a look at How to internationalize your pages using jQuery.
I'm trying to write a string to a socket (the socket is called "response"). Here is the code I have so far (I'm trying to implement a byte-caching proxy...):
var http = require('http');
var sys = require('sys');
var localHash = {};

http.createServer(function (request, response) {
    var proxy = http.createClient(80, request.headers['host']);
    var proxy_request = proxy.request(request.method, request.url, request.headers);

    proxy_request.addListener('response', function (proxy_response) {
        proxy_response.addListener('data', function (x) {
            var responseData = x.toString();
            var f = 50;
            var toTransmit = "";
            var p = 0;
            var N = responseData.length;
            if (N > f) {
                p = Math.floor(N / f);
                var hash = "";
                var chunk = "";
                for (var i = 0; i < p; i++) {
                    chunk = responseData.substr(f * i, f);
                    hash = DJBHash(chunk);
                    if (localHash[hash] == undefined) {
                        localHash[hash] = chunk;
                        toTransmit = toTransmit + chunk;
                    } else {
                        sys.puts("***hit" + chunk);
                        toTransmit = toTransmit + chunk; //"***EOH"+hash;
                    }
                }
                // remainder:
                chunk = responseData.substr(f * p);
                hash = DJBHash(chunk);
                if (localHash[hash] == undefined) {
                    localHash[hash] = chunk;
                    toTransmit = toTransmit + chunk;
                } else {
                    toTransmit = toTransmit + chunk; //"***EOH"+hash;
                }
            } else {
                toTransmit = responseData;
            }
            response.write(new Buffer(toTransmit)); /* error occurs here */
        });
        proxy_response.addListener('end', function () {
            response.end();
        });
        response.writeHead(proxy_response.statusCode, proxy_response.headers);
    });

    request.addListener('data', function (chunk) {
        sys.puts(chunk);
        proxy_request.write(chunk, 'binary');
    });
    request.addListener('end', function () {
        proxy_request.end();
    });
}).listen(8080);
function DJBHash(str) {
    var hash = 5381;
    for (var i = 0; i < str.length; i++) {
        hash = (((hash << 5) + hash) + str.charCodeAt(i)) & 0xffffffff;
    }
    if (hash < 0) { // masking with 0xffffffff can yield a negative 32-bit value
        hash = hash * -1;
    }
    return hash;
}
The trouble is, I keep getting a "content encoding error" in Firefox. It's as if the gzipped content isn't being transmitted properly. I've ensured that "toTransmit" is the same as "x" via console.log(x) and console.log(toTransmit).
It's worth noting that if I replace response.write(new Buffer(toTransmit)) with simply response.write(x), the proxy works as expected, but I need to do some payload analysis and then pass "toTransmit", not "x".
I've also tried to response.write(toTransmit) (i.e. without the conversion to buffer) and I keep getting the same content encoding error.
I'm really stuck. I thought I had this problem fixed by converting the string to a buffer as per another thread (http://stackoverflow.com/questions/7090510/nodejs-content-encoding-error), but I've re-opened a new thread to discuss this new problem I'm experiencing.
I should add that if I open a page via the proxy in Opera, I get gobblydeegook - it's as if the gzipped data gets corrupted.
Any insight greatly appreciated.
Many thanks in advance,
How about this?
var responseData = Buffer.from(x, 'utf8');
from: Convert string to buffer Node
Without digging very deep into your code, it seems to me that you might want to change
var responseData=x.toString();
to
var responseData=x.toString("binary");
and finally
response.write(new Buffer(toTransmit, "binary"));
From the docs:
Pure Javascript is Unicode friendly but not nice to binary data. When
dealing with TCP streams or the file system, it's necessary to handle
octet streams. Node has several strategies for manipulating, creating,
and consuming octet streams.
Raw data is stored in instances of the Buffer class. A Buffer is
similar to an array of integers but corresponds to a raw memory
allocation outside the V8 heap. A Buffer cannot be resized.
So, don't use strings for handling binary data.
Change proxy_request.write(chunk, 'binary'); to proxy_request.write(chunk);.
Omit var responseData=x.toString();, that's a bad idea.
Instead of doing substr on a string, use slice on a buffer.
Instead of doing + with strings, use the "concat" method from the buffertools.
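On current Node versions, the built-in Buffer.concat covers the same need as buffertools, so a sketch of my own of the accumulate-then-join pattern (the sample bytes are made up for illustration):

```javascript
// Accumulate binary chunks without ever converting to a string.
const chunks = [];

// In the proxy, each 'data' event would push its Buffer chunk:
chunks.push(Buffer.from([0x1f, 0x8b])); // e.g. the start of a gzip stream
chunks.push(Buffer.from([0x08, 0x00]));

// Join them losslessly at the end (or when forwarding downstream).
const whole = Buffer.concat(chunks);
console.log(whole); // <Buffer 1f 8b 08 00>
```

Because no chunk ever passes through a string encoding, gzip payloads survive the round trip intact, which is exactly what the `toString()` path above breaks.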
Actually, new Buffer() has been deprecated since Node.js v10+, so it is better to use Buffer.from(). Instead of:
response.write(new Buffer(toTransmit));
do:
response.write(Buffer.from(toTransmit, 'binary'));