How to properly process base64 into a stored server image - JavaScript

I'm working on an add-item page for a basic webshop; the shop owner can add item images via drag-and-drop or by browsing directly. When images are selected, I'm storing their base64 in an array. I'm now not too sure how best to deal with sending/storing these item images for proper use. After giving Google a bit of love, I'm thinking the image data could be sent as base64 and saved back to an image via something like file_put_contents('/item-images/randomNumber.jpg', base64_decode($base64));, then adding the item's image paths to its database data for later retrieval. Below is an untested example of how I currently imagine sending the image data; is something like this right?
$("#addItem").click(function() {
var imgData = "";
$.each(previewImagesArray, function(index, value) {
imgData += previewImagesArray[index].value;
});
$.post
(
"/pages/add-item.php",
"name="+$("#add-item-name").val()+
"&price="+$("#add-item-price").val()+
"&desc="+$("#add-item-desc").val()+
"&category="+$("#add-item-category :selected").text()+
"&images="+imgData
);
return false;
});
Really appreciate any help; I'm relatively new to web development.

As you are doing, so do I essentially: get the base64 from the browser, then post back, and store. A few comments...
First, HTTP POST has no mandatory size limitation, but practically your backend will limit the size of posted data (e.g. PHP's post_max_size, which defaults to 2M). Since you are sending base64, you significantly reduce the effective payload you can send: every three bytes of image become four bytes of base64 (roughly 33% overhead), so a 2M limit holds only about 1.5M of actual image. Either send multiple posts or increase your post size limit to fit your needs.
Second, as @popnoodles mentioned, using a random number will likely not be sufficient in the long term. Use either a database primary key or the tempnam family of functions to generate a unique identifier. I disagree with @popnoodles's implementation, however: it's quite possible for two different people to upload the same file. For example, my circa-2013 Winter Bash avatar on SO was taken from an online icon library; someone else could use that same icon. We would collide, so the MD5 is not sufficient in general, but in your use case it could be.
Finally, you probably will want to base64-decode, but give some thought to whether you need to. You could instead use a data: URL and inline the base64 image data directly. This has the same network issue as before: significantly more transfer is required to send it. But a data: URL works very well for lots of very small images (e.g. avatars) or for pages that will be cached for a very long time (especially if your users have reasonable data connections). Summary: consider the use case before presuming you need to base64-decode.
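If you do decode and store, here is a minimal sketch of the server side in Node.js (the question's backend is PHP, but the shape is the same; the item-images/ path and the base64Data variable are assumptions):

// Sketch: decode the base64 and write it under a unique name.
// If the client sends a full data URL, strip the "data:image/jpeg;base64," prefix first.
const fs = require("fs");
const crypto = require("crypto");

const name = crypto.randomBytes(16).toString("hex") + ".jpg"; // unique id, not just a random number
fs.writeFileSync("item-images/" + name, Buffer.from(base64Data, "base64"));
// Then store "item-images/" + name in the item's database row.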

Related

Client vs server image processing and display

Client vs server image processing.
We have a big system that runs on JSF (PrimeFaces) and EJB3, with some JavaScript logic (e.g. for using Firebase and the like).
So we ran into this problem: we have a servlet that serves some images. The backend takes a query, extracts a BLOB image from the DB, turns the BLOB into a byte array, stores it in the browser session memory, and the servlet serves it at url-OurSite/image/idImage. The frontend requests it with an <img> tag pointing at url/image/id, and that works fine so far.
Then we tried a new, direct way to show the image: we send the BLOB/raw data to the frontend, convert it to Base64 right there, and pass it to the HTML.
// Apache Commons Codec; encodeBase64String is a static method
String encoded = Base64.encodeBase64String(listEvidenciaDev.get(i).getImgReturns());
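On the frontend, that encoded string can go straight into an img src as a data URL; a one-line sketch (the element id and MIME type are assumptions):

// Sketch: "evidencia" and image/jpeg are placeholders for your actual element and type.
document.getElementById("evidencia").src = "data:image/jpeg;base64," + encoded;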
Both work, for almost all cases.
Note: We didn't try this before because we couldn't pass the RAW data through our layers of serialized objects and RMI. Now we can of course.
So now there are two ways.
Either we send the data to the servlet and expose it at some URL, which means the backend does all the work and the frontend just requests the URL,
or we send the data to the frontend, which does some magic to transform it into an img.
This brings up two questions.
If we send the frontend the raw object, or make it call a URL to fetch its image content, does the final user download the same amount of data? This is important because we have some remote branch offices with poor internet connections.
Is it worth passing the hard work to the frontend (converting the data) or to the backend (converting and publishing)?
EDIT:
My question is not about the BLOB (what I call raw data) being bigger than base64.
It is: is passing the data as an object and transforming it into a viewable picture heavier on internet bandwidth than passing a URL from our servlet to the actual image and loading it in HTML?
I chose to close this question because we did some tests and the bandwidth usage on the front end was the same.
Anyway, we make use of both solutions: if we don't want to burden the frontend with a lot of encoding, we set up a servlet for those images (which comes with more code and more server load). We look for the best optimization in each specific case.

How to read a large file (>1GB) in JavaScript?

I use Ajax $.get to read a file from a local server. However, the page crashed since my file was too large (>1GB). How can I solve this problem? Are there other solutions or alternatives?
$.get("./data/TRACKING_LOG/GENERAL_REPORT/" + file, function(data){
console.log(data);
});
A solution, assuming you don't have control over the report generator, would be to download the file in multiple smaller pieces using Range headers, process each piece, extract what's needed from it (I assume you'll be building some HTML components based on the report), and move on to the next piece.
You can tweak the piece size until you find a reasonable value: one that doesn't make the browser crash, but also doesn't result in a large number of HTTP requests.
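A rough sketch of that approach with fetch (the chunk size and the processing callback are assumptions, and the server must honor Range headers; without Range support the first request returns the whole file):

// Sketch: download a large file piece by piece via Range requests.
const CHUNK = 10 * 1024 * 1024; // 10 MB per request; tune as described above

async function readInPieces(url, handlePiece) {
    for (let start = 0; ; start += CHUNK) {
        const res = await fetch(url, {
            headers: { "Range": "bytes=" + start + "-" + (start + CHUNK - 1) }
        });
        if (!res.ok) break;                  // 206 Partial Content counts as ok
        const piece = await res.arrayBuffer();
        handlePiece(piece);                  // process and discard, don't accumulate
        if (piece.byteLength < CHUNK) break; // short read means this was the last piece
    }
}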
If you can control the report generator, you can configure it to generate multiple smaller reports instead of a huge one.
Split the file into many smaller files, or give specific users FTP access. I doubt you'd want too many people downloading a gig each off your web server.

Protecting raw JSON data from being copied

I'm creating an application with Node.js and MongoDB, rendering the views with Swig.
I have a database of business names, addresses and geo location data that is being plotted onto a Google map with pins.
I'd like to stop users from easily copying the raw JSON data using view source, Firebug, Chrome Dev tools etc.
I'm not after bank grade security, just want to make it hard enough for most users to give up.
I have two routes of delivering the JSON package to the browser:
1) Using Swig, passing the JSON package directly to the view. The problem is that a simple view-source will show the JSON.
2) Requesting the data with an AJAX call. In this scenario the data is easily accessible with Chrome Dev tools.
What are my options?
Base64-encode the string.
Then you can just base64-decode it in JavaScript.
That should make it sufficiently unreadable; no real security, though, of course.
Plus, it's fast.
You need to take care with UTF-8 characters (e.g. German äöüÄÖÜ, or French èéàâôû),
e.g. like this in JavaScript:
var str = "äöüÄÖÜçéèñ";
// encode: UTF-8 percent-escapes -> binary string -> base64
var b64 = window.btoa(unescape(encodeURIComponent(str)));
console.log(b64);
// decode: reverse the steps
var str2 = decodeURIComponent(escape(window.atob(b64)));
console.log(str2);
Example:
var markup = "<svg xmlns='http://www.w3.org/2000/svg'/>"; // some SVG markup string
var imgsrc = 'data:image/svg+xml;base64,' + btoa(unescape(encodeURIComponent(markup)));
var img = new Image(1, 1); // width and height values are optional params
img.src = imgsrc;
More secure variant:
Return the encrypted, base64-encoded JSON plus the decryption algorithm: server-side, base64-encode the JSON, encrypt it, bit-shift it a few bits, and return it via Ajax; on the page, de-bit-shift the string, pass it to eval (which gives you the decrypt function), decrypt the encrypted base64 string, and finally base64-decode that string.
But that takes only a few seconds more in the Chrome debug console to decrypt. I did decrypt such a thing once, I think on CodeCanyon, to get a "Tabs" script for free. (Don't bother with the tabs, they're bloatware; better to invest the time and do it yourself.) ;)
I think you can find it nowadays at http://www.slidetabs.com/, but I don't know if the "encryption" method is still in there.
Additionally, you can also escape the string in JavaScript, which then looks like this:
var _0xe91d=["\x28\x35\x28\x24\x29\x7B\x24\x2E\x32\x77\x2E
...
x5F\x63\x6F\x6E\x74\x5F\x64\x75\x72\x7C\x76\x5F\x74\x61\x62\x73\x5F\x61\x6C\x69\x67\x6E\x7C\x76\x5F\x74\x61\x62\x73\x5F\x64\x75\x72\x7C\x76\x5F\x73\x63\x72\x6F\x6C\x6C\x7C\x63\x6F\x6E\x74\x5F\x61\x6E\x69\x6D\x7C\x63\x6F\x6E\x74\x5F\x66\x78\x7C\x74\x61\x62\x5F\x66\x78\x7C\x72\x65\x70\x6C\x61\x63\x65\x7C\x62\x61\x6C\x69\x67\x6E\x7C\x61\x6C\x69\x67\x6E\x5F\x7C\x75\x6E\x6D\x6F\x75\x73\x65\x77\x68\x65\x65\x6C\x7C\x73\x77\x69\x74\x63\x68\x7C\x64\x65\x66\x61\x75\x6C\x74\x7C\x6A\x51\x75\x65\x72\x79","","\x66\x72\x6F\x6D\x43\x68\x61\x72\x43\x6F\x64\x65","\x72\x65\x70\x6C\x61\x63\x65","\x5C\x77\x2B","\x5C\x62","\x67"]
;eval(function (_0x173cx1,_0x173cx2,_0x173cx3,_0x173cx4,_0x173cx5,_0x173cx6){_0x173cx5=function (_0x173cx3){return (_0x173cx3<_0x173cx2?_0xe91d[4]:_0x173cx5(parseInt(_0x173cx3/_0x173cx2)))+((_0x173cx3=_0x173cx3%_0x173cx2)>35?String[_0xe91d[5]](_0x173cx3+29):_0x173cx3.toString(36));} ;if(!_0xe91d[4][_0xe91d[6]](/^/,String)){while(_0x173cx3--){_0x173cx6[_0x173cx5(_0x173cx3)]=_0x173cx4[_0x173cx3]||_0x173cx5(_0x173cx3);} ;_0x173cx4=[function (_0x173cx5){return _0x173cx6[_0x173cx5];} ];_0x173cx5=function (){return _0xe91d[7];} ;_0x173cx3=1;} ;while(_0x173cx3--){if(_0x173cx4[_0x173cx3]){_0x173cx1=_0x173cx1[_0xe91d[6]]( new RegExp(_0xe91d[8]+_0x173cx5(_0x173cx3)+_0xe91d[8],_0xe91d[9]),_0x173cx4[_0x173cx3]);} ;} ;return _0x173cx1;} (_0xe91d[0],62,284,_0xe91d[3][_0xe91d[2]](_0xe91d[1]),0,{}));
You can then bring the string back like:
"\x66\x72\x6F\x6D\x43\x68\x61\x72\x43\x6F\x64\x65".toString()
But for a moderate coder (like me), figuring out the system and decrypting all of this combined takes only approximately 15-30 minutes (an empirical finding, from the CodeCanyon attempt).
It's questionable whether such a thing is worth the expense of your time, because it takes somebody like me less time to reverse-engineer your "encryption" than it takes you to "code" it.
Note that if you put a string like "\x66\x72\x6F\x6D\x43\x68\x61\x72\x43\x6F\x64\x65" into your application, you may trigger false alarms in certain virus scanners (McAfee, Trend Micro, Norton, etc., the usual suspects).
You can also partition the JSON string into an array of JSON-string chunks, which makes it harder to decrypt (rotating the sequence in the array according to a certain system might help as well).
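A quick sketch of the chunking idea (jsonString and the chunk size of 32 are placeholders):

// Sketch: [\s\S] also matches newlines, unlike "."
var chunks = jsonString.match(/[\s\S]{1,32}/g);
// ship the chunks (perhaps with the order rotated), then reassemble:
var restored = chunks.join("");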
You can also break the string into an array of chars:
var x = ['a', 'b', 'c'];
You can then bring it back like
console.log(x.join(""));
You can also reverse the string, and put that into an array (amCharts does that).
Then you bring it back with
x.reverse().join("");
The last one might be tricky for UTF-8, as you need to correctly reverse strings like "Les misérables" (see also this and this).
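A small sketch of a code-point-aware reverse (it survives precomposed accented characters like "é", though combining marks can still break):

// Sketch: Array.from splits on code points rather than UTF-16 units.
var reversed = Array.from("Les misérables").reverse().join("");
console.log(reversed);                                // "selbarésim seL"
console.log(Array.from(reversed).reverse().join("")); // back to "Les misérables"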
Since the data will go on your client's computer, there is no other way to fully protect that data than... not sending it.
So, you could render some views on the server side and send them to the client but it may not be doable in your case.
The other way would be to send the data, but make it difficult for an unauthorized user to access it.
If your application uses a user database, you could generate a fixed key per user and encrypt sensitive data before sending it to the client; the client would then decrypt it with the same key, calculated on the client side.
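A minimal sketch of the client half of that idea using the Web Crypto API, assuming AES-GCM and a payload carrying a base64 iv and ciphertext (all names here are assumptions, and the key would be derived the same way on both sides):

// Sketch: decrypt a server-encrypted JSON payload with a per-user key.
async function decryptPayload(payload, rawKeyBytes) {
    const key = await crypto.subtle.importKey(
        "raw", rawKeyBytes, { name: "AES-GCM" }, false, ["decrypt"]);
    const iv = Uint8Array.from(atob(payload.iv), c => c.charCodeAt(0));
    const data = Uint8Array.from(atob(payload.ciphertext), c => c.charCodeAt(0));
    const plain = await crypto.subtle.decrypt({ name: "AES-GCM", iv: iv }, key, data);
    return JSON.parse(new TextDecoder().decode(plain));
}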
In addition, you can fine-tune which data you want to send or not send to each user.
If you want to protect the data between the moment the client receives it and the moment it goes into your map, I'm afraid that is not possible, as the map component you're using probably expects standard JSON data.
Anyway, it makes no sense to protect your data as it will be displayed on your map.
Everything that is passed to the client is unsafe. You can try obfuscating the data, but in the end, the place where you put it into the map will be accessible by just adding one console.log() line.
Another option (I'm just speculating, as I'm not really sure how Google Maps works): you might first send only the geolocations to the map, so you have pins on the map, and only after a click on a pin fetch the rest of the data (name, address) from your API. Google Maps should support something like an onclick handler.
Annoy a potential scraper/hacker with all the tricks everyone talks about in this thread and others. But as has been said many times, once the data is sent to the client, it's basically unprotected.
Perhaps your thinking should involve these things too:
-How to identify when someone is scraping (e.g. monitoring IPs, thresholds, user activity, etc.) and do something about it, or at least identify the culprit.
-Put copyrights and other identification on anything you can, to help other users see and understand that it's your data, not the scraper's. Look at what artists have been doing about this already, for a long time.
-Lay hidden traps in your data to help identify it as uniquely yours: traps only you know about and that a scraper wouldn't bother to look for or would be too lazy to check. If the scraper uses your data publicly too, this might help in a legal case, or at least let you publicly shame the offender.

Return Base64 encoded thumbnails in JSON or just return URLs to them?

I have built a page showing a number of items with their thumbnail images. The page has a pager, and the user can iterate over the results back and forth. Each iteration request is made using AJAX, and a JSON response is returned. Currently the JSON response also includes the Base64-encoded thumbnail image for each item; for 12 items the response is about 100 KB (gzip-encoded). Thumbnails are compressed as JPEG.
Is this a good way? Instead of embedding thumbnails in the JSON response, is it preferable to just include URLs to the thumbnail images?
Please answer in terms of performance (and maybe bandwidth), both client-side and server-side.
It depends.
In favour of embedding:
No extra requests.
No extra round-trip (related).
Small overhead (no URL, no extra request headers)
In favour of linking:
Smaller total transfer (unless your images are tiny).
Cacheable.
Basically, it depends on the size and reuse of the images. If the images are only likely to be seen once, there is no benefit to cacheability. If your entire JSON response can be cached, that is even better.
If the images are so small that the header overhead is a significant portion of the size of the image it might be worth embedding them.
Lastly, protocols such as HTTP/2 and SPDY multiplex all the data on a single connection and reduce much of the overhead (such as headers). So unless your images are truly tiny and unlikely ever to be cached, it is probably better to link to them when using these protocols (which you should be using).
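To make the two response shapes concrete, a small sketch (the field names are made up):

// Embedded: { "thumb": "/9j/4AAQ..." }       -> no extra request, not separately cacheable
// Linked:   { "thumbUrl": "/thumbs/123.jpg" } -> extra request, cacheable
var img = document.createElement("img");
img.src = item.thumb
    ? "data:image/jpeg;base64," + item.thumb
    : item.thumbUrl;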

In Node.js and Express, how should a client send a large (up to 3K) amount of data to the server?

The client will be sending my server a change log containing a list of commands and parameters; whether it's JSON or not is TBD.
This payload can be 3 or 4K, not likely more.
What is the standard approach to this requirement?
Should the client send JSON containing all of the changes as part of the request body?
Any recommendations? Lessons learned?
Just POST the data. 3-4 KB is nothing unless you're dealing with feature-phone WAP browsers in the middle of rural India, performance issues of the "OMG, I'm Google and care about every byte ever because of my zillion-user userbase" type, or something like that.
If you're really worried about payload size, you can gzip-base64 encode it before sending - but only do this if a) you really care about this (which is unlikely) and b) your payload is large enough that this saves you bandwidth. (gzip-base64'ing small payloads often increases their size, since there isn't enough data to get enough compression benefit to offset the 33% size increase from base64 encoding.)
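If you did go that route, a Node-side sketch of the encode step (changeLog is a placeholder):

// Sketch: gzip-then-base64 a payload, and check it actually got smaller.
const zlib = require("zlib");

const payload = JSON.stringify(changeLog); // changeLog is the assumed change-log object
const packed = zlib.gzipSync(payload).toString("base64");
// For a 3-4K payload, "packed" can easily be *larger* than the original;
// only send it if packed.length < payload.length.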
You can use a normal JSON POST to send 3-4K of data.
You should pay more attention to what you do with the data received on the server side: whether you buffer up all the data before you start processing it (storing it in a DB or elsewhere), or process it in chunks. If you are simply dumping the data into files on the server, you should create a Writable stream and pump the received chunks into it, as in the sketch below.
How are you going to process the received data on the server? Still, 3-4K is not really a worrying amount of data.
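For the dump-to-file case, a minimal Express sketch (the route and filename are assumptions, and it presumes no body-parsing middleware has already consumed the stream):

// Sketch: stream the raw request body to disk instead of buffering it all.
const fs = require("fs");

app.post("/changelog", function (req, res) {
    const out = fs.createWriteStream("/tmp/changelog.json");
    req.pipe(out); // req is a Readable stream of the incoming body
    out.on("finish", function () {
        res.sendStatus(200);
    });
});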
You can set the maximum upload size with
app.use(express.limit('5mb'));
if that's an issue.
But there shouldn't really be any limit on this by default, except the max buffer size (which I believe is 1GB).
It also sounds like this is something you could just post to the server with a regular POST request; in other words, use a form with a file input and upload the file the regular way, as 4KB isn't really a big file.
