I have a website where people can crop images in the browser before uploading them to the server.
My problem is that each image roughly doubles in size during the upload process. For example, if I take a 102 KB, 640x640 JPG, crop and save it at 600x600, and then load it from the website, the Chrome network tool (after I clear the cache) shows its size as 800 KB.
In order to save it to the DB I crop it with an HTML5 canvas using http://fengyuanchen.github.io/cropper/
On the server side I use Azure CloudBlockBlob.
Client:
VData = $("#uploadedImage").cropper('getCroppedCanvas', {
width: 600,
height: 600
}).toDataURL();
Server:
public PhotoInfo UploadPhoto(string image, string picCaption)
{
    // Strip the "data:image/...;base64," prefix and decode the payload.
    string base64 = image.Substring(image.IndexOf(',') + 1);
    byte[] imageByte = Convert.FromBase64String(base64);
    Stream stream = new MemoryStream(imageByte);

    // Retrieve storage account from connection string.
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings.Get("StorageConnectionString"));

    // Create the blob client.
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

    // Retrieve reference to a previously created container.
    CloudBlobContainer container = blobClient.GetContainerReference("photos");
    if (container.CreateIfNotExists())
    {
        // Configure container for public access.
        var permissions = container.GetPermissions();
        permissions.PublicAccess = BlobContainerPublicAccessType.Container;
        container.SetPermissions(permissions);
    }

    // Retrieve reference to a blob, giving it a .jpg extension to match the JPEG content.
    string uniqueBlobName = string.Format("{0}{1}", Guid.NewGuid().ToString(), ".jpg").ToLowerInvariant();
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);

    //blob.Properties.ContentType = image.ContentType;
    blob.Properties.ContentType = "image/jpeg"; // the registered JPEG MIME type

    stream.Position = 0;
    blob.UploadFromStream(stream);

    var photo = new PhotoInfo()
    {
        ImagePath = blob.Uri.ToString(),
        InvitationId = 0,
        Name = picCaption != null ? picCaption : ""
    };
    PhotoInfo uploadedPhoto = _boxItemProvider.SetPhoto(photo);
    return uploadedPhoto;
}
Any advice here?
Thanks.
Issue was solved!
Instead of calling toDataURL() with no parameters, I replaced it with toDataURL("image/jpeg", quality). The image size was then almost the same as the original at quality 1, and about 50% smaller at quality 0.75.
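For reference, a minimal sketch of the fix against the same cropper setup as above:
// Export the cropped canvas as JPEG instead of the PNG default.
// toDataURL() with no arguments encodes a (lossless) PNG, which is
// typically far larger than the original JPEG; passing "image/jpeg"
// plus a quality value between 0 and 1 keeps the size in check.
var VData = $("#uploadedImage").cropper('getCroppedCanvas', {
    width: 600,
    height: 600
}).toDataURL("image/jpeg", 0.75); // 1.0 ≈ original size, 0.75 ≈ half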
Related
I'm working on an (HTML) form for an internal tool. Users can fill out data about an issue and attach screenshots. The form is then submitted via AJAX to PHPMailer to be sent. The issue is with the screenshots. Due to system limitations I'm unable to have the users actually upload the files to the server.
Currently, I'm using the HTML5 FileReader to select the files. I then convert the image blobs to base64 and send them to PHPMailer, to be converted to attachments. This actually works great. However, I'm running into file size issues, specifically with a 1000px x 1000px (402 KB) test image. The resulting base64 string is over a million characters long and the request returns 413 (Request Entity Too Large).
I understand that base64 is not an efficient method for transferring large images, and I've seen various posts about retrieving / converting image blobs from a database. What I haven't been able to find is info on retrieving a local image blob and converting it to base64.
My image blob URLs look like this:
blob:http://example.com/18960927-e220-4417-93a4-edb608e5b8b3
Is it possible to grab this local image data in PHP and then convert it to base64?
I can't post much of the source, but the following will give you an idea of how I am using FileReader:
window.onload = function () {
    window.URL = window.URL || window.webkitURL;
    var fileSelect = document.getElementById("fileSelect"),
        fileElem = document.getElementById("fileElem"),
        fileList = document.getElementById("fileList");

    fileSelect.addEventListener("click", function (e) {
        if (fileElem) {
            fileElem.click();
        }
        e.preventDefault(); // prevent navigation to "#"
    }, false);
}

function handleFiles(files) {
    if (!files.length) {
        fileList.innerHTML = "<p>No files selected!</p>";
    } else {
        fileList.innerHTML = "";
        var list = document.createElement("ul");
        fileList.appendChild(list);
        for (var i = 0; i < files.length; i++) {
            if (files[i].size > 1000000) {
                alert(files[i].name + ' is too big. Please resize it and try again.');
            } else {
                var li = document.createElement("li");
                list.appendChild(li);

                var img = document.createElement("img");
                img.src = window.URL.createObjectURL(files[i]);
                img.height = 60;
                img.setAttribute("class", "shotzPrev");
                img.onload = function () {
                    window.URL.revokeObjectURL(this.src);
                }
                li.appendChild(img);

                var info = document.createElement("span");
                info.innerHTML = files[i].name + "<br>" + files[i].size + " bytes";
                li.appendChild(info);
            }
        }
    }
}
You can POST the File object to PHP directly:
fetch("/path/to/server", {
method: "POST"
body: files[i]
})
.then(response => console.log(response.ok))
.catch(err => console.error(err));
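If you need to send several screenshots plus the other form fields in one request, FormData avoids the ~33% base64 inflation entirely. A sketch, assuming a hypothetical /upload endpoint and field name:
// Build a multipart request from the selected files; the browser
// streams the raw bytes, so no base64 size penalty applies.
var formData = new FormData();
for (var i = 0; i < files.length; i++) {
    formData.append("screenshots[]", files[i], files[i].name);
}
fetch("/upload", { method: "POST", body: formData })
    .then(function (response) { console.log(response.ok); })
    .catch(function (err) { console.error(err); });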
I think it's an nginx error; please change the client_max_body_size value in the nginx.conf file.
For example:
# set client body size to 2M #
client_max_body_size 2M;
PHP configuration (optional)
Your PHP installation also puts limits on the upload file size. Edit php.ini and set the following directives:
;This sets the maximum amount of memory in bytes that a script is allowed to allocate
memory_limit = 32M
;The maximum size of an uploaded file.
upload_max_filesize = 2M
;Sets max size of post data allowed. This setting also affects file upload. To upload large files, this value must be larger than upload_max_filesize
post_max_size = 3M
I have seen many questions and solutions for this, but I am new to MongoDB and MEAN stack development. I want to know whether there is any way to store the image content itself, rather than the path to the image file, in MongoDB. All the solutions suggest storing the image as a buffer and then converting the buffer back to base64 for use in the source. I did that, but the resulting output resolves to the path of the image file rather than the image content. I am looking to save the image itself in the DB.
// saving image
var pic = {
    name: "profilePicture.png",
    img: "images/default-profile-pic.png",
    contentType: "image/png"
};
//schema
profilePic:{ name: String, img: Buffer, contentType: String }
//retrieving back
var base64 = "";
var bytes = new Uint8Array( profilePic.img.data );
var len = bytes.byteLength;
for (var i = 0; i < len; i++) {
base64 += String.fromCharCode( bytes[ i ] );
}
var proPic = "data:image/png;base64," + base64;
console.log(proPic);
//console output
data:image/png;base64,images/default-profile-pic.png
The output for proPic resolves to "data:image/png;base64,images/default-profile-pic.png".
A few links I referred to before posting this:
How to do Base64 encoding in node.js?
How to convert image into base64 string using javascript
The problem is simply that you never read and encode the picture; you are using the path itself as a string.
Serverside using Node
If you want to perform it on the server side with an image on the filesystem, you can use something along the following lines:
var fs = require('fs');
// read the file; readFileSync returns a Buffer, which encodes directly
var bitmap = fs.readFileSync("images/default-profile-pic.png");
var encImage = bitmap.toString('base64');
// saving image
var pic = {
    name: "profilePicture.png",
    img: encImage,
    contentType: "image/png"
};
....
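For completeness, a hedged sketch of actually persisting the image with Mongoose; the model name and connection URI are assumptions, not from the question. Storing the raw Buffer matches the img: Buffer field of the schema shown earlier, so no base64 step is needed for storage itself:
var fs = require('fs');
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/test'); // hypothetical URI

// Hypothetical model built from the schema shown in the question.
var Profile = mongoose.model('Profile', new mongoose.Schema({
    profilePic: { name: String, img: Buffer, contentType: String }
}));

var doc = new Profile({
    profilePic: {
        name: 'profilePicture.png',
        img: fs.readFileSync('images/default-profile-pic.png'), // raw bytes
        contentType: 'image/png'
    }
});
doc.save().then(function () {
    console.log('image bytes stored in MongoDB');
}).catch(console.error);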
Clientside
Again we need to load the image and encode it as base64. There is an answer about doing this on the client side here.
Using the first approach from there, the result would be something like the following:
function toDataUrl(url, callback, outputFormat) {
    var img = new Image();
    img.crossOrigin = 'Anonymous';
    img.onload = function () {
        var canvas = document.createElement('CANVAS');
        var ctx = canvas.getContext('2d');
        var dataURL;
        canvas.height = this.height;
        canvas.width = this.width;
        ctx.drawImage(this, 0, 0);
        dataURL = canvas.toDataURL(outputFormat);
        callback(dataURL);
        canvas = null;
    };
    img.src = url;
}
toDataUrl("images/default-profile-pic.png", function(encImage){
// saving image
var pic = {name : "profilePicture.png",
img : encImage,
contentType : "image/png"
};
//Proceed in the callback or use a method to pull out the data
....
});
The two links below saved me a lot of time. Using "ng-file-upload" makes life easy from there.
https://github.com/danialfarid/ng-file-upload#install
https://github.com/danialfarid/ng-file-upload
Below is what worked for me
//my html code
<div>
    <button type="file" ngf-select="onFileSelect($file)" ng-model="file" name="file"
            ngf-pattern="'image/*'" ngf-accept="'image/*'" ngf-max-size="15MB"
            class="btn btn-danger">
        Edit Profile Picture
    </button>
</div>
//my js function
function onFileSelect(file) {
    //var image = document.getElementById('uploadPic').files;
    image = file;
    if (image.type !== 'image/png' && image.type !== 'image/jpeg') {
        alert('Only PNG and JPEG are accepted.');
        return;
    }
    $scope.uploadInProgress = true;
    $scope.uploadProgress = 0;
    var reader = new window.FileReader();
    reader.readAsDataURL(image);
    reader.onloadend = function () {
        var base64data = reader.result; // declared locally to avoid an implicit global
        $scope.profile.profilePic = base64data;
        ProfileService.updateProfile($scope.profile).then(function (response) {
            $rootScope.profile = response;
            $scope.profilePicture = $rootScope.profile.profilePic;
        });
    }
}
// when reading from the server, just put the profile.profilePic value into src
src="data:image/png;base64,{base64 string}"
// profile schema
var ProfileSchema = new mongoose.Schema({
    userid: String,
    //profilePic:{ name: String, img: Buffer, contentType: String },
    profilePic: String
});
I wouldn't say this is the best solution, but it's a good place to start. It also limits you to files under 16 MB (the MongoDB document size limit), beyond which you would use "GridFS" in the above implementation. Initially the file is converted to a blob, and then I convert it to base64 format and add that to my profile's string variable.
Hope this helps someone save their time.
Here is my problem: I have created an image-collage function in JavaScript. (I started off with some code from this post, btw: dragging and resizing an image on html5 canvas.)
I have 10 canvas elements stacked on top of each other, and all parameters, including the 2D context, image data, positions, etc., for each canvas are held in instances of the function 'collage'.
This is working fine; I can manipulate each canvas separately (drag, resize, add frames, etc.). But now I want the user to be able to save the current work.
So I figure it might be possible to create a blob that contains all the object instances, and then save that blob as a file on disk.
This is the function collage (I also push each instance to the array collage.instances, to have numbered indexes):
function collage() {
    this.canvas_board = '';
    this.canvas = '';
    this.ctx = '';
    this.canvasOffset = '';
    this.offsetX = '';
    this.offsetY = '';
    this.startX = '';
    this.startY = '';
    this.imageX = '';
    this.imageY = '';
    this.mouseX = '';
    this.mouseY = '';
    this.imageWidth = '';
    this.imageHeight = '';
    this.imageRight = '';
    this.imageBottom = '';
    this.imgframe = '';
    this.frame = 'noframe';
    this.img = '';
    collage.instances.push(this);
}
collage.instances = [];
I tried with something like this:
var oMyBlob = new Blob(collage.instances, {type: 'multipart/form-data'});
But that doesn't work (it only contains about 300 bytes of data).
Anyone who can help? Or maybe suggest an alternative way to save the current collage work? It must, of course, be possible to open the blob and repopulate the object instances.
Or maybe I am making this a bit more complicated than it has to be... but I am stuck right now, so I would appreciate any hints.
You can extract each layer's image data to DataURLs and save the result as a json object.
Here's a quick demo: http://codepen.io/gunderson/pen/PqWZwW
The process literally takes each canvas and saves out its data for later import.
The use of jQuery here is for convenience:
$(".save-button").click(function() {
var imgData = JSON.stringify({
layers: getLayerData()
});
save(imgData, "myfile.json");
});
function save(filecontents, filename) {
try {
var $a = $("<a>").attr({
href: "data:application/json;," + filecontents,
download: filename
})[0].click();
return filecontents;
} catch (err) {
console.error(err);
return null;
}
}
function getLayerData() {
var imgData = [];
$(".layer").each(function(i, el) {
imgData.push(el.toDataURL("image/png"));
});
return imgData;
}
To restore, you can use a FileReader to read the contents of the JSON back into the browser, then create an <img> per layer, set each img.src to the corresponding dataURL from your JSON, and draw each <img> onto its canvas in the image's onload handler.
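A hedged sketch of that restore flow, assuming the file saved above and that each .layer element is a canvas:
// Rebuild the layers from a previously saved "myfile.json".
function restore(file) {
    var reader = new FileReader();
    reader.onload = function () {
        var layers = JSON.parse(reader.result).layers;
        layers.forEach(function (dataUrl, i) {
            var img = new Image();
            img.onload = function () {
                // Draw the decoded layer back onto its canvas.
                var canvas = document.getElementsByClassName("layer")[i];
                canvas.getContext("2d").drawImage(img, 0, 0);
            };
            img.src = dataUrl;
        });
    };
    reader.readAsText(file);
}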
Add a reference (src URL) for the image to the instance, then serialize the instance array as JSON and use f.ex. localStorage.
localStorage.setItem("currentwork", JSON.stringify(collage.instances));
Then to restore you would need to do:
var tmp = localStorage.getItem("currentwork");
collage.instances = tmp ? JSON.parse(tmp) : [];
You then need to iterate through the array and reload the images using proper onload handling. Finally re-render everything.
Can you store image data on the client? Yes, but it's not recommended: it takes a lot of space, if there is too much of it you will not be able to save all the data, the user may refuse to allow more storage space, and so on.
Keeping a link to the image on a server is a better approach for these things, IMO. But if you disagree, look into IndexedDB (or WebSQL, though it is deprecated) for local storage whose available space can grow. localStorage can only hold between 2.5 and 5 MB, i.e. no image data, and only strings. Each char takes two bytes, and data-URIs add 33% on top, so it will run empty pretty fast...
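If you do go the IndexedDB route, here is a minimal sketch; the database and store names are made up, and DOM references such as canvas and ctx must be stripped from the instances before serializing:
// Save the serialized collage into IndexedDB, which holds far more
// data than localStorage.
var open = indexedDB.open("collageDB", 1);
open.onupgradeneeded = function () {
    open.result.createObjectStore("work");
};
open.onsuccess = function () {
    var db = open.result;
    var tx = db.transaction("work", "readwrite");
    // Serialize first: live canvas/context references won't survive.
    tx.objectStore("work").put(JSON.stringify(collage.instances), "currentwork");
    tx.oncomplete = function () { db.close(); };
};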
An image is loaded onto the Fabric.js canvas using a file input and FileReader. We use scaleToWidth and scaleToHeight to make large photos fit the canvas.
When I choose a large 3.2 MB JPEG, the image is nicely resized to 1 MB, which is what we want. We then prepare the data for storage in a local IndexedDB:
canvas.toJSON(); // 4.2MB
canvas.toDataURL(); // 1MB
It seems that the toJSON method stores the original JPEG. Can we reduce the JPEG prior to serialization?
I'd prefer to serialize to JSON so we can use other excellent Fabric features in the future.
We have this figured out by:
loading the photo onto the Fabric.js canvas,
then exporting it (which reduces the image to the canvas dimensions),
reloading the reduced image data back onto the canvas, and
removing the original full-size photo.
Now the Fabric.js canvas data is nicely reduced for storage in the local IndexedDB:
// camera image // 3.2 MB
canvas.toJSON(); // 1 MB
canvas.toDataURL(); // 1 MB
var reader = new FileReader();
reader.onload = function (event) {
    var img = new Image();
    var opts = {};
    img.onload = function () {
        var imgInstance = new fabric.Image(img, opts);
        if (imgInstance.getWidth() > canvas.getWidth()) {
            imgInstance.scaleToWidth(canvas.getWidth());
        }
        if (imgInstance.getHeight() > canvas.getHeight()) {
            imgInstance.scaleToHeight(canvas.getHeight());
        }
        canvas.add(imgInstance);
        canvas.renderAll();
        img = null;

        /* now that the image is loaded, reduce its size
           so the original large image is not stored */
        /* assumes the photo is object 0; otherwise you need
           a function to find its index */
        var photoObjectIx = 0;
        var originalPhotoObject = canvas.getObjects()[photoObjectIx];
        var nimg = new Image();
        nimg.onload = function () {
            var imgInstance = new fabric.Image(nimg, { selectable: false });
            canvas.remove(originalPhotoObject);
            canvas.add(imgInstance);
            canvas.renderAll();
            nimg = null;
        };
        nimg.src = originalPhotoObject.toDataURL();
    }
    img.src = event.target.result;
}
reader.readAsDataURL(e.target.files[0]); // e comes from the file input's change event
I personally compress big JSON data and then decompress it on the server...
Deflate in JS - a nice script to gzdeflate (compress) JSON.
And then, in PHP:
<?php
// $HTTP_RAW_POST_DATA was removed in PHP 7; read the raw request body instead.
$json = gzinflate(file_get_contents('php://input'));
?>
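On the browser side, a modern, library-free option is the built-in CompressionStream API; its 'deflate-raw' output pairs with PHP's gzinflate(). A sketch, with a made-up endpoint name:
// Compress a JSON payload with raw DEFLATE before POSTing it.
async function postCompressedJson(url, data) {
    var bytes = new TextEncoder().encode(JSON.stringify(data));
    var stream = new Blob([bytes]).stream()
        .pipeThrough(new CompressionStream("deflate-raw"));
    var body = await new Response(stream).arrayBuffer();
    return fetch(url, { method: "POST", body: body });
}

postCompressedJson("/save.php", { layers: [] }); // hypothetical endpoint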
I'm developing with SDK 1.6.2.
My app uses the camera to capture and save an image to Titanium.Filesystem.applicationDataDirectory.
A tab of the app is supposed to display all stored images (details [path] stored in a database) tiled across the screen.
Saving the image:
var image = event.media // from camera success
var filename = new Date().getTime() + "-ea.jpg";
bgImage = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, filename);
bgImage.write(image);
Storing to database:
var db = Titanium.Database.open('photoDB');
try {
    db.execute('INSERT INTO stored (image) VALUES (?)', bgImage.nativePath);
} catch (e) {
    alert(e.message);
}
db.close();
Showing the Images:
var imageArray = [];
var images = [];
var db = Titanium.Database.open('photoDB');
var dbrows = db.execute('select id, date, image from stored order by date asc');
while (dbrows.isValidRow()) {
    imageArray.push({
        id: dbrows.fieldByName('id'), // keep the id so store_id below is defined
        image: dbrows.fieldByName('image')
    });
    dbrows.next();
}
dbrows.close();

// loop through and display the images
for (var i = 0; i < imageArray.length; i++) {
    var pushleft = (i % 4) * 75;          // tile from left
    var pushtop = Math.floor(i / 4) * 96; // determine how far from the top
    var file = Titanium.Filesystem.getFile(imageArray[i].image);
    images[i] = Ti.UI.createImageView({
        image: imageArray[i].image, // path to image in applicationDataDirectory
        width: 75,
        height: 96,
        left: pushleft + 5, // positioning
        top: pushtop + 5,   // positioning
        store_id: imageArray[i].id
    });
    win.add(images[i]);
}
Unfortunately, while the tiles work, the images just show the image placeholder, not the stored image.
I have PhoneDisk, so after building the app for my device I can view the application directory, and the images are being stored.
What am I missing?
Figured it out, thanks everyone for the help ;) (sarcasm; it's only been a day, I hold no grudges)
Here's what was wrong, in case anybody else has a similar issue.
// Create a file name
var filename = new Date().getTime() + "-ea.jpg";
// Create the file in the application directory
bgImage = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, filename);
// Write the image to the new file (image created from camera)
bgImage.write(image);
When I was storing the image location in the database, I was storing the full path, bgImage.nativePath. However, when I updated and rebuilt the app, the app's applicationDataDirectory changed, so the stored path became invalid.
So now I just store the filename in the database and rebuild the full path when I display the image, like this:
images[i] = Ti.UI.createImageView({
    image: Titanium.Filesystem.applicationDataDirectory + Ti.Filesystem.separator + imageArray[i].image, // full path rebuilt at display time
    width: 75,
    height: 96,
    left: pushleft + 5, // positioning
    top: pushtop + 5,   // positioning
    store_id: imageArray[i].id
});
Now, even with updates, it always points to the proper applicationDataDirectory.