I use an image gallery plugin called Unite Gallery in an ASP.NET MVC project to display images stored in a database. However, loading all of the images at once takes too long (each photo is 1 MB to 4 MB, and loading 500 photos on page load is not a good idea), and I think there must be a better approach, i.e. asynchronous or partial loading. Here are my Razor and controller code. I have looked at many pages on the web and in the docs, but there is no example on the documentation page. Do you have any idea?
<div id="gallery" style="display:none;">
@foreach (var item in Model)
{
    if (item.FileData != null)
    {
        var base64 = Convert.ToBase64String(item.FileData);
        var imgSrc = String.Format("data:image/gif;base64,{0}", base64);
        <img alt='Image'
             src="@imgSrc"
             data-image="@imgSrc"
             data-description='Image'>
    }
}
</div>
<script type="text/javascript">
jQuery(document).ready(function () {
var gallery = jQuery("#gallery").unitegallery({
gallery_theme: "default" //theme skin
});
gallery.on("item_change", function (num, data) {
if((num%15) == 0)
{
$.ajax({
url: '@Url.Action("List", "PhotoContest")',
data: { isAll: isAllChecked, page: num }, //??? I pass the page parameter???
success: function(data){
//call is successfully completed and we got result in data
//??? NO IDEA ???
},
error:function (xhr, ajaxOptions, thrownError){
//some errror, some show err msg to user and log the error
alert(xhr.responseText);
}
});
}
});
});
</script>
public ActionResult List(string query)
{
var model = db.Photo.Select(m => new PhotoViewModel
{
Id = m.Id,
Name = m.Name,
StatusId = m.StatusId,
SubmitDate = m.SubmitDate,
FileAttachments = m.FileAttachments,
SubmitNo = m.SubmitNo
})
.ToArray();
return View("List", model);
}
Update:
After trying to apply @Kris's approach, I encountered the error shown below. There is no fix or solution for this specific problem on the web. Any idea?
After page load, the images overflow the div and the gallery borders, as shown below:
Load 30 images at a time
Load the remaining ones in the item_change event available in Unite Gallery
Main Page
<div id="gallery" >
<input type="hidden" id="galleryPage" value="0"/>
@Html.Action("GalleryImages", new { PageNo = 0 }) @* first load: 30 items, PageNo = 0 *@
</div>
<script type="text/javascript">
var gallery;
jQuery(document).ready(function () {
gallery = jQuery("#gallery").unitegallery({
gallery_theme: "default" //theme skin
});
gallery.on("item_change", function (num, data) {
//when item loaded equals to 15 or 30 or multiples of 15 another 30 items get loaded
if((num%15) == 0)
{
$.ajax({
url: '@Url.Action("GalleryImages")' + "?PageNo=" + jQuery("#galleryPage").val(),
data: { isAll: isAllChecked },
success: function(data){
jQuery("#gallery").append(data); //partial view with new images
jQuery("#galleryPage").val(gallery.getNumItems()/30); //page number = total items / items per page
},
error:function (xhr, ajaxOptions, thrownError){
//some errror, some show err msg to user and log the error
alert(xhr.responseText);
}
});
}
});
});
</script>
Partial View (_galleryImages.cshtml)
@foreach (var item in Model)
{
    if (item.FileData != null)
    {
        var base64 = Convert.ToBase64String(item.FileData);
        var imgSrc = String.Format("data:image/gif;base64,{0}", base64);
        <img alt='Image'
             src="@imgSrc"
             data-image="@imgSrc"
             data-description='Image'>
    }
}
Controller
//Main View
public ActionResult List()
{
return View();
}
//Partial View
public ActionResult GalleryImages(int PageNo)
{
int PageSize = 30;
var model = db.Photo.Select(m => new PhotoViewModel
{
Id = m.Id,
Name = m.Name,
StatusId = m.StatusId,
SubmitDate = m.SubmitDate,
FileAttachments = m.FileAttachments,
SubmitNo = m.SubmitNo
}).OrderBy(m => m.Id).Skip(PageNo * PageSize).Take(PageSize).ToArray(); // EF requires an OrderBy before Skip/Take
return PartialView("_galleryImages", model);
}
I don't think there's just one issue here. First, loading 100s of images all at once is going to be slow no matter what you do. For this point @Kris probably has the right idea. I'm unfamiliar with this particular library, but if it provides a way to progressively load in a handful of images at a time, you should definitely make use of that.
The second issue is that you're using base64-encoded data URIs. Images encoded in this way are roughly a third larger than the actual image data itself. In other words, you're adding greater stress to an already stressed situation. Instead, you should have an action that returns the image data, something like:
public ActionResult GetImage(int id)
{
var image = db.Images.Find(id);
if (image == null)
{
return new HttpNotFoundResult();
}
return File(image.FileData, image.FileType);
}
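In the view, each gallery item would then reference that action instead of embedding a data URI. A minimal sketch, assuming the GetImage action above sits on your PhotoContest controller (adjust the names to your routing):
@foreach (var item in Model)
{
    <img alt='Image'
         src="@Url.Action("GetImage", "PhotoContest", new { id = item.Id })"
         data-image="@Url.Action("GetImage", "PhotoContest", new { id = item.Id })"
         data-description='Image'>
}
Each image is then fetched in its own request and can be cached by the browser, so the page payload itself stays small.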
You can get somewhat creative here by caching the database query result or even the entire response, but be advised that you'll need a significant amount of RAM, since you're going to be storing a lot of image data there.
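For example, response caching can be as simple as decorating the action with the standard OutputCache attribute. A sketch only; the one-hour duration is arbitrary:
// Cache each image response for an hour, varied by the id parameter.
[OutputCache(Duration = 3600, VaryByParam = "id")]
public ActionResult GetImage(int id)
{
    var image = db.Images.Find(id);
    if (image == null)
    {
        return new HttpNotFoundResult();
    }
    return File(image.FileData, image.FileType);
}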
Third, there's the issue of using a database to store image data in the first place. Just because databases provide a blob type, doesn't mean you need to use it. The most performant approach is always going to be serving directly from the filesystem, as IIS can serve static files directly, without involving all the ASP.NET machinery. Instead of storing the image data in your database, write the image to a filesystem location and then merely store the path to the image in the database. You could then optimize even further by actually offloading all the images to a CDN, ensuring super-fast delivery and taking virtually all load off your server.
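If you keep an action at all with that layout, it only needs to stream a file from disk. A sketch, assuming a FilePath column stores an application-relative path such as ~/Content/photos/123.jpg in place of the blob:
public ActionResult GetImage(int id)
{
    var image = db.Images.Find(id);
    if (image == null)
    {
        return new HttpNotFoundResult();
    }
    // FilePath is assumed to replace the FileData blob in the table
    return File(Server.MapPath(image.FilePath), image.FileType);
}
Better still, if the files sit under the web root (or on a CDN), render the stored path or URL straight into the img tag so the file is served without touching MVC at all.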
Related
My problem is simple and complex at the same time:
I'm trying to upload files using the jQuery fileUpload library with a Spring MVC controller on the server side, but my files are uploaded in one request each. What I want is to post them ALL in ONE request.
I have tried the singleFileUploads: false option, but it's not working; if I pass 4 files to upload, the method responsible for handling the post is called 4 times.
I'm using this form to post the files:
<div class="upload-file-div">
<b>Choose csv files to load</b> <input id="csvUpload" type="file"
name="files[]" data-url="adminpanel/uploadCsv" multiple />
</div>
<div id="dropzoneCsv">Or drop files here</div>
<div id="progressCsv">
<div class="bar" style="width: 0%;"></div>
</div>
jQuery method to upload the files:
$('#csvUpload').fileupload({
    singleFileUploads: false,
    dataType: 'json',
    done: function (e, data) {
        $("tr:has(td)").remove();
        $.each(data.result, function (index, file) {
            $("#uploaded-csv").append(
                $('<tr/>')
                    .append($('<td/>').text(file.fileName))
                    .append($('<td/>').text(file.fileSize))
                    .append($('<td/>').text(file.fileType))
                    .append($('<td/>').text(file.existsOnServer))
                    .append($('<td/>')));
        });
    },
    progressall: function (e, data) {
        var progress = parseInt(data.loaded / data.total * 100, 10);
        $('#progressCsv .bar').css('width', progress + '%');
    },
    dropZone: $('#dropzoneCsv')
});
And the handler method:
@RequestMapping(value = "/adminpanel/uploadCsv", method = RequestMethod.POST)
public @ResponseBody
List<FileMeta> uploadCsv(MultipartHttpServletRequest request, HttpServletResponse response) {
// 1. build an iterator
Iterator<String> itr = request.getFileNames();
MultipartFile mpf = null;
List<FileMeta> csvFiles = new ArrayList<FileMeta>();
// 2. get each file
while (itr.hasNext()) {
// 2.1 get next MultipartFile
mpf = request.getFile(itr.next());
System.out.println(mpf.getOriginalFilename() + " uploaded! ");
// 2.3 create new fileMeta
FileMeta fileMeta = new FileMeta();
fileMeta.setFileName(mpf.getOriginalFilename());
fileMeta.setFileSize(mpf.getSize() / 1024 + " Kb");
fileMeta.setFileType(mpf.getContentType());
try {
File dir = new File(Thread.currentThread().getContextClassLoader()
.getResource("").getPath()+"CSV");
if(!dir.exists()) dir.mkdirs();
File newCSV = new File(dir+"\\"+ mpf.getOriginalFilename());
if(!newCSV.exists())
{
mpf.transferTo(newCSV);
fileMeta.setExistsOnServer(false);
}
else fileMeta.setExistsOnServer(true);
} catch (IllegalStateException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
} catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
// 2.4 add to files
csvFiles.add(fileMeta);
}
return csvFiles;
}
I would really appreciate some assistance here :( The files should be uploaded in one request, which is why I'm using the iterator, but it's just not working.
PS: Sorry for my terrible English :(
You may want to try Programmatic file upload instead. The send method will ensure only one request is issued.
Basically, keep a filelist variable, update it every time the fileuploadadd callback fires, then pass this filelist to the send method.
For example:
$(document).ready(function(){
var filelist = [];
$('#form').fileupload({
... // your fileupload options
}).on("fileuploadadd", function(e, data){
for(var i = 0; i < data.files.length; i++){
filelist.push(data.files[i])
}
})
$('#button').click(function(){
$('#form').fileupload('send', {files:filelist});
})
})
It is inspired by this question.
The reason I found this useful is that even if you set singleFileUploads to false, multiple individual selections will still be sent as a separate request each, as the author himself said in this GitHub issue.
The script downloads historical stock prices from finance.yahoo.com. It loops through an array of tickers, creates links based on the ticker array and downloads the data associated with each ticker. However, some of the ticker symbols are no longer up to date, and as a result Yahoo delivers a 404 page instead of a CSV containing price information. That error page is then stored in a CSV and saved to my computer. To avoid downloading these files I am looking for the string 'Sorry, the page you requested was not found.', which is contained in each of Yahoo's error pages, as an indicator of a 404 page.
Behaviour of the code (output shown below the code):
The code runs through all tickers and downloads the stock price CSVs. This works fine for every ticker that still exists, but some ticker symbols are no longer used by Yahoo. For a ticker symbol that is no longer used, the program downloads a CSV containing Yahoo's 404 page. All files (including the good ones containing actual data) are downloaded into the directory c:\Users\W7ADM\stock-price-leecher\data2.
Problem:
I would like the code not to download the 404 page into a CSV file, but simply to do nothing in that case and move on to the next ticker symbol in the loop. I am trying to achieve this with the if-condition that looks for the string "Sorry, the page you requested was not found." that is displayed on Yahoo's 404 pages. In the end I hope to download the CSVs for all tickers that actually exist and save them to my HDD.
var url_begin = 'http://real-chart.finance.yahoo.com/table.csv?s=';
var url_end = '&a=00&b=1&c=1950&d=11&e=31&f=2050&g=d&ignore=.csv';
var tickers = [];
var link_created = '';
var casper = require('casper').create({
pageSettings: {
webSecurityEnabled: false
}
});
casper.start('http://www.google.de', function() {
tickers = ['ADS.DE', '0AM.DE']; //ADS.DE is retrievable, 0AM.DE is not
//loop through all ticker symbols
for (var i in tickers){
//create a link with the current ticker
link_created=url_begin + tickers[i] + url_end;
//check to see, if the created link returns a 404 page
this.open(link_created);
var content = this.getHTML();
//If is is a 404 page, jump to the next iteration of the for loop
if (content.indexOf('Sorry, the page you requested was not found.')>-1){
console.log('No Page found.');
continue; //At this point I want to jump to the next iteration of the loop.
}
//Otherwise download file to local hdd
else {
console.log(link_created);
this.download(link_created, 'stock-price-leecher\\data2\\'+tickers[i]+'.csv');
}
}
});
casper.run(function() {
this.echo('Ende...').exit();
});
The Output:
C:\Users\Win7ADM>casperjs spl_old.js
ADS.DE,0AM.DE
http://real-chart.finance.yahoo.com/table.csv?s=ADS.DE&a=00&b=1&c=1950&d=11&e=31
&f=2050&g=d&ignore=.csv
http://real-chart.finance.yahoo.com/table.csv?s=0AM.DE&a=00&b=1&c=1950&d=11&e=31
&f=2050&g=d&ignore=.csv
Ende...
C:\Users\Win7ADM>
casper.open is asynchronous (non-blocking), but you use it in a blocking fashion. You should use casper.thenOpen, which takes a callback that is invoked when the page has loaded, so you can work with the page there.
casper.start("http://example.com");
tickers = ['ADS.DE', '0AM.DE']; //ADS.DE is still retrievable, 0AM.DE is not
tickers.forEach(function(ticker){
var link_created = url_begin + ticker + url_end;
casper.thenOpen(link_created, function(){
console.log("open", link_created);
var content = this.getHTML();
if (content.indexOf('Sorry, the page you requested was not found.') > -1) {
console.log('No Page found.');
} else {
console.log("downloading...");
this.download(link_created, 'test14_'+ticker+'.csv');
}
});
});
casper.run();
Instead of using the thenOpen callback, you can also register for the page.resource.received event and download the resource only after checking its status. But then you wouldn't have access to ticker, so you either have to store it in a global variable or parse it from resource.url (a sketch of the latter follows the example below).
var i = 0;
casper.on("page.resource.received", function(resource){
if (resource.stage === "end" && resource.status === 200) {
this.download(resource.url, 'test14_'+(i++)+'.csv');
}
});
casper.start("http://example.com");
tickers = ['ADS.DE', '0AM.DE']; //ADS.DE is still retrievable, 0AM.DE is not
tickers.forEach(function(ticker){
var link_created = url_begin + ticker + url_end;
casper.thenOpen(link_created);
});
casper.run();
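If you want the file names to carry the ticker instead of a counter, one way (a sketch, assuming the ticker always appears in the s= query parameter of the Yahoo URL) is to parse it out of resource.url inside the handler:
casper.on("page.resource.received", function(resource){
    if (resource.stage === "end" && resource.status === 200) {
        // pull the ticker out of the "s=" query parameter of the request URL
        var match = /[?&]s=([^&]+)/.exec(resource.url);
        var ticker = match ? decodeURIComponent(match[1]) : "unknown";
        this.download(resource.url, 'test14_' + ticker + '.csv');
    }
});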
I don't think you should do this with open or thenOpen. It may work on PhantomJS, but probably not on SlimerJS.
I actually tried it, and your page is strange in that the download doesn't succeed. You can load some dummy page like example.com, download the CSV files yourself using __utils__.sendAJAX (it is only accessible from the page context) and write them using the fs module. You should only write the file based on the specific 404 error page text that you identified:
var fs = require('fs');

casper.start("http://example.com");
casper.then(function(){
tickers = ['ADS.DE', '0AM.DE']; //ADS.DE is still retrievable, 0AM.DE is not
tickers.forEach(function(ticker){
var link_created = url_begin + ticker + url_end;
var content = casper.evaluate(function(url){
return __utils__.sendAJAX(url, "GET");
}, link_created);
console.log("len: ", content.length);
if (content.indexOf('Sorry, the page you requested was not found.') > -1) {
console.log('No Page found.');
} else {
console.log("writing...");
fs.write('test14_'+ticker+'.csv', content);
}
});
});
casper.run();
I'm trying to upload documents generated client-side (images for the moment) with Dropzone.js.
// .../init.js
var myDropzone = new Dropzone("form.dropzone", {
autoProcessQueue: true
});
Once the client has finished his job, he just has to click a save button, which calls the save function:
// .../save.js
function save(myDocument) {
var file = {
name: 'Test',
src: myDocument,
};
console.log(myDocument);
myDropzone.addFile(file);
}
The console.log() correctly returns the content of my document:
data:image/png;base64,iVBORw0KGgoAAAANS...
At this point we can see the progress bar uploading the document in the drop zone, but the upload fails.
Here is my (standard Dropzone) HTML form:
<form action="/upload" enctype="multipart/form-data" method="post" class="dropzone">
<div class="dz-default dz-message"><span>Drop files here to upload</span></div>
<div class="fallback">
<input name="file" type="file" />
</div>
</form>
I have a Symfony2 controller that receives the POST request.
// Get request
$request = $this->get('request');
// Get files
$files = $request->files;
// Upload
$do = $service->upload($files);
Uploading from the dropzone (by drag and drop or click) works and the uploads are successful, but using the myDropzone.addFile() function gives me an empty object in my controller:
var_dump($files);
return
object(Symfony\Component\HttpFoundation\FileBag)#11 (1) {
["parameters":protected]=>
array(0) {
}
}
I think I am not setting up the file variable correctly in the save function.
I tried to create a JS image (var img = new Image() ...) but without any success.
Thanks for your help!
Finally I found a working solution without creating a canvas:
function dataURItoBlob(dataURI) {
'use strict'
var byteString,
mimestring
if(dataURI.split(',')[0].indexOf('base64') !== -1 ) {
byteString = atob(dataURI.split(',')[1])
} else {
byteString = decodeURI(dataURI.split(',')[1])
}
mimestring = dataURI.split(',')[0].split(':')[1].split(';')[0]
var content = new Array();
for (var i = 0; i < byteString.length; i++) {
content[i] = byteString.charCodeAt(i)
}
return new Blob([new Uint8Array(content)], {type: mimestring});
}
And the save function:
function save(dataURI) {
var blob = dataURItoBlob(dataURI);
myDropzone.addFile(blob);
}
The file appears correctly in dropzone and is successfully uploaded.
I still have to work on the filename (my document is named "blob").
The dataURItoBlob function was found here: Convert Data URI to File then append to FormData
[EDIT]: I finally wrote a function in Dropzone to do this job. You can check it here: https://github.com/CasperArGh/dropzone
And you can use it like this :
var dataURI = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAmAAAAKwCAYAAA...';
myDropzone.addBlob(dataURI, 'test.png');
I can't comment yet and wanted to send this to you.
I know you found your answer, but I had some trouble using your Git code and reshaped it a little for my needs. I am about 100% positive this will work for EVERY possible need to add a file or a blob or anything else and apply a name to it.
Dropzone.prototype.addFileName = function(file, name) {
file.name = name;
file.upload = {
progress: 0,
total: file.size,
bytesSent: 0
};
this.files.push(file);
file.status = Dropzone.ADDED;
this.emit("addedfile", file);
this._enqueueThumbnail(file);
return this.accept(file, (function(_this) {
return function(error) {
if (error) {
file.accepted = false;
_this._errorProcessing([file], error);
} else {
file.accepted = true;
if (_this.options.autoQueue) {
_this.enqueueFile(file);
}
}
return _this._updateMaxFilesReachedClass();
};
})(this));
};
Add this to dropzone.js (I put it just below the line with Dropzone.prototype.addFile = function(file) {, around line 1110).
It works like a charm and is used just the same as any other method: myDropzone.addFileName(file, name)!
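For example, combined with the dataURItoBlob helper and the dataURI value from the answer above (no new names here, just the two pieces put together):
// convert the generated data URI to a Blob and add it with a real file name
var blob = dataURItoBlob(dataURI);
myDropzone.addFileName(blob, 'test.png');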
Hopefully someone finds this useful and doesn't need to recreate it!
1) You say that: "Once the client have finished his job, he just have to click a save button which call the save function:"
This implies that you set autoProcessQueue: false and intercept the button click, to execute the saveFile() function.
$("#submitButton").click(function(e) {
// let the event not bubble up
e.preventDefault();
e.stopPropagation();
// process the uploads
myDropzone.processQueue();
});
2) check form action
Check that your form action="/upload" is routed correctly to your SF controller & action.
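In Symfony2 that usually boils down to a route definition like the following in routing.yml (the bundle and controller names here are placeholders, not taken from your project):
upload:
    path:     /upload
    defaults: { _controller: AcmeDemoBundle:Default:upload }
    methods:  [POST]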
3) Example Code
You may find a full example over at the official Wiki
4) OK, thanks to your comments, I understood the question better:
"How can I save my base64 image resource with Dropzone?"
You need to embed the image content as a value:
// base64 data
var dataURL = canvas.toDataURL();
// insert the data into the form
document.getElementById('image').value = canvas.toDataURL('image/png');
//or jQ: $('#img').val(canvas.toDataURL("image/png"));
// trigger submit of the form
document.forms["form1"].submit();
You might run into trouble doing this and might need to set the "origin-clean" flag to "true"; see http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#security-with-canvas-elements
how to save html5 canvas to server
I have the following code in my main Dancer app .pm:
package Deadlands;
use Dancer ':syntax';
use Dice;
our $VERSION = '0.1';
get '/' => sub {
my ($dieQty, $dieType, $bonus);
my $button = param('button');
$dieQty = param('dieQty');
$dieType = param('dieType');
$bonus = param('bonus');
if (defined $dieQty && defined $dieType) {
return Dice::Dice->new(dieType => $dieType, dieQty => $dieQty, bonus => $bonus)->getStandardResult();
}
template 'index';
};
true;
Here is my JavaScript:
$(document).ready(function() {
$('#standardRoll').click(function() {
$.get("/lib/Deadlands.pm", { button: '1', dieType: $("#dieType").val(), dieQty: $("#dieQty").val(), bonus: $("#bonus").val() }, processData);
function processData(data) {
$("#result").html(data);
}
});
});
I have a div in my web page called result that I want to be updated with the die roll result from Perl. Dancer keeps coming back with a 404 error in the command window when I push the submit button.
/lib/Deadlands.pm needs to be the URL of your route (probably / in this case), not the filesystem path of your Perl module.
Your AJAX request needs to point to a URL that actually exists, not a filename that has nothing to do with the web. Looks like $.get('/', ...) would do in this case.
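So the jQuery call would simply target the route instead of the module path; a sketch using the same parameters as above:
$(document).ready(function() {
    $('#standardRoll').click(function() {
        // hit the "/" route defined in Deadlands.pm, not the .pm file on disk
        $.get("/", {
            button: '1',
            dieType: $("#dieType").val(),
            dieQty: $("#dieQty").val(),
            bonus: $("#bonus").val()
        }, function(data) {
            $("#result").html(data);
        });
    });
});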
My ASPX code generates some HTML files where I just put links for paging, like
First |
Next |
Previous |
Last
Say the user is currently on the second page; when they press Next they move to the 3rd page, and so on.
Now the issue is that when the user clicks the Next button several times while the system is still generating, say, the 5th page, an error page is shown.
Is there any way to check from the HTML, via JavaScript, whether the file is present or not?
Kindly help me get out of this show-stopper issue.
You can use AJAX to check whether the file exists or not.
Using jQuery:
$.ajax({
url:'http://www.example.com/3.html',
error: function()
{
alert('file does not exists');
},
success: function()
{
alert('file exists');
}
});
Using plain JavaScript (note that the Image trick is asynchronous and really only works for image URLs):
function checkIfRemoteFileExists(fileToCheck)
{
    var tmp = new Image();
    // the load is asynchronous, so react in the onload/onerror handlers
    tmp.onload = function () {
        alert(fileToCheck + " is available");
    };
    tmp.onerror = function () {
        alert(fileToCheck + " is not available");
    };
    tmp.src = fileToCheck;
}
Now, to check whether the file exists or not, call the JS function like this:
checkIfRemoteFileExists('http://www.yoursite.com/abc.html');
I like to use this type of script:
function CheckFileExist(fileToCheck) {
return new Promise((resolve, reject) => {
fetch(fileToCheck).then(res => {
if (res.status == 404) resolve(false);
if (res.status == 200) resolve(true);
return res.text()
})
})
}
and use it like this (note that await needs an async function around it):
var exists = await CheckFileExist(link);
There is an issue with @Sibu's solution: it actually downloads the file (which can potentially be big, wasting traffic).
In 2021, one should not use jQuery in new projects;
native Promises and Fetch are the way to go today.
<output id="output"></output>
<script>
// create a non-cached HTTP HEAD request
const fileExists = file =>
fetch(file, {method: 'HEAD', cache: 'no-store'})
.then(r => r.status==200);
// check the file existence on the server
// and place the link asynchronously after the response is given
const placeNext = file => fileExists(file).then(yes => output.innerHTML =
(yes ? `Next` : '')
);
// place the "next" link in the output if "3.html" exists on the server
placeNext('3.html');
</script>