How can I store the contents of a folder that consists entirely of images?
I can do it this way.
var files = ["1.jpg","2.jpg",
"3.jpg","4.jpg","5.jpg","6.jpg",
"7.jpg","8.jpg","9.jpg","10.jpg",];
but I'd like to do it more dynamically, since I'm planning on having hundreds of images in the folder.
I'm thinking of something like this pseudocode:
var images
for i to number_of_items_in_folder
images[i]= image_from_folder
The images are coming from a local folder called images.
To do this on the client only, you will need a trick or a hardcoded value.
So
var images = [];
// you tell the script how many there are; here the files are numbered 0.jpg to 86.jpg
for (var i = 0; i < 87; i++) {
    images.push(i + ".jpg");
}
If you do NOT know how many you have, you need to preload them one by one until a load fails:
var images = [], i = 0;
(function loadNext() {
    var image = new Image();
    image.onerror = function() {
        // a numbered file is missing, so all available images have been collected
    };
    image.onload = function() {
        images.push(this.src);
        i++;
        loadNext();
    };
    image.src = i + ".jpg";
})();
and if they all have different names, then you need a server script that will build the array for you.
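For example, a minimal sketch of such a server script, assuming Node.js with Express and a local folder named images (the route name and port are just illustrative):
var express = require("express");
var fs = require("fs");
var app = express();

// return a JSON array of every file name found in the images folder
app.get("/images-list", function (req, res) {
    fs.readdir("./images", function (err, files) {
        if (err) return res.status(500).send(err.message);
        res.json(files); // e.g. ["1.jpg", "2.jpg", ...]
    });
});

app.listen(3000);
The client then fetches /images-list once and uses the returned array as its images variable.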
I am trying to make a website in which I include images via links. If an image is non-existent, or just 1 pixel wide, the website should display an alternative image instead. I am using Jade/Pug and JS.
I try to build the list of links beforehand, before rendering them on the website. That way I can just iterate through my link list in the .pug file afterwards.
So what I am trying to do is check, using JS only, whether an image has a certain size. If it does, I add the link to my list; if not, I add an alternative link.
This is the important part of my code in the app.js file:
app.get("/", function (req, res) {
//get all the books
var isbns = [];
var links = [];
dbClient.query("SELECT isbn FROM book WHERE id < 20", function (dbError, dbItemsResponse){
isbns = dbItemsResponse.rows;
var linkk = 0;
for(i=0;i<20;i++){
linkk = "http://covers.openlibrary.org/b/isbn/" + Object.values(isbns[i]) + "-M.jpg";
var wid = getMeta(linkk);
if(wid < 2){
links[i]="https://i.ibb.co/7JnVtcB/NoCover.png";
} else {
links[i]=linkk;
}
}
});
});
function getMeta(url){
var img = new Image(); //or document.createElement("img");
img.onload = function(){
alert(this.width);
};
img.src = url;
}
This gives me a ReferenceError: Image is not defined. If I try to use document.createElement("img"), it says "document is not defined".
How can I check on the server side whether an image exists? Or how can I use the Image constructor in my app.js file, without using my Jade/Pug/HTML file in any way?
Sorry if it's a dumb question, but I have been trying to figure this out for 20 hours non-stop and I can't get it to work.
You are mixing up Node.js and browser JavaScript. Your code is Node.js and therefore runs on the server side. window and Image are only available in the browser, i.e. on the client side.
For checking whether a file exists (on the server side only!) you can use fs => fs.access.
var fs = require("fs");
// Check if the file exists in the current directory.
fs.access(file, fs.constants.F_OK, (err) => {
console.log(`${file} ${err ? 'does not exist' : 'exists'}`);
});
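Applied to your cover lookup, a rough sketch of how fs.access could drive the fallback choice, assuming the cover images have already been downloaded to a local folder (the folder name, example file name, and fallback URL are only illustrative):
var fs = require("fs");

// resolve to the local cover if it exists, otherwise to the fallback image
function coverOrFallback(localPath, fallbackUrl, callback) {
    fs.access(localPath, fs.constants.F_OK, function (err) {
        callback(err ? fallbackUrl : localPath);
    });
}

// usage: pick a link for one book
coverOrFallback("./covers/0451526538-M.jpg",
                "https://i.ibb.co/7JnVtcB/NoCover.png",
                function (link) { console.log(link); });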
Note
There is no such thing as a "dumb question" :)
I'm trying to get a list of all image src URLs in a given web page using PhantomJS. My understanding is that this should be extremely easy, but for whatever reason I can't seem to make it work. Here is the code I currently have:
var page = require('webpage').create();
page.open('http://www.walmart.com');
page.onLoadFinished = function(){
    var images = page.evaluate(function(){
        return document.getElementsByTagName("img");
    });
    for(thing in a){
        console.log(thing.src);
    }
    phantom.exit();
}
I've also tried this:
var a = page.evaluate(function(){
    returnStuff = new Array;
    for(stuff in document.images){
        returnStuff.push(stuff);
    }
    return returnStuff;
});
And this:
var page = require('webpage').create();
page.open('http://www.walmart.com', function(status){
    var images = page.evaluate(function() {
        return document.images;
    });
    for(image in images){
        console.log(image.src);
    }
    phantom.exit();
});
I've also tried iterating through the images in the evaluate function and getting the .src property that way.
None of them return anything meaningful. If I return the length of document.images, there are 54 images on the page, but trying to iterate through them provides nothing useful.
Also, I've looked at the following other questions and wasn't able to use the information they provided: "How to scrape javascript injected image src and alt with phantom.js" and "How to download images from a site with phantomjs".
Again, I just want the source URL. I don't need the actual file itself. Thanks for any help.
UPDATE
I tried using
var a = page.evaluate(function(){
    returnStuff = new Array;
    for(stuff in document.images){
        returnStuff.push(stuff.getAttribute('src'));
    }
    return returnStuff;
});
It threw an error saying that stuff.getAttribute('src') returns undefined. Any idea why that would be?
@MayorMonty was almost there. Indeed, you cannot return an HTMLCollection.
As the docs say:
Note: The arguments and the return value to the evaluate function must be a simple primitive object. The rule of thumb: if it can be serialized via JSON, then it is fine.
Closures, functions, DOM nodes, etc. will not work!
Thus the working script is like this:
var page = require('webpage').create();
page.onLoadFinished = function(){
    var urls = page.evaluate(function(){
        var image_urls = [];
        var images = document.getElementsByTagName("img");
        for(var q = 0; q < images.length; q++){
            image_urls.push(images[q].src);
        }
        return image_urls;
    });
    console.log(urls.length);
    console.log(urls[0]);
    phantom.exit();
}
page.open('http://www.walmart.com');
I am not sure about a direct JavaScript method, but I recently used jQuery to scrape images and other data, so you can write a script in the style below after injecting jQuery:
var data = [];
$('.someclassORselector').each(function(){
    data.push($(this).attr('src'));
});
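In PhantomJS itself, jQuery can be injected with page.includeJs before evaluating; a rough sketch along those lines (the jQuery CDN URL is just an example):
var page = require('webpage').create();
page.open('http://www.walmart.com', function (status) {
    // inject jQuery into the loaded page, then collect the src attributes
    page.includeJs('https://code.jquery.com/jquery-1.12.4.min.js', function () {
        var srcs = page.evaluate(function () {
            var data = [];
            $('img').each(function () {
                data.push($(this).attr('src'));
            });
            return data;
        });
        console.log(srcs.join('\n'));
        phantom.exit();
    });
});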
document.images is not an Array of the nodes; it's an HTMLCollection, which is built on top of an Object. You can see this if you for..in it:
for (a in document.images) {
    console.log(a)
}
Prints:
0
1
2
3
length
item
namedItem
Now, there are several ways to solve this:
ES6 spread operator: this turns iterables into arrays. Use it like so: [...document.images]
Regular for loop, like an array. This takes advantage of the fact that the keys are labeled like an array:
for(var i = 0; i < document.images.length; i++) {
    document.images[i].src
}
And probably more, as well
Using solution 1 allows you to use Array functions on the result, like map or reduce, but it has less support (I don't know whether the version of JavaScript in PhantomJS supports it).
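For example, where spread syntax is available (as noted above, PhantomJS may not support it), the collection can be reduced to plain strings before it crosses the evaluate boundary:
var urls = page.evaluate(function () {
    // spread the HTMLCollection into a real array, then map each node to its src string
    return [...document.images].map(function (img) { return img.src; });
});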
I used the following code to get all images loaded on the page. The images loaded in the browser changed dimensions based on the viewport, and since I wanted the maximum dimensions, I used the largest viewport to get the actual image sizes.
Get all images on a page using PhantomJS
Download all image URLs on a page using PhantomJS
Even if the image is not in an img tag, the code below can retrieve the URL.
Even images referenced from styles like the following will be retrieved:
@media screen and (max-width:642px) {
    .masthead--M4.masthead--textshadow.masthead--gradient.color-reverse {
        background-image: url(assets/images/bg_studentcc-750x879-sm.jpg);
    }
}
@media screen and (min-width:643px) {
    .masthead--M4.masthead--textshadow.masthead--gradient.color-reverse {
        background-image: url(assets/images/bg_studentcc-1920x490.jpg);
    }
}
var page = require('webpage').create();
var url = "https://......";
page.settings.clearMemoryCaches = true;
page.clearMemoryCache();
page.viewportSize = {width: 1280, height: 1024};
page.open(url, function (status) {
    if(status=='success'){
        console.log('The entire page is loaded.............################');
    }
});
page.onResourceReceived = function(response) {
    if(response.stage == "start"){
        var respType = response.contentType;
        if(respType.indexOf("image")==0){
            console.log('Content-Type : ' + response.contentType)
            console.log('Status : ' + response.status)
            console.log('Image Size in byte : ' + response.bodySize)
            console.log('Image Url : ' + response.url)
            console.log('\n');
        }
    }
};
I am trying to change ownership of files in Google Drive, where my service account isn't the owner of the file.
function getDriveFiles(folder, path) {
    var folder = DriveApp.getFolderById("0B23heXhtbThYaWdxzMc");
    var path = "";
    var files = [];
    var fileIt = folder.getFiles();
    while ( fileIt.hasNext() ) {
        var f = fileIt.next();
        if (f.getOwner().getEmail() != "service@domain.com")
            files.push({owner: f.getOwner().getEmail(), id: f.getId()});
    }
    return files;
}
So my array looks like this:
var files = [
    {owner: "jens@domain.com", id: "CjOqUeno3Yjd4VEFrYzg"},
    {owner: "jens@domain.com", id: "CjOqUYWxWaVpTQ2tKc3c"},
    {owner: "jens@domain.com", id: "CjOqUNTltdHo2NllkcWs"},
    {owner: "jens@domain.com", id: "CjOqUVTRRMnU2Y0ZJYms"},
    {owner: "jack@domain.com", id: "CjOqUXzBmeE1CT0VLNkE"},
    {owner: "aurora@domain.com", id: "CjfKj4ur7YcttORkXTn8D2rvGE"},
    {owner: "aurora@domain.com", id: "CjOqUY3RFUFlScDBlclk"}
]
The next function I need to pass this array to is batchPermissionChange, which will batch-change the ownership to my service account. However, I would like it to run batchPermissionChange per user. So if, for example, jens@domain.com has 4 files, I don't want the batchPermissionChange function to be triggered 4 times; I would like it to be triggered once for jens@domain.com, including his four file IDs.
function batchPermissionChange(ownerEmail, filesArray){
    // Do batch job, Google... https://www.googleapis.com/batch
}
Question
How do I run the function batchPermissionChange(ownerEmail, filesArray) for, e.g., jens@domain.com with his 4 file IDs? I could loop through the array, like 'for each item in the array, run batchPermissionChange', but that would trigger the batch function 4 times for the user jens@domain.com.
When you retrieve the list of files, instead of pushing all the files into a single array, you can create a map of arrays, with the keys in the map being the owners, and the arrays being the list of files for that owner.
function getDriveFiles(folder, path) {
    var folder = DriveApp.getFolderById("0B23heXhtbThYaWdxzMc");
    var path = "";
    var files = {};
    var fileIt = folder.getFiles();
    while (fileIt.hasNext()) {
        var f = fileIt.next();
        var owner = f.getOwner().getEmail();
        var id = f.getId();
        if (owner != "service@domain.com") {
            // if the owner doesn't exist yet, add an empty array
            if (!files[owner]) {
                files[owner] = [];
            }
            // push the file to the owner's array
            files[owner].push(id);
        }
    }
    return files;
}
The files object will end up looking something like this:
{
    'jens@domain.com': ['CjOqUeno3Yjd4VEFrYzg', 'CjOqUYWxWaVpTQ2tKc3c', 'CjOqUNTltdHo2NllkcWs', 'CjOqUVTRRMnU2Y0ZJYms'],
    'jack@domain.com': ['CjOqUXzBmeE1CT0VLNkE'],
    'aurora@domain.com': ['CjfKj4ur7YcttORkXTn8D2rvGE', 'CjOqUY3RFUFlScDBlclk']
}
Now, in the area of your code where you want to call batchPermissionChange, do it like this:
for(var ownerEmail in files) {
    if(files.hasOwnProperty(ownerEmail)) {
        // NOTE: I'm not sure what the first parameter should be for this, but
        // this shows how to send the array of files for just one user at a
        // time, so change the first parameter if I got it wrong.
        batchPermissionChange(ownerEmail, files[ownerEmail]);
    }
}
I'm trying to load JPEG images frame by frame to create a sequence animation. I'm attempting to load them in a recursive loop using JavaScript. I need to load the images linearly to achieve progressive playback of the animation (start playback before all frames are loaded). I get a "Stack overflow at line: 0" error from IE due to the natural recursion of the function. (My real code loads in over 60 frames.)
Here is a basic example of how I'm doing this:
var paths = ['image1.jpg', 'image2.jpg', 'image3.jpg']; //real code has 60+ frames
var images = [];
var load_index = 0;
var load = function(){
    var img = new Image();
    img.onload = function(){
        if(load_index<=paths.length){
            load_index++;
            load();
        }else{
            alert('done loading');
        }
    }
    img.src = paths[load_index];
    images.push(img);
}
It seems I can avoid this error by using a setTimeout with an interval of 1 when calling the next step of the load. This seems to let IE "breathe" before loading the next image, but it dramatically decreases the speed at which the images load.
Any one know how to avoid this stack overflow error?
http://cappuccino.org/discuss/2010/03/01/internet-explorer-global-variables-and-stack-overflows/
The above link suggests that wrapping the function to remove it from the window object will help avoid stack overflow errors. But then I see strange behaviour: it only gets about 15 frames through the sequence and just dies.
Put simply, don't use a recursive function for this situation; there isn't any need:
var paths = ['image1.jpg', 'image2.jpg', 'image3.jpg'];
var images = [];
var loads = [];
/// all complete function, probably should be renamed to something with a
/// unique namespace unless you are working within your own function scope.
var done = function(){
    alert('all loaded');
}
var loaded = function(e,t){
    /// fallbacks for old IE
    e = e||window.event; t = e.target||e.srcElement;
    /// keep a list of the loaded images, you can delete this later if wanted
    loads.push( t.src );
    if ( loads.length >= paths.length ) {
        done();
    }
}
var load = function(){
    var i, l = paths.length, img;
    for( i=0; i<l; i++ ){
        images.push(img = new Image());
        img.onload = loaded;
        img.src = paths[i];
    }
}
In fact, as you are finding, the method you are using currently is quite intensive. Instead, the above version doesn't create a new function for each onload listener (saves memory) and will trigger off as many concurrent loads as your browser will allow (rather than waiting for each image load).
(the above has been manually typed and not tested, as of yet)
update
Ah, then it makes more sense as to why you are doing things this way :) In that case, your first approach using setTimeout would probably be the best solution (you should be able to use a timeout of 0). There is still room for rearranging things to see if you can avoid that, though. The following may get around the problem...
var paths = ['image1.jpg', 'image2.jpg', 'image3.jpg'];
var images = []; /// will contain the image objects
var loads = [];  /// will contain loaded paths
var buffer = []; /// temporary buffer
var done = function(){ alert('all loaded'); }
var loaded = function(e,t){
    e = e||window.event; t = e.target||e.srcElement; loads.push( t.src );
    /// you can do your "timing/start animation" calculation here...
    /// check to see if we are complete
    if ( loads.length >= paths.length ) { done(); }
    /// if not fire off the next image load
    else { next(); }
}
var next = function(){
    /// current will be the next image
    var current = buffer.shift();
    /// set the load going for the current image
    if ( current ) { current.img.src = current.path; }
}
var load = function(){
    var i, l = paths.length, img;
    for( i=0; i<l; i++ ){
        img = new Image();
        img.onload = loaded;
        /// build up a list of images and paths to load
        buffer.push({ img: img, path: paths[i] });
    }
    /// set everything going
    next();
}
If the above doesn't do it, another way of getting around the issue would be to step through your list of paths, one at a time, and append a string of image markup (that would render off-screen) to the DOM with its own onload="next()" handler... next() would be responsible for inserting the next image. By doing this it would hand off the triggering of the load and the subsequent load event to outside of your code, and should get around stacking calls.
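A rough sketch of that last idea, assuming a container element with id "stage" and reusing the paths array from above (the off-screen styling is only illustrative):
var idx = 0;
function next() {
    if (idx >= paths.length) { alert('done loading'); return; }
    // append an off-screen <img> whose own onload attribute schedules the next one
    document.getElementById('stage').insertAdjacentHTML(
        'beforeend',
        '<img src="' + paths[idx++] + '" style="position:absolute;left:-9999px" onload="next()">'
    );
}
next();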
I have a bunch of text files on the server side with file names 0.txt, 1.txt, 2.txt, 3.txt and so forth. I want to read the content of all the files and store them in an array A, such that A[0] has 0.txt's content, A[1] has 1.txt's, ...
How can I do it in JavaScript / jQuery?
Originally, I used $.ajax({}) in jQuery to load those text files, but it didn't work because of the asynchronous nature of Ajax. I tried setting $.ajax({...async: false...}), but it was very slow -- I have ~1000 10KB files to read in total.
From your question, you want to load the txt files from the server to the client:
var done = 0, resultArr = [], numberOfFiles = 1000;
function getHandler(idx) {
    return function(data) {
        resultArr[idx] = data;
        done++;
        if (done === numberOfFiles) {
            // tell your other part all files are loaded
        }
    }
}
for (var i = 0; i < numberOfFiles; i++) {
    $.ajax(i + ".txt").done(getHandler(i));
}
jsFiddle: http://jsfiddle.net/LtQYF/1/
What you're looking for is the File API introduced in HTML5 (working draft).
The examples in this article will point you in the right direction. Remember that the end user will have to initiate the action and manually select the files - otherwise it would be a terrible idea privacy- and security-wise.
Update:
I found (yet again) the Mozilla docs to be more readable! Quick HTML mockup:
<input type="file" id="files" name="files[]" onchange="loadTextFile();" multiple/>
<button id="test"onclick="test();">What have we read?</button>
...and the JavaScript:
var testArray = []; //your array

function loadTextFile() {
    //this would be tidier with jQuery, but whatever
    var _filesContainer = document.getElementById("files");
    //check how many files have been selected and iterate over them
    var _filesCount = _filesContainer.files.length;
    for (var i = 0; i < _filesCount; i++) {
        //create new FileReader instance; I have to read more into it
        //but I was unable to just recycle one
        var oFReader = new FileReader();
        //when the file has been "read" by the FileReader locally
        //log its contents and push them into an array
        oFReader.onload = function(oFREvent) {
            console.log(oFREvent.target.result);
            testArray.push(oFREvent.target.result);
        };
        //actually initiate the read
        oFReader.readAsText(_filesContainer.files[i]);
    }
}

//sanity check
function test() {
    for (var i = 0; i < testArray.length; i++) {
        console.warn(testArray[i]);
    }
}
Fiddled
You don't give much information, so it's hard to give a specific answer. However, it is my opinion that "it doesn't work because of the asynchronous nature of ajax" is not correct. You should be able to allocate an array of the correct size and use a callback for each file. You might try other options, such as bundling the files on the server and unbundling them on the client, etc. The designs that address the problem well depend on specifics that you have not provided.
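For example, a rough sketch of the bundling idea, assuming a hypothetical server endpoint all.json that returns every file's content in one JSON array (the endpoint name is made up for illustration):
$.getJSON("all.json").done(function (bundle) {
    // bundle is assumed to be ["contents of 0.txt", "contents of 1.txt", ...]
    var A = bundle;
    console.log(A.length + " files loaded in a single request");
});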