Firebase storage failing silently? - javascript

I'm trying to get the download URL for multiple images, then trigger a change in my app. But... if one of those images doesn't exist for whatever reason, everything fails silently.
Here's the code:
const promises = [];
snapshot.forEach(childSnapshot => {
  const child = childSnapshot.val();
  const promise = firebase.storage()
    .ref(child.songImagePath)
    .getDownloadURL()
    .catch(err => {
      console.log('caught', err);
      return "";
    })
    .then(imageURL => {
      return imageURL;
    });
  promises.push(promise);
});
Promise.all(promises)
  .catch(err => {
    console.log('caught', err);
  })
  .then(urls => {
    // ...do something with urls array
  });
I'm using child.songImagePath in my database to store the image's location in storage. If ALL paths for ALL images have images, everything works perfectly.
BUT if an upload went awry or for some reason there's no image in the storage location, it fails silently. None of my catches fire. And Promise.all is never resolved.
What's going on here? Is there a way to check for a file's existence before calling getDownloadURL?
EDIT: As @mjr points out, the documentation formats the error callback slightly differently than I have. This also seems to never fire an error, though:
.then(
  imageURL => {
    return imageURL;
  },
  err => {
    console.log('caught', err);
    return "";
  }
);

Firebase Storage JS dev here.
I ran your code with minor changes[1] in Chrome and React Native, and didn't see that behavior.
I see Promise.all always resolving (never failing), with an empty string in the array for invalid files. This is because your .catch handler for getDownloadURL returns an empty string.
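If those empty strings then get in the way downstream, one option (a sketch of one approach, not the only one) is to return null from the catch instead and filter it out once Promise.all resolves:

const promise = firebase.storage()
  .ref(child.songImagePath)
  .getDownloadURL()
  .catch(err => {
    console.log('caught', err);
    return null; // sentinel for a missing/unreadable file
  });
promises.push(promise);

Promise.all(promises).then(urls => {
  const validUrls = urls.filter(url => url !== null);
  // ...do something with validUrls
});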
For further troubleshooting, it would be useful to know:
version of the firebase JS library you are using
the browser/environment and version
network logs, for example from the network panel in Chrome's dev tools, or similar for other browsers
The firebase-talk Google Group tends to be a better place for open-ended troubleshooting with more back-and-forth.
[1] For reference, here's my code:
const promises = [];
// Swap out the array to test different scenarios
// None of the files exist.
//const arr = ['nofile1', 'nofile2', 'nofile3'];
// All of the files exist.
const arr = ['legitfile1', 'legitfile2', 'legitfile3'];
// Some, but not all, of the files exist.
//const arr = ['legitfile1', 'nofile2', 'nofile3'];
arr.forEach(val => {
  const promise = firebase.storage()
    .ref(val)
    .getDownloadURL()
    .catch(err => {
      // This runs for nonexistent files
      console.log('caught', err);
      return "";
    })
    .then(imageURL => {
      // This runs for existing files
      return imageURL;
    });
  promises.push(promise);
});
Promise.all(promises)
  .catch(err => {
    // This never runs
    console.log('caught', err);
  })
  .then(urls => {
    // This always runs
    console.log('urls', urls);
  });
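As for checking a file's existence before asking for its URL: there is no dedicated exists() method, but a sketch of one option (assuming the namespaced JS SDK) is to request the object's metadata first and treat a storage/object-not-found rejection as "missing":

firebase.storage()
  .ref(child.songImagePath)
  .getMetadata()
  .then(() => firebase.storage().ref(child.songImagePath).getDownloadURL())
  .catch(err => {
    // For nonexistent objects err.code should be 'storage/object-not-found'
    // (assumption: namespaced SDK error codes).
    console.log('missing or unreadable:', err.code);
    return null;
  });

Note this is inherently racy (the object could disappear between the two calls), so catching the error from getDownloadURL directly, as above, is usually simpler.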

Related

dynamically zip generated pdf files node-archiver

I'm trying to create multiple PDF files using pdfkit. I have an array of users and I create a report for each one. The createTable() function below returns a Buffer that I send to archiver to zip; once complete, the zip file is sent for download to the front end.
My issue is that Archiver will sometimes throw a QUEUECLOSED error if I run the function too many times. Sometimes I can run it 10 times and get the error on the 11th, and sometimes I get the error after the second run, even though the data is the same each time and nothing else changed.
Any help is greatly appreciated.
users.forEach(async (worker, index) => {
  createTable(date, worker.associateName, action, worker.email, worker.id, excahngeRate).then(resp => {
    archive.append(resp, { name: worker.associateName + '.pdf' })
    if (index === users.length - 1 === true) { // make sure it's the last item in array
      archive.pipe(output)
      archive.finalize();
    }
  }).catch(err => {
    console.log(err)
  })
});
You finalize too soon: the createTable call for the last user might not be the last one to finish. Append everything to the archive and, once all of it is done, finalize.
// Use map to get an array of promises
const promises = users.map((worker, index) => {
  return createTable(date, worker.associateName, action, worker.email, worker.id, excahngeRate).then(resp => {
    archive.append(resp, { name: worker.associateName + '.pdf' })
  }).catch(err => {
    console.log(err)
  })
});
// Wait for all promises to finish.
Promise.all(promises).then(() => {
  archive.pipe(output)
  archive.finalize();
});
In your current code, you could add a console.log just before the if statement and log the index of the completed createTable; you'll see they do not finish in order.
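For instance, a toy sketch (no archiver involved) showing async work completing out of order:

// Three tasks started in index order, finishing in duration order.
const work = (i, ms) => new Promise(resolve =>
  setTimeout(() => { console.log('finished', i); resolve(i); }, ms));

[300, 100, 200].forEach((ms, i) => work(i, ms));
// Logs: finished 1, finished 2, finished 0 (the last index is not the last to finish)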

Fetch and Store files with IndexedDB

I need to download a list of files and store them locally with IndexedDB. I am using fetch to retrieve the files as follows:
cacheRecordings() {
  var request = window.indexedDB.open("database", 2);
  request.onsuccess = event => {
    var database = event.target.result;
    var transaction = database.transaction(["store"], 'readwrite'); // second step is opening the object store
    this.objectStore = transaction.objectStore("store");
  }
  for (const url of this.urls) {
    fetch(url)
      .then(resp => resp.blob())
      .then(blob => {
        const url = window.URL.createObjectURL(blob);
        const index = this.objectStore.index('path');
        index.openCursor().onsuccess = function(event) { // <-- Error is thrown here
          this.objectStore.add(url, path);
        }.bind(this)
      })
      .catch((error) => {
        console.log(error)
      })
  }
}
The above code results in the following two errors:
Failed to execute 'openCursor' on 'IDBIndex': The transaction is not active.
Failed to execute 'index' on 'IDBObjectStore': The transaction has finished
How do I store the fetched files using IndexedDB?
I found a relevant question - but it does NOT address my use case of fetched files.
TransactionInactiveError: Failed to execute 'add' on 'IDBObjectStore': The transaction is not active
My guess is that this happens because you are performing an async operation (the fetch) inside a sync loop (the for).
To confirm this, try storing a single file in the db without the loop. If that's successful, look into executing async code within a loop.
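More concretely, here is a sketch of that fix (assuming an object store named "store" that uses out-of-line keys): IndexedDB transactions auto-commit as soon as the task that created them returns to the event loop, so a transaction opened before fetch resolves is already finished by the time the .then callback runs. Opening a fresh transaction after each fetch completes avoids the error:

function cacheRecordings(urls) {
  const request = window.indexedDB.open('database', 2);
  request.onsuccess = event => {
    const db = event.target.result;
    for (const url of urls) {
      fetch(url)
        .then(resp => resp.blob())
        .then(blob => {
          // Create the transaction *after* the async work is done; any
          // transaction opened earlier would already have auto-committed.
          const tx = db.transaction(['store'], 'readwrite');
          tx.objectStore('store').put(blob, url); // key choice is illustrative
        })
        .catch(error => console.log(error));
    }
  };
}

Storing the Blob itself (rather than an object URL string) also matches the goal of keeping the files locally; object URLs are only valid for the current page session.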

React Native API FETCH Different names for each objects

I am connecting to a REST API from a React Native app. The JSON response contains file objects with different names, but all the objects have the same fields: filename, message, and display.
The number of objects changes with each request to the API, and the names of the objects in the response differ between requests, but the fields in each object are the same as above.
The only information I need from this response is the filename text, but a list of the objects would also be acceptable, so I can read through the messages from errors.
The image shows what my objects look like.
This is my fetch request :
const getGists = async () => {
  await axios
    .get(`https://api.github.com/gists/public?per_page=30`)
    .then((r) => {
      let n;
      for (n = 0; n < 30; n++) {
        console.log(r.data[n].files.filename);
        // console.log("____________________");
        // console.log(r.data[n].owner.avatar_url);
        // console.log("____________________");
        // console.log(JSON.stringify(r.data[n].files));
      }
    })
    .catch((e) => {
      console.log("ERROR", e);
    });
};
How can I get every filename from these requests even if the object names are not the same in each iteration? Thanks for the help.
Working with the result of the API call and some higher-order functions, this will work fine:
const getGists = async () => {
  await axios
    .get(`https://api.github.com/gists/public?per_page=30`)
    .then((response) => {
      const myDesireResult = response.data.reduce((acc, item) => {
        const files = Object.values(item.files);
        if (files.length > 1) {
          files.forEach((file) => acc.push(file.filename));
        } else {
          acc.push(files[0].filename);
        }
        return acc;
      }, []);
      console.log(myDesireResult);
    })
    .catch((e) => {
      console.log("ERROR", e);
    });
};
Explanation:
In the then block, you can get the API call result with response.data.
With reduce, we loop through that data.
Since the objects inside files have different names, we can easily get the file entries with Object.values().
Some files objects contain several items, but most have just one, so by checking the length we can take the proper action: if files has more than one element, another simple loop traverses the array.
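As a toy illustration of the Object.values() step (file names here are hypothetical):

const files = {
  'notes.md': { filename: 'notes.md', message: '', display: true },
  'app.js': { filename: 'app.js', message: '', display: true },
};
console.log(Object.values(files).map(f => f.filename)); // ['notes.md', 'app.js']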
Check the working example on codesandbox

Not getting data from firebase on opening the app

I am trying to get data from Firebase, but it returns an empty value when the app loads. If I edit something in that file, even a commented line, the data loads and the app runs. I want all the data to be there from Firebase when the app opens. Also, how do I arrange grabbedData in reverse order? I tried grabbedData.reverse() but it doesn't work.
const Getdata = async () => {
  let grabbedData = [];
  await firebase
    .database()
    .ref(`/users/`)
    .orderByKey()
    .on("value", (snapshot, key) => {
      // console.log("snapshot....", snapshot);
      grabbedData.push(snapshot.val());
    });
  setUserdata(grabbedData);
  console.log("grabbedData", grabbedData); // empty value here :(
  if (grabbedData) {
    let getfranchiseuid = "";
    Object.keys(grabbedData).map(function (key) {
      let y = grabbedData[key];
      Object.keys(y).map(function (key2) {
        let x = y[key2];
        if (key2 === uid) {
          getfranchiseuid = x.franchiseuid;
        }
      });
    });
    if (getfranchiseuid) {
      let customerList1 = [];
      firebase
        .database()
        .ref(`/serviceProvider/${getfranchiseuid}/franchise/customers`)
        .orderByKey()
        .on("value", (snapshot) => {
          customerList1.push(snapshot.val());
        });
      setCustomerList(customerList1);
      console.log("customerList1customerList1", customerList1);
    }
  }
};
useEffect(() => {
  var unsubscribe = firebase.auth().onAuthStateChanged(function (user) {
    if (user) {
      storeUser({ user });
      setUser(user);
      setEmail(user.email);
      setUid(user.uid);
    } else {
      // No user is signed in.
    }
  });
  unsubscribe();
  Getdata();
}, []);
useEffect(() => {
var unsubscribe = firebase.auth().onAuthStateChanged(function (user) {
if (user) {
storeUser({ user });
setUser(user);
setEmail(user.email);
setUid(user.uid);
} else {
// No user is signed in.
}
});
unsubscribe();
Getdata();
}, []);
Data is loaded from Firebase asynchronously. Since this may take some time, your main JavaScript code will continue to run, so that the user can continue to use the app while the data is loading. Then when the data is available, your callback is invoked with that data.
What this means in your code is that (for example) right now your setUserdata is called before the grabbedData.push(snapshot.val()) has run, so you're setting empty user data. You can most easily see this by setting some breakpoints on the code and running it in a debugger, or by adding logging and checking the order of its output.
console.log("1");
await firebase
.database()
.ref(`/users/`)
.orderByKey()
.on("value", (snapshot, key) => {
console.log("2");
});
console.log("3");
When you run this code, the output will be:
1
3
2
This is probably not what you expected, but it is exactly correct and does explain your problems.
The solution for this is always the same: any code that needs the data from the database must be inside the callback, or be called from there.
So for example:
await firebase
  .database()
  .ref(`/users/`)
  .orderByKey()
  .on("value", (snapshot, key) => {
    grabbedData.push(snapshot.val());
    setUserdata(grabbedData);
  });
This will ensure that setUserdata is called whenever you update grabbedData.
Since you have much more code that depends on grabbedData, that will also have to be inside the callback. So the entire if (grabbedData) { block will need to be moved, and probably others. If you keep applying the solution above, the code will start working.
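If you only need the data once per app load rather than live updates, a sketch of an alternative (assuming the v8 namespaced SDK) is once(), which returns a promise you can await, so the dependent code naturally runs after the data arrives:

const Getdata = async () => {
  const snapshot = await firebase
    .database()
    .ref(`/users/`)
    .orderByKey()
    .once("value"); // resolves with the DataSnapshot
  const grabbedData = [snapshot.val()];
  setUserdata(grabbedData);
  // ...everything that depends on grabbedData goes here, after the await
};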
This is a very common problem for developers who are new to calling asynchronous cloud APIs, so I highly recommend reading some of these other answers:
Why Does Firebase Lose Reference outside the once() Function?
Best way to retrieve Firebase data and return it, or an alternative way
How do I return the response from an asynchronous call? (this one is not specific to Firebase, as the problem is not specific to Firebase)

Chrome File System API hanging

Disclaimer: self-answered post, to hopefully save others time.
Setup:
I've been using Chrome's implementation of the Native File System API [1] [2] [3].
This requires enabling the flag chrome://flags/#native-file-system-api.
For starters I want to recursively read a directory and obtain a list of files. This is simple enough:
paths = [];
let recursiveRead = async (path, handle) => {
  let reads = [];
  // window.handle = handle;
  for await (let entry of await handle.getEntries()) { // <<< HANGING
    if (entry.isFile)
      paths.push(path.concat(entry.name));
    else if (/* check some whitelist criteria to restrict which dirs are read */)
      reads.push(recursiveRead(path.concat(entry.name), entry));
  }
  await Promise.all(reads);
  console.log('done', path, paths.length);
};
chooseFileSystemEntries({type: 'openDirectory'}).then(handle => {
  recursiveRead([], handle).then(() => {
    console.log('COMPLETELY DONE', paths.length);
  });
});
I've also implemented a non-recursive while-loop-queue version. And lastly, I've implemented a node fs.readdir version. All 3 solutions work fine for small directories.
The problem:
But then I tried running it on some sub-directories of the chromium source code ('base', 'components', and 'chrome'); together the 3 sub-dirs consist of ~63,000 files. While the node implementation worked fine (and surprisingly it utilized cached results between runs, resulting in instantaneous runs after the first), both browser implementations hung.
Attempted debugging:
Sometimes, they would return the full 63k files and print 'COMPLETELY DONE' as expected. But most often (90% of the time) they would read 10k-40k files before hanging.
I dug deeper into the hanging, and apparently the for await line was hanging. So I added the line window.handle = handle immediately before the for loop; when the function hung, I ran the for loop directly in the browser console, and it worked correctly! So now I'm stuck. I have seemingly working code that randomly hangs.
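For reference, the ad-hoc check in the devtools console looked roughly like this (Chrome's console accepts top-level await):

// Paste into the console while the function is hung:
for await (const entry of await window.handle.getEntries())
  console.log(entry.isFile, entry.name);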
Solution:
I tried skipping over directories that would hang:
let whitelistDirs = {src: ['base', 'chrome', 'components', /*'ui'*/]}; // 63800
let readDirEntry = (handle, timeout = 500) => {
  return new Promise(async (resolve, reject) => {
    setTimeout(() => reject('timeout'), timeout);
    let entries = [];
    for await (const entry of await handle.getEntries())
      entries.push(entry);
    resolve(entries);
  });
};
let readWhile = async entryHandle => {
  let paths = [];
  let pending = [{path: [], handle: entryHandle}];
  while (pending.length) {
    let {path, handle} = pending.pop();
    await readDirEntry(handle)
      .then(entries =>
        entries.forEach(entry => {
          if (entry.isFile)
            paths.push({path: path.concat(entry.name), handle: entry});
          else if (path.length || !whitelistDirs[handle.name] || whitelistDirs[handle.name].includes(entry.name))
            pending.push({path: path.concat(entry.name), handle: entry});
        }))
      .catch(() => console.log('skipped', handle.name));
    console.log('paths read:', paths.length, 'pending remaining:', pending.length, path);
  }
  console.log('read complete', paths.length);
  return paths;
};
chooseFileSystemEntries({type: 'openDirectory'}).then(handle => {
  readWhile(handle).then(paths => {
    console.log('COMPLETELY DONE', paths.length);
  });
});
And the results showed a pattern. Once a directory read hung and was skipped, the subsequent ~10 dir reads would likewise hang and be skipped. Then the following reads would resume functioning properly until the next similar incident.
// begins skipping
paths read: 45232 pending remaining: 49 (3) ["chrome", "browser", "favicon"]
VM60:25 skipped extensions
VM60:26 paths read: 45239 pending remaining: 47 (3) ["chrome", "browser", "extensions"]
VM60:25 skipped enterprise_reporting
VM60:26 paths read: 45239 pending remaining: 46 (3) ["chrome", "browser", "enterprise_reporting"]
VM60:25 skipped engagement
VM60:26 paths read: 45266 pending remaining: 45 (3) ["chrome", "browser", "engagement"]
VM60:25 skipped drive
VM60:26 paths read: 45271 pending remaining: 44 (3) ["chrome", "browser", "drive"]
// begins working properly again
So the issue seemed to be transient. I added a simple retry wrapper with a 500ms wait between retries, and the reads began working fine.
// sleep is a promise-based delay helper, e.g. ms => new Promise(r => setTimeout(r, ms))
readDirEntryRetry = async (handle, timeout = 500, tries = 5, waitBetweenTries = 500) => {
  while (tries--) {
    try {
      return await readDirEntry(handle, timeout);
    } catch (e) {
      console.log('readDirEntry failed, tries remaining:', tries, handle.name);
      if (!tries)
        throw e;
      await sleep(waitBetweenTries);
    }
  }
};
Conclusion:
The non-standard Native File System API hangs when reading large directories. Simply retrying after waiting resolves the issue. Took me a good week to arrive at this solution, so thought it'd be worth sharing.
