Meteor synchronous call to function - javascript

Good Day.
Been racking the ol' noggin for a way to solve this.
In a nutshell, I have a form with a number of text inputs as well as an input file element to upload said file to AWS S3 (via the lepozepo:s3 package, ver 5.1.4). The nice thing about this package is that it does not need the server, thus keeping resources in check.
This S3 package uploads the file to my configured bucket and returns the URL to access the image among a few other data points.
So, back to the form. I need to put the returned AWS URL into the database along with the other form data. HOWEVER, since the S3 call is async, the app doesn't wait for it to finish: the file field in the object I post to Meteor.call() is undefined simply because the AWS URL hasn't come back yet.
I could solve this by putting the Meteor.call() right into the callback of the S3 call. However, I was hoping to avoid that, as I'd much rather have the S3 upload be its own module or helper function, or even a function outside of any helpers, so it could be reused in other areas of the app for file uploads.
Pseudo-code:
Template.contacts.events({
  'submit #updateContact': function(e, template) {
    s3.upload({file: inputFile, path: client}, function(error, result) {
      if (error) {
        // throw error
      } else {
        var uploadInfo = result;
      }
    });
    var formInfo = {name: $('[name=name]').val(), file: uploadInfo}; // <= file is undefined because S3 hasn't finished yet
    Meteor.call('serverMethod', formInfo, function(e, r) {
      if (e) {
        // throw error message
      } else {
        // show success message
      }
    });
  }
});
I could put the formInfo and the Meteor.call() in the s3 callback, but that would result in more complex code and less code reuse, when IMO this is a perfect place for code reuse.
I've tried wrapping the s3 call in its own function, with and without a callback. I've tried using ReactiveVars. I would think that updating the db a second time with just the s3 file info would make the s3 abstraction more complex, as it'd need to know the _id and such...
Any ideas?
Thanks.

If you are using JavaScript, it's best to embrace callbacks!
What is it about using callbacks like this that you dislike, or don't believe is modular or reusable?
As shown below, the uploader function does nothing but wrap s3.upload. But you mention this is pseudocode, so I presume you left out logic you want included in the modular call to s3.upload (include it here), while decoupling the logic around handling the response (pass in a callback).
uploader = function(s3_options, cb) {
  s3.upload(s3_options, function(error, result) {
    if (error) {
      cb(error);
    } else {
      cb(null, result);
    }
  });
};
Template.contacts.events({
  'submit #updateContact': function(e, template) {
    var cb = function(error, uploadInfo) {
      if (error) {
        // throw error message
        return;
      }
      var formInfo = {name: $('[name=name]').val(), file: uploadInfo};
      Meteor.call('serverMethod', formInfo, function(e, r) {
        if (e) {
          // throw error message
        } else {
          // show success message
        }
      });
    };
    uploader({file: inputFile, path: client}, cb); // you don't show where `inputFile` or `client` come from
  }
});
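To make the reuse point concrete, here is a sketch of calling the same uploader from a second, hypothetical template (the avatar template, event, and method names are invented for illustration):
// Hypothetical second use of the same uploader helper,
// this time saving an avatar instead of a contact file.
Template.profile.events({
  'submit #updateAvatar': function(e, template) {
    uploader({file: avatarFile, path: client}, function(error, uploadInfo) {
      if (error) {
        // surface the upload error to the user
        return;
      }
      Meteor.call('updateAvatar', {avatar: uploadInfo});
    });
  }
});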

Related

Call stack size exceeded on re-starting Node function

I'm trying to overcome a "Maximum call stack size exceeded" error, but with no luck.
The goal is to re-run the GET request as long as I get "music" in the type field.
//tech: node.js + mongoose
//import components
const https = require('https');
const options = new URL('https://www.boredapi.com/api/activity');

//obtain data using GET
https.get(options, (response) => {
  //console.log('statusCode:', response.statusCode);
  //console.log('headers:', response.headers);
  response.on('data', (data) => {
    //process.stdout.write(data);
    apiResult = JSON.parse(data);
    apiResultType = apiResult.type;
    returnDataOutside(data);
  });
})
.on('error', (error) => {
  console.error(error);
});

function returnDataOutside(data) {
  apiResultType;
  if (apiResultType == 'music') {
    console.log(apiResult);
  } else {
    returnDataOutside(data);
    console.log(apiResult); //Maximum call stack size exceeded
  }
}
Your function returnDataOutside() is calling itself recursively. If it doesn't get an apiResultType of 'music' the first time, it just keeps calling itself deeper and deeper until the stack overflows, with no chance of ever getting the music type, because you're calling it with the same data over and over.
It appears that you want to rerun the GET request when you don't have the music type, but your code is not doing that - it's just calling your response function over and over. Instead, you need to put the code that makes the GET request into a function, and call that new function, which actually makes a fresh GET request, whenever the apiResultType isn't what you want.
In addition, you shouldn't code something like this that keeps going forever, hammering some server. You should have either a maximum number of tries, or a back-off timer, or both.
And, you can't just assume that a single response.on('data', ...) event contains a perfectly formed piece of JSON. If the payload is anything but very small, the data may arrive in arbitrarily sized chunks, and it may take multiple data events to get your entire payload. Code like this may work on fast networks but fail on slow networks, or work through some proxies but not others. Instead, you have to accumulate the data from the entire response (all the data events that occur) concatenated together, and then process that final result on the end event.
While you can code the plain https.get() to collect all the results for you (there's an example of that right in the doc here), it's a lot easier to use a higher-level library that brings support for a bunch of useful things.
My favorite library to use in this regard is got(), but there's a list of alternatives here and you can find the one you like. Not only do these libraries accumulate the entire response for you without you writing any extra code, but they are promise-based, which makes the asynchronous coding easier, and they also automatically check status code results for you, follow redirects, etc... - many things you would want an http request library to "just handle" for you.
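As a minimal sketch of the approach described above, using plain https (the 10-try cap and the fixed 1-second delay are arbitrary choices for illustration):
const https = require('https');

const url = 'https://www.boredapi.com/api/activity';

// Make the GET request inside a function so every retry issues a fresh
// request. Accumulate all 'data' chunks and parse only on 'end'.
function fetchActivity(callback) {
  https.get(url, (response) => {
    let body = '';
    response.on('data', (chunk) => { body += chunk; });
    response.on('end', () => {
      try {
        callback(null, JSON.parse(body));
      } catch (err) {
        callback(err);
      }
    });
  }).on('error', callback);
}

// Re-run the request until type === 'music', with a retry cap and a
// delay between tries so we don't hammer the server forever.
function getMusicActivity(retriesLeft, callback) {
  fetchActivity((err, result) => {
    if (err) return callback(err);
    if (result.type === 'music') return callback(null, result);
    if (retriesLeft <= 0) return callback(new Error('no music activity found'));
    setTimeout(() => getMusicActivity(retriesLeft - 1, callback), 1000);
  });
}

getMusicActivity(10, (err, result) => {
  if (err) return console.error(err);
  console.log(result);
});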

Upload image to firebase and set a link in database

I know how to add data to the Firebase database, and I know how to upload an image to Firebase storage. I am doing this in JavaScript.
I am not able to figure out how to link the image to my database object.
My database object is something like this:
{
  name: 'alex',
  age: 23,
  profession: 'superhero',
  image: *what to put here ???*
}
One idea is to use the object reference that is created and store the image using the same reference.
Any tutorial or ideas appreciated.
We often recommend storing the gs://bucket/path/to/object reference, otherwise store the https://... URL.
See Zero to App and its associated source code (here we use the https://... version) for a practical example.
var storageRef = firebase.storage().ref('some/storage/bucket');
var saveDataRef = firebase.database().ref('users/');
var uploadTask = storageRef.put(file);

uploadTask.on('state_changed', uploadTick, (err) => {
  console.log('Upload error:', err);
}, () => {
  saveDataRef.update({
    name: 'alex',
    age: 23,
    profession: 'superhero',
    image: uploadTask.snapshot.downloadURL
  });
});
This upload function sits inside an ES6 class and is passed a callback (uploadTick) for the .on('state_changed') from the calling component. That way you can pass back upload status and display it in the UI.
uploadTick(snap) {
  console.log("update ticked", snap);
  this.setState({
    bytesTransferred: snap.bytesTransferred,
    totalBytes: snap.totalBytes
  });
}
The file to upload is passed to this upload function; I am just using a React form upload to get the file and doing a little processing on it, but your approach may vary.
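If you'd rather store the gs://bucket/path reference mentioned at the top of this answer instead of the https download URL, here is a sketch of that variant (assuming an SDK version where uploadTask.snapshot.ref is available; a StorageReference's toString() yields the gs:// form):
// Inside the same upload-complete handler as above, store the gs://
// reference instead of the download URL.
saveDataRef.update({
  name: 'alex',
  age: 23,
  profession: 'superhero',
  image: uploadTask.snapshot.ref.toString() // "gs://bucket/path/to/object"
});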

Use Asynchronous IO better

I am really new to JS, and even newer to node.js. So using "traditional" programming paradigms my file looks like this:
var d = require('babyparse');
var fs = require('fs');
var file = fs.readFile('SkuDetail.txt');
d.parse(file);
So this has many problems:
It's not asynchronous
My file is bigger than the default max file size (this one is about 60mb) so it currently breaks (not 100% sure if that's the reason).
My question: how do I load a big file (and it will be significantly bigger than 60mb in future uses) asynchronously, parsing as I get information? Then as a follow-up, how do I know when everything is completed?
You should create a ReadStream. A common pattern looks like this; you can parse data as it becomes available in the data event.
var fs = require('fs');

function readFile(filePath, done) {
  var stream = fs.createReadStream(filePath),
      out = '';

  // Make done optional
  done = done || function(err) { if (err) throw err; };

  stream.on('data', function(data) {
    // Parse data
    out += data;
  });

  stream.on('end', function() {
    done(null, out); // All data is read
  });

  stream.on('error', function(err) {
    done(err);
  });
}
You can use the method like:
readFile('SkuDetail.txt', function(err, out) {
  // Handle error
  if (err) throw err;
  // File has been read and parsed
});
If you add the parsed data to the out variable the entire parsed file will be sent to the done callback.
It already is asynchronous; JavaScript I/O is asynchronous, so no extra effort is needed on your part. Does your code even work, though? Your parse should be inside a callback passed to readFile; otherwise readFile returns before the file is read, and file is undefined.
In normal situations, any I/O code you write will be "skipped" over (it returns immediately), and the code right after it will be executed first.
For the first question: since you want to process chunks, Streams might be what you are looking for. #pstenstrm has an example in his answer.
Also, you can check the Node.js documentation for Streams: https://nodejs.org/api/fs.html#fs_fs_createreadstream_path_options
If you want a brief description and example of Streams, check this link: http://www.sitepoint.com/basics-node-js-streams/
You can pass a callback to the fs.readFile function to process the content once the file read is complete. This would answer your second question.
fs.readFile('SkuDetail.txt', function(err, data) {
  if (err) {
    throw err;
  }
  processFile(data);
});
You can see Get data from fs.readFile for more details.
Also, you could use Promises for cleaner code with other added benefits. Check this link: http://promise-nuggets.github.io/articles/03-power-of-then-sync-processing.html
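For example, here is a minimal sketch of the promise-based style, assuming a Node version that ships fs.promises (processFile is the same hypothetical handler as above):
const fsp = require('fs').promises;

// Each .then() runs only after the previous step resolves, and any
// error along the way funnels into the single .catch().
fsp.readFile('SkuDetail.txt', 'utf8')
  .then((contents) => processFile(contents))
  .then(() => console.log('file read and processed'))
  .catch((err) => console.error(err));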

Mongodb Tailable Cursor in nodejs, how to stop stream

I use the code below to get data from a mongodb capped collection:
function listen(conditions, callback) {
  db.openConnectionsNew([req.session.client_config.db], function(err, conn) {
    if (err) { console.log({err: err}); return next(err); }
    coll = db.opened[db_name].collection('messages');
    latestCursor = coll.find(conditions).sort({$natural: -1}).limit(1);
    latestCursor.nextObject(function(err, latest) {
      if (latest) {
        conditions._id = {$gt: latest._id};
      }
      options = {
        tailable: true,
        awaitdata: true,
        numberOfRetries: -1
      };
      stream = coll.find(conditions, options).sort({$natural: -1}).stream();
      stream.on('data', callback);
    });
  });
}
and then I use sockets.broadcast(roomName, 'data', document);
On the client side:
io.socket.get('/get_messages/', function(resp) {
});

io.socket.on('data', function notificationReceivedFromServer(data) {
  console.log(data);
});
This works perfectly, as I am able to see any new document inserted into the db.
I can see in mongod -verbose that every 200ms there is a query running with {$gt: latest_id}, and this is fine, but I have no idea how I can close this query :( I am very new to nodejs, am using the mongodb tailable option for the first time, and am totally lost. Any help or clue is highly appreciated.
What is returned from the .stream() method on the Cursor object returned from .find() is an implementation of the node transform stream interface. Specifically, this is a "readable" stream.
As such, its "data" event is emitted whenever there is new data received and available in the stream to be read.
There are other methods such as .pause() and .resume() which can be used to control the flow of these events. Typically you would call these "inside" a "data" event callback, where you want to make sure the code in that callback is executed before the "next" data event is processed:
stream.on("data", function(data) {
// pause before processing
stream.pause();
// do some work, possibly with a callback
something(function(err,result) {
// Then resume when done
stream.resume();
});
});
But of course this is just a matter of "scoping". As long as the "stream" variable is defined in a scope where another piece of code can access it, you can call either method at any time.
Again, by the same token of scoping, you can just "undefine" the "stream" object at any point in the code, making the "event processing" redundant.
// Just overwrite the object
stream = undefined;
So that's worth knowing. In fact, the newer "version 2.x" of the node driver wraps a "stream interface" directly into the standard Cursor object, without the need to call .stream() to convert. Node streams are very useful and powerful things, and it is well worth coming to terms with their usage.
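As a sketch of what that scoping advice looks like for the question above (stopListening is an invented helper name; the point is only that the stream variable lives where both pieces of code can reach it):
// Keep the stream reference in a scope that both the listener and a
// stop helper can reach.
var stream;

function listen(conditions, callback) {
  // ... open the tailable cursor exactly as in the question ...
  stream = coll.find(conditions, options).sort({$natural: -1}).stream();
  stream.on('data', callback);
}

// Hypothetical helper: pause the flow of 'data' events and drop the
// reference, making the event processing redundant.
function stopListening() {
  if (stream) {
    stream.pause();
    stream = undefined;
  }
}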

createReadStream().pipe() Callback

Sorry in advance, I have a couple of questions on createReadStream() here.
Basically what I'm doing is dynamically building a file and streaming it to the user using fs once it is finished. I'm using .pipe() to make sure I'm throttling correctly (stop reading if the buffer's full, start again once it's not, etc.). Here's a sample of the code I have so far.
http.createServer(function(req, res) {
  var stream = fs.createReadStream('<filepath>/example.pdf', {bufferSize: 64 * 1024});
  stream.pipe(res);
}).listen(3002, function() {
  console.log('Server listening on port 3002');
});
I've read in another StackOverflow question (sorry, lost it) that if you're using the regular res.send() and res.end(), .pipe() works great, as it calls .send and .end and adds throttling.
That works fine for most cases, except that I want to remove the file once the stream is complete, and not using .pipe() means I'd have to handle the throttling myself just to get a callback.
So I'm guessing that I'll want to create my own fake "res" object with .send() and .end() methods that do what res usually does, except that on .end() I'll run additional code to clean up the generated file. My question is basically: how would I pull that off?
Help with this would be much appreciated, thanks!
The first part about downloading can be answered by Download file from NodeJS Server.
As for removing the file after it has all been sent, you can just add your own event handler to remove the file once everything has been sent.
var stream = fs.createReadStream('<filepath>/example.pdf', {bufferSize: 64 * 1024});
stream.pipe(res);

var had_error = false;
stream.on('error', function(err) {
  had_error = true;
});
stream.on('close', function() {
  if (!had_error) fs.unlink('<filepath>/example.pdf');
});
The error handler isn't 100% needed, but then you don't delete the file if there was an error while you were trying to send it.
