Integrate Sentry with Parse Server - JavaScript

I’m trying to integrate Sentry with my Parse Server, which runs an Express server behind the scenes. I’m unable to send my Parse Server cloud function errors to Sentry. Note that we don’t need to write any error middleware manually; Parse does it for us.
app.use('/parse', api)
app.use(Sentry.Handlers.errorHandler())

There is a Parse Server option, enableExpressErrorHandler, that does exactly what is required here: it passes errors on to the next middleware.
Just set enableExpressErrorHandler: true in your parseServerConfiguration dictionary.
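A minimal sketch of the wiring, assuming the classic v4-style mounting shown in the question (newer parse-server versions require api.start() and app.use('/parse', api.app) instead); the DSN, keys, and URLs below are placeholders:

const express = require('express');
const Sentry = require('@sentry/node');
const { ParseServer } = require('parse-server');

const app = express();
Sentry.init({ dsn: 'YOUR_SENTRY_DSN' }); // placeholder

const api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dev', // placeholder
  appId: 'myAppId',                             // placeholder
  masterKey: 'myMasterKey',                     // placeholder
  serverURL: 'http://localhost:1337/parse',
  enableExpressErrorHandler: true // hand cloud code errors to next()
});

app.use(Sentry.Handlers.requestHandler()); // per Sentry's Express setup
app.use('/parse', api);
app.use(Sentry.Handlers.errorHandler()); // now actually receives the errors
app.listen(1337);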

Related

Communication between JavaScript and WCF Service Library (WCF Test Client)

There is a web service (WCF Service Library). When I debug the web service project (in Visual Studio), the "WCF Test Client" is launched (so I guess it's hosted via the Test Client). I have a web service method called "Test" which returns a string. When I "call" that method with the WCF Test Client, it works.
When I want to use the browser as a client, I go to http://localhost:9001/Name/WebService/WebAPI and I see the web service (XML with some info about the methods). Now I want to use JavaScript to call that Test method.
I created a client similar to this https://stackoverflow.com/a/11404133 and replaced the sr variable (SOAP request) with the request shown in the XML part of the Test method in the WCF Test Client, and for the URL I chose http://localhost:9001/Name/WebService/WebAPI. I tried that JavaScript client, but I got a client error -
content-type 'text-xml' is invalid, server wanted
'application/soap+xml; charset=utf-8'
(Unfortunately I can't get to the web service right now, so I don't know the error number and exact message, but there was no other information besides the content type.) So I changed the request header to 'application/soap+xml; charset=utf-8', but then I got an error that tells me:
The message cannot be processed at the receiver, due to an
AddressFilter mismatch at the EndpointDispatcher. Check that the
sender and receiver's EndpointAddresses agree
(Or something like that - I had to translate it to English.)
I also tried the "JavaScript client" with an existing service that I found on the internet, using the text/xml content type, and it works fine.
Please, do you have any advice on how to call the Test method with JavaScript? Thanks.
A service invoked from JavaScript like this is a RESTful-style service. WCF is able to create a RESTful-style service too, but we need to set up some additional configuration. When we test the service in Visual Studio it is hosted in IIS Express, which uses the default binding configuration (BasicHttpBinding) to host the service, i.e. a SOAP web service. The universal way to invoke such a service is via a service proxy class, which is what the WCF Test Client does.
If we want to invoke the service by using JavaScript, here is a simple demo; I hope it is useful to you. Please be aware that the project template is WCF Service Application instead of a Service Library project.
https://stackoverflow.com/questions/56873239/how-to-fix-err-aborted-400-bad-request-error-with-jquery-call-to-c-sharp-wcf/56879896#56879896
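In the same spirit, a hedged sketch of the raw SOAP call from the question's setup; the envelope, namespace, and action below are placeholders that must match your actual service contract (the server's complaint about application/soap+xml indicates a SOAP 1.2 binding such as wsHttpBinding, and SOAP 1.2 carries the operation's action inside the Content-Type header):

var sr =
  '<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">' +
  '<s:Body><Test xmlns="http://tempuri.org/"/></s:Body>' +
  '</s:Envelope>';

var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://localhost:9001/Name/WebService/WebAPI', true);
// SOAP 1.2: the operation's action travels in the Content-Type header
xhr.setRequestHeader('Content-Type',
  'application/soap+xml; charset=utf-8; action="http://tempuri.org/IService/Test"');
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send(sr);

The AddressFilter mismatch from the question is also typical of SOAP 1.2 bindings, which expect WS-Addressing headers: the To address inside the envelope has to agree with the endpoint address.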
Feel free to let me know if there is something I can help with.

Is it okay to create a net.Socket() 'data' event handler once for each Node.js http request/response?

I've got a Node.js web server communicating with a locally running Python TCP socket server (communicating via their respective socket modules net.Socket, socket).
Clients make HTTP POST requests from the browser which get handled by a Node http.createServer function, with some of them sent to the Python server for heavy computations; the results are then sent back to Node and on to the browser for rendering.
The Python server is necessary instead of a Node child process as there are some large (immutable) objects required for the Python computations that take a while to initialise and are then shared across threads. It would be infeasible to create and destroy these objects for every browser request.
So my question is, using the Node callback paradigm, how do I capture the response object for each POST request in the net.Socket data event handler/s?
Note 0: Each request has a unique id that is sent to the Python server and returned.
This currently works* inside my http.createServer callback:
http.createServer((request, response) => {
  // route and parse incoming requests etc.

  // send POST data to Python
  python_socket.write(parsed_request_post_data);

  // Python works away diligently then emits a data event handled below
  python_socket.once('data', (data_from_python) => {
    // error and exception handling
    response.setHeader('Content-Type', 'application/json');
    response.end(data_from_python);
  });
}).listen(HTTPport);
*However, if I bomb the server with multiple requests, sometimes I get the same data returned in each response (even though Python handles each request separately). I worry that I am attaching multiple once('data') callbacks in the same Node event-loop turn, that only one of them persists, and that it is the one repeatedly sending the same Python data back to the browser? Though if that were the case, the response object would also be repeated and I would get an error for trying to end an already closed response, right? But each response seems to end fine.
Apologies for the rather long and vague question. I'm still learning and would really appreciate any advice or references I can study to help me understand what is going on. Also very open to trying different approaches (except changing web server - see note 2 below).
Note 1: I tried declaring a global data handler (note the on instead of once) for the net.Socket server as follows, but couldn't figure out how to forward the returned data to each http response (one possible approach is sketched after Note 2 below):
python_socket.on('data', (data_from_python) => {
  // error and exception handling
  // how do I get data_from_python out to each http response
  // then close it in a non-blocking way?
});
Note 2: I'm not allowed to use a Python web server as the business wants to reuse this design in future to plug and play other services (R, Julia, C++, ...) into Node web servers.
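Building on Note 0, a hedged sketch of how the global handler from Note 1 could route each reply back to its own response: keep a map of pending responses keyed by the unique id, assuming the Python server echoes that id back in a newline-delimited JSON payload (generateUniqueId and the payload shape are hypothetical):

const pending = new Map(); // id -> http.ServerResponse

http.createServer((request, response) => {
  const id = generateUniqueId(); // hypothetical helper
  pending.set(id, response);
  python_socket.write(JSON.stringify({ id, data: parsed_request_post_data }) + '\n');
}).listen(HTTPport);

python_socket.on('data', (chunk) => {
  // TCP does not preserve message boundaries, so in practice you would
  // buffer chunks and split on newlines before parsing
  const message = JSON.parse(chunk.toString());
  const response = pending.get(message.id);
  if (response) {
    pending.delete(message.id);
    response.setHeader('Content-Type', 'application/json');
    response.end(JSON.stringify(message.result));
  }
});

This also explains the symptom above: every once('data') callback attached before a reply arrives fires on the same first data event, so concurrent requests can all receive the same payload.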

AngularJS $http.get does not return expected data from API

I am attempting to create a mobile phone application with a JavaScript / AngularJS frontend that communicates with a Node.js / Express backend.
I believe that I have properly enabled CORS, but am not completely certain that it has been done in the correct manner. None of the frontend files are hosted on a server (not even a local one). The Node.js server is hosted online, as is the MongoDB server it interacts with.
So far I am able to make POSTs to my API that create a new user and reflect this in the database. I also have a login that POSTs to an authentication function which returns a JSON Web Token (JWT). From here I should be able to put the JWT in the header of requests with the key "Authorization" to get access to the other parts of the API (e.g. GET /currentUser).
Attempting to GET /currentUser with the JWT in the header from Postman returns all of the expected data. When I attempt to perform the same GET from my frontend (with the JWT in the header), I get the following OPTIONS response via Firebug: "Reload the page to get source for: MyHostedApi/api/users"
I'm wondering if this is some kind of CORS issue, an incorrectly set Authorization header, bad formatting of the $http.get, etc. Any help is greatly appreciated! I'd be glad to provide any parts of the source that are relevant.
This is what my GET looks like:
$http.get("MyHostedApi/api/users/currentUser")
.success(function(response) {
$scope.userData = response.data.firstName;
});
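For reference, a hedged sketch of one common way to wire this up; the jwtToken variable and the Express-side cors options are illustrative, not taken from the question:

// Client side: attach the JWT on each request (jwtToken is illustrative)
$http.get("MyHostedApi/api/users/currentUser", {
  headers: { "Authorization": "Bearer " + jwtToken }
}).success(function(data) {
  $scope.userData = data.firstName;
});

// Server side (Express): the preflight OPTIONS request must allow that
// header, e.g. with the cors middleware
var cors = require('cors');
app.use(cors({ allowedHeaders: ['Content-Type', 'Authorization'] }));

A custom Authorization header always triggers a preflight, so the server must answer the OPTIONS request before the browser will send the actual GET.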

How to log JS errors from a client into Kibana?

I have a web application with a Node.js back end and logstash/elasticsearch/kibana to handle system logs (access_error.log, messages.log, etc.).
Right now I also need to record all JavaScript client-side errors in Kibana. What is the best way to do this?
EDIT: I have to add additional information to this question. As #Jackie Xu provided a partial solution to my problem, and as follows from my comment:
I'm most interested in the server-side error handling. I don't think it's efficient to write each error to a file. I'm looking for best practices to make it more performant.
I need to handle JS error records on the server side more efficiently than just writing them into a file. Could you provide some scenarios for how I could increase server-side logging performance?
When you say client, I'm assuming here that you mean a logging client and not a web client.
First, make it a habit to log your errors in a common format. Logstash likes consistency, so if you're putting text and JSON in the same output log, you will run into issues. Hint: log in JSON. It's awesome and incredibly flexible.
The overall process will go like this:
Error occurs in your app
Log the error to file, socket, or over a network
Tell logstash how to get (input) that error (i.e. from file, listen over network, etc)
Tell logstash to send (output) the error to Elasticsearch (which can be running on the same machine)
In your app, try using the bunyan logger for node. https://github.com/trentm/node-bunyan
Node app (index.js):
var bunyan = require('bunyan');

var log = bunyan.createLogger({
  name: 'myapp',
  streams: [{
    level: 'info',
    stream: process.stdout // log INFO and above to stdout
  }, {
    level: 'error',
    path: '/var/log/myapp-error.log' // log ERROR and above to a file
  }]
});

// Log stuff like this
log.info({status: 'started'}, 'foo bar message');

// Also, in express you can catch all errors like this
app.use(function(err, req, res, next) {
  log.error(err);
  res.status(500).send('An error occurred');
});
Then you need to configure logstash to read those JSON log files and send to Elasticsearch/Kibana. Make a file called myapp.conf and try the following:
Logstash config (myapp.conf):
# Input can read from many places, but here we're just reading the app error log
input {
  file {
    type => "my-app"
    path => [ "/var/log/myapp/*.log" ]
    codec => "json"
  }
}

# Output can go many places, here we send to elasticsearch (pick one below)
output {
  elasticsearch {
    # Do this if elasticsearch is running somewhere else
    host => "your.elasticsearch.hostname"
    # Do this if elasticsearch is running on the same machine
    host => "localhost"
    # Do this if you want to run an embedded elastic search in logstash
    embedded => true
  }
}
Then start/restart logstash as such: bin/logstash agent -f myapp.conf web
Go to elasticsearch on http://your-elasticsearch-host:9292 to see the logs coming in.
If I understand correctly, the problem you have is not about sending your logs back to the server (and if it were, #Jackie-xu provided some hints), but rather about how to send them to Elasticsearch most efficiently.
Actually the vast majority of users of the classic Logstash/Elasticsearch/Kibana stack are used to having an application that logs into a file, then using Logstash's file input plugin to parse that file and send the result to Elasticsearch. Since #methai gave a good explanation about it, I won't go any further this way.
But what I would like to bring up is that:
You are not forced to use Logstash.
Actually Logstash's main role is to collect the logs, parse them to identify their structure and recurrent fields, and finally output them in a JSON format so that they can be sent to Elasticsearch. But since you are already manipulating JavaScript on the client side, one can easily imagine talking directly to the Elasticsearch server.
For example, once you have caught a JavaScript exception, you could do the following:
var xhr = new XMLHttpRequest();
xhr.open("PUT", "http://your-elasticsearch-host:9292", true);
var data = {
  lineNumber: lineNumber,
  message: message,
  url: url
};
xhr.send(JSON.stringify(data));
By doing this, you are talking directly from the client to the Elasticsearch server. I can't imagine a simpler and faster way to do it (but note that this is just theory, I never tried it myself, so reality could be more complex, especially if you want special fields like date timestamps to be generated ;)). In a production context you will probably have security issues, and probably a proxy server between the client and the ES server, but the principle is there.
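One common shape for that proxy, sketched under the same caveat: a tiny Express relay that accepts the client's error payload and forwards it to Elasticsearch server-side (the route, index, and host names are illustrative):

var express = require('express');
var request = require('request'); // assumption: the classic "request" HTTP client

var app = express();
app.use(express.json());

app.post('/es-proxy', function (req, res) {
  // Forward the error document to ES; index and host are placeholders
  request.post({
    url: 'http://your-elasticsearch-host:9200/client-errors/_doc',
    json: req.body
  }, function (err) {
    res.sendStatus(err ? 502 : 204);
  });
});

app.listen(3000);

This keeps the ES cluster off the public internet and gives you one place to add authentication, rate limiting, or enrichment (timestamps, user agent) before indexing.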
If you absolutely want to use Logstash, you are not forced to use a file input.
If, for the purpose of harmonizing, doing the same as everyone else, or using advanced Logstash parsing configuration, you want to stick to Logstash, you should take a look at all the alternative inputs to the basic file input. For example, I used to use a pipe myself, with a process in charge of collecting the logs and writing them to standard output. There is also the possibility to read on an open TCP socket, and a lot more; you can even add your own (a TCP sketch follows below).
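As an illustration of the TCP option, a hedged sketch of a Node process writing JSON log lines straight to a Logstash tcp input; host and port are placeholders, and it assumes Logstash is configured with a matching tcp input using the json codec:

var net = require('net');

// Placeholder host/port; must match the Logstash tcp input
var logSocket = net.connect(5000, 'your-logstash-host');

function logError(record) {
  // One JSON document per line, which the json codec can decode
  logSocket.write(JSON.stringify(record) + '\n');
}

logError({ message: 'example client error', url: '/page', lineNumber: 42 });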
You would have to catch all client side errors first (and send these to your server):
window.onerror = function (message, url, lineNumber) {
  // Send error to server for storage
  yourAjaxImplementation('http://domain.com/error-logger/', {
    lineNumber: lineNumber,
    message: message,
    url: url
  })
  // Allow default error handling, set to true to disable
  return false
}
Afterwards you can use NodeJS to write these error messages to a log. Logstash can collect these, and then you can use Kibana to visualise.
Note that according to Mozilla, window.onerror doesn't appear to work for every error. You might want to switch to something like Sentry (if you don't want to pay, you can directly get the source from GitHub).
Logging errors through the default built-in file logging allows your errors to be preserved, and it also allows your kernel to optimize the writes for you.
If you really think that it is not fast enough (do you get that many errors?) you could just put them into Redis.
Logstash has a Redis pub/sub input, so you can store the errors in Redis and Logstash will pull them out and, in your case, store them in Elasticsearch.
I'm presuming logstash/es are on another server; otherwise there really is no point doing this, as ES has to store the data on disk as well, and it is not nearly as efficient as writing a logfile.
With whatever solution you go with, you'll want to store the data, e.g. write it to disk. Appending to a single (log) file is highly efficient, and when preserving data the only way you can handle more is to shard it across multiple disks/nodes.
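A hedged sketch of that Redis variant, assuming the classic callback-style "redis" npm client and a Logstash redis input reading the same key (the errorQueue key and the route are illustrative):

var redis = require('redis');
var client = redis.createClient(); // defaults to localhost:6379

// Illustrative Express endpoint receiving client-side errors
app.post('/error-logger', function (req, res) {
  // RPUSH appends the JSON record; a Logstash redis input with
  // data_type => "list" and the same key pops records off the other end
  client.rpush('errorQueue', JSON.stringify(req.body), function (err) {
    res.sendStatus(err ? 500 : 204);
  });
});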

Is there an equivalent of Netscape navigator functions in nodejs?

Can I access the inbuilt navigator functions like isinNet() or DomainNameorHost() from nodejs?
Since Node.js runs on the server, not the browser, you can't access functions that are only provided in a browser.
Most developers use a framework like Express to create a web service on Node.js.
In a route, such as
app.route("/play", function(req,res){
// code that handles URL /play
});
there is a callback function that is called when a request arrives for that route.
The req object parameter contains everything about the request.
req.ip is the upstream (incoming) ip address.
I looked around on npm for a module that might map remote IPs to hostnames and could not find one. Presumably all it would do is a reverse DNS lookup, which could take time and hold up request processing.
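For what it's worth, Node's built-in dns module can do that reverse lookup itself; a minimal sketch (the /play route is illustrative, and since PTR lookups can be slow they are best treated as asynchronous best-effort rather than done on every request):

const dns = require('dns');

app.get('/play', (req, res) => {
  dns.reverse(req.ip, (err, hostnames) => {
    // hostnames is an array of PTR records; err is set when none exist
    res.send(err ? req.ip : hostnames[0]);
  });
});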
