How to log JS errors from a client into Kibana? - javascript

I have a web application with a back end in NodeJS, and logstash/elasticsearch/kibana to handle system logs (access_error.log, messages.log, etc.).
Now I also need to record all client-side JavaScript errors into kibana. What is the best way to do this?
EDIT: I have to add some information to this question, as @Jackie Xu provided a partial solution to my problem. As follows from my comment:
I'm most interested in implementing server-side error handling. I don't think writing each error to a file is efficient; I'm looking for best practices to make it more performant.
I need to handle JS error records on the server side more efficiently than just writing them to a file. Can you suggest some scenarios for how I could improve server-side logging performance?

When you say client, I'm assuming here that you mean a logging client and not a web client.
First, make it a habit to log your errors in a common format. Logstash likes consistency, so if you're putting text and JSON in the same output log, you will run into issues. Hint: log in JSON. It's awesome and incredibly flexible.
The overall process will go like this:
Error occurs in your app
Log the error to file, socket, or over a network
Tell logstash how to get (input) that error (i.e. from file, listen over network, etc)
Tell logstash to send (output) the error to Elasticsearch (which can be running on the same machine)
In your app, try using the bunyan logger for node. https://github.com/trentm/node-bunyan
node app (index.js):
var bunyan = require('bunyan');

var log = bunyan.createLogger({
    name: 'myapp',
    streams: [{
        level: 'info',
        stream: process.stdout           // log INFO and above to stdout
    }, {
        level: 'error',
        path: '/var/log/myapp-error.log' // log ERROR and above to a file
    }]
});

// Log stuff like this
log.info({status: 'started'}, 'foo bar message');

// Also, in express you can catch all errors like this
app.use(function(err, req, res, next) {
    log.error(err);
    res.status(500).send('An error occurred');
});
Then you need to configure logstash to read those JSON log files and send to Elasticsearch/Kibana. Make a file called myapp.conf and try the following:
logstash config (myapp.conf):
# Input can read from many places, but here we're just reading the app error log
input {
    file {
        type => "my-app"
        path => [ "/var/log/myapp-error.log" ]
        codec => "json"
    }
}

# Output can go many places, here we send to elasticsearch (pick one below)
output {
    elasticsearch {
        # Do this if elasticsearch is running somewhere else
        host => "your.elasticsearch.hostname"
        # Do this if elasticsearch is running on the same machine
        host => "localhost"
        # Do this if you want to run an embedded elasticsearch in logstash
        embedded => true
    }
}
Then start/restart logstash as such: bin/logstash agent -f myapp.conf web
Go to http://your-logstash-host:9292 (the web interface started by the web argument above) to see the logs coming in.
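You can also sanity-check that documents are arriving by querying Elasticsearch directly over its REST API (assuming the default REST port of 9200):
curl 'http://your-elasticsearch-host:9200/_search?q=type:my-app&pretty'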

If I understand correctly, the problem you have is not about sending your logs back to the server (or if it is, @Jackie-xu provided some hints), but rather about how to send them to elasticsearch the most efficiently.
Actually the vast majority of users of the classic Logstash/Elasticsearch/Kibana stack are used to having an application log into a file, then using Logstash's file input to parse that file and send the result to ElasticSearch. Since @methai gave a good explanation of it I won't go any further this way.
But what I would like to bring on is that:
You are not forced to use Logstash.
Actually Logstash's main role is to collect the logs, parse them to identify their structure and recurrent fields, and finally output them in a JSON format so that they can be sent to ElasticSearch. But since you are already manipulating JavaScript on the client side, one can easily imagine that you would talk directly to the Elasticsearch server.
For example, once you have caught a JavaScript exception, you could do the following:
var xhr = new XMLHttpRequest();
// POST the error as a JSON document to the Elasticsearch REST API
// (the default REST port is 9200; the index/type names here are just an example)
xhr.open("POST", "http://your-elasticsearch-host:9200/client-errors/error", true);
xhr.setRequestHeader("Content-Type", "application/json");
var data = {
    lineNumber: lineNumber,
    message: message,
    url: url
};
xhr.send(JSON.stringify(data));
By doing this, you are directly talking from the client to the ElasticSearch server. I can't imagine a simpler and faster way to do that (but note that this is just theory, I never tried it myself, so reality could be more complex, especially if you want special fields like date timestamps to be generated ;)). In a production context you will probably have security issues, and probably a proxy server between the client and the ES server, but the principle is there.
If you absolutely want to use Logstash you are not forced to use a file input
If, for the purpose of harmonizing, doing the same as everyone else, or using advanced logstash parsing configuration, you want to stick to Logstash, you should take a look at all the alternative inputs to the basic file input. For example I used to use a pipe myself, with a process in charge of collecting the logs and writing them to standard output. There is also the possibility to read on an open TCP socket, and a lot more; you can even add your own.
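As a minimal sketch, a TCP input in the logstash config could look like this (the port number is just an illustration):
input {
    tcp {
        port => 5000
        codec => "json"
    }
}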

You would have to catch all client side errors first (and send these to your server):
window.onerror = function (message, url, lineNumber) {
    // Send error to server for storage
    yourAjaxImplementation('http://domain.com/error-logger/', {
        lineNumber: lineNumber,
        message: message,
        url: url
    })
    // Allow default error handling, set to true to disable
    return false
}
Afterwards you can use NodeJS to write these error messages to a log. Logstash can collect these, and then you can use Kibana to visualise.
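A minimal sketch of such an endpoint, assuming Express with body-parser and a JSON POST body (the route matches the example above; the log path is arbitrary):
var express = require('express');
var bodyParser = require('body-parser');
var fs = require('fs');

var app = express();
app.use(bodyParser.json());

// Append each client error as one JSON line, ready for logstash's file input
app.post('/error-logger/', function (req, res) {
    fs.appendFile('/var/log/client-errors.log', JSON.stringify(req.body) + '\n', function () {
        res.sendStatus(204);
    });
});

app.listen(3000);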
Note that according to Mozilla window.onerror doesn't appear to work for every error. You might want to switch to something like Sentry (if you don't want to pay, you can directly get the source from GitHub).

Logging errors through the default built-in file logging allows your errors to be preserved, and it also allows your kernel to optimize the writes for you.
If you really think that it is not fast enough (you get that many errors?) you could just put them into redis.
Logstash has a redis pub/sub input so you can store the errors in redis and logstash will pull them out and store them in your case in elasticsearch.
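As a sketch, pushing errors onto a redis list from node could look like this (assuming the redis npm package; the key name is arbitrary, and logstash's redis input would read it with data_type => "list" and key => "app-errors"):
var redis = require('redis');
var client = redis.createClient();

function logError(err) {
    // LPUSH is an in-memory write on the redis side; logstash drains the list
    client.lpush('app-errors', JSON.stringify({
        message: err.message,
        stack: err.stack,
        timestamp: new Date().toISOString()
    }));
}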
I'm presuming logstash/es are on another server; otherwise there really is no point doing this, since es has to store the data on disk as well, and it is not nearly as efficient as writing a logfile.
With whatever solution you go with, you'll want to store the data, e.g. write it to disk. Appending to a single (log) file is highly efficient, and when preserving data the only way you can handle more is to shard it across multiple disks/nodes.

Related

Client (JRE) read server (node) variables directly?

I am trying to set up a server where clients can connect and essentially "raise their hand", which lights up for every client, but only one at a time. I currently just use the express module to send a POST response on button-click. The server takes it as JSON and writes it to a file. All the clients are constantly requesting this file to check the status to see if the channel is clear.
I suspect there is a more streamlined approach for this, but I do not know what modules or methods might be best. Can the server push variables to the clients in some way, instead of the clients constantly requesting a file? Then the client script can receive the variable and change the page elements accordingly?
Usually, this kind of task is done by using WebSockets. Since you already have socket.io set up, it'd be great to reuse it.
From the server, start emitting different messages:
socket.emit("hand", { userId: <string> });
From the client, listen to the new event and invoke whatever the appropriate behavior is:
socket.on("hand", (payload) => {
// payload.userId contains user ID
});
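A minimal server-side sketch of the "only one hand at a time" rule could look like this (the event names, the httpServer variable, and the in-memory flag are just illustrations):
var io = require('socket.io')(httpServer);

var raisedBy = null; // socket.id of whoever currently has their hand up

io.on('connection', function (socket) {
    socket.on('raise-hand', function () {
        if (raisedBy === null) { // channel is clear
            raisedBy = socket.id;
            io.emit('hand', { userId: socket.id }); // light up for every client
        }
    });
    socket.on('lower-hand', function () {
        if (raisedBy === socket.id) {
            raisedBy = null;
            io.emit('hand-cleared');
        }
    });
    socket.on('disconnect', function () {
        if (raisedBy === socket.id) {
            raisedBy = null;
            io.emit('hand-cleared');
        }
    });
});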

Sharing variables between client and server in node

Let me preface by saying that I have spent a considerable amount of time trying to figure out the solution to this problem but I have not discovered something that works. I am using node and want to share a variable between my app.js server file and a client side javascript file (demo.js).
I run node app.js to launch the server and demo.js runs in the client. I have tried using module.exports and export, but when I try importing in the demo.js file or referring to the module.exports var I get errors. Maybe I'm approaching this in the wrong way.
For example, I am trying to use the node wikipedia package to scrape data. I have the following in my app.js file:
var wikipedia = require('node-wikipedia');

wikipedia.page.data('Clifford_Brown', { content: true }, function(response) {
    console.log(response);
    export const response = response;
    module.exports.data = response
});
In my demo.js file I have tried importing this response var and using the module.exports var but I have been unsuccessful.
Anyone have any solutions to this issue or different approaches I should take?
Browser javascript files run in the browser. node.js javascript files run on the server. You cannot directly export things from one to the other. They are on completely different computers in different locations.
It is very important for developers to understand the notion that server-side code runs on the server and client-side code runs on the browser. The two cannot directly call each other or reach the other's variables. Imagine your server is in a data center in Seattle and the browser is running on a computer in Venice.
See How to access session variables in the browser for your various choices, described in a previous answer.
In a nutshell, you can have the server insert a javascript variable into the generated web page so that when the javascript runs in the web page on the browser, it can then access that variable in its own page. Or, you can create an Ajax call so the client can request data directly from the server. Or you can have the server put some data in a cookie which the Javascript in the browser can then access.
If the data is easily known by the server at the time the page is generated and you are using some sort of page template system, then it is very easy to just add a <script> tag to the generated page that defines one or more Javascript variables that contain the desired information. Then, the client-side Javascript can just refer to those variables to have access to the data.
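As a sketch of the cookie option, assuming Express on the server (the cookie name is arbitrary):
// Server side: put the data in a cookie when serving the page
res.cookie('userData', JSON.stringify({ key: 'value' }));

// Client side: read it back out of document.cookie
var match = document.cookie.match(/(?:^|; )userData=([^;]*)/);
var userData = match ? JSON.parse(decodeURIComponent(match[1])) : null;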
To pass data over HTTP there is a request message and a response message, and the data needs to be inside one of those messages.
In the request you can pass variables in the request URL:
http://host_name/path?key=value
or inside the request body or headers.
In the response you pass back variables in the response headers or response body.
First Example:
One way of processing a URL request from the browser explicitly while passing variables is to set up your server to render an HTML page with those variables embedded.
If you use a templating engine like jade, you can pass the variables directly into the template using res.render('index', { key: 'value' }), rather than using a promise-based API call which would run when the user performs some action on the client.
For instance:
// SERVER: set up the rendering engine
app.get('/', function(req, res) {
    res.render('index', { key: 'value' });
});
This will render the index view to the client, with the key-value pair passed into the template file (for example jade or ejs) used to serve up the HTML.
Second Example:
Using axios you can set up an action to call a server API (you can also pass variables in the URL, headers or body). Using the promise pattern you can then use these variables after the server API has responded.
// CLIENT: set up axios
axios.get(URL + '/getkeyvalue')
    .then(function(response) {
        const value = response.data.key;
    });
On your server, using express, you send back the response variables in the body like this (this is where you would receive the optional request variables mentioned above):
// SERVER: set up express
app.get('/getkeyvalue', function(req, res) {
    res.send({ key: 'value' });
});
Note that these are simple examples.
They are two completely different systems. The best way to accomplish what you're trying to do is to create a variable in your HTML on the server side by stringifying your data:
<script> var my_data = <%= JSON.stringify(data) %> </script>
That's an example using EJS, a common templating language in Express (note the unescaped <%- output, so the quotes in the JSON don't get HTML-escaped).

How to make a REST GET request (with authentication) and parse the result in javascript?

Due to circumstances beyond my control, JavaScript is the only language option available to me. I'm a beginner and am not even sure if I'm approaching the problem in a "recommended" manner.
Simply put, a customer has set up a MarkLogicDB server online and has given me read-only access. I can query the server with HTTP GET to return an XML document that has to be parsed. I've been able to create a curl command to return the data I need (example below):
curl --anyauth --user USERNAME:PASSWORD \
    -X GET \
    http://test.com:8020/v1/documents?uri=/path/to/file.xml
The above returns the requested XML file. Can someone please show me how I could convert the above to JavaScript code? Additionally, how would I parse the data? Let's say I want to get all the info from a certain element or attribute. How can this be accomplished?
This would be trivial for me to do in Java/.NET, but after reading plenty of online tutorials on JavaScript, my head is spinning. Every tutorial talks about web browsers, but I'm doing this in a server environment (Parse.com Cloud Code). There isn't any UI or HTML involved. For debugging, I just read the logs created with console.log().
https://parse.com/docs/cloud_code_guide#networking seems pretty clear, as far as it goes.
Parse.Cloud.httpRequest({
    url: 'http://test.com:8020/v1/documents',
    params: {
        uri: '/path/to/file.xml'
    },
    success: function(httpResponse) {
        console.log(httpResponse.text);
    },
    error: function(httpResponse) {
        console.error('Request failed with response code ' + httpResponse.status);
    }
});
But you'll also need authentication. The Parse.Cloud.httpRequest docs don't include any examples for that. If you have support with that vendor, ask the vendor about digest authentication.
If you're stuck you might try adding user and password to the httpRequest params and see what happens. It might work, if the developers of this stack followed the XMLHttpRequest convention.
Failing support from the vendor and existing functionality, you'll have to implement authentication yourself, in JavaScript. This works by generating strings that go into the request headers. These resources should help:
http://en.wikipedia.org/wiki/Digest_access_authentication
http://en.wikipedia.org/wiki/Basic_access_authentication
Basic auth is much easier to implement, but I'd recommend using digest for security reasons. If your HTTP server doesn't support that, try to get the configuration changed.
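For example, here is a rough sketch of sending a Basic auth header by hand; Parse.Cloud.httpRequest accepts a headers object, but whether Buffer is available for base64-encoding in your Cloud Code environment is an assumption worth verifying:
// Assumes Buffer exists in the Cloud Code runtime; otherwise use any base64 helper
var credentials = new Buffer('USERNAME:PASSWORD').toString('base64');

Parse.Cloud.httpRequest({
    url: 'http://test.com:8020/v1/documents',
    params: { uri: '/path/to/file.xml' },
    headers: { 'Authorization': 'Basic ' + credentials },
    success: function(httpResponse) {
        console.log(httpResponse.text);
    },
    error: function(httpResponse) {
        console.error('Request failed with response code ' + httpResponse.status);
    }
});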

What is the best/proper configuration? (javascript SOAP)

I need to retrieve data from a web service (via SOAP) during a nightly maintenance process on a LAMP server. This data then gets applied to a database. My research has returned many options and I think I have lost sight of the forest for the trees; partially because of the mix of client and server terms and perspectives of the articles I have read.
Initially I installed node.js and node-soap. I wrote a simple script to test functionality:
var soap = require('/usr/local/lib/node_modules/npm/node_modules/soap');

var url = "https://api.authorize.net/soap/v1/Service.asmx?WSDL";
soap.createClient(url, function(err, client)
{
    if (typeof client == 'undefined')
    {
        console.log(err);
        return;
    }
    console.log('created');
});
This uses a demo SOAP source and it works just fine. But when I use the actual URL I get a 503 error:
[Error: Invalid WSDL URL: https://*****.*****.com:999/SeniorSystemsWS/DataExportService.asmx?WSDL
Code: 503
Response Body: <html><body><b>Http/1.1 Service Unavailable</b></body> </html>]
Accessing this URL from a browser returns a proper WSDL definition. I am told by the provider that the 503 is due to a same-origin policy violation. Next, I researched adding CORS to node.js. This triggered my stepping back and asking the question: am I in the right forest? I'm not sure. So I am looking for a command-line, SOAP-capable, CORS-capable app (or equivalent) configuration. I am a web developer primarily using PHP and JavaScript, so JavaScript is where I turned first, but that is not a requirement. Ideas? Or is there a solution to the current script error? (The best I have found is using jQuery in node.js, which includes CORS.)
Most likely, this error comes from your web server itself.
Please go through this link; it might be helpful.
http://pcsupport.about.com/od/findbyerrormessage/a/503error.htm
Also, you can open your WSDL in a web browser and search for the soap:address location tag under services to figure out the correct URL you are trying to invoke from your script. Access this URL directly in the browser and see what you get.
I think I have a better approach to the task. I found over the weekend that PHP has a full SOAP client. I wrote the same basic login script in PHP and it runs just fine. I get a valid authentication code in the response to loginExt (which is required in further requests), so it looks like things are working. I will comment here after verifying that I can actually use the web service.

Multi-OS text-based database with an engine for Python and JavaScript

This is probably a far fetch, but maybe someone knows a good solution for it.
Introduction
I'm currently making an application in Python with the new wxPython 2.9, which has the new html2 library, which inherits the native browser of each OS (Safari/IE/Firefox/Konqueror/etc.), which is really great.
Goal/Purpose
What I'm currently aiming for is to process large chunks of data and analyze them super fast with Python (currently about 110,000 entries, turning into about 1,500,000 to 2,250,000 results in a dictionary). This works very fast and is also dynamic, so it will only do that first big fetch once (which takes about 2-4 seconds) and afterwards just keeps listening for new data being created on disk.
So far, so good. Now with the new wxPython html2 library I'm making the new GUI. It's mainly made to display pages, so what I have made now is a model in an /html/ folder (with HTML/CSS/jQuery), and it will dynamically look for JSON files (fetched with jQuery), which are practically complete dumps of the massive dictionaries that the Python script is building in the background (as a daemon) in a parallel thread.
JavaScript doesn't seem to have issues with reading a big JSON file, and because everything is (and stays) local it doesn't really incur slowness or anything. Also, the CPU and memory usage is very low.
Conclusion
But here comes the bottleneck. From the JavaScript point of view, the handling of the big JSON file is not really a joyride. I have to do a lot of searching and matching for all the data I need to get, and it also creates a lot of redundant re-looping through the same big chunks of entries.
Question
I'm wondering if there is any kind of "engine" that is implemented for both Python and JavaScript that can handle JSON files, or maybe other text-based files, as a database. Meaning you can really have a MySQL-like structure (not to the full extent, of course), where you can at least define a table structure which holds the data, and do reads/writes/updates on it methodically.
The app I'm currently developing is multi-OS based (at least Ubuntu, OS X and Windows XP+). I also really don't want to create more clutter than using wxPython (for distribution/dependency reasons), such as an external database (like running a MySQL server on localhost), so I want to keep everything purely inside my Python distro's folder. This is also to avoid writing massive checks for whether the user already has servers/databases in use that might collide with what my app would then install.
Final Notes
I'm kind of aiming to build some kind of API myself too, for future projects, to make this way of development standard for my Python scripts that need a GUI. Now that wxPython can more easily embrace modern browser technologies, there seems to be no limit anymore to building super fast, dynamic and responsive graphical Python apps.
Why not just stick the data into a SQLite database and then have both Python and Javascript hit that? See also Convert JSON to SQLite in Python - How to map json keys to database columns properly?
SQLite is included in all modern versions of Python. You'll have to check out the SQLite website for its limitations.
Kind of got something figured out, through running a CGI HTTP server and letting Python in there fetch SQLite queries for JavaScript's AJAX calls. Here's a small demo (only tested on OS X):
Folder structure
main.py
cgi/index.py
data/
html/index.html
html/scripts/jquery.js
html/scripts/main.js
html/styles/main.css
Python server (main.py)
### CGI Server ###
import os
import threading
import CGIHTTPServer
import BaseHTTPServer

class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ['/cgi']
    # Mute the messages in the shell
    def log_message(self, format, *args):
        return

httpd = BaseHTTPServer.HTTPServer(('', 61350), Handler)
#httpd.serve_forever()
thread = threading.Thread(name='CGIHTTPServer', target=httpd.serve_forever)
thread.setDaemon(True)
thread.start()

#### TEST SQLite ####
# Make the database file if it doesn't exist
if not os.path.exists('data/sqlite.db'):
    db_file = open('data/sqlite.db', 'w')
    db_file.write('')
    db_file.close()

import sqlite3
conn = sqlite3.connect('data/sqlite.db')
cursor = conn.cursor()
cursor.execute('CREATE TABLE info(id INTEGER UNIQUE PRIMARY KEY, name VARCHAR(75), folder_name VARCHAR(75))')
cursor.execute('INSERT INTO info VALUES(null, "something1", "something1_name")')
cursor.execute('INSERT INTO info VALUES(null, "something2", "something2_name")')
conn.commit()
Python SQLite processor (cgi/index.py) (this demo only handles SELECT; it needs to be made more dynamic)
#!/usr/bin/env python
import cgi
import json
import sqlite3

print 'Content-Type: text/json\n\n'

### Fetch GET-data ###
form = cgi.FieldStorage()
obj = {}

### SQLite fetching ###
query = form.getvalue('query', 'ERROR')
output = ''
if query == 'ERROR':
    output = 'WARNING! No query was given!'
else:
    # WARNING: The path probably needs `../data/sqlite.db` if PYTHONPATH is not defined
    conn = sqlite3.connect('data/sqlite.db')
    cursor = conn.cursor()
    # NOTE: executing a raw query string from the client is fine for a local
    # demo, but would be an SQL injection hole anywhere else
    cursor.execute(query)
    # TODO: Add functionality/detect if it's a SELECT or an INSERT/UPDATE (then we need to conn.commit())
    result = cursor.fetchall()
    if len(result) > 0:
        output = []
        for row in result:
            buff = []
            for entry in row:
                buff.append(entry)
            output.append(buff)
    else:
        output = 'WARNING! No results found'
obj = output

### Return the data in JSON (map) format for JavaScript ###
print json.dumps(obj)
JavaScript (html/scripts/main.js)
'use strict';

$(document).ready(function() {
    // JSON data read test (the table name matches the schema created in main.py)
    var query = 'SELECT * FROM info';
    $.ajax({
        url: 'http://127.0.0.1:61350/cgi/index.py?query=' + encodeURIComponent(query),
        success: function(data) {
            console.log(data);
        },
        error: function() {
            console.log('Something went wrong while fetching the query.');
        }
    });
});
And that wraps it up. The console output in the browser is:
[
    [1, "something1", "something1_name"],
    [2, "something2", "something2_name"]
]
With this methodology you can let Python and JavaScript read and write in the same database, while Python keeps doing its system tasks (as a daemon) and updating the database entries, and JavaScript keeps checking for new data.
This method could probably also add room for listeners and other means of communication between the two.
As written, main.py will stop running instantly, because the server thread is a daemon. In my wxPython script, the code that follows keeps the daemon (server) alive until the application stops. If someone else wants to use this code in the future, just make sure the server code runs after the SQLite initialization, and uncomment httpd.serve_forever() to keep it running.
