I am testing meteor-persistent-minimongo2 for offline data support. Although the offline data feature works fine, I see some insert failed errors in the browser console: WriteError({"code":11000,"index":0,"errmsg":"E11000 duplicate key error ..."}). These errors occur randomly as I keep refreshing the page: sometimes there is an error for only one record, other times for three records, and so on. There is no error on the server side, so this is related to the client and minimongo.
I use the package as follows in mycollection.js:
import { Mongo } from 'meteor/mongo';
// Assuming the package's export; adjust the import path if your package name differs
import { PersistentMinimongo2 } from 'meteor/frozeman:persistent-minimongo2';

export const MyCollection = new Mongo.Collection('mycollection');

if (Meteor.isClient) {
    console.log('Setting Mini Mongo Observer');
    const persistentColl = new PersistentMinimongo2(MyCollection, 'testapp');
}
Apparently, meteor-persistent-minimongo2 tries to insert an already-existing document into minimongo, but the source code performing the insert operation looks correct. What am I missing here?
I have this structure:
Couchbase Server <---> Couchbase Sync Gateway <---> PouchDB
and I have 4 databases.
Every local database is synced to its remote counterpart and every remote DB is synced back to the local one; syncing is live.
When I load the page, syncing starts, but every second I get a lot of errors in the console log.
These errors use a lot of memory (my Chrome tab uses about 800 MB of memory after 20 minutes).
How can I prevent this?
The problem seems to be my JavaScript sync configuration:
var syncOptions = {
    live: true,
    retry: true
};
var localDB = new PouchDB("building");
var remoteDB = new PouchDB("http://xxx.azure.com:4984/building");
localDB.sync(remoteDB, syncOptions);
If I set "retry" to false there are no problems, but then live sync doesn't work. If I set "retry" to true, my page generates about 4 errors every second (because I'm syncing 4 databases).
What can I do?
Thanks
EDIT
I'm using pouchdb-5.4.1.js
As the console suggests, these are not errors (I mean, they are, but they are perfectly normal). This happens because PouchDB is not officially supported by Couchbase Sync Gateway. To make the integration work, PouchDB maintains its own replication checkpoints on the Couchbase side. Typically you will see a lot of errors on the paths "_local", "_bulk_get" and "_all_docs"; these come from rough edges in the integration between Couchbase Sync Gateway and PouchDB. But you have nothing to worry about if you have written your sync function properly. It will get the job done, albeit not as efficiently as we would like.
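If the console noise itself is the concern, one option is to attach handlers to the object returned by sync() and observe the failures yourself (note that the browser may still log failed network requests on its own). A minimal sketch using PouchDB's documented sync events; the logging policy here is just an illustration:
var sync = localDB.sync(remoteDB, syncOptions);
sync.on('error', function (err) {
    // Replication hit an unrecoverable error; with retry: true most
    // transient failures never reach here, they are retried silently
    console.warn('sync error', err);
}).on('denied', function (err) {
    // A document failed to replicate, e.g. rejected by the sync function
    console.warn('sync denied', err);
});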
I have a web application with a NodeJS back end and logstash/elasticsearch/kibana to handle system logs (access_error.log, messages.log, etc.).
Right now I need to record all JavaScript client-side errors into kibana as well. What is the best way to do this?
EDIT: I have to add additional information to this question. As @Jackie Xu provided a partial solution to my problem, and as follows from my comment:
I'm most interested in server-side error handling. I don't think it is efficient to write each error into a file. I'm looking for best practices on how to make it more performant.
I need to handle JS error records on the server side more efficiently than just writing them into a file. Could you provide some scenarios for how I could increase server-side logging performance?
When you say client, I'm assuming here that you mean a logging client and not a web client.
First, make it a habit to log your errors in a common format. Logstash likes consistency, so if you're putting text and JSON in the same output log, you will run into issues. Hint: log in JSON. It's awesome and incredibly flexible.
The overall process will go like this:
Error occurs in your app
Log the error to file, socket, or over a network
Tell logstash how to get (input) that error (e.g. from a file, by listening over the network, etc.)
Tell logstash to send (output) the error to Elasticsearch (which can be running on the same machine)
In your app, try using the bunyan logger for node. https://github.com/trentm/node-bunyan
node app index.js

var bunyan = require('bunyan');
var express = require('express'); // assuming express, since the error handler below is express middleware
var app = express();

var log = bunyan.createLogger({
    name: 'myapp',
    streams: [{
        level: 'info',
        stream: process.stdout           // log INFO and above to stdout
    }, {
        level: 'error',
        path: '/var/log/myapp-error.log' // log ERROR and above to a file
    }]
});

// Log stuff like this
log.info({status: 'started'}, 'foo bar message');

// Also, in express you can catch all errors like this
app.use(function (err, req, res, next) {
    log.error(err);
    res.send(500, 'An error occurred');
});
Then you need to configure logstash to read those JSON log files and send to Elasticsearch/Kibana. Make a file called myapp.conf and try the following:
logstash config myapp.conf

# Input can read from many places, but here we're just reading the app error log
input {
    file {
        type => "my-app"
        path => [ "/var/log/myapp/*.log" ]
        codec => "json"
    }
}

# Output can go many places, here we send to elasticsearch (pick one host option below)
output {
    elasticsearch {
        # Use this if elasticsearch is running somewhere else:
        # host => "your.elasticsearch.hostname"
        # Use this if elasticsearch is running on the same machine:
        host => "localhost"
        # Or run an embedded elasticsearch inside logstash:
        # embedded => true
    }
}
Then start/restart logstash as such: bin/logstash agent -f myapp.conf web
Browse to http://your-elasticsearch-host:9292 (the logstash web interface started by the "web" argument above) to see the logs coming in.
If I understand correctly, the problem you have is not about sending your logs back to the server (or if it was, @Jackie-xu provided some hints), but rather about how to send them to elasticsearch most efficiently.
Actually the vast majority of users of the classic Logstash/Elasticsearch/Kibana stack are used to having an application that logs into a file, then use Logstash's file input plugin to parse that file and send the result to ElasticSearch. Since @methai gave a good explanation about it I won't go any further down this path.
But what I would like to point out is this:
You are not forced to use Logstash.
Actually Logstash's main role is to collect the logs, parse them to identify their structure and recurring fields, and finally output them in JSON so that they can be sent to ElasticSearch. But since you are already manipulating JavaScript on the client side, one can easily imagine talking directly to the Elasticsearch server.
For example, once you have caught a JavaScript exception, you could do the following:
var xhr = new XMLHttpRequest();
// Elasticsearch's REST API listens on port 9200 by default and expects an
// index/type path; "logs" and "error" here are just illustrative names
xhr.open("POST", "http://your-elasticsearch-host:9200/logs/error", true);
xhr.setRequestHeader("Content-Type", "application/json");
var data = {
    lineNumber: lineNumber,
    message: message,
    url: url
};
xhr.send(JSON.stringify(data));
By doing this, you are talking directly from the client to the ElasticSearch server. I can't imagine a simpler or faster way to do it (but note that this is just theory, I never tried it myself, so reality could be more complex, especially if you want special fields like timestamps to be generated ;)). In a production context you will probably have security issues to consider, and probably a proxy server between the client and the ES server, but the principle is there.
If you absolutely want to use Logstash, you are not forced to use a file input.
If, for the purpose of harmonizing, doing the same as everyone else, or using advanced logstash parsing configuration, you want to stick with Logstash, you should take a look at all the alternative inputs to the basic file input. For example, I used to use a pipe myself, with a process in charge of collecting the logs and writing them to standard output. There is also the possibility to read on an open TCP socket, and a lot more; you can even add your own.
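For illustration, a minimal sketch of such a TCP input (the port number is arbitrary):

input {
    tcp {
        port => 5000
        codec => "json"
    }
}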
You would have to catch all client-side errors first (and send these to your server):
window.onerror = function (message, url, lineNumber) {
    // Send error to server for storage
    yourAjaxImplementation('http://domain.com/error-logger/', {
        lineNumber: lineNumber,
        message: message,
        url: url
    })
    // Allow default error handling, set to true to disable
    return false
}
Afterwards you can use NodeJS to write these error messages to a log, have Logstash collect them, and then use Kibana to visualise them.
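For example, a minimal sketch of the receiving endpoint, assuming express with body-parser and the bunyan logger from the earlier answer (the route matches the URL used in the snippet above; everything else is illustrative):

var express = require('express');
var bodyParser = require('body-parser');
var bunyan = require('bunyan');

var log = bunyan.createLogger({
    name: 'client-errors',
    streams: [{ level: 'error', path: '/var/log/myapp/client-error.log' }]
});

var app = express();
app.use(bodyParser.json());

// Receives the payload sent by the window.onerror handler above
app.post('/error-logger/', function (req, res) {
    log.error(req.body, 'client-side error');
    res.sendStatus(200);
});

app.listen(3000);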
Note that according to Mozilla window.onerror doesn't appear to work for every error. You might want to switch to something like Sentry (if you don't want to pay, you can directly get the source from GitHub).
Logging errors through the default built-in file logging allows your errors to be preserved and it also allows your kernel to optimize the writes for you.
If you really think that it is not fast enough (do you get that many errors?), you could just put them into redis.
Logstash has a redis pub/sub input, so you can store the errors in redis and logstash will pull them out and store them, in your case, in elasticsearch.
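A minimal sketch of that pipeline, assuming errors are published on a redis channel named "errors" (the channel name and host are illustrative):

// Node side: publish each error on a redis channel
var redis = require('redis').createClient();
redis.publish('errors', JSON.stringify({message: message, url: url}));

logstash config:

input {
    redis {
        host => "localhost"
        data_type => "channel"
        key => "errors"
        codec => "json"
    }
}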
I'm presuming logstash/es are on another server; otherwise there really is no point doing this, since es has to store the data on disk as well, and that is not nearly as efficient as writing a logfile.
With whatever solution you go with, you'll want to store the data, e.g. write it to disk. Appending to a single (log) file is highly efficient, and when preserving data the only way you can handle more is to shard it across multiple disks/nodes.
I am new to Meteor and I am trying to retrieve data from an already existing MongoDB database.
Here's what I have so far:
I set the env variable MONGO_URL to the MongoDB URL:
export MONGO_URL="mongodb://username:password@address:port/dbname"
Created a new meteor project with the following code:
MyCollection = new Meteor.Collection('mycollection');

if (Meteor.isClient) {
    //Meteor.subscribe("mycollection");
    console.log(MyCollection.findOne());
    Template.hello.greeting = function () {
        return MyCollection.findOne();
    };
}

if (Meteor.isServer) {
    Meteor.startup(function () {
        // code to run on server at startup
        console.log(MyCollection.findOne());
    });
}
I know the server-side console.log(MyCollection.findOne()); works, as it prints the correct data in the terminal.
The problem is with the client side. When I view the page in my browser, the data is blank and console.log(MyCollection.findOne()); shows 'undefined'.
I know that autopublish is on, so I don't have to manually publish the collection from the server side.
I would like to know how I can make the client read from my external MongoDB directly.
Let me know if you have any suggestions!
Even with autopublish on, there is a lag between the client starting and the data being published. At the time your first console.log runs, the documents haven't finished syncing, so findOne will return undefined. It turns out this isn't a big deal; as you get more familiar with meteor, you will see that the results of find operations are often used in non-time-sensitive ways. An easy way to check whether the client has the data is to wait for the page to load, then open the browser console and manually type:
console.log(MyCollection.findOne());
As for your other problem, the greeting needs to be something that can be displayed in html - a string for example. It can't be a document. Assuming your document had a message property you could do:
return MyCollection.findOne().message;
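Because of the startup lag mentioned above, findOne() can still return undefined on first render, so in practice you may want to guard the helper. A minimal sketch, assuming the same message property:

Template.hello.greeting = function () {
    var doc = MyCollection.findOne();
    // doc is undefined until the data has finished syncing to the client
    return doc ? doc.message : 'loading...';
};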
I am building an application for Android using Phonegap / Cordova which uses the device calendar. I've written a plugin ("Calify") that adds events to the device calendar, which works fine. However, when I try to add the device calendar event ID, which the plugin returns, to a record in a Web SQL database, I get the following errors (running in the Android simulator):
file:///android_asset/www/cordova-2.2.0.js: Line 1090 : processMessage failed: Message: S01 Calify507282772 s[{"id":191,"calid":615}]
file:///android_asset/www/cordova-2.2.0.js: Line 1091 : processMessage failed: Error: Error: INVALID_STATE_ERR: DOM Exception 11
Here's what my code looks like.
window.addEvent(function (insertions) {
    data = JSON.parse(insertions);
    console.log(insertions);
    event_id = data[0]['calid'];
    id = data[0]['id'];
    tx.executeSql('UPDATE AFSPRAKEN SET eventID=? WHERE id=' + id, [event_id]);
}, [calendarID, [{start: start.getTime(), end: end.getTime(), title: event_title}]]);
Logging 'insertions' gives me the desired result, as does logging 'event_id' and 'id' individually. It seems to have something to do with the database update (which is asynchronous as well).
Note that this function runs within a database transaction, which is needed because I'm running some other queries outside this calendar event function as well. Commenting out the executeSql call makes the code run without errors.
DOM Exception 11 seems to mean that an object ('insertions', I guess?) is no longer accessible; perhaps it has something to do with that.
I think the mistake I made is pretty obvious now, so for everyone finding this question through Google (the error message doesn't really point in the right direction), here's why my code gave the errors:
The database transaction is asynchronous, as are Phonegap plug-ins. The Phonegap call completes outside the database transaction, which has already finished by then, making it impossible to run database queries inside that transaction. This can be solved in several ways:
Open a new database transaction in the complete callback of the Phonegap communication (sketched below)
Or, like I did, call the Phonegap plugin only after the database transaction has completed
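A minimal sketch of the first option, assuming db is the handle returned by openDatabase (the plugin call signature is taken from the question):

window.addEvent(function (insertions) {
    var data = JSON.parse(insertions);
    // The original transaction has already completed by now, so open a fresh one
    db.transaction(function (tx) {
        tx.executeSql('UPDATE AFSPRAKEN SET eventID = ? WHERE id = ?',
                      [data[0]['calid'], data[0]['id']]);
    });
}, [calendarID, [{start: start.getTime(), end: end.getTime(), title: event_title}]]);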
Overview:
I am trying to avoid a race condition with accessing an IndexedDB from both a webpage and a web-worker.
Setup:
Webpage that saves items to the local IndexedDB as the user works with the site. Whenever the user saves data to the local DB, the record is marked as "Unsent".
Web-worker background thread that pulls data from the IndexedDB, sends it to the server, and, once the server receives it, marks the data in the IndexedDB as "Sent".
Problem:
Since access to the IndexedDB is asynchronous, I cannot guarantee that the user won't update a record at the same time the web-worker is sending it to the server. The timeline is shown below:
Web-worker gets data from DB and sends it to the server
While the transfer is happening, the user updates the data, saving it to the DB.
The web-worker gets the response from the server and then updates the DB to "Sent"
There is now data in the DB that hasn't been sent to the server but is marked as "Sent"
Failed Solution:
After getting the response from the server, I can recheck the row to see if anything has changed. However, I am still left with a small window in which data can be written to the DB but will never be sent to the server.
Example:
After the server says the data is saved, then:
IndexedDB.HasDataChanged(function (changed) {
    // Since this is async, this changed boolean could be lying.
    // The data might have been updated after I checked and before I was called.
    if (!changed) {
        IndexedDB.UpdateToSent();
    }
});
Other notes:
There is a sync API in the W3C spec, but no one has implemented it yet, so it cannot be used (http://www.w3.org/TR/IndexedDB/#sync-database). The sync API was designed to be used by web-workers, to avoid this exact situation, I would assume.
Any thoughts on this would be greatly appreciated. Have been working on it for about a week and haven't been able to come up with anything that will work.
I think I found a workaround for this for now. It's not really as clean as I would like, but it seems to be thread safe.
I start by storing the datetime in a LastEdit field whenever I update the data.
From the web-worker, I am posting a message to the browser.
self.postMessage('UpdateDataSent#' + data.ID + '#' + data.LastEdit);
Then in the browser I am updating my sent flag, as long as the last edit date hasn't changed.
// Get the data from the DB in a transaction
if (data.LastEdit == lastEdit)
{
    data.Sent = true;
    var saveStore = trans.objectStore("Data");
    var saveRequest = saveStore.put(data);
    console.log('Data updated to Sent');
}
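For completeness, a hypothetical sketch of the browser-side handler that wires the worker message to this update; markAsSent stands in for the transaction code just shown:

worker.onmessage = function (e) {
    var parts = e.data.split('#');
    if (parts[0] === 'UpdateDataSent') {
        markAsSent(parts[1], parts[2]); // id, lastEdit
    }
};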
Since this is all done in a transaction on the browser side, it seems to work fine. Once browsers support the Sync API I can throw it all away anyway.
Can you use a transaction?
https://developer.mozilla.org/en/IndexedDB/IDBTransaction
Old thread, but the use of a transaction would solve the Failed Solution approach. I.e. the transaction only needs to span the check that the data in the IndexedDB hasn't changed after the send, and the marking of it as sent if there was no change. If there was a change, the transaction ends without writing.
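A minimal sketch of that check-and-mark inside a single readwrite transaction (field names follow the earlier answer; recordId and lastEditSeenByWorker are illustrative):

var trans = db.transaction(["Data"], "readwrite");
var store = trans.objectStore("Data");
store.get(recordId).onsuccess = function (e) {
    var record = e.target.result;
    if (record.LastEdit === lastEditSeenByWorker) {
        record.Sent = true;
        store.put(record); // same transaction, so no other write can interleave
    }
    // else: the transaction completes without writing and the record stays "Unsent"
};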