I have a table with form, surname, lastname, and email, plus a rectangle. Into the rectangle I have to insert an array of arrays with points of timelines, etc. I create a customer with form, surname, lastname, and email, add the customer to IndexedDB, and load it later to insert the rectangle array. After that, I want to store the rectangle in the object store record whose email matches the customer I chose/inserted. But with this code my array ends up in the object store as a new record with its own ID.
function cmd_SaveRec(hypeDocument, elementID)
{
    hypeDocument.getElementById('rectext').innerHTML = hypeDocument.getElementById('beschriftung').value;
    var store_cust = db.transaction(["customer"], "readwrite").objectStore("customer").index("rectangle");
    var cursorReq = store_cust.openCursor();
    cursorReq.onsuccess = function (e) {
        cursor = e.target.result;
        if (cursor) {
            if (cursor.value.email == mailAdd)
            {
                //cursor.update([rec_ar]);
                if (store_cust.objectStore.put({rectangle: [rec_ar]}))
                {
                    console.info(store_cust.objectStore);
                    console.info('Gespeichert');
                    alert("Gespeichert");
                } else {
                    console.info('cmd_SaveRec::Problem');
                }
            }
            cursor.continue();
        }
    };
    cursorReq.onerror = function(e) {
        console.log("Error");
        console.dir(e);
    };
}
var store_cust = evt.currentTarget.result.createObjectStore(
    DB_STORE_NAME_CUSTOMER, { keyPath: 'cust_id', autoIncrement: true });
store_cust.createIndex('form', 'form', { unique: false });
store_cust.createIndex('surname', 'surname', { unique: false });
store_cust.createIndex('lastname', 'lastname', { unique: false });
store_cust.createIndex('email', 'email', { unique: true });
store_cust.createIndex('rectangle', 'rectangle', { unique: false, multiEntry: true });
Short answer
Provide an identifier/key as a second parameter in objectStore.put(data, key), or
use an IDBCursor and update it, as described in the docs.
Explanation
As described in the docs:
The put() method of the IDBObjectStore interface updates a given record in a database, or inserts a new record if the given item does not already exist. (source developer.mozilla.org)
The method you used, objectStore.put(), is for insert-or-update tasks. If I understand you correctly, you're looking for an update; cursor.update() is your friend here (the call you commented out). This is the preferred method in this case; a sketch of it follows the examples below.
But you could do it with either method. Say you would like to update, but if the record does not exist, create one. In such a case the engine has to know whether your record exists and which record you are trying to update.
If your objectStore uses an auto-incrementing primary key, the identifier of the record is not in the record itself, so you have to provide the id as a second parameter to your put() call.
I find it easier to take care of the ids myself. Then the id is part of the record (findable under the keyPath you provided at objectStore creation), and your code could work as expected; you certainly have to add a key/value pair for the id.
Examples
// create a store with ids YOU handle, e.g.
var request = indexedDB.open(dbname, dbversion + 1);
request.onerror = errorHandler;
request.onupgradeneeded = function(event) {
    var nextDB = request.result;
    if (!nextDB.objectStoreNames.contains('account')) {
        nextDB.createObjectStore('account', {keyPath: "id", autoIncrement: true});
    }
};

// Your record has to look like this
{id: 123456789, rectangle: [rec_ar]}
// Now your code above should work
If you have a primary key in your db:
store_cust.objectStore.put({rectangle: [rec_ar]}, PRIMARY_KEY)
// where PRIMARY_KEY is the id of this specific record
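Since cursor.update() is the preferred method here, a minimal sketch of that approach, reusing db, mailAdd and rec_ar from the question (note it opens the cursor on the store itself rather than on the rectangle index, so the primary key stays with the record):
// Preferred: update the matched record in place via the cursor
var store = db.transaction(["customer"], "readwrite").objectStore("customer");
store.openCursor().onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {
        if (cursor.value.email == mailAdd) {
            var record = cursor.value;   // keeps cust_id and all other fields
            record.rectangle = [rec_ar]; // change only the rectangle field
            cursor.update(record).onsuccess = function () {
                console.info('Gespeichert');
            };
        }
        cursor.continue();
    }
};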
By the way
Don't use if/else to check the completion of the transaction; it is asynchronous, so an if/else test lies to you every time here. Use callbacks instead, as in my example above (onsuccess).
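For example, the put() from the question reports its outcome through request events, not through its return value (a sketch reusing store_cust and rec_ar from above):
var putReq = store_cust.objectStore.put({id: 123456789, rectangle: [rec_ar]});
putReq.onsuccess = function () {
    console.info('Gespeichert');
};
putReq.onerror = function (e) {
    console.error('cmd_SaveRec::Problem', e.target.error);
};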
You should read the docs I cited. Mozilla is a great resource for IndexedDB material.
Related
As an example, on a basic setup one index is created.
db.onupgradeneeded = function(event) {
    var db = event.target.result;
    var store = db.createObjectStore('name', { keyPath: 'id' });
    store.createIndex('by name', 'name', { unique: false });
};
Question:
Is it possible to create/append more indexes to the same objectStore in a future version update? Because if I try:
db.onupgradeneeded = function(event) {
    var db = event.target.result;
    var store = db.createObjectStore('name', { keyPath: 'id' });
    store.createIndex('by newName', 'newName', { unique: false });
};
It throws an error that the objectStore already exists. And if I try to create the store reference using a transaction:
db.onupgradeneeded = function(event) {
    var db = event.target.result;
    var store = db.transaction('name', 'readwrite').objectStore('name');
    store.createIndex('by newName', 'newName', { unique: false });
};
It throws an error that a version change transaction is currently running.
Yes, it is possible. It can be a bit confusing at first. You want to get the existing object store via the implicit transaction created for you within onupgradeneeded. This is a transaction of type versionchange, which is basically like a readwrite transaction but specific to the onupgradeneeded handler function.
Something like this:
var request = indexedDB.open(name, oldVersionPlusOne);
request.onupgradeneeded = myOnUpgradeNeeded;

function myOnUpgradeNeeded(event) {
    // Get a reference to the request related to this event
    // #type IDBOpenRequest (a specialized type of IDBRequest)
    var request = event.target;

    // Get a reference to the IDBDatabase object for this request
    // #type IDBDatabase
    var db = request.result;

    // Get a reference to the implicit transaction for this request
    // #type IDBTransaction
    var txn = request.transaction;

    // Now, get a reference to the existing object store
    // #type IDBObjectStore
    var store = txn.objectStore('myStore');

    // Now, optionally inspect index names, or create a new index
    console.log('existing index names in store', store.indexNames);

    // Add a new index to the existing object store
    store.createIndex(...);
}
You also will want to take care to increment the version so as to guarantee the onupgradeneeded handler function is called, and to represent that your schema (basically the set of tables and indices and properties of things) has changed in the new version.
You will also need to rewrite the function so that you only create or make changes based on the version. You can use event.oldVersion to help with this, or things like db.objectStoreNames.contains.
Something like this:
function myOnUpgradeNeeded(event) {
    var is_new_db = isNaN(event.oldVersion) || event.oldVersion === 0;
    if (is_new_db) {
        var db = event.target.result;
        var store = db.createObjectStore(...);
        store.createIndex('my-initial-index');
        // Now that you decided you want a second index, you also need
        // to do this for brand new databases
        store.createIndex('my-second-new-index');
    }

    // But if the database already exists, we are not creating things,
    // instead we are modifying the existing things to get into the
    // new state of things we want
    var is_old_db_not_yet_current_version = !isNaN(event.oldVersion) && event.oldVersion < 2;
    if (is_old_db_not_yet_current_version) {
        var txn = event.target.transaction;
        var store = txn.objectStore('store');
        store.createIndex('my-second-new-index');
    }
}
Pay close attention to the fact that I used event.target.transaction instead of db.transaction(...). These are not at all the same thing. One references an existing transaction, and one creates a new one.
Finally, and in addition, a personal rule of mine and not a formal coding requirement: you should never use db.transaction() from within onupgradeneeded. Stick to modifying the schema when doing upgrades, and do all data changes outside of it.
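In practice that means the open request's onsuccess handler is where the data work goes (a sketch; 'mydb', 'myStore' and the version number are placeholders):
var request = indexedDB.open('mydb', 2);
request.onupgradeneeded = function (event) {
    // schema changes only: create stores and indexes here
    var db = event.target.result;
    if (!db.objectStoreNames.contains('myStore')) {
        db.createObjectStore('myStore', {keyPath: 'id'});
    }
};
request.onsuccess = function (event) {
    // data changes go here, in ordinary readwrite transactions
    var db = event.target.result;
    var store = db.transaction('myStore', 'readwrite').objectStore('myStore');
    store.put({id: 1, migrated: true});
};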
I have data like this:
"customers": {
"aHh4OTQ2NTlAa2xvYXAuY29t": {
"customerId": "xxx",
"name": "yyy",
"subscription": "zzz"
}
}
I need to retrieve a customer by customerId. The parent key is just the B64-encoded email address, due to path limitations. Usually I query data by this email address, but on a few occasions I know only the customerId. I've tried this:
getCustomersRef()
    .orderByChild('customerId')
    .equalTo(customerId)
    .limitToFirst(1)
    .once('child_added', cb);
This works nicely when the customer really exists. In the opposite case the callback is never called.
I tried the value event, which works, but that gives me the whole tree starting with the encoded email address, so I cannot reach the actual data inside. Or can I?
I have found this answer, Test if a data exist in Firebase, but that again assumes that I know all path elements.
getCustomersRef().once('value', (snapshot) => {
    snapshot.hasChild(`customerId/${customerId}`);
});
What else can I do here?
Update
I think I found a solution, but it doesn't feel right.
// inside the .once('value', (snapshot) => { ... }) callback
let found = null;
snapshot.forEach((childSnapshot) => {
    found = childSnapshot.val();
});
return found;
Old answer; I misunderstood the question:
If you know the encodedB64Email, this is the way:
var encodedB64Email = B64_encoded_mail_address;
firebase.database().ref(`customers/${encodedB64Email}`).once("value").then(snapshot => {
    // this gets your customerId/uid. Remember to set up the security rules
    // for your database! Check out tutorials on YouTube/Firebase's channel.
    var uid = snapshot.val().customerId;
    console.log(uid); // would return 'xxx' from looking at your database
    // you want to check with '.hasChild()'? E.g. snapshot.hasChild('customerId')
    // would return true, because 'customerId' exists under this node.
});
UPDATE (correction):
We have to know at least one key. So if under some circumstances you only know the customer uid key, then I would do it like this:
// this is the customer-uid-key that is known
var uid = firebase.auth().currentUser.uid; // the user id of the currently logged-in user
// this is the "B64EmailKey" that we will find if there is a match in the database
var B64EmailUserKey = undefined;
// "take a picture" of all the values under the "customers" key in the database JSON object
firebase.database().ref("customers").once("value").then(snapshot => {
    // counter used to know when the last key in the "customers" object has been visited
    var i = 0;
    // loop over all children under "customers"; each "B64EmailKey" is one child snapshot
    snapshot.forEach(B64EmailKey => {
        // increase the counter by 1 for every key visited
        i++;
        // the value (an object in this case) stored under this key
        var B64EmailKey_value = B64EmailKey.val();
        // if "customerId" matches under any of the "B64EmailKey" keys, then we have
        // found the corresponding email key linked to that uid
        if (B64EmailKey_value.customerId === uid) {
            // save the "B64EmailKey" key itself (not the customerId) and quit
            B64EmailUserKey = B64EmailKey.key;
            return B64UserKeyAction(B64EmailUserKey);
        }
        // if no linked "B64EmailUserKey" was found for the "uid"
        if (i === snapshot.numChildren()) {
            // the last key (B64EmailKey) under "customers" was visited without a match
            return console.log("Could not find an email linked to your account.");
        }
    });
});

// run your corresponding actions here
function B64UserKeyAction(emailEncrypted) {
    return console.log(`The email key for user ${firebase.auth().currentUser.uid} is ${emailEncrypted}`);
}
I recommend putting this in a function or class, so you can easily call it up and reuse the code in an organized way.
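For instance, the lookup above could be wrapped in a reusable, promise-returning function (a sketch; findEmailKeyForUid is a hypothetical name):
// Hypothetical wrapper: resolves with the B64 email key for a uid, or null
function findEmailKeyForUid(uid) {
    return firebase.database().ref("customers").once("value").then(snapshot => {
        var emailKey = null;
        snapshot.forEach(childSnapshot => {
            if (childSnapshot.val().customerId === uid) {
                emailKey = childSnapshot.key; // the B64-encoded email
                return true; // returning true stops snapshot.forEach early
            }
        });
        return emailKey;
    });
}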
I also want to add that your Firebase security rules must be defined to keep everything secure. And if sensitive data must be calculated (e.g. a price), then do this on the server side of Firebase: use Cloud Functions. These are new to Firebase in 2017.
Can anyone see what may be wrong in this code? Basically I want to check if a post has been shared by the currently logged-in user AND add a temporary field to the client-side collection: isSharedByMe.
This works the first time when loading a new page and populating from existing Shares, or when adding or removing a record in the Shares collection, but only the very first time after the page is loaded.
1) isSharedByMe only changes state one time; after that the callbacks still get called, as per console.log, but isSharedByMe doesn't get updated in the Posts collection after the first time I add or remove a record. It works the first time.
2) Why do the callbacks get called twice in a row, i.e. adding one record to the Shares collection triggers two calls, as shown by console.log?
Meteor.publish('posts', function() {
    var self = this;
    var mySharedHandle;

    function checkSharedBy(IN_postId) {
        mySharedHandle = Shares.find({ postId: IN_postId, userId: self.userId }).observeChanges({
            added: function(id) {
                console.log(" ...INSIDE checkSharedBy(); ADDED: IN_postId = " + IN_postId);
                self.added('posts', IN_postId, { isSharedByMe: true });
            },
            removed: function(id) {
                console.log(" ...INSIDE checkSharedBy(); REMOVED: IN_postId = " + IN_postId);
                self.changed('posts', IN_postId, { isSharedByMe: false });
            }
        });
    }

    var handle = Posts.find().observeChanges({
        added: function(id, fields) {
            checkSharedBy(id);
            self.added('posts', id, fields);
        },
        // This callback never gets run, even when checkSharedBy() changes field isSharedByMe.
        changed: function(id, fields) {
            self.changed('posts', id, fields);
        },
        removed: function(id) {
            self.removed('posts', id);
        }
    });

    // Stop observing cursor when client unsubscribes
    self.onStop(function() {
        handle.stop();
        mySharedHandle.stop();
    });

    self.ready();
});
Personally, I'd go about this a very different way, by using the $in operator, and keeping an array of postIds or shareIds in the records.
http://docs.mongodb.org/manual/reference/operator/query/in/
I find publish functions work the best when they're kept simple, like the following.
Meteor.publish('posts', function() {
    return Posts.find();
});

Meteor.publish('sharedPosts', function(postId) {
    var postRecord = Posts.findOne({_id: postId});
    return Shares.find({_id: {$in: postRecord.shares_array}});
});
I am not sure how far this gets you towards solving your actual problems, but I will start with a few oddities in your code and the questions you ask.
1) You ask about a Phrases collection, but the publish function would never publish anything to that collection, as all added calls send to the minimongo collection named 'posts'.
2) You ask about a 'Reposts' collection, but none of the code uses that name either, so it is not clear what you are referring to. Each element added to the 'Posts' collection, though, will create a new observer on the 'Shares' collection, since it calls checkSharedBy(). Each observer will try to add and change docs in the client's 'posts' collection.
3) Related to point 2, mySharedHandle.stop() will only stop the last observer created by checkSharedBy(), because the handle is overwritten every time checkSharedBy() is run.
4) If your observer of 'Shares' finds a doc with IN_postId, it tries to send a doc with that _id to the minimongo 'posts' collection. IN_postId is passed from your find on the 'Posts' collection, whose observer is also trying to send a different doc to the client's 'posts' collection. Which doc do you want on the client with that _id? Some of the errors you are seeing may be caused by Meteor's attempts to ignore duplicate added requests.
From all this I think you might do better to break this into two publish functions, one for 'Posts' and one for 'Shares', to take advantage of Meteor's default behaviour when publishing cursors. Any join could then be done on the client when necessary. For example:
//on server
Meteor.publish('posts', function(){
    return Posts.find();
});

Meteor.publish('shares', function(){
    return Shares.find({userId: this.userId}, {fields: {postId: 1}});
});

//on client - uses _.pluck from underscore package
Meteor.subscribe('posts');
Meteor.subscribe('shares');

Template.post.isSharedByMe = function(){ //create the field isSharedByMe for a template to use
    var share = Shares.findOne({postId: this._id});
    return share && true;
};
Alternate method: joining in the publish function with observeChanges. This is untested code, and it is not clear to me that it has much advantage over the simpler method above, so until the above breaks or becomes a performance bottleneck I would do it as above.
Meteor.publish("posts", function(){
var self = this;
var sharesHandle;
var publishedPosts = [];
var initialising = true; //avoid starting and stopping Shares observer during initial publish
//observer to watch published posts for changes in the Shares userId field
var startSharesObserver = function(){
var handle = Shares.find( {postId: {$in: publishedPosts}, userId === self.userId }).observeChanges({
//other observer should have correctly set the initial value of isSharedByMe just before this observer starts.
//removing this will send changes to all posts found every time a new posts is added or removed in the Posts collection
//underscore in the name means this is undocumented and likely to break or be removed at some point
_suppress_initial: true,
//other observer manages which posts are on client so this observer is only managing changes in the isSharedByMe field
added: function( id ){
self.changed( "posts", id, {isSharedByMe: true} );
},
removed: function( id ){
self.changed( "posts", id, {isSharedByMe: false} );
}
});
return handle;
};
//observer to send initial data and always initiate new published post with the correct isSharedByMe field.
//observer also maintains publishedPosts array so Shares observer is always watching the correct set of posts.
//Shares observer starts and stops each time the publishedPosts array changes
var postsHandle = Posts.find({}).observeChanges({
added: function(id, doc){
if ( sharesHandle )
sharesHandle.stop();
var shared = Shares.findOne( {postId: id});
doc.isSharedByMe = shared && shared.userId === self.userId;
self.added( "posts", id, doc);
publishedPosts.push( id );
if (! initialising)
sharesHandle = startSharesObserver();
},
removed: function(id){
if ( sharesHandle )
sharesHandle.stop();
publishedPosts.splice( publishedPosts.indexOf( id ), 1);
self.removed( "posts", id );
if (! initialising)
sharesHandle = startSharesObserver();
},
changed: function(id, doc){
self.changed( "posts", id, doc);
}
});
if ( initialising )
sharesHandle = startSharesObserver();
initialising = false;
self.ready();
self.onStop( function(){
postsHandle.stop();
sharesHandle.stop();
});
});
myPosts is a cursor, so when you invoke forEach on it, it cycles through the results, adding the field that you want but ending up at the end of the results list. Thus, when you return myPosts, there's nothing left to cycle through, so fetch() would yield an empty array.
You should be able to correct this by just adding myPosts.cursor_pos = 0; before you return, thereby returning the cursor to the beginning of the results.
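A sketch of that fix inside a publish function (cursor_pos is an undocumented internal of Minimongo cursors, so this relies on implementation details; the Posts collection and the userId filter are assumptions for illustration):
Meteor.publish('myPosts', function () {
    var myPosts = Posts.find({ userId: this.userId });
    // iterating consumes the cursor...
    myPosts.forEach(function (post) {
        console.log(post._id); // any per-document work here
    });
    // ...so rewind it before handing the cursor back to the publish machinery
    myPosts.cursor_pos = 0;
    return myPosts;
});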
I am trying to write two functions.
Save() should check if there is an existing document for that user; if there is, update his save with the new one, and if there is not, insert a new doc using the user's unique id as the doc's unique id.
Load() should check if there is an existing save with the user's id and load it.
I am completely new to this, and here is the error I get:
Uncaught Error: Not permitted. Untrusted code may only update
documents by ID. [403]
I get that it happens because of how update and insert work. But I want to use the user's unique id for documents, because it looks simple.
function Save() {
    if (Meteor.userId()) {
        player = Session.get("Player");
        var save = {
            id: Meteor.userId(),
            data: "data"
        };
        console.log(JSON.stringify(save));
        if (Saves.find({id: Meteor.userId()})) {
            Saves.update({id: Meteor.userId()}, {save: save});
            console.log("Updated saves");
        }
        else {
            Saves.insert(save);
        }
        console.log("Saved");
    }
}

function Load() {
    if (Meteor.userId()) {
        if (Saves.find(Meteor.userId())) {
            console.log(JSON.stringify(Saves.find(Meteor.userId()).save.player));
            player = Saves.find(Meteor.userId()).save.player;
            data = Saves.find(Meteor.userId()).save.data;
        }
    }
}
An object's/document's id field is called _id.
See here!
The error occurs when you try the update of an existing object/document on the client side.
You always need to pass in the object's _id to update the object/document from client code.
Note that you consistently pass an id, not an _id!
So try it like this:
function Save() {
    if (Meteor.userId()) {
        player = Session.get("Player");
        var save = {
            _id: Meteor.userId(),
            data: "data"
        };
        console.log(JSON.stringify(save));
        // use findOne(): find() returns a cursor, which is always truthy
        if (Saves.findOne({_id: Meteor.userId()})) {
            Saves.update({_id: Meteor.userId()}, {save: save});
            console.log("Updated saves");
        }
        else {
            Saves.insert(save);
        }
        console.log("Saved");
    }
}
Also note that your Load() function could work, because Collection.find() treats a string you pass as the _id of the document.
Hope that helped!
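For completeness, a sketch of Load() along the same lines, using findOne(), which returns the document itself rather than a cursor (the save.player/save.data fields mirror the question's code):
function Load() {
    if (Meteor.userId()) {
        // findOne(id) matches the document whose _id equals the given string
        var doc = Saves.findOne(Meteor.userId());
        if (doc) {
            player = doc.save.player;
            data = doc.save.data;
        }
    }
}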
I am trying to implement this example.
Everything works fine until I attempt to delete a certain item.
Using this:
request.onupgradeneeded = function(event) {
    console.log("upgrade", event);
    db = event.target.result;
    console.log("db", db);
    if (!db.objectStoreNames.contains("chatBot")) {
        var objectStore = db.createObjectStore("chatBot", {keyPath: "timeStamp", autoIncrement: true});
    }
};
and setting up the deletion:
btnDelete.addEventListener("click", function() {
    var id, transaction, objectStore, request;
    id = document.getElementById("txtID").value;
    console.log("id", typeof id);
    transaction = db.transaction("people", "readwrite");
    objectStore = transaction.objectStore("people");
    request = objectStore.delete(id);
    request.onsuccess = function(evt) {
        console.log("deleted content");
    };
}, false);
There is no problem adding items to IndexedDB, but I can't figure out why it won't delete the items.
The id is a string and the objectStore.delete(id) is the correct implementation.
Here is a pastebin of the example
Using Firefox 18
Since you are using an autoIncrement key, the key is generated by the user agent. In FF and Chrome, it is an integer starting at 1. If you give a valid key, converting your id to an integer, your code runs fine. I tested in both FF and Chrome (Dartium). '1' and 1 are different keys according to the IndexedDB API key definition.
Another issue is the IndexedDB API design. The delete method always reports to the success event handler with undefined as the result, whether or not the given key was deleted, so it is very difficult to debug. I think it should at least return the number of deleted records.
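Since delete() gives no feedback, one workaround is to count matching records first (a sketch using the question's store name and the converted integer key):
var store = db.transaction("people", "readwrite").objectStore("people");
var countReq = store.count(+id);
countReq.onsuccess = function () {
    if (countReq.result === 0) {
        console.log("nothing to delete for key", +id);
        return;
    }
    store.delete(+id).onsuccess = function () {
        console.log("deleted content");
    };
};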
[Edit] mod code: http://pastebin.com/mLpU0VfP
[Edit... also] Notice the + which converts the string to an integer
request = db.transaction("people", "readwrite").objectStore("people").delete(+id);