I know very well how the set method works, but I have a doubt about its use to update a node.
I want to know: when I save an object with a new field (the same values as before plus a new field), are all the fields of the object uploaded again, or is only the new field sent?
I see that in the database console the unchanged fields are not highlighted in green during the write, which makes me think that either:
1) the whole object is sent to the database, and after the upload the database simply ignores the fields without modifications, or
2) the unchanged fields are not uploaded at all (they simply stay on the client) and only the new fields are sent.
In the second case, with large objects there would be a considerable saving of bandwidth.
const object = {
name: 'tower10',
type: 'building',
rooms: 10
};
await db.ref('object/1').set(object);
object.extra = 'extra content';
object.extra1 = 'extra content 1';
await db.ref('object/1').set(object);
The entire object is sent with every call to set(). Children whose values did not change don't count as updates for listeners (as you noticed in the console when their values don't flash). If you know only certain values are going to change, you could update with just those values instead of sending the entire object. But the object you're showing here is rather small, and I don't think optimizing it will matter very much.
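For example, a minimal sketch of sending only the new children with update() instead of set(), using the same ref as above:
// Only the children named here are written; name, type and rooms are left untouched.
await db.ref('object/1').update({
  extra: 'extra content',
  extra1: 'extra content 1'
});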
I have a collection whose documents look something like this:
count: number
first: timestamp
last: timestamp
The first value should (almost) never change after the document's creation.
In a batch write operation, I am trying to update documents in this collection, or create those documents that do not yet exist. Something like
batch.setData([
"count": FieldValue.increment(someInteger),
"first": someTimestamp,
"last": someTimestamp
], forDocument: someDocumentRef, mergeFields: ["count","last"])
My hope was that by excluding first from the mergeFields array, Firestore would set count and last by merging it into an existing document or making a new one, and set first only if it had no previous value (i.e., the document didn't exist before this operation). It is clear to me now that this is not the case, and instead first is completely ignored. Now I'm left wondering what the Firestore team intended for this situation.
I know that I could achieve this with a Transaction, but that doesn't tie in very well with my batch write. Are Transactions my only option, or is there a better way to achieve this?
I have created timestamps and other data in my documents and I handle this using separate create and update functions rather than trying to do it all at once.
The initial creation function includes the created date etc and then subsequent updates use the non-destructive update, so just omit any fields in the update payload you do not want to overwrite.
eg. to create:
batch.set(docRef, {created: someTimestamp, lastUpdate: someTimestamp})
then to update:
batch.update(docRef, {lastUpdate: someTimestamp, someOtherField: someData})
This will not overwrite the created field or any other existing fields, but it will create someOtherField if it does not exist.
If you need to do an "only update existing fields" update after the document has been created for the first time, then currently you have to read the document first to find out whether the fields exist, and then create an update payload which will patch only the desired fields. This can be done in a transaction, or you can write this logic yourself, depending on your needs.
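For instance, a minimal sketch of that read-then-write approach with the Firestore web SDK (v8-style API); docRef, someInteger and someTimestamp are placeholders taken from the question, and the transaction only writes first when the document does not exist yet:
await db.runTransaction(async (t) => {
  const snap = await t.get(docRef);
  if (!snap.exists) {
    // Document doesn't exist yet: write everything, including "first".
    t.set(docRef, { count: someInteger, first: someTimestamp, last: someTimestamp });
  } else {
    // Document already exists: leave "first" untouched.
    t.update(docRef, {
      count: firebase.firestore.FieldValue.increment(someInteger),
      last: someTimestamp
    });
  }
});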
I am building an API where some fields can be incremented.
After noticing data inconsistency in my MySQL database, I realized that the first version of my code was buggy:
Answer.incrementVotesCount = async (id) => {
// get a copy of the data
let answer = await getAnswer(id);
// update the copy of the data locally
answer.votesCount++;
// replace the persisted data with the updated copy of the original data
await Answer.updateAll({id}, answer);
};
Getting some data, updating it locally and persisting the modification can cause consistency problems when the route is used several times in a short period of time.
Such a situation would look something like this:
Caller A gets data. The persisted votesCount equals 14.
Caller B gets data. The persisted votesCount equals 14.
Caller A updates data. The persisted votesCount becomes 14 + 1.
At this point, the persisted votesCount equals 15, but Caller B's copy of it still equals 14.
Caller B updates data. The persisted votesCount becomes 14 + 1, whereas it should become 15 + 1.
Two increments have been performed, but the second one "crushed" the first, since it incremented stale data.
I thought about using LoopBack3's native SQL functionality, but it seems like it is not fully reliable so I am unsure whether it's a good idea to use it (even though a query as simple as SET a = a + 1 should probably work correctly).
I also thought about using MySQL's triggers to perform some ACID compliant incrementing but I am unsure I can find a clean way to do this.
How do I increment some data without making it inconsistent?
I would move the votesCount field out into a separate hasOne relation, then make the Answer model strict='filter', so it would reject data that does not really belong to the model. When a vote-up action is taken, I would increase the votesCount in that separate model, independently of the original Answer.
If you don't want to do it like this, you can check the original value in a before save hook: get the latest value from the database, compare that votesCount with the value in the model, and update it accordingly.
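If you do go the raw SQL route mentioned in the question, a single UPDATE statement keeps the increment atomic on the database side, so concurrent requests cannot clobber each other. A minimal sketch, assuming the LoopBack MySQL connector's execute() method and that the table/column names match the Answer model:
Answer.incrementVotesCount = (id) => {
  return new Promise((resolve, reject) => {
    const connector = Answer.dataSource.connector;
    // The read-modify-write happens inside MySQL in one statement,
    // so there is no window for another request to interleave.
    connector.execute(
      'UPDATE Answer SET votesCount = votesCount + 1 WHERE id = ?',
      [id],
      (err, result) => (err ? reject(err) : resolve(result))
    );
  });
};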
I need to store client side data temporarily. The data will be trashed on refresh or redirect. What is the best way to store the data?
Using JavaScript, by putting the data inside a variable:
var data = {
a:"longstring",
b:"longstring",
c:"longstring",
}
or
putting the data inside HTML elements (as data attributes on the tags):
<ul>
<li data-duri="longstring"></li>
<li data-duri="longstring"></li>
<li data-duri="longstring"></li>
</ul>
The amount of data to store temporarily could become large, because the data consists of image data URIs, and a user who does not refresh for a whole day could stack up maybe 500+ images of 50 KB-3 MB each. (I am unsure whether that much data could crash the app because of memory consumption; please correct me if I am wrong.)
What do you guys suggest is the most efficient way to keep the data?
I'd recommend storing the data in JavaScript and only updating the DOM when you actually want to display an image, assuming all the images are not displayed at the same time. Also note that the browser will keep its own copy of an image in memory while it is in the DOM.
Update: As comments have been added to the OP I believe you need to go back to customer requirements and design - caching 500 x 3MB images is unworkable - consider thumbnails etc? This answer only focuses on optimal client side caching if you really need to go that way...
Data URI efficiency
Data URIs use base64, which adds an overhead of around 33% to represent binary data in ASCII.
Although base64 is required to update the DOM, the overhead can be avoided by storing the data as binary strings and encoding/decoding with the atob() and btoa() functions - as long as you drop references to the original data so it can be garbage collected.
var dataAsBase64 = "iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==";
var dataAsBinary = atob(dataAsBase64);
console.log(dataAsBinary.length + " vs " + dataAsBase64.length);
// use it later
$('.foo').attr("src", "data:image/png;base64," + btoa(dataAsBinary));
String memory efficiency
How much RAM does each character in an ECMAScript/JavaScript string consume? suggests they take 2 bytes per character - although this could still be browser dependent.
This could be avoided by using ArrayBuffer for 1-to-1 byte storage.
var arrayBuffer = new Uint8Array(dataAsBinary.length);
for (var i = 0; i < dataAsBinary.length; i++) {
  arrayBuffer[i] = dataAsBinary.charCodeAt(i);
}
// drop the string references so they can be garbage collected
dataAsBase64 = undefined;
dataAsBinary = undefined;
// use it later: rebuild the base64 string from the typed array
dataAsBase64 = btoa(String.fromCharCode.apply(null, arrayBuffer));
$('.foo').attr("src", "data:image/png;base64," + dataAsBase64);
Disclaimer: Note that all of this adds a lot of complexity, and I'd only recommend such optimisation if you actually find a performance problem.
Alternative storage
Instead of using browser memory
local storage - limited, typically to around 10 MB, so it certainly won't hold 500 x 3 MB images without specific browser configuration.
Filesystem API - not yet widely supported, but an ideal solution - it can create temp files to offload data to disk.
If you really want to lose the data on a refresh, just use a JavaScript hash/object, var storage = {}, and you have a key->value store. If you would like to keep the data for the duration of the user's visit (until the browser window is closed), you could use sessionStorage; to persist the data indefinitely (or until the user deletes it), use localStorage or WebSQL.
Putting data into the DOM (as data attributes, hidden fields, etc.) is not a good idea, as having JavaScript go into the DOM and pull that information out is very expensive (crossing the border between the JavaScript world and the DOM world, i.e. the website structure, doesn't come cheap).
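A small sketch of the difference (the key name is just an example):
// Plain in-memory object: gone on refresh or redirect
var storage = {};
storage['img_1'] = 'data:image/png;base64,...';

// sessionStorage: survives a refresh, cleared when the tab or window is closed
sessionStorage.setItem('img_1', 'data:image/png;base64,...');
var restored = sessionStorage.getItem('img_1');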
Using a JavaScript variable is the best way to store your temp data. You may consider storing your data inside a DOM attribute only if the data is related to a specific DOM element.
Regarding performance, storing your data directly in a JavaScript variable will probably be faster, since storing data in a DOM element also involves JavaScript in addition to the DOM modifications. If the data isn't related to an existing DOM element, you'll also have to create a new element to store that value and make sure it isn't visible to the user.
The OP mentions a requirement for the data to be forcibly transient i.e. (if possible) unable to be saved locally on the client - at least that is how I read it.
If this type of data privacy is a firm requirement for an application, there are multiple considerations when dealing with a browser environment. I am unsure whether the images in question are to be displayed to the user, or where the source data of the images is coming from relative to the client. If the data is coming into the browser over the network, you might do well (or better than the alternative, at least) to use a socket or other raw data connection rather than HTTP requests, and consider something like a "sentinel" value in the byte stream to indicate the boundaries of image data.
Once you have the bytes coming in, you could, I believe, (or soon will be able to) pass the data via a generator function into a typedArray via the iterator protocol, see: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array
// From an iterable
var iterable = function*(){ yield* [1,2,3]; }();
var uint8 = new Uint8Array(iterable);
// Uint8Array[1, 2, 3]
And then perhaps integrate those arrays as private members of some class you use to manage their lifecycle? see:
https://www.nczonline.net/blog/2014/01/21/private-instance-members-with-weakmaps-in-javascript/
var Person = (function() {
var privateData = {},
privateId = 0;
function Person(name) {
Object.defineProperty(this, "_id", { value: privateId++ });
privateData[this._id] = {
name: name
};
}
Person.prototype.getName = function() {
return privateData[this._id].name;
};
return Person;
}());
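For reference, a sketch of the same pattern using a WeakMap, as the linked article describes; the private data is then garbage-collected together with the instance:
var Person = (function() {
  // WeakMap keys are the instances themselves, so entries vanish
  // when an instance becomes unreachable.
  var privateData = new WeakMap();

  function Person(name) {
    privateData.set(this, { name: name });
  }

  Person.prototype.getName = function() {
    return privateData.get(this).name;
  };

  return Person;
}());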
I think you should be able to manage the size / wait problem to some extent with the generator method of creating the byte arrays as well, perhaps check for sane lengths, time passed on this iterator, etc.
A general set of ideas more than an answer, and none of which are my own authorship, but this seems to be appropriate to the question.
Why not use @Html.Hidden?
@Html.Hidden("hId", ViewData["name"], new { @id = "hId" })
There are various ways to do this, depending upon your requirement:
1) We can make use of constant variables: create a file Constants.js which can be used to store data as
"KEY_NAME" : "someval"
eg:
var data = {
a:"longstring",
b:"longstring",
c:"longstring",
}
CLIENT_DATA = data;
Careful: This data will be lost if you refresh the screen, as all the variables' memory is released.
2) Make use of the cookie store, using:
document.cookie = "key=value";
For reference: http://www.w3schools.com/js/tryit.asp?filename=tryjs_cookie_username
Careful: Cookie data has an expiry period and a limited storage capacity: https://stackoverflow.com/a/2096803/1904479.
Use: Consistent long-term storage, but not recommended for storing huge data.
3) Using Local Storage:
localStorage.setItem("key","value");
localStorage.getItem("key");
Caution: This stores values as string key-value pairs, so you will not be able to store JSON arrays or objects without calling JSON.stringify() on them.
Reference: http://www.w3schools.com/html/tryit.asp?filename=tryhtml5_webstorage_local
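For example, a small sketch of round-tripping an object through localStorage (the key name is just an example):
var clientData = { a: "longstring", b: "longstring" };
// Objects and arrays must be serialised to a string before storing
localStorage.setItem("clientData", JSON.stringify(clientData));
// ...and parsed back after reading
var restored = JSON.parse(localStorage.getItem("clientData"));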
4) Another option is to write the data into a file.
Reference: Writing a json object to a text file in javascript
Below is a snippet of code that I am having trouble with. The purpose is to check for duplicate entries in the database and return "h" as a boolean, true or false. For testing purposes I am returning true for "h", but by the time the alert(duplicate_count); line executes, duplicate_count is still 0, even though the alert for +1 does get executed.
To me it seems like the function updateUserFields takes longer to execute, so it hasn't finished before the code reaches the alert.
Any ideas or suggestions? Thanks!
var duplicate_count = 0
for (var i = 0; i < skill_id.length; i++) {
function updateUserFields(h) {
if(h) {
duplicate_count++;
alert("count +1");
} else {
alert("none found");
}
}
var g = new cfc_mentoring_find_mentor();
g.setCallbackHandler(updateUserFields);
g.is_relationship_duplicate(resource_id, mentee_id, section_id[i], skill_id[i], active_ind,table);
};
alert(duplicate_count);
There is no reason whatsoever to use client-side JavaScript/jQuery to remove duplicates from your database. Security concerns aside (and there are a lot of those), there is a much easier way to make sure the entries in your database are unique: use SQL.
SQL is capable of expressing the requirement that there be no duplicates in a table column, and the database engine will enforce that for you, never letting you insert a duplicate entry in the first place. The syntax varies very slightly by database engine, but whenever you create the table you can specify that a column must be unique.
Let's use SQLite as our example database engine. The relevant part of your problem is right now probably expressed with tables something like this:
CREATE TABLE Person(
id INTEGER PRIMARY KEY ASC,
-- Other fields here
);
CREATE TABLE MentorRelationship(
id INTEGER PRIMARY KEY ASC,
mentorID INTEGER,
menteeID INTEGER,
FOREIGN KEY (mentorID) REFERENCES Person(id),
FOREIGN KEY (menteeID) REFERENCES Person(id)
);
However, you can enforce uniqueness, i.e. require that any (mentorID, menteeID) pair is unique, by making the pair (mentorID, menteeID) the primary key. This works because only one copy of each primary key value is allowed. Then the MentorRelationship table becomes
CREATE TABLE MentorRelationship(
mentorID INTEGER,
menteeID INTEGER,
PRIMARY KEY (mentorID, menteeID),
FOREIGN KEY (mentorID) REFERENCES Person(id),
FOREIGN KEY (menteeID) REFERENCES Person(id)
);
EDIT: As per the comment, alerting the user to duplicates but not actually removing them
This is still much better with SQL than with JavaScript. When you do this in JavaScript, you read one database row at a time, send it over the network, wait for it to come to your page, process it, throw it away, and then request the next one. With SQL, all the hard work is done by the database engine, and you don't lose time by transferring unnecessary data over the network. Using the first set of table definitions above, you could write
SELECT mentorID, menteeID
FROM MentorRelationship
GROUP BY mentorID, menteeID
HAVING COUNT(*) > 1;
which will return all the (mentorID, menteeID) pairs that occur more than once.
Once you have a query like this working on the server (and are also pulling out all the information you want to show to the user, which is presumably more than just a pair of IDs), you need to send this over the network to the user's web browser. Essentially, on the server side you map a URL to return this information in some convenient form (JSON, XML, etc.), and on the client side you read this information by contacting that URL with an AJAX call (see jQuery's website for some code examples), and then display that information to the user. No need to write in JavaScript what a database engine will execute orders of magnitude faster.
EDIT 2: As per the second comment, checking whether an item is already in the database
Almost everything I said in the first edit applies, except for two changes: the schema and the query. The schema should become the second of the two schemas I posted, since you don't want the database engine to allow duplicates. Also, the query should be simply
SELECT COUNT(*) > 0
FROM MentorRelationship
WHERE mentorID = #mentorID AND menteeID = #menteeID;
where #mentorID and #menteeID are the items that the user selected, and are inserted into the query by a query builder library and not by string concatenation. Then, the server will get a true value if the item is already in the database, and a false value otherwise. The server can send that back to the client via AJAX as before, and the client (that's your JavaScript page) can alert the user if the item is already in the database.
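For instance, a minimal sketch of the client side of that check with jQuery; the /relationship-exists URL and the response shape are assumptions, with the server-side endpoint running the query above and returning JSON:
$.getJSON('/relationship-exists', { mentorID: mentorID, menteeID: menteeID })
  .done(function (response) {
    // response.exists is the boolean the server computed with the query above
    if (response.exists) {
      alert('This mentor/mentee relationship already exists.');
    }
  });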
I have a map system (grid) for my website. I have defined 40,000 'fields' on a grid. Each field has an XY value (x 1-200 and y 1-200) and a unique identifier: fieldid (1-40000).
I have a viewable area of 16x9 fields. When the user visits website.com/fieldid/422, it displays 16x9 fields starting with fieldid 422 in the upper-left corner. This obviously follows the XY system, which means the field in the second row, right below #422, is #622.
The user should be able to navigate Up, Down, Left and Right (meaning increment/decrement the X or Y value accordingly). I have a function which converts XY values to fieldids and vice-versa.
Everything good so far, I can:
Reload the entire page when a user clicks a navigate button (got this)
Send an AJAX request and get a JSON string with the new 16x9 fields (got this)
But I want to build in some sort of caching system so that the data sent back from the server can be minimized after the first load. This would probably mean only sending new 'rows' or 'columns' of fields and storing them in some sort of JavaScript multidimensional array bigger than the 16x9 used for display. But I can't figure it out. Can somebody assist?
I see two possible solutions.
1) If you use AJAX to get new tiles and do not reload the entire page very often, you may just use an object that holds the contents of each tile, using the unique tile ids as keys, like:
var mapCache = {
'1' : "tile 1 data",
'2' : "tile 2 data"
//etc.
}
When the user requests new tiles, first check whether you already have them in your object (you know which tiles will be needed for a given area), then download only what you need and add the new key/value pairs to the cache. Obviously, all cached data will disappear as soon as the page is reloaded by the user.
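A minimal sketch of that check-then-fetch idea; getVisibleIds() and the /tiles endpoint are assumptions, not part of your existing code:
var mapCache = {};

async function loadArea(topLeftId) {
  var neededIds = getVisibleIds(topLeftId);           // the 16x9 field ids for this viewport
  var missingIds = neededIds.filter(function (id) {
    return !(id in mapCache);                         // only fetch tiles we don't have yet
  });
  if (missingIds.length > 0) {
    var response = await fetch('/tiles?ids=' + missingIds.join(','));
    var newTiles = await response.json();             // e.g. { "422": "field 422 data", ... }
    Object.assign(mapCache, newTiles);
  }
  return neededIds.map(function (id) { return mapCache[id]; });
}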
2) If you reload the page for each request, you might split your tiles into separate JavaScript "files". It doesn't really matter how this is implemented on the server - static files like tile1.js, tile2.js, etc., or a dynamic script (probably with some server-side cache) like tile.php?id=1, tile.php?id=2, etc. What's important is that the server sends the proper HTTP headers and makes it possible for the browser to cache these requests. So when a page containing some 144 tiles is requested, you have 144 <script /> elements, each containing the data for one tile, and each will be stored in the browser's cache. This solution only makes sense if there's a lot of data for each tile and the data doesn't change on the server very often, and/or there's a significant cost to tile generation/transfer.
You could just have an array of 40,000 references. Basically, empty array elements don't take up a lot of room until you actually put something in them (it's one of the advantages of a dynamically typed language). JavaScript doesn't know whether you are going to put an int or an object into an array element, so it doesn't allocate the elements until you put something in them. So to summarize, just put them in an array - it's that simple!
Alternatively, if you don't want the interpreter to allocate 40,000 nulls at the start, you could use a dictionary method, with the keys being the indices 1 to 40,000. Now the unused elements don't even get allocated. Though if you are eventually going to fill a substantial portion of the map, the dictionary method is much less efficient.
Have a single associative array, which initially starts out with zero values.
If the user visits, say, grid 32x41y, you set a value for the array like this:
if (!('32' in visitedGrids))
{
  visitedGrids['32'] = {};
}
visitedGrids['32']['41'] = data;
(This is pseudo-code; I haven't checked the syntax.)
Then you can check to see if the user has visited the appropriate grid coordinates by seeing if there is a value in the associative array.