I'm working with numerical data, i.e. adding, subtracting, present-valuing, etc., numbers in a page. However, I format them and print them to the screen. Say I want to add a column of numbers: I have to parse for commas, etc. Is there a paradigm for using the actual data and not having to parse it back out of the DOM? Or should I store the data twice in the page, saving the raw numbers as an attribute?
If I understand your question correctly, it sounds like you might be looking for Angular or some other form of two-way data binding. Using such a framework, you would be able to set up a template that is bound to your "data model" (some JavaScript construct in memory) and have the page update automatically to reflect changes in that model. You can also use "filters" when drawing it to the page to make the raw number display as currency.
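Even without a framework, the same idea works in plain JavaScript: keep the raw numbers in a JS structure and only format when writing to the DOM. Here's a minimal sketch of that pattern; the element id and the numbers are made up for illustration:

// the "model": raw numbers live here, never in the DOM
var amounts = [1234.5, 867.25, 99];

// format only at render time; Intl.NumberFormat handles commas and currency
var fmt = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' });

function render() {
  // sum the raw numbers, so there's nothing to parse back out
  var total = amounts.reduce(function (sum, n) { return sum + n; }, 0);
  document.getElementById('total').textContent = fmt.format(total); // "$2,200.75"
}

That way the formatted text in the page is write-only output, and all the arithmetic happens on the model.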
Create a template, then pass variables to it, and it spits out HTML:
--Mustache (multiple language ports; you want the JavaScript one)
--Handlebars.js
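For instance, with mustache.js (the template string and data below are invented for illustration), rendering is a single call:

// assumes mustache.js has been loaded on the page
var template = '<td>{{amount}}</td>';
var html = Mustache.render(template, { amount: '1,234.50' });
// html === '<td>1,234.50</td>'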
=================
MISC JavaScript code that might be worthwhile to you.
// would split things up on underscore _
// (replace the underscore with a comma if wanted)
var something = your_something.split('_');
// some for loops
for (var i in something) { // iterates keys; quick, but not ideal for arrays
  var a = something[i];
}
for (var i = 0; i < something.length; i++) { // classic indexed loop
  var a = something[i];
}
regex, .match(), .replace()
// are the JavaScript string tools for
// "find (something) and replace with (something)" work;
// in other words, dealing with decimal points, dollar signs, etc.,
// and extracting numbers and the like.
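For example, a quick sketch of stripping currency formatting before doing math (the input string is just an illustration):

var display = '$1,234.50';
// strip dollar signs and commas, then convert to a real number
var n = parseFloat(display.replace(/[$,]/g, ''));
// n === 1234.5, safe to add and subtract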
// dealing with objects and arrays and getting data out of them:
something['name'] // same as something.name in most situations
something[i].name
something[i][x]
something.name.x
var a = '<table>';
a += '<tr>';
a += '<td>';
// a = '<table><tr><td>'
.innerHTML or .append()
// used to add the string to the DOM / the HTML already in your something.html file.
JSON.parse() // may come up
There are far better examples out there than I can give, but hopefully you'll pick up some keywords here that lead to better internet searches.
What normally works for me: google search "javascript .innerHTML", open 4 to 10 of the results that come back, and I usually find enough to satisfy what I want.
jtables.com, I want to say, or datatables.net, deal with spreadsheet-like columns and with sorting data that the end user can do.
Cookies, localStorage, IndexedDB: localStorage is most likely the easiest of the three, with enough power for a simple application storing information.
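A minimal localStorage sketch (the key name here is arbitrary):

// save: localStorage only stores strings, so serialize first
localStorage.setItem('columnTotals', JSON.stringify([1234.5, 867.25]));
// load: falls back to an empty array if nothing was saved yet
var totals = JSON.parse(localStorage.getItem('columnTotals')) || [];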
Related
I want to search a sheet for a value, and then return the row and the column where the value was found.
I am well-versed in VBA and have used the .Find function to accomplish this very easily. However, after searching online for the last 30 minutes, I have been stunned to discover that it is so hard to find the code for this extremely simple function. I feel like I am in the twilight zone. Does JavaScript really not have anything analogous to .Find? If so, why is this language used so much when VBA appears to accomplish the same tasks in a much simpler manner? Please advise.
I assume you are calling something like mysheet.getDataRange().getValues(), which returns the contents of a Google Sheet as an array of arrays, e.g., [[row1A, row1B], [row2A, row2B]].
With JS, you can get the index of a value in an array using indexOf, which returns the index of the found item or -1 if the item is not in the array. I don't think you can directly search across the two nested arrays. Instead, you could try iterating over the outer array and searching the inner array. Something like this:
// get the data from the google sheet, doing something like this
var data = mysheet.getDataRange().getValues();

// define our own function
function findItem(data, searchValue) {
  // loop over the outer array (the rows)
  for (var i = 0; i < data.length; i++) {
    var row = data[i]; // get the inner array
    var idx = row.indexOf(searchValue); // search it for the value
    // if the value is found, return [row, column]
    if (idx > -1) {
      return [i, idx];
    }
  }
  return null; // value not found anywhere
}

// call the function
var res = findItem(data, searchValue);
var row = res[0];
var col = res[1];
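For what it's worth, newer JavaScript runtimes (including Apps Script's V8 runtime) also have Array.prototype.findIndex, which reads a bit closer to VBA's .Find; here is the same search sketched with it, under the same assumptions as above:

function findItem2(data, searchValue) {
  for (var i = 0; i < data.length; i++) {
    // findIndex takes a predicate instead of a plain value
    var idx = data[i].findIndex(function (cell) { return cell === searchValue; });
    if (idx > -1) return [i, idx];
  }
  return null;
}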
You're comparing apples and oranges. JavaScript and VBA are different languages with different goals. VBA was built to integrate seamlessly with the Microsoft Office applications. JavaScript in its native form has no relational-database functionality at all; it's meant more for manipulating web pages through the DOM. (It can do more than that, but that's its primary function.) While VBA can do some of the things JavaScript can do, it's a rather clunky way (IMHO) of doing so, and narrowly focused on a rather specific set of problems tied to very specific software and hardware infrastructures. While that functionality might be nice to have in some cases, most of the JavaScript you see on the web today isn't interested in databases at all.
It's not clear to me what source of data you're trying to connect to, but if you are specifically looking for a JavaScript data solution you might want to look into something like MongoDB and the code libraries that have been developed specifically for it. There are tons of other JS libraries that are relational-data or database-specific, and you can search places like npm for those. Or you can combine JavaScript with languages that inherently include database functionality; PHP is an excellent example of that.
I have found these great pivot table components on the web:
nicolaskruchten - pivottable
ZKOSS - pivottable
RJackson - pivottable
My problem (or question) lies in the fact that pivot tables inherently "pivot" data around a numeric data set, meaning the cell intersections can typically only be numeric in nature (SUM/COUNT/FIRST...).
Is there anything out there (or do you know how to modify a current component) that will show non-numeric data within the pivot "Value" or intersection?
My question is illustrated below.
As can be seen, the intersection data is actually the reports which each role has access to (grouped by the class of report)... I know there are other ways to represent this data; however, I really like the way the pivot viewers above can swap the data filters (columns).
Thanks in advance.
I can't speak for all the different pivot table implementations out there; some might have a built-in "aggregator" that will allow you to concatenate string values with formatting (to add a CR/LF, for example), but a quick look at the nicolaskruchten/pivottable source leads me to think you could easily add your own aggregator.
Just look at the file pivot.coffee, starting at line 22. You'll see the different aggregator templates, so you could probably add your own right there.
EDIT
I just looked at the rjackson/pivot.js implementation, and by the looks of it, if you add your own defaultSummarizeFunction at pivot.js line 586+, and then also add it to the select at lines 651+, you could create your own "concat" summarizeFunction.
EDIT 2
Well, turns out to be even easier than I thought. Using the rjackson/pivot.js one, all you have to do is provide your own summarizeFunction when you define your concatenable field:
{name: 'billed_amount', type: 'string', rowLabelable: false, summarizable: 'string', summarizeFunction: function(rows, field){
  var result = '',
      i = -1,
      m = rows.length;
  // concatenate each row's value, separated by <br>
  while (++i < m) {
    result = result + rows[i][field.dataSource] + '<br>';
  }
  return result;
}}
In this example, I took a field that would normally be summed and turned it into a concatenated one, simply by providing my own summarizeFunction.
Look at an example here: http://mrlucmorin.github.io/pivot.js/
Cheers
I have a multidimensional array that is something like this:
[0]string
[1]-->[0]string,[1]string,[2]string
[2]string
[3]string
[4]-->[0]string,[1]string,[2]string,[3]string,[4]string,[5]INFO
(I hope that makes sense)
where [1] and [4] are themselves arrays, so I can access INFO like myArray[4][5].
The length of the nested arrays ([1] and [4]) can vary.
I use this method to store, calculate, and distribute data across a pretty complicated form.
Not all the data that's stored in the array makes it to an input field, so not all of it is sent to the next page when the form's POST method is called.
I would like to access the array the same way on the next page as I do on the first.
Thoughts:
Method 1:
I figure I could load all the data into hidden fields, post everything, then get those values on the second page and load them all back into an array, but that would require over a hundred hidden fields.
Method 2:
I suppose I could also use .join() to concatenate the whole array into one string, load that into one input, post it, and use .split(",") to break it back up. But if I do that, I'm not sure how to handle the multidimensional aspect so that I can still access INFO like myArray[4][5] on page 2.
I will be accessing the array with JavaScript; the values that DO make it to inputs on page 1 will be accessed using PHP on page 2.
My question is: is there a better way to accomplish what I need, or how can I set up Method 2 mentioned above?
This solved my problem:
var str = JSON.stringify(fullInfoArray); // serialize the whole nested array
sessionStorage.fullInfoArray = str; // survives navigation to the next page
var newArr = JSON.parse(sessionStorage.fullInfoArray);
alert(newArr[0][2][1]);
If possible, you can use sessionStorage to store the string representation of your objects using JSON.stringify():
// store value
sessionStorage.setItem('myvalue', JSON.stringify(myObject));
// retrieve value
var myObject = JSON.parse(sessionStorage.getItem('myvalue'));
Note that sessionStorage has an upper limit to how much can be stored; I believe it's about 2.5MB, so you shouldn't hit it easily.
Keep the data in your PHP Session and whenever you submit forms update the data in session.
Every page you generate can be generated using this data.
OR
If you are using a modern browser, make use of HTML5 localStorage.
OR
Or you can continue with what you are doing :)
I have been experimenting with PostgreSQL and PL/V8, which embeds the V8 JavaScript engine into PostgreSQL. Using this, I can query into JSON data inside the database, which is rather awesome.
The basic approach is as follows:
CREATE OR REPLACE FUNCTION
json_string(data json, key text) RETURNS TEXT AS $$
  var obj = JSON.parse(data); // plv8 hands the json value to JS as a string
  return obj[key];
$$ LANGUAGE plv8 IMMUTABLE STRICT;

SELECT id, data FROM things WHERE json_string(data,'name') LIKE 'Z%';
Using V8, I can parse the JSON data into JS, return a field, and use this as a regular pg query expression.
BUT
On large datasets, performance can be an issue, as for every row I need to parse the data.
The parser is fast, but it is definitely the slowest part of the process and it has to happen every time.
What I am trying to work out (to finally get to an actual question) is if there is a way to cache or pre-process the JSON ... even storing a binary representation of the JSON in the table that could be used by V8 automatically as a JS object might be a win. I've had a look at using an alternative format such as messagepack or protobuf, but I don't think they will necessarily be as fast as the native JSON parser in any case.
THOUGHT
PG has blobs and binary types, so the data could be stored in binary; then we just need a way to marshal this into V8.
Postgres supports indexes on arbitrary function calls. The following index should do the trick (note that it must call the function on the same column and key used in your WHERE clause):
CREATE INDEX json_idx ON things (json_string(data,'name'));
The short version appears to be that with Pg's new json support, so far there's no way to store json directly in any form other than serialised json text. (This looks likely to change in 9.4)
You seem to want to store a pre-parsed form that's a serialised representation of how v8 represents the json in memory, and that's not currently supported. It's not even clear that v8 offers any kind of binary serialisation/deserialisation of json structures. If it doesn't do so natively, code would need to be added to Pg to produce such a representation and to turn it back into v8 json data structures.
It also wouldn't necessarily be faster:
If json were stored in a v8-specific binary form, queries that returned the normal json representation to clients would have to format it each time it was returned, incurring CPU cost.
A binary serialised version of json isn't the same thing as storing the v8 json data structures directly in memory. You can't write a data structure that involves any kind of graph of pointers out to disk directly; it has to be serialised. This serialisation and deserialisation has a cost, and it might not even be much faster than parsing the json text representation. It depends a lot on how v8 represents JavaScript objects in memory.
The binary serialised representation could easily be bigger, since most json is text and small numbers, where you don't gain any compactness from a binary representation. Since storage size directly affects the speed of table scans, value fetches from TOAST, decompression time required for TOASTed values, index sizes, etc., you could easily end up with slower queries and bigger tables.
I'd be interested to see whether an optimisation like what you describe is possible, and whether it'd turn out to be an optimisation at all.
To gain the benefits you want when doing table scans, I guess what you really need is a format that can be traversed without having to parse it and turn it into what's probably a malloc()'d graph of javascript objects. You want to be able to give a path expression for a field and grab it out directly from the serialised form where it's been read into a Pg read buffer or into shared_buffers. That'd be a really interesting design project, but I'd be surprised if anything like it existed in v8.
What you really need to do is research how the existing json-based object databases do fast searches for arbitrary json paths and what their on-disk representations are, then report back on pgsql-hackers. Maybe there's something to be learned from people who've already solved this - presuming, of course, that they have.
In the meantime, what I'd focus on is what the other answers here are doing: working around the slow point and finding other ways to do what you need. You could also look into helping to optimise the json parser, but depending on whether the v8 parser or some other one is in use, that might already be far past the point of diminishing returns.
I guess this is one of the areas where there's a trade-off between speed and flexible data representation.
Perhaps, instead of making the retrieval phase responsible for parsing the data, creating a new data type which could pre-process the JSON data on input might be a better approach?
http://www.postgresql.org/docs/9.2/static/sql-createtype.html
I don't have any experience with this, but it got me curious so I did some reading.
JSON only
What about something like the following (untested, BTW)? It doesn't address your question about storing a binary representation of the JSON; rather, it's an attempt to parse all of the JSON at once, for all of the rows you're checking, in the hope that it will yield higher performance by reducing the overhead of doing it individually for each row. If it succeeds at that, I'm thinking it may result in higher memory consumption, though.
The CREATE TYPE...set_of_records() stuff is adapted from the example on the wiki where it mentions that "You can also return records with an array of JSON." I guess it really means "an array of objects".
Is the id value from the DB record embedded in the JSON?
Version #1
CREATE TYPE rec AS (id integer, data text, name text);

CREATE FUNCTION set_of_records() RETURNS SETOF rec AS
$$
  var records = plv8.execute( "SELECT id, data FROM things" );
  var data = [];

  // Use a for loop instead if it gives better performance
  records.forEach( function ( rec, i, arr ) {
    data.push( rec.data );
  } );

  // join all rows' JSON and parse it in a single call
  data = "[" + data.join( "," ) + "]";
  data = JSON.parse( data );

  records.forEach( function ( rec, i, arr ) {
    rec.name = data[ i ].name;
  } );

  return records;
$$
LANGUAGE plv8;

SELECT id, data FROM set_of_records() WHERE name LIKE 'Z%'
Version #2
This one gets Postgres to aggregate / concatenate some values to cut down on the processing done in JS.
CREATE TYPE rec AS (id integer, data text, name text);

CREATE FUNCTION set_of_records() RETURNS SETOF rec AS
$$
  // note the trailing spaces inside the strings: without them the
  // concatenated SQL would run the keywords together
  var cols = plv8.execute(
    "SELECT " +
    "array_agg( id ORDER BY id ) AS id, " +
    "string_agg( data, ',' ORDER BY id ) AS data " +
    "FROM things"
  )[0];

  cols.data = JSON.parse( "[" + cols.data + "]" );

  var records = cols.id;

  // Use a for loop instead if it gives better performance
  records.forEach( function ( id, i, arr ) {
    arr[ i ] = {
      id : id,
      data : cols.data[ i ],
      name : cols.data[ i ].name
    };
  } );

  return records;
$$
LANGUAGE plv8;

SELECT id, data FROM set_of_records() WHERE name LIKE 'Z%'
hstore
How would the performance of this compare? Duplicate the JSON data into an hstore column at write time (or, if the performance somehow managed to be good enough, convert the JSON to hstore at select time) and use the hstore in your WHERE, e.g.:
SELECT id, data FROM things WHERE hstore_data -> 'name' LIKE 'Z%'
I heard about hstore from here: http://lwn.net/Articles/497069/
The article mentions some other interesting things:
PL/v8 lets you...create expression indexes on specific JSON elements and save them, giving you stored search indexes much like CouchDB's "views".
It doesn't elaborate on that and I don't really know what it's referring to.
There's a comment attributed as "jberkus" that says:
We discussed having a binary JSON type as well, but without a protocol to transmit binary values (BSON isn't at all a standard, and has some serious glitches), there didn't seem to be any point.
If you're interested in working on binary JSON support for PostgreSQL, we'd be interested in having you help out ...
I don't know if it would be useful here, but I came across this: pg-to-json-serializer. It mentions functionality for:
parsing JSON strings and filling postgreSQL records/arrays from it
I don't know if it would offer any performance benefit over what you've been doing so far though, and I don't really even understand their examples.
Just thought it was worth mentioning.
Here's an example of what I need in SQL:
SELECT name FROM employ WHERE name LIKE '%bro%'
How do I create a view like that in CouchDB?
The simple answer is that CouchDB views aren't ideal for this.
The more complicated answer is that this type of query tends to be very inefficient in typical SQL engines too, and so if you grant that there will be tradeoffs with any solution then CouchDB actually has the benefit of letting you choose your tradeoff.
1. The SQL Ways
When you do SELECT ... WHERE name LIKE %bro%, all the SQL engines I'm familiar with must do what's called a "full table scan". This means the server reads every row in the relevant table, and brute force scans the field to see if it matches.
You can do this in CouchDB 2.x with a Mango query using the $regex operator. The query would look something like this for the basic case:
{"selector":{
"name": {
"$regex": "bro"
}
}}
There do not appear to be any options exposed for case-sensitivity, etc. but you could extend it to match only at the beginning/end or more complicated patterns. If you can also restrict your query via some other (indexable) field operator, that would likely help performance. As the documentation warns:
Regular expressions do not work with indexes, so they should not be used to filter large data sets. […]
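For example (a sketch; the "type" field and its value are hypothetical here), pairing the regex with an equality condition that an index can serve first:

{
  "selector": {
    "type": "employee",
    "name": { "$regex": "bro" }
  }
}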
You can do a full scan in CouchDB 1.x too, using a temporary view:
POST /some_database/_temp_view
{"map": "function (doc) { if (doc.name && doc.name.indexOf('bro') !== -1) emit(null); }"}
This will look through every single document in the database and give you a list of matching documents. You can tweak the map function to also match on a document type, or to emit with a certain key for ordering — emit(doc.timestamp) — or some data value useful to your purpose — emit(null, doc.name).
2. The "tons of disk space available" way
Depending on your source data size you could create an index that emits every possible "interior string" as its permanent (on-disk) view key. That is to say for a name like "Dobros" you would emit("dobros"); emit("obros"); emit("bros"); emit("ros"); emit("os"); emit("s");. Then for a term like '%bro%' you could query your view with startkey="bro"&endkey="bro\uFFFF" to get all occurrences of the lookup term. Your index will be approximately the size of your text content squared, but if you need to do an arbitrary "find in string" faster than the full DB scan above and have the space this might work. You'd be better served by a data structure designed for substring searching though.
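A sketch of such a map function, assuming the field being indexed is doc.name:

function (doc) {
  if (doc.name) {
    var s = doc.name.toLowerCase();
    // emit every suffix: "dobros", "obros", "bros", "ros", "os", "s"
    for (var i = 0; i < s.length; i++) {
      emit(s.slice(i), null);
    }
  }
}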
Which brings us to...
3. The Full Text Search way
You could use a CouchDB plugin (couchdb-lucene now via Dreyfus/Clouseau for 2.x, ElasticSearch, SQLite's FTS) to generate an auxiliary text-oriented index into your documents.
Note that most full text search indexes don't naturally support arbitrary wildcard prefixes either, likely for similar reasons of space efficiency as we saw above. Usually full text search doesn't imply "brute force binary search", but "word search". YMMV though, take a look around at the options available in your full text engine.
If you don't really need to find "bro" anywhere in a field, you can implement a basic "find a word starting with X" search with regular CouchDB views, by just splitting on various locale-specific word separators and emitting these "words" as your view keys. This will be more efficient than the above, scaling proportionally to the amount of data indexed.
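Here is a rough sketch of that word-prefix approach (again assuming a doc.name field; the separator set is illustrative only):

function (doc) {
  if (doc.name) {
    // split on whitespace and common punctuation
    var words = doc.name.toLowerCase().split(/[\s,.;:!?()-]+/);
    for (var i = 0; i < words.length; i++) {
      if (words[i]) emit(words[i], null);
    }
  }
}

You'd then query it with startkey="bro"&endkey="bro\ufff0" to get every name containing a word that starts with "bro".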
Unfortunately, searching with LIKE %...% isn't really how CouchDB views work, but you can accomplish a great deal of search capability by installing couchdb-lucene. It's a full-text search engine that creates indexes on your database, against which you can do more sophisticated searches.
The typical way to "search" a database for a given key, without any 3rd party tools, is to create a view that emits the value you are looking for as the key. In your example:
function (doc) {
emit(doc.name, doc);
}
This outputs a list of all the names in your database.
Now, you would "search" based on the first letters of your key. For example, if you are searching for names that start with "bro":
/db/_design/test/_view/names?startkey="bro"&endkey="brp"
Notice I took the search parameter and "incremented" its last letter ("bro" becomes "brp"), so the range covers every key beginning with "bro". Again, if you want to perform searches, rather than aggregate statistics, you should use a fulltext search engine like lucene. (see above)
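That "increment the last letter" trick is easy to generate in code; here's a small hypothetical helper (naive: it doesn't handle a trailing "z" or non-ASCII keys):

// prefixRange("bro") => { startkey: "bro", endkey: "brp" }
function prefixRange(prefix) {
  var head = prefix.slice(0, -1);
  var last = prefix.charAt(prefix.length - 1);
  return {
    startkey: prefix,
    endkey: head + String.fromCharCode(last.charCodeAt(0) + 1)
  };
}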
You can use regular expressions. As per this table, you can write something like the following to return any id that contains "sms".
{
"selector": {
"_id": {
"$regex": "sms"
}
}
}
Basic regex patterns you can use with that include:
"sms$" is roughly equivalent to LIKE "%sms"
"^sms" is roughly equivalent to LIKE "sms%"
You can read more on regular expressions here
I found simple view code for my problem...
{"getavailableproduct": {
"map": "function(doc) { var prefix = doc['productid'].match(/[A-Za-z0-9]+/g); if(prefix) for(var pre in prefix) { emit(prefix[pre],null); } }"
}
}
With this view code I split a key sentence into keywords, and I can call
?key="[search_keyword]"
but I need more complex code, because as it stands I can only find the exact word I type (e.g. eat, food, etc.)...
If I type only part of a word (e.g. ea from eat, or foo from food), this code does not work.
I know it is an old question, but what about using a "list" function? You can keep all your normal views, and then add a "list" function to the design document to process the view's results:
{
"_id": "_design/...",
"views": {
"employees": "..."
},
"lists": {
"by_name": "..."
}
}
And the function attached to "by_name" should be something like:
function (head, req) {
  provides('json', function() {
    var filtered = [];
    while (row = getRow()) {
      // We can retrieve all row information from the view
      var key = row.key;
      var value = row.value;
      var doc = req.query.include_docs ? row.doc : {};
      if (value.name.indexOf(req.query.name) == 0) {
        if (req.query.include_docs) {
          filtered.push({ key: key, value: value, doc: doc });
        } else {
          filtered.push({ key: key, value: value });
        }
      }
    }
    return toJSON({ total_rows: filtered.length, rows: filtered });
  });
}
You can, of course, use regular expressions too. It's not a perfect solution, but it works for me.
You could emit your documents like normal: emit(doc.name, null). I would throw a .toLowerCase() on that name to remove case sensitivity.
Then query the view with a slew of keys to see if something "like" the query shows up:
keys = differentVersions("bro"); // returns ["bro", "br", "bo", "ro", "cro", "dro", ..., "zro"]
$.couch("db").view("employeesByName", { keys: keys, success: dealWithIt } )
Some considerations
Obviously that array can get really big really fast, depending on what differentVersions returns. You might hit a POST data limit at some point, or conceivably get slow lookups.
The results are only as good as differentVersions is at guessing what the person meant to spell. Obviously this function can be as simple or complex as you like. In this example I tried two strategies: (a) remove a letter and push that, and (b) replace the letter at position n with every other letter. So if someone who had been looking for "bro" typed in "gro", "bri", or even "bgro", differentVersions would have permuted that back to "bro" at some point.
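differentVersions isn't spelled out above, so here is one hypothetical sketch of those two strategies:

function differentVersions(term) {
  var letters = 'abcdefghijklmnopqrstuvwxyz';
  var seen = {};
  seen[term] = true;
  for (var i = 0; i < term.length; i++) {
    // strategy (a): drop the letter at position i ("bgro" yields "bro")
    seen[term.slice(0, i) + term.slice(i + 1)] = true;
    // strategy (b): replace the letter at position i with every other letter
    for (var j = 0; j < letters.length; j++) {
      seen[term.slice(0, i) + letters[j] + term.slice(i + 1)] = true;
    }
  }
  return Object.keys(seen); // deduplicated list of guesses
}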
While not ideal, it's still pretty fast, since lookups in Couch's B-trees are fast.
Why can't we just use indexOf() in a view?