I have a hybrid Cordova app which uses the JavaScript azure-mobile-apps-js-client to communicate with the server. Data is also synchronized into a SQLite DB on the device.
I need to implement a search against Person entities by their Full Names.
All persons whose Full Name contains the search term should be returned.
Something like "LIKE" in SQL.
I read this article but didn't find a way to do that.
It seems this client only supports operations like =, >, <.
Does that mean I need to retrieve all records from the table and filter them on the client (which sounds weird to me), or am I just missing something?
Thanks.
I finally found a way to do that using the JavaScript String indexOf function.
//Declare a query
function queryFunction(term){
return this.FullName.indexOf(term) != -1
}
//Pass it to where function
table
.where(queryFunction, term)
.read()
.then(success, failure);
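For context, here is a minimal sketch of how the whole call can fit together with the Azure Mobile Apps JS client; the app URL, table name and example search term are placeholders, and getSyncTable applies only if you are using offline sync with the local SQLite store (use getTable otherwise):
var client = new WindowsAzure.MobileServiceClient('https://myapp.azurewebsites.net');
// Local (synchronized) table backed by the SQLite store
var table = client.getSyncTable('Person');

function queryFunction(term) {
    // The client translates this predicate into a "contains" style filter
    return this.FullName.indexOf(term) !== -1;
}

table
    .where(queryFunction, 'john') // 'john' is bound to the term parameter
    .read()
    .then(function (results) { console.log(results); },
          function (error) { console.error(error); });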
I am using Breezejs in 'NoDB' mode, meaning I write my metadata by hand. When I create a Breeze query with OData parameters I add a filter by id, say
new breeze.Predicate('iD', datacontext.breeze.FilterQueryOp.Equals, myId)
The var myId is indeed a GUID value (though it's defined as a String), but in my DB and in both my server-side and client-side model it's a string (I can't change the DB structure). The property definition in my metadata model is
dataProperties: {
...
iD: { dataType: DataType.String },
...
}
(I know the property name looks weird, but I have to use this syntax since I have the breeze.NamingConvention.camelCase.setAsDefault() on my datacontext, and the property's name on the DB is ID uppercased)
When I execute the query I see that the corresponding oData filter option in the WebAPI url is like
$filter=ID eq guid'65BEB144-5C0C-4481-AC70-5E61FDAA840D'
which leads me to this server error: No coercion operator is defined between types 'System.Guid' and 'System.String'.
Is there a way to disable this automatic 'parsing' of GUIDs and leave them as strings?
I have temporarily solved this by removing the parsing directly inside breeze's source code so that my webAPI call would look like
$filter=ID eq '65BEB144-5C0C-4481-AC70-5E61FDAA840D'
but I don't like this solution and I would be glad if there were a better one, like parameterizing this behaviour in some way. I didn't find anything about this on Breeze's official website.
Breeze uses its metadata to determine the datatype of each property in a query and then uses this information to generate the correct OData filter. So your metadata definition of ID as a string should be correct.
However, in order to perform this operation breeze needs to know the EntityType of your query. For example in the following query
var q = EntityQuery.from("Foo").where(....)
breeze needs to know the EntityType that "Foo" (a resourceName) corresponds to. Once it has the entity type it can correctly format any filters for specific properties of this entityType. If breeze does not have the EntityType, then it falls back to guessing about the datatype of each property. In your case, it's guessing that the datatype is a 'Guid'.
So the fix is to either tell the query directly about the EntityType that you are querying
var q = breeze.EntityQuery.from("Foo).where(....).toType(FoosEntityType);
or you can handle it more globally via the MetadataStore.setEntityTypeForResourceName method.
breeze.MetadataStore.setEntityTypeForResourceName("Foo", FoosEntityType);
var q = breeze.EntityQuery.from("Foo).where(....); // your original query
I am looking for a visual JavaScript WHERE clause builder that also supports scalar functions. E.g.,
...
WHERE COL_A = '1234'
AND IS_HOLIDAY(DATECOL, LOCALITY)
AND (BASE_MEASURE(UOM, QTY) > 1234 OR AMOUNT > 1234)
...
The ones I have seen lack support for functions. (Like in MS Excel.) They also tend to pull the table schema directly from the database, which is not suitable for me, especially as I have just one set to search and no access to the database.
What I have: a list of searchable column names and their data types, and several sets of functions (simple mathematical/date/string ones and many homebrewed ones encapsulating business rules) with their arguments and return types. Null support is not important, nor are function calls within function calls. What I need is a JavaScript library that I can configure with this metadata and let loose in a DOM element; it manages the user interaction and returns a WHERE clause string.
I would appreciate it if you could point me to some good starting code, to an outline of how to add function support, or both.
Thanks.
I am building a query engine for a database which is pulling data from SQL and other sources. For normal use cases the users can use a web form where the user can specify filtering parameters with select and ranged inputs. But for advanced use cases, I'd like to provide a filtering equation box where the users could type
AND, OR
Nested parentheses
variable names
>, <, =, != operators
So the filtering equation could look something like:
((age > 50) or (weight > 100)) and diabetes='yes'
Then this input would be parsed, input errors detected (non-existent variable names, etc.) and SQLAlchemy queries built based on it.
I saw an earlier post about a similar problem: https://stackoverflow.com/a/1395854/315168
There seem to be several language and mini-language parsers for Python: http://navarra.ca/?p=538
However, does there exist any package which would be an out-of-the-box solution, or close to one, for my problem? If not, what would be the simplest way to construct such a query parser and constructor in Python?
Have a look at https://github.com/dfilatov/jspath
It's similar to XPath, so the syntax isn't as familiar as SQL, but it's powerful for querying hierarchical data.
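For a rough idea of the syntax, a filter comparable to the expression above could look something like this (an untested sketch; the people collection and property names are made-up placeholders):
var JSPath = require('jspath');

// People over 50 or over 100 kg who also have diabetes
var matches = JSPath.apply(
    '.people{(.age > 50 || .weight > 100) && .diabetes === "yes"}',
    {
        people: [
            { age: 62, weight: 80, diabetes: 'yes' },
            { age: 30, weight: 70, diabetes: 'no' }
        ]
    }
);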
I don't know if this is still relevant to you, but here is my answer:
Firstly I have created a class that does exactly what you need. You may find it here:
https://github.com/snow884/filter_expression_parser/
It takes a list of dictionaries plus a filter query as input and returns the filtered results. You just have to define the list of fields that are allowed, plus functions for checking the format of the constants passed as part of the filter expression.
The filter expression it ingests has to have the following format:
(time > 45.34) OR (((user_id eq 1) OR (date gt '2019-01-04')) AND (username ne 'john.doe'))
or just
username ne 'john123'
Secondly, it was foolish of me to even create this code, because DataFrame.query(...) from pandas already does almost exactly what you need: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html
Here's an example of what I need in sql:
SELECT name FROM employ WHERE name LIKE '%bro%'
How do I create a view like that in CouchDB?
The simple answer is that CouchDB views aren't ideal for this.
The more complicated answer is that this type of query tends to be very inefficient in typical SQL engines too, and so if you grant that there will be tradeoffs with any solution then CouchDB actually has the benefit of letting you choose your tradeoff.
1. The SQL Ways
When you do SELECT ... WHERE name LIKE %bro%, all the SQL engines I'm familiar with must do what's called a "full table scan". This means the server reads every row in the relevant table, and brute force scans the field to see if it matches.
You can do this in CouchDB 2.x with a Mango query using the $regex operator. The query would look something like this for the basic case:
{"selector":{
"name": {
"$regex": "bro"
}
}}
There do not appear to be any options exposed for case-sensitivity, etc. but you could extend it to match only at the beginning/end or more complicated patterns. If you can also restrict your query via some other (indexable) field operator, that would likely help performance. As the documentation warns:
Regular expressions do not work with indexes, so they should not be used to filter large data sets. […]
You can do a full scan in CouchDB 1.x too, using a temporary view:
POST /some_database/_temp_view
{"map": "function (doc) { if (doc.name && doc.name.indexOf('bro') !== -1) emit(null); }"}
This will look through every single document in the database and give you a list of matching documents. You can tweak the map function to also match on a document type, or to emit with a certain key for ordering — emit(doc.timestamp) — or some data value useful to your purpose — emit(null, doc.name).
2. The "tons of disk space available" way
Depending on your source data size you could create an index that emits every possible "interior string" as its permanent (on-disk) view key. That is to say for a name like "Dobros" you would emit("dobros"); emit("obros"); emit("bros"); emit("ros"); emit("os"); emit("s");. Then for a term like '%bro%' you could query your view with startkey="bro"&endkey="bro\uFFFF" to get all occurrences of the lookup term. Your index will be approximately the size of your text content squared, but if you need to do an arbitrary "find in string" faster than the full DB scan above and have the space this might work. You'd be better served by a data structure designed for substring searching though.
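A map function for such an index might look roughly like this (a sketch; the name field and the lowercasing are assumptions):
function (doc) {
  if (doc.name) {
    var s = doc.name.toLowerCase();
    // Emit every "interior string" (suffix) so a startkey/endkey range can match mid-word
    for (var i = 0; i < s.length; i++) {
      emit(s.slice(i), null);
    }
  }
}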
Which brings us to...
3. The Full Text Search way
You could use a CouchDB plugin (couchdb-lucene now via Dreyfus/Clouseau for 2.x, ElasticSearch, SQLite's FTS) to generate an auxiliary text-oriented index into your documents.
Note that most full text search indexes don't naturally support arbitrary wildcard prefixes either, likely for similar reasons of space efficiency as we saw above. Usually full text search doesn't imply "brute force binary search", but "word search". YMMV though, take a look around at the options available in your full text engine.
If you don't really need to find "bro" anywhere in a field, you can implement a basic "find a word starting with X" search with regular CouchDB views by just splitting on various locale-specific word separators and emitting these "words" as your view keys. This will be more efficient than the above, scaling proportionally to the amount of data indexed.
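A sketch of that kind of view, splitting on whitespace and a few punctuation characters (the separator set is just an example):
function (doc) {
  if (doc.name) {
    // Emit each lowercased word as its own key, so startkey="bro"&endkey="bro\ufff0" finds words starting with "bro"
    doc.name.toLowerCase().split(/[\s\-.,;:]+/).forEach(function (word) {
      if (word) emit(word, null);
    });
  }
}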
Unfortunately, doing searches using LIKE %...% isn't really how CouchDB views work, but you can accomplish a great deal of search capability by installing couchdb-lucene; it's a full-text search engine that creates indexes on your database so you can do more sophisticated searches.
The typical way to "search" a database for a given key, without any 3rd party tools, is to create a view that emits the value you are looking for as the key. In your example:
function (doc) {
emit(doc.name, doc);
}
This outputs a list of all the names in your database.
Now, you would "search" based on the first letters of your key. For example, if you are searching for names that start with "bro".
/db/_design/test/_view/names?startkey="bro"&endkey="brp"
Notice I took the search parameter and "incremented" its last letter. Again, if you want to perform searches, rather than aggregating statistics, you should use a full-text search engine like Lucene (see above).
You can use regular expressions. As per this table, you can write something like this to return any _id that contains "sms".
{
  "selector": {
    "_id": {
      "$regex": "sms"
    }
  }
}
Basic regex patterns you can use on that include:
"sms$", roughly equivalent to LIKE "%sms"
"^sms", roughly equivalent to LIKE "sms%"
You can read more on regular expressions here
I found some simple view code for my problem...
{"getavailableproduct": {
"map": "function(doc) { var prefix = doc['productid'].match(/[A-Za-z0-9]+/g); if(prefix) for(var pre in prefix) { emit(prefix[pre],null); } }"
}
}
With this view code I split a key sentence into keywords,
and I can call
?key="[search_keyword]"
but I need more complex code, because if I run this I can only find the exact word I type (e.g. eat, food, etc.);
if I type only part of a word (e.g. ea from eat, or foo from food), this code does not work.
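One possible extension (an untested sketch) is to emit every prefix of each token, so that a partial key like foo also matches food:
function (doc) {
  var tokens = (doc['productid'] || '').match(/[A-Za-z0-9]+/g) || [];
  tokens.forEach(function (token) {
    token = token.toLowerCase();
    // Emit "e", "ea", "eat", ... so ?key="ea" matches documents containing "eat"
    for (var len = 1; len <= token.length; len++) {
      emit(token.slice(0, len), null);
    }
  });
}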
I know it is an old question, but: what about using a "list" function? You can have all your normal views, and then add a "list" function to the design document to process the view's results:
{
"_id": "_design/...",
"views": {
"employees": "..."
},
"lists": {
"by_name": "..."
}
}
And the function attached to "by_name" should be something like:
function (head, req) {
  provides('json', function () {
    var filtered = [];
    var row;
    while ((row = getRow())) {
      // We can retrieve all row information from the view
      var key = row.key;
      var value = row.value;
      var doc = req.query.include_docs ? row.doc : {};
      if (value.name.indexOf(req.query.name) == 0) {
        if (req.query.include_docs) {
          filtered.push({ key: key, value: value, doc: doc });
        } else {
          filtered.push({ key: key, value: value });
        }
      }
    }
    return toJSON({ total_rows: filtered.length, rows: filtered });
  });
}
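You would then call the list on top of the view with a URL like this (the design document name employees_dd is just a placeholder, since the real name is elided above):
GET /db/_design/employees_dd/_list/by_name/employees?name=bro&include_docs=true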
You can, of course, use regular expressions too. It's not a perfect solution, but it works for me.
You could emit your documents like normal: emit(doc.name, null); I would throw a toLowerCase() on that name to remove case sensitivity.
Then query the view with a slew of keys to see if something "like" the query shows up.
keys = differentVersions("bro"); // returns ["bro", "br", "bo", "ro", "cro", "dro", ..., "zro"]
$.couch("db").view("employeesByName", { keys: keys, success: dealWithIt } )
Some considerations
Obviously that array can get really big really fast depending on what differentVersions returns. You might hit a post data limit at some point or conceivably get slow lookups.
The results are only as good as differentVersions is at giving you guesses for what the person meant to spell. Obviously this function can be as simple or complex as you like. In this example I tried two strategies, a) removed a letter and pushed that, and b) replaced the letter at position n with all other letters. So if someone had been looking for "bro" but typed in "gro" or "bri" or even "bgro", differentVersions would have permuted that to "bro" at some point.
While not ideal, it's still pretty fast since a look up in Couch's b-trees is fast.
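For illustration, a minimal sketch of what differentVersions could look like, following the two strategies described above (the exact permutation rules are just an example):
function differentVersions(term) {
  var variants = [term];
  var letters = 'abcdefghijklmnopqrstuvwxyz';
  for (var i = 0; i < term.length; i++) {
    // a) drop the letter at position i
    variants.push(term.slice(0, i) + term.slice(i + 1));
    // b) replace the letter at position i with every other letter
    for (var j = 0; j < letters.length; j++) {
      if (letters.charAt(j) !== term.charAt(i)) {
        variants.push(term.slice(0, i) + letters.charAt(j) + term.slice(i + 1));
      }
    }
  }
  return variants;
}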
Why can't we just use indexOf() in a view?
How do you fix a name mismatch problem if the client-side names are keywords or reserved words in the server-side language you are using?
The DOJO JavaScript toolkit has a QueryReadStore class that you can subclass to submit REST patterned queries to the server. I'm using this in conjunction w/ the FilteringSelect Dijit.
I can subclass the QueryReadStore and specify the parameters and arguments getting passed to the server. But somewhere along the way, a "start" and "count" parameter are being passed from the client to the server. I went into the API and discovered that the QueryReadStore.js is sending those parameter names.
I'm using Fiddler to confirm what's actually being sent and brought back. The server response is telling me I have a parameter names mismatch, because of the "start" and "count" parameters. The problem is, I can't use "start" and "count" in PL/SQL.
Workaround or correct implementation advice would be appreciated...thx.
//I tried putting the code snippet in here, but since it's largely HTML, that didn't work so well.
While it feels like the wrong thing to do, because I'm hacking at a well tested, nicely written JavaScript toolkit, this is how I fixed the problem:
I went into the DOJOX QueryReadStore.js and replaced the "start" and "count" references with acceptable (to the server-side language) parameter names.
I would have liked to handle the issue via my PL/SQL (but I don't know how to get around reserved words) or client-side code (subclassing did not do the trick)...without getting into the internals of the library. But it works, and I can move on.
As opposed to removing it from the API, as you mentioned, you can actually create a subclass with your own fetch, and remove start/count parameters (theoretically). Have a look at this URL for guidance:
http://www.sitepen.com/blog/2008/06/25/web-service-data-store/
Start and count are actually very useful because they allow you to pass params for the query that you can use to filter massive data sets, and they help to manage client-side paging. I would try to subclass instead, intercept, and remove.
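As a rough, untested sketch of that idea (the subclass name and the p_start/p_count parameter names are placeholders, and depending on the QueryReadStore version you may also need to override _fetchItems if the base class still appends start/count itself):
dojo.require("dojox.data.QueryReadStore");

dojo.declare("my.PlsqlQueryReadStore", dojox.data.QueryReadStore, {
    fetch: function (request) {
        // Send the paging values under names that are not PL/SQL reserved words
        request.serverQuery = dojo.mixin({}, request.query, {
            p_start: request.start || 0,
            p_count: request.count || 25
        });
        return this.inherited(arguments);
    }
});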
Is your PL/SQL program accessed via a URL and mod_plsql? If so, you can use "flexible parameter passing" and the variables get assigned to arrays of name/value pairs.
Define your package spec like this...
CREATE OR REPLACE PACKAGE pkg_name AS

  TYPE plsqltable IS TABLE OF VARCHAR2 (32000) INDEX BY BINARY_INTEGER;

  empty plsqltable;

  PROCEDURE api (name_array  IN plsqltable DEFAULT empty,
                 value_array IN plsqltable DEFAULT empty);

END pkg_name;
Then the body:
CREATE OR REPLACE PACKAGE BODY pkg_name AS

  l_count_value NUMBER;
  l_start_value NUMBER;

  PROCEDURE api (name_array  IN plsqltable DEFAULT empty,
                 value_array IN plsqltable DEFAULT empty) IS

    -- Look up a parameter value by name in the name/value arrays
    FUNCTION get_value (p_name IN VARCHAR2) RETURN VARCHAR2 IS
    BEGIN
      FOR i IN 1 .. name_array.COUNT LOOP
        IF UPPER(name_array(i)) = UPPER(p_name) THEN
          RETURN value_array(i);
        END IF;
      END LOOP;
      RETURN NULL;
    END get_value;

  BEGIN
    l_count_value := get_value('count');
    l_start_value := get_value('start');
  END api;

END pkg_name;
Then you can call pkg_name.api using
http://server/dad/!pkg_name.api?start=3&count=3