I am writing an Express app where I push data from my views to a database. But most of the data is mapped to other data in database tables.
For example, there is a "choose student name" drop-down: once you choose a student by name, a drop-down below it shows all the roles allowed for him.
So I'm following this pattern:
app.post('/action1', function (req, res) {
  function querySomething() {
    var deferred = Q.defer();
    connection.query(some_select_query, deferred.makeNodeResolver());
    return deferred.promise;
  }
  function querySomethingElse() {
    var deferred = Q.defer();
    connection.query(some_other_select_query, deferred.makeNodeResolver());
    return deferred.promise;
  }
  Q.all([querySomething(), querySomethingElse()]).then(function (results) {
    connection.release();
    res.render('some_view.ejs', {
      result1: results[0][0],
      result2: results[1][0]
    });
  }, function (err) {
    connection.release();
    res.render('error.ejs', {});
  });
});
Now the problem is that I have to follow this pattern of selecting something from multiple tables, passing all these functions to a promise, and, when the results come back, going to my view with all those result objects so that I can use them in my view to build drop-downs that depend on one another.
Sometimes I have to re-write this multiple times.
Doing a select query like this is performance-intensive, especially if all views use the result of the same query.
Is there any way I can build a cached data store on my Express server side and query that instead of the actual database?
If there is an insert or an update, I will refresh this store and just do a new select * that one time.
What libraries are there on top of Express that will help me do this?
Does mysql-cache do the same thing? I'm also using connection pooling with createPool.
How do I achieve this, or do I just resort to using a big MVC like Sails to rewrite my app?
You can try the apicache npm module.
"Sometimes I have to re-write this multiple times."
Based on the business need, you may want to handle each use case separately and this scenario doesn't deal with caching.
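That said, if the repetition is mostly boilerplate, you can factor the deferred pattern into a small helper. A minimal sketch, reusing the question's own Q and connection objects:

// wrap any select query in a promise, so the Q.defer() dance isn't repeated
function runQuery(sql) {
  var deferred = Q.defer();
  connection.query(sql, deferred.makeNodeResolver());
  return deferred.promise;
}

// usage:
// Q.all([runQuery(some_select_query), runQuery(some_other_select_query)])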
Doing a select query like this is performance-intensive, especially if all views use the result of the same query.
This is a classic example for the need of server-side caching.
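A minimal sketch of what that could look like with apicache (the routes shown are hypothetical):

var apicache = require('apicache');
var cache = apicache.middleware;

// serve the cached response for five minutes instead of re-running the selects
app.get('/students', cache('5 minutes'), function (req, res) {
  // ...run the select queries and render the view as before
});

app.post('/students', function (req, res) {
  // ...perform the insert or update, then clear the cache so the next
  // GET re-queries the database exactly once
  apicache.clear();
});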
I am new to nodejs/Express.js development.
I have built my backend service with Express.js / Typescript and I have multiple routes / api endpoints defined. One is like this:
app.post('/api/issues/new', createNewIssue);
where the browser sends a POST request when a user submits a new photo (also called an issue in my app).
The user can send an issue to another user, and the backend will first query the database to find the number of issues that matches the conditions of "source user" and "destination user", and then give the new issue an identifying ID in the form srcUser-dstUser-[number], where number is the auto-incremented count.
The createNewIssue function is like this:
export const createNewIssue = catchErrors(async (req, res) => {
  const srcUser = req.header('src_username');
  const dstUser = req.header('dst_username');
  // query database for number of issues matching "srcUser" and "dstUser"
  ...
  const lastIssues = await Issue.find({ where: { "srcUser": srcUser, "dstUser": dstUser }, order: { id: 'DESC' } });
  const count = lastIssues.length;
  // create a new issue entity with the ID `${srcUser}-${dstUser}-${count + 1}`
  const newIssue = await createEntity(Issue, {
    ...
    id: `${srcUser}-${dstUser}-${count + 1}`,
    ...
  });
  res.respond({ newIssue: newIssue });
});
Say the backend receives multiple requests with the same srcUser and dstUser attributes at the same time: will there be collisions where multiple new issues are created with the same ID?
I have read some documentation about Node.js being single-threaded, but I'm not sure what that means concretely for this specific scenario.
Besides the business logic in this scenario, I have some general confusion about Express / Node.js:
When there is only one CPU core, Express processes multiple concurrent requests asynchronously: it starts processing one and does not wait for it to finish, but instead continues to process the next one. Is this understanding accurate?
When there are multiple CPU cores, does Express / Node.js utilize them all in the same manner?
Node.js will not solve this problem for you automatically.
While it will only deal with one thing at a time, it is entirely possible that Request 2 will request the latest ID in the database while Request 1 has hit the await statement at the same point and gone to sleep. This would mean they get the same answer and would each try to create a new entry with the same ID.
You need to write your JavaScript to make sure that this doesn't happen.
The usual ways to handle this would be to either:
Let the database (and not your JavaScript) handle the ID generation, usually by using a sequence (see the sketch below).
Use transactions so that the request for the latest ID and the insertion of the new row are treated as one operation by the database (so it won't start the same operation for Request 2 until the select and insert for Request 1 are both done).
Test to make sure createEntity is successful (and doesn't throw a 'duplicate id' error) and try again if it fails (with a limit on attempts; if it keeps failing, return an error message to the client).
The specifics depend on which database you use. I linked to the Postgresql documentation for the sake of example.
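For example, here is a minimal sketch of the first option with node-postgres, assuming a Postgres sequence named issue_seq and an issues table (both names are hypothetical):

const { Pool } = require('pg');
const pool = new Pool();

async function createIssue(srcUser, dstUser, payload) {
  // nextval() is atomic, so concurrent requests can never receive the same number
  const { rows } = await pool.query("SELECT nextval('issue_seq') AS n");
  const id = `${srcUser}-${dstUser}-${rows[0].n}`;
  await pool.query('INSERT INTO issues (id, payload) VALUES ($1, $2)', [id, payload]);
  return id;
}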
It works perfectly with a single endpoint.
With apollo-link-rest, I have made a client that looks like this:
const restLink = new RestLink({ uri: "https://example.com/" })
And I export the client with a new ApolloClient({...}).
Now to the question.
On the same server https://example.com/, there are multiple endpoints, all with the same fields but different data in each.
The first query, which works, looks like this:
export const GET_PRODUCTS = gql`
query firstQuery {
products @rest(type: "data", path: "first/feed") { # the path could be second/feed and it will work with different data
id
title
}
}
`
I want to merge all these different paths into one and the same JSON feed, since they all have the same fields but different data.
Using aliases
You should be able to use the standard aliasing method for making similar queries: you fetch the same shape (node name) several times under different aliases. This is described here. With the question's query it could look like this:
query bothFeeds {
  first: products @rest(type: "data", path: "first/feed") {
    id
    title
  }
  second: products @rest(type: "data", path: "second/feed") {
    id
    title
  }
}
Paths can be created using variables
Results can be easily combined by iterating over the data object. The problem? This only works for a fixed (small) number of endpoints, since a query shouldn't be generated by string manipulation.
Multiple graphql queries
You can create parametrized queries in a loop, using Promise.all() and apollo-client's client.query(). The results need to be combined into one as well.
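A minimal sketch, assuming GET_PRODUCTS has been parametrized to take the path as a variable (apollo-link-rest can interpolate {args.path} in the @rest directive):

const paths = ['first/feed', 'second/feed'];
Promise.all(
  paths.map(path => client.query({ query: GET_PRODUCTS, variables: { path } }))
).then(results => {
  // merge the per-endpoint product lists into a single array
  const products = results.reduce((all, r) => all.concat(r.data.products), []);
  console.log(products);
});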
Custom fetch
Using a custom fetch, you can create a query that takes an array of paths. In this case the resolver should use Promise.all() on the parametrized fetch requests. The combined results can be returned as a single node (as required).
Drawbacks
All these methods need multiple requests. That can be avoided by building a server-side REST wrapper (docs or blog).
I don't know the best way to handle huge Mongo databases with Meteor.
In my example I have a database collection of addresses with their geo locations. (The code snippets are just examples.)
Example:
{
address : 'Some Street',
geoData : [lat, long]
}
Now I have a form where the user can enter an address to get the geo data. Very simple. But the problem is that the collection with the geo data has millions of documents in it.
In Meteor you have to publish a collection on Server side and to subscribe on Client and Server side. So my code is like this:
// Client / Server
Geodata = new Meteor.Collection('geodata');
// Server side
Meteor.publish('geodata', function(){
return Geodata.find();
});
// Client / Server
Meteor.subscribe('geodata');
Now a person has filled in the form, and I get the data. After this I search for the right document to return. My method is this:
// Server / Client
Meteor.methods({
getGeoData : function (address) {
return Geodata.find({address : address});
}
});
The result is the right one, and this works. But my question is now:
What is the best way to handle this example with a huge database? The problem is that Meteor saves the whole collection in the user's cache once I subscribe to it. Is there a way to subscribe to just the results I need, so that when the user reuses the form I can overwrite the subscription? Or is there another good way to keep performance up with a huge database used the way it is in my example?
Any ideas?
Yes, you can do something like this:
// client
Deps.autorun(function () {
  // will re-subscribe whenever the 'center' Session variable changes
  Meteor.subscribe("locations", Session.get('center'));
});
// server
Meteor.publish('locations', function (centerPoint) {
  // sanitize the input
  check(centerPoint, { lat: Number, lng: Number });
  // return a limited number of documents relevant to our app;
  // $near must be nested under the location field being queried
  return Locations.find({ geoData: { $near: centerPoint, $maxDistance: 500 } }, { limit: 50 });
});
Your clients would ask only for some subset of the data at a time, i.e. you don't need the entire collection most of the time; usually you need some specific subset. You can then ask the server to keep you up to date only for that particular subset. Bear in mind that the more distinct "publish requests" your clients make, the more work there is for your server to do, but that's how it is usually done (this is the simplified version).
Notice how we subscribe in a Deps.autorun block which will resubscribe depending on the center Session variable (which is reactive). So your client can just check out a different subset of data by changing this variable.
When it doesn't make sense to ship your entire collection to the client, you can use methods to retrieve data from the server.
In your case, you can call the getGeoData function when the form is filled out and then display the results after the method returns. Try taking the following steps:
Clearly divide your client and server code into their respective client and server directories if you haven't already.
Remove the geodata subscription on the server (only clients can activate subscriptions).
Remove the geodata publication on the server (assuming this isn't needed anymore).
Define the getGeoData method only on the server. It should return an object, not a cursor, so use findOne instead of find (see the sketch after these steps).
In your form's submit event, do something like:
Meteor.call('getGeoData', address, function(err, geoData){Session.set('geoDataResult', geoData)});
You can then display the geoDataResult data in your template.
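A minimal sketch of the server-side method from step 4 might look like this:

// server only
Meteor.methods({
  getGeoData: function (address) {
    check(address, String); // validate the input
    // findOne returns a plain object rather than a cursor
    return Geodata.findOne({ address: address });
  }
});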
I am using firebase for data storage. The data structure is like this:
products: {
  product1: {
    name: "chocolate"
  },
  product2: {
    name: "chochocho"
  }
}
I want to perform an auto-complete operation on this data, and normally I'd write the query like this:
"select name from PRODUCTS where productname LIKE '%" + keyword + "%'";
So, in my situation, if the user types "cho", I need to get both "chocolate" and "chochocho" as results. I thought about fetching all the data under the "products" block and then running the query on the client, but this may need a lot of memory for a big database. So, how can I perform the SQL LIKE operation?
Thanks
Update: With the release of Cloud Functions for Firebase, there's another elegant way to do this as well by linking Firebase to Algolia via Functions. The tradeoff here is that the Functions/Algolia is pretty much zero maintenance, but probably at increased cost over roll-your-own in Node.
There are no content searches in Firebase at present. Many of the more common search scenarios, such as searching by attribute will be baked into Firebase as the API continues to expand.
In the meantime, it's certainly possible to grow your own. However, searching is a vast topic (think creating a real-time data store vast), greatly underestimated, and a critical feature of your application, not one you want to build ad hoc or even depend on someone like Firebase to provide on your behalf. So it's typically simpler to employ a scalable third party tool to handle indexing, searching, tag/pattern matching, fuzzy logic, weighted rankings, et al.
The Firebase blog features a blog post on indexing with ElasticSearch which outlines a straightforward approach to integrating a quick, but extremely powerful, search engine into your Firebase backend.
Essentially, it's done in two steps. Monitor the data and index it:
var Firebase = require('firebase');
var ElasticClient = require('elasticsearchclient');
// initialize our ElasticSearch API
var client = new ElasticClient({ host: 'localhost', port: 9200 });
// listen for changes to Firebase data
var fb = new Firebase('<INSTANCE>.firebaseio.com/widgets');
fb.on('child_added', createOrUpdateIndex);
fb.on('child_changed', createOrUpdateIndex);
fb.on('child_removed', removeIndex);
function createOrUpdateIndex(snap) {
  client.index(this.index, this.type, snap.val(), snap.name())
    .on('data', function(data) { console.log('indexed ', snap.name()); })
    .on('error', function(err) { /* handle errors */ });
}

function removeIndex(snap) {
  client.deleteDocument(this.index, this.type, snap.name(), function(error, data) {
    if (error) console.error('failed to delete', snap.name(), error);
    else console.log('deleted', snap.name());
  });
}
Query the index when you want to do a search:
<script src="elastic.min.js"></script>
<script src="elastic-jquery-client.min.js"></script>
<script>
ejs.client = ejs.jQueryClient('http://localhost:9200');
client.search({
index: 'firebase',
type: 'widget',
body: ejs.Request().query(ejs.MatchQuery('title', 'foo'))
}, function (error, response) {
// handle response
});
</script>
There's an example, and a third party lib to simplify integration, here.
I believe you can do:
admin
.database()
.ref('/vals')
.orderByChild('name')
.startAt('cho')
.endAt("cho\uf8ff")
.once('value')
.then(c => res.send(c.val()));
This will find vals whose names start with cho.
source
The ElasticSearch solution basically binds to add, set and del operations, and offers a get by which you can accomplish text searches.
It then saves the contents in MongoDB.
While I love and recommend ElasticSearch for the maturity of the project, the same can be done without another server, using only the Firebase database.
That's what I mean:
(https://github.com/metaschema/oxyzen)
for the indexing part, basically the function:
JSON-stringifies a document
removes all the property names and JSON syntax to leave only the data (regex)
removes all XML tags (therefore also HTML) and attributes (remember the old guidance, "data should not be in XML attributes") to leave only the pure text, if XML or HTML was present
removes all special chars and substitutes them with spaces (regex)
substitutes all instances of multiple spaces with one space (regex)
splits on spaces and cycles: for each word it adds a ref to the document in an index structure in your db that basically contains children named after words, each with children named with an escaped version of "ref/inthedatabase/dockey"
then inserts the document as a normal firebase application would do
In the oxyzen implementation, subsequent updates of the document actually read the index and update it, removing the words that no longer match and adding the new ones.
Subsequent searches for a word can directly find documents in that word's child. Multiple-word searches are implemented using hits.
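A minimal sketch of that word-indexing idea with the legacy Firebase client (the index layout and escaping scheme here are simplified assumptions):

function indexDocument(rootRef, docKey, doc) {
  var text = JSON.stringify(doc)
    .replace(/"[^"]*":/g, ' ')   // drop property names, keep only the data
    .replace(/<[^>]*>/g, ' ')    // drop xml/html tags and attributes
    .replace(/[^\w\s]/g, ' ')    // substitute special chars with spaces
    .replace(/\s+/g, ' ')        // collapse multiple spaces into one
    .trim()
    .toLowerCase();
  text.split(' ').forEach(function (word) {
    // index/<word>/<escaped doc ref> = true
    rootRef.child('index').child(word)
      .child(encodeURIComponent('docs/' + docKey)).set(true);
  });
  // then insert the document as a normal firebase application would do
  rootRef.child('docs').child(docKey).set(doc);
}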
SQL"LIKE" operation on firebase is possible
let node = await db.ref('yourPath').orderByChild('yourKey').startAt('!').endAt('SUBSTRING\uf8ff').once('value');
This query works for me; it behaves like the statement below in MySQL:
select * from StoreAds where University like 'ps%';
query = database.getReference().child("StoreAds").orderByChild("University").startAt("ps").endAt("ps\uf8ff");
I want to extract some data from the database without refreshing a page. What is the best possible way to do this?
I am using the following XMLHttpRequest function to get some data (shopping cart items) from the cart.php file. This file performs various functions based on the option value.
For example: option=1 means get all the shopping cart items. option=2 means delete all shopping cart items and return string "Your shopping cart is empty.". option=3, 4...and so on.
My XHR function:
function getAllCartItems()
{
    var allCartItems;
    if (window.XMLHttpRequest)
    {
        allCartItems = new XMLHttpRequest();
    }
    else
    {
        // fallback for old versions of Internet Explorer
        allCartItems = new ActiveXObject("Microsoft.XMLHTTP");
    }
    allCartItems.onreadystatechange = function()
    {
        if (allCartItems.readyState == 4 && allCartItems.status == 200)
        {
            document.getElementById("cartmain").innerHTML = allCartItems.responseText;
        }
    };
    var linktoexecute = "cart.php?option=1";
    allCartItems.open("GET", linktoexecute, true);
    allCartItems.send();
}
cart.php file looks like:
$link = mysql_connect('localhost', 'user', '123456');
if (!$link)
{
    die('Could not connect: ' . mysql_error());
}
mysql_select_db('projectdatabase');

$option = isset($_GET['option']) ? (int) $_GET['option'] : 0; // read the requested action

if ($option == 1) // get all cart items
{
    $sql = "select itemid from cart where cartid=".$_COOKIE['cart'].";";
    $result = mysql_query($sql);
    $num = mysql_num_rows($result);
    while ($row = mysql_fetch_array($result))
    {
        echo $row['itemid'];
    }
}
else if ($option == 2)
{
    // do something
}
else if ($option == 3)
{
    // do something
}
else if ($option == 4)
{
    // do something
}
My questions:
Is there any other way I can get the data from the database without refreshing the page?
Are there any potential threats (hacking, server utilization, performance etc.) in my way of doing this? I believe a hacker could flood my server by sending unnecessary requests using option=1, 2, 3 etc.
I don't think a Denial of Service attack would be your main concern here. That concern would be just as valid if cart.php were to return HTML. No, exposing a public API for use via AJAX is pretty common practice.
One thing to keep in mind, though, is the ambiguity of both listing and deleting items via the same URL. It would be a good idea to (at the very least) separate those actions (or "methods") into distinct URLs (for example: /cart/list and /cart/clear).
If you're willing to go a step further, you should consider implementing a "RESTful" API. This would mean, among other things, that methods can only be called using the correct HTTP verb. You've possibly only heard of GET and POST, but there's also PUT and DELETE, amongst others. The reason behind this is to make the methods idempotent, meaning that they do the same thing again and again, no matter how many times you call them. For example, a GET call to /cart will always list the contents and a DELETE call to /cart will always delete all items in the cart.
Although it is probably not practical to write a full REST API for your shopping cart, I'm sure some of the principles may help you build a more robust system.
Some reading material: A Brief Introduction to REST.
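As a rough sketch, such a verb-based mapping might look like this (the paths are only an example):

GET    /cart  -> list the items in the cart (idempotent)
DELETE /cart  -> empty the cart (idempotent: the cart stays empty no matter how often you call it)
POST   /cart  -> add an item (not idempotent: each call adds another item)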
Ajax is the best option for this purpose.
Sending and receiving data with Ajax is best done using XML, so using web services is what I'd recommend. You can use a SOAP or REST web service to fetch data from a database on request.
You can use this link to understand more about web services.
Plenty of tutorial articles are available on the Internet.
You're using an XMLHttpRequest object, so you don't refresh your page (that's AJAX), or there's something you haven't told us.
If a hacker wants to DDoS your website or your database, he can use any of its pages. As long as you don't transfer strings between the client and the server that end up in your SQL requests, you should be OK.
I'd warn you about displaying the raw text response. I encourage you to format your response as XML or JSON so you can correctly locate the objects that need to be inserted into the DOM, and to return an error tag so errors are handled properly (a die("i'm your father luke") won't help any of your users) and displayed in a special area of your web page.
First, you should consider separating the different parts of your application. Having one general file that performs every task related to carts violates all sorts of software design principles.
Second, the first vulnerability is SQL injection. You should NEVER just concatenate the input into your SQL.
Suppose I posted 1; TRUNCATE TABLE cart;. Then your SQL would look like:
select itemid from cart where cartid=1; TRUNCATE TABLE cart;
which first selects the item in question, then ruins your database.
You should write something like this, escaping or casting the cookie value before it reaches the SQL:
$item = mysql_real_escape_string($_COOKIE['cart'], $link);
// or, since cartid is numeric, simply cast it:
$item = (int) $_COOKIE['cart'];
To avoid refreshing, you can put a link on your page that calls your AJAX function. Something like: <a href="#" onclick="getAllCartItems(); return false;">Refresh</a>
In terms of security, it always pays to introduce a database layer concerned with just your data, regardless of your business logic, and then add a service layer that depends on the database layer and provides facilities to perform business-layer actions.
You should also take @PPvG's recommendation into account and, using Apache's mod_rewrite or similar facilities, make your URLs more meaningful.
Another note: try to encapsulate your data in JSON or XML format. I'd recommend using json_encode() on the server side and JSON.parse() on the client side. This makes for safer, more structured delivery.
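For example, a minimal sketch of that round trip, assuming cart.php responds with something like echo json_encode(array('items' => $items)):

allCartItems.onreadystatechange = function () {
    if (allCartItems.readyState == 4 && allCartItems.status == 200) {
        var data = JSON.parse(allCartItems.responseText);
        // build the markup from data.items instead of injecting raw HTML
        document.getElementById("cartmain").textContent = data.items.join(", ");
    }
};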