I'm trying to build a live search feature where a user can search for other users by username and see the matching results on the page. So far I have only come across resources that show how to do this with PHP and MySQL. How can this be achieved with Node.js and MongoDB? I'm using socket.io in my project; could that be used here?
Thanks for the help, and if any of my code is needed, just let me know!
Basically, you watch the username input field for changes (using the keyup event) and send the value of the input field to the server via socket.io. On the server, when you receive the value in the event, you query the database (Model.findOne in Mongoose) with that value and return the user if one exists. The key here is to have the database index the username for a faster search, either by making the username field unique in Mongoose or by creating the index manually.
Example:
Frontend with jQuery:
var socket = io(); // assumes the socket.io client script is loaded on the page

$(document).ready(function() {
  var username = $('#username');
  username.keyup(function() {
    var value = username.val();
    socket.emit('find_user', value);
  });
});

socket.on('find_user_result', function(user) {
  // handle the result here
});
Backend with Mongoose:
socket.on('find_user', function(value) {
  User.findOne({ username: value }, function(err, user) {
    if (err) throw err;
    if (!user) socket.emit('find_user_result', {});
    else socket.emit('find_user_result', user);
  });
});
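For the index mentioned above, a minimal Mongoose schema sketch (the schema is an assumption for illustration; only the username field comes from the example):

var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
  // unique: true makes Mongoose build a unique index on username,
  // which MongoDB can use for fast exact-match lookups
  username: { type: String, unique: true }
});

// Alternatively, create the index manually:
// userSchema.index({ username: 1 });

var User = mongoose.model('User', userSchema);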
For this requirement, it doesn't matter much which backend you use.

On the backend, just create a REST API to handle the search request (if you want to use Node.js, you can refer to this article: https://scotch.io/tutorials/build-a-restful-api-using-node-and-express-4).

On the frontend, a plain XHR request is enough for live search; no socket is needed. Each time the user types in the input, detect the change event and send a search request to the backend API (with a pure JavaScript XHR request, or jQuery's ajax module, etc.), then read the result from the response and print it to the screen.

Once you have this working, you can improve search performance on the frontend by limiting how often requests are sent: instead of firing a request on every keypress, send one only after the user has paused for some amount of time, e.g. 200ms. This technique is called "debouncing".
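Here is a minimal debounce sketch in plain JavaScript; the input id and the search endpoint are assumptions for illustration, not from the original code:

function debounce(fn, delay) {
  var timer = null;
  return function() {
    var context = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function() {
      fn.apply(context, args); // runs only after `delay` ms with no new keypress
    }, delay);
  };
}

var input = document.getElementById('username'); // hypothetical input field
input.addEventListener('keyup', debounce(function() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/search?username=' + encodeURIComponent(input.value)); // hypothetical endpoint
  xhr.onload = function() {
    // print the results to the screen here
  };
  xhr.send();
}, 200));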
I need to implement an "expensive" API endpoint. Basically, the user/client needs to be able to create a "group" of existing users.

This "create group" API needs to check that each user fulfills the criteria, i.e. all users in the same group must be from the same region, of the same gender, within an age group, etc. This operation can be quite expensive, especially since there is no limit on how many users can be in one group, so it's possible that the client requests a group of 1000 users, for example.

My idea is that the endpoint will just create a database entry and mark the "group" as pending while the checking process is still happening. After it completes, the group status is updated to "completed" or "error" with an error message, and the client periodically fetches the status while it's still pending.
My implementation idea is something along this line
const createGroup = async (req, res) => {
  const { ownerUserId, userIds } = req.body;

  // This creates a database entry for the group with "pending" status and returns the primary key
  const groupId = await insertGroup(ownerUserId, 'pending');

  // This is an expensive function that does checking over the network and could take ~0.5s per user id.
  // It should keep running after this endpoint sends the response to the client,
  // which is why the promise is deliberately not awaited.
  checkUser(userIds)
    .then((isUserIdsValid) => {
      if (isUserIdsValid) {
        updateGroup(groupId, 'success');
      } else {
        updateGroup(groupId, 'error');
      }
    })
    .catch((err) => {
      console.error(err);
      updateGroup(groupId, 'error');
    });

  // The client receives a groupId with which to check periodically, via a separate API, whether the group is ready
  res.status(200).json({ groupId });
};
My question is: is this a good idea? Am I missing something important that I should consider?
Yes, this is the standard approach to long-running operations. Instead of offering a createGroup API that creates and returns a group, think of it as having an addGroupCreationJob API that creates and returns a job.
Instead of polling (periodically fetching the status to check whether it's still pending), you can use a notification API (events via websocket, SSE, webhooks etc) and even subscribe to the progress of processing. But sure, a check-status API (via GET request on the job identifier) is the lowest common denominator that all kinds of clients will be able to use.
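As a minimal sketch of such a check-status endpoint (Express is assumed, and getGroup is a hypothetical database helper, not from the original code):

app.get('/groups/:groupId', async (req, res) => {
  const group = await getGroup(req.params.groupId); // hypothetical lookup by primary key
  if (!group) return res.status(404).json({ error: 'Group not found' });
  // status is one of 'pending' | 'success' | 'error', as set by the background check
  res.json({ groupId: req.params.groupId, status: group.status });
});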
Did I not consider something important?
Failure handling gets much more complicated. Since you no longer create the group in a single transaction, you might find your application left in an intermediate state, e.g. when the service crashes (due to something unrelated) during the checkUser() call. You'll need something to ensure that there are no pending groups in your database for which no actual creation process is running. You'll need to give users the ability to retry a job: will insertGroup work if there is already a group with the same identifier in the error state? If you separate the groups and the jobs into independent entities, do you need to ensure that no two pending jobs are trying to create the same group? Last but not least, you might want to allow users to cancel a currently running job.
I have a Node application that needs to be integrated with vtiger, and I have successfully been able to create, delete, and retrieve information from my vtiger instance. If I try to update, however, I get a "Permission to perform the operation is denied for id" error.

I have tried a couple of different methods, i.e. different ways of performing the request. To test it, at the moment I am pulling all of the data for an id (result in the code below), changing one value, and then calling the update using:
var requestJS = require('request');

// The real result comes straight from the CRM, but this is an example of what is being passed through
var result = {
  'lastname': 'Updated last name',
  'id': '12x10',
  'assigned_user_id': '19x5'
};

var url = VT_URL + '?operation=update&sessionName=' + session + '&element=' + encodeURIComponent(JSON.stringify(result));
requestJS.post(url, function(err, res, body) {
  // stuff here
});
I have also tried attaching the result as the body, and not using the encodeURIComponent function. Always the same error.

Here VT_URL is my vtiger URL and session is my session id retrieved from login.
I am using the credentials of an admin so I should have read/write access to contacts in the CRM instance.
I have been stuck on this for a while and can't find an answer.
It's not really an answer, but when I changed to a new vtiger instance it all worked fine. So I'm assuming it had more to do with the installation of vtiger than with an error in the code.

I thought I would keep this question here though, because I've seen it around a fair bit.
Can you check on your previous vtiger instance whether there is an entry (in the database) for your module (I assume Contacts) in the vtiger_ws_entity table? If so, is its ID 12?
I'm trying to build a contacts list app to teach myself reactjs, and I am learning fluxible now.
1) A new contact is entered. Upon submit, a newContact object is created that holds:
firstName
lastName
email
phone1 (can add up to 3 phones)
image (right now it's just a text field; you can add a URL)
2) This newContact object is sent as a payload to my createNewContactAction, and the dispatcher is "alerted" that a new contact has been made.

3) At this point, ContactStore comes into play. This is where I am stuck.
I have gotten my object to this point. If I want to save this object to my database, is this where I would do that?
I'm a bit confused as to what to do next. My end goal would be to show all the contacts in a list, so I need to add each new contact somewhere so I can pull all of them.
Can someone point me in the right direction?
I would make a request to the server to save the newContact object before calling the createNewContactAction function. If the save is successful, then you can call the createNewContactAction to store the newContact object in the ContactStore. If it isn't successful, then you can do some error handling.
To understand why I think this pattern is preferable in most cases, imagine that you saved the contact in the store and then tried to save it in the database, but then the attempt to save in the database was unsuccessful for some reason. Now the store and database are out of sync, and you have to undo all of your changes to the store to get them back in sync. Making sure the database save is successful first makes it much easier to keep the store and database in sync.
There are cases where you might want to stash your data in the store before the database, but a user submitting a form with data you want to save in the database likely isn't one of those cases.
I like to create an additional file to handle my API calls, since having all of your XHR calls in your store can clutter things very quickly. I usually name it after my store, so in this case something like "contacts-api.js". In the API file I export an object with all of the API methods I need, e.g. using superagent for XHR requests:
var request = require('superagent');

module.exports = {
  createNewContact: function(data, callback) {
    request
      .post('/url')
      .send(data)
      .end(function(err, res) { // superagent calls back with (err, res)
        if (callback && typeof callback === 'function') {
          callback(err, res);
        }
      });
  }
};
I usually end up creating three actions per request. The first one triggers the initial request with the data, the next signals success with the results, and the last one is for errors.
Your store methods for each action might end up looking something like this:
onCreateNewContactRequest: function(data) {
  api.createNewContact(data, function(err, res) {
    if (err) {
      ContactsActions.createNewContactError(err);
    } else {
      ContactsActions.createNewContactSuccess(res);
    }
  });
},

onCreateNewContactSuccess: function(res) {
  // save the data to the store
  this.newContact = res;
},

onCreateNewContactError: function(err) {
  // save the error to the store
  this.error = err;
}
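The shape of the three action creators depends on your flux implementation; as a rough sketch, the actions assumed by the store handlers above might look like this (the dispatcher wiring is an assumption, not from the original code):

var ContactsActions = {
  // 1) trigger the initial request with the form data
  createNewContactRequest: function(data) {
    dispatcher.dispatch({ type: 'CREATE_NEW_CONTACT_REQUEST', data: data });
  },
  // 2) the request succeeded; deliver the saved contact
  createNewContactSuccess: function(res) {
    dispatcher.dispatch({ type: 'CREATE_NEW_CONTACT_SUCCESS', res: res });
  },
  // 3) the request failed; deliver the error
  createNewContactError: function(err) {
    dispatcher.dispatch({ type: 'CREATE_NEW_CONTACT_ERROR', err: err });
  }
};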
DB calls should ideally be made by action creators. Stores should only contain data.
I don't know the best way to handle huge Mongo databases with Meteor.

In my example I have a collection of addresses with their geo locations. (The code snippets are just examples.)
Example:
{
  address: 'Some Street',
  geoData: [lat, long]
}
Now I have a form where the user can enter an address to get the geo data. Very simple. But the problem is that the collection with the geo data has millions of documents in it.

In Meteor you have to publish a collection on the server side and subscribe on the client and server side. So my code is like this:
// Client / Server
Geodata = new Meteor.Collection('geodata');

// Server side
Meteor.publish('geodata', function() {
  return Geodata.find();
});

// Client / Server
Meteor.subscribe('geodata');
Now a person has filled in the form, and I get the data. After that, I search for the right document to return. My method is this:
// Server / Client
Meteor.methods({
  getGeoData: function(address) {
    return Geodata.find({ address: address });
  }
});
The result is the right one, and this works. But my question now is:

What is the best way to handle an example like this with a huge database? The problem is that Meteor saves the whole collection in the user's cache once I subscribe to it. Is there a way to subscribe to just the results I need, and to overwrite that subscription when the user reuses the form? Or is there another good way to preserve performance with huge databases used the way they are in my example?
Any ideas?
Yes, you can do something like this:
// client
Deps.autorun(function () {
  // re-subscribes every time the 'center' Session variable changes
  Meteor.subscribe("locations", Session.get('center'));
});

// server
Meteor.publish('locations', function (centerPoint) {
  // sanitize the input
  check(centerPoint, { lat: Number, lng: Number });
  // return a limited number of documents relevant to our app;
  // note that $near must be nested under the location field being queried
  return Locations.find({ geoData: { $near: centerPoint, $maxDistance: 500 } }, { limit: 50 });
});
Your clients would ask only for some subset of the data at a time, i.e. you don't need the entire collection most of the time; usually you need some specific subset, and you can ask the server to keep you up to date on that particular subset only. Bear in mind that the more distinct "publish requests" your clients make, the more work there is for your server to do, but that's how it is usually done (this is the simplified version).

Notice how we subscribe in a Deps.autorun block, which will resubscribe whenever the center Session variable (which is reactive) changes. So your client can check out a different subset of the data simply by changing this variable.
When it doesn't make sense to ship your entire collection to the client, you can use methods to retrieve data from the server.
In your case, you can call the getGeoData function when the form is filled out and then display the results after the method returns. Try taking the following steps:
Clearly divide your client and server code into their respective client and server directories if you haven't already.
Remove the geodata subscription on the server (only clients can activate subscriptions).
Remove the geodata publication on the server (assuming this isn't needed anymore).
Define the getGeoData method only on the server. It should return an object, not a cursor, so use findOne instead of find (see the sketch below).
In your form's submit event, do something like:
Meteor.call('getGeoData', address, function(err, geoData) {
  Session.set('geoDataResult', geoData);
});
You can then display the geoDataResult data in your template.
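Putting steps 3 to 5 together, the server-only method might look like this minimal sketch (it reuses the Geodata collection from the question; check() validates the input as in the other answer):

// server/methods.js
Meteor.methods({
  getGeoData: function(address) {
    check(address, String); // sanitize the input
    // findOne returns a plain document (not a cursor), which the method can return to the client
    return Geodata.findOne({ address: address });
  }
});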
I have been trying to implement a RESTful API with Node.js, and I use Mongoose (MongoDB) as the database backend.

The following example code registers multiple users with the same username when requests are sent at the same time, which is not what I want, even though I tried to add a check.

I know this happens because of the asynchronous nature of Node.js, but I could not find a way to do this properly. It looks like the findOne method returns immediately, causing registerUser to return, and then another request is processed.

By the way, I don't want to check for existing users with a separate API function; I need the check at the registration stage. Is there any way to do this?
Controller.prototype.registerUser = function (req, res) {
  Users.findOne({ 'user_name': req.body.user_name }, function(err, user) {
    if (!user) {
      new User({ user_name: req.body.user_name }).save(function(err) {
        if (!err) {
          res.send("User saved");
        } else {
          res.send("DB Error: Could not save user!");
        }
      });
    } else {
      res.send("User exists");
    }
  });
};
You should consider setting the user_name to be unique in the Schema. That would ensure that the user_name stays unique even if simultaneous requests are made to set an identical user name.
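As a minimal sketch of that (the schema itself is an assumption; the field name comes from the question, and error code 11000 is MongoDB's duplicate-key error for unique index violations):

var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
  user_name: { type: String, required: true, unique: true } // unique index enforced by MongoDB
});
var User = mongoose.model('User', userSchema);

// The controller can then rely on the index instead of a racy findOne check:
new User({ user_name: req.body.user_name }).save(function(err) {
  if (err && err.code === 11000) return res.send("User exists"); // duplicate key
  if (err) return res.send("DB Error: Could not save user!");
  res.send("User saved");
});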
Yes, the reason this is happening is, as you suspected, that multiple requests can execute the code simultaneously, and therefore Users.findOne can find no existing user multiple times. Incidentally, this can happen with other stacks as well, even ones that use one thread per request.
To solve this, you need a way to ensure that just one user is being worked on at a time. You can accomplish this by adding all registerUser requests to a queue, pulling them off the queue one by one, and calling res.send only after each request has been processed from the queue. A rough sketch of this follows.
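This toy sketch serializes registrations by chaining them onto a promise (an assumption for illustration; it only works within a single Node process):

var queue = Promise.resolve(); // tail of the chain; one registration runs at a time

Controller.prototype.registerUser = function (req, res) {
  queue = queue.then(function() {
    return new Promise(function(resolve) {
      Users.findOne({ 'user_name': req.body.user_name }, function(err, user) {
        if (err || user) {
          res.send(err ? "DB Error: Could not save user!" : "User exists");
          return resolve();
        }
        new User({ user_name: req.body.user_name }).save(function(err) {
          res.send(err ? "DB Error: Could not save user!" : "User saved");
          resolve(); // let the next queued registration proceed
        });
      });
    });
  });
};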
Alternatively, you could keep a local array of user names, and each time a new request comes in, check whether the name is already in the array. If it isn't, add it to the array and work on it; if it is, send the response "User exists". Then, once the user has been successfully created, remove the name from the array. (I haven't thought this one through 100%, but I think it should work as well; a sketch follows.)
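A rough sketch of that second idea, with the caveat that an in-process lookup table only guards a single Node process (names and responses follow the question's code):

var pendingNames = {}; // user names currently being registered in this process

Controller.prototype.registerUser = function (req, res) {
  var name = req.body.user_name;
  if (pendingNames[name]) return res.send("User exists"); // already being worked on
  pendingNames[name] = true;

  Users.findOne({ 'user_name': name }, function(err, user) {
    if (err || user) {
      delete pendingNames[name];
      return res.send(err ? "DB Error: Could not save user!" : "User exists");
    }
    new User({ user_name: name }).save(function(err) {
      delete pendingNames[name]; // release the guard either way
      res.send(err ? "DB Error: Could not save user!" : "User saved");
    });
  });
};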