Parse-Server Make HTTP GET Request in Cloud Code - javascript

I was wondering what the best way to go about this would be and if this is even possible.
Basically I'm using Parse Server for my backend, and I want to query an API every 5 minutes for new data to populate my tables with.
I was thinking of having a Cloud Code function that the client could call; it would check the last-updated time and, if more than 5 minutes had passed, query the API to get new data and populate the tables.
However, I'm not sure it's even possible to make an HTTP GET request in Cloud Code, and if it is, I'm not sure how to do it.
I'm also wondering whether this is the best way to solve the problem at hand. If not, what would be a better alternative? The API query is very quick and returns some basic JSON data.

The simplest solution would be to create a Parse Cloud Function that makes the call to the external API, and call that function every 5 minutes from the client.
It is possible to make external API calls from Parse Cloud Code using Parse.Cloud.httpRequest or any other npm package. Since Cloud Code runs on the server, you can use any Node package, not just the ones that work in the browser.
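A minimal sketch of such a function (the function name and target URL are placeholders):

Parse.Cloud.define('fetchExternalData', async (request) => {
  // Parse.Cloud.httpRequest returns a promise that resolves to the HTTP response
  const response = await Parse.Cloud.httpRequest({
    method: 'GET',
    url: 'https://api.example.com/data' // placeholder URL
  });
  // response.data holds the parsed body when the API returns JSON
  return response.data;
});

The client would then invoke it with Parse.Cloud.run('fetchExternalData').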
Another approach would be to create a Parse Cloud Job that makes the external API call, have it run on the server at a 5-minute interval, and update a Parse table with the data it obtains. Then you can use Live Query to get the up-to-date data from the Parse API on the client side.
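A sketch of the job variant, assuming a hypothetical ApiData class and a placeholder URL (note that Parse Server doesn't schedule jobs by itself; you trigger them from the dashboard or an external cron hitting the jobs endpoint):

Parse.Cloud.job('refreshApiData', async (request) => {
  const response = await Parse.Cloud.httpRequest({
    method: 'GET',
    url: 'https://api.example.com/data' // placeholder URL
  });
  // Write each returned record into a class the client can watch via Live Query
  for (const item of response.data) { // assumes the API returns a JSON array
    const row = new Parse.Object('ApiData'); // hypothetical class name
    row.set('payload', item);
    await row.save(null, { useMasterKey: true });
  }
});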

You might be able to find a node module that is a wrapper for the API you're trying to use, even if it isn't an official one.
You'll have to set up what's called a 'cron job' to run your background job or call the cloud function that refreshes the data. But you might also be able to set up webhooks for this other service, so that any time it receives updated information, it triggers the webhook on your server and you can add data in real time instead of on a fixed interval.
What's the API?

Related

Node.js GET REST API and frequent access to MongoDB database

I'm implementing a web app using the MERN framework (MongoDB, Express and Node.js for the back-end, React for the front-end).
In a section of my web app, I need to access a collection of the Mongo database very frequently (every 50 ms).
I need to synchronize this data with a video player.
I'd like to know which is the best way to handle this situation.
The options I came up with right now are:
Send a single GET request to the collection and save the whole content of that collection in a front-end variable (but I think this is the worst option, since the collection is 350 MB)
Send a GET request every 50 ms
Send a GET request every N seconds based on the current time of the video player, and save the content of the request dynamically in a variable of the front-end
I'm sure there are better ways to handle this situation.
I think the correct approach here is to open a socket connection in your application, so the server can push updates to the client instead of the client polling the server. Here is the link: socket.io
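A minimal sketch of the push approach with socket.io (the event names, database names, and the 't' timestamp field are all assumptions):

const { MongoClient } = require('mongodb');
const { Server } = require('socket.io');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
  await client.connect();
  const frames = client.db('app').collection('frames'); // hypothetical db/collection names

  const io = new Server(3000);
  io.on('connection', (socket) => {
    // The client reports its current playback time; the server replies with
    // just the window of documents around that position, not the whole 350 MB
    socket.on('videoTime', async (timeMs) => {
      const slice = await frames
        .find({ t: { $gte: timeMs, $lt: timeMs + 5000 } })
        .toArray();
      socket.emit('dataSlice', slice);
    });
  });
}

main().catch(console.error);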

Storing and updating API Data in MongoDB

I am working on a web app where I have made a call to an API and stored the data in MongoDB. The data gets updated daily, so I need to be able to refresh it daily by clicking a button in the admin site. What is the best way to approach this?
I am new to using databases, so I do not know the best approach. The reason I want to store the data in a database is so I can serve it through Redux or the Context API; that way, when someone visits a page the data is available faster, instead of making (and wasting) a new API call on every visit.
My database contains about 630 documents at a time.
Issue:
I need to update the 630 documents in my database to match the 630 documents coming from the API, which change daily, so I need to figure out what MongoDB query will accomplish this.
You can use node-schedule.
It's very much like a cron job, but it runs inside the Node application. Make sure the scheduler runs on a 24-hour interval and put the database operation there.
Note that Node Schedule is designed for in-process scheduling, i.e. scheduled jobs will only fire as long as your script is running, and the schedule will disappear when execution completes. If you need to schedule jobs that will persist even when your script isn't running, consider using actual cron.
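A minimal sketch (the cron expression fires every day at midnight; updateDocuments is a placeholder for your own refresh logic):

const schedule = require('node-schedule');

// '0 0 * * *' is cron syntax for every day at 00:00
schedule.scheduleJob('0 0 * * *', async () => {
  try {
    await updateDocuments(); // placeholder: refresh the 630 documents here
  } catch (err) {
    console.error('Daily refresh failed:', err);
  }
});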
I ended up finding a way to solve my issue.
Previously I had updated documents one at a time, using something along the lines of db.collection.update(<query>, <update>). The issue I was facing was that I needed to update all the documents in the collection at once. By using db.collection.remove({}) I was able to remove all the documents in the collection, and then I used db.collection.create(myData) to add the new, updated data.
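For reference, remove() is deprecated in recent MongoDB drivers; the same wipe-and-reload pattern with the current Node driver would look something like this (the connection string and collection names are placeholders):

const { MongoClient } = require('mongodb');

async function replaceAll(myData) { // myData: the fresh array of 630 documents from the API
  const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
  await client.connect();
  const col = client.db('app').collection('events'); // hypothetical names
  await col.deleteMany({});      // modern equivalent of remove({})
  await col.insertMany(myData);  // bulk-insert the updated data
  await client.close();
}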

API call request limit of 1 call each hour

I use an API in Switzerland which only allows me to call it once every hour in production.
I don't need more than one request each week, since it's event data, but I don't know what I have to do so that 200+ users a day can use this data.
Do I have to save the data somewhere like Firebase, or are there services for this? I'm very new to this field. Could you give me a hint?
Building on top of what Dr. cool said, you'll most likely want to use cron jobs: http://code.tutsplus.com/tutorials/scheduling-tasks-with-cron-jobs--net-8800
Also keep in mind that some APIs do not allow you to store the data they provide on your own server. Make sure you read the API provider's terms of use before doing so.
It's better to have a program on the server that can run once a week and load data from the API. This data should be saved in a database. Then when one of your users needs the data, it's ready to load from your database without hitting the API limit.
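A rough sketch of that arrangement with Express and node-schedule, keeping the fetched data in memory for simplicity (in production you would write it to a database such as Firebase or MySQL; the URL, route, and schedule are placeholders, and the built-in fetch assumes Node 18+):

const express = require('express');
const schedule = require('node-schedule');

let cachedEvents = []; // placeholder in-memory store

// Refresh once a week (Sunday 03:00), far below the one-call-per-hour limit
schedule.scheduleJob('0 3 * * 0', async () => {
  const res = await fetch('https://api.example.ch/events'); // placeholder URL
  cachedEvents = await res.json();
});

const app = express();
// Every user reads the stored copy; the external API is never hit per request
app.get('/events', (req, res) => res.json(cachedEvents));
app.listen(3000);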
Yes, Firebase is a great option. Or you can use MySQL or other server-side databases.

How to Perform a Load Test on a Node.js Application

Please, I need expert advice on how to efficiently load test a finished Node.js application.
What I want to do is:
1. Run the test to simulate, for example, 100 users all inserting data into a MongoDB database
2. 100 users retrieving records from the database
3. 100 users deleting from the database; I want to check how the system performs in situations like that.
I read about loadtest on npm and it seems like a good candidate, but I don't know if I can use it to pass data (POST/GET requests) to the database to actually see how the system responds when 100 users all post real data.
I gather that loadtest helps with checking response times, e.g. 20 seconds with maybe 40 concurrent requests, which is my basic understanding of the module; I don't know if it has other functionality that covers what I'm trying to do.
Any advice or clue on how to go about this would be appreciated, because I'd like to avoid reinventing the wheel if possible.
Thank you
You can use the loadtest module.
But first you should define routes for your CRUD operations and then call:
loadtest -c 10 --rps 100 http://example.com/api/collection
where --rps means requests per second.
You can get it from: https://github.com/alexfernandez/loadtest
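loadtest also has a programmatic API, so you can POST real payloads as part of the run. A sketch using the long-standing callback style (newer versions also offer a promise-based API; the URL and body are placeholders):

const loadtest = require('loadtest');

const options = {
  url: 'http://localhost:3000/api/collection', // placeholder route
  concurrency: 100,                 // ~100 simulated users at once
  maxRequests: 1000,
  method: 'POST',
  contentType: 'application/json',
  body: { name: 'test', value: 42 } // placeholder document to insert
};

loadtest.loadTest(options, (error, result) => {
  if (error) return console.error('Load test failed:', error);
  console.log('Mean latency (ms):', result.meanLatencyMs);
});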
You can use Apache JMeter to conduct load tests against:
Web application frontends (record and replay supported)
Web services
MongoDB directly
or any combination of the above.
There's no reason you absolutely need a Node.js utility to load test Node.js apps. In fact, I would suggest a tool written in a language that supports multiple threads. With Node, I'd worry about blocking the event loop while waiting on I/O, and getting inaccurate results.
I recently tried vegeta and am very happy with it. You can use it as-is without having to write any Go code (although it's open source and you can modify it as you please). It supports URLs with headers and POSTs with payloads. It is written in Go, which does support multithreading, and it even reports its own latency. You can get reports in HTML, JSON, and plain text right out of the box.
Finally, vegeta scales well. It seems you'd like to issue POSTs, GETs, and DELETEs. You can spin up one instance to do your GET loads, another for POSTs, and one for DELETEs. Or you can POST a bunch of data and then DELETE it on the same VM. When all of the VMs are done running the tests, you can look at the results separately or aggregate them.
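A typical invocation looks like this (the URL, rate, and duration are placeholders):

echo "GET http://localhost:3000/api/items" | vegeta attack -rate=100 -duration=30s | vegeta report

For POSTs with payloads you list targets in a file, e.g. a targets.txt containing:

POST http://localhost:3000/api/items
Content-Type: application/json
@payload.json

and then run: vegeta attack -rate=100 -duration=30s -targets=targets.txt | vegeta report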

Caching JavaScript API Calls

I'm querying the GitHub API from the client using JavaScript (on this page).
There are 14 API calls each time the page loads, which means I will end up hitting GitHub's API rate limit of 5000 calls per hour pretty fast.
Most caching strategies I've seen assume that you have access to a server, but in my case I'm running a purely static Middleman site.
So my question is this: how can I cache API requests from the client? Are there third-party apps that provide this service?
(Note that my use case is many different clients hitting the page (e.g. it has been linked from Hacker News), not a single client refreshing, so local caching wouldn't really help much.)
Agreed with the Firebase (or other separate data store) alternative from #David, so you can create a persistent cache even though you don't have access to the server where the application sits. It's basically another data store: you can update your logic in Middleman to either make a fresh call to the GitHub API or pull from the data saved in Firebase, based on checks you do when a person visits that Translation page. Check out the logic here
You can cache a single client's page by using local storage or a cookie. This way if the user refreshes, you can have logic to see if you want to query the API again. This would be fine if your user base was small.
This type of caching is typically done on the server since you are limiting yourself to ~357 users per hour at best.
To cache on the client side, store the data in local storage and log the time of the query. Then decide on an interval (let's say 5 minutes). Prior to any refresh or page load, look at the user's local storage and see if the query was made within the last 5 minutes. If it was, read from local storage; if not, query the API again. This only applies per user, but by querying at most every 5 minutes it would allow roughly 30 users per hour.
http://diveintohtml5.info/storage.html
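A sketch of that local-storage check (the key scheme and 5-minute window are arbitrary):

const TTL_MS = 5 * 60 * 1000; // 5-minute freshness window

async function getCached(url) {
  const raw = localStorage.getItem(url);
  if (raw) {
    const { stored, value } = JSON.parse(raw);
    // Serve the cached copy if it is newer than the TTL
    if (Date.now() - stored < TTL_MS) return value;
  }
  const value = await (await fetch(url)).json(); // otherwise hit the API
  localStorage.setItem(url, JSON.stringify({ stored: Date.now(), value }));
  return value;
}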
No server, eh? You could use something like Parse. Make a Parse object, set the key to the particular GitHub API URI, and set the value to something like this:
{
  stored: <Date>,
  value: <stringified JSON returned from GitHub API call>
}
Then when someone hits your client, first call Parse to see if you already have a cached version for that particular API call. If you don't, make the call to GitHub's API and then store the results on Parse (with stored set to the current DateTime so you can check for staleness later).
If Parse does have a cached version stored, check the stored value to see how old it is - if it is stale, make a fresh call to GitHub, and store the results back into Parse. Otherwise, just parse the JSON string from value and you're good to go.
This is assuming that you want individual caching control over the 14 GitHub API calls. If you don't, then just store the compiled calls into one object on Parse under a key like cache.
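A sketch of that lookup with the Parse JS SDK (the 'Cache' class name and the one-hour staleness window are assumptions):

const ONE_HOUR = 60 * 60 * 1000;

async function cachedGitHubCall(apiUrl) {
  const query = new Parse.Query('Cache'); // hypothetical class name
  query.equalTo('key', apiUrl);
  let entry = await query.first();

  // Fresh enough: return the cached copy without touching GitHub
  if (entry && Date.now() - entry.get('stored').getTime() < ONE_HOUR) {
    return JSON.parse(entry.get('value'));
  }

  // Stale or missing: call GitHub, then store the result back on Parse
  const data = await (await fetch(apiUrl)).json();
  entry = entry || new Parse.Object('Cache');
  entry.set({ key: apiUrl, stored: new Date(), value: JSON.stringify(data) });
  await entry.save();
  return data;
}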
