send response to different client on model update - javascript

I have a simple but unusual problem.
I have a client page making requests to a RESTful API. So the client PUTs, no big deal. A form sends JSON to the API, and from there it is stored in the server database.
However, I have a separate page that is supposed to give me the current status from the database, say, a count of rows. I need the count to be updated every time a new row is created. The current solution is to send a GET request every 5 seconds or so, which is obviously not ideal: I don't want to query the database if no new row has been created yet.
So I need a trigger on every row creation. This seems trivial to implement in app/api/Controllers/AppController.js. However, in this file I have a reference to the PUTting client, not the GETting one. How can I reference the GETting client there?

It sounds like you're referring to a push service, which would need to be implemented server-side.
If the PUTting page and the status page are supposed to be part of the same session, you could store a cookie when storing the data. The status page could then poll the cookie so it knows when to make a new AJAX request. You'd still have constant client-side polling, but since the cookie keeps track of whether you need to update or not, it would mean far fewer network calls.
But that would only work if this is supposed to be part of the same session. Otherwise you'd need either constant AJAX polling or some sort of push service.
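For illustration, a minimal sketch of that cookie approach on the status page. The lastUpdate cookie name and the /api/rows/count endpoint are assumptions, not part of the question's app; the idea is that the PUT handler sets the cookie whenever it stores a row:

function readCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? match[1] : null;
}

var lastSeen = readCookie('lastUpdate');

setInterval(function () {
    var current = readCookie('lastUpdate');
    if (current !== lastSeen) {           // something changed since we last looked
        lastSeen = current;
        fetch('/api/rows/count')          // only now do we hit the API
            .then(function (res) { return res.json(); })
            .then(function (data) {
                document.getElementById('rowCount').textContent = data.count;
            });
    }
}, 1000); // reading document.cookie every second is cheap; no network call unless it changed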

Related

Handling database operations with .NET Core MVC GET request

I have a controller like the image below. This controller hides the relevant record in the database when the fetch request is sent. Do I need to use HTTP POST for such operations in this project that I wrote with Entity Framework Core? The problem with this controller is that the admin can execute the JavaScript code fetch('https://localhost:5001/admin/deletepost?delete=url') on any page. As soon as this GET query runs, the relevant record is hidden or deleted from the database. Is it safe as it is? How can I make it more secure? Thank you very much to everyone who replied.
Although this method is only accessible to the admin, could the record being deleted as a result of the admin sending this request cause a problem?
For several reasons, POST is more secure than GET.
GET parameters are passed through the URL. This means that the parameters are stored in the server log and browser history. When using GET, you can also easily change the data submitted to the server because it is in the address bar.
The problem when comparing the security of the two is that POST may deter casual users, but it cannot stop malicious ones. It is very easy to forge a POST request, so it should not be fully trusted either.
The biggest security problem with GET is not the end user's own maliciousness, but a third party sending a crafted link to the end user.
Another point is where to use GET and POST: GET should only be used for operations that do not change data (requesting or reading information), and POST should be used whenever data will be changed.
Some web scanners will automatically follow every link (usually a GET request) but avoid buttons and forms (usually POST requests) precisely so they do not change the database. If a plain link performs a delete operation, automated tools can trigger that deletion simply by crawling the page.
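On the client side, switching the admin page's call from a state-changing GET to a POST could look roughly like this sketch. The /admin/deletepost URL comes from the question; the hidden __RequestVerificationToken input and the RequestVerificationToken header follow ASP.NET Core's anti-forgery defaults, but treat the exact wiring as an assumption to adapt:

// Read the anti-forgery token rendered into the page by the form tag helper.
var token = document.querySelector('input[name="__RequestVerificationToken"]').value;

fetch('https://localhost:5001/admin/deletepost', {
    method: 'POST',                               // no longer triggerable by a crawled link
    headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'RequestVerificationToken': token         // default header checked by the antiforgery service
    },
    body: 'delete=url'
}).then(function (res) {
    if (!res.ok) throw new Error('Delete failed: ' + res.status);
});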

Updating localStorage when there's new data from server?

After logging into an app (React.js), I am caching the member data in localStorage as a lot of my components are using it and request only needs to be done upon log-in, ideally.
However, a few properties in this member object may be changed manually in the backend, so the frontend has no way of knowing whether the member object has changed at all. (Again, ideally, any change to the member object would go through some form submission that directly changes the DB, which could also trigger an update to localStorage, but this is not an option at this time.)
Example scenario: There's a generic form in the app to request additional credits. Customer service receives an email about the request and manually updates the credits for Customer A in the DB. If Customer A doesn't re-login (which is when the GET request for the member data is done), localStorage will still show the old number of credits.
If this is the situation, what's the best way to go about it?
1. Don't store member data in localStorage at all, so as to keep the data fresh. Just call the endpoint whenever it's needed.
2. Use sessionStorage instead?
3. Trigger a refetch when the user refreshes the page / app (although the user may not know that they need to do this to update the data).
Suggestions?
1. Calling the endpoint whenever it's needed is ideal if the data is going to change based on things outside of the user's control.
2. sessionStorage is just localStorage that gets wiped when the browsing session ends, so you'll still have exactly the same issue.
3. This doesn't really solve the problem, and it's typically a bad user experience to require the user to perform regular maintenance tasks in order to use your application to the best of its ability.
I'd go with just getting the data fresh.
At a high level, you have two choices:
Poll (periodically call the back end to refresh the data)
Open a persistent connection (like a web socket) to the server, and have the server push updates to clients.
The latter option would require a lot of changes, and it changes the scalability of your app, so the former choice seems like the most reasonable option for you.
It's smart to keep using localStorage so you have an offline copy of the data and aren't blocking rendering during page load; you can have a background periodic refresh process that doesn't disrupt the user in the meantime. If your data is mirrored in something like Redux or context, then your UI could seamlessly update if/when the data changes.
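A sketch of that background refresh; the /api/member endpoint name and the member:updated event are illustrative, not from the question:

var REFRESH_MS = 5 * 60 * 1000; // poll every 5 minutes

async function refreshMember() {
    var res = await fetch('/api/member', { credentials: 'include' });
    if (!res.ok) return;                        // keep the cached copy on failure
    var fresh = await res.json();
    var serialized = JSON.stringify(fresh);
    if (serialized !== localStorage.getItem('member')) {
        localStorage.setItem('member', serialized);
        // Let interested components (e.g. a context provider) re-render.
        window.dispatchEvent(new CustomEvent('member:updated', { detail: fresh }));
    }
}

refreshMember();                        // once on load
setInterval(refreshMember, REFRESH_MS); // then quietly in the background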
If you do not know when the member has been updated, don't store it. You should query the back end every time you need the member. That is the only way to keep the data in sync with your database.

Writing entire SQL table whenever data changes on a Node server (weird one, so bear with me)

Okay, let me start by saying that I know this is weird. I do.
But here goes:
Let's say I have an SQL database which stores my data. And let's say I don't have a choice in this, it has to be SQL. The application I'm building has somewhere in the region of 100,000 records in its database, and once every single record has been processed by the users of the application, they all go off and get sent to a different application entirely. So for a short period of time, this application will be in use, and then stops being used until the same time next year. While the application is in use, no external sources will be touching the database at all.
When the (Node) server starts up, it loads everything from the database, into an object literal on the server.
The client-side of this application, on a very basic level, makes requests (to an API on the server) for data, and sends updated versions of records back to the server once they've been processed.
So here's where it gets weird: let's say I don't want the client-side application to retrieve records directly from the database, nor do I want it to be able to write to them. So the data from the entire database already exists in memory on the server. There's a module on the server that already handles changing the representation of that data (again, because the client application only interacts with APIs on the server, the database module exists to facilitate this).
Multiple users access the system at once, but due to the way the system works, it is not possible for two users to be sent the same record, so two users will never be sending an update back for the same record (records are processed individually, and sequentially).
So, let's say that I decided that, since I was already managing all of this data in memory on the server, I would just send an updated version of the current data, in its entirety, back to the database, every time it changed.
The question is, where does this rank on the crazy scale?
Performance, writing an entire database rather than single records, would obviously suffer. But, in a database that is only read from once (on start-up of the application), is that even a concern? If every operation other than "Write all the stuff when any of the stuff changes" happened in memory on the server, does it matter how long those updates actually take? If a new update to the database comes in whilst it's being updated, surely SQL will take care of this?
It feels like the correct way to do this of course, is to have each user directly getting their info from the database, and directly making updates to the database too (or at least interacting with API endpoints to make this happen), but, is just...not doing that, utter lunacy?
Like I said, I know it's weird, but other than the fact that "it feels kind of wrong", I'm not sure I'm convinced that it is in fact entirely wrong. So I figured that this place would have an opinion.
The way that I think it currently works is:
[SQL DB] is updated whenever a change happens on {in-memory DB}
{in-memory DB} is updated in various ways based on API calls to the server
The client makes requests for data, and sends updates to data, both of which are processed on the in-memory DB
Multiple requests can happen at the same time from the application, but multiple users can not see the same record, because records are allocated to a given user before they're sent
Multiple updates can come from multiple users, each of which ultimately ends in the entire SQL database being overwritten with the contents of the in-memory DB.
(Note: I'm not saying "is this the best way to do this". I'm just asking, is there a significant argument for caring about the performance of a database being written to, if it's not going to be read from again unless the server needs to be restarted)
What I think that I would do, in this situation, is to add an attribute to each cached record to indicate that the record is "dirty." In other words, that something has been done to it, by someone, since it was originally read from the database.
(You could also add an attribute that indicates that someone "has this particular record 'checked out,'" so that you can be sure that two users are not updating the same record at the same time.)
At some convenient moment, you can then walk through the collection, posting the "dirty" records back to the database. Use an SQL Transaction, not only for efficiency but also to be sure that the final update to the database is atomic.
You will need to be very mindful of the possibility of race-conditions. One possible strategy is to use a Unix timestamp as a "dirty" indicator. A record is selected for posting to the database only if its "dirty-time" is greater-than or equal-to the timestamp when the commit-process was last run.
(And, P.S.: "no, I've seen even 'weirder' things than this, in all my crazy years in this crazy business ...")
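A rough sketch of that dirty-timestamp bookkeeping on the Node side; db.transaction and trx.query are stand-ins for whatever SQL client the app actually uses (knex, pg, etc.), so treat the calls as illustrative:

var cache = new Map(); // id -> { data, dirtyTime }
var lastFlush = 0;

function updateRecord(id, data) {
    cache.set(id, { data: data, dirtyTime: Date.now() }); // mark as dirty
}

async function flushDirty(db) {
    var flushStarted = Date.now();
    var dirty = [...cache.entries()]
        .filter(function (entry) { return entry[1].dirtyTime >= lastFlush; });
    if (dirty.length === 0) return;

    // One transaction: either every dirty record is persisted, or none are.
    await db.transaction(async function (trx) {
        for (var [id, rec] of dirty) {
            await trx.query('UPDATE records SET payload = ? WHERE id = ?',
                            [JSON.stringify(rec.data), id]);
        }
    });
    lastFlush = flushStarted; // records dirtied mid-flush survive to the next run
}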

Posting data dynamically without refreshing the page

I just want to know how I can post data without refreshing the page. For example, on Facebook, when you post a comment it is posted and shown to people without refreshing the page. I do know how to insert data into MySQL without refreshing the page with AJAX, but the question is: how do I insert the data and get it back at the same time, without refreshing the page?
Thank You
OSDM's answer might seem to accomplish the behavior you want, but it isn't the one you're asking about. His answer only provides updates when the current user uploads something, not as new items are created (uploaded) in the system by others.
There are 2 different ways you can accomplish the fetching of new information in the server: AJAX and WebSockets.
AJAX - AJAX stands for Asynchronous JavaScript and XML. It allows you to fetch content from a particular server behind the scenes and then insert the newly fetched data into your page to display it to the user. This, however, has to be manually triggered and therefore doesn't really happen in real time. You could trigger the fetching of data either manually (e.g. with the press of a button) or on a timer (e.g. every 5 seconds, 10 minutes, etc.). It is important to note that it is hard for the server to know what information the page is currently displaying, so each AJAX call usually requests all of the information to be displayed and re-renders the page (deletes the current content and inserts the newly fetched content, including content that was already being displayed).
WebSockets - WebSockets can be thought of as an 'upgraded' HTTP connection. The client and the server establish a connection and are free to send data in either direction. You can set up WebSockets between your server and the website (client) such that whenever new content is inserted into the MySQL database, the server relays the new content to the client. Much like with AJAX, you would interpret the new information and add it to the page. The upside of using WebSockets is that information is fed to you in real time as the server receives it. This means that you only need to fetch data in bulk when you first load the site, and updates are pushed to you as they occur. You do not need to rely on a timer or manual input to fetch data, because you are being fed data rather than fetching it.
Facebook, for example, doesn't rely on a timer or you fetching new data (although that certainly happens if you refresh the page) but each client is listening to the server for new information through web sockets.
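A minimal browser-side sketch of the WebSocket option; the wss://example.com/comments endpoint and the { text: ... } message shape are assumptions for illustration:

var socket = new WebSocket('wss://example.com/comments');

// The server broadcasts each stored comment to every connected client.
socket.addEventListener('message', function (event) {
    var comment = JSON.parse(event.data);
    var div = document.createElement('div');
    div.className = 'newcomment';
    div.textContent = comment.text;  // textContent avoids injecting raw HTML
    document.getElementById('comments').appendChild(div);
});

function postComment(text) {
    socket.send(JSON.stringify({ text: text })); // no page refresh needed
}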
That is all JavaScript (or jQuery). You already know how to send the data to your server. Now all you need to do is modify the HTML with JavaScript.
For example (jQuery):
$("#submit").click(function(){
$("#comments").append("<div class=newcomment>"+$("#textbox").val()+"</div>");
$.POST('upload.php',{comments:$("#textbox").val()});
});
Now the comment is sent to upload.php and added to the comment section of your page.
If you also need data from the server to be included, have upload.php return the markup (or script) and load it into the page, like this: $("#getdatefromserver").load('upload.php', {comments: $("#textbox").val()}); jQuery's .load() will POST the data, insert the response from upload.php into the element, and run any script it contains.
And no page refresh is done.

Caching JavaScript API Calls

I'm querying the GitHub API from the client using JavaScript (on this page).
There are 14 API calls each time the page loads, which means I will end up hitting GitHub's API rate limit of 5000 calls per hour pretty fast.
Most caching strategies I've seen assume that you have access to a server, but in my case I'm running a purely static Middleman site.
So my question is this: how can I cache API requests from the client? Are there third-party apps that provide this service?
(Note that my use case is many different clients hitting the page (e.g. it has been linked from Hacker News), not a single client refreshing. So local caching wouldn't really help much. )
Agreed with the Firebase (or separate data store) alternative from @David, so you can create a persistent cache even though you don't have access to the server where the application sits. It's basically another data store: you can update your logic in Middleman to either make a fresh call to the GitHub API or pull from the data saved in Firebase, based on some checks you do when a person visits that Translation page. Check out the logic here
You can cache a single client's page by using local storage or a cookie. This way if the user refreshes, you can have logic to see if you want to query the API again. This would be fine if your user base was small.
This type of caching is typically done on the server, since client-side you are limiting yourself to ~357 page loads per hour at best (5000 calls / 14 calls per load).
To cache on the client side, store the data in local storage and log the time of the query. Then decide on an interval (let's say 5 minutes). Prior to any refresh or page load, look at the user's local storage and see if the query was made within the last 5 minutes. If it was, read from local storage; if not, query the API again. This caching only applies per user, but by querying at most every 5 minutes it would stretch the limit to roughly 30 continuously active users per hour (12 refreshes x 14 calls = 168 calls per user per hour).
http://diveintohtml5.info/storage.html
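Put together, that per-URL, 5-minute cache might look like this sketch (the TTL and the URL-as-key scheme are just the example values from above):

var TTL_MS = 5 * 60 * 1000; // 5 minutes

async function cachedGet(url) {
    var raw = localStorage.getItem(url);
    if (raw) {
        var entry = JSON.parse(raw);
        if (Date.now() - entry.time < TTL_MS) {
            return entry.value;   // fresh enough: no API call, no rate-limit hit
        }
    }
    var res = await fetch(url);   // stale or missing: hit the GitHub API
    var value = await res.json();
    localStorage.setItem(url, JSON.stringify({ time: Date.now(), value: value }));
    return value;
}

// Each of the 14 calls then goes through the cache, e.g.:
// cachedGet('https://api.github.com/repos/owner/repo').then(render);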
No server, eh? You could use something like Parse. Make a Parse object, set the key to the particular GitHub API URI, and set the value to something like this:
{
    stored: <Date>,
    value: <stringified JSON returned from GitHub API call>
}
Then when someone hits your client, first call Parse to see if you already have a cached version for that particular API call. If you don't, make the call to GitHub's API and then store the results on Parse (with stored set to the current DateTime so you can check for staleness later).
If Parse does have a cached version stored, check the stored value to see how old it is - if it is stale, make a fresh call to GitHub, and store the results back into Parse. Otherwise, just parse the JSON string from value and you're good to go.
This is assuming that you want individual caching control over the 14 GitHub API calls. If you don't, then just store the compiled calls into one object on Parse under a key like cache.
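The overall flow, sketched with remoteCacheGet/remoteCacheSet as hypothetical stand-ins for the Parse SDK's query and save calls:

var STALE_MS = 60 * 60 * 1000; // treat entries older than an hour as stale (pick your own threshold)

async function sharedCachedGet(apiUrl) {
    var cached = await remoteCacheGet(apiUrl); // -> { stored, value } or null (hypothetical helper)
    if (cached && Date.now() - new Date(cached.stored).getTime() < STALE_MS) {
        return JSON.parse(cached.value);       // shared cache hit: no GitHub call
    }
    var res = await fetch(apiUrl);             // miss or stale: call GitHub
    var data = await res.json();
    await remoteCacheSet(apiUrl, {             // write back for the next visitor (hypothetical helper)
        stored: new Date().toISOString(),
        value: JSON.stringify(data)
    });
    return data;
}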
