I have a script in my Node.js application that needs to run at specific time intervals to update some values in the database. I use the node-cron library to schedule this script successfully when running locally, but I haven't been able to find a suitable way to run it at the specified intervals once the Node.js application is deployed to Azure. Any help resolving this would be appreciated.
You can use Azure Functions with a timer trigger, Azure App Service background tasks, or Azure Container Instances to achieve the same result in Azure. There are several other ways to do it, but they would be more complicated to set up and maintain.
I'd probably go for an Azure Function with a timer trigger.
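For illustration, here is a minimal sketch of a timer-triggered function using the Azure Functions Node.js v4 programming model; the function name and schedule are placeholders, and the NCRONTAB expression below fires every 30 minutes (note the extra seconds field compared to classic cron):

```js
// A minimal timer-triggered Azure Function (Node.js v4 programming model).
// Function name and schedule are placeholders -- adapt to your job.
const { app } = require('@azure/functions');

app.timer('updateDatabaseValues', {
  schedule: '0 */30 * * * *', // NCRONTAB: every 30 minutes
  handler: async (myTimer, context) => {
    context.log('Scheduled update running at', new Date().toISOString());
    // Move the body of your node-cron callback here,
    // i.e. the logic that updates values in the database.
  },
});
```

Either way, Azure handles the scheduling, so no process has to stay alive the way it does with node-cron.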
I have a shell application, which is the container application that performs all the API communication. I also have multiple micro applications that just broadcast API request signals to the shell application.
Now, with security in mind: how can the shell application ensure that an API request signal is coming from a trusted micro app that I own?
To be precise, my question is: is there a way to let the shell application know that the signal is coming from a micro app that it owns, and not from an untrusted source (e.g. an injected script or XSS)?
As per micro-frontend architecture, each micro frontend should call its own API (microservice). However, your shell app can provide a common/global library that helps the micro frontends make the AJAX calls, but the onus of making the call must remain with the individual micro frontend.
From your question it is unclear whether your apps are running in iframes or are being loaded directly into your page.
In the case of iframes, you're using postMessage, and you can check the origin of a received message via event.origin. Compare this against a list of allowed domains, for example:
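A minimal sketch of that origin check in the shell; the microapp domains and the handleApiSignal handler are hypothetical:

```js
// Shell window: only act on messages posted from whitelisted iframe origins.
const ALLOWED_ORIGINS = [
  'https://micro-app-one.example.com', // hypothetical microapp hosts
  'https://micro-app-two.example.com',
];

window.addEventListener('message', (event) => {
  if (!ALLOWED_ORIGINS.includes(event.origin)) {
    return; // drop messages from frames you don't trust
  }
  handleApiSignal(event.data); // hypothetical: forward the request to your API layer
});
```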
If your micro apps are loaded directly into your page, then you simply control what is allowed to load into them.
So, in most micro frontends, each microapp makes its own API calls to the corresponding microservice on the backend, and the shell app is ignorant of them. The most the shell app typically does here is pass some app config to all microapps, containing things like the hostnames of the various backends and an auth token if all the backends use the same auth.
But to ensure the shell app doesn't have, say, an advertisement with malicious code trying to pose as another microapp, well...
How are the microapps talking to the shell? Is there a common custom event? The name of the CustomEvent would have to be known to the intruder, but that's only security by obscurity, which isn't real security.
Other methods like postMessage are between window objects, which I don't think helps your case.
You might be able to reuse the auth token that the shell and the microapps both know, since it was communicated at startup (see the sketch below). But if you have microapps that come and go, that won't work either.
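A rough sketch of that token idea, assuming a hypothetical 'mf:api-request' event name and a token the shell hands to each microapp when it mounts. Note the limitation described above: any script already running on the page, including an XSS payload, could read the same token.

```js
// Shell: generate a per-session token and hand it to each microapp at mount time.
const sessionToken = crypto.randomUUID();

window.addEventListener('mf:api-request', (event) => {
  const { token, path, payload } = event.detail;
  if (token !== sessionToken) {
    console.warn('Dropping API request from an unverified source');
    return; // the signal did not come from a microapp the shell initialized
  }
  fetch(path, { method: 'POST', body: JSON.stringify(payload) }); // shell makes the call
});

// Microapp side: it received sessionToken from the shell when it was mounted.
window.dispatchEvent(new CustomEvent('mf:api-request', {
  detail: { token: sessionToken, path: '/api/orders', payload: { id: 42 } },
}));
```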
I'm building a website and would like to use Strapi CMS as the backend. Since my website will be built with Gatsby, I planned on using a cron task on the server to build the website every day, if the content has changed.
Is there functionality in Strapi that lets me retrieve the date the content last changed? Or should I create it myself (if that's possible)?
EDIT:
Sadly, I can't use webhooks because I'm forced into a PLESK control panel.
Indeed, there is functionality to achieve this in most CMSs: webhooks. Some CMSs have added the integration natively (though only for Gatsby development, as DatoCMS does). Adding a webhook is much more efficient than creating a cron job that builds every day, since the cron approach may trigger unnecessary deploys when there is no new or edited content, and may cause long delays between when content is added and when it is deployed.
According to Strapi's documentation:
A webhook is a way for an application to notify other applications that an event occurred. Using a webhook is a good way to tell third-party providers to start some processing (CI, build, deployment...). The way a webhook works is by delivering information to a receiving application through HTTP requests (typically POST requests).
You may find this guide interesting. It shows you step by step how to create a webhook in your CD system.
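For reference, the receiving side of a webhook can be as small as the sketch below; the port, path, and build command are assumptions, and Strapi would be configured in its admin panel to POST to this URL on content events:

```js
// A minimal webhook receiver that rebuilds the site when Strapi notifies it.
const express = require('express');
const { exec } = require('child_process');

const app = express();
app.use(express.json());

app.post('/hooks/strapi', (req, res) => {
  console.log('Received Strapi event:', req.body && req.body.event); // e.g. entry.update
  exec('npm run build'); // hypothetical: trigger the Gatsby build
  res.sendStatus(200);
});

app.listen(9000);
```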
Sadly, I can't use webhooks because I'm forced into a PLESK control panel.
In this case, since Plesk only accepts GitHub webhooks, you are forced into the cron implementation.
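If you go the cron route, you can at least skip redundant builds by polling Strapi's REST API for the most recently updated entry. A sketch, assuming a hypothetical articles collection at https://cms.example.com and Strapi v3's query syntax (the timestamp field is updated_at in v3, updatedAt in v4):

```js
// Daily cron task: rebuild only if content changed since the last build.
const cron = require('node-cron');
const fs = require('fs');
const { execSync } = require('child_process');

cron.schedule('0 3 * * *', async () => {
  // Fetch the single most recently updated entry (Node 18+ has global fetch).
  const res = await fetch('https://cms.example.com/articles?_sort=updated_at:DESC&_limit=1');
  const [latest] = await res.json();

  const stampFile = '.last-build';
  const lastBuild = fs.existsSync(stampFile) ? fs.readFileSync(stampFile, 'utf8') : '';

  // ISO timestamps compare correctly as strings.
  if (latest && latest.updated_at > lastBuild) {
    fs.writeFileSync(stampFile, latest.updated_at);
    execSync('npm run build'); // hypothetical Gatsby build command
  }
});
```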
I have a website written in PHP. Until now I used setTimeout with AJAX to update chats, but after that stopped working well I learned about socket.io. I need to implement private messaging, and I have some of it covered with socket.io, but when I ran it on localhost I had to keep the terminal open for as long as I wanted to chat.
1. How am I supposed to do that on my server, which is currently on Hostinger? Is there some terminal I need to run, or do I need SSH (shell) access, which I don't have at the moment?
2. If there isn't, how would the Node script keep running?
3. And since socket.io uses Node, how would the app use node modules? Do they need to be uploaded to the hosting space?
Apart from that, if anyone knows of a private and group messaging implementation, even beyond how it could be done in socket.io, suggestions would be very helpful. I need the users to chat among themselves, not with me.
Thanks in advance!
You can do this by using Node.js, Socket.IO and Express.js.
The link below provides rich documentation to reach a solution:
https://socket.io/get-started/chat
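To give an idea of the private-messaging part, here is a minimal server sketch using Socket.IO rooms keyed by user id; the auth handshake is a placeholder you would wire up to your PHP session or user table:

```js
// server.js -- minimal private messaging: each user joins a room named after their id.
const http = require('http');
const { Server } = require('socket.io');

const server = http.createServer();
const io = new Server(server);

io.on('connection', (socket) => {
  // Placeholder: validate this against your real session/user store.
  const userId = socket.handshake.auth.userId;
  socket.join(userId); // personal room; a group chat would join a shared room name

  socket.on('private message', ({ to, text }) => {
    io.to(to).emit('private message', { from: userId, text }); // delivered only to the recipient
  });
});

server.listen(3000);
```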
For running your application in the background you can use the PM2 process manager, so the Node script keeps running without an open terminal. And yes, the node modules need to be installed on the hosting space (e.g. by running npm install there).
For documentation, refer to the link below:
http://pm2.keymetrics.io/
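PM2 can be driven from a small ecosystem.config.js file (the name and path below are placeholders), which you then launch with pm2 start ecosystem.config.js:

```js
// ecosystem.config.js -- a minimal PM2 config for the chat server.
module.exports = {
  apps: [
    {
      name: 'chat-server',   // process name shown in `pm2 list`
      script: './server.js', // entry point of the socket.io server
      autorestart: true,     // restart automatically if the process crashes
      env: { NODE_ENV: 'production', PORT: 3000 },
    },
  ],
};
```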
I recognize this question is ridiculously stupid, but I need an answer to it. I have tried searching Google, but no luck so far.
So, I am going to build an application in JavaScript (using React and Redux) with separate client and server logic, with the code for each housed in separate files. If I deploy both my server and client code to Heroku, how will Heroku deploy it?
My understanding is that Heroku deploys a single app, and it will see this as essentially two different apps. Do I necessarily need to write both client and server logic together and push them to Heroku as one?
Without an example of what you are trying to do with your app, it is quite hard to give a definitive answer.
But if you want to use Heroku's Node.js support, you will be deploying both server and client logic in one project.
Here is an example for React.js: http://ditrospecta.com/javascript/react/es6/webpack/heroku/2015/08/08/deploying-react-webpack-heroku.html
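The usual pattern is one Node process that serves both the API and the compiled client, so Heroku sees a single app. A sketch with Express, assuming the React build output lands in client/build (paths and the sample route are placeholders):

```js
// server.js -- one process serving both the API and the built React client.
const path = require('path');
const express = require('express');

const app = express();

// Your server logic, e.g. API routes.
app.get('/api/hello', (req, res) => res.json({ msg: 'hello' }));

// Serve the compiled client produced by `npm run build` in ./client.
app.use(express.static(path.join(__dirname, 'client', 'build')));

// Fall back to index.html so client-side routing works.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client', 'build', 'index.html'));
});

app.listen(process.env.PORT || 3000); // Heroku supplies PORT
```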
I'm new to Amazon AWS and want to create a cloud-based REST API in Node.js.
Usually I develop the program along with testing it: I write some tests, then write the code that makes those tests pass. So in a typical programming session, I may run the tests or the app tens of times.
When I do this locally it is easy and quick. But what if I want to do the whole process on the Amazon cloud? What does this code-test-code cycle look like? Should I upload my code to AWS every time I make a change, and then run it against some server address?
I read somewhere in the documentation that when I run a task for a few minutes (for example, 15 minutes), Amazon rounds it up to one hour. So if in a typical development session I run my program 100 times in an hour, do I end up paying for 100 hours? If yes, what would be the solution to avoid these huge costs?
When I do this locally it is easy and quick.
You can continue to do so. Deploying in the cloud does not require developing in the cloud.
But what if I want to do the whole process on Amazon cloud?
When I do this, I usually edit the code locally, then rsync my git directory up to the server and restart the service. It's super quick.
Most people develop locally and occasionally test on a real AWS server to make sure they haven't broken any assumptions (e.g. forgotten something at boot/install time).
There are tools like Vagrant that can help you keep your server installation separate from your development environment.
As you grow (and you've got more money), you'll want to spin up staging/QA servers. These don't have to be run all the time, just when changes happen. (i.e. have Jenkins spin them up.) But it's not worth automating everything from the start. Make sure you're building the right thing (what people want) before you build it right (full automation, etc.)
So if in a typical development session I run my program 100 times in an hour, do I end up paying for 100 hours?
Only if you launch a new instance every time. Generally, you want to continue to edit-upload-run on the same server until it works, then occasionally kill and relaunch that server to make sure that you haven't screwed up the boot process.