I'm building a website and would like to use the Strapi CMS as the backend. Since my website will be built with Gatsby, I planned on using a cron task on the server to rebuild the website every day if the content has changed.
Is there functionality in Strapi that lets me retrieve the date the content last changed? Or should I create it myself (if that's possible)?
EDIT:
Sadly, I can't use webhooks because I'm forced into a PLESK control panel.
Indeed, there is functionality to achieve this in virtually every CMS: webhooks. Some CMSs even ship Gatsby-specific integrations for them (as DatoCMS does). Adding a webhook is much more efficient than creating a cron job that builds every day, since the cron approach may cause unnecessary deploys when there is no new or edited content, and may cause big delays between when content is added and when it is deployed.
According to Strapi's documentation:
A webhook is a way for an application to notify other applications that an event occurred. Using a webhook is a good way to tell third party providers to start some processing (CI, build, deployment ...). The way a webhook works is by delivering information to a receiving application through HTTP requests (typically POST requests).
You may find this guide interesting. It shows you step by step how to create a webhook in your CD system.
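For illustration, the receiving end can be very small. Here is a minimal sketch of a webhook receiver, assuming Express; the port, endpoint path, and build command are all hypothetical, and Strapi would be configured (Settings > Webhooks) to POST to this URL on content events:

```js
// Minimal sketch of a webhook receiver, assuming Express.
// The port, endpoint path, and build command are all hypothetical.
const express = require('express');
const { exec } = require('child_process');

const app = express();
app.use(express.json());

// Configure Strapi (Settings > Webhooks) to POST to this URL on content events.
app.post('/rebuild', (req, res) => {
  // Acknowledge immediately so Strapi doesn't treat the hook as failed.
  res.sendStatus(200);

  // Kick off the Gatsby build in the background.
  exec('cd /var/www/site && npx gatsby build', (err) => {
    if (err) console.error('Build failed:', err);
  });
});

app.listen(3030);
```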
Sadly, I can't use webhooks because I'm forced into a PLESK control panel.
In that case, since Plesk only accepts GitHub webhooks, you are indeed forced to fall back on a cron implementation.
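Since you'll be polling, note that Strapi entries carry an updatedAt timestamp (updated_at in older versions), so a script run from cron can ask the content API for the most recently updated entry and only rebuild when it is newer than the last build. A rough sketch, in which the Strapi URL, collection name, state file, and build command are all assumptions (the query format follows Strapi v4's REST API):

```js
// check-and-build.js -- run from cron, e.g.: 0 * * * * node /path/to/check-and-build.js
// Sketch only: the Strapi URL, collection name ("articles"), state file,
// and build command are assumptions. Query format follows Strapi v4's REST API.
const fs = require('fs');
const { execSync } = require('child_process');

const STATE_FILE = '/tmp/last-build-time';
const API_URL =
  'http://localhost:1337/api/articles?sort=updatedAt:desc&pagination[limit]=1';

async function main() {
  const res = await fetch(API_URL); // global fetch needs Node 18+
  const { data } = await res.json();
  const latestChange = new Date(data[0]?.attributes?.updatedAt ?? 0).getTime();

  const lastBuild = fs.existsSync(STATE_FILE)
    ? Number(fs.readFileSync(STATE_FILE, 'utf8'))
    : 0;

  // Only rebuild when something changed since the last successful build.
  if (latestChange > lastBuild) {
    execSync('cd /var/www/site && npx gatsby build', { stdio: 'inherit' });
    fs.writeFileSync(STATE_FILE, String(Date.now()));
  }
}

main().catch(console.error);
```

You would repeat the query for each content type that should trigger a rebuild.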
I am using server-side GTM, but I am facing ad-blocking issues with the request below when I want to retrieve the gtm.js file:
https://example.gtmdomain.com/gtm.js?id=GTM-MY_GTM_ID
The request works fine when I don't use adblockers.
Is there a way to rename the endpoint to something else, such as https://example.gtmdomain.com/secret_file_name.js?id=GTM-MY_GTM_ID in order to not be blocked by adblockers?
So. Server-side GTM is exactly what it says: it's executed on the server. It listens for network requests. It doesn't have any exposure to what happens on the front end, and the front end has no clue that there is a server-side GTM. Well, unless there are explicit calls to its endpoint, which you can proxy through your own backend mirrors when needed.
What you are experiencing is ad blockers blocking your front-end GTM container. Even though it's theoretically possible to track everything you need, including front-end events, with server-side GTM alone, it's considered best practice to use both GTMs and stream front-end events to the back-end GTM through the front-end GTM.
This, of course, leaves you at the mercy of ad blockers, since they will block your front-end GTM. A way to avoid that is... well, not to use the front-end GTM at all and have all your tracking implemented either in a tag manager that is not blocked (I doubt there is one) or in your own custom JavaScript library that does all the front-end tracking and sends it to the back-end GTM to be properly processed and distributed.
Generally, it's too expensive to implement tracking with no TMS, since now you really have to know your JS, so only the cool kids can afford to do this. A good example would be Amazon.
Basically, it would cost about two to five times more (depending on particulars) to implement tracking with no TMS, but ad blockers typically cut only about 10% of traffic. 10% is not vital for reporting, measuring the effectiveness of funnels, and whatnot. All the critically important data is not reported through analytics anyway: the backend is the real source of critical data.
You can easily do this if you use sGTM hosting from https://stape.io
There is a feature called Custom Loader. With it, you can load the web GTM container from a different path, and all other related scripts (for example, gtag.js for GA4) will also be downloaded from different URLs.
More info: https://stape.io/blog/avoiding-google-tag-manager-blocking-by-adblockers
You can also create your own custom loader client for web GTM. However, there will be problems with the related scripts: UA/GA4 will still be blocked then, but GTM itself will not.
So, I finally implemented a great solution using GTM client templates. It works like a charm.
To summarize, the steps are:
Create a client template from your server container. You can import this template from https://raw.githubusercontent.com/gtm-templates-simo-ahava/gtm-loader/main/template.tpl
Create a new client from this client template
Name your path as you want
This article explains perfectly the required steps: https://www.simoahava.com/analytics/custom-gtm-loader-server-side-tagging/
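For reference, once the custom client is in place, the standard GTM loader snippet on the page just points at that path instead of www.googletagmanager.com; something like the following, where the domain and file name are the placeholders from the question:

```js
// Standard GTM loader snippet, reformatted, with the script source pointed at
// the server-side container's custom client path instead of
// www.googletagmanager.com. The domain and file name are the placeholders
// from the question.
(function (w, d, s, l, i) {
  w[l] = w[l] || [];
  w[l].push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
  var f = d.getElementsByTagName(s)[0],
      j = d.createElement(s),
      dl = l !== 'dataLayer' ? '&l=' + l : '';
  j.async = true;
  j.src = 'https://example.gtmdomain.com/secret_file_name.js?id=' + i + dl;
  f.parentNode.insertBefore(j, f);
})(window, document, 'script', 'dataLayer', 'GTM-MY_GTM_ID');
```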
I have a shell application, which is the container application that performs all the API communication. I also have multiple micro applications which just broadcast API request signals to the shell application.
Now, keeping security in mind, how can the shell application ensure that an API request signal is coming from a trusted micro app that I own?
To be very precise, my ask is: is there a way to let the shell application know that the signal is coming from a micro app that it owns and not from an untrusted source (hacking, XSS, and the like)?
As per the micro-frontend architecture, each micro frontend should make calls to its own API (microservice). However, your shell app can provide some common/global library which can help the micro frontends make the AJAX calls (a sketch of such a helper follows). But the onus of making the call must remain with the individual micro frontend.
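A minimal sketch of what such a shared helper could look like; the base URL and token mechanism are assumptions, not a prescribed API:

```js
// api-client.js -- hypothetical helper the shell could expose to micro frontends.
// The base URL and token mechanism are assumptions; adapt to your setup.
export function createApiClient({ baseUrl, getAuthToken }) {
  return async function apiFetch(path, options = {}) {
    const token = await getAuthToken();
    const res = await fetch(`${baseUrl}${path}`, {
      ...options,
      headers: { ...options.headers, Authorization: `Bearer ${token}` },
    });
    if (!res.ok) throw new Error(`API error: ${res.status}`);
    return res.json();
  };
}

// Each micro frontend still owns its own calls to its own microservice:
// const api = createApiClient({ baseUrl: 'https://orders.example.com', getAuthToken });
// const orders = await api('/orders');
```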
From your question it is unclear if your apps are running in iframes, or are being loaded directly into your page.
In the case of iframes, you're using postMessage, and you can check the origin of a received message via event.origin; compare this against a list of allowed domains.
If your micro apps are loaded directly into your page, then you simply control what is allowed to load into it in the first place.
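A minimal sketch of the iframe case; the origin list and handler names are assumptions:

```js
// Shell side: accept messages only from origins you trust.
// The origin list and handler name are assumptions.
const ALLOWED_ORIGINS = [
  'https://orders.example.com',
  'https://profile.example.com',
];

window.addEventListener('message', (event) => {
  // Drop anything that doesn't come from a known micro app origin.
  if (!ALLOWED_ORIGINS.includes(event.origin)) return;
  handleApiRequestSignal(event.data); // hypothetical shell handler
});

// Micro app side (inside the iframe): always pass an explicit target
// origin rather than '*', so the message can't leak elsewhere.
window.parent.postMessage(
  { type: 'api-request', payload: {} },
  'https://shell.example.com'
);
```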
So, in most micro frontends, each micro app does its own API calls to the corresponding microservice on the backend, and the shell app is ignorant of them. The most the shell app would typically do here is pass some app config to all micro apps, with things like the hostnames of the various backends and an auth token if all the backends use the same auth.
But to ensure the shell app doesn't have, say, an advertisement with malicious code trying to pose as another micro app, well...
How are the micro apps talking to the shell? Is there a common custom event? The name of the custom event would have to be known to the intruder, but that's only security by obscurity, which isn't real security.
Other methods like postMessage are between window objects, which I don't think helps your case.
You might be able to reuse the auth token that the shell and micro apps both know, since it was communicated at startup (see the sketch below). But if you have micro apps that come and go, that won't work either.
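To illustrate that token idea over a custom event (all names here are hypothetical, and as noted above it is only as strong as the token staying unknown to injected code):

```js
// All names here are hypothetical; this is only as strong as the token
// staying unknown to injected code.

// Micro app side: include the token the shell handed out at startup.
document.dispatchEvent(new CustomEvent('shell:api-request', {
  detail: {
    token: sharedAuthToken, // received from the shell during bootstrap
    request: { path: '/orders', method: 'GET' },
  },
}));

// Shell side: reject signals that don't carry the expected token.
document.addEventListener('shell:api-request', (event) => {
  if (event.detail?.token !== sharedAuthToken) return; // untrusted source
  performApiCall(event.detail.request); // hypothetical shell function
});
```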
I am using Node.js with the express, express-handlebars and mqtt packages.
I am currently trying to update a table which shows the current temperature outside and inside. This table is only one part of the website, making it worth striving to update just this particular table each second.
Therefore, reloading the whole page does not seem like the right answer for me, and it also harms the ability to change settings and click on a link.
I have already tried using jQuery's .load() function, which indeed works but does not work properly together with express-handlebars. Instead of the content transmitted to my server via MQTT, {{temperatureInside}} and {{temperatureOutside}} are shown on the website.
Any ideas how to solve this problem?
In the case that you want the table to be updated automatically, you need the server to tell the front end that there is new data. This is not possible with standard HTTP requests, which is why some clever person built WebSockets (the WS/WSS protocols). With Node, the library I use for this is socket.io.
socket.io has code that needs to be both imported on the front end (it tells your client how to talk over WebSockets) and required on the Node side (npm install socket.io --save).
From there, you can set up custom events that both your server and client understand. I'll leave you to go through the docs, but socket.io would certainly do the trick for you; I've used it in many similar circumstances. A rough sketch follows.
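Here is a minimal sketch bridging MQTT to the browser with socket.io; the broker URL, topic names, and element IDs are all assumptions:

```js
// server.js -- bridge MQTT readings to the browser over socket.io.
// Broker URL, topic names, and element IDs below are assumptions.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');
const mqtt = require('mqtt');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

const mqttClient = mqtt.connect('mqtt://localhost');
mqttClient.subscribe(['temperature/inside', 'temperature/outside']);

mqttClient.on('message', (topic, payload) => {
  // Push every new reading to all connected browsers.
  io.emit('temperature', { topic, value: payload.toString() });
});

server.listen(3000);

/* Client side (after loading /socket.io/socket.io.js, which the server
   serves automatically):

   const socket = io();
   socket.on('temperature', ({ topic, value }) => {
     const cell = topic === 'temperature/inside'
       ? document.getElementById('temperatureInside')   // element IDs assumed
       : document.getElementById('temperatureOutside');
     cell.textContent = value;
   });
*/
```

This way, the handlebars template renders once on the server, and only the cell contents change in the browser.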
I'm building a website on Firebase. It's a simple look-up service which only has an input element that fires a request to a third-party API:
www.3rdparty.com/api/[myapikey]/method
The problem is that I'm limited to x requests per second and I can't expose my API key to the users.
My eventual goal is to store the responses in Firebase so that I can limit the number of requests that reach the third party (a caching function).
Putting such an API key into the client-side code of your application introduces the risk of malicious users taking your key and using it for their own purposes. There is nothing you can do about that, except simply not including the API key in the client-side code. This applies equally to Android and iOS code, by the way.
Since you can't put the API key in client-side code, you'll have to run it on a server. This is quite a common scenario for using server-side code within a Firebase architecture: the code needs access to some information that common clients cannot be trusted with. It is covered by pattern 2 in our blog post on common Firebase application architectures.
From that blog post:
An example of such an architecture in action would be clients placing tasks for the server to process in a queue. You can have one or more servers picking off items from the queue whenever they have resources available, and then place the result back into your Firebase database so the clients can read them.
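As a concrete sketch of the caching idea using Cloud Functions for Firebase; the third-party URL shape, database path, and TTL are assumptions:

```js
// functions/index.js -- proxy-and-cache sketch with Cloud Functions for
// Firebase. The third-party URL shape, database path, and TTL are
// assumptions; global fetch needs the Node 18+ runtime.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

const API_KEY = process.env.THIRD_PARTY_KEY; // stays server-side, never shipped to clients
const TTL_MS = 60 * 60 * 1000;               // cache responses for 1 hour

exports.lookup = functions.https.onRequest(async (req, res) => {
  const term = String(req.query.q || '');
  const key = Buffer.from(term).toString('hex'); // safe Realtime Database key
  const cacheRef = admin.database().ref(`cache/${key}`);

  // Serve from the Firebase cache while fresh, so the third party isn't hit.
  const snap = await cacheRef.once('value');
  const cached = snap.val();
  if (cached && Date.now() - cached.at < TTL_MS) {
    return res.json(cached.data);
  }

  // Otherwise call the third-party API and cache the result.
  const upstream = await fetch(
    `https://www.3rdparty.com/api/${API_KEY}/method?q=${encodeURIComponent(term)}`
  );
  const data = await upstream.json();
  await cacheRef.set({ at: Date.now(), data });
  res.json(data);
});
```

The client then calls your function's URL instead of the third party, so the key never leaves the server and repeated look-ups are served from the cache.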
Pardon me if I am asking something really stupid, but this is what I want to implement in my new role as an analytics implementer. Some of our files (mostly PDFs) are stored on a web server (a CDN server) to reduce the load on the application server.
We provide links to these files to all our users across the world. What I want is to track these file downloads whenever they occur. So I just wanted to know: is there any way I can call a function or a routine from which I can make those tracking calls?
Not really.
If you are using a 3rd party web hosting as CDN, then you could simply get the Analytics reports using whatever tool your host offers.
If you are running your own hosting box, you could install almost any analytics software on it to monitor access. Just one example is provided here: http://ruslany.net/2011/05/using-piwik-real-time-web-analytics-on-iis/
The clean, simple way, however, would be to have a small web application running on that CDN server that accepts file requests and then returns the file (a sketch follows the list below). The advantages are that you could:
record whatever statistics you wish off it.
use widely available tools like Google Analytics
make dynamic decisions, for example choosing which version of a file to send based on factors like user bandwidth
transparently handle missing files and path changes, so links will be valid forever
send different caching headers for different files
implement very simple access control and policy based restrictions
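A minimal sketch of that wrapper application, assuming Express; the file root and the tracking hook are placeholders, and the log line could just as well be a database insert or an analytics hit (e.g. Google Analytics Measurement Protocol):

```js
// download-tracker.js -- serve the CDN files through a tiny Express app so
// every download can be recorded. File root and the tracking hook are
// assumptions.
const express = require('express');
const path = require('path');

const app = express();
const FILE_ROOT = '/var/cdn/files'; // where the PDFs actually live

app.get('/files/:name', (req, res) => {
  // basename() strips any path segments, preventing directory traversal.
  const filePath = path.join(FILE_ROOT, path.basename(req.params.name));

  // 1. Record the download.
  console.log(JSON.stringify({
    event: 'download',
    file: req.params.name,
    ip: req.ip,
    ua: req.get('user-agent'),
    at: new Date().toISOString(),
  }));

  // 2. Return the file; missing files turn into a 404.
  res.sendFile(filePath, (err) => {
    if (err && !res.headersSent) res.sendStatus(404);
  });
});

app.listen(8080);
```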