I'm working on a purchase page for a new product that the inventor expects to receive significant media coverage (time will tell...). We're building a simple one-page product site using HTML, CSS, and Stripe's hosted checkout pages.
We're trying to minimise the amount of back-end logic needed, as dynamic responses are less able to be cached by Cloudflare.
We do, however, need to show the product pricing in different currencies depending on whether the visitor is in the UK (GBP), Europe (EUR), or the US/rest of world (USD).
Cloudflare will pass a header (HTTP_CF_IPCOUNTRY, when the feature is turned on) with a country code to our upstream webserver, but this won't always be available if we're aiming to cache the entire page.
Any clever ideas?
I'm thinking an AJAX call to a geolocation service, perhaps?
It seems like you will need something dynamic to achieve this. If you can offload it to a geolocation service with JavaScript, as you mentioned, that would probably be best.
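As a hedged sketch of that approach: since the site is already behind Cloudflare, the plain-text /cdn-cgi/trace endpoint that Cloudflare exposes on proxied sites can stand in for a third-party geolocation service. The data-currency attribute and the EU country list below are assumptions for illustration, not part of your setup.

// Sketch only: resolve the visitor's country client-side so the page
// itself stays fully cacheable. Assumes Cloudflare's /cdn-cgi/trace
// endpoint, whose response contains a line like "loc=GB".
var EU = ['AT','BE','BG','HR','CY','CZ','DK','EE','FI','FR','DE','GR','HU',
          'IE','IT','LV','LT','LU','MT','NL','PL','PT','RO','SK','SI','ES','SE'];

function currencyFor(country) {
  if (country === 'GB') return 'GBP';
  if (EU.indexOf(country) !== -1) return 'EUR';
  return 'USD'; // US / rest of world
}

fetch('/cdn-cgi/trace')
  .then(function (res) { return res.text(); })
  .then(function (body) {
    var match = body.match(/^loc=([A-Z]{2})$/m);
    var currency = currencyFor(match ? match[1] : 'US');
    // Show only the pricing block tagged with the detected currency,
    // assuming markup like <span data-currency="GBP">...</span>.
    document.querySelectorAll('[data-currency]').forEach(function (el) {
      el.hidden = el.getAttribute('data-currency') !== currency;
    });
  });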
If you end up having to build the dynamic component yourself, I think you would have the fewest dynamic requests by making a page that redirects to a static page for each region based on user location. A page like this would be easy to make in Cloudflare Workers and cheap ($5/month + $0.50 per million requests). You should probably give users the option to override their region manually, given the general inaccuracy of IP geolocation.
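To make the Workers idea concrete, here is a hedged sketch; the /gbp/, /eur/ and /usd/ paths are hypothetical stand-ins for your three static pages. Workers see the visitor's country in the CF-IPCountry request header.

// Sketch of a Cloudflare Worker that 302s visitors to a static,
// cacheable page per region. The region paths are hypothetical.
var EU = ['AT','BE','BG','HR','CY','CZ','DK','EE','FI','FR','DE','GR','HU',
          'IE','IT','LV','LT','LU','MT','NL','PL','PT','RO','SK','SI','ES','SE'];

addEventListener('fetch', function (event) {
  var country = event.request.headers.get('CF-IPCountry') || 'US';
  var region = 'usd';                       // default: US / rest of world
  if (country === 'GB') region = 'gbp';
  else if (EU.indexOf(country) !== -1) region = 'eur';
  var url = new URL(event.request.url);
  url.pathname = '/' + region + '/';
  event.respondWith(Response.redirect(url.toString(), 302));
});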
There is an option to make it completely static by asking the browser for the user's location (the Geolocation API) and then mapping that to a currency/region in JavaScript, but users will likely reject the location prompt en masse, and I wouldn't consider it a truly usable option.
I am a junior developer, and I work on a site whose content is stored on Contentful. Currently, on each page load, the JavaScript retrieves the content from Contentful via the API.
The content of the site is not likely to change often, so I would like to cache it.
The site is hosted on Netlify.
So I thought I could fetch the content from Contentful during the Node build and store it in a "cache" that the JavaScript could use when loading the page. When content is modified on Contentful, a webhook would trigger a rebuild on Netlify.
I don't know if my thinking is right; thank you for your help and your answers.
Contentful actually has caching built into its service, so you shouldn't need to do anything to get the benefits of caching on your website. Quoting from the Contentful docs:
There are no limits enforced on requests that hit our CDN cache, i.e. the request doesn't count towards your rate limit and you can make an unlimited amount of cache hits. For requests that do hit the Contentful Delivery API, rate limits of 78 requests per second and 280,800 requests per hour are enforced by default. Higher rate limits may apply depending on your current plan.
See https://www.contentful.com/developers/docs/references/content-delivery-api/#/introduction/api-rate-limits for full details
If you want to do additional caching on top of the Contentful API, you could use a Node library that handles it for you. Something like apicache would work pretty well in this use case.
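For example, a minimal sketch with Express and the apicache middleware; the /api/content route and the fetchEntries() helper are hypothetical placeholders for your own Contentful SDK call.

// Sketch: memoize Contentful responses for 10 minutes behind an
// Express route using apicache.
const express = require('express');
const apicache = require('apicache');

const app = express();
const cache = apicache.middleware;

app.get('/api/content', cache('10 minutes'), async (req, res) => {
  const entries = await fetchEntries(); // hypothetical Contentful SDK wrapper
  res.json(entries);
});

app.listen(3000);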
If rebuilding the stack when new content is published, rather than rendering on page view, is important to you, I'd encourage you to take a look at static sites. Contentful has great webhook support that you can use together with Netlify to rebuild your site any time an author pushes new content. Check out this tutorial about using Gatsby for more details: https://www.contentful.com/blog/2018/02/28/contentful-gatsby-video-tutorials/
It may be better to cache the pages separately (instead of caching the whole site) and use a cron job to compare the cache of each page (maybe weekly) against the current version; if it differs, regenerate the cache for that page. You might also want to trigger that manually, for example on deploys or in the rare event that a given page changes.
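A hedged sketch of that cron idea, using the node-cron package; fetchPage(), readCache() and regenerateCache() are hypothetical helpers for whatever stack you use.

// Sketch: weekly job that regenerates a page's cache only when the
// live version differs from the cached one.
const cron = require('node-cron');

const PAGE_URLS = ['/', '/about', '/contact']; // hypothetical page list

cron.schedule('0 3 * * 0', async () => {       // Sundays at 03:00
  for (const url of PAGE_URLS) {
    const fresh = await fetchPage(url);        // hypothetical fetch
    const cached = await readCache(url);       // hypothetical cache read
    if (fresh !== cached) {
      await regenerateCache(url, fresh);       // hypothetical regeneration
    }
  }
});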
Anyway, before you start on all this caching work, you should check whether your site is anywhere near being overwhelmed by requests. If not, caching can be postponed, which would be wise: if your site's nature changes over time and content starts changing often, you might need a different cache, or even no cache at all.
I have inserted the analytics.js tracking script into my code, and now I am trying to get user data such as medium, source, etc. using JavaScript and put them into variables. Is there a way I can do this using the Client ID?
I assume you mean getting the data in real time for use in your website. That is not possible.
The Client ID is not exposed in the reporting interface by default; you'd need to store it in a custom dimension.
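For reference, a hedged sketch of capturing the Client ID in a custom dimension with analytics.js; the dimension1 index is an assumption, so use whichever index you configured in the GA admin.

// Sketch: read the Client ID from the tracker and attach it to every
// hit as a custom dimension (index assumed to be dimension1).
ga(function (tracker) {
  var clientId = tracker.get('clientId');
  ga('set', 'dimension1', clientId);
  ga('send', 'pageview');
});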
There is a processing delay; report data may only be reliable the next day.
While there is the (less reliable) data from the Real Time API (which at least contains medium and source information), it does not support custom dimensions, so you could not use the Client ID as a query key.
Also, to retrieve data from the API you need to be authenticated, which the current users of your webpage are not. So you would need to set up some kind of server-side proxy that handles authentication for you.
Also, there are API limits determining how many requests you can make in a given time frame. Even a small site would exhaust those limits pretty quickly.
So while in theory this sounds doable, it is not actually feasible for any real-life purpose.
SITUATION
I have a main public Liferay website, which is therefore accessible both by intranet and non-intranet (i.e. public) users.
I also have a Liferay intranet website, which is accessible only to intranet users because it is protected by a login page.
The login page to the intranet website is public.
After you successfully log in, the intranet website is loaded.
EXPECTED:
In my Google Analytics account for the main website, I want to differentiate intranet users from public users (e.g. in order to understand how the 2 categories behave).
Questions
Can I use a custom dimension to solve this problem, or is there a better way?
Custom dimension data has to be sent via hits (UPDATE: by "hits" I mean either pageview or event hits; I am not referring to the dimension scope, cf. https://developers.google.com/analytics/devguides/collection/analyticsjs/custom-dims-mets), therefore I should:
load the Google Analytics tracking code of the main website on the intranet website (the site displayed after successfully logging in)
send a pageview hit from this intranet website to the main website's property together with a custom dimension, e.g.
ga('send', 'pageview', {
  'dimension1': 'I am an intranet user'
});
Is this correct?
Does the above mentioned solution have any impact on my Analytics data in the main website (e.g. more pageviews due to the tracking code added to the intranet website, or strange behaviours in counting user sessions, etc.)?
Thanks a lot.
UPDATE:
Actually, the solutions proposed below would not work, because the 2 websites (intranet and non-intranet) are considered different domains.
So, even if I had the following domains
intranet website: http://intranet.mycompany.com
company website: http://www.mycompany.com
and I sent data to the same UA account (i.e. the company website UA account), they would be counted as different visits.
Quoting Google (see https://developers.google.com/analytics/devguides/collection/gajs/gaTrackingSite#profilesKey)
If a user independently visits two sites that are tracking in the same view (profile), such as through a bookmark, these visits will still be counted under separate sessions. In this scenario, the linking methods are not invoked, and thus there is no way to determine the initiating session for a given user.
So, how could I solve my problem?
Would it be possible to solve it by implementing cross-domain tracking (https://support.google.com/analytics/answer/1034342?hl=en), and how?
Thanks a lot.
Can I use a custom dimension to solve this problem, or is there a better way?
Yes, a custom dimension is perfect for this.
Custom dimension data has to be sent via hits
The User-level scope is more appropriate than the hit-level one for what you want to achieve. The linked document explains in detail why, and gives an example similar to your use case.
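A hedged sketch of the user-scoped variant; the scope itself is configured in the GA admin, and dimension1 and the UA-XXXXX-Y property ID are placeholders.

// Sketch: with dimension1 configured as User scope in the admin, one
// 'set' is enough; GA applies the value to the user's other hits too.
ga('create', 'UA-XXXXX-Y', 'auto');
ga('set', 'dimension1', 'intranet');
ga('send', 'pageview');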
Does the above mentioned solution have any impact on my Analytics data in the main website
Yes, the impact is mainly that you will have extra data corresponding to the visits to the intranet.
A custom dimension works well for your purpose. You will get additional hits for visits to your intranet site, but you can segment them out via the custom dimension to separate internet from intranet traffic.
Since the intranet requires a login there is one other way you could try, which would have the additional benefit of allowing for cross-device tracking (if that is beneficial to you).
Google calls this "userID", despite the fact that it must not be used to identify individual users. On login you pass in a unique value per user that is set by your backend system (UUID format is suggested, but any unique string would work). Since it is not assigned by the tracking code but set by your system, it will be the same ID on every device. It is used to de-duplicate users, i.e. persons that log in from multiple devices will be recognized as single users (also useful if people delete their cookies; the userID can be used to aggregate sessions into unique visitors).
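A hedged sketch of wiring this up in analytics.js; SERVER_GENERATED_ID is a placeholder for the opaque value your backend assigns per user.

// Sketch: enable the User-ID feature. Only set userId when the user is
// actually logged in, and never use a name or email as the value.
ga('create', 'UA-XXXXX-Y', 'auto');
ga('set', 'userId', 'SERVER_GENERATED_ID'); // opaque, backend-assigned
ga('send', 'pageview');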
To make this work you need to set up a special view that contains only data from visits where the userId is set (so you would have a view for your public site and a view only for your logged-in users). You get a few special reports, for example one that tells you how many users log in from different device categories.
What the userID should not do, and in fact must not do according to Google's terms of service, is identify individuals. The userId is not exposed in the interface, and you must not store it as a custom dimension. If you store it on the client side in a cookie, you must unset it once the user logs out. It is merely there to allow continuous tracking of users independently of cookies (plus you need to amend your privacy policy if you want to use this).
Of course you can combine both approaches to get even more insights.
Say a link to a page is sent to a user via email. If the person is already logged into the site in his/her browser, clicking on the link takes him/her to the page. However, if he/she is not logged in, he/she should be asked to log in in order to access the page. Is there a way to achieve the above functionality using jQuery or JavaScript?
Yes. Build a back-end authentication system, using AJAX and whatever your server-side language is.
From there, develop a hypermedia-style content system and a modular, "widget"-based application delivery model.
Within your hypermedia responses to login (plus whatever relevant path information was passed along from the e-mail), either redirect the page to a new one (based on the linked response from the server), or download the widgets requested from the server (for whatever application you're displaying media in), and then stream in AJAX content (again, from a URL dictated by the server response).
This is about as close as you're going to get to security, in terms of delivering things to the client, in real-time, with authentication.
If you were to load the reports/gallery/game/whatever, and put a div over it, and ask for users to log in, then smart users can just kill the div.
If you include the content, or include the application components (JS files), or even include the links to the JS files which will request and display the content, then clever people are again going to disassemble that, in 20 seconds, flat.
The only way I can see to do this is to have a common request-point, to touch the server, and conditionally load your application, based on "next-steps" URLs, passed to the client, based on successful authorization and/or successfully completing whatever the previous step was, plus doing authentication of some form on each request (REST-based tokens+nonces, or otherwise)...
This would keep the content (and any application-structure which might have vulnerabilities) from the client, until you can guarantee that the client has been properly authorized, and the entire application is running inside of multiple enclosed/sandboxed modules, with no direct access to one another, and only instance-based access to a shared-library.
Is it worth the work?
Who knows.
Are we talking about a NORAD nuclear-launch iPhone app, which must run in JavaScript?
Then no, engineering this whole thing for the next six months isn't overboard.
And again, all of this security falls over as soon as one person leaves themselves logged-in, and leaves their phone on the table (biometric authentication as well, then?).
Are we talking about a gallery or discount offers that you want to put behind a login, so you know that only the invited people are using them?
Well, then an 18-month project to engineer, develop, debug and deploy a system like this is probably going to be overkill.
In this case, perhaps you can just do your best to prevent the average person from stealing your content or using your cut-prices, and accept that people who take the time to dig into and reverse-engineer everything are going to find a way to get what they want, 95 times out of 100.
In that case, perhaps just putting a login div overtop of the page IS what you're going to be looking for...
If you're dealing with, say a company back-end, or with company fiscals or end-user, private-data, or anything of the sort, then aside from meeting legal requirements for collection/display/storage, how much extra work you put into the security of the system depends on how much your company's willing to pay to do it.
If it makes you feel better, there are companies out there that pay $60,000-$150,000 a year, to use JS tracking/testing programs from Adobe. Those programs sit right there, on the webpage, most of the time, for anybody to see, as long as you know where to look.
So this isn't exactly an unknown problem.
Yes, it is. By authenticating (logging in) you can store a "loggedIn" cookie, which you have to delete when the session ends (logout or closing the browser). You can use that cookie to check whether somebody is logged in or not. If they're not logged in, you can display the login page and send the login request with AJAX. By the way, it is not good practice to build hybrid applications like that; it is better to use SPAs with a REST service, or to implement this on the server side.
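A hedged sketch of that cookie check; the cookie name and the /login.html path are assumptions, and as the previous answer points out, real enforcement still has to happen server-side.

// Sketch: redirect to the login page when the "loggedIn" cookie set by
// the backend is absent. This only controls what is displayed; the
// server must still refuse protected content to unauthenticated requests.
function isLoggedIn() {
  return document.cookie.split('; ').some(function (c) {
    return c.indexOf('loggedIn=') === 0;
  });
}

if (!isLoggedIn()) {
  window.location.href = '/login.html?next=' +
    encodeURIComponent(window.location.pathname);
}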
We send follow up emails for inquiries on our products and I wanted to track how effective they are.
This is my plan:
Update the url in the hyperlink of the email to include a query string like:
href=http://www.somepage.htm?source=fromEmail
And then track how many visits I get with the query string = fromEmail
My problem is that the page is a .htm and I didn't really want to rewrite it, so I'm looking for a JavaScript counter that can accommodate the query string. Ideally I would like to be able to track total page hits as well as the hits that come specifically from these emails. Even more ideally, I would like to be able to track various information in SQL Server so that the person who requested this could do some reporting on it.
Am I going about this the right way or should I just rewrite it in .net (as we are a .net shop)?
While it is definitely possible to put some JavaScript on your .htm page that fires an AJAX request to increment a SQL counter table when source=fromEmail, I would say it is more reliable to have the server increment this counter when serving up the page.
Having the server do the work when the hit originally comes in will also allow you to track more specific information about the request for the report.
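That said, here is a hedged sketch of the client-side variant, in case the page really can't be made dynamic; /track.aspx is a hypothetical endpoint that would increment the SQL counter.

// Sketch: fire a tracking request only when the visit came from the
// email link (?source=fromEmail). The endpoint is hypothetical.
var match = /[?&]source=([^&]+)/.exec(window.location.search);
if (match && decodeURIComponent(match[1]) === 'fromEmail') {
  new Image().src = '/track.aspx?source=fromEmail&t=' + Date.now();
}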
JavaScript in emails is a no-no. Outlook blocks JavaScript by default, so there goes 50% of your users. Other email systems aren't keen on running JavaScript either. Remember, when you're doing HTML emails, you need to think 1995-vintage HTML. Thanks, Microsoft.
You've got a few (ok, but not great) options:
Include an image file in it. When it gets loaded, count it as a hit. This is how all the major services handle email tracking: with a 1px × 1px white image file that they most often place at the bottom of the message (a sketch of a counting endpoint follows this list). The obvious problem with doing this is that if the recipient uses Outlook's preview pane with images enabled, it counts as a hit they may not have read. If they read it in Gmail without unblocking images (hidden by default), you've got a real hit that doesn't get recorded. So, either way, your numbers are wrong.
Track link clicks by routing links through your server, which then rewrites the URLs the browser follows. Again, it works well enough, but it won't capture the real numbers, because only a small percentage of people who get an email actually click a link in it. Here's an example using link tagging with Google Analytics
A combination of the two above. It covers both cases, yes, but could result in double counting one user. You could also hybridize the two by setting a variable on each image that could track back to the source email, then store hits in a DB to eliminate dupes. That's a LOT of work, though.
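To make the image-pixel option concrete, a hedged sketch of a counting endpoint in Node/Express; recordHit() is a hypothetical function that writes a row to your SQL table, and the email would embed something like <img src="https://example.com/open.gif?id=CAMPAIGN" width="1" height="1">.

// Sketch: serve a 1x1 transparent GIF and count each request as an open.
const express = require('express');
const app = express();

// Canonical 1x1 transparent GIF, base64-encoded.
const PIXEL = Buffer.from(
  'R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64');

app.get('/open.gif', (req, res) => {
  recordHit(req.query.id);                // hypothetical SQL insert
  res.set('Content-Type', 'image/gif');
  res.set('Cache-Control', 'no-store');   // force a fresh request per open
  res.send(PIXEL);
});

app.listen(3000);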
My company sends (and tracks) thousands of emails daily as part of its core business, and we always encourage clients to do emails with "teasers" that draw recipients into other websites for the main content. Why? The closer we get a user to the main site, the closer we are to a sale; nobody has ever done an ecommerce transaction solely by email yet (that I know of). Also, it's one heck of a lot easier, and offers far more options, to do tracking via Google Analytics on a site than it is to track emails. Since you can't reliably embed Analytics in emails, your best bet is to get 'em to a website that can.