How to know, on the server side, if a JS file is loaded?

I'm a developer for the website Friconix, a collection of free icons. Users can include our JS file on their web pages as explained here: https://friconix.com/start/.
I would like to gather statistics on clients using our tools. How can I know, on the server side, which pages (URLs, or at least domains) request our JS file?
It's important to explain that, to keep the loading time down, the JS file is not dynamically generated. There is no PHP file executed every time the file is requested; the file is saved as plain text on our server. I wonder if the right solution is to add something to the .htaccess file?

Since the script is requested from your server every time a user loads a page, you can track who requests that path and how often.
A simple approach: the requests will be present in your access log files, so you can write a script that reads your log files every so often (see the sketch after this list).
A second approach is to set up a special rule/location in nginx/Apache/whichever server you are running.
A third approach is to serve the script via a CDN that has all these attributes built in (e.g., CloudFront).
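As a sketch of the log-file approach: a small Node script that scans an access log in the common "combined" format, counts requests for the icon script, and tallies the Referer domains. The log path and script path below are examples, not Friconix's actual values.

// tally-referers.js -- a rough sketch, not production code.
const fs = require('fs');
const readline = require('readline');

const LOG_FILE = '/var/log/apache2/access.log'; // adjust to your server
const SCRIPT_PATH = '/fico.js';                 // example path of the served JS file

const counts = {};
const rl = readline.createInterface({ input: fs.createReadStream(LOG_FILE) });

rl.on('line', (line) => {
  if (!line.includes(SCRIPT_PATH)) return;
  // In the combined format the Referer is the second-to-last quoted field:
  // ... "GET /fico.js HTTP/1.1" 200 1234 "https://example.com/page" "UA..."
  const quoted = line.match(/"[^"]*"/g) || [];
  const referer = quoted.length >= 2 ? quoted[quoted.length - 2].replace(/"/g, '') : '-';
  let domain = '-';
  try { domain = new URL(referer).hostname; } catch (e) { /* no or malformed Referer */ }
  counts[domain] = (counts[domain] || 0) + 1;
});

rl.on('close', () => console.table(counts));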

This can be done via a simple REST API call from the script. When your script loads, it makes an AJAX/XHR call to the API; the request can contain a unique client ID. On the server side, you implement a simple endpoint that accepts these requests, stores them, and extracts the information you need for analytics.
Information such as the client's domain and IP address can be gathered from the API requests made from the client's pages.
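A minimal sketch of such a beacon, placed at the top of the distributed script. The endpoint URL and client ID below are assumptions for illustration, not an existing Friconix API:

// Report the embedding page back to a collection endpoint.
(function () {
  var endpoint = 'https://friconix.com/api/hit'; // hypothetical endpoint
  var payload = JSON.stringify({
    clientId: 'YOUR-CLIENT-ID',   // unique per customer, if issued
    page: window.location.href,   // or just location.hostname
  });
  // navigator.sendBeacon is fire-and-forget; fall back to XHR.
  if (navigator.sendBeacon) {
    navigator.sendBeacon(endpoint, payload);
  } else {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', endpoint, true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(payload);
  }
})();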
Reference - How do I call a JavaScript function on page load?

Related

Run a script on Google Cloud from a local HTML page

I would like to run a script on a Google Cloud server from a local HTML page.
To be more clear, the steps would be:
open a local HTML page on my local computer.
push a button that triggers a script on my Google Cloud server.
the script creates a file on the server that I can download by pressing another button.
I'm new to this field and don't know where to start.
How do I connect to the server from HTML? (PHP? JavaScript?)
How does the authorization process work?
There are several languages and strategies that you can use.
Locally you can use JavaScript or PHP (the latter needs installation and configuration) to make an HTTP request (it may also be another protocol) to a script on the server (which can be in PHP, JavaScript, or another language); upon receiving the request, that script processes it and generates a file at a specific path.
Then, with the other button, you make a request to that path to download the file.
My suggestion is to choose the languages and implement it with those to understand the process.
Create an HTML page and put a button on it.
Attach a function to the button's onclick to send an Ajax request.
That would cause cross-origin request challenges for you down the road.
You can simply put a link on your local web page, styled as a button, pointing to your Google Cloud hosted application.
Create the file on the server side, and you can set an HTTP header
Content-Disposition: attachment; filename="results.csv"
to make the file download on the user's end.
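A minimal Node/Express sketch of this pattern, assuming Express is the server framework; the route names and file path are illustrative only:

const express = require('express');
const app = express();

// Button 1: trigger the script that creates the file on the server.
app.post('/run-script', (req, res) => {
  // ... generate /tmp/results.csv here ...
  res.sendStatus(204);
});

// Button 2: a plain link styled as a button can point here.
app.get('/download', (req, res) => {
  // res.download() sets "Content-Disposition: attachment" for you.
  res.download('/tmp/results.csv', 'results.csv');
});

app.listen(8080);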

Ajax crawlable application without hashbang

I am building a website that is Ajax-based. When the DOM is loaded, an async HTTP request is made to a server, which answers with JSON; the data from the JSON are then put into the DOM by JavaScript.
The Google crawler just doesn't read content loaded by JavaScript, so I need to create an HTML snapshot of my page (on the server) and make my server handle requests with a hashbang.
But my problem is that I am not using hashbangs in my requests.
My only Ajax request is something like http://www.apiservice.com?get_data=true. How can I tell Google which request to make to get the HTML snapshot of the entire page, and where can I do it (maybe by putting the request URL in the sitemap)?
Thank you in advance.
I understand your page is built in two steps: a first request to the server getting the core HTML/JavaScript, and a second one getting the additional data to be displayed in your page.
If so, then the first request is the one the crawler makes with the hashbang. It makes a lot of sense to put it in your sitemap. The static HTML page that your server should return is the complete HTML resulting from the two server calls in your process.
If you do not cache the static HTML page for the crawler and instead generate it dynamically (e.g., with HtmlUnit; see this SO reference), then both steps are executed before the static HTML snapshot is returned. If you cache it, you ought to make sure the cached copy is built the same way.
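For reference, Google's AJAX crawling scheme (since deprecated) also covered pages without hashbang URLs: adding <meta name="fragment" content="!"> to the page's head told the crawler to refetch the URL with ?_escaped_fragment_= appended, which your server can answer with the snapshot. A rough Express sketch, with the snapshot renderer left as a hypothetical stub:

const express = require('express');
const app = express();

// Hypothetical: return the cached pre-rendered HTML snapshot
// (e.g., the output of an HtmlUnit run).
function renderSnapshot() {
  return '<html><body><!-- full pre-rendered content --></body></html>';
}

app.get('/', (req, res) => {
  if ('_escaped_fragment_' in req.query) {
    // Crawler request: serve the HTML snapshot.
    res.send(renderSnapshot());
  } else {
    // Normal visitors get the Ajax-driven page.
    res.sendFile(__dirname + '/index.html');
  }
});

app.listen(3000);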

How to get images from another web page and show them on my website

I just want to know how to get images from another web page and show them on my website.
The flow is:
Type a page URL in a text box and submit.
Collect all images on that web page (not in the entire site) and show them on my web page.
So, you need to get images from a page, and the input is the address of that page. Well, you have two solutions:
I. If this is functionality for your site which others will use, then plain JavaScript is not enough, because browsers' same-origin policy blocks reading such data from other pages. What you need in this case is to send the URL to a script on your server, which will download that page, parse it for <img> tags, and return the list of image srcs.
How exactly to do this is a pretty complicated question, for it depends on your site's server-side programming language. In any case, such functionality would consist of client-side JavaScript using AJAX techniques and a server-side script (e.g., PHP); the client script is pretty much straightforward.
On the client side your JS has to:
1. Get the desired URL
2. Send it to the server
3. Wait for the server's response (which contains the srcs of the images on the desired page)
4. Create img tags with the srcs you got from the server script
Keywords to google for this are, for example, AJAX, XMLHttpRequest and JSONP (sorry if you already know that :). A sketch of these steps follows.
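A minimal client-side sketch of steps 1-4; the /images endpoint and the results container are assumptions:

function showImages(pageUrl) {
  var xhr = new XMLHttpRequest();
  // Steps 1-2: send the desired URL to the server script.
  xhr.open('GET', '/images?url=' + encodeURIComponent(pageUrl), true);
  // Step 3: wait for the response, a JSON array of srcs.
  xhr.onload = function () {
    var srcs = JSON.parse(xhr.responseText);
    // Step 4: create img tags from the returned srcs.
    srcs.forEach(function (src) {
      var img = document.createElement('img');
      img.src = src;
      document.getElementById('results').appendChild(img);
    });
  };
  xhr.send();
}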
On the server side your script (php|ruby|python|perl|brainfuck) has to (a sketch follows the list):
1. Get the page URL sent by the JavaScript code at step 2
2. Download the page at that URL
3. Parse it looking for img tags and their srcs
4. Send the list of srcs (in XML, JSONP or any other form) back to the client
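The answer leaves the server language open; here is a matching sketch in Node (18+, for the built-in fetch) with Express. The crude regex parse is for illustration only; a real HTML parser is more robust:

const express = require('express');
const app = express();

app.get('/images', async (req, res) => {
  if (!req.query.url) return res.status(400).send('url parameter required');
  // Step 2: download the page at the given URL.
  const page = await (await fetch(req.query.url)).text();
  // Step 3: pull out img srcs (regex parsing is crude but short).
  const srcs = [...page.matchAll(/<img[^>]+src=["']([^"']+)["']/gi)]
    .map((m) => new URL(m[1], req.query.url).href); // resolve relative srcs
  // Step 4: send the list back to the client as JSON.
  res.json(srcs);
});

app.listen(3000);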
II. If you need to get images from other pages only for your personal use, you can write an extension for your browser. This way doesn't require any server-side scripts.
If you want to scrape other websites with JavaScript, you should create a server-side script that can act as a proxy, or you can use YQL.
Here's my answer for a cross-domain Ajax call with YQL:
Cross Domain Post method ajax call using jquery with xml response
First of all, check the copyright. Copy an image only if the owner provides it for free use, and read and understand the usage license.
If the image is free to use under the owner's stated license, then download the image and use it. Also, please don't forget to keep a copy of the license and the URL of the website you downloaded the image from.
Downloading before use is suggested so that if the other website shuts down tomorrow, your website remains unaffected.
Last but not least, try to design or shoot your own images. Even if they are not as good as others', at least they are genuine.

Precomputing Client-side Javascript Execution

Suppose you were to build a highly functional single-page client-side application that listens to URL changes in order to navigate around the application.
Suppose then, that when a user (or search engine bot) loads a page by its URL, instead of delivering the static JavaScript file and hitting the API as normal, we'd like to precompute everything server-side and deliver the DOM along with the JS state.
I am wondering if there are existing tools or techniques for persisting such an execution state to the client.
I know that I could execute the script in something like PhantomJS and output the DOM elements, but then event handlers, controllers, and the JS memory state would not be attached properly. I could sniff the user agent and only send the precomputed content to bots, but I am afraid Google would punish us for this, and we would also lose the speed benefit of having sent everything precomputed in the first place.
So you want to compute, server-side, and send to the client the result of requesting a resource at a specific URL? What is your backend written in?
We have an API running on GAE in Java. Our app is a single-page app, and we use the HTML5 history object, so we have to have "real" responses for actual URLs on the front end.
To handle this we use JSP to pre-cache the data in the page as it's loaded from the server and sent to the client.
On the front end we use Backbone, so we modified Backbone.sync to look for a copy of the data it wants locally on the page and, only if it's not there, to request it from the server as an AJAX call.
So, yes, this is pretty much what every site did before Ajax. The trick is writing your app so that it looks for the data locally in the page (or even in localStorage) and only requests it if it's missing. Then make sure your page is "built" on the server end (we actually populate the data into the HTML elements server-side, so the page doesn't require JS on the client end).
If you navigate somewhere else, the data is fetched dynamically and the page doesn't reload.
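A minimal sketch of the Backbone.sync override described above, assuming the server bootstraps data into a global keyed by model URL (window.__BOOTSTRAP__ is a hypothetical name):

// Keep a reference to the original sync.
var originalSync = Backbone.sync;

Backbone.sync = function (method, model, options) {
  // Only reads can be served from the bootstrapped page data.
  if (method === 'read' && window.__BOOTSTRAP__) {
    var url = _.result(model, 'url');
    var cached = window.__BOOTSTRAP__[url];
    if (cached) {
      // Consume the local copy so later navigations fall through
      // to a real AJAX call against the API.
      delete window.__BOOTSTRAP__[url];
      options.success(cached);
      return;
    }
  }
  return originalSync.apply(this, arguments);
};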

Is there any way to do an AJAX call to a file above public_html?

I'm making a script that lets my users open the page, vote for our site, and then get a password to some restricted content on the site. However, I plan on storing the password in a file outside public_html so it cannot be read directly from the source code.
Is there any way to do an AJAX call to a file above public_html? I don't want to AJAX to a file inside public_html that will read the file, it'll just defeat the purpose.
Not directly, no. And, frankly, thank goodness for that (since JS is executed client-side, and the client should never have access to files above public_html on the web server).
You can, however, use Ajax to call a PHP script inside the web root that has access to documents outside the web root. This way you're still keeping the password out of public reach, but still allowing your users to make use of it.
The downside is that the password might make it to the client side in the Ajax call (depending on what your Ajax call does). Basically, if JS can get access to the password, then so can any interested user.
No, you cannot do that.
The web server does not allow it.
Also, it would be highly insecure to expose access to files outside public_html on the server.
No, you can't make an AJAX call to a file that's not served by the web server (I'm assuming the file above public_html doesn't have an Apache Alias or virtual directory set up).
To accomplish what you're trying to do, create a script (PHP?) on your site that AJAX calls, and have this script either (a sketch follows):
Read the password file from wherever it is on the system (assuming the file has the correct permissions)
Embed the password within the script itself, since the script's source code can't be retrieved
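The answers suggest PHP; purely as an illustration, here is the same idea in Node with Express, assuming the password file sits one level above the web root and that /password is a made-up route:

const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();

// Static files live in public_html; the secret sits one level above.
app.use(express.static(path.join(__dirname, 'public_html')));

app.get('/password', (req, res) => {
  // TODO: verify that the user actually voted before revealing anything.
  const secret = fs.readFileSync(
    path.join(__dirname, 'password.txt'), // outside public_html
    'utf8'
  );
  res.type('text/plain').send(secret.trim());
});

app.listen(3000);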
No. An AJAX request is simply a request like any other that loads a resource from your server. The only difference is that it exposes the result to JavaScript on an already-loaded page instead of loading a new page. So if an AJAX request can get this secure file, then anyone can.
You could set up a proxy script in some web application programming language to fetch the file from disk and send it along for you. But then it wouldn't be much different from putting the file right in the public directory.
You may need to rethink your approach here.
Why don't you make an AJAX call to some view function on the server that can access the file you need and have it return the data to the AJAX request?
