Get window URL in AMP - javascript

I have been trying to use window.location.search inside my custom JavaScript to get the page URL on an AMP page, but I am getting an empty value.
Is there any way/function by which I can get the page URL?

To maintain AMP's performance guarantees, custom JS code runs in a Web Worker, and certain restrictions apply. You can't act on elements that are outside of the <amp-script> component, because the worker only sees a virtual DOM. See the documentation for the <amp-script> component.
Here is a very similar question. Also check the comment posted by @Weston Ruter; it might be a solution for your problem.
Alternatively, you can try using PHP on the server instead.
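One hedged workaround, not taken from the linked docs: have the server render the current page URL into a data attribute on an element inside the <amp-script> subtree, then read it from the worker. The element id and attribute name below are made up for this sketch, and it assumes attribute reads work inside the component as they do in the regular DOM:
// Assumes the server rendered, inside <amp-script>:
//   <div id="page-url" data-url="https://example.com/page?q=1"></div>
var el = document.getElementById('page-url');
var pageUrl = el ? el.getAttribute('data-url') : '';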

Related

Where should I put URLs in an AJAX-based web project?

I am doing some web development at the moment. I have a set of views (different versions of the site the user will be able to see), and many of them allow some JS/Ajax-based interaction. That is just the context for this question:
Where should I put the request URLs for the various Ajax requests?
I know this question seems a little odd, so let me explain a bit. I assume jQuery, but the question is not strictly tied to it. I will give very minimal snippets to illustrate the idea; they are of course not correct/finished/good code.
Typically such a site has not just one single type of request but a whole bunch of them. Think of a site where the user sees personal data like name, mail, address, phone, etc. On clicking one such entry, a minimal form should be displayed to allow modifying it. Of course you need minor variations per entry (e.g. to distinguish between changing the name and changing the phone number).
My first approach was to write Ajax code for each and every possible entry separately in a JS file: each entry gets its own HTML id, and I replace the content of the element with that id with the new content. Writing code for each id explicitly in JS causes quite some redundancy (although a well-designed set of helper functions helps here):
$("#name").click(function(){ /* replace #name, hardcode url */});
$("#phone").click(function(){ /* replace #phone, hardcode url */});
Another way is to put an <a> tag with its href set to the URL of the Ajax request. The developer then defines some classes that follow a fixed scheme. The JS code gets smaller, since only a single event handler must be registered, but I need to follow the convention throughout the site.
<div class='foo'>... <a href="ajax.php?first" class="ajax"></a></div>
<div class='foo'>... <a href="ajax.php?second" class="ajax"></a></div>
and the simplified JS:
$(".foo a.ajax").click(function(ev){ /* do something and use source of ev to fetch the url */ });
This second method could be done even worse by putting the URL in some arbitrary HTML attribute and hiding it from the user (scary).
Ideally one should write the page such that all Ajax-enabled interaction is also doable with JS disabled. From that angle, putting the Ajax URLs in <a> tags is not good. However, hardcoding them is also not ideal.
So did I miss a useful/typical way of doing this? Is there even some consensus on where such data is best located?
If your website is big enough, you should separate your URLs by module (banking, finance, user, etc.). But if you do not have that many URLs, you can store all of them in a single JavaScript file.
You should store the BASE URL in a single JavaScript file that everything else imports (in case your domain changes, or when switching between development and production).
// base_url.js
var BASE_URL_PROD = "www......com"; // production server URL
var BASE_URL_DEV = "localhost:3000"; // local server URL
var BASE_URL = BASE_URL_DEV; // change this when switching between dev and prod

// urls.js
var FETCH_USER = BASE_URL + "/user/fetch";
var SAVE_USER = BASE_URL + "/user/save";

// in some JavaScript file
$("#clickMe").click(function () {
  $.ajax({ url: FETCH_USER /* , ... */ });
});
The question here is: do you want to offer a way to access the information if JavaScript is turned off or not loaded yet?
You already answered it yourself: if JavaScript is disabled or not loaded yet, the user will navigate directly to the given URL.
If you want to offer a non-JavaScript way, change your controller to check whether the request is an Ajax request; otherwise just use the JavaScript way, as Abdullah already described.
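For reference, a hedged client-side sketch of that detection idea: jQuery's Ajax helpers send an X-Requested-With: XMLHttpRequest header, which the controller can branch on to return either an HTML fragment or a full page (the URL and element id below are made up):
// jQuery adds the X-Requested-With header automatically to $.get/$.ajax,
// so the controller can serve a fragment to Ajax and a full page otherwise.
$.get('user/profile', function (fragment) {
  $('#profile').html(fragment);
});
// A non-JS user following <a href="user/profile"> gets the full page instead.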

PHP HttpRequest to create a web page - how to handle long response times?

I am currently using JavaScript and XMLHttpRequest on a static HTML page to create a view of a record in Zotero. This works nicely except for one thing: the page's HTML title.
I can of course also change the <title>...</title> tag, but if someone wants to post the view to, for example, Facebook, the static title of the web page will be shown there.
I can't think of any way to fix this with just a static page and JavaScript. I believe I need a page dynamically created by a server that does something similar to XMLHttpRequest.
For PHP there is HttpRequest. Now to the problem: in the JavaScript version I can use asynchronous calls, but with PHP I think I need synchronous calls. Is that something to worry about?
Is there perhaps some other way to handle this that I am not aware of?
UPDATE: It looks like those trying to answer are not at all familiar with Zotero. I should have been clearer. Zotero is a reference database located at http://zotero.org/. It has an API that can be used through XMLHttpRequest (which is what I said above).
I cannot use that in the scenario I described above, so I want to call the Zotero server from my server instead (through PHP or something else).
(If you are not familiar with the concepts, it might be hard to understand and answer the question, of course.)
UPDATE 2: For those interested in how Facebook scrapes a URL you post there, please test it here: https://developers.facebook.com/tools/debug
As you can see by testing there, no JavaScript is run.
Sorry, I'm not sure I understand what you are trying to ask. Are you just wanting to change the page's title?
Why not use JavaScript?
document.title = newTitle;
Facebook expects the title (or the Open Graph og:title tag) to be present when it fetches the page. It won't execute any JavaScript for you to fill in the blanks.
A cool workaround would be to detect the Facebook scraper with PHP by parsing the User Agent string, and serving a version of the page with the information already filled in by PHP instead of JavaScript.
As far as I know, the Facebook scraper uses this header for User Agent: "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)"
You can check to see if part of that string is present in the header and load the page accordingly.
if (strpos($_SERVER['HTTP_USER_AGENT'], 'facebookexternalhit') !== false) {
    // synchronously load the title and Open Graph tags here
} else {
    // load the page normally
}

Domain-wide localStorage fall-back for IE6 & IE7?

In our current project, we're using HTML5 localStorage with a fall-back to globalStorage for Firefox and userData behaviors for IE6/IE7.
The fall-back is provided through a JS script called jStorage.
This worked OK until we started testing in IE6/IE7. Even though it "works", it turns out that there's a restriction in the userData behavior which locks it down so storage can only be set and read on the same URL, or as MSDN puts it: "For security reasons, a UserData store is available only in the same directory and with the same protocol used to persist the store."
Hence if I set a value on one page and then navigate to another, it won't work, even though I'm on the same site.
For us that pretty much renders it unusable as a fall-back for localStorage, which is scoped per domain.
Has anyone come across this problem before and found a decent solution?
Any ideas or thoughts will be appreciated.
Remy Sharp's polyfill will do that.
https://gist.github.com/remy/350433
If the problem is getting data across two pages on different paths but the same domain, you could try one of these (note: I didn't try them; I'm just trying to be creative):
Use URL rewriting (with an .htaccess file) so you can access /path1/page1 and /path2/page2 through a single /path-rewritten/page1 and /path-rewritten/page2.
If you are on /path2/page2, you could load an invisible iframe pointing at a page in /path1, read the stored data there into some data structure, and pass it to the parent document. Since page1 and page2 are, by hypothesis, on the same domain, you can make the page and the iframe communicate with each other via JavaScript.
By the way, good question.
A theoretical solution would be:
dynamically create a hidden "proxy" iframe accessing a static document retrieved from a location of your convenience, say http://domain/proxy.html
proxy access to the DOM element in the iframe to persist/fetch data
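A rough sketch of that idea, with made-up file names and helper functions (setItem/getItem would be defined inside proxy.html on top of the userData store); old-IE event wiring is simplified here:
// The iframe always loads the same URL, so the userData store is always
// read and written from a single path, sidestepping the per-directory rule.
var iframe = document.createElement('iframe');
iframe.style.display = 'none';
iframe.onload = function () {
  var proxy = iframe.contentWindow; // same-origin: direct JS access works
  proxy.setItem('key', 'value');    // hypothetical helpers in proxy.html
  var value = proxy.getItem('key');
};
iframe.src = '/proxy.html';         // hypothetical fixed location
document.body.appendChild(iframe);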

Get objects from JavaScript using a URL without loading the document

I have a URL which links to an HTML document, and I want to get objects of that document without loading the URL in my browser. For instance, I have a URL such as:
http://www.example.com/
How can I get one object (e.g., via getElementsByTagName) of this document?
You can't. At best, you can omit extraneous files linked from within the document, like JavaScript or CSS, but you can't just grab one part of the document.
Once you have the document, you can grab out of it a section, but you can't just grab a section without getting the whole thing first.
It's the equivalent of saying that you want the 2nd paragraph of an essay. Without the essay, you don't know what the 2nd paragraph is, where it starts or ends.
Is this document in the same domain as the security domain your JavaScript is running in, or a different one?
If it's in the same domain, you have a couple options to explore.
You could load the page using an XMLHttpRequest or jQuery.get and parse the data you're looking for out of the HTML with an ugly regular expression.
Or, if you're feeling really clever, you can load the target document into a jsdom object, jQuerify it, and then use the resulting jQuery object to access the data you're looking for with a simple selector.
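A rough Node-side sketch of the jsdom route, using the modern jsdom API rather than the jQuerify helper mentioned above (assumes: npm install jsdom):
// Load the remote document into a detached DOM and query it server-side,
// without any browser involved.
const { JSDOM } = require('jsdom');
JSDOM.fromURL('http://www.example.com/').then(function (dom) {
  const headings = dom.window.document.getElementsByTagName('h1');
  console.log(headings.length ? headings[0].textContent : '(none)');
});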
If the URL is on the same domain you can use .load(), for example:
$("some_element").load("url element_to_get");
(The selector after the space tells jQuery which fragment of the fetched page to insert.)
See my example: http://jsfiddle.net/ajthomascouk/4BtLv/
In this example it gets the H1 from this page: http://jsfiddle.net/ajthomascouk/xJdFe
It's hard to show using jsFiddle, but I hope you get the gist of it.
Read more about .load() here - http://api.jquery.com/load/
Using Ajax calls, I guess.
This is long to explain if you have never used XHR, so here's a link: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest
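A minimal same-origin sketch of that route (the URL is a placeholder; DOMParser's 'text/html' mode assumes a reasonably modern browser):
// Fetch the page, parse it into a detached document, and query it without
// ever navigating the browser to that URL.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some-page.html');
xhr.onload = function () {
  var doc = new DOMParser().parseFromString(xhr.responseText, 'text/html');
  var headings = doc.getElementsByTagName('h1');
  if (headings.length) console.log(headings[0].textContent);
};
xhr.send();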
Another option is to construct an iframe using:
var iframe = document.createElement('iframe');
iframe.src = 'http://...';
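To actually read from the frame once it has loaded (same-origin pages only), a hedged continuation of that snippet:
iframe.onload = function () {
  // contentDocument gives access to the loaded page's DOM.
  var doc = iframe.contentDocument || iframe.contentWindow.document;
  var elements = doc.getElementsByTagName('p');
};
document.body.appendChild(iframe);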

How to capture a complete webpage using JavaScript

I inject JavaScript code into the page the user is currently viewing, and on the user's command this script makes DOM changes. At the end of this interaction the user might want to save the page so that they can view/edit it later. I could remember the DOM changes the user made, but if the original page (at its source) changes, I will not be able to restore the page for the user. That is why I want to send the changed page to my server. I should be able to restore it completely, and the page should behave exactly the way it did (including scripts and media).
Additionally, I cannot store the media of the user's page at my end (a resource limitation), so I guess I have to parse and rewrite all media addresses/references/links to absolute URLs/URIs in the various sources (HTML/CSS/JavaScript).
Now the question is: is there a library/framework/jQuery extension that can help me achieve this objective?
If not, what is the right/professional way to do it?
Since you are using jQuery you could try $("html").html(); just make sure to add the appropriate <html> tags when you output it again.
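As a hedged companion step, not part of the original answer: before serializing, you can normalize relative media URLs to absolute ones so the saved copy still resolves them. Reading the src/href property returns the resolved absolute URL, and assigning it back writes that absolute value into the attribute:
// Rewrites relative references in-place; url(...) values inside CSS would
// still need separate handling.
$('img[src], script[src], link[href]').each(function () {
  if (this.src) this.src = this.src;
  if (this.href) this.href = this.href;
});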
$('body').html()
$('head').html()
$('html').html()
Download Firebug and try it in the console window on this page. I am getting what looks like the correct data back.
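For completeness, a hedged sketch of grabbing the whole serialized page, including the <html> wrapper (the doctype is not part of the element tree, so the HTML5 doctype is assumed here, and the save endpoint is made up):
// Serializes the current DOM state, not the original source.
var pageHtml = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
$.post('/save-page', { html: pageHtml }); // hypothetical server endpoint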
Have I got it right that you are building some kind of CMS that lets the user edit entire pages (not just separate content blocks) in contenteditable mode?
I would definitely advise looking at a solution like CKEditor/TinyMCE, because doing it all yourself will be a terrible pain.
The answer from @Sydenam should work fine to save the whole HTML page.
Meanwhile, and this is IMPORTANT, I would recommend considering a potential SECURITY ISSUE here: the user can inject whatever they want into the DOM and have you save it, for example nasty JavaScript functions that send confidential information to a remote server.
So, from my perspective, a professional way of doing this would be to dedicate only a PART of the DOM to that usage, say a <div id='editable_div'> that you can load using $('#editable_div').load('your_url', parameters, ...) and save afterwards using another Ajax call.
When saving it, you can parse this chunk of HTML and make sure nothing nasty is inside, for example by stripping <script> tags.
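As a rough illustration only (savedHtml stands for the captured fragment; a regexp is not a reliable sanitizer, so a real implementation should parse the HTML server-side with a proper parser or tag whitelist):
// Strips <script>...</script> blocks from the captured fragment.
var cleaned = savedHtml.replace(/<script[\s\S]*?<\/script\s*>/gi, '');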
Hope it helps,
Regards,
