I noticed at one point, with the help of Firebug on stackoverflow.com, that when someone accepted my answer, my points suddenly increased without any Ajax request being sent to any method. Amazing. How is it possible?
Please advise, so that I can try to implement this technique in my upcoming projects. Thanks in advance.
Make sure you have looked into the Net panel. There are two ways I can tell:
Web Sockets
iFrame
Please have a look at http://www.html5rocks.com/en/features/connectivity & http://html5demos.com/web-socket
WebSockets, however, will only work in a limited set of browsers.
With an iframe and a simple GET request, no Ajax call is made, yet you will still be able to see the request in Firebug's Net panel. This is what Facebook uses, and it is compatible with all browsers.
It's using WebSockets instead of AJAX XMLHttpRequest in modern browsers. You can find more details about Stack Overflow's implementation on meta.stackoverflow.com.
The main advantage of WebSockets is the server can send an update to the browser the moment you receive an upvote. Other methods, such as XHR and hidden iframes, require the browser to poll the server at regular intervals to get an updated vote count.
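As a rough sketch of the push model (the wss://example.com/updates endpoint and the {"reputation": …} payload are illustrative assumptions, not Stack Overflow's real protocol):

```javascript
// Merge a server-pushed JSON payload into local state.
// Assumed payload shape: {"reputation": 1234}.
function applyUpdate(message, state) {
  const update = JSON.parse(message);
  return { ...state, reputation: update.reputation };
}

// Browser-only wiring: the server pushes the new value the moment a
// vote happens, so no polling timer is needed on the client.
if (typeof document !== "undefined") {
  const socket = new WebSocket("wss://example.com/updates");
  socket.onmessage = (event) => {
    const state = applyUpdate(event.data, { reputation: 0 });
    document.getElementById("rep").textContent = state.reputation;
  };
}
```

The key difference from polling: the client opens one long-lived connection and then simply reacts to messages, instead of asking the server "anything new?" on a timer.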
You could use an image submit button and submit to a small iframe that displays the number.
Otherwise you'd still be messing around with a hidden iframe and form submits (GETs or POSTs) into it.
If you really want a JavaScript-less solution, form submits into a hidden/small iframe are the way to go.
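A sketch of the hidden-iframe technique (the /vote-count endpoint is hypothetical; the same iframe and form could just as well be written as static markup, which is what makes the technique work without JavaScript):

```javascript
// Build the GET URL the form will request; this request is visible in
// Firebug's Net panel even though no XHR is ever made.
function buildPollUrl(base, params) {
  const query = Object.entries(params)
    .map(([k, v]) => encodeURIComponent(k) + "=" + encodeURIComponent(v))
    .join("&");
  return query ? base + "?" + query : base;
}

// Browser-only wiring of the same idea.
if (typeof document !== "undefined") {
  const frame = document.createElement("iframe");
  frame.name = "vote-sink";        // the form targets this frame by name
  frame.style.display = "none";
  document.body.appendChild(frame);

  const form = document.createElement("form");
  form.action = buildPollUrl("/vote-count", { answer: 42 });
  form.method = "get";
  form.target = frame.name;        // the response loads in the hidden frame
  document.body.appendChild(form);
  form.submit();                   // page itself never reloads
}
```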
I have to scrape a website which I've reviewed, and I realised that I don't need to submit any form: I already have the URLs needed to get the data.
I'm using NodeJs and Phantom.
The source of my problem is something related to the session or cookies (I think).
In my web browser I can open this link https://www.infosubvenciones.es/bdnstrans/GE/es/convocatorias and click the blue form button labelled "Procesar consulta"; the table below is then filled. On the Network tab of the dev tools you can see an XHR request with a link similar to https://www.infosubvenciones.es/bdnstrans/busqueda?type=convs&_search=false&nd=1594848133517&rows=50&page=1&sidx=4&sord=desc, and if you open it in a new tab, the data is displayed. But if you open that link in another web browser, you get 0 results.
That's exactly what is happening to me with NodeJs and Phantom, and I don't know how to fix it.
If you want to give Scrapy a try, https://docs.scrapy.org/en/latest/topics/dynamic-content.html explains how to deal with this type of scenario; I would suggest reading it after completing the tutorial.
The page can also be handy if you use another scraping framework, as there is not much there that is Scrapy-specific, and for the Python-specific parts I'm sure there are JavaScript counterparts.
As for Cheerio and Phantom, I’m not familiar with them, but it is most likely doable with them as well.
It’s doable with any web client, it’s just a matter of knowing how to use the tool for this purpose. Most of the work involves using your web browser tools to understand how the website works underneath.
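The usual pattern in a case like this is to load the search page first so the server sets its session cookies, then replay the XHR with those cookies attached. A sketch using recent Node's built-in fetch (the cookie names the site actually sets are unknown, so the code just forwards whatever Set-Cookie values the first response provides):

```javascript
// Keep only the "name=value" part of each Set-Cookie line and join
// them into a single Cookie request header.
function cookieHeaderFrom(setCookieValues) {
  return setCookieValues.map((c) => c.split(";")[0]).join("; ");
}

// Sketch: obtain the session, then replay the XHR with it.
async function fetchResults() {
  const page = await fetch(
    "https://www.infosubvenciones.es/bdnstrans/GE/es/convocatorias"
  );
  const cookies = cookieHeaderFrom(page.headers.getSetCookie());
  const data = await fetch(
    "https://www.infosubvenciones.es/bdnstrans/busqueda?type=convs&_search=false&rows=50&page=1&sidx=4&sord=desc",
    { headers: { cookie: cookies } } // reuse the session from the first request
  );
  return data.json();
}
```

Opening the XHR link in a fresh browser fails for exactly this reason: the fresh browser has no session cookie, so the server returns 0 results.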
I've made a webpage that has the URL-form http://www.example.com/module/content
It's a very dynamic webpage, actually it is a web app.
To make it as responsive as possible, I want to use AJAX instead of normal page requests. This is also enabling me to use JavaScript to add a layer to provide offline capabilities.
My question is only: How should I make the URLs? Should they be http://www.example.com/module/content or http://www.example.com/#!/module/content?
The following is only my thoughts in both directions. You don't need to read it if you already have a clear opinion about this.
I want to use the first version because I want to support the new HTML5 standard. It is easy to use, and the URLs look pretty. But more importantly is that it allows me to do this:
If the user requests a page, it will get a full HTML page back.
If the user then clicks a link, it will insert only the contents into the container div via AJAX.
This will enable users without JavaScript to use my website, since it does not REQUIRE the user to have JavaScript; it will simply use the plain old "click a link, request, get a full HTML page back" approach.
While this is great, the problem is of course Internet Explorer. Only the latest version of IE supports the History API, and to get this working in older versions one needs to use a hashtag. (50% of my users will be using IE, so I need to support it...) So then I have to use /#!/ to get it working.
If one uses both of these URL versions, a problem arises: if an IE user posts such a link somewhere, Google will request the page from the server using the _escaped_fragment_ query parameter (the AJAX-crawling scheme) and will index it as the IE version with the hashtag, while some pages will be indexed without the hashtag.
And as we remember, non-hashtag URLs are better in many respects. So, can this search engine problem be circumvented? Is it possible to tell GoogleBot that if it requests the hashtag version of a page, it should be redirected to the non-hashtag version? That way one could get all the benefits of non-hashtag URLs and still support IE6-IE9.
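For context: Google's AJAX-crawling scheme turns http://www.example.com/#!/module/content into a request for http://www.example.com/?_escaped_fragment_=/module/content, so the redirect being asked about amounts to mapping that query parameter back onto the plain path on the server. A rough sketch, assuming the hash fragments mirror real paths:

```javascript
// Map Google's _escaped_fragment_ form of a hashbang URL back to the
// plain HTML5 path; the server could then issue a 301 to this URL.
function unescapeFragmentUrl(url) {
  const marker = "_escaped_fragment_=";
  const idx = url.indexOf(marker);
  if (idx === -1) return url; // not a crawler-rewritten URL
  const fragment = decodeURIComponent(url.slice(idx + marker.length));
  const base = url.slice(0, url.indexOf("?"));
  return base.replace(/\/$/, "") + fragment;
}
```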
What do you think is the best approach? Maybe you have tried it in practice yourself?
Thanks for your answer!
If you want Google to index your Ajax content then you should use the #!key=value form of URL. That is what Google prefers for Ajax navigation.
If you really prefer the pretty HTML5 url without #! then, yes, you can support both without indexing problems! Just add:
<link rel="canonical" href="preferredurl" />
to the <head> section of each page (for the initial load), to help Google know which version of the URL you would prefer them to index. Read more about canonical URLs here.
In that case the solution is very easy. You use the first URL scheme, and you don't use AJAX enhancements for older IE browsers.
If your precious users don't want to upgrade, it's their problem, but they can't complain about not having these kewl effects and dynamics.
You can throw a "Your browser is severely outdated!" notice for legacy browsers as well.
I would not use /#!/ in the url. First make sure the page works normally, with full page requests (that should be easy). Once that works, you can check for the window.history object and if that is present add AJAX. The AJAX calls can go to the same url and the main difference is the server side handling. The check is simple, if the HTTP_X_REQUESTED_WITH is set then the request is an AJAX request and if it is not set then you're dealing with a standard request.
You don't need to worry about duplicate content, because GoogleBot does not set the HTTP_X_REQUESTED_WITH request header.
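HTTP_X_REQUESTED_WITH is how PHP exposes the X-Requested-With header that jQuery adds to its Ajax calls; the same check in, say, a Node/Express handler (illustrative, not from the answer; renderFragment and renderFullPage are hypothetical stubs) looks like:

```javascript
// jQuery sends "X-Requested-With: XMLHttpRequest" on its AJAX calls;
// Node lowercases incoming header names.
function isAjaxRequest(headers) {
  return (headers["x-requested-with"] || "") === "XMLHttpRequest";
}

// Hypothetical renderers: same content, two wrappings.
const renderFragment = () => "<div>...content...</div>";
const renderFullPage = () => "<html><body><div>...content...</div></body></html>";

// Hypothetical Express-style handler: one URL, two renderings.
function handleContent(req, res) {
  if (isAjaxRequest(req.headers)) {
    res.send(renderFragment());  // just the container div's contents
  } else {
    res.send(renderFullPage());  // the complete HTML document
  }
}
```

Since GoogleBot never sends that header, it always receives the full page, which is why the duplicate-content worry goes away.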
I have a very strange problem with jQuery $.post().
I implemented it in my new web application and got mixed feedback from QA after testing: some say it works perfectly and some say it does not work.
It generates no error and no warning, but I am very upset; I have almost completed the application and it is impossible to use some alternative. Please, if someone has any idea, help me.
Regards
I suggest you write a simple test page that makes only a $.post call and prints a result (like 'test passed'). This way you can be sure that there is no other problem in your application causing the error. Afterwards you can send this link to QA and see whether the XHR requests are really working (or not).
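Such an isolation page might look like this (the /echo endpoint is hypothetical, and the .done/.fail promise style assumes jQuery 1.5 or later):

```javascript
// Classify the outcome so the QA tester sees an unambiguous message.
function describeResult(status) {
  return status >= 200 && status < 300
    ? "test passed"
    : "test failed: HTTP " + status;
}

// Browser-only: one $.post and nothing else, so any failure here is
// about XHR itself, not the rest of the application.
if (typeof $ !== "undefined") {
  $.post("/echo", { ping: 1 })
    .done(function (data, textStatus, xhr) {
      $("#result").text(describeResult(xhr.status));
    })
    .fail(function (xhr) {
      $("#result").text(describeResult(xhr.status));
    });
}
```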
Maybe it is a specific computer setting, such as a different browser or a browser security configuration; check these step by step. You should also make your error logs more detailed. Otherwise you can use HTTP sniffer tools such as Firebug or IE Inspector.
I apologize if this has been asked before. I searched but did not find anything. It is a well-known limitation of AJAX requests (such as jQuery $.get) that they have to be within the same domain for security reasons. And it is a well-known workaround for this problem to use iframes to pull down some arbitrary HTML from another website and then you can inspect the contents of this HTML using javascript which communicates between the iframe and the parent page.
However, this doesn't work on the iPhone. In some tests I have found that iframes in the Safari iPhone browser only show content if it is content from the same site. Otherwise, they show a blank content area.
Is there any way around this? Are there other alternatives to using iframes that would allow me to pull the HTML from a different domain's page into javascript on my page?
Edit:
One answer mentioned JSONP. This doesn't help me because from what I understand JSONP requires support on the server I'm requesting data from, which isn't the case.
That same answer mentioned creating a proxy script on my server and loading data through there. Unfortunately this also doesn't work in my case. The site I'm trying to request data from requires user login. And I don't want my server to have to know the user's credentials. I was hoping to use something client-side so that my app wouldn't have to know the user's credentials at the other site.
I'm prepared to accept that there is no way to accomplish what I want to do on the iPhone. I just wanted to confirm it.
You generally can NOT inspect the contents of an iframe from another domain via JavaScript. The most common answers are to use JSONP or have your original server host a proxy script to retrieve the inner contents for you.
Given your revisions, without modification or support from the secondary site, you are definitely not going to be able to do what you want via the iPhone's browser.
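For completeness, JSONP works by injecting a script tag whose URL names a callback, which is exactly why it needs server cooperation: the server must wrap its response in a call to that callback. A sketch (the endpoint and callback name are hypothetical):

```javascript
// Build a JSONP request URL; the server must reply with
// myCallback({...}) for this to work, hence "server support required".
function buildJsonpUrl(base, callbackName) {
  const sep = base.includes("?") ? "&" : "?";
  return base + sep + "callback=" + encodeURIComponent(callbackName);
}

// Browser-only wiring.
if (typeof document !== "undefined") {
  // The global function the injected script will call.
  window.myCallback = function (data) {
    console.log("got cross-domain data", data);
  };
  const script = document.createElement("script");
  script.src = buildJsonpUrl("https://other.example.com/data", "myCallback");
  document.body.appendChild(script); // script tags are not same-origin restricted
}
```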
"In some tests I have found that iframes in the Safari iPhone browser only show content if it is content from the same site"
I found the same thing. Is this documented somewhere? Is there a workaround? This sounds like broken web standards to me, and I am wondering if there is a solution.
I am writing a Javascript based upload progress meter. I want to use the standard multipart submit method (rather than submitting the file in an iframe). During the submit, I send ajax requests that return the % complete of the upload and then update the progress meter accordingly.
This all works smoothly in Firefox and IE. However, Safari seems to prevent the completion of Ajax requests after the main form has been submitted. In the debugger, I can see the request headers, but it appears as though the response is never received.
Anyone aware of this, or how to get around it?
Yes, this is how Safari and any browser based on WebKit (i.e. Google Chrome) behave. I recently ran into this on a file upload progress meter also. I ended up using the same technique seen at http://drogomir.com/blog/2008/6/30/upload-progress-script-with-safari-support to get the ajax to work. In my case, I didn't want to change the look of my application to the one Drogomir uses, but the technique itself worked. Essentially, the solution is to create a hidden iframe only in Safari that loads jQuery and your AJAX script. Then, the top frame calls a function in that frame on form submit. All other browsers still work the same as before.
This is a WebKit bug. See https://bugs.webkit.org/show_bug.cgi?id=23933
Are you using an iframe to submit your form to? I'm guessing that once the form is submitted, the page enters a state where no more modifications to the DOM can be made.
Check a tutorial such as this one for more information.
This actually sounds like correct behaviour to me, and I'm surprised that Firefox and IE behave otherwise.
It is akin to you attempting to leave a page while the page is still interacting with you: sounds naughty!
I can see why this would be of benefit, but I would hope it is only the case if you are performing a POST to the URI you are currently accessing, or at worst to the same domain.