There's a problem with JavaScript and requests (in Python): requests doesn't execute JavaScript when it fetches a webpage.
The website I'm working with (https://access.paylocity.com/) requires JavaScript, and without it the page content is replaced with a single line of text at the top saying, "Please enable JavaScript to view the page content."
(I could be wrong here, but) I think one solution is Selenium, though that would replace requests. I'm fine with that as long as there's no other way of fixing/bypassing this JavaScript detection.
(For those wondering, this Python project of mine is supposed to automatically fetch the events on the Paylocity calendar and port them to another calendar that I use every day. It's also just intended for my own use.)
Edit: Here is the code I have, if that helps: https://pastecode.io/s/GXTUO1BgtR (I didn't know where to paste my code, so I went with that site; please comment if I should use something else.)
Since the website you're working with loads its content dynamically with JavaScript as far as I can tell, I think you have no choice other than to use Selenium. I had a project of my own a couple of weeks ago and ran into a similar problem, which I also solved with Selenium. But I'm no expert; I'm just sharing my thoughts on this.
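If it helps, here is roughly what the Selenium route looks like. This is only a sketch: the CSS selector is a placeholder (I don't know what the Paylocity calendar markup looks like), and you would still need to script the login step before any calendar data is visible.

```python
# Minimal sketch, assuming Chrome + Selenium 4. The "#calendar" selector is a
# made-up placeholder; inspect the real page and wait for an element that only
# exists once the JavaScript has rendered the calendar. Login is not handled.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless")        # no visible browser window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://access.paylocity.com/")
    # Wait until JavaScript has rendered real content instead of the
    # "Please enable JavaScript" placeholder.
    WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "#calendar"))
    )
    html = driver.page_source             # fully rendered HTML, ready to parse
finally:
    driver.quit()
```

From there you can hand driver.page_source to whatever parsing you were already doing on the requests response.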
I'm trying to recreate the following "effect" for my portfolio but I've been out of web development for a while and I can't find my way around it. Hopefully someone here can give me a hint.
I'm trying to achieve a kind of smooth transition between pages, as you can see for example on these websites:
www.say.studio
cedricklachot.com
www.durimel.io
When you switch from one page to another it feels like everything stays on the same page, because the transition is so smooth and the navigation elements stay in place, as if the whole website were a single HTML file with the rest of the content loaded by JavaScript. But judging from the URLs there are different pages, so it must be switching between different HTML files. That "smooth switching", so to say, is what I can't figure out how to replicate.
I have tried onload animations like fade-in effects, but it's still very clear that the browser is switching between different HTML pages, so it definitely doesn't have the smoothness I see in the examples I provided.
I hope I explained myself well, as I'm not a native English speaker :) Thanks.
Welcome to Stack Overflow. The React framework is good for this: the page re-renders with updated components when the user navigates, rather than reloading the whole document. You can also do this with PHP, which is a more primitive approach in my opinion.
Here's a cool tool: I use Wappalyzer for Firefox, which lets you see what technologies a page is using. When I examine your links:
Well, this one doesn't work: https://www.say.studio/cedricklachot.com
This one, https://www.durimel.io/nel, uses PHP on an Apache server.
So view the source code; then you're going to have to learn PHP (or copy and edit it).
The server is going to be tricky if you don't have experience. I'd recommend using AWS EC2 and doing an Ubuntu build with Apache (or Nginx).
Let us know when it's finished :)
I am working on parsing an HTML page.
I tried spynner, Selenium, and mechanize, but wasn't able to solve the JavaScript issue in this case.
Can anyone let me know how I can work around this issue to fetch the data on the next page?
When I worked with Selenium, on this URL you first have to get the data into the other select box and then proceed, but with Selenium I only get the same URL after clicking through to the next page.
The same happens with spynner.
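With Selenium, "only getting the same URL after clicking next" is very often a timing problem: the script reads the page before the JavaScript has finished updating it, and explicit waits usually fix that. Here is a rough sketch; the URL, the select box names, and the "Next" locator are all invented, so swap in the real ones from your page.

```python
# Rough sketch only: the URL and element locators are made up for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select, WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
wait = WebDriverWait(driver, 15)
try:
    driver.get("http://example.com/search")          # placeholder URL

    # Pick a value in the first select box, then wait until the dependent
    # select box has been populated by the page's JavaScript.
    Select(driver.find_element(By.NAME, "category")).select_by_index(1)
    wait.until(lambda d: len(Select(
        d.find_element(By.NAME, "subcategory")).options) > 1)

    # Click "Next" and wait for the old results element to go stale, which
    # means the page (or the AJAX-driven part of it) really has changed.
    results = driver.find_element(By.ID, "results")
    driver.find_element(By.LINK_TEXT, "Next").click()
    wait.until(EC.staleness_of(results))

    html = driver.page_source                        # now parse the new data
finally:
    driver.quit()
```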
From what I can tell, mechanize doesn't support JavaScript. So if you're doing any automation with JavaScript-heavy sites, mechanize is probably not the way to go. Rather, you probably need Python to script a fully functional web browser. You can do this with Mozilla via PyXPCOM, with Ruby and WATIR, or with spynner. Of these options, I'd probably try spynner first, as it is well integrated with Python.
Good luck with your project, and happy coding!
There is a website that I visit often... let's call it www.example.com. I am able to interact with parts of this website. The interactions send XMLHttpRequests and get responses back through JavaScript (jQuery, I believe).
I'm not sure what technology will let me achieve what I want to do, or where to start. Basically, I want to add options/shortcuts that the site does not provide. I thought about using a macro, but macro recording software is just a pain to work with.
I inspected (using Google Chrome's Developer Tools) the XMLHttpRequests being sent back and forth and noticed that they are simple JSON messages. I figured the best way to add enhancements to the site, without waiting for the actual owners to do so, would be to simulate the website sending/receiving these XMLHttpRequests/responses and make additional adjustments to the DOM to provide extra shortcuts.
I don't want to interfere with the original site's functionality, though: if I send a request and receive a response, I want both the original script and my script to process the response. So here is where I'm stuck: I'm not sure whether to go down the path of creating a C# application, a Google Chrome extension (I use Google Chrome), or something else altogether. Any pointers on what dev tools/languages will give me the ability to do what I want would be great. Thanks!
Chrome has built-in support for user scripts. You can use these to modify the page as you see fit and also to make requests. Without more details regarding what exactly you want to do with these AJAX requests, it's hard to advise further.
I'm not 100% sure what your question is, but as I understand it, you want to be able to make changes to a certain website. If these changes can be done with JS, I would recommend Greasemonkey for Firefox. It basically lets you run a custom script when you visit a certain webpage/domain, and you can be as specific as you want about which pages use the script. Once your script loads jQuery, it is really easy to add any functionality.
https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/
You can find pre-written scripts for tons of sites here:
http://userscripts.org/
I'm thinking about creating a webpage for reading comics, and I'm trying to brainstorm some ways to display them on the page.
If I wanted to get my hands dirty and create everything myself, I think I could do it with HTML5, CSS3, and JavaScript/jQuery: just make some kind of page buttons with an image tag and get into the more detailed stuff as it comes up (I don't know how I would do zooming or multiple pages).
But what I really want to know is whether there is already some way to do this. I've looked around for a bit and can't seem to find any sort of plugin that would read a CBZ file or display a set of images with 'e-reader' type tools in mind. Does anyone know of anything?
Thanks
I used an online reader for a long time, so a while back I started an experiment to build one myself: netcomix.
It's open source so you can see if you find anything appealing in what I did. I figured I'd do all the real UI work client side with HTML, CSS, and JavaScript and the server was strictly responsible for acting as a service (for example, to supply a list of comics or a list of all the pages in a particular issue) and serving up the individual JPG/PNG/GIF files. That compartmentalized things nicely and I was very pleased with how jQuery BBQ gave me a history that I could back through even though I stayed on one page the whole time.
Now if I were to do the same experiment again, I'd use Backbone.js to give some structure to the client side and obviously it needs a lot of love because the server side really does nothing at the moment. Early versions were strictly hard coded although I started putting in some simple SQL stuff in there in the latest version. It's nothing more than an experiment though and should be treated as such. It's there for ideas and little else. If you find it interesting and want some more ideas contact me and I'll be happy to let you know all my wacky ideas for such a program.
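For what it's worth, a .cbz file is just a ZIP archive of page images, so the "service" half of that split can stay tiny. Here is a rough Python/Flask sketch of what such a server side could look like (this is not how netcomix is implemented; the route names and the comics directory are invented, and there is no error handling):

```python
# Illustration only: a .cbz is just a ZIP of images, so a small service can
# list and serve pages while the browser-side reader does all the UI work.
# The routes and the "comics" directory are made up for this sketch.
import io
import mimetypes
import zipfile

from flask import Flask, jsonify, send_file

app = Flask(__name__)
COMICS_DIR = "comics"   # assumption: issues live here as <name>.cbz

@app.route("/issues/<name>/pages")
def list_pages(name):
    # Return page filenames in reading order for the client-side reader.
    with zipfile.ZipFile(f"{COMICS_DIR}/{name}.cbz") as cbz:
        pages = sorted(p for p in cbz.namelist()
                       if p.lower().endswith((".jpg", ".jpeg", ".png", ".gif")))
    return jsonify(pages)

@app.route("/issues/<name>/pages/<path:page>")
def get_page(name, page):
    # Stream a single page image out of the archive.
    with zipfile.ZipFile(f"{COMICS_DIR}/{name}.cbz") as cbz:
        data = cbz.read(page)
    mimetype = mimetypes.guess_type(page)[0] or "application/octet-stream"
    return send_file(io.BytesIO(data), mimetype=mimetype)

if __name__ == "__main__":
    app.run(debug=True)
```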
I know this is an old question, but web technologies have gotten better in the last few years. There are several comic book readers that work in the browser using pure HTML and JavaScript. I wrote one: http://comic-book-reader.com.
If you want to see a very simple example of how to read CBR and CBZ files in the browser, check out http://workhorsy.github.io/uncompress.js/examples/simple/index.html, which uses the JavaScript library https://github.com/workhorsy/uncompress.js.
I'm connecting to a website and logging in.
The website redirects me to new pages and Mechanize handles all the cookie and redirection work, but I can't get the last page. I used Firebug, went through the same steps manually, and saw that there are two more pages I have to pass through with Mechanize.
I took a quick look at those pages and saw some JavaScript and HTML code, but I couldn't understand it because it doesn't look like normal page code. What are those pages for? How can they redirect to other pages? What should I do to get past them?
If you need to handle pages with JavaScript, try WATIR or Selenium; those drive a real web browser and can thus handle any JavaScript. WATIR Classic requires either IE or Firefox with a certain extension installed, and you will see the pages flash on the screen as it works.
Your other option would be understanding what the Javascript on the offending page does and bypassing it manually, but that seems onerous.
At present, Mechanize doesn't handle JavaScript. There's talk of eventually merging Johnson's capabilities into Mechanize, but until that happens, you have two options:
Figure out the JavaScript well enough to understand how to traverse those pages.
Automate an actual browser that does understand JavaScript using Watir.
What are those pages for? How can they redirect to other pages? What should I do to pass these?
Sometimes work is done on those pages. Sometimes the JavaScript is there to prevent automated access like what you're trying to do :). A lot of websites have unnecessary checks to make sure you have a "good" browser, so make sure that your user_agent is set to something common, like IE. Sometimes setting the user_agent to look like an old browser will let you get past without JavaScript.
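For example, here is that user-agent trick with Python's mechanize (if you're on the Ruby Mechanize library, it has a similar user_agent_alias setting). The browser string and URL below are just examples:

```python
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)   # many automation scripts skip the robots.txt check
# Pretend to be an ordinary (older) desktop browser; the exact string is only
# an example -- pick whatever gets you the simpler, JS-free version of the site.
br.addheaders = [("User-Agent",
                  "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)")]
response = br.open("http://example.com/login")   # placeholder URL
print(response.read()[:200])                     # peek at what the server sent
```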
Website automation is fun because you have to outsmart the website and its software developers, using multiple strategies. Like the others said, Watir is the best tool for getting past JavaScript at the moment.