First I recorded a script against my "Rich" Internet Application (built with Wicket and JavaScript), and the replay did not go well.
However, recording in URL mode solved a lot of these issues.
Why is that?
In general I assume that my script recorded in URL mode did capture things like:
web_url("bootstrap-collapse-ver-12312478469.js"
.
.
.
"RecContentType=text/JavaScript",
and that these calls (i.e. the JavaScript manipulating the web page) made the page recognizable to the replay because the JavaScripts were actually executed during replay. In HTML mode these JavaScripts were not executed (I don't see them in my script after recording), and hence the page did not have the proper state for the replay to recognize it?
Is my assumption correct?
The only client types which execute JavaScript are:
GUI virtual users (QTP operating against full browsers)
Citrix/RDP operating against full browsers
TruClient
What you are seeing in your URL-mode script is the output of the JavaScript that the browser executed at record time, captured as a set of explicit requests; replay simply re-issues those requests and never executes the JavaScript itself.
Related
So I'm playing an online game on my laptop. It is a pretty simple HTML5 game, but the optimization is nonexistent: with just a few circles on my screen, the game is using 85% of my CPU.
Logically, I started profiling the game and trying to figure out if I could optimize it for my laptop. What I'm wondering is how I could run the game with a tweaked version of the JS script it is running.
If I try to save the page and run it from my laptop, it of course has CORS issues, so I cannot access the server.
Can I somehow change the script executing in my browser while still staying "inside the webpage", so that I can still make normal XHR requests to the server?
Also, although this is not in the title of the question, can I somehow proxy the XHR requests through my own script without breaking the CORS rules?
Why is it so different if I run the same thing from my browser versus from a saved HTML file on my desktop? I have the same IP and am doing the same thing, but from the URL it "feels like" I'm running it from somewhere else. Can I somehow imitate that I'm running "from the webpage" while actually running a modified saved HTML file?
You could proxy, provided there isn't a cross-domain protection mechanism or some sort of login involved (which complicates things).
What you could very well do is use a browser extension which allows you to add CSS, HTML and JavaScript.
I'm not entirely savvy on extensions, so I'm not sure you can modify existing code, but I'm guessing that if you can add arbitrary JS code you may very well be able to replace the script tag containing the game with a similar, personally modified script based on it. It's worth a try; there's a rough sketch after the link below.
Link to getting started with chrome extensions
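To make that idea concrete, here is a rough sketch of what such a content script could look like. Everything in it is an assumption: the file names, the selector, and the manifest details (the script would need "run_at": "document_start", and the modified game file would have to be listed under web_accessible_resources):

// swap.js -- hypothetical content script that races the HTML parser:
// remove the original game <script> before it executes and inject a
// locally modified copy instead.
const observer = new MutationObserver(() => {
  // "game.js" is a made-up name; adjust the selector to the real script.
  const original = document.querySelector('script[src*="game.js"]');
  if (original) {
    observer.disconnect();
    original.remove(); // only helps if we get here before it runs
    const replacement = document.createElement('script');
    replacement.src = chrome.runtime.getURL('modified-game.js'); // your copy
    document.documentElement.appendChild(replacement);
  }
});
observer.observe(document.documentElement, { childList: true, subtree: true });

The nice property of this approach is that the replacement script still runs in the page's own origin, so its XHR requests to the game server are not cross-origin at all.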
Update:
If you're set on doing it, proxying amounts to requesting a URL with your application and doing something with the page (HTML) before passing it on, instead of serving the original source. I assume you want to change the page and serve it to your browser.
With this in mind you will need the following; I don't know C#, so you'll have to google around for libraries and utilities:
a way to request URLs (see link at bottom)
a way to modify the page, you need a DOM crawler
a way to start said process and serve it to your browser by hitting your own URL, meaning you need some sort of web server
I found the following question specifically on proxying with C#
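The linked question is about C#, but the idea is language-agnostic. As a minimal sketch in Node.js (the target URL and the script tag being replaced are hypothetical):

// proxy.js -- fetch the game page, swap its script tag, serve the result.
const http = require('http');
const https = require('https');

const TARGET = 'https://example-game-site.com/game.html'; // hypothetical

http.createServer((req, res) => {
  https.get(TARGET, (upstream) => {
    upstream.setEncoding('utf8');
    let html = '';
    upstream.on('data', (chunk) => { html += chunk; });
    upstream.on('end', () => {
      // Replace the original game script with a locally modified copy.
      const patched = html.replace(
        /<script src="game\.js"><\/script>/, // hypothetical tag
        '<script src="/modified-game.js"></script>'
      );
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(patched);
    });
  });
}).listen(8080, () => console.log('Proxy running on http://localhost:8080'));

Keep in mind that the proxied page now lives on localhost:8080, so its XHR calls will hit your proxy rather than the game server; you would have to forward those API routes upstream as well, which is exactly the cross-domain complication mentioned above.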
I'm building a feature which requires web publishers to put a JS code snippet in the <head> section of a page in order for it to work. This code includes a call to an external (and dynamically generated) JS file on a remote server. The file cannot be cached, so putting it on a CDN isn't an option.
What I'm worried about is that if there is ever a problem with the remote server that makes the remote file unreachable, it could take down the page in which the code is included (potentially the entire site, as the code is supposed to be included site-wide).
Is there a way to make sure that no matter what, the availability of the remote file will never affect the availability of the page in which the code is included?
-edit-
The resources in the remote file need to be available before the HTML of the page starts to render. Loading the code asynchronously isn't an option.
You could specify async=true, which will not "block" your page from loading its other resources. Without it, parsing halts at that script, though the exact behavior may vary depending on how each browser handles stalled script elements.
Note: support for the async attribute varies. Modern browsers circa 2014 will understand it, but if you need to support legacy browsers you may need to look for an alternative solution (which you can see at the link referenced).
More details at https://css-tricks.com/thinking-async/
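As a small illustration of the non-blocking approach (the snippet URL is made up), you can also inject the script dynamically, which behaves as async by default and lets you react if the remote server is down:

// Load the remote snippet without blocking the rest of the page.
var s = document.createElement('script');
s.src = 'https://remote.example.com/snippet.js'; // hypothetical URL
s.async = true; // explicit; dynamically injected scripts are async anyway
s.onerror = function () {
  console.warn('snippet.js unreachable; page continues without it');
};
document.head.appendChild(s);

Note that this does not satisfy the "-edit-" constraint above: if the resources really must be available before rendering starts, any non-blocking approach trades that guarantee for availability.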
I was wondering if there's any way to attach a JS lib to an external webpage after the page has loaded?
To provide a simple example, could I load www.google.com into IE and somehow display the webpage with a green scroll bar?
I would like this to happen automatically on every page load instead of having to execute the process manually each time.
I am assuming that you are talking from a web developer's point of view.
I don't think it is possible without any hacks.
This would also be a huge security risk, because loading JavaScript code on an external website means that the code can potentially do anything on behalf of the user. It can capture keystrokes, take screenshots, note down passwords and do a lot of other illegal stuff.
So instead of this, you can create a browser extension (add-on), which has to be installed with the user's permission (and their knowledge) and can run any code on any page (if the user allows it).
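As a small illustration of how little code such an extension needs, here is a sketch of a content script that would produce the question's green-scrollbar example. The question mentions IE, but this assumes Chrome, where content scripts are simplest and where the ::-webkit-scrollbar pseudo-elements are supported:

// content.js -- declared in the extension manifest to run on matching
// pages; injects a stylesheet that turns the scrollbar green.
var style = document.createElement('style');
style.textContent =
  '::-webkit-scrollbar { width: 12px; }' +
  '::-webkit-scrollbar-thumb { background: green; }';
document.head.appendChild(style);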
I'm working on a headless browser based on WebKit (using C++/Qt4) with JavaScript support. The main purpose of this is being able to generate an HTML snapshot of websites heavily based on JavaScript (see Backbone.js or any other JavaScript MVC).
I'm aware that there isn't any way to know when a page has completely loaded (please see this question), and because of that, after I get the loadFinished signal (docs here) I create a timer and start polling the DOM content (i.e. checking the content of the DOM every X ms) to see if there were any changes. If there weren't, I assume the page has loaded and print the result. Please keep in mind that I already know this is a far-from-perfect solution, but it's the only one I could think of. If you have any better idea, please answer this question.
NOTE: The timer is non-blocking, meaning that everything running inside WebKit shouldn't be affected/blocked/paused in any way.
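For illustration only, the polling heuristic looks roughly like this in plain browser JavaScript (the actual implementation in the question is C++/Qt4, and the interval value is arbitrary):

// Poll the DOM every INTERVAL_MS; once two consecutive snapshots are
// identical, assume the page has finished changing.
var INTERVAL_MS = 1000;
var lastHtml = null;
var poller = setInterval(function () {
  var html = document.documentElement.outerHTML;
  if (html === lastHtml) {
    clearInterval(poller);
    console.log('DOM stable; treating page as fully loaded');
  } else {
    lastHtml = html;
  }
}, INTERVAL_MS);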
After testing the headless browser with some pages, everything seems to work fine (or at least as expected). But here is where the heisenbug appears. The headless browser should be called from a PHP script, which should wait (blocking call) for its output and then print it.
On my test machine (Apache 2.3.14, PHP 5.4.6), running the PHP script outputs the desired result, i.e. the headless browser fetches the website, runs the JavaScript and prints what a user would see; but running the same script on the production server fetches the website, runs only some of the JavaScript code, and prints an incomplete result.
The source code of the headless browser and the PHP script I'm using can be found here.
NOTE: The timer (as you can see in the source code of the headless browser) is set to 1 s, but setting a larger value doesn't fix the problem.
NOTE 2: Catching all JavaScript errors doesn't show anything, so it's not because of a missing function, wrong args, or any other type of incorrect code.
I'm testing the headless browser with 2 websites.
This one works both on my test machine and on the production server, while this one works only on my test machine.
I'm more prone to think that this is some weird bug in the JavaScript code of the second website rather than in the code of the headless browser, as it generates a perfect HTML snapshot of the first website; but then again, this is a heisenbug, so I'm not really sure what is causing it.
Any ideas/comments will be appreciated. Thank you
Rather than polling for DOM changes, why not watch network requests? This seems like a safer heuristic to use. If there has been no network activity for X ms (and there are no pending requests), then assume the page is fully "loaded".
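To sketch the idea (using PhantomJS-style callbacks purely as an illustration; in the question's C++/Qt4 setup the equivalent hooks would be QNetworkAccessManager signals):

// Track in-flight requests; declare the page loaded once the network
// has been idle (no starts or completions) for IDLE_MS.
var page = require('webpage').create();
var pending = 0;
var idleTimer = null;
var IDLE_MS = 500; // tune to taste

function resetIdleTimer() {
  if (idleTimer) { clearTimeout(idleTimer); }
  idleTimer = setTimeout(function () {
    if (pending === 0) {
      console.log(page.content); // emit the HTML snapshot
      phantom.exit();
    }
  }, IDLE_MS);
}

page.onResourceRequested = function () { pending += 1; resetIdleTimer(); };
page.onResourceReceived = function (response) {
  if (response.stage === 'end') { pending -= 1; resetIdleTimer(); }
};

page.open('http://example.com/', function () { resetIdleTimer(); });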
In a simple HTML file opened locally via Firefox, I need some JavaScript code to execute a command (maybe "ls") and get its result in a string I can use in JS/jQuery to alter the page contents.
I already know this is a generally bad idea, but I have to make this little local HTML file capable of running several scripts without a server and without CGI.
In the past I used to install a plugin in TiddlyWiki (www.tiddlywiki.com) to execute external commands (Firefox requested authorization for every operation), so JavaScript can do it; but how do I get the command's result in JS after execution?
I don't believe there's any way to do this without a cooperating browser plug-in. The plug-in would be told via JavaScript what command to execute; it would run that command and then invoke a callback when the results were available. This could be very dangerous, as giving the browser access to your local system in almost any way opens you up to many kinds of attacks (which is why browsers don't offer this capability).
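To make the callback protocol concrete, the JS side of such a plug-in might look like this; window.shellPlugin is entirely hypothetical and would have to be provided by the cooperating plug-in, since no stock browser exposes anything like it:

// Hypothetical plug-in API: hand it a command, get stdout in a callback.
window.shellPlugin.exec('ls', function (err, stdout) {
  if (err) { console.error('command failed:', err); return; }
  $('#output').text(stdout); // use the result with jQuery, per the question
});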