Set browser in offline mode using JavaScript

I am using Selenium to test an application. I start the browser normally, but at some point I need to switch it to offline mode. So far, the posts I have found say this is not possible with Selenium unless you start the driver in offline mode (which is not my case).
I am using C# and Selenium, and I plan to integrate my project so it can run remotely.
Do you know if there is a way to switch the browser to offline mode using JavaScript, or any other method?

You could integrate a Service Worker into your application and coordinate with it to either pass requests through as usual or, when you flag it to do so, drop all requests and simulate failures.
Service Workers essentially act as a proxy for all requests, and you can choose how to handle each one (e.g. serve it from a cache, refresh data from the server, or simply deny it).
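A minimal sketch of that idea. The message-based toggle and the `simulateOffline` flag name are my own conventions, not a standard API; the decision logic is kept in a plain function so it can be unit-tested outside a worker:

```javascript
// sw.js — a Service Worker sketch that can simulate offline mode.
// The page flips the flag with:
//   navigator.serviceWorker.controller.postMessage({ simulateOffline: true });

let simulateOffline = false;

// Decide how to answer one request. Response.error() produces a
// network-error response — to the page, the fetch fails exactly as
// if the browser were offline.
function respond(request, offline) {
  return offline ? Promise.resolve(Response.error()) : fetch(request);
}

// Only wire up the worker events when running in a worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('message', (event) => {
    if (typeof event.data?.simulateOffline === 'boolean') {
      simulateOffline = event.data.simulateOffline;
    }
  });
  self.addEventListener('fetch', (event) => {
    event.respondWith(respond(event.request, simulateOffline));
  });
}
```

From the C# side you could then flip the flag mid-test with `IJavaScriptExecutor.ExecuteScript`, executing the `postMessage` call shown in the comment above.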

Related

Can I run programs/CLI using a PWA?

I have heard that a PWA is capable of more than a regular web app when it comes to client-side operations.
As a disclaimer, I'd like to note that the PWA I've been working on is a privately used project; it has no malicious intent to harm any system.
What I want to do is use cec-client to turn the TV on/off, so I need to either run a pre-written bash script or access the CLI on the system. Is that somehow possible from a PWA? Currently the client system runs a simple Node.js app that listens for a call on localhost from the web app and turns the TV on in response. I want to change this to a more sophisticated solution, hence I'm wondering whether a PWA can run scripts on the client's system.
The setup you are currently using (server-side code with access to the necessary API) is likely the best approach.
Progressive Web Apps are just websites with some extra features to persist state and emulate some functions available to native apps; the security risks involved if they could execute arbitrary system code would be enormous.

Web - what is the difference between periodic sync and sync?

I came across two different types of background sync for PWAs: sync and periodic sync. There are not many resources about them, and the existing resources do not explain enough or provide working sample code.
So my main question is: are there any logical differences between them other than frequency?
And my side question is: do they handle requests by themselves? I'm asking because I want something more flexible. I manage offline and online situations myself, saving data in IndexedDB while I'm offline, and I just need a background process to read the offline data from my custom IndexedDB store and send it to the server.
Here are a few use cases that help illustrate the difference. Also keep in mind that, as of February 2021, Background Sync is only available in Chrome and Chromium-based browsers, and Periodic Background Sync is only available in Chrome after a progressive web app has been installed.
Background Sync
The use case is retrying a failed update/upload operation (usually a POST or a PUT) "in the background" at a regular interval, until it succeeds. You could imagine, for instance, trying to upload a new photo to a social media site, but your network connection is down. As a user, you'd want that upload retried at some point in the future.
The API only provides the mechanism for triggering an opportunity to re-attempt the network operation, via a sync event in the web app's service worker. It's up to a developer to store information about the failed request (usually in IndexedDB) and actually resend it, and indicate whether the sync was successful or if it failed again.
(The workbox-background-sync library can help with the implementation details, if you'd rather not deal with everything yourself.)
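If you do roll it yourself, the service worker side looks roughly like this. `getQueuedRequests` and `removeFromQueue` are hypothetical IndexedDB helpers you would supply, and `retry-uploads` is an arbitrary tag name; the fetch function is injectable only so the replay logic is easy to test:

```javascript
// sw.js — replay queued uploads when the browser fires a `sync` event.
async function replayQueue(getQueuedRequests, removeFromQueue, fetchFn = fetch) {
  for (const req of await getQueuedRequests()) {
    try {
      const res = await fetchFn(req.url, { method: req.method, body: req.body });
      // Only a successful response removes the entry; anything else
      // leaves it queued for the next sync opportunity.
      if (res.ok) await removeFromQueue(req.id);
    } catch (e) {
      // Network still down — keep the entry queued.
    }
  }
}

// Only wire up the event when running in a worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('sync', (event) => {
    if (event.tag === 'retry-uploads') {
      event.waitUntil(replayQueue(getQueuedRequests, removeFromQueue));
    }
  });
}
```

The page requests the sync with `(await navigator.serviceWorker.ready).sync.register('retry-uploads')`; the browser then decides when connectivity allows the `sync` event to fire.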
Periodic Background Sync
The use case is refreshing caches "in the background" so that the next time a user opens your web app, the data is fresher than it otherwise would be. You could imagine an installed news progressive web app using periodic background sync to update its cache of top headlines each morning.
Under the hood, this works by invoking a periodicsync event in your service worker, and inside that event handler, you'd normally make a GET request to update something stored in the Cache Storage API or IndexedDB.
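A sketch of that handler. The `refresh-headlines` tag, the `content-cache` name, and the `/api/headlines` URL are made-up examples, and the cache object is injectable here only to keep the refresh step testable:

```javascript
// sw.js — refresh a cached resource on a `periodicsync` event.
async function refreshCache(url, cache, fetchFn = fetch) {
  const res = await fetchFn(url);
  if (res.ok) await cache.put(url, res); // overwrite the stale copy
  return res.ok;
}

// Only wire up the event when running in a worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('periodicsync', (event) => {
    if (event.tag === 'refresh-headlines') {
      event.waitUntil(
        caches.open('content-cache')
          .then((cache) => refreshCache('/api/headlines', cache))
      );
    }
  });
}
```

The page registers the interval with `registration.periodicSync.register('refresh-headlines', { minInterval: 24 * 60 * 60 * 1000 })`; the browser treats `minInterval` as a floor, not a schedule.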

enable cookies and javascript

I am trying to check out using PayPal in the sandbox environment in my JMeter script.
The response tree shows the error: "We're sorry, but to checkout using PayPal, you need to turn on JavaScript and enable cookies in your web browser's settings."
Load testing PayPal is not the best idea; I would recommend leaving it to PayPal's QA engineers and focusing solely on your application. Even if you figure out that PayPal operations are slow, I don't think you will be able to do anything about it.
In regards to your question itself: a well-behaved JMeter test must represent a real user using a real browser as closely as possible, with all related stuff (cookies, headers, cache, think times, etc.). So first of all, add an HTTP Cookie Manager to your Test Plan.
Also be aware that according to JMeter main page:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does
So if the application you're testing is built using AJAX, don't expect JMeter to create and send the JavaScript-driven requests on its own; you will need to add HTTP Request samplers to mimic them.
Check out the How to make JMeter behave more like a real browser article for more tips and tricks.

Selenium client-sided load test max request per second?

I am using Selenium to perform client-side testing of a specific browser on an AngularJS web application. I want to perform a load test by sending many requests from concurrent users, for example 1k requests in a second from x users, then 2k requests, etc.
There is no formal documentation on this topic. Has anyone done this before? Is there an (expected) maximum number of requests Selenium can perform? I know it would be dependent on hardware. I also know there are tools such as JMeter, but those do not run client-side JavaScript.
JMeter doesn't run client-side JavaScript, but in order to simulate 2k concurrent users you simply don't need to run it. If the JavaScript generates network traffic, you will be able to mimic it using normal JMeter HTTP Request samplers. Client-side performance, i.e. the script execution timeline, should be tested separately using JavaScript profilers and/or tools like YSlow.
Also be aware that Selenium can be integrated with JMeter via the WebDriver Sampler plugin.
Alternatively, you can consider LoadRunner TruClient; however, both approaches kick off real browsers (no matter whether full or headless) and will be very resource-intensive.

Extracting (scraping) data from the web app which requires to log in [Node.js]

I am in a position where I need to get some data from a page that does not provide any API.
The situation is this:
1. To get to the page data you must first log in (this normally creates some session storage / cookies in the browser).
2. Then there is a GET endpoint to obtain an auth key (it checks whether the user is authenticated using the session storage / cookies created during login).
3. Once you have the auth key, there are plenty of GET endpoints that can be accessed with it. That's what I need!
The actual login page is: https://w2.hronline.co.uk/account/login
Could I log in just by making an HTTPS POST to the form's URL, passing the login credentials as parameters? If yes, would Node create some session cookies / storage values so that I could later make another GET request to obtain the auth key? If not, is there another way to do this?
You can't scrape pages like this directly in Node.js alone; you have to use a tool that provides a web browser in which you can execute JS code/actions. Currently, I recommend either Selenium or PhantomJS.
PhantomJS is usable from Node.js, but it is limited since you can't use require('') for all the Node.js libraries.
You can use libraries that wrap PhantomJS and let you write the scraper in Node.js. You should first try CasperJS, which is made for scripted navigation and testing like you want.
If you want to go further, you can try phantomjs-node, which allows you to instantiate a PhantomJS application in parallel and send it the JS calls you need it to execute (requires familiarity with PhantomJS).
From "PhantomJS Is Dead, Use Chrome Headless" (https://semaphoreci.com/blog/2018/03/27/phantomjs-is-dead-use-chrome-headless-in-continuous-integration.html):
In April 2017, Vitaliy Slobodin who was at the time the sole
maintainer of PhantomJS, announced that he’s stepping down as
maintainer, leaving the project effectively abandoned:
I think people will switch to it, eventually. Chrome is faster and
more stable than PhantomJS. And it doesn’t eat memory like crazy.
Since you're scraping, Chrome Headless is even more relevant than for CI/testing.
Chrome Headless is a real browser and behaves accordingly.
Here's how to do it:
https://www.sitepoint.com/headless-chrome-node-js/
https://blog.logrocket.com/how-to-set-up-a-headless-chrome-node-js-server-in-docker/
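That said, if the login form itself doesn't depend on JavaScript, the plain-POST approach from the question can work without a browser at all. A sketch using Node 18+'s built-in fetch; the `username`/`password` field names are assumptions, so inspect the real form first, and note that Node does not persist session cookies for you the way a browser does:

```javascript
// POST the login form, then collect the session cookies by hand.
async function login(loginUrl, username, password, fetchFn = fetch) {
  const res = await fetchFn(loginUrl, {
    method: 'POST',
    headers: { 'content-type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ username, password }).toString(),
    redirect: 'manual', // a 302 redirect usually carries the session Set-Cookie
  });
  // Fold every Set-Cookie header into a single Cookie header value,
  // dropping attributes like Path and HttpOnly.
  return res.headers.getSetCookie().map((c) => c.split(';')[0]).join('; ');
}

// Later requests pass the cookies back explicitly:
// const cookie = await login('https://w2.hronline.co.uk/account/login', user, pass);
// const res = await fetch(authKeyUrl, { headers: { cookie } });
```

`Headers.getSetCookie()` needs Node 19.7 or later; on older versions you would read the raw headers instead.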
