Improve page load latency in chrome extension - javascript

I'm building a Google Chrome extension that looks at a webpage, does some calculations based on features of the page, and then loads an iframe to display the results. Currently I am working on a more accessible version for visually impaired users. My options page lets users opt in to the visually accessible version, and that choice is stored as a boolean in browser storage. The issue is that I have to check for that boolean in storage every single time I load the iframe (which is every time the page switches or refreshes), and it adds roughly 500ms of latency to the iframe load.
I have tried both chrome.storage.sync and localStorage (from the background page, with message passing to my content script) to see if the synchronous version would be faster, but they both add roughly 500ms to the running of my content script. Right now I have two different HTML files, the standard one and the visually accessible one, and the content script chooses which to load based on the accessibility boolean it retrieves from storage. If there is a faster way to programmatically switch the CSS that the standard HTML file loads, I could do that as well. The thing is, any way I look at it, I can't think of a way to avoid retrieving the boolean from storage every single time the iframe loads.
I suppose I'm wondering if there is some other way around this, like directing the extension to automatically use a certain version of the HTML based on the option the user selects when they install the extension. Any suggestions would be greatly appreciated.
Here is the function in question (from content script):
function insertFrame() {
    var extensionOrigin = 'chrome-extension://' + chrome.runtime.id;
    if (!location.ancestorOrigins.contains(extensionOrigin)) {
        chrome.runtime.sendMessage({contentScriptQuery: "accessible?"}, function(response) {
            var accessible = response;
            if (accessible === "true") {
                //load the accessible frame
                var iframe = document.createElement('iframe');
                iframe.id = "myFrame";
                iframe.src = chrome.runtime.getURL('accessible.html');
                document.body.appendChild(iframe);
            } else {
                //load the regular frame
                var iframe = document.createElement('iframe');
                iframe.id = "myFrame";
                iframe.src = chrome.runtime.getURL('popup.html');
                document.body.appendChild(iframe);
            }
            console.log("Time to run content script:", Date.now() - timer);
        });
    }

    //for testing purposes
    // var extensionOrigin = 'chrome-extension://' + chrome.runtime.id;
    // if (!location.ancestorOrigins.contains(extensionOrigin)) {
    //     var iframe = document.createElement('iframe');
    //     iframe.id = "myFrame";
    //     iframe.src = chrome.runtime.getURL('popup.html');
    //     document.body.appendChild(iframe);
    // }
    // console.log("Time to run content script:", Date.now() - startTime);
}
And in my background page:
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
    if (request.contentScriptQuery === 'accessible?') {
        var a = localStorage.getItem('accessible');
        sendResponse(a);
    }
    return true; //with or without this line, timing is the same
});
I've been testing the content script by commenting out each half in turn (with and without the storage read). You can see the line where I log how many milliseconds have elapsed since the content script started running. I have also verified this latency by testing load times in the Network panel of DevTools: I get an average of 6.33s for load time without reading storage and 6.72s with reading storage, which confirms the timing discrepancy I'm logging in my content script. The only thing I change between tests is which half of the function is commented out.
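For reference, a minimal sketch of reading the flag directly from chrome.storage.sync in the content script, with no message passing. This assumes the flag is saved under the key 'accessible' (as a boolean) and that the extension declares the "storage" permission:
// Minimal sketch: content scripts can call chrome.storage directly,
// so no round trip to the background page is needed.
chrome.storage.sync.get({accessible: false}, function (items) {
    var iframe = document.createElement('iframe');
    iframe.id = "myFrame";
    iframe.src = chrome.runtime.getURL(items.accessible ? 'accessible.html' : 'popup.html');
    document.body.appendChild(iframe);
});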

Related

Auto refresh specific div and load image with jquery/ajax

I have an internet radio station and I need a script that will display a picture of the current song in a particular div with an id. The image is automatically uploaded via FTP to the server each time the song changes.
HTML:
<div id="auto"></div>
JS:
$(document).ready(function() {
    $('#auto').html('<img src="artwork.png"></img>');
    refresh();
});
function refresh() {
    setTimeout(function() {
        $('#auto').html('<img src="artwork.png"></img>');
        refresh();
    }, 1000);
}
I tried this, but the image only loads once; when the artwork changes, I have to manually refresh the whole page again.
I'll point out multiple things here.
I think your code is just fine if you are deliberately going for recursive setTimeout calls instead of a single setInterval to repeat the action.
File Caching
Your problem is probably the browser's cache, since you are using the same image name and directory all the time. The browser compares the file name and directory to decide whether to load the image from its cache or request it from the server. There are a few tricks you can use to force a reload from the server in this case:
Use different file names/directories for the songs loaded dynamically
Use a randomized GET query (e.g. image.png?v=current timestamp)
Your method for switching
You are replacing the file over FTP; I wouldn't recommend that. Consider uploading all of your albums and thumbnails to the server and switching between them dynamically. It is more efficient, less error prone, and makes approach #1 in the previous section easier to implement.
Loading with constant refresh
I would like to highlight that if you are using an event-based server such as Node.js or nginx, you can achieve the same functionality with much less traffic. You don't need a refresh method at all, because the server can push data to the browser on specific events, telling it to load a specific resource at that moment; no constant polling is required (see the sketch after this answer).
Consider your options; I tried to be as comprehensive as I could.
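A minimal sketch of that event-push approach using Server-Sent Events; the endpoint name (/artwork-events) and the notifyArtworkChanged() helper are hypothetical:
// Server side (Node.js, no dependencies): keep a list of connected browsers
// and push the new artwork URL to them whenever the song changes.
const http = require('http');
const clients = [];

http.createServer(function (req, res) {
    if (req.url === '/artwork-events') {
        res.writeHead(200, {
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive'
        });
        clients.push(res);
        req.on('close', function () {
            clients.splice(clients.indexOf(res), 1);
        });
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);

// Call this from wherever the song change is detected:
function notifyArtworkChanged(imageUrl) {
    clients.forEach(function (res) {
        res.write('data: ' + imageUrl + '\n\n');
    });
}

// Client side: swap the image only when the server says the song changed.
var source = new EventSource('/artwork-events');
source.onmessage = function (event) {
    $('#auto').html('<img src="' + event.data + '?t=' + Date.now() + '">');
};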
At the top level, the browser caches the image based on its absolute URL. You can add an extra query string to the URL to trick the browser into treating it as a new image. In this case, the new URL of artwork.png will be something like artwork.png?timestamp=123.
Check this out for the refresh():
function refresh() {
    setTimeout(function() {
        var timestamp = new Date().getTime();
        // reassign the url to be like artwork.png?timestamp=456784512 based on the timestamp
        $('#auto').html('<img src="artwork.png?timestamp=' + timestamp + '"></img>');
        refresh();
    }, 1000);
}
Alternatively, you can assign an id attribute to the image and change its src URL.
HTML:
<img id="myArtworkId" src="artwork.png"/>
JS, in the refresh method:
$('#myArtworkId').attr('src', 'artwork.png?timestamp=' + new Date().getTime());
You can use window.setInterval() to call a method every x seconds and clearInterval() to stop calling that method. View this answer for more information on this.
// Array containing src for demo
$srcs = ['https://www.petmd.com/sites/default/files/Acute-Dog-Diarrhea-47066074.jpg',
'https://www.catster.com/wp-content/uploads/2018/05/Sad-cat-black-and-white-looking-out-the-window.jpg',
'https://img.buzzfeed.com/buzzfeed-static/static/2017-05/17/13/asset/buzzfeed-prod-fastlane-03/sub-buzz-25320-1495040572-8.jpg?downsize=700:*&output-format=auto&output-quality=auto'
]
$i = 0;
$(document).ready(function() {
$('#auto').html('<img src="https://images.pexels.com/photos/617278/pexels-photo-617278.jpeg?auto=compress&cs=tinysrgb&dpr=1&w=500"></img>');
// call method after every 2 seconds
window.setInterval(function() {
refresh();
}, 2000);
// To stop the calling of refresh method uncomment the line below
//clearInterval()
});
function refresh() {
$('#auto').html('<img src="' + $srcs[$i++] + '"></img>');
// Handling of index out of bound exception
if ($srcs.length == $i) {
$i = 0;
}
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="auto"></div>

Print a form from another link on clicking a button

I'm trying to figure out how to print an image from another web page link on clicking a button.
I know window.print() but how could I specify the other link I want to print the image from?
Same domain
If the page you wish to print is from the same domain as the iframe's parent then MDN has a good example of how to do this.
You should create a hidden iframe, load your page in it, print the iframe contents and then remove the iframe.
JavaScript:
function printURL(url) {
    var frame = document.createElement("iframe");
    frame.onload = printFrame;
    frame.style.display = 'none';
    frame.src = url;
    document.body.appendChild(frame);
    return false;
}
function printFrame() {
    this.contentWindow.__container__ = this;
    this.contentWindow.onbeforeunload = closeFrame;
    this.contentWindow.onafterprint = closeFrame;
    this.contentWindow.focus(); // Required for IE
    this.contentWindow.print();
}
function closeFrame() {
    document.body.removeChild(this.__container__);
}
HTML:
<button onclick="printURL('page.html');">Print external page!</button>
Cross domain
If the page you wish to print is from another domain then your browser will throw a Same-Origin Policy error. This is a security feature that forbids scripts accessing some data from different domains.
To print cross domain content you will need to scrape the page's source and load it into the iframe. The browser will then believe that the iframe's content comes from your domain and won't hiccough when you try to print.
However, if you try to do this in the frontend, this just pushes the problem back one step, as the same-origin policy also won't let you scrape content from another domain in this way. But the same-origin policy for data scraping is the equivalent of tying a bull up with cotton thread - it doesn't really hold you back - so this hurdle is easily circumvented. You can either write your own backend script (in PHP or your choice of language) that scrapes the content and delivers it to your page, or you can use any one of a number of web services that already do this. https://multiverso.me/AllOrigins/ is as good as any; it doesn't require backend programming and it's free, so I'll use it in this example.
Using jQuery, the modified printURL function from above would be:
function printURL(url) {
    var jsonUrl = 'http://allorigins.me/get?url=' + encodeURIComponent(url) + '&callback=?';
    // the url / php function that allows content to be scraped from different origins
    $.getJSON(jsonUrl, function (data) {
        // the scraped content arrives in data.contents
        var frame = document.createElement("iframe");
        frame.style.display = 'none';
        document.body.appendChild(frame); // the frame must be in the DOM before its document exists
        var iframedoc = frame.contentDocument || frame.contentWindow.document;
        iframedoc.open();
        iframedoc.write(data.contents);
        iframedoc.close();
        printFrame.call(frame); // reuse the same-domain helper above to print and clean up
    });
    return false;
}
The other functions from above would remain the same.
Note that if the page you're printing is built using AJAX calls or is significantly styled with scripting then the iframe may print something that looks quite unlike what you were expecting.

How do I navigate to bing.com and enter a search text using the chrome console?

Below is my code.
It is resulting in unexpected behaviour.
It navigates to bing.com but it does not fill in the text field. Also, I have noticed that the console gets cleared after navigating to a new webpage.
window.location = "https://www.bing.com";
window.onload = function() {
    var editSearch = document.getElementById("sb_form_q");
    editSearch.value = "Quux";
};
You are binding the onload function to the existing window object.
When the browser loads the new page, it will make a new window object which won't have your property set on it.
JavaScript run in one page (even when you are running it through developer tools) can't persist variables onto a different page.
(Storage mechanisms like localStorage and cookies are available, but you would need code in the subsequent page to look for them).
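A minimal sketch of that storage hand-off; it only works when both pages share an origin and you control a script that runs on the destination page (the key name pendingSearch and the destination URL are made up for illustration):
// On the page you are navigating away from:
localStorage.setItem('pendingSearch', 'Quux');
window.location = '/search.html'; // hypothetical same-origin destination

// In a script that runs on the destination page:
window.onload = function () {
    var pending = localStorage.getItem('pendingSearch');
    if (pending) {
        document.getElementById('sb_form_q').value = pending;
        localStorage.removeItem('pendingSearch');
    }
};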
JavaScript only runs on the current page you are on. When you execute code from the DevTools console, you are executing it on that page itself. So when you navigate to another page using window.location, you lose the onload handler you defined.
To add handlers to a different page, it must be connected to your page (the parent) in some way, like an iframe or a popup.
var ifrm = document.getElementById('frame');
ifrm.src = 'http://example.com';
ifrm.contentWindow.onload = function () {
    // do something here with
    // ifrm.contentWindow.document.getElementById('form')
};
As @Quentin said.
But you can do it another way, for example:
var keyword = "Quux";
window.location = "https://www.bing.com/search?q="+keyword;

ReportViewer Web Form causes page to hang

I was asked to take a look at what should be a simple problem with one of our web pages for a small dashboard web app. This app just shows some basic state info for the underlying backend apps, which I work heavily on. The issue is as follows:
On a page where a user can input parameters and request to view a report with the given user input, a button invokes a JS function which opens a new page in the browser to show the rendered report. The code looks like this:
$('#btnShowReport').click(function () {
    document.getElementById("Error").innerHTML = "";
    var exists = CheckSession();
    if (exists) {
        window.open('<%=Url.Content("~/Reports/Launch.aspx?Report=Short&Area=1") %>');
    }
});
The page that is then opened has the following code which is called from Page_Load:
rptViewer.ProcessingMode = ProcessingMode.Remote
rptViewer.AsyncRendering = True
rptViewer.ServerReport.Timeout = CInt(WebConfigurationManager.AppSettings("ReportTimeout")) * 60000
rptViewer.ServerReport.ReportServerUrl = New Uri(My.Settings.ReportURL)
rptViewer.ServerReport.ReportPath = "/" & My.Settings.ReportPath & "/" & Request("Report")
'Set the report to use the credentials from web.config
rptViewer.ServerReport.ReportServerCredentials = New SQLReportCredentials(My.Settings.ReportServerUser, My.Settings.ReportServerPassword, My.Settings.ReportServerDomain)
Dim myCredentials As New Microsoft.Reporting.WebForms.DataSourceCredentials
myCredentials.Name = My.Settings.ReportDataSource
myCredentials.UserId = My.Settings.DatabaseUser
myCredentials.Password = My.Settings.DatabasePassword
rptViewer.ServerReport.SetDataSourceCredentials(New Microsoft.Reporting.WebForms.DataSourceCredentials(0) {myCredentials})
rptViewer.ServerReport.SetParameters(parameters)
rptViewer.ServerReport.Refresh()
I have omitted some code which builds up the parameters for the report, but I doubt any of that is relevant.
The problem is that when the user clicks the show report button and this new page opens up, depending on the parameters they use, the report can take quite some time to render, and in the meantime the original page becomes completely unresponsive. The moment the report page actually renders, the main page begins functioning again. Where should I start (Google keywords, ReportViewer properties, etc.) if I want to fix this behavior so that the other page can load asynchronously without affecting the main page?
Edit -
I tried doing the following, which was in a linked answer in a comment here:
$.ajax({
    context: document.body,
    async: true, //NOTE THIS
    success: function () {
        window.open(Address);
    }
});
This replaced the window.open call. It seems to work, but when I checked the documentation to understand what it is doing, I found this:
The .context property was deprecated in jQuery 1.10 and is only maintained to the extent needed for supporting .live() in the jQuery Migrate plugin. It may be removed without notice in a future version.
I removed the context property entirely and it didn't seem to affect the code at all... Is it OK to use this ajax call in this way to open the other window, or is there a better approach?
Using a timeout should open the window without blocking your main page
$('#btnShowReport').click(function () {
    document.getElementById("Error").innerHTML = "";
    var exists = CheckSession();
    if (exists) {
        setTimeout(function () {
            window.open('<%=Url.Content("~/Reports/Launch.aspx?Report=Short&Area=1") %>');
        }, 0);
    }
});
This is a long shot, but have you tried opening the window with a blank URL first, and subsequently changing the location?
$("#btnShowReport").click(function(){
If (CheckSession()) {
var pop = window.open ('', 'showReport');
pop = window.open ('<%=Url.Content("~/Reports/Launch.aspx?Report=Short&Area=1") %>', 'showReport');
}
})
Use:
$('#btnShowReport').click(function () {
    document.getElementById("Error").innerHTML = "";
    var exists = CheckSession();
    if (exists) {
        window.location.href = '<%=Url.Content("~/Reports/Launch.aspx?Report=Short&Area=1") %>';
    }
});
It will work.

Navigating / scraping hashbang links with javascript (phantomjs)

I'm trying to download the HTML of a website that is almost entirely generated by JavaScript. So, I need to simulate browser access and have been playing around with PhantomJS. Problem is, the site uses hashbang URLs and I can't seem to get PhantomJS to process the hashbang -- it just keeps calling up the homepage.
The site is http://www.regulations.gov. The default takes you to #!home. I've tried using the following code (from here) to try and process different hashbangs.
if (phantom.state.length === 0) {
    if (phantom.args.length === 0) {
        console.log('Usage: loadreg_1.js <some hash>');
        phantom.exit();
    }
    var address = 'http://www.regulations.gov/';
    console.log(address);
    phantom.state = Date.now().toString();
    phantom.open(address);
} else {
    var hash = phantom.args[0];
    document.location = hash;
    console.log(document.location.hash);
    var elapsed = Date.now() - new Date().setTime(phantom.state);
    if (phantom.loadStatus === 'success') {
        if (!first_time) {
            var first_time = true;
            if (!document.addEventListener) {
                console.log('Not SUPPORTED!');
            }
            phantom.render('result.png');
            var markup = document.documentElement.innerHTML;
            console.log(markup);
            phantom.exit();
        }
    } else {
        console.log('FAIL to load the address');
        phantom.exit();
    }
}
This code produces the correct hashbang (for instance, I can set the hash to '#!contactus'), but it doesn't dynamically generate any different HTML, just the default page. It does, however, correctly output that hash when I call document.location.hash.
I've also tried to set the initial address to the hashbang, but then the script just hangs and doesn't do anything. For example, if I set the url to http://www.regulations.gov/#!searchResults;rpp=10;po=0 the script just hangs after printing the address to the terminal and nothing ever happens.
The issue here is that the content of the page loads asynchronously, but you're expecting it to be available as soon as the page is loaded.
In order to scrape a page that loads content asynchronously, you need to wait to scrape until the content you're interested in has been loaded. Depending on the page, there might be different ways of checking, but the easiest is just to check at regular intervals for something you expect to see, until you find it.
The trick here is figuring out what to look for - you need something that won't be present on the page until your desired content has been loaded. In this case, the easiest option I found for top-level pages is to manually input the H1 tags you expect to see on each page, keying them to the hash:
var titleMap = {
    '#!contactUs': 'Contact Us',
    '#!aboutUs': 'About Us'
    // etc for the other pages
};
Then in your success block, you can set a recurring timeout to look for the title you want in an h1 tag. When it shows up, you know you can render the page:
if (phantom.loadStatus === 'success') {
    // set a recurring timeout for 300 milliseconds
    var timeoutId = window.setInterval(function () {
        // check for title element you expect to see
        var h1s = document.querySelectorAll('h1');
        if (h1s) {
            // h1s is a node list, not an array, hence the
            // weird syntax here
            Array.prototype.forEach.call(h1s, function (h1) {
                if (h1.textContent.trim() === titleMap[hash]) {
                    // we found it!
                    console.log('Found H1: ' + h1.textContent.trim());
                    phantom.render('result.png');
                    console.log("Rendered image.");
                    // stop the cycle
                    window.clearInterval(timeoutId);
                    phantom.exit();
                }
            });
            console.log('Found H1 tags, but not ' + titleMap[hash]);
        }
        console.log('No H1 tags found.');
    }, 300);
}
The above code works for me. But it won't work if you need to scrape search results - you'll need to figure out an identifying element or bit of text that you can look for without having to know the title ahead of time.
Edit: Also, it looks like the newest version of PhantomJS now triggers an onResourceReceived event when it gets new data. I haven't looked into this, but you might be able to bind a listener to this event to achieve the same effect.
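A minimal sketch of what binding that listener might look like with the webpage module's onResourceReceived callback; this uses the newer page API rather than the phantom.open style above, simply logs each finished response, and you would still need your own check for when the content you want has arrived:
// Sketch only: log each resource as it finishes loading.
var page = require('webpage').create();

page.onResourceReceived = function (response) {
    // 'start' and 'end' stages fire for every resource; only 'end' matters here
    if (response.stage === 'end') {
        console.log('Received: ' + response.url);
    }
};

page.open('http://www.regulations.gov/#!contactUs', function (status) {
    console.log('Initial load: ' + status);
    // resources keep arriving after this callback, so don't exit immediately
});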
