How to generate target webpage title in link mouseover text? - javascript

I have a blog where I format links in a specific way, by adding mouseover/hover text with a basic description of the link in the following format: {page title} [by {author(s)} # {publication date & time}] | {website}. Here’s a screencap with an example.
As you can imagine, manually entering that title text for every single link gets quite tiresome. I’ve been doing this for years and I’m dying for a way to automate it.
Is there a way to automatically generate a target webpage’s title, and possibly the site/domain, in link mouseover texts across an entire website? (Ideally it would be formatted as indicated above, with author(s) and posted date/time and all, but that would likely involve too much coding for me.)
Please keep in mind that I only have a moderate, self-taught grasp of HTML and CSS.
Update: Anik Raj provided an answer below that’s perfect – a bit of JS to generate a mouseover tooltip with the target page’s title – but I can’t get the script to work on my Blogger blog. I first saved the code to a .js file in my Dropbox and tried linking to it using the following code (which works fine for other external JS scripts):
<!-- Link hover title preview script (source: https://stackoverflow.com/questions/49950669/how-to-generate-target-webpage-title-in-link-mouseover-text/49951153#49951153) -->
<script async='async' src='https://dl.dropboxusercontent.com/s/h6enekx0rt9auax/link_hover_previews.js' type='text/javascript'/>
… But nothing happens. And when I insert the script in the page HTML, I get the following error (screencap here) and Blogger won’t let me save the template:
Error parsing XML, line 4002, column 21: The content of elements must consist of well-formed character data or markup.
I know nothing of code, JS included, so I don’t know how to fix it. I’ve tried several online JS validation tools; some identify an error there, others don’t. It clearly works just fine in the JSFiddle Anik provided.
If anyone could fix the code so it works in Blogger, you’d be my hero.
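(Note: Blogger templates are parsed as XML, so a bare < or & inside an inline script breaks parsing, which matches the "well-formed character data or markup" error above. A common workaround, untested against this particular template, is to wrap the inline script body in a CDATA section so the XML parser skips over it:)
<script type='text/javascript'>
//<![CDATA[
// ... the script body goes here, unchanged ...
//]]>
</script>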

Edit: this works only for links under the same domain, since other domains will block cross-origin requests unless they enable CORS.
The easiest way would be to add a JavaScript snippet to the HTML file.
This is a simple script that sets each link’s hover text to the title of the page it points to.
<script type="text/javascript">
  // Get all links on the webpage.
  var links = document.getElementsByTagName("a");

  for (var i = 0; i < links.length; i++) {
    (function (i) {
      // For every link, make a request to its href and fetch its HTML.
      var request = makeHttpObject();
      request.open("GET", links[i].href, true);
      request.onreadystatechange = function () {
        if (request.readyState == 4) {
          // On response received, parse the HTML and set the title attribute.
          const doc = new DOMParser().parseFromString(request.responseText, "text/html");
          const title = doc.querySelector("title");
          if (title) links[i].setAttribute("title", title.innerText);
        }
      };
      request.send(null);
    })(i);
  }

  // Helper function to create requests (the ActiveXObject branches are
  // fallbacks for very old versions of Internet Explorer).
  function makeHttpObject() {
    try {
      return new XMLHttpRequest();
    } catch (error) {}
    try {
      return new ActiveXObject("Msxml2.XMLHTTP");
    } catch (error) {}
    try {
      return new ActiveXObject("Microsoft.XMLHTTP");
    } catch (error) {}
    throw new Error("Could not create HTTP request object.");
  }
</script>
Adding this script to the end of the page will add hover text to all links.
See this JS Fiddle example -> https://jsfiddle.net/mcdvswud/11/
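For modern browsers, the same idea can be written with fetch() and DOMParser instead of XMLHttpRequest. This is a minimal sketch rather than the answer’s exact code, and it has the same same-origin/CORS limitation noted above:
<script type="text/javascript">
// Sketch: set each link's hover text to the fetched page's title.
// Assumes same-origin links; cross-origin fetches are blocked by CORS.
document.querySelectorAll("a[href]").forEach(function (link) {
  fetch(link.href)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      var doc = new DOMParser().parseFromString(html, "text/html");
      if (doc.title) link.setAttribute("title", doc.title);
    })
    .catch(function () { /* ignore links we cannot fetch */ });
});
</script>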

Related

Is it possible to cache an XML file within JavaScript/HTML, and if so, how?

I currently have the following JavaScript as part of my HTML:
function guide1(i) {
  var xmlhttp = new XMLHttpRequest();
  xmlhttp.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
      guideload1(this, i);
    }
  };
  xmlhttp.open("GET", "tvguidedata.xml", true);
  xmlhttp.send();
}
function guideload1(xml, i) {
  var xmlDoc = xml.responseXML;
  x = xmlDoc.getElementsByTagName("programme");
  for (i = 1; i < x.length; i++)
    if (*blah blah blah, the script starts working here, I assume it's not necessary to post it in full*)
This script works as intended and so far so good.
However, I would like to improve a small part of it for (what I consider) better performance of the overall intent.
Currently I have another script that calls the above script at a set interval to make sure the data displayed inside my HTML is current (TV guide airings).
Checking the chrome dev tools, I can see each time the script is running it re-downloads the full xml.
While the XML file is small, I would prefer it to only load/download the xml once and then be able to loop my script with the data being processed from a cached xml.
Is this even possible at all? And if so how?
I already tried having my setInterval script directly call the "guideload1" function instead of "guide1", but then I get an error and the script does not load data correctly.
Maybe there is a script that first loads the XML into cache, local storage or whatever would be used in this case, so that any subsequent calls to the "guide1" and/or "guideload1" functions take the cached file and don't make any download requests.
As for the script that caches the XML, I would most likely set it to reload/download only once every 12 hours.
Any help is appreciated. I should also note that I know almost zero about coding (I got most of my code from Google searches and similar), so if an explanation is necessary feel free to phrase it like I'm a baby.
Other information that I believe could be useful:
Cross-Browser compatibility is not required/necessary.
My whole HTML is only opened in the Chrome browser on Windows in kiosk mode.
There is never any user interaction, as the page is displayed to an HDMI-OUT/modulator, and certain sub-HTML elements are reloaded every day at midnight.
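One way the caching described above could look, as a minimal sketch (cachedXml, lastFetch, and TWELVE_HOURS are invented names; guideload1 is the existing processing function from the snippet above):
// Sketch: keep the fetched XML around and only re-download after 12 hours.
var cachedXml = null;
var lastFetch = 0;
var TWELVE_HOURS = 12 * 60 * 60 * 1000;

function guide1(i) {
  // Reuse the cached document if it is still fresh.
  if (cachedXml && Date.now() - lastFetch < TWELVE_HOURS) {
    guideload1(cachedXml, i);
    return;
  }
  var xmlhttp = new XMLHttpRequest();
  xmlhttp.onreadystatechange = function () {
    if (this.readyState == 4 && this.status == 200) {
      cachedXml = this; // keep the whole XHR so responseXML stays available
      lastFetch = Date.now();
      guideload1(this, i);
    }
  };
  xmlhttp.open("GET", "tvguidedata.xml", true);
  xmlhttp.send();
}
The setInterval script keeps calling guide1 as before; guideload1 is unchanged, since it still receives an object whose responseXML it can read.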

How to change the colour of an anchor tag to a downloaded file when the last modified date has changed in Electron?

In my HTML code I have href links to files which are being downloaded. I would like to create a JavaScript file that checks the last modified date and changes the styling of the anchor tag to make the name of the link turn orange until it is clicked for the first time. Is this possible in JavaScript?
I am using Electron and am wondering if there is a way to implement this.
This is an example of the link in one of the HTML files:
<div><h3>Name of file</h3></div>
I found this: Is it possible to retrieve the last modified date of a file using Javascript?
function fetchHeader(url, wch) {
  try {
    var req = new XMLHttpRequest();
    req.open("HEAD", url, false);
    req.send(null);
    if (req.status == 200) {
      return req.getResponseHeader(wch);
    } else {
      return false;
    }
  } catch (er) {
    return er.message;
  }
}
and am wondering if I could use
fetchHeader(location.href,'Last-Modified')
to check the last modified date of the file every time it gets downloaded and then show a desktop notification if the date has changed. I would have to store the dates in a file of course, because it should still work when you start the app the next time. But I have to do it for local files, so I guess JavaScript does not support this?
Then I would like to have a desktop notification. I created this script following this tutorial https://www.youtube.com/watch?v=ihcsKfIN6YU
function doNotify() {
  Notification.requestPermission().then(function (result) {
    var myNotification = new Notification('NEW DOCUMENT!!!', {
      body: "document" + ".docx" + " has changed!!!",
      icon: __dirname + '/app.ico'
    });
  });
}
doNotify();
When I try to call it, I get "Notification is not defined".
I thought of creating a function that checks the last modified date, which is called when you click the anchor tag. This function would then call the doNotify() function when the date has changed.
I know this is a lot. Should I maybe break this down into fewer questions?
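For what it’s worth, the "Notification is not defined" error is consistent with running the script in Electron’s main process: the DOM Notification API used in the tutorial only exists in renderer windows, while the main process has its own Notification class exported by the electron module. As for tying the two snippets together, a minimal sketch (checkForUpdate is an invented name; fetchHeader and doNotify are the functions from the snippets above; localStorage is used so the stored dates survive app restarts):
// Sketch: compare the current Last-Modified header with the stored one,
// notify on change, and remember the new value for next time.
function checkForUpdate(url) {
  var lastModified = fetchHeader(url, 'Last-Modified');
  if (!lastModified) return false; // header missing or request failed
  var stored = localStorage.getItem('lastModified:' + url);
  localStorage.setItem('lastModified:' + url, lastModified);
  if (stored && stored !== lastModified) {
    doNotify(); // the date changed since we last saw this file
    return true;
  }
  return false;
}
Calling checkForUpdate(link.href) from the anchor’s click handler would match the flow described above.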

How to get an HTML element from another HTML document and display it in the current document?

I'm building a web site which lets users post requests on the site and others respond. I have built an app-like form to collect information. In that form I need to display 3 pages.
So I have created one page with the form and a JavaScript file to handle these 3 pages. The other form pages are designed separately (HTML only).
I'm planning to load the other two pages into that 1st page with XMLHttpRequest, and it works.
But I need to take the 3rd page into the 1st form page (display the 3rd page of the form) and change the innerHTML of that 3rd page. I tried it with
function setFinalDetails() {
  document.getElementById("topic").innerHTML = object1.start;
}

// Creating an XMLHttpRequest object and sending a request.
// requiredPage is the page we request.
// elementId is the element we need to display.
function requestAPage(requiredPage) {
  selectElementId(requiredPage);
  var xhttp = new XMLHttpRequest();
  xhttp.onreadystatechange = function() {
    if (xhttp.readyState == 4 && xhttp.status == 200) {
      // m is a global variable: the document received from the request.
      m = xhttp.responseXML;
      // y is the inner HTML of the requested element, displayed
      // inside the first form page ("#hireForm").
      y = m.getElementById(elementId).innerHTML;
      document.getElementById("hireForm").innerHTML = y;
      return m;
    }
  };
  xhttp.open("GET", requiredPage, true);
  xhttp.responseType = "document";
  xhttp.send();
}
but that gives an error:
Cannot set property 'innerHTML' of null
Find my work at https://github.com/infinitecodem/Taxi-app-form.git
It is hard to help you in detail because we do not have a way to reproduce your error and test your code. I suggest you use a service like Plunker to share your code; I do not know whether that service supports XHR requests to other files in its working tree, but you can try.
Your error message seems to reveal that one of your document.getElementById(...) calls is returning null instead of the HTML element you want (so the id does not exist in the page).
But again, without a way to test it is hard to help you more. I really encourage you to share a link to a service (like Plunker, etc.) that will let others play with your use case.
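As a sketch of the kind of guard that would pinpoint which lookup is null (the variable names match the question’s snippet):
// Defensive version of the failing lines: report which lookup came back
// null instead of letting the .innerHTML assignment throw.
var source = m.getElementById(elementId);
var target = document.getElementById("hireForm");
if (!source) {
  console.error("No element with id '" + elementId + "' in the fetched page");
} else if (!target) {
  console.error("No element with id 'hireForm' in the current page");
} else {
  target.innerHTML = source.innerHTML;
}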

Pure javascript from console: extract links from page, emulate click to another page and do the same

I'm curious whether it is possible, with pure (vanilla) JavaScript code entered into the browser console, to extract all links on this (first) page, then emulate a click to go to another page, extract the links there, and go on to the third page.
"Extract links" means writing them to the console.
The same question again, but where the link to the other page just makes an AJAX call that updates part of the page and does NOT actually navigate to another page.
P.S. All links belong to one domain.
Any ideas how can this be done based on pure javascript?
As an example, if you go to Google and enter some word ("example"), you may then open the console and enter
var array = [];
var links = document.getElementsByTagName("cite");
for (var i = 0; i < links.length; i++) {
  array.push(links[i].innerHTML);
}
console.log(array);
to display the array of URLs (with some text, but that's OK).
Is it possible to repeat this 3 times, from page 1 to page 3, automatically with pure JavaScript?
P.S. What I actually extract in the code above are cite tags, which is why I named the variable holding them "links". Sorry for the confusion (that doesn't change the question).
Thank you again.
If you want to write all the links to the console, you can use a more specific command.
FOR GOOGLE
// First, get all the titles.
var allTitles = document.getElementById("ires").getElementsByTagName("h3");
for (var getTitle of allTitles) { // For each title, get the link.
  console.log(getTitle.getElementsByTagName("a")[0].href);
}
Then, you only need to simulate a click on the nav.
var navLinks = document.getElementById("nav").getElementsByTagName("a");
navLinks[navLinks.length - 1].click(); // Click on the "Next" button.
FOR ALL SITES
If you want to get all the links, do the same thing: grab the ID of the div you want if you only need part of the page, then use getElementsByTagName("a").
You can look up how to use XHR (or other techniques) to make a raw AJAX request.
A simple example found on Google:
// jQuery
$.get('//example.com', function (data) {
  // code
});

// Vanilla
var httpRequest = new XMLHttpRequest();
httpRequest.onreadystatechange = function () {
  // code (runs each time the request's state changes)
};
httpRequest.open('GET', url);
httpRequest.send();
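Note that a real click navigates away and wipes the console’s state, so to repeat the extraction across pages 1 to 3 you need somewhere to keep the results. One sketch, assuming Google’s markup (the "cite" and "#pnnext" selectors are guesses and may change): paste the same snippet into the console on each page, and let sessionStorage carry the collected entries across same-tab navigations.
// Sketch: accumulate results in sessionStorage, which survives
// same-tab navigation, then click through to the next page.
var collected = JSON.parse(sessionStorage.getItem("collected") || "[]");
document.querySelectorAll("cite").forEach(function (el) {
  collected.push(el.textContent);
});
sessionStorage.setItem("collected", JSON.stringify(collected));
console.log(collected);

// Stop after roughly three pages' worth of results.
var next = document.querySelector("#pnnext"); // Google's "Next" link
if (next && collected.length < 30) next.click();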

How to load a script into a XUL app after initial loading

Greetings,
my XUL app needs to load scripts dynamically; for this I derived a function that works in regular HTML/JS apps:
function loadScript(url)
{
  var e = document.createElement("script");
  e.src = url;
  e.type = "text/javascript";
  document.getElementsByTagName("head")[0].appendChild(e);
}
to something that ought to work in XUL:
function loadScript(url)
{
  var e = document.createElement("script");
  // I can tell from statically loaded scripts that these two are set through attributes.
  e.setAttribute('type', "application/javascript"); // type is as per MDC docs
  e.setAttribute('src', url);
  // XUL apps attach scripts to the window, I can tell from Firebug; there is no head.
  document.getElementsByTagName("window")[0].appendChild(e);
}
The script tags get properly added and the attributes look fine, but it does not work at all: no code inside these loaded scripts is executed or even parsed.
Can anyone give a hint as to what might be going on?
T.
Okay,
as usual whenever I post on Stack Overflow, the answer comes pretty soon through one last desperate Google search.
This works:
// Check this for how the url should look:
// https://developer.mozilla.org/en/mozIJSSubScriptLoader
function loadScript(url)
{
  var loader = Components.classes["@mozilla.org/moz/jssubscript-loader;1"]
                         .getService(Components.interfaces.mozIJSSubScriptLoader);
  // The magic happens here.
  loader.loadSubScript(url);
}
This will only load local files, which is what I need for my app.
I am fairly disappointed with Mozilla; why not do this the same way as HTML, in a standard way?
I've tried this, and I think you're right - I can't seem to get XUL to run dynamically appended script tags - perhaps it's a bug.
I'm curious as to why you would want to though - I can't think of any situation where one would need to do this - perhaps whatever you're trying could be achieved another way. Why is it they need to be dynamically loaded?
Off-topic: on the changes you made to the script.
e.setAttribute('src',url); is valid in normal webpages as well, and is actually technically more "correct" than e.src=url; anyway (although longer and not well supported in old browsers).
Types application/javascript or application/ecmascript are supposed to work in normal webpages and are more "correct" than text/javascript, but IE doesn't support them so they're not normally used.
Inside a XUL environment you are only allowed to use XHR + eval, like the following:
function loadScript(url) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, false); // sync
  xhr.send(null);
  if (xhr.status && xhr.status != 200)
    throw xhr.statusText;
  try {
    eval(xhr.responseText, window);
  } catch (x) {
    throw new Error("ERROR in loadScript: Can't load script '" + url + "'\nError message is: " + x.message);
  }
}
