I want to write a Greasemonkey script for an existing page. The script should fetch and display some images automatically, each image coming from a different page.
I thought of using jQuery.get("link", function(data)), hiding the page, and displaying only the images, but on average I have to load about 6 web pages into the present page just to display 4 images, which creates a noticeable delay.
Is there another workaround: a function that loads the HTML of all the image pages in the background (or in another tab), reads the href of the <a> tags in those pages, and loads only the images into my page?
You can try the solution below.
Just put the URLs you want in the "pages" array. When the script runs, it makes an Ajax call for each one in the background. When a response arrives, it searches the returned source for images and picks one at random. If it finds one, it wraps the image in a link to the page it came from (or, if available, the image's own URL) and inserts the linked image at the top of the body of your current page.
You can try the code by pasting it into your browser's JavaScript console; it will add the images to the current page.
You can also see a demo here: http://jsfiddle.net/3Lcj3918/3/
//pages you want
var pages =
[
'https://en.wikipedia.org/wiki/Special:Random',
'https://en.wikipedia.org/wiki/Special:Random',
'https://en.wikipedia.org/wiki/Special:Random',
'https://en.wikipedia.org/wiki/Special:Random',
'https://en.wikipedia.org/wiki/Special:Random'
]
//a simple function used to make an ajax call and run a callback with the target page source as an argument when successful
function getSubPageSource(url, successCallback)
{
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function()
{
if (xhr.readyState == 4 && xhr.status == 200)
{
//when source returned, run callback with the response text
successCallback(xhr.responseText);
}
};
//requires a proxy url for CORS
var proxyURL = 'https://cors-anywhere.herokuapp.com/';
xhr.open('GET', proxyURL+url, true);
//set headers required by proxy
xhr.setRequestHeader("X-Requested-With","XMLHttpRequest");
xhr.setRequestHeader("Access-Control-Allow-Origin","https://cors-anywhere.herokuapp.com/");
xhr.send();
}
//a function that extracts a random image from the given url and inserts it into the current page
function injectImagesFrom(url)
{
getSubPageSource(url, function(data)
{
//trim source code to body only
var bodySource = data.substr(data.indexOf('<body ')); //find body tag
bodySource = bodySource.substr(bodySource.indexOf('>') + 1); //finish removing body open tag
bodySource = bodySource.substring(0, bodySource.indexOf('</body')); //remove body close tag
//create an element to insert external source
var workingNode = document.createElement("span");
//insert source
workingNode.innerHTML = bodySource;
//find all images
var allImages = workingNode.getElementsByTagName('img');
//any images?
if (allImages.length > 0)
{
//grab random image
var randomIndex = Math.floor(Math.random() * allImages.length);
var randomImage = allImages.item(randomIndex);
//add border
randomImage.setAttribute('style', 'border: 1px solid red;');
//restrain size
randomImage.setAttribute('width', 200);
randomImage.setAttribute('height', 200);
//check if parent node is a link
var parentNode = randomImage.parentNode;
if (parentNode.tagName == 'A')
{
//yes, use it
var imageURL = parentNode.getAttribute('href');
}
else
{
//no, use image's page's url
var imageURL = url;
}
//add a link pointing to where image was taken from
var aLink = document.createElement("a");
aLink.setAttribute('href', imageURL);
aLink.setAttribute('target', '_blank');
//insert image into link
aLink.appendChild(randomImage);
/* INSERT INTO PAGE */
//insert image in beginning of body
document.body.insertBefore(aLink,document.body.childNodes[0]);
//remove working node children
while (workingNode.firstChild) {
workingNode.removeChild(workingNode.firstChild);
}
//unreference
workingNode = null;
}
});
}
for (var ii = 0, nn = pages.length; ii < nn; ii++)
{
injectImagesFrom(pages[ii]);
}
I have two domains: one for selling products, https://sellproducts.com, and the other for product documentation, https://docs.product.wiki.
On https://sellproducts.com I have a page called docs (https://sellproducts.com/docs) in which I use an iframe to display content from https://docs.product.wiki:
<iframe id="docs" src="https://docs.product.wiki/" frameborder="0">
</iframe>
The https://docs.product.wiki site has many pages, for example:
https://docs.product.wiki/intro.html
https://docs.product.wiki/about.html
I want to use JavaScript or jQuery to get the current URL from the iframe and display it in the browser, like "https://sellproducts.com/docs?page=intro", when a page is clicked or reloaded.
If you can put some JS on both sides, it's possible.
In order, here is the logic you need:
Create/Get the iframe element -> document.createElement
Parse the URL -> URLSearchParams
Catch click events on the iframe's links -> addEventListener
Manage the main window location -> window.top and window.location
The following could be a good start.
On your https://sellproducts.com/docs page, put this code:
window.onload = function(e) {
const docsUrl = 'https://docs.product.wiki/';
const queryString = window.location.search; //Parse URL to get params like ?page=
let iframe = document.querySelector('iframe'); //If an iframe already exists, use it
const createdIframe = !iframe; //Remember whether we had to create it
if (createdIframe)
iframe = document.createElement('iframe'); //Otherwise create the iframe element
iframe.src = docsUrl; //Set default URL
iframe.frameBorder = 0; //Set frameborder 0 (optional)
if (queryString !== '') {
const urlParams = new URLSearchParams(queryString); //Convert to URLSearchParams, easy to manipulate
const page = urlParams.get('page'); //Get the desired param value, here "page"
if (page)
iframe.src = docsUrl + page + '.html'; //So ?page=intro gives https://docs.product.wiki/intro.html
}
if (createdIframe)
document.body.appendChild(iframe); //Append the newly created iframe to the DOM
}
And on the https://docs.product.wiki side, put this code in your global template (it must be on all pages):
let links = document.querySelectorAll('a'); //Get all link tag <a>
links.forEach(function(link) { //Loop on each <a>
link.addEventListener('click', function(e) { //Add click event listener
let target = e.target.href; //Get href value of clicked link
let page = target.split("/").pop(); //Split it to get the page (eg: page.html)
page = page.replace(/\.[^/.]+$/, ""); //Remove .html so we get page
let currentHref = window.top.location.href; //Get the current windows location
//console.log(window.location.hostname+'/docs?page='+page);
window.top.location.href = 'https://sellproducts.com/docs?page='+page; //Set the current window (not the frame) location
e.preventDefault();
});
});
Feedback appreciated :)
I have been practicing my vanilla JS/jQuery skills today by throwing together a newsfeed app using the News API.
I have included a link to a jsfiddle of my code here. However, I have removed my API key.
On first load of the page, when the user clicks on an image for a media outlet (e.g. 'techcrunch'), an addEventListener passes the image's id attribute to the API endpoint 'https://newsapi.org/v1/articles' and runs a GET request, which then proceeds to create div elements with the news articles' content.
However, after clicking 1 image, I cannot get the content to reload unless I reload the whole page manually or with location.reload().
On clicking another image, the new GET request runs and returns results, which I can see because I am console logging them.
I am looking for some general guidance on how to get the page content to reload with each new GET request.
Any help would be greatly appreciated.
Many thanks for your time.
API convention:
e.g https://newsapi.org/v1/articles?source=techcrunch&apiKey=APIKEYHERE
EventListener:
sourceIMG.addEventListener('click', function() {
$.get('https://newsapi.org/v1/articles?source=' + this.id + '&sortBy=latest&apiKey=APIKEYHERE', function(data, status) {
console.log(data);
latestArticles = data.articles;
for (i = 0; i < latestArticles.length; i++) {
//New Article
var newArticle = document.createElement("DIV");
newArticle.id = "article";
newArticle.className += "article";
//Title
//Create an h1 Element
var header = document.createElement("H1");
//Create the text entry for the H1
var title = document.createTextNode(latestArticles[i].title);
//Append the text to the h1 Element
header.appendChild(title);
//Append the h1 element to the Div 'article'
newArticle.appendChild(header);
//Author
var para = document.createElement("P");
var author = document.createTextNode(latestArticles[i].author);
para.appendChild(author);
newArticle.appendChild(para);
//Description
var description = document.createElement("H4");
var desc = document.createTextNode(latestArticles[i].description);
description.appendChild(desc);
newArticle.appendChild(description);
//Image
var image = document.createElement("IMG");
image.src = latestArticles[i].urlToImage;
image.className += "articleImg";
newArticle.appendChild(image);
//Url link
//Create a href element
var a = document.createElement('a');
var link = document.createElement('p');
var innerLink = document.createTextNode('Read the full story ');
link.appendChild(innerLink);
a.setAttribute("href", latestArticles[i].url);
a.innerHTML = "here.";
link.appendChild(a);
newArticle.appendChild(link);
//Append the Div 'article' to the outer div 'articles'
document.getElementById("articles").appendChild(newArticle);
}
});
}, false);
I tried your fiddle using an API key. It is working for me, in that new content is appended to the previous content in the #articles div. If I'm understanding your question, when a news service image is clicked you would like only that news service's articles to show. To do that you would need to clear the contents of #articles before appending new content.
To do that with plain JS, you could use the following above your for loop:
// Removing all children from an element
var articlesDiv = document.getElementById("articles");
while (articlesDiv.firstChild) {
articlesDiv.removeChild(articlesDiv.firstChild);
}
for (i = 0; i < latestArticles.length; i++) {...
Full disclosure, I added the variable name 'articlesDiv' but otherwise the above snippet came from https://developer.mozilla.org/en-US/docs/Web/API/Node/removeChild
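Since the page is already using jQuery for the $.get call, the same clearing step can also be written as a one-liner, equivalent to the removal loop above:
//jQuery equivalent: remove all children of #articles before appending the new articles
$('#articles').empty();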
I'm working on a WordPress website, and I'm pretty new to code, so after struggling for a whole day to make it work, I gave up and decided to ask someone.
I used dynamic meta tags for all the Open Graph and Twitter Card properties except the image.
All the website pages have a container with an article inside; some articles have an image, and some have none. For the ones with no image, I want to use the company logo.
So I want to use JavaScript to add og:image and twitter:image to WordPress, but I can't get past one error that says:
document.getElementsByTagName(" ") is not a function
//add image meta tag
addImageMetaTag();
function addImageMetaTag() {
var imgHolder = document.getElementsByTagName("article")[0];
var image = imgHolder.getElementsByTagName("img");
var source;
function getSource() {
if (image.length != 0) {
var source = image[0].getAttribute("src");
} else {
var source = "http://link_to_my_default_image.png";
}
return source;
};
var meta = document.createElement('meta');
meta.setAttribute("property", "og:image");
meta.content = source;
meta.name = "twitter:image";
document.getElementsByTagName('head')[0].appendChild(meta);
};
Gerald, Open Graph and Twitter Card tags are used to create the shared snippet, so it's got nothing to do with crawling.
You were right in your other answer, there were two errors: indeed, I used "getSource" instead of "source". But WordPress still wouldn't find the "article" element, because the content loads after the header, so I got "var content = undefined". I got the function working by changing it to this:
// add image meta tag
window.addEventListener("load", function() {
addImageMetaTag();
});
function addImageMetaTag() {
var content = document.getElementById("primary");
var images = content.querySelectorAll("img");
var source;
if (images.length != 0) {
source = images[0].getAttribute("src");
}
else {
source = "http://link_to_my_default_image.png";
}
var meta = document.createElement('meta');
meta.setAttribute("property","og:image");
meta.content = source;
meta.name = "twitter:image";
document.getElementsByTagName('head')[0].appendChild(meta);
};
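One small aside, not required for the fix above: Open Graph parsers look for property="og:image" while Twitter Card parsers look for name="twitter:image", so a variation you might consider (a sketch, reusing the same source variable as above) is to append two separate meta elements instead of one combined element:
//sketch: one meta element per consumer instead of a single combined one
var ogMeta = document.createElement('meta');
ogMeta.setAttribute('property', 'og:image');
ogMeta.content = source;
document.getElementsByTagName('head')[0].appendChild(ogMeta);
var twitterMeta = document.createElement('meta');
twitterMeta.name = 'twitter:image';
twitterMeta.content = source;
document.getElementsByTagName('head')[0].appendChild(twitterMeta);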
I have a JavaScript function which first fetches the value of a label element, which is an ID for a database entry. These IDs then get sent to an ASP page, which fetches the images' save locations from the database.
The save location information for each selected image is then sent to an ASP.NET page, which splits each image save location and rotates the images accordingly. This all works perfectly; my only issue is that the images will not update until I reopen the HTA file.
Refreshing does not work, as seen in the video.
The files HAVE rotated, as you can see in the video at the bottom.
Here is the link to the video!
Here is my JavaScript which does the rotating:
function doRotate(dir,obj)
{
var http = getHTTPObject();
var http2 = getHTTPObject();
ids = fetchSelection().toString();
//Make button animate, visual aid that it is working
obj.src = "http://localhost/nightclub_photography/images/buttons/"+dir+"_animated.gif";
http.onreadystatechange = function()
{
//Fetch the save location of selected images
if (http.readyState == 4 && http.status == 200) {
//Create URL string to send to rotate script
var locs = http.responseText;
locs = locs.split(",");
//Start of URL
var url = "http://localhost/nightclub_photography/net/rotate_script.aspx?dir=" + dir;
for (var i=0; i < locs.length-1; i++)
{
url = url + "&t=" + locs[i];
}
//Add random math
url = url + "&k=" + Math.random();
http2.onreadystatechange = function()
{
if (http2.readyState == 4 && http2.status == 200)
{
//Stop animated button
obj.src = "http://localhost/nightclub_photography/images/buttons/"+dir+".png";
//Split id's
var idsSplit = ids.split(",");
for (var k=0; k < idsSplit.length; k++) {
reapplyStyle(idsSplit[k]);
}
}
}
http2.open("GET", url);
http2.send();
}
}
http.open("GET", "http://localhost/nightclub_photography/asp/returnDatabaseData.asp?ids="+ids+"&k=" + Math.random());
http.send();
}
I also have a function which reapplies (well, should reapply) the background image, which should reload the rotated images. However, since reloading the page doesn't work, I can't see that function working either, but that's a different issue. Here is the function:
function reapplyStyle(id) {
var background = doc(id+"_label").style.backgroundImage;
doc(id+"_label").style.backgroundImage = background;
}
If it is a caching problem, have you tried making the image URL unique? Try something like this:
ts = new Date().getTime();
obj.src = "http://localhost/nightclub_photography/images/buttons/"+dir+".png?timestamp=" + ts;
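The same idea could also be applied to the rotated images themselves inside reapplyStyle, so the browser is forced to refetch them. A rough sketch (assuming doc() is your existing element lookup helper and the backgroundImage value is a plain url(...) without an existing query string):
//sketch: cache-bust the label's background image instead of re-setting the same value
function reapplyStyle(id) {
    var el = doc(id + "_label");
    //pull the raw URL out of e.g. url("http://.../photo.jpg")
    var match = el.style.backgroundImage.match(/url\(["']?([^"')]+)["']?\)/);
    if (match) {
        //re-set it with a unique query string so the browser refetches the rotated file
        var ts = new Date().getTime();
        el.style.backgroundImage = 'url("' + match[1] + '?timestamp=' + ts + '")';
    }
}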
I'm using a script to retrieve content from an external website, and the data is returned with certain elements stripped out so that they don't interfere with the page I'm pulling the data into. However, when I view my page with the error console open, I am receiving 404s for all the images. Is there any way I can strip out all the images from the script so that I'm just getting the text (which is still in its formatted tags)?
$(document).ready(function () {
var container = $('#target');
function doAjax(url) {
if (url.match('^http')) {
$.getJSON("http://query.yahooapis.com/v1/public/yql?"
+ "q=select%20*%20from%20html%20where%20url%3D%22"
+ encodeURIComponent(url)
+ "%22&format=xml'&callback=?",
function (data) {
if (data.results[0]) {
var fullResponse = $(filterData(data.results[0])),
justTable = fullResponse.find("table");
container.append(justTable);
} else {
var errormsg = '<p>Error: could not load the page.</p>';
container.html(errormsg);
}
});
} else {
$('#target').load(url);
}
}
function filterData(data) {
data = data.replace(/<?\/body[^>]*>/g, '');
data = data.replace(/[\r|\n]+/g, '');
data = data.replace(/<--[\S\s]*?-->/g, '');
data = data.replace(/<noscript[^>]*>[\S\s]*?<\/noscript>/g, '');
data = data.replace(/<script[^>]*>[\S\s]*?<\/script>/g, '');
data = data.replace(/<script.*\/>/, '');
return data;
}
doAjax('mywebsite');
});
Option 1:
You can strip the images by adding this line to the filterData() function:
data = data.replace(/<img[^>]*>/g, '');
This will replace all strings starting with <img and then containing zero or more characters other than > with an empty string.
Option 2:
You can use jQuery to remove the elements. Insert this before container.append():
justTable.find("img").remove();
This will find all img elements inside the table and remove them.
Alternative:
Some images are not available because their URL is relative. If you have <img src="logo.png"> on http://example.com/page.html, then the browser loads the image from example.com/logo.png. If you include the same <img> tag in your page http://own.com/my.html, then the browser will try to load own.com/logo.png.
You can fix this issue by changing the src attribute of the images to include the domain you retrieved the page from.
Example (not fully tested, may need modifications):
// copy everything for url except the string after last "/" character
// so if url == http://example.com/page.html then path == http://example.com/
var path = url.match("(.+/)[^/]+$")[1];
// modify all local images (value of src attribute not starting with "http://")
justTable.find('img').not('[src^="http://"]').attr('src', function() {
return path + $(this).attr('src');
});