When I fetch a website's HTML using fetch, I need to figure out a way to get an element that only loads after 3-4 seconds. How should I attempt this? My code is currently:
const body = await fetch('websiteurl')
let html = await body.text();
const parser = new DOMParser();
html = parser.parseFromString(html, 'text/html');
html.getElementById('pde-ff'); // null
I can assure you this element exists: if I go to the website and run that last line with html replaced by document, it works. But I need to wait for the website to load. Any ideas?
Pretty much, your code isn't running at the correct time: you need to wait for the page to load, and there are a few ways to do this.
Use jQuery: $(function() { alert("It's loaded!"); });
Or use vanilla JS: window.addEventListener('load', function () { alert("It's loaded!"); });
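If the element is inserted by a script some seconds after load, the load event alone may still fire too early. One hedged option, in a page context such as a userscript, is a small polling helper; this is a minimal sketch of my own (the id "pde-ff" is taken from the question, and the injectable doc parameter is only there so the helper can be exercised outside a browser):

```javascript
// Minimal sketch: poll until an element with the given id exists, then resolve with it.
// `doc` defaults to the global document in a browser; it is injectable so the
// helper can also be tested without a real DOM.
function waitForElement(id, doc = globalThis.document, interval = 250, timeout = 10000) {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    const timer = setInterval(() => {
      const el = doc.getElementById(id);
      if (el) {
        clearInterval(timer);
        resolve(el);
      } else if (Date.now() - started > timeout) {
        clearInterval(timer);
        reject(new Error(`timed out waiting for #${id}`));
      }
    }, interval);
  });
}

// usage (browser): const el = await waitForElement('pde-ff');
```

Note that this only helps inside a live page. A document produced by DOMParser from fetched HTML never runs the page's scripts, so an element that is added by JavaScript will not appear in it no matter how long you wait.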
Related
I have some HTML pages with the same footer. With JavaScript, and only JavaScript, could I import another HTML page into each of them?
Here's how you could use just JavaScript to add a footer to your page.
2022 code, using fetch and insertAdjacentHTML:
async function addFooter() {
  const resp = await fetch("footer.htm");
  const html = await resp.text();
  document.body.insertAdjacentHTML("beforeend", html);
}
Original 2011 code, using XMLHttpRequest and innerHTML:
var ajax = new XMLHttpRequest();
ajax.addEventListener("load", function () {
  document.body.innerHTML += ajax.responseText;
});
ajax.open("GET", "footer.htm");
ajax.send();
The 2011 code will still work in all browsers today, but fetch is more intuitive, and allows you to avoid coding an event handler callback. insertAdjacentHTML is also available for all browsers, so you could use that or innerHTML with either example. Only fetch is new, and won't work in IE without a polyfill.
As above, one method is to use jQuery load. I happened to be doing the exact same thing now, so will post a quick example.
Using jQuery:
$("#yourDiv").load('readHtmlFromHere.html #readMe');
And your readHtmlFromHere.html page would consist of:
<div><div id="readMe"><p>I'm some text</p></div></div>
You can use ajax to return a whole HTML page. If you wanted to replace the whole page, you could replace the body tag and all its children currently on the page with the body tag returned from the ajax call.
If you wanted to replace just a section, you'd have to write a server-side script to create that section, then use ajax as above but replace just that element rather than the whole page.
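As a sketch of the "replace the body" idea, here is a naive string helper of my own that pulls the body markup out of fetched HTML, which you could then assign to document.body.innerHTML. It assumes a single, well-formed body element and is illustrative only:

```javascript
// Naive sketch: extract the inner markup of <body> from an HTML string.
// Assumes one well-formed <body> element; returns the whole input if none is found.
function extractBody(html) {
  const open = html.indexOf("<body");
  const close = html.lastIndexOf("</body>");
  if (open === -1 || close === -1) return html;
  const start = html.indexOf(">", open) + 1;
  return html.slice(start, close);
}

// usage (browser):
// document.body.innerHTML = extractBody(await (await fetch(url)).text());
```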
Along with what @Alex mentioned, jQuery has a .load() method that you can use to fetch a specific portion of a page (see the Loading Page Fragments heading on that page). You specify the URL you want to retrieve along with a selector (if you wanted only a specific <div>'s contents, for instance).
Following this answer's example (one of the answers in this question), I made this little reusable function:
/**
 * Render the array of html files to the
 * selected container.
 * @param {*} pages - array of html file names
 * @param {*} container - HTML element name
 */
function render(pages, container) {
  const template = document.createElement("template");
  var ajax = new XMLHttpRequest();
  pages.forEach(element => {
    // this is the route where the files are stored;
    // the third argument (false) makes the request synchronous
    ajax.open("GET", `./view/shared/${element}.html`, false);
    ajax.send();
    template.innerHTML += ajax.responseText;
  });
  document.querySelector(container).append(template.content);
}
export { render };
Which you can use in your index.js file, like so:
import { render } from "./tools/render.js";
var headerContent = ["siteName", "navbar"];
render(headerContent, "header");
This renders the HTML files siteName.html and navbar.html into the <header> tag in the root index.html file of the site.
NOTE: This function works on localhost, but for whatever reason (which I still have to fix; I'll let you know when I do) it does not work correctly on GitHub Pages.
You could do a server-side include, depending on your webserver.
But the quickest way would probably be to create a JavaScript file that uses document.write or similar to output the HTML syntax,
and then just include the created JavaScript file the normal way.
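A minimal sketch of that pattern, assuming a hypothetical footer.js file; building the markup in a function keeps it reusable:

```javascript
// footer.js — legacy include pattern via document.write (sketch; file name is hypothetical).
function footerHtml() {
  return '<div id="footer"><p>&copy; Example Inc.</p></div>';
}

// In a browser, document.write emits the footer at the <script> tag's position
// during initial parsing (the guard lets this file load outside a browser too):
if (typeof document !== "undefined") {
  document.write(footerHtml());
}
```

You would then include it with <script src="footer.js"></script> on each page. Keep in mind that document.write only works while the page is being parsed, which is one reason this pattern is considered legacy.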
more info at:
http://webdesign.about.com/od/ssi/a/aa052002a.htm
You definitely could, but if all you're doing is templating, I recommend doing it on the server.
fetch("MyHTMLFile.html").then((response) => {
  response.text().then((text) => {
    targetHTMLElement.innerHTML = text;
  });
});
My question is similar to this one about Python, but, unlike it, mine is about JavaScript.
1. The problem
I have a large list of web page URLs (about 10k) in plain text;
For each page URL (or for the majority of them) I need to find some metadata and a title;
I want NOT to load full pages, only everything before the </head> closing tag.
2. The questions
Is it possible to open a stream, load some bytes and, upon getting to the </head>, close stream and connection? If so, how?
Py's urllib.request.Request.read() has a "size" argument in number of bytes, but JS's ReadableStreamDefaultReader.read() does not. What should I use in JS then as an alternative?
Will this approach reduce network traffic, bandwidth usage, CPU and memory usage?
Answer for question 2:
Try using node-fetch's fetch(url, {size: 200})
https://github.com/node-fetch/node-fetch#fetchurl-options
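With the WHATWG Streams API (available in browsers, and in Node 18+ where fetch responses expose res.body as a ReadableStream), you can also read chunks yourself and cancel the stream once </head> appears, which is roughly the Python read(size) loop the question asks about. A sketch (the function and its name are my own, not from an existing library):

```javascript
// Sketch: consume a ReadableStream of bytes and stop as soon as </head> is seen.
// Cancelling the reader releases the underlying connection early.
async function readHeadFromStream(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let html = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    html += decoder.decode(value, { stream: true });
    const end = html.indexOf("</head>");
    if (end !== -1) {
      await reader.cancel(); // close the connection without downloading the rest
      return html.slice(0, end + "</head>".length);
    }
  }
  return html; // no </head> found; return whatever was received
}

// usage: const head = await readHeadFromStream((await fetch(url)).body);
```

Because the rest of the body is never read, this should reduce bandwidth and memory usage per page, though the exact savings depend on chunk sizes and how early the server flushes the head.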
I don't know of a method by which you can get only the head element from a response, but you can load the entire HTML document and then parse the head out of it, even though that is less efficient than stopping early. I made a basic app using axios and cheerio to get the head element from an array of URLs. I hope this might help someone.
const axios = require("axios")
const cheerio = require("cheerio")
const URLs = ["https://stackoverflow.com/questions/73191546/get-only-html-head-from-url"]
for (let i = 0; i < URLs.length; i++) {
  axios.get(URLs[i])
    .then(html => {
      const document = html.data
      // get the start index and the end index of the head
      // (assumes a plain <head> tag with no attributes)
      const startHead = document.indexOf("<head>")
      const endHead = document.indexOf("</head>") + 7
      // get the head as a string
      const head = document.slice(startHead, endHead)
      // load cheerio
      const $ = cheerio.load(head)
      // get the title from the head which is loaded into cheerio
      console.log($("title").html())
    })
    .catch(e => console.log(e))
}
I have a problem loading dynamic JS in my script, so here's the case: I plan to build an Android app with a local webview, something like webView.loadUrl("file:///android_asset/filename.html");
Everything works fine while it's only an HTML file, but the problem comes when I need to read a local JS file containing array data that needs to be read by another JS file.
To make it clear,
I have 100+ files (data_1.js, data_2.js, etc.) that contain something like
data = [{
no1:"xxx",
no2:"xxx",
no3:"xxx",
..
}]
Those data files will be read by one of my JS scripts and displayed in the HTML file. Basically I only need one data file each time the page opens: when I open file://.../folder/file.html?no=1 it only needs to read data_1.js, and for file://.../folder/file.html?no=2 it only needs to read data_2.js.
This is what I try to do.
#1
Using getScript("assets/data/data_"+number+".js");
This works when using a local server: when I access it via localhost/folder/file.html?no=1 it works, but when I access file://..../folder/file.html?no=1 the script does not load because CORS blocks it.
#2
Using ajax; the result is the same as with #1.
#3
Add all data_x.js directly in file.html
<script src="folder/data_1.js"></script>
<script src="folder/data_2.js"></script>
<script src="folder/data_3.js"></script>
It works when we access file://..../folder/file.html?no=1, but the page loads very slowly because it needs to include and load every data_x.js file (more than 100 of them).
Is there any other way to solve this problem?
Note: I don't want to connect to any server, because I want the app to be accessible offline, so all data will be included inside the app.
You can make use of URLSearchParams in combination with createElement, which is similar to your solution #3 but only loads the file named by the id parameter.
So you can also make use of: webView.loadUrl("file:///android_asset/filename.html?id=N");
<html>
<body>
  <script>
    const urlParams = new URLSearchParams(window.location.search),
          id = urlParams.get('id');
    if (id != null) {
      const script = document.createElement("script");
      script.src = `folder/data_${id}.js`;
      script.onload = function() {
        // do something ...?
      };
      document.body.appendChild(script);
    }
  </script>
</body>
</html>
EDIT:
At least on desktop it works fine.
I have a feature in my system that transcribes voice to text using an external library.
This is what the library renders (screenshot omitted): dynamically generated textareas.
What I need is really simple: to get the text from the generated textareas.
The textareas are rendered without any name or id, so I can only access them by class in the Google Chrome console. Whenever I try to get them by class in my JavaScript code, I get an array of 0 elements.
I think the problem is that this library renders a new #document and I'm not able to get its content in my $(document).ready function, because that is scoped to the 'parent' document.
Any thoughts on this? Thank you.
I hope the code below helps.
// Get your iframe by id or some other way
let iframe = document.getElementById("myFrame");
// After the iframe has been loaded
iframe.onload = function() {
  // Get the element inside your iframe
  // (this only works if the iframe is same-origin;
  //  cross-origin access will throw a SecurityError)
  // There are a lot of ways to do it
  // It is good practice to store DOM objects in variables that start with $
  let $elementByTag = iframe.contentWindow.document.getElementsByTagName("p")[0];
  let $elementById = iframe.contentWindow.document.getElementById("elementId");
  let $elementByClass = iframe.contentWindow.document.getElementsByClassName("classHere");
  let $elementBySelector = iframe.contentWindow.document.querySelector("#dad .classname");
  // After getting an element, extract its text/html
  let text = $elementByTag.innerText;
  let html = $elementByTag.innerHTML;
};
Most of the examples I have found on the web involve loading a URL.
However, if I simply have a string that contains SVG or HTML and I want to load it into a DOM for manipulation, I cannot figure out how to do it.
var fs=require('fs')
var content = fs.read("EarlierSavedPage.svg")
// How do I load content into a DOM?
I realize that, in this example where a local file is being read, there is a workaround for reading the local file directly, but I am interested more generally in whether a page can be loaded from a string.
I have already looked at the documentation but did not see anything obvious.
The default page in PhantomJS is comparable to about:blank and is essentially
<html>
<body>
</body>
</html>
This means that you can directly add your svg to the DOM and render it. It seems that you have to render it asynchronously to give the browser time to actually compute the svg. Here is a complete script:
var page = require('webpage').create(),
    fs = require('fs');
var content = fs.read("EarlierSavedPage.svg");
page.evaluate(function(content) {
  document.body.innerHTML = content;
}, content);
setTimeout(function() {
  page.render("EarlierSavedPage.png"); // render or do whatever
  phantom.exit();
}, 0); // PhantomJS is single threaded, so you need to do this asynchronously, but immediately
When you load an HTML file into content, you can directly assign it to the current DOM (to page.content):
page.content = content;
This would likely also need some asynchronous decoupling like above.
The other way would be to actually load the HTML file with page.open:
page.open(filePathToHtmlFile, function(success) {
  // do something like render
  phantom.exit();
});