I'm building a caching mechanism for some webpages, so that if a page is already stored in IsolatedStorage it doesn't need to be fetched online. However, when I load the page, the content (CSS, JS, images) isn't being loaded, even though the files are already at the appropriate paths and the paths inside the files all seem correct.
When I try a very simple HTML file with just an image in it, the content loads properly.
The more complex webpages have multiple sub-directories, but all the paths to the content seem to point to the right locations.
Any ideas on what may be happening? I've tried pretty much everything I've found on Google, with no success. Is this just another bug in the WebBrowser control on WP8 that I'm not aware of? I'd really appreciate any suggestions.
Thank you in advance!
I managed to fix the problem. Apparently the main HTML file needs to be at the root of the folder structure. It would be nice to have that information on the official MSDN page. I hope this at least helps anyone who runs into the same problem when loading local pages.
Thanks!
I know this has been asked many times, but I can't find an answer that works.
I'm making a site (https://tallerthanshort.github.io/bottom.gg) and it won't load any of the JS (some React) or CSS.
All the files can be found in the GitHub repo (https://github.com/TallerThanShort/bottom.gg), and everything works when running the site locally (either by simply opening the HTML file or by running a local server), but the GitHub Pages site simply returns a 404 for every file.
I have changed all of my script tags so they have an opening and a closing tag, and also removed the initial / from /_next/ so that GitHub Pages doesn't look in the wrong place, but it still hasn't fixed my issue.
Any help would be greatly appreciated.
Edit: I found that having all the files in the same location as the root HTML file works, but I want to keep separate folders and files to make things neater.
I'm new to the world of HTML. I wanted to create a local copy of a website to play around with, so I copied, pasted and saved the HTML source, saved the webpage (with all the CSS, JavaScript, .ico files, etc.), and placed the HTML file in the same directory. However, when I opened the HTML file it was broken and all the styles were gone. Why is that? Sincere apologies if this is a repeat; I didn't really know what to search for when looking for an answer. Thank you!
Without going into too much detail: most modern websites are incredibly complex (JS, CDNs, web fonts, cross-domain content, etc.) and are unlikely to look like anything reasonable when copied and pasted to a local location.
If you're using Chrome, I'd try the More Tools -> Save Page As feature, which is slightly smarter about preserving dependencies, and even then it might not work very well.
Your best bet is to analyze the site inside the browser, e.g. with Chrome's Developer Tools, and apply what you learn to a local stack you build from scratch.
It's kind of impossible to know what's going on without seeing the code or what (if any) errors are being produced by your browser.
I would suggest opening the JavaScript console in your browser to see if there are any errors. My guess is that the JS and CSS files are referenced with absolute paths, and because you're serving them from the filesystem you need relative paths: for example, href="/css/styles.css" resolves against the original site's root, while href="css/styles.css" resolves relative to your local HTML file. But like I said, you're going to have to debug this yourself.
The HTML file doesn't always contain everything it needs to render the page. Often the CSS is stored in external files. The HTML file you copied should have a reference to the CSS that looks like this:
<link rel="stylesheet" href="styles.css">
It sounds like the stylesheets and scripts in the copied HTML are pointing to files that don't exist in your copied environment.
Could you post the way you have the files structured?
It could be that you have the filesystem set up differently than on the original webpage, e.g.:
wwwroot/
|-- index.html (which has stylesheet references to "css/styles.css")
|-- app.js
|-- styles.css
^ here the index.html file would be looking for styles.css in a folder called "css", but the styles.css file is not in that location.
(I'm aware this is not an actual answer, and may be more appropriate as a comment. I don't have enough reputation for commenting, so my apologies if I'm not following the StackOverflow ways of doing things.)
This is most likely happening because the webpage uses external resources that are not present on your local system.
You can check this from the developer console in Chrome or whichever browser you are using.
Press F12 while viewing the webpage; this will open the Developer Tools.
Click on the Console tab; here you can see all the errors and warnings that could cause an issue on that page.
Errors are shown in red and warnings in orange.
In a WebView, can you remove HTML elements from a live website before it is shown to the user?
I've been looking at a bunch of Stack Overflow questions about this, but I realized they all dealt with locally hosted web pages inside the app. None of their solutions worked for me.
Any help would be appreciated.
First read about building apps with a WebView and how to use JavaScript in it. Then try to accomplish your goal using JavaScript, because it cannot be done with the WebView alone on Android. Good luck!
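For illustration, a minimal sketch of that JavaScript-injection approach, assuming API 19+ for evaluateJavascript; the .unwanted-banner selector and the URL are placeholders:

import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebView;
import android.webkit.WebViewClient;

public class CleanWebViewActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        setContentView(webView);
        webView.getSettings().setJavaScriptEnabled(true);
        webView.setWebViewClient(new WebViewClient() {
            @Override
            public void onPageFinished(WebView view, String url) {
                // Remove unwanted elements once the DOM is ready (requires API 19+).
                view.evaluateJavascript(
                        "document.querySelectorAll('.unwanted-banner')"
                                + ".forEach(function(e){ e.remove(); });",
                        null);
            }
        });
        webView.loadUrl("https://example.com");
    }
}

Because the elements are only removed once the page has finished loading, they can still flash briefly; the approach in the next answer avoids that.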
One easy way of doing that is to first make an HTTP request to load the HTML from the website, then edit the data (remove whatever you don't want from it), and finally display it in the WebView with loadData.
However, this may not work as you expect, since the CSS or JavaScript for the page you want to load may be in separate files.
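For illustration, a rough sketch of that flow, assuming a plain HttpURLConnection and a regex as a stand-in for whatever editing you actually need (the class name, the ad-banner pattern and the hard-coded charset are assumptions):

import android.webkit.WebView;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PreFilteredLoader {

    // Fetch the page off the main thread, edit the markup, then hand it to the WebView.
    public static void loadFiltered(final WebView webView, final String pageUrl) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(pageUrl).openConnection();
                    BufferedReader reader = new BufferedReader(
                            new InputStreamReader(conn.getInputStream(), "UTF-8"));
                    StringBuilder html = new StringBuilder();
                    String line;
                    while ((line = reader.readLine()) != null) {
                        html.append(line).append('\n');
                    }
                    reader.close();

                    // Edit the data before it is ever rendered.
                    final String edited = html.toString()
                            .replaceAll("(?s)<div class=\"ad-banner\".*?</div>", "");

                    // WebView methods must be called on the UI thread.
                    webView.post(new Runnable() {
                        @Override
                        public void run() {
                            webView.loadDataWithBaseURL(
                                    pageUrl, edited, "text/html", "UTF-8", pageUrl);
                        }
                    });
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
}

Passing the original page URL as the base URL to loadDataWithBaseURL (rather than using plain loadData) lets relative references to external CSS, JavaScript and images keep resolving against the live site, which softens the caveat above.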
We have a Sitecore installation, and in our environment we are seeing 404 errors when our site is requesting the following files:
sptier0.js
sptier0-ajax.js
sptier0-window.js
My Google-fu doesn't turn up much.
What are these files, and what is their purpose?
Thanks!
I have found that these scripts are most probably related to SharePath, a performance monitoring tool from Correlsense (http://www.correlsense.com/product/).
I would start the investigation by viewing the HTML source in a browser, finding the place where the scripts are injected, and then analyzing the surrounding code to work out how these links end up on the page.
These JavaScript files are indeed part of SharePath.
Specifically, they are added to existing application HTML pages in order to measure and report browser timing.
Your question, together with your comment to Vadim Dubovitsky, suggests that the tool was not properly uninstalled: the files were deleted, but they are still being referenced by (or injected into) the application code.
Disclaimer: I work at Correlsense.
I need to regularly send HTML pages to a client as standalone .html files with no external dependencies. The original pages are built with Node.js and Express, and they contain several libraries such as Highcharts.
I have done the preparation manually until now, which includes:
Transforming all images into blobs
Copying all external .js and .css into the page
Minifying where possible (standard libraries such as jQuery or Bootstrap...)
The result is a single .html file that can be opened without an internet connection and looks just like the original.
Is there any tool to do this automatically? If not, maybe I'll code it myself in Python. Do you have any recommendations?
Thanks
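If you do end up writing it yourself, here is a minimal sketch of those three steps. The original poster mentions Python, but the approach is the same in any language; this illustration uses Java with the Jsoup HTML parser, assumes all assets are local files referenced by relative paths, skips the minification step, and has no error handling:

import org.jsoup.Jsoup;
import org.jsoup.nodes.DataNode;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;

public class Inliner {
    public static void main(String[] args) throws Exception {
        Path base = Paths.get("report");  // folder holding the page and its assets
        Document doc = Jsoup.parse(base.resolve("index.html").toFile(), "UTF-8");

        // 1. Copy external stylesheets into <style> tags.
        for (Element link : doc.select("link[rel=stylesheet][href]")) {
            String css = new String(Files.readAllBytes(base.resolve(link.attr("href"))),
                    StandardCharsets.UTF_8);
            Element style = doc.createElement("style");
            style.appendChild(new DataNode(css));
            link.replaceWith(style);
        }

        // 2. Copy external scripts into inline <script> tags.
        for (Element script : doc.select("script[src]")) {
            String js = new String(Files.readAllBytes(base.resolve(script.attr("src"))),
                    StandardCharsets.UTF_8);
            script.removeAttr("src");
            script.empty();
            script.appendChild(new DataNode(js));
        }

        // 3. Turn images into base64 data URIs.
        for (Element img : doc.select("img[src]")) {
            Path imgPath = base.resolve(img.attr("src"));
            String mime = Files.probeContentType(imgPath);
            if (mime == null) mime = "image/png";  // crude fallback
            img.attr("src", "data:" + mime + ";base64,"
                    + Base64.getEncoder().encodeToString(Files.readAllBytes(imgPath)));
        }

        Files.write(base.resolve("standalone.html"),
                doc.outerHtml().getBytes(StandardCharsets.UTF_8));
    }
}

A real tool would also have to handle absolute URLs, @import rules inside the CSS, and fonts, which is where ready-made tools like the ones below come in handy.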
Monolith is a CLI tool for saving complete web pages as a single HTML file
See https://github.com/Y2Z/monolith
With apologies to the OP, as this answer is probably far too late for him, I'm posting it to help anyone with a similar problem:
HTTrack is an open-source project that does almost exactly what you described, though it doesn't work perfectly on some of the more peculiar JS.
It saves the page with most of the JS, the major images, and everything the page needs to appear complete. It can be configured to include or exclude all or part of the JS, images, and CSS.
This does not import all of the JS and other content into the HTML file, but neatly organizes all of the content into one folder and corrects all of the paths to make the folder portable.
It also seems to have trouble grabbing some external sources that are protected, but if it is your local site and it simply uses common scripts like jQuery, you should be fine. When I tested it, it correctly downloaded all of my local CSS and every valid external CSS library that I included, the jQuery and derivative scripts I was using, and the embedded images.
Just to save everyone a question, the program by default saves the downloaded websites to C:\My Web Sites.