RequireJS - Module name "wget" has not been loaded yet for context - javascript

I'm trying to run an HTML file with JavaScript inside of it. From that JavaScript I want to run a program called wget, which downloads information from a website. I've used it from CMD and in a batch file to get data from an XML file hosted locally on my computer. Now I'm trying to run wget from an HTML file (the panel.html for a Twitch panel extension), but I've been having a hard time just getting the thing to run.
I've been fiddling around, and the issue I now face is that when I open the HTML in the Chrome web browser, the inspector says: Module name "wget" has not been loaded yet for context.
Screenshots: the HTML, the error from Chrome, and wget installed from cmd.
I tried to read this for hours, and I don't understand it at all. In fact, I don't think the issue in that link is the same as mine, but it's what every search keeps turning up. I don't understand the whole dynamic thing, or why they even use the word "dynamic". It seems like they can't use require because it doesn't work against paths, but I'm not trying to define a path; I just want wget to work from this HTML file. I'm annoyed that I can't find anything on this exact problem. Every problem I've seen like this lacks a basic example of var wget = require('wget');
I just need what's in my JavaScript or HTML script tag to work.
I downloaded the require.js file and put it into the HTML as a script tag. From here it should just work. I already downloaded wget from cmd so it's on the computer somewhere. I also put the .exe in the same folder as the .html and the require.js.
Also, I read somewhere that another reason this doesn't work is that wget isn't "loaded" or something like that. In that case, can someone tell me how to "load" wget into the HTML or JavaScript first so that this error goes away? The basic wget example I found online is:
Here is the HTML file:
I'm not using a path; I just want wget to work from JavaScript. The wget example shows that it uses require. If I don't need require, then please provide an example of how I can use wget in JavaScript without require, or how to make this error go away.
I've been trying to figure out the best way to get the status information from my VLC player into an HTML file so I can use that as a Twitch extension on my Twitch channel. VLC media player exposes a status.xml when you run it as an HTTP server. I can only access localhost:8080/requests/status.xml from a browser because it has basic authentication where I have to put in my user name, so I use wget to supply my password and download status.xml back to my computer as another copy that isn't behind authentication. Then I can use that downloaded status.xml's information to post what music is playing on my VLC player. The problem is that I need wget to pull the information from localhost:8080/requests/status.xml from the HTML file, so that whenever it's run, status.xml gets updated with the new information and thus the HTML will post the most current thing playing on my VLC player.
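If the goal is just to read status.xml from the page itself, here is a minimal sketch of requesting it directly from browser JavaScript, with no wget and no RequireJS. It assumes VLC's web interface is reachable on localhost:8080 and that cross-origin requests to it are allowed from wherever the page is served; the user name, password, the now-playing element ID, and the XML tag names are placeholders/assumptions:

    // Fetch VLC's status.xml directly, supplying HTTP Basic auth.
    var username = '';                    // placeholder; often left blank for VLC
    var password = 'your-vlc-password';   // placeholder
    fetch('http://localhost:8080/requests/status.xml', {
        headers: { 'Authorization': 'Basic ' + btoa(username + ':' + password) }
    })
        .then(function (response) { return response.text(); })
        .then(function (xmlText) {
            // Parse the XML and pull out the currently playing title
            // (tag names assumed from VLC's usual status.xml layout).
            var doc = new DOMParser().parseFromString(xmlText, 'application/xml');
            var title = doc.querySelector('info[name="title"]');
            document.getElementById('now-playing').textContent =
                title ? title.textContent : 'Nothing playing';
        })
        .catch(function (err) { console.error('Could not load status.xml', err); });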

Related

Open HTML with CSS in JupyterLab tab with full formatting

I create an HTML document using Sphinx. When I click on the index.html file, it opens in a browser and looks like this. The look depends on some .css and .js files being executed:
If I open the same file from the JupyterLab file browser, it opens in a tab but looks much worse: the .css and .js are not applied, and images are not displayed. It looks like this:
Is there a way to get JupyterLab to execute the .css and .js and pass through any images linked in the text? JupyterLab is running on a remote server, so I don't have the option of having it create a new browser process on my local machine, because the files are remote.
Using JupyterLab within JupyterHub (old school install with conda, no docker and such)
I've been stuck on this HTML Preview issue for a few weeks.
I have the very same use case as you (Sphinx stuff for a team to work on their docs).
So far, no luck.
It may or may not work (depending on what, I'm not sure) if I'm using JupyterLab from a browser on the hypervisor hosting JupyterHub itself.
It won't work if I'm using JupyterLab from a browser on my client machine.
I tried to mess around with the c.NotebookApp.allow_remote_access = True parameter, with no luck:
I tried putting it in my profile ~/.jupyter/jupyter_notebook_config.py
I tried adding it to the general config file /path/to/conf/jupyterhub_config.py
=> Not sure of the right way to set this option on JupyterLab's JupyterHub install, nor whether it's even a relevant option...
Well, security-wise it's not, that's a given (^^'), but HTML Preview is an important feature for Sphinx users; hope someone can help with this...
I also looked into the nginx config, but you get the issue with or without the reverse proxy anyway...

Would like to write a javascript that helps me to find documents in a folder

I would like to write a JS for an offline website (located on a local Windows server or any other server). It's supposed to look for files like PDFs in several directories and display them as search results on the "website", which isn't a real website, since it's on a local server and not on the web. The PDF is supposed to open in the browser after clicking it. I already have this kind of search engine as a PHP file, which I wrote with some help from friends. I also want to share this site with other friends: basically I'll send them the whole folder with the HTML document (or the .php site), so they can use it to search for certain PDFs in the folder. It's like an offline wiki for medical research documents. But I don't want them to always have to install PHP on their local servers just to run my PHP search machine, so I need to rewrite it in JavaScript. Through Google and Stack Overflow I came across this solution https://www.codegrepper.com/code-examples/javascript/find+file+in+directory+javascript but it seems that this needs Node.js, so everyone would have to install Node.js, which is similar to installing PHP, I guess (I'm not familiar with Node.js). Also I'm not sure whether Node.js runs on a normal client or on a server that is not a web server.
How can I start with such a project? Is JavaScript the right approach to solve this?
Windows Search has the ability to search PDF contents when boosted by a PDF (index) iFilter. This means the user can instantly search for a new word or re-run a saved search; it took only a second to enter this search by hand (it actually took longer to save it for double-clicking next time). Just for illustration, I chose a word I knew was in one file and found it was also in two other PDFs.
The problem for your JS coding is how to use JavaScript to interface with Windows Search, since using Explorer I could not run that search on a remote server's shared library drive (I could see its contents, as in the second screen, but for searching I had to pull a local copy of the library down to My Documents), and that is where your JS skills come into play. Personally I would avoid JS and use a VLC method to share the view via a remote LAN server, or, simpler, keep a plain-text-indexed local copy of the remote files for download as and when required.
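That said, if the Node.js route mentioned in the question turns out to be acceptable, a minimal sketch of collecting PDFs from a folder tree might look like this (the root path is a placeholder, and a real search page would still need a way to serve these results to the browser):

    // find-pdfs.js - run with: node find-pdfs.js
    // Recursively lists all PDF files under a root folder so a search page
    // could display them as clickable results.
    const fs = require('fs');
    const path = require('path');

    function findPdfs(dir, results) {
        results = results || [];
        for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
            const full = path.join(dir, entry.name);
            if (entry.isDirectory()) {
                findPdfs(full, results);                       // descend into subfolders
            } else if (entry.name.toLowerCase().endsWith('.pdf')) {
                results.push(full);
            }
        }
        return results;
    }

    const docsRoot = 'C:/research-docs';                       // placeholder path
    console.log(findPdfs(docsRoot).join('\n'));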

scrape external website that requires javascript being triggered

Since PhantomJS is abandoned, I would like to know if there is any alternative method; e.g. chrome-webdriver wouldn't be a good solution, as it wouldn't be able to run on a remote host such as Heroku.
So, is it somehow possible to scrape an external website that requires JavaScript to be triggered first? Note that it should be possible to run it from a Node.js application.
I was getting ready to put together something for you, then I thought better of it and googled it. Check out this build script; it seems to answer your question exactly.
https://github.com/stomita/heroku-buildpack-phantomjs
Set up a git branch and pull it locally if you have to, but this should work. Basically, you need to download the binary and then remote in and run "heroku run 'phantomjs'" or "heroku run 'bin/phantomjs'".
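For the scraping side, a minimal PhantomJS script that loads a page, lets its JavaScript run, and prints the rendered HTML could look something like this (the URL and the fixed 2-second wait are placeholders; a real page may need a smarter readiness check):

    // scrape.js - run with: phantomjs scrape.js
    // Loads a JavaScript-heavy page, waits briefly for its scripts to render,
    // then prints the resulting HTML to stdout.
    var page = require('webpage').create();

    page.open('https://example.com/', function (status) {
        if (status !== 'success') {
            console.log('Failed to load page');
            phantom.exit(1);
        } else {
            // Give the page's own JavaScript some time to run (placeholder delay).
            window.setTimeout(function () {
                var html = page.evaluate(function () {
                    return document.documentElement.outerHTML;
                });
                console.log(html);
                phantom.exit();
            }, 2000);
        }
    });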

png files failing to load when opening html file directly, but they load when opening from webstorm

I've been working on a game in JavaScript for my CS course. When I open the document by hitting Run in WebStorm, it loads the game correctly; however, when I just try opening the HTML file from Finder, the webpage opens but none of the PNG files I'm using for the sprites load. I opened Inspect Element in Google Chrome, and the JavaScript files loaded correctly but all the PNG files are listed as canceled. This doesn't happen when the game is run from WebStorm (when I run it from WebStorm, all image files load properly).
When the game is opened directly from an HTML file (that's when I have the problem), Chrome lists the path of the HTML document as the web address, whereas when opened from WebStorm, it lists http://localhost:63342/CS%20Week%2010/CS105_Jessica.Davis_DogGame.html?_ijt=tmrr2fndgac82h07hlvt101gi4
How can I get around this issue so that when opening the HTML file from Finder it loads everything correctly? All image files are in the same directory as the HTML file.
Because of browser security, loading files like this might not work from a URL starting with file://.
What WebStorm is probably doing is making a local web server, so that instead of file:// you can use http://. If any website were able to load images from file://, then any webpage you visit could read any file on your computer and send it over the internet without your consent, so browsers usually enforce this restriction. So you'd need a server. If you are working on your own computer, you could make a local server just like WebStorm does and host your own files there, or host it on another service like GitHub Pages or CodePen.
Now, since all images are in the same directory, make sure that every time you call loadImage you use just the image's name and extension instead of something like /User/user/whatever_other_directory_you_have_it_under/image.png.
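For example, a minimal p5.js sketch along those lines, assuming the image is a file named sprite.png sitting next to the HTML file (the file name is a placeholder):

    // Minimal p5.js example: load an image by its relative name only,
    // so the path resolves wherever the project folder lives.
    let sprite;

    function preload() {
        sprite = loadImage('sprite.png');   // not an absolute /User/... path
    }

    function setup() {
        createCanvas(400, 400);
    }

    function draw() {
        background(220);
        image(sprite, 100, 100);            // draw the sprite at (100, 100)
    }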
Once you've done that, you can make a local web server for the project. To make a local server, open Terminal (an application under Utilities; you can Spotlight-search for it as well), type cd followed by a space, drag your project folder onto the Terminal window, and hit Enter. Then type python -m SimpleHTTPServer and wait until it says something like Serving HTTP on 0.0.0.0 port 8000 .... Then, taking the 0.0.0.0 and the 8000 you see in the example (yours may or may not be the same), go to your browser and type http://0.0.0.0:8000 (replacing the digits with whatever you got; note that this link doesn't work until you do that).
Images should load alright. If you need to stop the server you can go back to terminal and hit control+C.
Note that when presenting your p5 sketch, no one else will be able to see the website on their computers if you only make a local server. The local server is limited to the device that is running it (although if they make their own local server and have your project files, it should work just fine).
If you want the website hosted so that you can share a link with anybody in the world, you could use CodePen or GitHub Pages. If you go to codepen.io it should be self-explanatory, although you'd have to upload your images to some image-hosting site like Tumblr and add the URL source of those images to CodePen, or you could put everything on GitHub for even better results!
To use GitHub Pages you'd need to make a GitHub account (preferably with your username being whatever you want your page to be named). Make a repository named insert_username_here.github.io, add your files to the repository (try to keep all subdirectories and folders exactly as they are in your project folder), and after a minute or two go to http://insert_username_here.github.io to admire your brand new hosted webpage!

How to download CGI page as a html with all the stuff (js, images, css and etc) using WGET?

When I use Google Chrome's "Save as...", the CGI page is downloaded as a single HTML file plus a folder with all the required assets, so it displays correctly offline. I tried many wget parameters, but nothing worked as I expected; -p doesn't work either.
GNU Wget 1.14 built on linux-gnu.
When I use the -p option, I only get robots.txt and the .cgi file itself.
Could this be because of cookies? Is there an option that may fix this problem?
Is there another way? For example, could I pass some parameters to Chromium in the terminal?
Quoting from the man page:
‘-p’ ‘--page-requisites’
This option causes Wget to download all the files that are necessary
to properly display a given HTML page. This includes such things as
inlined images, sounds, and referenced stylesheets.
The problem was that I needed to use cookies to pass the login session, so --load-cookies seems to solve my problem.
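Putting that together, the invocation would presumably look something like the line below; cookies.txt and the URL are placeholders, and the cookie file has to be exported from a logged-in browser session in the Netscape cookie format that wget expects:

    # -p fetches page requisites, -k rewrites links for offline viewing,
    # -E saves the .cgi response with an .html extension.
    wget --load-cookies cookies.txt -p -k -E http://example.com/page.cgi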
