How to open a new browser window using gulp-connect? - javascript

I would like to open a browser window once my web server is running. At the moment I am using the code below, but the browser never opens. Any idea what is wrong with my code?
var gulp = require('gulp');
var connect = require('gulp-connect');
gulp.task('webserver', function () {
    connect.server({
        root: "../../../www/",
        livereload: true,
        open: {
            browser: 'chrome', // if not working OS X browser: 'Google Chrome'
            url: 'http://localhost:8080/site/a/index.html?dev'
        }
    });
});

In my opinion, instead of a specific gulp plugin, you could just use the "opn" node module: https://www.npmjs.com/package/opn
npm install --save-dev opn
Then in whatever callback your server module uses:
require("opn")("http://localhost:8080/site/a/index.html?dev",
{app: ['google chrome', '--incognito']})
gulp-connect doesn't look like it provides a ready callback, but you can probably just run the open step serially, or after a short wait. Other competitors to gulp-connect do provide a callback, which allows nice things like passing the port/IP dynamically to opn, so you can further configure what happens (browsersync, for instance, dynamically finds a free port and then reports which port it used, which lets opn open the correct local URL automatically even if the port changes from run to run).
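A rough, untested sketch of combining the two, reusing the root and URL from the question (since gulp-connect gives no ready callback, the open step simply runs right after connect.server()):
var gulp = require('gulp');
var connect = require('gulp-connect');
var opn = require('opn');
gulp.task('webserver', function () {
    connect.server({
        root: '../../../www/',
        livereload: true
    });
    // No ready callback from gulp-connect, so open right away
    // (or after a short delay if the server needs time to bind).
    opn('http://localhost:8080/site/a/index.html?dev', {
        app: ['google chrome', '--incognito'] // macOS app name; adjust per OS/browser
    });
});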

Related

Use puppeteer with imgui-js

In case the length of the question looks scary: the summary is how to interact with a front-end app from a Node server. Once that is solved, the Puppeteer usage should come along with it, I believe. The question is long because I explained all my failed attempts to get back-end code (Puppeteer) working in the browser. Apart from building and running the repo, which is easy following the instructions but may take some time, I believe the question should be feasible for a regular JavaScript/Node programmer. There it goes, thanks for reading.
I cloned, built and ran the imgui-js repository successfully.
I want to use it along with puppeteer for a small app. All the npm commands and everything else I tried were run inside the mentioned imgui-js project.
I tried:
1.- Run the node example from the project, with npm run-script start-example-node.
This runs the example/index.js script, but nothing is drawn, as we are not in the browser and window is undefined. This can be checked by debugging main.ts:
if (typeof(window) !== "undefined") {
    window.requestAnimationFrame(done ? _done : _loop);
}
So I do not understand the purpose of this example in the repo.
Edit: it seems it may be there to handle the client-server communication, but I do not know how to do this.
2.- Puppeteer browserify:
I followed the browserify hello world.
Just a summary of the steps:
npm install -g browserify
npm i puppeteer
Go to the build folder to generate the bundle.js for my const puppeteer = require('puppeteer'); script, so cd example, cd build, browserify myScript.js -o bundle.js
Add <script src="./build/bundle.js"></script> to the example/index.html.
I obtain this error:
Uncaught TypeError: System.register is not a function
at Object.96.puppeteer (bundle.js:19470:8)
at o (bundle.js:1:265)
at r (bundle.js:1:431)
at bundle.js:1:460
I also tried browserifying main.js along with my script: browserify main.js myScript.js -o bundle.js. Same error.
3.- Try to set up puppeteer with the rollup module bundler, following this resource among others. So doing:
npm install --save-dev rollup tape-modern puppeteer
npm install --save-dev rollup-plugin-node-resolve
npm install --save-dev rollup-plugin-commonjs
npm install --save-dev sirv tape-browser-color
And tried to add that to the imgui-js rollup.config.js configuration file.
I think it's not working because all the server setup done by npm start and so on is not performed by rollup.
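For reference, this is roughly how those plugins are usually wired into a rollup.config.js; the entry point and output paths below are just illustrative guesses, not the actual imgui-js config:
// rollup.config.js (sketch; paths are assumptions)
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
export default {
    input: 'example/myScript.js',
    output: {
        file: 'example/build/bundle.js',
        format: 'iife', // browser-friendly, self-executing bundle
        name: 'app'
    },
    plugins: [
        resolve(),  // let rollup find modules in node_modules
        commonjs()  // convert CommonJS modules to ES modules rollup can bundle
    ]
};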
4.- Puppeteer-web: Following the steps of this resource I tried to run puppeteer in the browser.
npm i puppeteer-web
Code in the client and the server:
Client:
<script src="https://unpkg.com/puppeteer-web"></script>
<script>
    (async () => { // await is only valid inside an async function
        const browser = await puppeteer.connect({
            browserWSEndpoint: `ws://0.0.0.0:8080`, // <-- connect to a server running somewhere
            ignoreHTTPSErrors: true
        });
        const pagesCount = (await browser.pages()).length;
        const browserWSEndpoint = await browser.wsEndpoint();
        console.log({ browserWSEndpoint, pagesCount });
    })();
</script>
Server (server.js script):
const httpProxy = require("http-proxy");
const host = "0.0.0.0";
const port = 8080;
async function createServer(WSEndPoint, host, port) {
    await httpProxy
        .createServer({
            target: WSEndPoint, // where we are connecting
            ws: true,
            localAddress: host // where to bind the proxy
        })
        .listen(port); // which port the proxy should listen to
    return `ws://${host}:${port}`; // ie: ws://123.123.123.123:8080
}
const puppeteer = require("puppeteer");
puppeteer.launch().then(async browser => {
    const pagesCount = (await browser.pages()).length; // just to make sure we have the same stuff on both place
    const browserWSEndpoint = await browser.wsEndpoint();
    const customWSEndpoint = await createServer(browserWSEndpoint, host, port); // create the server here
    console.log({ browserWSEndpoint, customWSEndpoint, pagesCount });
});
Run server script: node server.js. Server seems properly created. Terminal log:
browserWSEndpoint: 'ws://127.0.0.1:57640/devtools/browser/58dda865-b26e-4696-a057-25158dbc4093',
customWSEndpoint: 'ws://0.0.0.0:8080',
pagesCount: 1
npm start (from a new terminal, to make sure the created server keeps running)
I obtain the error in the client:
WebSocket connection to 'ws://0.0.0.0:8080/' failed:
(anonymous) # puppeteer-web:13354
I just want to use puppeteer together with this front-end library in my app, fetching data with puppeteer to display it in the UI and feeding the user input back to puppeteer.
My ideal solution would be number 1, where I would be able to use any npm package apart from puppeteer and communicate back and forth between the backend (node server) and the client (imgui user interface).
Thanks for any help.
EDIT:
I more or less achieved it with the node server solution, which is my desired scenario, using expressjs and nodemon: running a separate server in the application and communicating with the app (a minimal sketch of that kind of setup is included at the end of this edit). Now I would find help on the following more valuable:
1.- The browserifying solution and or insight about why my attempts with this approach failed.
2.- A solution that keeps everything on one and the same server, i.e. the server that in the repo serves the html to the browser with "start-example-html": "http-server -c-1 -o example/index.html". I don't know if that is possible. The point is that this way I would not lose the live reloading etc., which I would lose by serving both things with an expressjs server I add myself.
Kind of what Create React App does with Proxying API Requests
3.- As suggested in the comments, guidance or a solution for making the server code render a window through node with the imgui output (npm start-example-node) would of course also be a valid answer to the question.
It may not seem quite correct to change the question conditions during the bounty with a somewhat broader new scenario, but now that the conditions have changed I am trying to make the most of the investment and of the research already done on the topic, also given my lack of expertise in the web-dev module-bundling configuration area. The bounty may therefore be granted for the most valuable advice on any of the topics mentioned above. Thanks for your understanding.
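For reference, a minimal sketch of the kind of express + nodemon setup mentioned above; the routes, port and scraped data are made up purely for illustration:
// server.js (run with: npx nodemon server.js) -- hypothetical routes
const express = require('express');
const puppeteer = require('puppeteer');
const app = express();
app.use(express.json());
// The imgui-js UI fetches data that puppeteer scrapes on the server.
app.get('/api/data', async (req, res) => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com');
    const title = await page.title();
    await browser.close();
    res.json({ title });
});
// User input from the UI is posted back and can drive further puppeteer actions.
app.post('/api/input', (req, res) => {
    console.log('received from UI:', req.body);
    res.sendStatus(204);
});
app.listen(3000, () => console.log('API listening on http://localhost:3000'));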

Linux Command Line only Headless Browser Testing (React/Blaze)

I have only a command-line Linux box, but I would like to do some UI tests for our Meteor application.
I heard there are some libraries which provide headless-browser functionality:
PhantomJS, Selenium, Headless Chrome
What I would like to know is which of them can work without xvfb
and without having a browser (i.e. Chrome or Chromium) installed?
I would like to rely on Meteor or npm packages; ideally no global dependencies.
Any first-hand experience is also appreciated. I heard PhantomJS is not recommended because it is outdated and behaves strangely.
Selenium is used for controlling all of them: Chromium, PhantomJS, and headless Chrome.
PhantomJS has many issues that I see daily on SO, so you should avoid using it.
Headless Chrome is a very new feature and I would still not recommend it. And both Chrome and headless Chrome require Chromium to be present.
So I would suggest you use docker for this.
docker run -d -p 4444:4444 selenium/standalone-chrome
This would launch a Chrome node on your server, and then you can use it from the language binding in which you write your tests. I write Python myself, but here is an example with webdriverio:
var webdriverio = require('webdriverio');
var browser = webdriverio
    // setup your selenium server address.
    // If you are using default settings, leave it empty
    .remote({ host: 'localhost', port: 4444 })
    // run browser that we want to test
    .init({ browserName: 'chrome', version: '45' });
describe('webdriver.io tests', function() {
    it('is a test', function() {
        browser.url('http://example.com');
        browser.click('.logo');
    });
    it('is a second test', function() {
        browser.click('.link');
    });
});
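If a Selenium grid is more than you need, note that the puppeteer npm package drives headless Chrome and downloads its own Chromium build on install, so it needs neither xvfb nor a system browser. A minimal sketch (the URL and selector are placeholders):
const puppeteer = require('puppeteer');
(async () => {
    const browser = await puppeteer.launch(); // headless by default
    const page = await browser.newPage();
    await page.goto('http://localhost:3000'); // your app under test
    await page.click('.logo');                // placeholder selector
    console.log(await page.title());
    await browser.close();
})();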

create-react-app exclude/include/change code parts at build

I'm developing a React web app and I'm using the create-react-app npm utility.
My app communicates with a server which, during development, is on my local machine. For this reason, all the Ajax requests I make use a localhost:port address.
Of course, when I build and deploy my project to production, I need those addresses to change to the production ones.
I am used to the grunt-preprocess plugin workflow (https://github.com/jsoverson/grunt-preprocess), which makes it possible to mark parts of the code to be excluded, included or changed at build time.
For example:
//#if DEV
const SERVER_PATH = "localhost:8888";
//#endif
//#if !DEV
const SERVER_PATH = "prot://example.com:8888";
//#endif
Do you know if there is a way to do such thing inside the create-react-app development environment?
Thank you in advance!
I'm not too sure exactly how your server-side code handles requests; however, you shouldn't have to change your code when deploying to production if you use relative paths in your Ajax queries. For example, here's an Ajax query that uses a relative path:
$.ajax({
    url: "something/getthing/",
    dataType: 'json',
    success: function ( data ) {
        //do a thing
    }
});
Hopefully that helps :)
When creating your networkInterface, use process.env.NODE_ENV to determine which path to use.
// const inside an if block is block-scoped, so declare it once with a ternary
const SERVER_PATH = process.env.NODE_ENV !== 'production'
    ? "localhost:8888"
    : "prot://example.com:8888";
Your application will automatically detect whether you are in production or development and therefore create the const SERVER_PATH with the correct value for the environment.
According to the docs, the dev server can proxy your requests. You can configure it in your package.json like this:
"proxy": "http://localhost:4000",
Another option is to ask the browser for the current location. It works well when your API and static files are on the same backend, which is common with Node.js and React.
Here you go:
const { protocol, host } = window.location
// protocol is e.g. "http:" and host is e.g. "example.com:8080", so add the slashes
const endpoint = protocol + '//' + host
// fetch(endpoint)

Grunt dev server to allow push states

I am trying to set up my grunt server to allow push states.
After countless google searches and reading SO posts I cannot figure out how to do this.
I keep getting errors like the one below.
Does anyone have any ideas how to fix this?
No "connect" targets found. Warning: Task "connect" failed. Use --force to continue.
It appears to me that I have defined a target below with these lines:
open: {
    target: 'http://localhost:8000'
}
See complete code below:
var pushState = require('grunt-connect-pushstate/lib/utils').pushState;

module.exports = function(grunt) {
    // Project configuration.
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        connect: {
            options: {
                hostname: 'localhost',
                port: 8000,
                keepalive: true,
                open: {
                    target: 'http://localhost:8000'
                },
                middleware: function (connect, options) {
                    return [
                        // Rewrite requests to root so they may be handled by router
                        pushState(),
                        // Serve static files
                        connect.static(options.base)
                    ];
                }
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-uglify'); // Load the plugin that provides the "uglify" task.
    grunt.loadNpmTasks('grunt-contrib-connect'); // Load the plugin that provides the "connect" task.

    // Default task(s).
    grunt.registerTask('default', ['connect']);
};
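The warning means grunt found no task targets: grunt-contrib-connect is a multi-task, so it expects at least one named target under connect, and the open.target property is only an option, not a task target. Roughly (untested sketch, the target name server is arbitrary), the same options nested under a target would look like this:
connect: {
    server: { // <-- a named task target, which the warning says is missing
        options: {
            hostname: 'localhost',
            port: 8000,
            keepalive: true,
            open: {
                target: 'http://localhost:8000'
            },
            middleware: function (connect, options) {
                return [
                    pushState(),
                    connect.static(options.base)
                ];
            }
        }
    }
}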
Push states are already included in most SPA frameworks, so you might not need this unless you're building a framework.
Angular: https://scotch.io/tutorials/pretty-urls-in-angularjs-removing-the-hashtag
React: How to remove the hash from the url in react-router
This looks like a grunt build script to compile an application to serve. So I'm not exactly sure how you'd use pushStates in the build process. You may be trying to solve the wrong problem.
Don't bother with grunt to deploy a local dev pushstate server for your SPA.
In your project directory, install https://www.npmjs.com/package/pushstate-server
npm i pushstate-server -D
Then to launch it, add a script entry in the package.json of your project:
…
"scripts": {
"dev": "pushstate-server"
}
…
This way you can now start it by running npm run dev
All the requests which would normally end in a 404 will now redirect to index.html.
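If you would rather keep your own server, the same behaviour (unknown paths falling back to index.html) can be reproduced in a few lines of express; a sketch, assuming your static files live in ./dist:
const express = require('express');
const path = require('path');
const app = express();
app.use(express.static(path.join(__dirname, 'dist')));
// Any route that is not a static file falls back to index.html,
// so the client-side router can handle the pushState URL.
app.get('*', (req, res) => {
    res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});
app.listen(8000, () => console.log('Listening on http://localhost:8000'));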

XMLHttpRequest cannot load file [duplicate]

I'm using this code to make an AJAX request:
$("#userBarSignup").click(function(){
$.get("C:/xampp/htdocs/webname/resources/templates/signup.php",
{/*params*/},
function(response){
$("#signup").html("TEST");
$("#signup").html(response);
},
"html");
But from the Google Chrome JavaScript console I keep receiving this error:
XMLHttpRequest cannot load
file:///C:/xampp/htdocs/webname/resources/templates/signup.php. Cross
origin requests are only supported for HTTP.
The problem is that the signup.php file is hosted on my local web server, which is where the whole website runs from, so it's not cross-domain.
How can I solve this problem?
I've had luck starting chrome with the following switch:
--allow-file-access-from-files
On OS X try (re-type the dashes if you copy-paste):
open -a 'Google Chrome' --args -allow-file-access-from-files
On other *nix run (not tested)
google-chrome --allow-file-access-from-files
or on Windows, edit the properties of the Chrome shortcut and add the switch, e.g.
C:\ ... \Application\chrome.exe --allow-file-access-from-files
to the end of the "target" path
If you’re working on a little front-end project and want to test it locally, you’d typically open it by pointing your web browser at your local directory, for instance entering file:///home/erick/mysuperproject/index.html in your URL bar. However, if your site is trying to load resources, even if they’re placed in your local directory, you might see warnings like this:
XMLHttpRequest cannot load file:///home/erick/mysuperproject/mylibrary.js. Cross origin requests are only supported for HTTP.
Chrome and other modern browsers have implemented security restrictions for cross-origin requests, which means that you cannot load anything through file:///; you need to use the http:// protocol at all times, even locally, due to same-origin policies. Simple as that: you need to run a web server and serve your project from there.
This is not the end of the world and there are many solutions out there, including the good old Apache (with VirtualHosts if you’re running several other projects), node.js with express, a Ruby server, etc. or simply modifying your browser settings.
However there’s a simpler and lightweight solution for the lazy ones. You can use Python’s SimpleHTTPServer. It comes already bundled with python so you don’t need to install or configure anything at all!
So cd to your project directory, for instance
cd /home/erick/mysuperproject
and then simply use
python -m SimpleHTTPServer
And that’s it, you’ll see this message in your terminal
Serving HTTP on 0.0.0.0 port 8000 ...
So now you can go back to your browser and visit http://0.0.0.0:8000 with all your directory files served there. You can configure the port and other things, just see the documentation. But this simple trick works for me when I’m in a rush to test a new library or try out a new idea.
EDIT:
In Python 3+, SimpleHTTPServer has been replaced with http.server. So in Python 3.3, for example, the following command is equivalent:
python -m http.server 8000
You need to actually run a webserver, and make the get request to a URI on that server, rather than making the get request to a file; e.g. change the line:
$.get("C:/xampp/htdocs/webname/resources/templates/signup.php",
to read something like:
$.get("http://localhost/resources/templates/signup.php",
and the initial request page needs to be made over http as well.
I was getting the same error while trying to load plain HTML files that used JSON data to populate the page, so I used node.js and express to solve the problem. If you do not have node installed, you need to install it first.
Install express
npm install express
Create a server.js file in the root folder of your project, in my case one folder above the files I wanted to serve
Put something like the following in the server.js file and read about this on the express GitHub site:
var express = require('express');
var app = express();
var path = require('path');
// __dirname will use the current path from where you run this file
app.use(express.static(__dirname));
app.use(express.static(path.join(__dirname, '/FOLDERTOHTMLFILESTOSERVER')));
app.listen(8000);
console.log('Listening on port 8000');
After you've saved server.js, you can run the server using:
node server.js
Go to http://localhost:8000/FILENAME and you should see the HTML file you were trying to load
If you have nodejs installed, you can download and install the server using the command line:
npm install -g http-server
Change directories to the directory where you want to serve files from:
$ cd ~/projects/angular/current_project
Run the server:
$ http-server
which will produce the message Starting up http-server, serving on:
Available on:
http://your_ip:8080
http://127.0.0.1:8080
That allows you to use urls in your browser like
http://your_ip:8080/index.html
It works best this way. Make sure that both files are on the server. When calling the HTML page, use a web address like http://localhost/myhtmlfile.html, and not C:///users/myhtmlfile.html. Make sure as well that the URL passed for the JSON is a web address, as shown below:
$(function(){
    $('#typeahead').typeahead({
        source: function(query, process){
            $.ajax({
                url: 'http://localhost:2222/bootstrap/source.php',
                type: 'POST',
                data: 'query=' + query,
                dataType: 'JSON',
                async: true,
                success: function(data){
                    process(data);
                }
            });
        }
    });
});
REM kill all existing instances of chrome
taskkill /F /IM chrome.exe /T
REM directory path where chrome.exe is located
set chromeLocation="C:\Program Files (x86)\Google\Chrome\Application"
cd %chromeLocation%
cd c:
start chrome.exe --allow-file-access-from-files
Change the chromeLocation path to yours.
Save the above as a .bat file.
Drag and drop your file onto the batch file you created. (Chrome does offer a restore-pages option though, so if you had pages open just hit restore and it will work.)
You can also start a server without Python, using the PHP interpreter.
E.g:
cd /your/path/to/website/root
php -S localhost:8000
This can be useful if you want an alternative to npm, as the php utility comes preinstalled on some OSes (including macOS).
For all python users:
Simply go to your destination folder in the terminal.
cd projectFolder
then start the HTTP server
For Python3+:
python -m http.server 8000
Serving HTTP on :: port 8000 (http://[::]:8000/) ...
go to your link: http://0.0.0.0:8000/
Enjoy :)
