I have made a silent-print web application that prints a PDF file. The key was to add JavaScript to the PDF file so that it silently prints itself.
To do this I open the PDF with Acrobat Reader in Chrome, which allows me to execute the script (with the proper permissions).
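For reference, the document-level script inside the PDF is just a call to Acrobat's print API with the UI suppressed, something like:

// Acrobat JavaScript embedded in the PDF itself.
// bUI/bSilent suppress the print dialog; this only works when the
// viewer grants the script printing privileges.
this.print({bUI: false, bSilent: true, bShrinkToFit: true});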
But as announced, this solution won't work after Chrome 45 because of the NPAPI removal.
I guess a possible solution could be to use the recently released printProvider API for Chrome extensions.
Nevertheless, I can't figure out how to fire any of the printProvider events.
So the question is: is it reasonable to use Chrome extensions to build a silent-print web application, and how can I fire and handle a print job for a PDF embedded in an HTML page?
I finally reached an acceptable solution to this problem. Since I couldn't find one out there, but read too many posts with the same issue, I will leave my solution here.
So first you need to add your printer to Google Cloud Print, and then you will need to add a project in the Google Developers Console.
Then add this script, and any time you need to print something, execute the print() function. It will print the document indicated in the content field.
The application will ask for your permission once to manage your printers.
function auth() {
  // Get an OAuth token scoped to Google Cloud Print; 'immediate'
  // avoids the popup once the user has granted access.
  gapi.auth.authorize({
    'client_id': 'YOUR_GOOGLE_API_CLIENT_ID',
    'scope': 'https://www.googleapis.com/auth/cloudprint',
    'immediate': true
  });
}

function print() {
  // Build the multipart form that the Cloud Print submit API expects.
  var xhr = new XMLHttpRequest();
  var q = new FormData();
  q.append('xsrf', gapi.auth.getToken().access_token);
  q.append('printerid', 'YOUR_GOOGLE_CLOUD_PRINTER_ID');
  q.append('jobid', '');
  q.append('title', 'silentPrintTest');
  q.append('contentType', 'url');
  q.append('content', 'http://www.pdf995.com/samples/pdf.pdf');
  q.append('ticket', '{ "version": "1.0", "print": {}}');
  xhr.open('POST', 'https://www.google.com/cloudprint/submit');
  xhr.setRequestHeader('Authorization', 'Bearer ' + gapi.auth.getToken().access_token);
  xhr.onload = function () {
    try {
      var r = JSON.parse(xhr.responseText);
      console.log(r.message);
    } catch (e) {
      console.log(xhr.responseText);
    }
  };
  xhr.send(q);
}

window.addEventListener('load', auth);

<script src="https://apis.google.com/js/client.js"></script>
Anyway, this script throws an 'Access-Control-Allow-Origin' error, even though the documentation says the following... I couldn't make it work :(
Google APIs support requests and responses using Cross-origin Resource Sharing (CORS). You do not need to load the complete JavaScript client library to use CORS. If you want your application to access a user's personal information, however, it must still work with Google's OAuth 2.0 mechanism. To make this possible, Google provides the standalone auth client — a subset of the JavaScript client.
So to get around this I had to install the CORS Chrome extension. I'm sure that someone will improve this script to avoid the need for that extension.
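One likely improvement (an untested sketch, not part of the working solution above) is to proxy the submit call through your own server, so the browser never makes the cross-origin request. A minimal Node.js relay using Express and the request package could look like this:

// Hypothetical relay: the page POSTs to /print on your own origin,
// and the server forwards the form data and OAuth token to Cloud Print.
var express = require('express');
var request = require('request');
var app = express();

app.post('/print', function (req, res) {
  // Piping the incoming request forwards its body and content-type;
  // the Authorization header carries the client's OAuth token through.
  req.pipe(request.post({
    url: 'https://www.google.com/cloudprint/submit',
    headers: { 'Authorization': req.headers['authorization'] }
  })).pipe(res);
});

app.listen(3000);

The page would then call xhr.open('POST', '/print') instead of hitting google.com directly.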
You can register an application to a URI scheme to trigger a local application that prints silently. The setup is pretty easy and straightforward, and it's a seamless experience. I have posted the solution here with a full example:
https://stackoverflow.com/a/37601807/409319
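The browser side of the URI-scheme approach is tiny. Assuming you registered a hypothetical silentprint: scheme with the OS, the page only has to navigate to it, e.g. via a hidden iframe so the page itself doesn't unload:

// 'silentprint:' is a made-up scheme name; use whatever you registered.
function silentPrint(pdfUrl) {
  var frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = 'silentprint:' + encodeURIComponent(pdfUrl);
  document.body.appendChild(frame);
}

The registered local application receives the URI, parses the PDF location out of it, and prints without any UI.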
After the removal of NPAPI, I don't believe this is possible purely programmatically. The only current way I know to get Chrome to print silently is to use Chrome kiosk mode, which is a flag (mode) you have to set when starting Chrome.
Take a look at these SO posts:
Silent printing (direct) using KIOSK mode in Google Chrome
Running Chrome with extension in kiosk mode
This used to be possible using browser plugins (e.g. Java + NPAPI, ActiveX), but those have been blacklisted by most browsers for several years.
If you are interested in modern solutions that use similar techniques, the architecture usually requires the following:
A WebSocket, HTTP, or custom URI connection back to localhost
An API that talks through that web transport (JavaScript or a custom URI scheme) to an app running locally.
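As a rough illustration of the pattern only (the port and message format here are invented; every project defines its own):

// Browser side of a hypothetical localhost print agent.
var socket = new WebSocket('ws://localhost:8182');
socket.onopen = function () {
  // Ask the locally installed agent to fetch and print a document.
  socket.send(JSON.stringify({
    action: 'print',
    url: 'http://www.pdf995.com/samples/pdf.pdf'
  }));
};
socket.onmessage = function (event) {
  console.log('agent replied: ' + event.data);
};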
A list of projects (several of them open source) that leverage these technologies is available here:
https://stackoverflow.com/a/28783269/3196753
Since the source code of these projects varies (from hundreds of lines to tens of thousands), a code snippet would be too large unless you are inquiring about a specific project's API.
Side note: Some technologies offer dedicated cloud resources, which add convenience at the expense of potential latency and privacy. At the time of writing this, the most popular "free" cloud solution -- Google Cloud Print -- is slated to be retired in December 2020.
I need a suggestion to clarify my thought. I am working on a web application in ASP.NET MVC5 with AngularJS as the front-end framework.
Is there any way to open a client-side application like MS Word/Outlook using scripting technologies like jQuery, AJAX, AngularJS, etc.?
Yes, you can open an MS Word document using an ActiveXObject (this works in Internet Explorer only, with the appropriate security settings).
The following is sample code to print file data on the webpage.
var w = new ActiveXObject('Word.Application');
if (w != null)
{
  w.Visible = true; // set to false to stop the Word document from opening
  obj = w.Documents.Open("C:\\blank.doc"); // this can be any location on your PC, not just C:
  docText = obj.Content;
  w.Selection.TypeText("Hello world!");
  w.Documents.Save();
  document.write(docText); // print on the webpage
}
For more information you can refer here.
In general, no, because that would be a huge security hole and lead to the spread of viruses and malware.
In certain specific cases where you can control the user's computer already, you may be able to do it (e.g. Internet Explorer with trusted sites as Strom said).
But it's not really worth pursuing such options, as they are being aggressively shut down by browser vendors all the time.
I'm developing a website with a lot of HTML5 and CSS3 features. I'm also using iframes to embed several pieces of content on my website. It works fine if I open it using the Chrome/Firefox/Safari mobile browsers. However, if I share it on Facebook (post/page) and open it with the Facebook application's internal browser, my website is messed up.
Is there any tools or way to debug on Facebook Browser? Thanks.
This is how you can do the debugging yourself. It's painful, but it's the only way I've come across so far.
tl;dr Get the Facebook App loading a page on your local server so you can iterate quickly. Then print debug statements directly to the page until you figure out what is going on.
Get a link to a page on your local server that you can access from your mobile device (test in mobile Safari that it works). To find your local IP address, see How do you access a website running on localhost from iPhone browser. The link will look something like this:
http://192.xxx.1.127:3000/facebook-test
Post that link on your Facebook page (you can make it private so your friends aren't all like WTF?)
Click the posted link in the Facebook mobile App and it will open up in Facebook's mobile browser
Since you don't have a console, you basically need to print debug statements directly to the page so they are visible. Put debug statements all over your code. If your problems are primarily CSS-related, you can iteratively comment things out until you've found the issue(s), or print the relevant CSS attributes using JavaScript, e.g. something like this (using jQuery):
function debug(str) { $('body').append("<br>" + str); }
Quite possibly the most painful part: the Facebook browser caches very aggressively. If you make changes and nothing happens, it's because the content is cached. You can sometimes resolve this by updating the URL, e.g. /facebook-test-1, /facebook-test-2, or by adding dummy parameters, e.g. /facebook-test?dummy=1. But if the changes are in external CSS or JS files, it sometimes will still cache. To 100% clear the cache, delete the Facebook app from your mobile device and reinstall it.
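A small helper for the dummy-parameter trick (hypothetical, but it saves retyping):

// Append a timestamp so the Facebook browser can't serve a cached copy.
function bust(url) {
  return url + (url.indexOf('?') > -1 ? '&' : '?') + 'cachebust=' + Date.now();
}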
The internal browser the Facebook app uses is essentially a UIWebView. Paul Irish has made a simple iOS app that lets you load any URL into a UIWebView, which you can then debug using Safari's Developer Tools.
https://github.com/paulirish/iOS-WebView-App
I found an easier way to debug it. You will need to install the Ghostlab app (there is a 7-day free trial, but it's totally worth paying for).
In Ghostlab, add the website address (or a localhost address) you want to debug and start the session.
Ghostlab will generate a link for access.
Copy that link and post it on Facebook (as a private post)
Open the link on mobile and that's it! Ghostlab will identify you once you open that link, and will allow you to debug the page.
For debugging, you will have all the same tools as in the Chrome devtools (how cool is that!). For example, you can tweak CSS and see the changes applied live.
If you want to debug a possible error, you can try to catch it and display it.
Put this at the very top of your code:
window.onerror = function (msg, url, lineNo, columnNo, error) {
  var string = msg.toLowerCase();
  var substring = "script error";
  if (string.indexOf(substring) > -1) {
    alert('Script Error: See Browser Console for Detail');
  } else {
    var message = [
      'Message: ' + msg,
      'URL: ' + url,
      'Line: ' + lineNo,
      'Column: ' + columnNo,
      'Error object: ' + JSON.stringify(error)
    ].join(' - ');
    alert(message);
  }
}
(Source: MDN)
This will catch and alert your errors.
Share a link on Facebook (privately), or send yourself a message on Facebook Messenger (easier). To break the cache, create a new URL every time, e.g. by appending a random string to the URL.
Follow the link and see if you can find any errors.
With the help of ngrok you can create temporary HTTP & HTTPS addresses instead of your ordinary localhost:3000 (or other port), and you can run your app on any device. It is super easy to use.
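For example, with ngrok 2.x the command is something like ngrok http 3000, which prints the temporary public URLs that forward to your local port (the exact syntax depends on your ngrok version).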
And, as written above, you should write all other useful information somewhere inside a div element (in the case of React, I recommend putting an onClick on that div with a force update or another info-gathering function; sometimes it helps, because the JS in the FB browser can execute before your information appears). Keep in mind that alerts are not reliable; sometimes they are blocked.
A bonus of ngrok is that in its console you will see which files were requested and the response codes (it makes up for the lack of a network tab).
And about iframes: if you use one on another domain and you rely on cookies, you should know that the Facebook in-app browser blocks third-party cookies.
Test on Android and iOS separately, because technically they use different browsers.
I am trying to integrate the YouTube iframe API on Xbox One to be able to play YouTube videos from an application, but so far I have not managed to make it work. Is it even possible?
It seems that Windows Store apps impose a lot of restrictions compared to a web app (for very understandable security reasons).
The first problem when porting the web app is the local context / web context split. There seem to be two options:
grab a version of YouTube's code (at least the part that loads the library) and integrate it into the app (this way, we control more of the code at certification time, but it could eventually fall out of sync with the rest of the web code)
load all of YouTube's code from the web in a web context (by putting the YT.Player inside another iframe) and then make a proxy in the local context that posts messages to the equivalent web context.
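To illustrate option 2, the proxy I have in mind looks roughly like this (the frame id, message shape, and origin are placeholders):

// Local context: forward player commands into the web-context iframe.
var webFrame = document.getElementById('yt-web-context'); // hypothetical id
function sendToPlayer(command) {
  webFrame.contentWindow.postMessage({ cmd: command }, 'https://example.com');
}

// Web context (inside the iframe): receive commands and drive YT.Player.
window.addEventListener('message', function (event) {
  // Real code should verify event.origin before trusting the message.
  if (event.data.cmd === 'play') player.playVideo();
});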
Which method is the recommended one?
The second problem is that IE inside the application seems to load YouTube's videos with Flash, because it complains about ActiveX not being there. I get the following error:
Exception was thrown at line 328, column 376 in
https://s.ytimg.com/yts/jsbin/www-embed-player-new-vflRnMsMv/www-embed-player-new.js
0x800a1391 - JavaScript runtime error: 'ActiveXObject' is undefined
Is there a way to force the app to load videos in HTML5 instead of Flash? I tried setting html5=1 in the playerVars, as in the following code (suggested in http://jsfiddle.net/rocha/eMAU5/), but it didn't help:
player = new YT.Player('player', {
  height: '390',
  width: '640',
  videoId: 'OEoXaMPEzfM',
  playerVars: {
    html5: 1
  }
});
Or maybe I am not interpreting the reason for loading this ActiveX correctly? I know that ActiveX controls are deactivated in Windows Store applications (and X1 apps). Anyway, how can I make this work (if at all possible)?
Thank you
This is not supported behavior. Not only is ActiveX not supported in ADK apps, but loading remote code is against XR-010. I suggest launching the browser with the YouTube video URL using Launcher.LaunchUriAsync:
// The URI to launch
var uriToLaunch = "https://www.youtube.com/user/xbox";

// Create a Uri object from a URI string
var uri = new Windows.Foundation.Uri(uriToLaunch);

// Launch the URI
Windows.System.Launcher.launchUriAsync(uri).then(
  function (success) {
    if (success) {
      // URI launched
    } else {
      // URI launch failed
    }
  });
Lastly, please post your Xbox specific questions to the appropriate Xbox forums. I will be happy to answer them there, and more thoroughly. NDA protected program information should not be discussed on a public forum.
Please advise how to scrape AJAX pages.
Overview:
All screen scraping first requires manual review of the page you want to extract resources from. When dealing with AJAX you usually just need to analyze a bit more than just the HTML.
When dealing with AJAX, this just means that the value you want is not in the initial HTML document that you requested, but that JavaScript will be executed which asks the server for the extra information you want.
You can therefore usually simply analyze the JavaScript, see which request it makes, and just call that URL yourself from the start.
Example:
Take this as an example: assume the page you want to scrape from has the following script:
<script type="text/javascript">
function ajaxFunction()
{
  var xmlHttp;
  try
  {
    // Firefox, Opera 8.0+, Safari
    xmlHttp = new XMLHttpRequest();
  }
  catch (e)
  {
    // Internet Explorer
    try
    {
      xmlHttp = new ActiveXObject("Msxml2.XMLHTTP");
    }
    catch (e)
    {
      try
      {
        xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
      }
      catch (e)
      {
        alert("Your browser does not support AJAX!");
        return false;
      }
    }
  }
  xmlHttp.onreadystatechange = function()
  {
    if (xmlHttp.readyState == 4)
    {
      document.myForm.time.value = xmlHttp.responseText;
    }
  }
  xmlHttp.open("GET", "time.asp", true);
  xmlHttp.send(null);
}
</script>
Then all you need to do is make an HTTP request to time.asp on the same server instead. (Example from w3schools.)
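In other words, you replicate the request that the script would have made. A minimal example (Node.js here; any HTTP client works, and example.com stands in for the scraped page's host):

// Request time.asp directly, skipping the page and its JavaScript entirely.
var http = require('http');
http.get('http://www.example.com/time.asp', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log('server time: ' + body); // the value the AJAX call would receive
  });
});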
Advanced scraping with C++:
For complex usage, and if you're using C++, you could also consider using SpiderMonkey, Mozilla's JavaScript engine, to execute the JavaScript on a page.
Advanced scraping with Java:
For complex usage, and if you're using Java, you could also consider using Rhino, Mozilla's JavaScript engine for Java.
Advanced scraping with .NET:
For complex usage, and if you're using .NET, you could also consider using the Microsoft.Vsa assembly, recently replaced with ICodeCompiler/CodeDOM.
In my opinion the simplest solution is to use CasperJS, a framework based on the headless WebKit browser PhantomJS.
The whole page is loaded, and it's very easy to scrape any AJAX-related data.
You can check this basic tutorial to learn about Automating & Scraping with PhantomJS and CasperJS.
You can also take a look at this example code showing how to scrape Google suggest keywords:
/*global casper:true*/
var casper = require('casper').create();
var suggestions = [];
var word = casper.cli.get(0);

if (!word) {
    casper.echo('please provide a word').exit(1);
}

casper.start('http://www.google.com/', function() {
    this.sendKeys('input[name=q]', word);
});

casper.waitFor(function() {
    return this.fetchText('.gsq_a table span').indexOf(word) === 0;
}, function() {
    suggestions = this.evaluate(function() {
        var nodes = document.querySelectorAll('.gsq_a table span');
        return [].map.call(nodes, function(node) {
            return node.textContent;
        });
    });
});

casper.run(function() {
    this.echo(suggestions.join('\n')).exit();
});
If you can get at it, try examining the DOM tree. Selenium does this as a part of testing a page. It also has functions to click buttons and follow links, which may be useful.
The best way to scrape web pages using AJAX, or pages that use JavaScript in general, is with a browser itself or a headless browser (a browser without a GUI). Currently PhantomJS is a well-promoted headless browser using WebKit. An alternative that I have used with success is HtmlUnit (in Java, or in .NET via IKVM), which is a simulated browser. Another known alternative is using a web automation tool like Selenium.
I wrote many articles about this subject, like web scraping Ajax and Javascript sites and automated browserless OAuth authentication for Twitter. At the end of the first article there are a lot of extra resources that I have been compiling since 2011.
I like PhearJS, but that might be partially because I built it.
That said, it's a service you run in the background that speaks HTTP(S) and renders pages as JSON for you, including any metadata you might need.
It depends on the AJAX page. The first part of screen scraping is determining how the page works. Is there some sort of variable you can iterate through to request all the data from the page? Personally I've used Web Scraper Plus for a lot of screen-scraping tasks because it is cheap, it is not difficult to get started with, and non-programmers can get it working relatively quickly.
Side note: checking the Terms of Use before doing this is probably a good idea. Depending on the site, iterating through everything may raise some flags.
I think Brian R. Bondy's answer is useful when the source code is easy to read. I prefer an easy way: using tools like Wireshark or HttpAnalyzer to capture the packet, and getting the URL from the "Host" field and the "GET" field.
For example, I captured a packet like the following:
GET /hqzx/quote.aspx?type=3&market=1&sorttype=3&updown=up&page=1&count=8&time=164330 HTTP/1.1
Accept: */*
Referer: http://quote.hexun.com/stock/default.aspx
Accept-Language: zh-cn
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
Host: quote.tool.hexun.com
Connection: Keep-Alive
Then the URL is :
http://quote.tool.hexun.com/hqzx/quote.aspx?type=3&market=1&sorttype=3&updown=up&page=1&count=8&time=164330
As a low-cost solution you can also try SWExplorerAutomation (SWEA). The program creates an automation API for any web application developed with HTML, DHTML, or AJAX.
Selenium WebDriver is a good solution: you program a browser and you automate what needs to be done in the browser. Browsers (Chrome, Firefox, etc.) provide their own drivers that work with Selenium. Since it works as an automated REAL browser, the pages (including JavaScript and AJAX) get loaded as they do for a human using that browser.
The downside is that it is slow (since you would most probably like to wait for all images and scripts to load before you do your scraping on that single page).
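A minimal sketch with the Node selenium-webdriver bindings (the URL and selector are placeholders):

// Drive a real Chrome instance and read the DOM after scripts have run.
var webdriver = require('selenium-webdriver');
var driver = new webdriver.Builder().forBrowser('chrome').build();

driver.get('http://www.example.com/ajax-page');
// Wait until the AJAX-rendered element exists before scraping it.
driver.wait(webdriver.until.elementLocated(webdriver.By.css('#results')), 10000);
driver.findElement(webdriver.By.css('#results')).getText().then(function (text) {
  console.log(text);
  driver.quit();
});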
I have previously linked to MIT's Solvent and EnvJS as my answers for scraping AJAX pages. These projects seem to be no longer accessible.
Out of sheer necessity, I have invented another way to actually scrape AJAX pages, and it has worked for tough sites like findthecompany, which have methods to detect headless JavaScript engines and show no data.
The technique is to use Chrome extensions for scraping. Chrome extensions are the best place to scrape AJAX pages because they actually give us access to the JavaScript-modified DOM. The technique is as follows (I will certainly open-source the code at some point). Create a Chrome extension (assuming you know how to create one, and its architecture and capabilities; this is easy to learn and practice, as there are lots of samples), then:
Use content scripts to access the DOM via XPath. Pretty much get the entire list, table, or dynamically rendered content into a variable as string HTML nodes. (Only content scripts can access the DOM, but they can't contact a URL using XMLHTTP.)
From the content script, using message passing, send the entire stripped DOM as a string to a background script. (Background scripts can talk to URLs but can't touch the DOM.) We use message passing to get these to talk.
You can use various events to loop through web pages and pass each stripped HTML node's content to the background script.
Now use the background script to talk to an external server (on localhost), a simple one created using Node.js/Python. Just send the entire HTML nodes as a string to the server, which persists the content posted to it into files, with appropriate variables identifying page numbers or URLs.
Now you have scraped the AJAX content (HTML nodes as strings), but these are partial HTML nodes. You can now use your favorite XPath library to load them into memory and use XPath to scrape the information into tables or text.
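A stripped-down sketch of the message passing in the first steps (the XPath expression and server URL are placeholders):

// content-script.js: extract the rendered nodes and hand them off.
var result = document.evaluate('//table[@id="results"]//tr', document,
    null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
var html = '';
for (var i = 0; i < result.snapshotLength; i++) {
  html += result.snapshotItem(i).outerHTML;
}
chrome.runtime.sendMessage({ page: location.href, html: html });

// background.js: receive the stripped DOM and POST it to the local server.
chrome.runtime.onMessage.addListener(function (message) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', 'http://localhost:8000/save');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify(message));
});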
Please comment if you can't understand it, and I can write it better (first attempt). Also, I am trying to release sample code as soon as possible.
Hi, I have a Web Application and a Console Application in my C# .NET solution. I am trying to call a ReverseService class in my Console Application from my Web Application. In the Console Application's static void Main function I am running the following code:
var host = new WebSocketsHost<ReverseService>(new Uri("ws://localhost:4502/reverse"));
host.AddWebSocketsEndpoint();
host.Open();
Console.ReadLine();
I am trying to call this WebSocket endpoint from my Web Application with the following JavaScript code in Chrome 12.
if (window.WebSocket) {
  // establishes the websocket connection
  websocket = new WebSocket('ws://localhost:4502/reverse');
  websocket.onopen = function () {
    $('body').append('Connected.');
    $('#inputbox').keyup(function () {
      websocket.send($('#inputbox').val());
    });
  };
  websocket.onclose = function () {
    $('body').append('Closed.');
  };
  websocket.onmessage = function (event) {
    $('#outputbox').val(event.data);
  };
}
The websocket.onclose function does in fact get called, but the websocket.onopen function never does. I've googled and looked on here but to no avail, any help would be greatly appreciated.
The error is almost certainly a mismatch between the protocol versions of your client and server. It looks like you are using the HTML5 Labs prototype; if you are using the most recent version, then your server is talking WebSockets hybi-09. Chrome implements a much older version of the protocol (I think it's hixie-76, but I am not certain).
Your options are:
Use a WebSocket server that implements hixie-76
Use a different client implementation. For example, you could use the implementation that is provided with the HTML5 Labs prototype. It's a Silverlight plugin with a JavaScript wrapper. You need to include a few scripts and use new WebSocketDraft instead of new WebSocket. You will also need to drop the clientaccesspolicy.xml file in your wwwroot; see the prototype readme file for more info. Have a look at some of the samples as well.
Use a TCP packet sniffer to watch the protocol handshake.
Successful tests with two protocols:
hybi-00/hixie-76: Firefox 5.0 and Safari 5.1.1 use this old protocol with Sec-WebSocket-Key1/Key2...
There are several free server implementations in C#.
For example, search for WebSocket76English.zip. It works even with Safari on iPhone.
hybi-06: Firefox 7.0 and Chrome 14 use the latest protocol with Sec-WebSocket-Key.
The latest .NET WebSocket prototype uses that protocol.
If someone has a version of the WebSocket prototype supporting the old hybi-00 protocol, please let me know.
The latest specification draft suggests that a server can support several protocol versions and advertise the supported versions in the Sec-WebSocket-Version field.
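For example, in the later drafts a server that does not understand the client's version is supposed to reject the handshake with an error status and a Sec-WebSocket-Version header listing what it does support, roughly:

HTTP/1.1 400 Bad Request
Sec-WebSocket-Version: 8, 7

(The exact status code varies between drafts.)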