I'm working on a video player at the moment (a Flash one). Before the video starts, the player receives a manifest.f4m file. I would like to detect WHEN this file is requested and WHEN it actually arrives (i.e., the time the server takes to generate it). I really have NO idea how to do it, but it should be possible using JavaScript, because Firebug and even Google Chrome's console timeline are able to detect this "event".
Do you have any clue?
Since it's in Flash, Firebug may not pick it up in the Net tab, because those calls don't go through the browser's API. You may consider using Fiddler, which will show you just about everything.
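That said, if the request is made by the page itself rather than by the Flash plugin, the Resource Timing API can give you those timestamps directly. A minimal sketch, assuming the manifest URL contains "manifest.f4m" (note that detailed timings like requestStart need same-origin resources or a Timing-Allow-Origin header, and plugin-initiated requests typically won't show up here at all):

// List timing info for the manifest request, if the browser itself made it.
var entries = performance.getEntriesByType('resource');
entries
  .filter(function (e) { return e.name.indexOf('manifest.f4m') !== -1; })
  .forEach(function (e) {
    console.log('request started at', e.requestStart, 'ms');
    console.log('response finished at', e.responseEnd, 'ms');
    console.log('total time', e.duration, 'ms');
  });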
I am creating an application that transmits images, but given the way JavaScript works in the browser, the images can potentially be downloaded straight from the console. I wanted to know if there is a solution for transmitting images so that they can't be downloaded from the console. I have seen topics about Encrypted Media Extensions, which are used by services such as Netflix. Is this a solution for what I have described? If so, how do you use it with a library such as React?
I don't know if it's appropriate to answer this way, but there is no true way of preventing images from being downloaded from the browser. For the user to see an image served by your website, it has to be loaded into the browser's memory. While there are tricks to prevent a simple right-click and "Save", your question is asking whether it's possible to prevent transmitted images from being saved via the console (or in general, it seems). The answer to that is "no", since its presence in memory opens it up to all manner of copying: a simple screenshot, grabbing it from the dev tools Sources tab, or a console script.
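To illustrate why: anything the browser has rendered can be re-read from the page. A minimal sketch you could run in the console (works for same-origin images; cross-origin images taint the canvas and block toDataURL, but a screenshot still gets around that):

// Grab the first rendered image and export a copy of its pixels.
var img = document.querySelector('img');
var canvas = document.createElement('canvas');
canvas.width = img.naturalWidth;
canvas.height = img.naturalHeight;
canvas.getContext('2d').drawImage(img, 0, 0);
console.log(canvas.toDataURL('image/png')); // a downloadable data-URI copy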
I want to get the INSPECT ELEMENT data of a website. Let's say Truecaller, so that I can get the name of the person whose mobile number I searched.
But whenever I make a Python script, it gives me the PAGE SOURCE, which does not contain the required information.
Kindly help me. I am a beginner, so kindly excuse any mistakes in the question.
TL;DR: Use Selenium (and PhantomJS)
View page source gives you the HTML that was loaded when you made the request for the page (which is most likely what you are getting when you make a request from Python).
Since nowadays a lot of pages load things and modify the DOM after the initial HTML is loaded, you will not get most of the information you want just by looking at that initial response.
To get the inspect-element information, you will need some sort of web browser to actually go to the page, wait for the information you want to load, and then use it. However, you still want to do this from your script.
Enter Selenium, which is a tool for browser automation (mostly used for testing web pages). You can create a script that opens a browser page and executes whatever code you write for it to do (even wait for a while and search for a DOM element that only appears after the page loads!). Your script will still open a visible browser window (which is kind of awkward, I would guess).
Enter PhantomJS, a headless browser you can drive from Selenium instead, so all of this happens without having to rely on an actual browser UI.
Using Selenium alone you might achieve your goals, but with PhantomJS you can do it in an even cleaner way! Good luck.
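Since you asked about Python you'd use Selenium's Python bindings, but the flow is the same in every language. A rough sketch using the JavaScript bindings (selenium-webdriver from npm); the URL and the CSS selector are placeholders for whatever you find on the actual page:

// npm install selenium-webdriver
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('https://www.example.com/search?q=...');
    // Wait for the JavaScript-rendered element to appear in the DOM.
    const el = await driver.wait(until.elementLocated(By.css('.result-name')), 10000);
    console.log(await el.getText());           // the "inspect element" data
    console.log(await driver.getPageSource()); // the DOM as currently rendered
  } finally {
    await driver.quit();
  }
})();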
INSPECT ELEMENT and VIEW PAGE SOURCE are not the same.
View source shows you the original HTML source of the page. When you view source from the browser, you get the HTML as it was delivered by the server, not after JavaScript does its thing.
The inspector shows you the DOM as it was interpreted by the browser. This includes, for example, changes made by JavaScript which cannot be seen in the HTML source.
What you see in the element inspector is not the source code anymore.
You see a JavaScript-manipulated version.
Instead of trying to execute all the scripts on your own, which may lead to multiple problems like cross-origin security and so on,
search the network tab for the actual search request and its parameters.
Then request the data from there; that is the trick.
Also, it seems like you need to be logged in to search on the URL you provided, so you may eventually need to adapt cookies/session/headers and such, just like a request from your browser would.
So what I want to say is: if the data you are looking for is not in the source, it is better to analyse where it is coming from.
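As a hypothetical sketch of that approach: once you've found the real request in the network tab, replay it yourself. The endpoint, parameter, and headers below are placeholders; copy the real ones from the request your browser actually sends:

// Replay the page's own data request instead of scraping the rendered DOM.
const response = await fetch('https://www.example.com/api/search?number=...', {
  headers: {
    'Accept': 'application/json',
    // Include whatever session/auth headers the logged-in browser request carries.
    'Authorization': 'Bearer ...'
  }
});
const data = await response.json(); // the same JSON the page's scripts receive
console.log(data);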
I have been searching through the Chrome extension reference for anything that would allow me to manipulate the audio level of a tab. The only option that has come to my mind is to make a script go through all the elements on the page and either remove them or mute them if possible.
But I feel there has to be a way to reroute all audio streams to nothing, i.e., disconnect them from the speaker output, if they use the HTML5 audio API... however, no luck with either the Chrome extension APIs or the Web Audio API.
Goal: mute all sounds on a page (Flash, audio elements, etc.)
You cannot do this now, although this will hopefully change in the near future.
At the moment, there is nothing in the Chrome APIs, although I did propose a tabaudio API back in February (and am working on a new draft, as well as an implementation, right now).
Can you give me an idea as to what you want this functionality for? (They ask for potential uses when proposing APIs.)
Perhaps the closest you can get is something similar to what the MuteTab Chrome extension does (written by me, http://www.github.com/jaredsohn/mutetab), which basically scans the page for object, embed, audio, video, and applet tags and hides them from the page. Unfortunately, this misses web audio. Also, instead of muting, it "stops" the sound by removing the element from the page, which could block the video or game associated with it. Alternatively, if you just care about HTML5 video or audio, or Flash that has an API (such as YouTube), you could use JavaScript to pause or mute things.
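For reference, a minimal sketch of that scan-and-silence idea, muting HTML5 media instead of removing it (plugin content has no generic mute API, so it still has to be removed, and Web Audio output remains out of reach this way):

function muteEverything() {
  // Mute and pause all HTML5 media without removing it from the page.
  document.querySelectorAll('audio, video').forEach(function (el) {
    el.muted = true;
    el.pause();
  });
  // Plugin content (Flash, applets) can only be silenced by removing it.
  document.querySelectorAll('object, embed, applet').forEach(function (el) {
    el.parentNode.removeChild(el);
  });
}
muteEverything();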
There's now a Chrome extension called "Mute Tabs by URL" that allows you to mute websites by URL, using a blacklist/whitelist approach.
It does require you to allow it to read your browsing history, but the description swears that it doesn't store your URLs anywhere, and even points to the location of the source code, so you can verify that for yourself.
I'm currently developing a Java application, using Play! Framework 2.
On some page, I need to generate a PDF file on my server, then send it to the browser and display it in an iframe or an embed tag.
I'm sending the file to the client side using Play's result mechanism:
// Point at the generated file and tell the client its size
File resultFile = new File(outputFilePath);
response().setHeader("Content-Length", Long.toString(resultFile.length()));
// Serve the file body with the PDF MIME type
return ok(resultFile).as("application/pdf");
On the client side, I receive the file and use createObjectURL to get a URL to pass to my iframe:
window.URL = window.URL || window.webkitURL;
// Wrap the received bytes in a Blob tagged as PDF
var file = new window.Blob([result], { type: 'application/pdf' });
// Create a temporary object URL pointing at the Blob
var fileUrl = window.URL.createObjectURL(file);
$("#displayDoc").attr("data", fileUrl);
The result of all of this is quite weird: the iframe is displayed and shows as many pages as there are in my document, but all the pages are empty. And I get this error in Chrome's dev tools:
Resource interpreted as Document but transferred with MIME type application/pdf
Does anybody have an idea of what's going on?
Thanks!
Lauris
There are a few ways you can approach this.
The first way, and the one that might fit best with your actual question, is:
Generate the PDF server side, then convert it to HTML
This is not nearly as difficult as it sounds, however, and there are a number of tools out there to do it, such as PDF2DOM (http://cssbox.sourceforge.net/pdf2dom/), which will take an existing PDF and create an HTML DOM from it; this HTML can then be injected straight into the document using normal JavaScript techniques.
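The client side of that approach is then trivial; a sketch, where "/documents/report.html" is a placeholder for wherever your server exposes the converted output:

// Fetch the server-converted HTML version of the PDF and inject it.
fetch('/documents/report.html')
  .then(function (res) { return res.text(); })
  .then(function (html) {
    document.querySelector('#displayDoc').innerHTML = html;
  });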
The second way (especially since you're using Java) is:
Use a browser based plugin that can consume and display a PDF
Since you're already developing in Java, you could develop a Java applet that consumes the PDF sent to it and displays it in the iframe as required. You could also extend this idea to use a Flash-based PDF component, or anything else that the browser can run as a plugin.
The problem you're going to have with this approach is the growing trend of browser manufacturers to embrace HTML5 and move away from allowing third-party content in the browser. Chrome and Firefox still support some plugins, as does IE11, but as we go forward, especially into the brave new world of tablet computing, many browsers will not allow plugins to be run. This might work now, but in the long term it may not for long.
The third approach is:
Take advantage of Native Browser Support
Both Chrome and Firefox today have built-in PDF readers, and can display PDF documents directly in a browser tab just as though they were regular HTML pages.
The key here, however, is "in a TAB": in most cases, browsers that can display PDFs will want to display them as a full-tab resource.
It is possible to render PDFs in an iframe, but support varies greatly from browser to browser; IE8 and below will only display them if a plugin such as Adobe Acrobat is installed.
One possible route you could try is to render the content out as an inline object using something like PDFObject (https://pdfobject.com); this will at least try to get the browser to see the PDF as an embedded object in HTML5, which you may then be able to process using some other JS code on the client.
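A minimal sketch of the PDFObject route, with "/documents/report.pdf" and "#displayDoc" as placeholders for your own URL and container:

<div id="displayDoc"></div>
<script src="pdfobject.min.js"></script>
<script>
  // Embeds the PDF inline; degrades gracefully if the browser can't display PDFs.
  PDFObject.embed("/documents/report.pdf", "#displayDoc");
</script>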
My advice, however, for compatibility reasons and if you want to leverage the server side as much as possible, is that option one is probably your best bet.
As for your Chrome document type error, well, that's quite an easy one to work out. :-)
You sent your content using the correct MIME type (application/pdf), but as far as Chrome was concerned, the document it received didn't look like a PDF file.
This often happens when you set the MIME type to PDF with the intention of sending a PDF, then send HTML instead (e.g., because a server-side error occurred and the web server sent an error page).
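A quick way to check what actually arrived (a sketch; "/generate-pdf" is a placeholder for your Play route): real PDF bytes always start with "%PDF-", so if you see "<htm" instead, the server sent an error page. Also make sure you receive the response as binary (arraybuffer/blob); passing a text-decoded response to new Blob(...) is another common cause of blank pages.

const res = await fetch('/generate-pdf');
const buf = await res.arrayBuffer();                    // binary, not text
const head = new TextDecoder().decode(buf.slice(0, 5)); // first five bytes
console.log(head === '%PDF-' ? 'Got a real PDF' : 'Not a PDF, starts with: ' + head);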
Hopefully something here will steer you in the right direction.
I am working on an open-source plugin for the web mapping library Leaflet. The idea is to generate printable PDF documents from the map directly in the client's browser. Its basic functionality works, but there is an issue in Google Chrome.
Depending on the document format and DPI settings, the script can take some time to fetch all the map tiles as images, convert them to data URIs, and add them to the document. In that case, Firefox's user interface doesn't respond for a few seconds and then shows the finished PDF. Chrome, however, stops executing the script and shows me a sad smiley:
Aw, Snap! Something went wrong while displaying this webpage. To continue, reload or go to another page.
Normally, I would say that is fine, since there is a limit to the available processing power. But this actually happens for DIN A4 format at 300 dpi, so I can't live with that. I have a strong suspicion that this is not caused by a bug in my code, because I can increase the options step by step, and at some level Chrome stops executing the script.
How can I debug my code to find the bottleneck? How can I prevent Chrome from stopping my script?
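One thing worth trying: Chrome kills pages whose main thread is blocked for too long (or that run out of memory), so instead of converting every tile in one long synchronous loop, process them in small batches and yield to the event loop in between. A rough sketch, where processTile stands in for your fetch-and-convert-to-data-URI step; if the crash is actually memory exhaustion from the 300 dpi data URIs, chunking alone won't help:

function processTilesInChunks(tiles, processTile, done) {
  var i = 0;
  function nextChunk() {
    var deadline = Date.now() + 50;       // work for ~50 ms at a time
    while (i < tiles.length && Date.now() < deadline) {
      processTile(tiles[i++]);
    }
    if (i < tiles.length) {
      setTimeout(nextChunk, 0);           // let the UI breathe between batches
    } else {
      done();
    }
  }
  nextChunk();
}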