Security with JS which is generated by HttpHandler - javascript

I am working on a plugin that is provided to users according to the requirements they give us at registration time. All of the code runs through JavaScript, and that JavaScript is generated in an HttpHandler. The plugin displays our company logo. If the user loads the script from our link there is no problem, but anyone can download the JS, edit it to remove our logo, and use it without our link. I want to add security so that no one can use that JS offline (I mean without our link). Is this possible, and how?

This is not possible. Your server sends code to the browser, and the browser then executes that code. Anyone can capture the code using an HTTP sniffer or a JavaScript debugger (available in most modern browsers), modify it, and reuse it. The only option used on many sites is obfuscation, but that does not really prevent someone from using a modified version of the code; it only makes modification more tedious.
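As a purely illustrative sketch (the function name and URL below are made up, not taken from the actual handler), obfuscation only turns readable code into an equivalent, harder-to-read form; whoever downloads it can still rename things back and strip the logo:

// Readable version the handler might serve (hypothetical)
function insertLogo(containerId) {
    var img = document.createElement('img');
    img.src = 'https://example.com/logo.png'; // placeholder logo URL
    document.getElementById(containerId).appendChild(img);
}

// The same function after a typical minifier/obfuscator pass
function a(b){var c=document.createElement('img');c.src='https://example.com/logo.png';document.getElementById(b).appendChild(c)}

Nothing stops a user from deleting either version, which is why the honest answer remains "not possible".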

Related

Converting a web app into an embeddable <script> tag

I just did a proof of concept/demo for a web app idea I had, but that idea needs to be embedded on other pages to work properly.
I'm now done with the development of the demo, but now I have to tweak it so it works within a <script> tag on any website.
The question here is:
How do I achieve this without breaking up the main website's stylesheets and javascript?
It's a node.js/socket.io/angularjs/bootstrap based app for your information.
I basically have a small HTML file, a few CSS and JS files, and that's all. Any ideas or suggestions?
If all you have is a script tag, and you want to inject UI/HTML/etc. into the host page, that means that an iframe approach may not be what you want (although you could possibly do a hybrid approach). So, there are a number of things that you'd need to do.
For one, I'd suggest you look into the general concept of a bookmarklet. While it's not exactly what you want, it's very similar. The problems of creating a bookmarklet will be very similar:
You'll need to isolate your JavaScript dependencies. For example, you can't load a version of a library that breaks the host page. jQuery, for example, can be loaded without taking over the $ symbol globally (see the noConflict sketch after this list), but not all libraries support that.
Any styles you use would also need to be carefully managed so as to not cause issues on the host page. You can load styles dynamically, but loading something like Bootstrap is likely going to cause problems on most pages that aren't using the exact same version you need.
You'll want your core JavaScript file to load quickly and do as much work asynchronously as possible, so as not to affect the overall page load time (unless your functionality is needed immediately). You'll want to review content like this from Steve Souders.
You could load your UI via a web service or you could construct it locally.
If you don't want to use JSONP style requests, you'll need to investigate enabling CORS.
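A minimal sketch of the dependency-isolation point above, assuming the host page may already have its own $ (the CDN URL and version are just examples):

(function () {
    // Load our own copy of jQuery without clobbering the host page's $ or jQuery
    var script = document.createElement('script');
    script.src = 'https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js'; // example version
    script.onload = function () {
        // Passing true also restores the host page's window.jQuery; our copy stays private
        var myJQuery = jQuery.noConflict(true);
        myJQuery(function ($) {
            // All widget code uses the private $ handed in here
            $('body').append('<div id="my-widget"></div>');
        });
    };
    document.getElementsByTagName('head')[0].appendChild(script);
})();

Everything inside the ready callback sees only your private copy, so the host page's $ is left untouched.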
You could use an iframe and postMessage to show some UI without needing to do complex wrapping/remapping of the various application dependencies that you have. postMessage allows you to send messages telling the listening iframe "what to do" at any given point, while the code running in the host page moves/manipulates the iframe into position. A number of popular embedded APIs have used this technique over the years; I think Dropbox was using it, for example.
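A rough sketch of that host-page/iframe handshake, with widget.example.com standing in as a placeholder origin:

// --- In the host page (what your embed script would do) ---
var frame = document.createElement('iframe');
frame.src = 'https://widget.example.com/widget.html'; // placeholder widget URL
frame.style.cssText = 'border:0;width:300px;height:80px;';
document.body.appendChild(frame);

window.addEventListener('message', function (event) {
    if (event.origin !== 'https://widget.example.com') return; // trust only our widget origin
    if (event.data === 'widget-ready') {
        // Now tell the iframe what to do, e.g. which view to show
        frame.contentWindow.postMessage('show:light-theme', 'https://widget.example.com');
    }
}, false);

// --- Inside widget.html, served from widget.example.com ---
window.addEventListener('message', function (event) {
    // Check event.origin here as well before acting on event.data
}, false);
parent.postMessage('widget-ready', '*'); // announce readiness to whichever page embedded us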

Organizing Javascript w/ jQuery

I am tinkering around with jQuery and am finding it very useful and almost exciting.
As of now, I am referencing the jQuery script via Google's CDN and I store plugins I use locally in a static/scripts directory.
Naturally, each page has its own implementation of the components required for the features it offers. For example, the main page has the Twitter plugin, whereas the login page has form validation logic and password-strength metering. However, certain components (the navigation bar, for example) use the same script across multiple pages.
Admittedly, I am not a fan of putting JavaScript code in the header of a page; I prefer to have it in an external file (for caching, re-usability, and optimization purposes).
My question is: what is the preferred route for organizing the external files? I wanted to try to keep it to one JavaScript file for the entire site to reduce I/O requests. However, I am not sure how to implement document-ready functions on a conditional, per-page basis.
$(document).ready(function () { ... });
Is there some way to reference a page by some method (preferably ID-based rather than a URL conditional)?
Thank you in advance for your time!
You should try RequireJS.
It will allow you to load plugins only on the pages where you need them, and unload them again when they are no longer needed.
Then again, it might be overkill. It really depends on the size of your project.
Paul Irish:
http://paulirish.com/2009/markup-based-unobtrusive-comprehensive-dom-ready-execution/
This lets you group your scripts by body class/ID and execute the relevant ones automatically (see the sketch below).
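A condensed sketch of that markup-based pattern; the page names are placeholders matching the pages mentioned in the question:

// Assumes markup such as <body id="login">
var SITE = {
    common: {
        init: function () { /* runs on every page: navigation bar, etc. */ }
    },
    login: {
        init: function () { /* form validation, password strength meter */ }
    },
    home: {
        init: function () { /* Twitter plugin */ }
    }
};

var UTIL = {
    fire: function (page, fn) {
        var ns = SITE[page];
        fn = fn || 'init';
        if (ns && typeof ns[fn] === 'function') {
            ns[fn]();
        }
    },
    loadEvents: function () {
        UTIL.fire('common');          // always run shared code
        UTIL.fire(document.body.id);  // then run only this page's code
    }
};

$(document).ready(UTIL.loadEvents);

Only the block whose key matches the body's ID runs, so the whole site can share one cached file while each page executes just its own setup.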
First you might want to use YUI Compressor or some other JS compression tool. Then perhaps creating a resource file (.resx) for your JavaScript is the way to go; just reference the resource within your code. This is the approach Telerik took for their RadControls for ASP.NET AJAX framework.

Web crawler: Using Perl's MozRepl module to deal with Javascript

I am trying to save a couple of web pages by using a web crawler. Usually I prefer doing this with Perl's WWW::Mechanize module. However, as far as I can tell, the site I am trying to crawl relies heavily on JavaScript, which seems hard to avoid. Therefore I looked into the following Perl modules:
WWW::Mechanize::Firefox
MozRepl
MozRepl::RemoteObject
The Firefox MozRepl extension itself works perfectly. I can use the terminal for navigating the web site just the way it is shown in the developer's tutorial - in theory. However, I have no idea about JavaScript and therefore am having a hard time using the modules properly.
So here is the site I would like to start from: Morgan Stanley
For a couple of the firms listed beneath 'Companies - as of 10/14/2011' I would like to save their respective pages. E.g. clicking on the first listed company (i.e. '1-800-Flowers.com, Inc') calls a JavaScript function with two arguments -> dtxt('FLWS.O','2011-10-14'), which produces the desired new page. That is the page I would like to save locally.
With Perl's MozRepl module I thought about something like this:
use strict;
use warnings;
use MozRepl;
my $repl = MozRepl->new;
$repl->setup;
$repl->execute('window.open("http://www.morganstanley.com/eqr/disclosures/webapp/coverage")');
$repl->repl_enter({ source => "content" });
$repl->execute('dtxt("FLWS.O", "2011-10-14")');
Now I would like to save the produced HTML page.
So again, the code I would like to end up with should visit the pages of a couple of firms and simply save each one locally. (Here are e.g. three firms: MMM.N, FLWS.O, SSRX.O)
Is it correct that I cannot get around the page's JavaScript functions and therefore cannot use WWW::Mechanize?
Following question 1, are the mentioned Perl modules a plausible approach to take?
And finally, if you say the first two questions can be answered with yes, it would be really nice if you could help me out with the actual coding. E.g. in the above code, the essential part that is missing is a 'save' command. (Maybe using Firefox's saveDocument function?)
The web works via HTTP requests and responses. If you can discover the proper request to send, then you will get the proper response. If the target site uses JS to form the request, then you can either execute that JS, or analyse what it does so that you can do the same in the language that you are using.
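For instance, a page-level function like the dtxt(...) mentioned above often does nothing more than assemble a URL from its arguments and navigate to it. The following is a purely hypothetical illustration of that idea, not the actual code from the Morgan Stanley page:

// Purely hypothetical: NOT the real dtxt from the page in question
function dtxt(ticker, asOfDate) {
    // If the real function turns out to do something like this, the crawler can
    // skip the JavaScript entirely and fetch the resulting URL directly.
    window.location.href = '/some/coverage/detail'           // placeholder path
        + '?ticker=' + encodeURIComponent(ticker)
        + '&date=' + encodeURIComponent(asOfDate);
}

If inspecting the real function reveals a pattern like that, WWW::Mechanize can request the constructed URL directly and no JavaScript support is needed.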
An even easier approach is to use a tool that will capture the resulting request for you, whether the request is created by JS or not; then you can craft your scraping code to create the request that you want. The "Web Scraping Proxy" from AT&T is such a tool. You set it up, then navigate the website as normal to get to the page you want to scrape, and the WSP will log all requests and responses for you. It logs them in the form of Perl code, which you can then modify to suit your needs.

how to use code igniter for iframes

Does CodeIgniter provide CSS or JavaScript to help make iframes and web pages within web pages? Any suggestions on how to go about doing this in CI? I need to make a menu that, when you hover the mouse over it, drops down other buttons. Then, when you click the corresponding button, the iframe below populates from the database.
No, CodeIgniter does not provide any help with JavaScript manipulation. It is a PHP framework: server-based, not oriented toward client interfaces.
EDIT :
Since this has been downvoted, let me quote the official CodeIgniter documentation:
JavaScript Class: Note: This driver is experimental. Its feature set and implementation may change in future releases.
With the link provided: documentation.
Plus, I have downloaded the latest version, 2.0.2. It does not contain any CSS or JavaScript files (except for its documentation).
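CodeIgniter would only render the pages the iframe loads; the hover/click behaviour itself would be plain client-side code. A minimal jQuery sketch, where the markup, the /index.php/reports/view/ controller URL and the data-id values are all assumptions for illustration:

// Assumes markup roughly like:
//   <ul id="menu"><li>Reports
//     <ul class="dropdown"><li><a href="#" data-id="1">Sales</a></li></ul>
//   </li></ul>
//   <iframe id="content-frame"></iframe>
$(function () {
    // Show/hide the dropdown while the mouse is over the menu item
    $('#menu > li').hover(
        function () { $(this).find('.dropdown').show(); },
        function () { $(this).find('.dropdown').hide(); }
    );

    // Point the iframe at a (hypothetical) CodeIgniter controller for the clicked entry;
    // the controller would query the database and render the view the iframe displays.
    $('.dropdown a').click(function (e) {
        e.preventDefault();
        $('#content-frame').attr('src', '/index.php/reports/view/' + $(this).data('id'));
    });
});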

newbie question about javascript embed code?

I am a javascript newbie. I am trying to write a requirements document, and need some help describing what I am looking for. We want our application to generate a javascript snippet like this:
<script src="http://www.jotform.com/jsform/10511502633"></script>
This will load a web form.
So my question is:
- How does a single script load an entire web form? Is this a JSON?
- What is this called? Is this a cross browser javascript?
- Can anyone point me in the direction of learning more about what this is?
Thank you for your help!
The JavaScript file is just hosted on an external site. It appears to be dynamically generated, so feel free to use some fancy words ;) But basically, you just include it here as if it were on your own site.
You could say "The application will generate the required script-tags to include dynamically generated javascript file from an external, third-party site".
Of course, you need to take special care for cases where the include won't work because the other site is not reachable (the site is down, DNS does not resolve, the file is moved to another web server, your application is on an intranet/behind a proxy/firewall...). Why can't you copy their file and mirror it locally? Or use a reliable content delivery network, like Google or Amazon.
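One common way to guard against a failed third-party include is to test for whatever global the external script defines and fall back to a locally mirrored copy. A sketch, where the global name JotformFeedback and the backup path are assumptions, not something taken from JotForm's documentation:

<script src="http://www.jotform.com/jsform/10511502633"></script>
<script>
    // Hypothetical fallback: if the external script did not load (or did not define
    // the global we expect), pull in a locally mirrored copy instead
    if (typeof window.JotformFeedback === 'undefined') {
        document.write('<script src="/js/jsform-backup.js"><\/script>');
    }
</script>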
There are many names for this type of inclusion, the most common being widget.
What does it actually do:
take an id of some sort as a parameter
use the id to fetch some specific data (most likely from a database)
generate some JS and HTML based on the id/data
usually this involves iframes of some sort (a toy example of the general idea follows below)
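A toy version of what such a dynamically generated script might send back, to make the moving parts visible; every id, URL and piece of markup here is made up for illustration:

// What a request like http://example.com/jsform/10511502633 might return (hypothetical):
// the server looked up id 10511502633, pulled the form definition from its
// database, and baked the result into this script.
(function () {
    var html =
        '<form action="http://example.com/submit/10511502633" method="post">' +
        '  <label>Email <input type="email" name="email"></label>' +
        '  <button type="submit">Send</button>' +
        '</form>';
    // document.write drops the form exactly where the <script> tag sits in the page
    document.write(html);
})();

A real widget would often write an iframe pointing back at the provider instead, but the flow is the same: the id in the script URL drives what the server generates.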
Using a script rather than a plain HTML iframe has multiple advantages:
you can change what is actually delivered to the users' browsers without changing the include
you can resize the iframe to fit certain predefined sizes
you can inject the necessary things into the page the widget is included in (of course you need to make sure this is sanctioned)
We use this all the time and we have never regretted it.
If you don't want to build the widget infrastructure yourself you can always use one of the widget providers like widgetbox:
http://www.widgetbox.com/widgets/make/
With those you are up and running in no time.
This is typically called a script include.
Google has lots of these types of items, and even they call them by many names:
widgets, custom JavaScript, snippets, custom code, etc. It really depends on who you are writing for... I would go with "cross platform embeddable javascript code", meaning that it would need to load all its dependencies. Also specify which browsers need to be supported and what should happen if the user has JavaScript turned off.
EDIT :
Actually, since we are talking unique IDs, you will probably need two parts: the user/site-unique "cross platform embeddable javascript code" and whatever server-side code supports it. Basically this is an API that is accessed using your own JavaScript widget. Feel free to point to examples in your requirements document; programmers love examples.
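Speaking of examples, it may help the document to show the shape of the snippet the application would generate. A common asynchronous-loader pattern, where example.com, embed.js and the site parameter are all placeholders rather than anything JotForm actually uses:

<!-- Snippet your application would hand to each customer (placeholder values) -->
<script>
    (function () {
        var s = document.createElement('script');
        s.async = true;                                          // don't block the host page while loading
        s.src = 'https://example.com/embed.js?site=10511502633'; // per-customer ID travels in the URL
        var first = document.getElementsByTagName('script')[0];
        first.parentNode.insertBefore(s, first);
    })();
</script>
<noscript>Please enable JavaScript to view this form.</noscript>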
