Executing self-contained JavaScript from a string

I have JavaScript that is generated at design time and needs to be executed to find a value at run time. The code is stored as a string in an object, and I would like to execute it, retrieve the value, and then discard the code. Is there a way to do this? Do I have to use eval()?

You can use eval(string), new Function(string), or inject a <script> element created with document.createElement.
[edited]
It depends on how your code is produced:
1 - If those strings are saved and shared across different pages (with cookies or a database), then server-side you can generate a <script> tag with the saved values in a JSON object for quick access.
2 - If the strings exist only at runtime (i.e. the values cannot be recovered after navigating to another page), you may not need to keep them as strings at all: you could create a global object on window (e.g. window.MyObjectGlobal) so the values are accessible at any time on the page (as long as there is no page navigation). The same idea can also be reused with the server-side approach, combined with Ajax (Ajax only to save the data to your database) or document.cookie (but then you will have to use document.createElement("script") or eval to execute the code).
Good luck
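A minimal sketch of the three approaches (the codeString value and the window.__result name are placeholders, not anything from the question):
var codeString = "2 + 3 * 4"; // code generated at design time, stored as a string
// 1. eval runs the string in the current scope and returns the last expression's value
var result1 = eval(codeString);
// 2. new Function compiles the string as a function body; return the value explicitly
var result2 = new Function("return (" + codeString + ");")();
// 3. an injected <script> runs in the global scope, so the code must assign to a global
var s = document.createElement("script");
s.text = "window.__result = (" + codeString + ");";
document.head.appendChild(s);
var result3 = window.__result;
new Function is usually preferred over eval because the string is compiled as an isolated function and cannot read or clobber local variables in the surrounding scope.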

Yes, you can do that using eval.
However, remember that eval is evil and it can potentially introduce security risks.
Anyway, if you know what you're doing, that's the way to go

Prevent variable from being scraped on webpage

Let's say you have a javascript with a variable like this:
<script type="text/javascript" >
...some javascript...
var secret = 123;
...some javascript...
</script>
Assuming you're able to write this JavaScript on the server side, is it possible to prevent it from being scraped using regular expressions or XPath?
I'm already changing the variable name on the server side to a random variable name.
The goal here is not to defeat a human taking the time to digest my code and make amendments on the client side (I know I can't stop this). It's more to make it hard/impossible for automated scripts to grab variables. Humans, given enough time, are better at scraping code than machines.
The short answer is no.
The slightly longer answer is that the variable's value will always be transmitted in plaintext to the browser and, therefore, can always be scraped from the JavaScript because it will be declared (or used) in a predictable* manner.
The only bit you, as a creator, can control is making it difficult to programmatically extract. This might be by changing the location and form of the value/secret on every page load in a random manner, making any tokens single-use only, and by encrypting/encoding* things.
This doesn't prevent access to the secret, just makes it less easy to do, and by how much depends on the level of access the user has to the browser. Once you accept this, it boils down to a cost/benefit analysis of how much effort you want to put into this compared to the benefit.
*If the value is encoded in some way, there will be a decoding function somewhere within the JavaScript, so a scraper can simply pass the encoded variable into that decoding function, or replace the function itself. For further reading, have a look at the few attempts at doing encryption/decryption in JavaScript.
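To illustrate the "change the form on every page load" idea, here is a minimal sketch of serving a value in an encoded form and decoding it at runtime; the split values and the unscramble name are invented for illustration, and as noted above this only raises the cost of scraping rather than preventing it:
// The server emits something like this instead of "var secret = 123;",
// choosing a fresh key and split on every page load:
var k = 87, parts = [10, 14, 20];
function unscramble(key, pieces) {
  // recombine the pieces: sum them, then XOR with the key
  var sum = 0;
  for (var i = 0; i < pieces.length; i++) sum += pieces[i];
  return sum ^ key;
}
var secret = unscramble(k, parts); // 123 again, but no longer greppable as a literal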

How to avoid reloading large JavaScript array?

I have a large 40,000-word array loading from a database into a JavaScript array on every page of our web application... What would be the best way/technology to optimize this, in order to avoid these unnecessary downloads?
Somehow keep the array in a cookie and read from there?
Use ajax to load the array dynamically only parts that are needed?
What is the common practice?
On modern browsers you can use sessionStorage to have it persist during the current session, or localStorage to have it hang around between sessions.
NB: both only permit storage of strings - you'll have to serialise the array (e.g. into JSON) and deserialise it on retrieval.
If you want to actually use the word list as a local database with efficient lookup, you might also want to investigate IndexedDB.
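A minimal sketch of the serialise/deserialise step (the storage key and the fetchWords loader are placeholders):
var WORDS_KEY = "wordList";
function getWordList(fetchWords) {
  var cached = localStorage.getItem(WORDS_KEY);
  if (cached !== null) {
    return JSON.parse(cached);                            // deserialise on retrieval
  }
  var words = fetchWords();                               // e.g. an Ajax call or inlined data
  localStorage.setItem(WORDS_KEY, JSON.stringify(words)); // storage only holds strings
  return words;
}
Swap localStorage for sessionStorage if the list should only survive for the current session.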
You can place the data in the session and retrieve it; the same data can then be used on every page without fetching it each time.
If you need all the 40k words in all pages then you can use localStorage or sessionStorage. Just keep in mind sessionStorage will delete saved data when the tab/window is closed so the whole array will be downloaded again when the website is opened in new windows/tabs.
If you need only specific parts of the array on different pages, I would organise the array's elements into a taxonomy/categories (if you are able to), so that you can download only what is needed for a specific section of your application.
Whether this is practical depends on the composition of your array, i.e. whether it contains only words or complex objects. It will help to avoid a slow load of your website when it is visited for the first time.
If the array is always the same (there is no need to update it), I'd create a js file and then I'd add it to every html page. The browser's cache would do the rest to avoid unnecessary re-loading. Something like:
big-array.js file:
var myBigArray=[...]
In each html file
<html>
... whatever you need
<script src="/my-path/big-array.js"></script>
...my other scripts here
</html>
It's a bit difficult to answer this question properly as to do so would require more information about your hosting environment and what you have access to. If you have a server side language available, such as PHP, you could look at caching which is generally the most efficient way to handle data that is used repeatedly across pages. Perhaps you could post more info about what technologies you have available to you?

How to avoid cross-site scripting in JavaScript used in a Tcl script? [duplicate]

This question already has answers here:
What are the common defenses against XSS? [closed]
(4 answers)
Closed 8 years ago.
While working on Project Open, which is an open-source application, I found that the URL http://[host_ip]:8000/register/ includes JavaScript that is vulnerable to cross-site scripting and authentication bypass using SQL injection.
I want to know how I can avoid this. Do I have to add a filter for that, and how should I do it?
Please let me know if the problem is not clear.
SQL Injection
The universal answer to SQL Injection problems is “never send any user input to the database as part of an SQL string”. Anything that can go as a parameter should do so. Thus, instead of (in some dialect that might not exactly match what you're looking at):
db eval "SELECT userid FROM users WHERE username = '$user' AND password = '$pass'"
you do:
db eval "SELECT userid FROM users WHERE username = ? AND password = ?" $user $pass
# I personally prefer to put SQL inside {braces}… but that's your call
The key is that because the database engine just understands that these are parameters, it never tries to interpret them as SQL. Injection Impossible. (Unless you're using badly-written stored procedures.)
It gets much more complex where you want to have a table or column name specified by a user. That's a case where you can't send it as a parameter; such SQL identifiers must be interpreted by the SQL engine. Your only alternatives there are to either remap from user-supplied terms to ones that you control, or to rigorously validate.
Remapping is done by having a separate trivial table that maps from user-supplied names to ones you've generated:
db eval {SELECT realname FROM namemap WHERE externalname = ?} $externalname
Because the generated name is easy to guarantee to be free of nasty characters and not to be one of SQL's keywords, it can be safely used in SQL text without further quoting. You can also try doing the mapping per request (factor out the mapping code into a procedure, of course) by stripping all bad characters from the user-supplied name. A suitable regsub might be:
regsub -all {\W+} $externalname "" realname
but then we need additional checks to see that it isn't “evil”:
# You'll need to create an array, SQLidentifiers, first, perhaps like:
# array set SQLidentifiers {UPDATE - SELECT - REPLACE - DELETE - ALTER - INSERT -}
# But you can do that once, as a global "constant"
if {[regexp {^\d} $realname] || [info exists SQLidentifiers([string toupper $realname])]} {
    error "Bad identifier, $externalname"
}
As you can see, it's a good idea to factor out such transforms and checks into their own procedure so you get them right, once.
And you must test your code extensively. I cannot stress that strongly enough. Your tests must try really hard to break things, to make SQL injections via every possible field that anyone could pass into the software; not one of them should ever result in anything happening that your code does not expect.
It's probably a good idea to get someone else to write at least some of the tests; experience from the security community suggests that it is relatively easy to write code that you can't break yourself, but much harder to write code that someone else can't break. Also consider doing fuzz testing, sending computer-generated random data at the interface. In all cases, either things should give a graceful error or should succeed, but never ever cause the application to outright fail.
(You might well allow highly-authenticated users — system/database administrators — to outright specify SQL to evaluate so they can do things like setting the system up, but they're the minority case.)
Cross-site Scripting
This is actually conceptually quite similar: it's caused (principally) by someone putting something in your site that unexpectedly gets interpreted as HTML (or CSS, or Javascript) rather than as human-readable text (with SQL injection, it's something getting interpreted as SQL rather than as data). Because you can't do the equivalent of parameterised queries when going back to the client, you have to use careful quoting. You're strongly recommended to do the careful quoting by using a proper templating library that constructs a DOM tree (with data coming from users or from the database being only ever inserted as text nodes).
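In plain client-side JavaScript terms (a hand-rolled sketch rather than a real templating library; container and userComment are placeholders), inserting user data only as text nodes looks like this:
// UNSAFE: the string is parsed as HTML, so markup such as <img src=x onerror=alert(1)> executes
container.innerHTML = "<p>" + userComment + "</p>";

// SAFE: the string only ever becomes a text node, never markup
var p = document.createElement("p");
p.appendChild(document.createTextNode(userComment)); // or: p.textContent = userComment;
container.appendChild(p);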
If you want users to supply a marked up piece of text, consider either delivering it back as plain text before using Javascript to render it as, say, Markdown, or completely parsing the user-supplied text on the server to construct a model (e.g., DOM tree) of what should be delivered, before sending it back as HTML generated from that model.
You must not allow users to specify a location where you load a script or frame from. Even allowing them to specify links is worrying, but you probably have to permit that if you can't restrict things to straight plain text. (Consider adding a mechanism for listing all links that have been supplied by users. Consider marking all external links with rel=nofollow unless you can positively detect that they go to somewhere that you whitelist.)
Direct supply of HTML is a “highly-authenticated users only” operation.
(I told a lie above. You can do the equivalent of SQL parameterised queries. You write JS that the client executes to fetch the user data using an AJAX query, perhaps serialized as JSON, and then do DOM manipulations there to render it; in effect, you're moving the DOM construction from the server to the client, but you're still doing DOM construction as that's the core of how you get this right. You have to remember to never insert the things retrieved as straight HTML though. Clients must not trust the server too much.)
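A sketch of that client-side variant, with an invented /comments/42 endpoint returning JSON and a placeholder #comments element:
fetch("/comments/42")
  .then(function (response) { return response.json(); })
  .then(function (data) {
    var p = document.createElement("p");
    p.textContent = data.comment;   // rendered as text, never as HTML
    document.getElementById("comments").appendChild(p);
  });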
The comments I made above about testing apply here too. With testing for XSS, you're looking to inject something like <script>alert("boom!")</script>; any time you can get that in and cause a popup dialog (except by being a system administrator with permission to edit HTML directly) you've got a massive, dangerous hole to plug. (It's quite a good thing to try to inject, as it is very noticeable and yet fairly benign in itself.)
Don't try to just filter out <script> using regular expressions. It's far too hard to get that right.

Packing tree-like structures into a cookie

I would like to be able to store a tree-like structure in a cookie. Ideally, I would like to have something that easily serializes/deserializes a plain JavaScript object.
JSON might be a good option, but a quick googling did not turn up a mainstream approach for serializing to JSON from JavaScript.
What is the best way to approach the problem?
UPD
Related questions brought up Javascript / PHP cookie serialization methods?, which suggests using Prototype's Object.toJSON. I would prefer to stay with jQuery.
UPD2
It turned out that window.JSON.stringify might actually suffice in my case, but the Douglas Crockford library mentioned below seems like a good fallback to support browsers where the JSON property of the global object is not present.
JSON is your friend.
A free and recognized implementation made by Douglas Crockford is available here
I have used this method to read and store to HTML5's local storage without any problems.
JSON is undoubtedly a good option. To have it work cross-browser include this file in your page https://github.com/douglascrockford/JSON-js/blob/master/json2.js. Then use JSON.stringify() to convert to a string and store, and JSON.parse() to retrieve the object from the cookie.
Be aware that there can be quite low character limits on a single cookie's length, which any JSON-ified tree could hit, so you might want to preprocess your data before converting to JSON (e.g. replacing booleans with 1s and 0s, switching property names for abbreviated versions) and post-process to reverse these changes after retrieving it from your cookie.
If the amount of data you're storing is really large it may be better to store a session/identifier cookie which is used to retrieve the data from the server via an ajax request (or if you need a quick response on page load, output the data into a script tag) and save the data directly to the server via ajax requests instead of using a cookie.
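A minimal sketch of round-tripping a small tree through a cookie with JSON (the cookie name and tree shape are placeholders; keep the per-cookie size limit mentioned above in mind):
var tree = { name: "root", children: [ { name: "leaf", children: [] } ] };

// serialise and store; encodeURIComponent protects ";" and "," inside the JSON
document.cookie = "tree=" + encodeURIComponent(JSON.stringify(tree)) + "; path=/";

// read back and deserialise
function readCookie(name) {
  var match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? JSON.parse(decodeURIComponent(match[1])) : null;
}
var restored = readCookie("tree");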
One more JSON serialization implementation as a jQuery plugin: http://code.google.com/p/jquery-json/

How dangerous is it to store JSON data in a database?

I need a mechanism for storing complex data structures created in client-side JavaScript. I've been considering using the stringify method to convert the JavaScript object into a string, storing it in the database, and then pulling it back out and using the reverse parse method to give me the JavaScript object back.
Is this just a bad idea or can it be done safely? If it can, what are some pitfalls I should be sure to avoid? Or should I just come up with my own method for accomplishing this?
It can be done and I've done it. It's as safe as your database.
The only downside is it's practically impossible to use the stored data in queries. Down the track you may come to wish you'd stored the data as table fields to enable filtering and sorting etc.
Since the data is user-created, make sure you're using a safe method to insert it so you protect yourself from injection attacks (don't just blindly concatenate the data into a query string).
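As a sketch of such a safe insert, assuming a Node.js back end with the mysql2 package and a hypothetical structures table (any server-side language with parameterised queries works the same way):
const mysql = require("mysql2/promise");

async function saveStructure(userId, structure) {
  const conn = await mysql.createConnection({ host: "localhost", user: "app", database: "app" });
  // The JSON string is passed as a bound parameter, never concatenated into the SQL text,
  // so quotes or SQL fragments inside the user's data cannot break out of the value.
  await conn.execute(
    "INSERT INTO structures (user_id, data) VALUES (?, ?)",
    [userId, JSON.stringify(structure)]
  );
  await conn.end();
}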
It's fine so long as you don't deserialize using eval.
Because you are using a database, you need a server-side language to communicate with it, and any data you have is easily converted to and from JSON in most server-side languages.
I can't imagine a proper use case unless you have a sh*tload of JavaScript, it needs to be very performant, and you have exhausted all other possibilities such as caching, query optimization, etc.
Another downside of doing this is that you can't easily query the data in your database, which is always nice when you want to get any kind of reporting done.
And what if your JSON structure changes? Will you update all the stored strings in your database? Or will you force yourself to cope with the changes in the parsing code?
Conclusion
Imho it is not dangerous to do so but it leaves little room for manageability and future updates.
