I'm wondering if anyone else has experienced the following issue.
On a standalone .aspx page (not linked to a master page), I'm performing simple JS validation:
function validateMaxTrans(sender, args) {
    // requires at least one digit, numeric-only characters
    // (double backslash so the dot is actually escaped inside the string)
    var regexp = new RegExp("^[0-9]{1,40}(\\.[0-9]{1,2})?$");
    var txtAmount = document.getElementById('TxtMaxTransAmount');
    if (txtAmount.value.match(regexp) && parseInt(txtAmount.value, 10) >= 30) {
        document.getElementById('maxTransValMsg').innerHTML = "";
        args.IsValid = true;
    }
    else {
        document.getElementById('maxTransValMsg').innerHTML = "*";
        args.IsValid = false;
    }
}
Then as soon as I move this into a Master page's content page, I get txtAmount is null.
Is there a different way to access the DOM when attempting to perform client-side JS validation with master/content pages?
Look at the source of your rendered page within the master page. Many elements will end up with an ID like ctl00_ContentPlaceHolder1_TxtMaxTransAmount, so you'll need to adjust your validation accordingly. I will often just inject the IDs into the client document:
<script type="text/javascript">
    var controls = {
        'txtAmount': '<%= TxtMaxTransAmount.ClientID %>',
        ...
    };
</script>
I'd put this right before the end of your content area, to make sure the controls are already rendered. This way you can simply use window.controls.txtAmount to reference the server-side control's rendered ID. You could even make the right-hand value a document.getElementById('...') directly.
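For example, here's a rough sketch (not the exact code from the page) of the validator resolving the textbox through that injected map; it assumes the maxTransValMsg span is static HTML as in the original:
// Hypothetical adaptation of the validator, looking the textbox up
// through the injected controls map instead of a hard-coded ID.
function validateMaxTrans(sender, args) {
    var txtAmount = document.getElementById(window.controls.txtAmount);
    var msg = document.getElementById('maxTransValMsg');
    var regexp = new RegExp("^[0-9]{1,40}(\\.[0-9]{1,2})?$");
    var ok = regexp.test(txtAmount.value) && parseInt(txtAmount.value, 10) >= 30;
    msg.innerHTML = ok ? "" : "*";
    args.IsValid = ok;
}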
Are you using ASP.NET TextBoxes? If so, I believe you need to do something like document.getElementById('<%= TxtMaxTransAmount.ClientID %>').
Hope this helps
Tom
I currently only know JavaScript. The thing is, I looked up how to do this and some people mention something called localStorage. I have tried it, but for some reason when I jump to a new page those variables aren't kept. Maybe I am doing something wrong? I jump to a new page via
and all I want to do is select a certain image, take that image to a new page, and add it to that page.
I tried using the localStorage variables and even converting them with JSON.stringify and JSON.parse when trying to read localStorage from the other script. It didn't seem to work for me. Is there another solution?
This is some of my code. There are two scripts.
document.querySelectorAll(".card").forEach(item => {
    item.addEventListener("click", onProductClick);
});

var div;
var productImg;
var ratingElement;
var reviewCount;
var price;

function onProductClick() {
    // This took a week to find out (this.id)
    // console.log(this.id);
    div = document.getElementById(this.id);
    productImg = div.getElementsByTagName('img')[0];
    ratingElement = div.getElementsByTagName('a')[2];
    reviewCount = div.getElementsByTagName('a')[3];
    price = div.getElementsByTagName('a')[4];
    console.log(div.getElementsByTagName('a')[4]);
    var productData = [div, productImg, ratingElement, reviewCount, price];
    // note: price is a DOM element, so JSON.stringify(price) produces "{}" --
    // DOM nodes can't be serialized this way
    window.localStorage.setItem("price", JSON.stringify(price));
}
function TranslateProduct() {
    console.log("Hello");
}
This is script 2
var productPageImage = document.getElementById("product-image");
// note: nothing in the code shown ever stores anything under 'productdata-local'
var myData = localStorage['productdata-local'];
var value = JSON.parse(window.localStorage.getItem('price'));
console.log(value);

// function setProductPage(img){
//     if(productImg != null){
//         return;
//     }
//     console.log(window.price);
// }
To explain my thought process: in the first script I have multiple images that each have a click event listener. I wanted to click any given image, grab all the data about it and the product, pass that to another script (script 2), and add it to a dynamic second page. Yet when I print my variables they work in the first script and somehow don't in the second. This is my code. In the meantime I will look into cookies. Thank you!
Have you tried cookies?
You can always use cookies, but you may run into their limitations. These days cookies are not the best choice, even though they can preserve data beyond the current window session.
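If you do go the cookie route, a minimal sketch might look like this (assuming, as in your code, that price is a DOM element and you only need its text):
// First page: store the value (not the DOM node) in a cookie.
document.cookie = 'price=' + encodeURIComponent(price.textContent) + '; path=/';

// Second page: read it back.
function getCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? decodeURIComponent(match[1]) : null;
}
console.log(getCookie('price'));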
Or you can make a GET request to the other page by attaching your serialized object to the URL, as follows:
http://www.app.com/second.xyz?MyObject=SerializedData
The other page can then easily parse its URL and deserialize the data using JavaScript.
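For instance, here's a rough sketch of that approach using only serializable values (DOM nodes themselves can't be stringified) and the element names from your first script; second.html just stands in for whatever your second page is:
// First page: build a plain object from the clicked product and pass it in the URL.
var product = {
    img: productImg.src,
    price: price.textContent
};
window.location.href = 'second.html?product=' +
    encodeURIComponent(JSON.stringify(product));

// Second page: parse it back out of the query string.
var params = new URLSearchParams(window.location.search);
var restored = JSON.parse(params.get('product'));
console.log(restored.price);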
You can check this answer for more details: Pass javascript object from one page to other
So I am working on a userscript and there is one major step I'm trying to find the easiest solution for, since I am very new to JavaScript coding. I'm trying to write a function that will open a specified URL:
EXAMPLE: Homepage ("http://www.EXAMPLE.com")
(the page can be opened via window.open as _blank, or _self);
...when the parent (current) URL that is open
EXAMPLE: inner.href = ("www.EXAMPLE.com/new/01262016/blah/blah/blah");
...has text on the HTML document page that reads:
EXAMPLE TEXT from page ("www.EXAMPLE.com/new/01262016/blah/blah/blah"):
"this is the end of the page, please refresh to return back to homepage"
(TEXT: not the real keyword, but I want to use the phrase as a trigger for a setTimeout function that returns to the homepage.)
Any help will be much appreciated; you guys are very informative here. Thanks in advance.
I think I have the gist of your question. It is a straightforward, though quite intensive, task to scan the entire text content of a page for specific keywords with JavaScript. However, if the keywords appear more than once (on multiple pages that should not redirect), your users will get undesirable results.
A simple solution would be to add a class="last-page" attribute to the body tag of the final page and run a function that checks for it. Something like...
HTML
<body class="last-page"><!--page content--></body>
JS
window.onload = function() {
    var interval = 5000; // five seconds
    if (document.body.classList.contains('last-page')) {
        setTimeout(function() {
            window.location.assign('http://the-next-page.com/');
        }, interval);
    }
};
Alternatively, if you have the ability to wrap the specified text in a uniquely identified html-tag, such as...
<span id="last-page">EXAMPLE TEXT</span>
...then the presence of this tag can be checked on each page load - similar to the function above:
window.onload = function() {
    var interval = 5000;
    if (document.getElementById('last-page')) {
        setTimeout(/* code as before */);
    }
};
Yet another solution is to check the page URL against a variable...
window.onload = function() {
    var finalURL = 'http://the-last-page.com/blah/...';
    if (window.location.href === finalURL) {
        /* same as before */
    }
};
If this kind of thing is not an option, please leave a comment and I'll add a function that gathers a page's entire text content and compares adjacent words to a pre-defined set of keys.
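In the meantime, here is a minimal sketch of that text-scanning approach, using the example phrase and homepage from the question:
window.onload = function() {
    var phrase = 'this is the end of the page, please refresh to return back to homepage';
    var interval = 5000;
    // innerText covers only visible text; use textContent if hidden text should count too
    if (document.body.innerText.indexOf(phrase) !== -1) {
        setTimeout(function() {
            window.location.assign('http://www.EXAMPLE.com');
        }, interval);
    }
};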
A JavaScript "if" statement is written in the header template of all pages, but it is unnecessary on one specific page. Is it possible to disable that statement for that page? Below is the script that makes external links open in a new tab. Can I disable this small script only on a Google Custom Search page? The result links there have Google (external) URLs that redirect back to the website (internal), which is why the script treats them as external. Or is there a better way than disabling the if statement? If anyone knows how to solve this problem, please help.
JavaScript:
$(document).ready(function() {
    $('a').each(function() {
        var a = new RegExp('/' + window.location.host + '/');
        if (!a.test(this.href)) {
            $(this).attr("target", "_blank");
        }
    });
});
One way to do it for multiple pages is like this:
var excludedPages = ['blockpage1.html', 'blockpage2.html'];

for (var i = 0; i < excludedPages.length; i++) {
    if (location.href.indexOf(excludedPages[i]) !== -1) {
        // do something if page found
        console.log("this page is blocked for extra code");
    } else {
        // do something if page not found in the list
        console.log("this page is not included in block list");
    }
}
EDIT
Note: The only thing to be aware of with JavaScript is that it runs on the client side (browser side), so anyone with basic web development knowledge can change or bypass the blocking code or edit the site content. That makes it possible to reach whatever page was supposed to be blocked, so it all depends on how important your blocking mechanism and strategy are.
Probably the easiest way would be to set a flag variable on the custom search page above the script, for example:
var keepInternal = true;
And then modify one line in your script to check for that flag:
$(document).ready(function() {
    $('a').each(function() {
        var a = new RegExp('/' + window.location.host + '/');
        // window.keepInternal avoids a ReferenceError on pages where the flag isn't declared
        if (!a.test(this.href) && !window.keepInternal) {
            $(this).attr("target", "_blank");
        }
    });
});
I have been trying to write a script that fetches results from my university website. Someone suggested that I use Mechanize and it does look really promising.
In order to get the result, one has to first enter the roll number and then select the session.
Simulating the first part has been easy with Mechanize, but I'm having problems with the second part, as it is actually a JavaScript onchange event.
I read the function definition in the JavaScript, and this is what I have come up with so far. Mechanize can't handle the onchange event, and when I manually pass the values that the JavaScript function would normally set, the same page is returned.
Here's the JavaScript code:
function __doPostBack(eventTarget, eventArgument) {
    var theform;
    if (window.navigator.appName.toLowerCase().indexOf("microsoft") > -1) {
        theform = document.Form1;
    }
    else {
        theform = document.forms["Form1"];
    }
    theform.__EVENTTARGET.value = eventTarget.split("$").join(":");
    theform.__EVENTARGUMENT.value = eventArgument;
    theform.submit();
}
I set a breakpoint in Firebug and found the value of __EVENTTARGET to be 'Dt1', whereas __EVENTARGUMENT stays ''.
The Ruby script that I have written to do this is:
require 'mechanize'

# set up the agent to mimic Firefox on Windows
agent = Mechanize.new
agent.keep_alive = true
agent.user_agent = 'Windows Mozilla'
page = agent.get('http://www.nitt.edu/prm/nitreg/ShowRes.aspx')

# use Mechanize to get past the first form presented
result_form = page.form('Form1')
result_form.TextBox1 = '205110018'
page = agent.submit(result_form, result_form.buttons.first)

# the second hurdle that we encounter;
# here I'm trying to get past the JavaScript by doing what it does manually
result_form = page.form('Form1')
result_form.field_with('Dt1').options.find { |opt| opt.value == '66' }.select
result_form.field_with(:name => '__EVENTTARGET').value = 'Dt1'

# here I should get the page with the results
page = agent.submit(result_form)
pp page
Can anyone tell me what I'm doing wrong?
It looks like you have it working already! Try using puts page.body instead of pp page and you'll see the contents of the page. You can use Mechanize search functions to scrape the data from the page.
Also, you could simplify that code to:
result_form['__EVENTTARGET'] = 'Dt1'
result_form['Dt1'] = '66'
I load this JS code from a bookmarklet:
function in_array(a, b)
{
    for (var i in b)       // 'var' keeps i from leaking into the global scope
        if (b[i] == a)
            return true;
    return false;
}

function include_dom(script_filename) {
    var html_doc = document.getElementsByTagName('head').item(0);
    var js = document.createElement('script');
    js.setAttribute('language', 'javascript');
    js.setAttribute('type', 'text/javascript');
    js.setAttribute('src', script_filename);
    html_doc.appendChild(js);
    return false;
}
var itemname = '';
var currency = '';
var price = '';
var supported = new Array('www.amazon.com');
var domain = document.domain;

if (in_array(domain, supported))
{
    include_dom('http://localhost/bklts/parse/' + domain + '.js');
    alert(getName());
}
[...]
Note that the getName() function is in http://localhost/bklts/parse/www.amazon.com.js. This code only works the second time I click the bookmarklet (the function doesn't seem to be loaded until after the alert()).
Oddly enough, if I change the code to:
if (in_array(domain, supported))
{
    include_dom('http://localhost/bklts/parse/' + domain + '.js');
    alert('hello there');
    alert(getName());
}
I get both alerts on the first click, and the rest of the script functions. How can I make the script work on the first click of the bookmarklet without spurious alerts?
Thanks!
-Mala
Adding a <script> tag through DHTML makes the script load asynchronously, which means the browser will start loading it but won't wait for it to finish before running the rest of the script.
You can handle events on the tag object to find out when the script is loaded. Here is a piece of sample code I use that seems to work fine in all browsers; although I'm sure there's a better way of achieving this, I hope it points you in the right direction:
Don't forget to change tag to your object holding the <script> element, fnLoader to a function to call when the script is loaded, and fnError to a function to call if loading the script fails.
Bear in mind that those functions will be called at a later time, so they (like tag) must still be available then (a closure would normally take care of that).
tag.onload = fnLoader;
tag.onerror = fnError;
tag.onreadystatechange = function() {
    if (!window.opera && typeof tag.readyState == "string") {
        /* Disgusting IE fix */
        if (tag.readyState == "complete" || tag.readyState == "loaded") {
            fnLoader();
        } else if (tag.readyState != "loading") {
            fnError();
        }
    } else if (tag.readyState == 4) {
        // a 200 status means the script loaded successfully
        if (tag.status == 200) {
            fnLoader();
        } else {
            fnError();
        }
    }
};
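Applied to the bookmarklet from the question, a rough sketch (assuming a browser that fires onload on dynamically added script elements) could pass a callback into include_dom and only call getName() once the file has loaded:
// Hypothetical variant of include_dom that takes load/error callbacks.
function include_dom(script_filename, fnLoader, fnError) {
    var head = document.getElementsByTagName('head').item(0);
    var js = document.createElement('script');
    js.type = 'text/javascript';
    js.src = script_filename;
    js.onload = fnLoader;   // runs once the external file has executed
    js.onerror = fnError;   // runs if the file fails to load
    head.appendChild(js);
}

// Usage:
include_dom('http://localhost/bklts/parse/' + domain + '.js', function() {
    alert(getName());
});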
It sounds like the loading of the external script (http://localhost/bklts/parse/www.amazon.com.js) isn't blocking execution until it has loaded. A simple timeout might be enough to give the browser a chance to update the DOM and then immediately queue up the execution of your next block of logic:
//...
if (in_array(domain, supported))
{
    include_dom('http://localhost/bklts/parse/' + domain + '.js');
    setTimeout(function() {
        alert(getName());
    }, 0);
}
//...
In my experience, if zero doesn't work for the timeout amount, then you have a real race condition. Making the timeout longer (e.g. 10-100) may fix it for some situations but you get into a risky situation if you need this to always work. If zero works for you, then it should be pretty solid. If not, then you may need to push more (all?) of your remaining code to be executed into the external script.
The best way I could get this working: don't.
Since I was calling the JS from a small loader bookmarklet anyway (which just tacks the script onto the page you're looking at), I modified the bookmarklet to point its src at a PHP script that outputs the JS code, taking document.domain as a parameter. That way, I simply use PHP to include the external code.
Hope that helps someone. Since it's not really an answer to my question, I won't mark this as the accepted answer. If someone has a better way, I'd love to know it, but I'll be leaving my code as is:
bookmarklet:
javascript:(function(){document.body.appendChild(document.createElement('script')).src='http://localhost/bklts/div.php?d='+escape(document.domain);})();
localhost/bklts/div.php:
<?php
print("
// JS code
");

$supported = array("www.amazon.com", "www.amazon.co.uk");
$domain = @$_GET['d']; // '@' suppresses the undefined-index notice when 'd' is missing

if (in_array($domain, $supported))
    include("parse/$domain.js");

print("
// more JS code
");
?>