I've been researching this like crazy, but I can't find a way to keep this error from triggering. I'm hoping someone here can help me.
I have a drop-down list on my page that I'm creating as a server control, but which I'm dealing with entirely client-side at run time. The reason I'm using a server control at all is that I need it to trigger an AJAX UpdatePanel elsewhere on my page. Anyway, this drop-down list starts blank, but gets populated with options by some jQuery code based on user input. Up to this point there's no problem, but when the user makes a selection from this drop-down, I get the ClientScriptManager error. Selecting from this drop-down triggers an AJAX JSON call to get data from the server.
I'm registering all my client-side script files (including the one that contains the offending JSON call) with ClientScriptManager.RegisterClientScriptInclude. Registering the drop-down itself with RegisterForEventValidation doesn't work, because the drop-down has no options at load time.
The application works in spite of this, but the error is defeating some enhancements I want to make, so I need to put this to rest. You can see the application (and view the error in your browser's debugging console) at https://www.heritagecutter.com/MillingCalc/; the drop-down you're looking for is the one headed "Series", which becomes active after you make selections in Material Group and Material Type above. The error appears after you select a series.
Thanks in advance for any guidance.
Found a solution, for anyone who may stumble across this in the distant future:
So the situation with my code was that I was using mostly HTML objects, with ASP.NET server controls for those few instances where I was dealing with the server, even though those server controls were triggering client-side code at runtime. I did this because I are noobsauce.
I was already using client-side AJAX calls to fetch my data from the server, so I cleared out the clever-clogs server controls (my offending drop-down list, and the AJAX UpdatePanel in toto) and replaced them with HTML objects. I kept the .aspx page that contains my code, because I needed the code-behind to store the web methods being called by client-side script, and I left my web methods and AJAX code alone, because they didn't need to be changed. The error went away on the first try.
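For anyone curious, the client-side call to a page web method is just an AJAX POST to the code-behind. A rough sketch of what mine looks like (the page, method, and parameter names here are placeholders, not my actual code):

function getSeriesData(seriesId) {
  $.ajax({
    type: 'POST',
    url: 'MillingCalc.aspx/GetSeriesData',        // hypothetical [WebMethod] in the code-behind
    data: JSON.stringify({ seriesId: seriesId }),
    contentType: 'application/json; charset=utf-8',
    dataType: 'json',
    success: function (response) {
      // ASP.NET wraps a page method's return value in a "d" property.
      console.log(response.d);
    }
  });
}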
I go away somewhat embarrassed, but wiser for it.
I'm trying to automate the workflow of a webpage for my company's inventory system. The page is generated by the server-side logic of an ASP.Net page, which I don't have access to. The page has several fields on it to allow you to enter a new container barcode, the item that should go in the container, etc. Each of these fields has an onchange event listener hooked up to it which calls the page's __doPostBack() function to verify the entered data. If the data is verified, the page code is re-served with the data entered so far, and focus is set to the next field on the form.
I want to automate this page with a userscript in Chrome. I started by using ViolentMonkey to inject a custom script, but I could only get the script to trigger on the initial load, not after each data entry. After this, I tried using Chrome Local Overrides to change __doPostBack() to try to capture the data I need to automate the page. That also only works once; after a field is filled and loses focus and new HTML is served, it overwrites Chrome's local copy.
I think that my problems are being caused by an asynchronous refresh of the entire page contents, which wipes out the injected userscript and Chrome's Local Override without triggering the normal page refresh listeners in Chrome Overrides or ViolentMonkey to re-inject the modified code. Does anyone have any thoughts on how I could modify the JavaScript in such a way that it would persist after the page content is replaced with new HTML?
P.S. I don't think the code itself is relevant to this particular problem, but if anyone thinks it would be helpful to share a limited section of the client-side code, let me know.
Edit 1: Here's a more in-depth view of what I'm trying to accomplish, and the progress I've made so far.
My Original Plan
The user loads the page. ViolentMonkey injects a userscript which issues a series of prompts, collecting data on the range of new barcodes that the user would like entered into the system (specifically, the barcode prefix, the starting barcode number, and the ending barcode number). These values are stored in localStorage.
After this data has been collected and validated by the user, the page loads normally.
The user fills out the fields as normal. After each field is filled out (with the exception of the Container Description field), the page pushes focus to the next field. (For example: <script language="javascript"> try { document.getElementById('txtContDesc').focus() } catch (e) { } </script>. The id of the field to focus is dynamically changed via the server logic.)
I need to collect the User Badge, Container Type, and Destination Barcode values so that I can refill them later when I automate the form. My original plan was to add an onfocus event listener to the Container Description field, since focus will be shifted to it once the Destination Barcode field has been verified. I will know at that point that the user has successfully entered a valid entry for each of the fields above the Container Description field, and I would then be able to collect these values and store them in localStorage.
Once I have all the data needed for the form, I would pilot the form using the userscript in ViolentMonkey and the data stored in localStorage, to persist data across page refreshes.
Other Alternatives:
The eventListener idea on an element doesn't work, because ASP.NET updates the page with fresh code every time a field is verified, wiping out the listener. It also doesn't trigger a refresh, so ViolentMonkey doesn't rerun my userscript.
My other thought was to modify __doPostBack(). The __doPostBack() function looks like this (as far as I can tell):
<script type="text/javascript">
    var theForm = document.forms['formNewContainer'];
    if (!theForm) {
        theForm = document.formNewContainer;
    }
    function __doPostBack(eventTarget, eventArgument) {
        console.log("Form submitted");
    }
</script>
It is called on verified fields with the following onchange handler:
onchange="javascript:setTimeout('__doPostBack(\'ctl00$newContPage$txtBarcode\',\'\')', 0)"
My goal would be to modify __doPostBack() to save the information I need to localStorage before executing the rest of __doPostBack() without changing it.
(Note: __doPostBack() here looks incredibly simplistic, so I think I'm missing some information about how ASP.NET works here. This is outside the scope of the question, though, unless it's relevant for what I'm trying to do.)
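(For reference, the version of __doPostBack() that WebForms normally emits looks roughly like the following, which suggests the simplistic one above isn't the whole story:)

function __doPostBack(eventTarget, eventArgument) {
    // Stock behavior: stash the triggering control and its argument in
    // hidden fields, then submit the whole form to the server.
    if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
        theForm.__EVENTTARGET.value = eventTarget;
        theForm.__EVENTARGUMENT.value = eventArgument;
        theForm.submit();
    }
}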
I was able to successfully modify __doPostBack() in this way using Chrome Local Overrides to serve myself a local copy of the page on page load, instead of the server version. But this only works for the first __doPostBack() request. After the first request, the server serves me new code. As with ViolentMonkey, the lack of a refresh trigger prevents Chrome Local Overrides from re-serving my local copy, and I'm served code without the __doPostBack() modification.
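The modification itself can be written as a wrapper, so the stock behavior stays untouched. A sketch only; the element id is guessed from the onchange handler above:

(function () {
  // Keep a reference to the page's __doPostBack, save what we need to
  // localStorage, then hand off to the original so nothing else changes.
  var original = window.__doPostBack;
  window.__doPostBack = function (eventTarget, eventArgument) {
    var barcode = document.getElementById('ctl00_newContPage_txtBarcode'); // guessed client id
    if (barcode) {
      localStorage.setItem('lastBarcode', barcode.value);
    }
    return original.apply(this, arguments);
  };
})();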
So that's where I'm at. I'll try adding a global listener like #wOxxOm suggested, and see where that gets me.
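(For reference, the appeal of a listener at the document level is that focusin bubbles, and document itself is never replaced by the partial updates, so the handler survives the re-served HTML. A sketch, where every id except txtContDesc is a guess:)

document.addEventListener('focusin', function (e) {
  if (e.target.id !== 'txtContDesc') return;
  // Focus reached Container Description, so the earlier fields validated.
  ['txtUserBadge', 'txtContType', 'txtDestBarcode'].forEach(function (id) {
    var el = document.getElementById(id);   // hypothetical field ids
    if (el) localStorage.setItem(id, el.value);
  });
});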
I ended up using a Chrome extension called "Run Javascript" (it has an elephant for its logo), which runs the JavaScript code even on AJAX requests.
Link: https://chrome.google.com/webstore/detail/run-javascript/lmilalhkkdhfieeienjbiicclobibjao/
I don't see how this is possible at all. You need to work with the people that created that web page.
Asp.net and the server side code will be EXTENSIVE .net code (c# or vb.net). Each of those events you trigger will set variables and server side session (or viewstate) values for the code behind to run.
That's how asp.net pages work. You post back, the page travels up to the server, THEN the .net code behind runs. That code will modify the page, modify controls, and modify the view state for that page. And after that code runs (say on a button click), your client side will receive a whole new fresh page - and that will blow out any JavaScript you try to inject (you would have to re-inject it each time). But it gets worse: quite a bit of that code behind also checks whether the page settings have been messed with, and will often reject the postback if they have.
About the only way to do this would be to write some desktop software that would "house" or "host" a full COM object copy of the web page, and thus automate that given page (and even then, you're still fighting a losing battle).
Hint:
Web development, business logic, and a functional business application are NOT some simple markup and JavaScript (despite what that lame 2-week HTML course tells you).
This is an application, an asp.net application. Trying to think of this as just some markup and JavaScript is actually quite silly here. It's not how you write or build business solutions for a company.
If you can't write and modify the code on the web-server side of things, then find out if that site has some kind of web API or whatever.
But really, this is silly. Unless this is some simple college project, or some hacked-up HTML page and some JavaScript? Forget this approach - you're dealing with FAR too much server-side code behind.
In fact, as noted, asp.net has quite a few built-in features that check whether the page being posted back has been messed with. And you can never really be sure that you've set the right values, or that the proper code behind has run to set up row values, database primary key values, and a WHOLE boatload of state values that are probably 100% saved in server-side session() based class objects - objects that never leave the server.
Trying to modify such a system, or assuming you can, with only client-side tools is not going to work - it's just not.
The code behind runs, re-processes the page with .net code, and then sends the whole page back down - all with new state values etc. This is not some lame HTML + JavaScript; it is a full server-side code-driven system written in c# .net code.
Let's suppose we have an ASP.Net Web Form, Page.aspx, in which we do the following:
<script>
    $(document).ready(function () {
        // grab the standard ASP.Net form
        var form = document.forms['ctl01'];
        form.addEventListener("submit", function (event) {
            event.preventDefault();
            sendData(form);
        });
    });

    function sendData(form) {
        const xhr = new XMLHttpRequest();
        const fd = new FormData(form);
        xhr.addEventListener("load", function (event) {
            document.open();
            document.write(event.target.response);
            document.close();
        });
        xhr.addEventListener("error", function (event) {
            alert('Error!');
        });
        xhr.open("POST", "Page.aspx");
        xhr.send(fd);
    }
</script>
The reason for this setup is that I want to take advantage of the XMLHttpRequest progress event to, erm, show some progress indication, because the postback may include files that take some time to upload.
The load event handler works great. As a result of the POST I get the contents of Page.aspx again and replace my current document. So it seems that some kind of POST actually does happen. BUT, there is one problem: in Page_Load(), the Request.Form and Request.Files collections are empty, so I can't process the form/files.
I tried adding the following header but without much luck:
xhr.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
Do you think it is possible to make a successful POST (with the page receiving data) using XMLHttpRequest/FormData, or is there some fundamental limitation that prevents this from happening for ASP.Net pages?
Thanks!
Well, any ajax call can certainly run some code behind, but since the web page IS STILL sitting on the client side in the browser, things like controls and the page state are NOT available.
So you don't want a post back, but now you're asking for a post back? (I am confused.) I mean, either you post back the whole page (a standard event post back), or you drop the controls and things in question into an update panel, and then ONLY that part of the page is posted back. I fail to see any advantage in trying to send "more" of the page in an ajax call when the WHOLE idea is to NOT send the page in the first place, right?
I mean, if you need some extra values in the ajax call, then you have to get/grab those bits and parts from the page and include that information in your ajax call (perhaps as a JSON string).
Without a post back, the viewstate and all of the controls are STILL just sitting on the user's desktop in the browser. The code behind, and even the page class object + code, ARE OUT OF scope at this point in time. Only upon a post back does the WHOLE page travel up to the server - code behind runs, you have use of the full controls on the page, and THEN the whole page travels back down to the client side (and that quite much means your JavaScript code will have to re-start!).
However, if you need a few parts and values in a page and don't want a full post back? Then simply put those parts into an update panel. You can then, in JavaScript, for example do this:
var myAspNetButton = document.getElementById("Button1");
myAspNetButton.click();
The above will save you a truckload of work - no need to wire up a bunch of js and web methods, since the js code simply CLICKS your button (the one inside the update panel). In this case, of course, a whole-page post back does NOT occur, but the page load and events do in fact fire - these so-called "partial" page post backs mean that the code behind is LIMITED to the information (controls) inside of that update panel.
However, as noted, if you do a full post back, then the browser page NOW travels up to the server - and that quite much means any js code client side is toast and can't keep running, since a whole NEW fresh copy of the web page is about to travel down to the client side again - and that re-starts your js code.
As noted, you can do a partial page post back with an update panel. And in js you can fire a "click" - or in fact MOST events of asp.net controls on that page - with js.
But, then again?
You don't want a full page post back, and you likely don't want all of the controls and the whole page to travel up to the server. But then you're wondering why you can't use or access controls on the page from ajax calls? Well, as noted, the server-side code behind is OUT of scope and OUT of context when you make ajax calls. The web page does NOT exist server side. We don't know if the user turned off their computer, or will never do anything in that client-side browser and web page. The server at that point in time has lost ALL KNOWLEDGE of that web page. So any ajax call does not have use of the controls on the page, and does not even have use of viewstate either.
This tends to mean that, say, when using an ajax system to upload files? Well, you can't store the status in the web page server side - since the page DOES NOT really exist at that point in time. So you can call some web methods, and about the ONLY way to keep some values in context is to use session(), since that does not need the web page, or the view state, to function and work.
The major down side of session(), of course, is that if some user has two tabs open, or even two different browsers open? Well, session() is SHARED between those pages - so while session() is great, it is also shared between ALL copies of web pages for that given user - and thus you need to add code to separate out each session "set" of values, or simply hope that the user will not have two pages in operation for such file uploads.
But to answer your question?
You can achieve partial page post backs by using an update panel. And thus you can have timer code or js code client side continue to run, since a full page life-cycle does NOT occur. In other words, you control what part of the web page is sent up to the server side by using an update panel.
If you don't use an update panel, then any ajax calls you make WILL have to pass the data from the browser side, since it is STILL just sitting on the user's desktop, and any code behind can't grab, nor reach out, or see, or even KNOW that the web page exists client side.
So you either pass extra values from the web page with your ajax calls, OR you can use an update panel, drop controls inside, and then the partial page post back will ONLY send up, and have use of, what you want inside of that panel. So you have two really great choices.
And in either case (a full page post back or a partial one)?
Grab a reference to the client-side asp.net button, and fire off a .click() event. You can, I suppose, wire up all kinds of __doPostBack calls in js, but with update panels and the click() trick, you have a choice of how much of the page gets sent up, and it is all quite much automatically wired up for you - which saves a TRUCKLOAD of work that you would otherwise have to manually write and wire up.
So you get that "partial" page post back, and in that case the code and events inside of that update panel can update/see/use/modify controls in that update panel, but anything outside of that update panel will NOT have traveled up to the server.
And if you don't use an update panel, then any ajax call is just that - a direct call to the server side - but the web page STAYS client side. Thus the on-load code, the controls, and in fact the WHOLE class form object that represents that web page ARE STILL SITTING client side. As noted, no on-load, no code behind can touch or even see or know about the values of controls on that page, and there is also no ViewState either.
The WHOLE idea of ajax calls is that you did not want, and never did want, the page to travel up to the server, be re-rendered, and then be re-sent back down to the client side. But you need to be 100% crystal clear here:
Without a page post back (or a partial one with update panels), the web page does NOT exist any more server side. Web pages are state-less, and once the round trip has occurred (web page up to server, code behind runs, page sent back to client), then as far as the server is concerned (and you, the developer), that web page is GONE and DOES NOT exist anymore at all. It is out of scope, and from your point of view (and the server's point of view) that web page does NOT exist anymore the instant it has been sent back down to the client side. As noted, the only practical exception here is session() values - since they are not part of any given web page.
So, you have to decide if you want a partial page post back to get at and modify some values with server side code.
Or you pass the values with your ajax calls, and the returned values can then update the browser controls. And of course, once you eventually do that full page post back, the code behind can certainly see + use any controls that the client js code changed - but it can only do so with that full page post back or, as noted, with the controls limited to an update panel if we are talking about a partial page post back.
You either have to include additional data in your ajax calls, or consider using a partial page post back to send up part of the web page if you need to modify that part of the page with code behind. Or, as noted, return information with your ajax call, and then update the client side. There's not really an in-between choice here.
I am currently working on a project to find empty classrooms in our school in real time. For that purpose, I need to extract the substitutions published on our school page (https://ssnovohradska.edupage.org/substitution/?), since there might be additional changes at any time.
But when I try to extract the HTML source code and parse it with bs4, it cannot find the divs (class: "section print-nobreak") that contain the substitution text. When I took a look at the page source code (Ctrl+U), I found that there is only JavaScript that renders it all directly.
Is there any way to extract the HTML after the JavaScript output has already been rendered?
Thanks for the help!
Parsing HTML is unfortunately necessary to solve your problem. But I will explain how to find ways to avoid that in your future projects (not based on this website).
You've correctly noticed that the text is created by JavaScript code running on the page. This could also indicate that the data is either loaded from another resource (an XHR/fetch call getting a response from an API) or stored as JSON/JS inside the website's code. (Or it is generated by an algorithm, but that is unlikely to be the case on websites like this.)
The website actually uses both methods (the initial render gets data stored inside the website's code, but when you switch dates on the calendar it makes AJAX requests). You can see this by searching for ReactDOM.render(React.createElement( in the code. They're providing an HTML string to the createElement call, so I would suggest looking into the AJAX way of doing things.
Now, to check where the resource is located, all you need to do is open Developer Tools in your favorite browser (usually Ctrl+Shift+I) and navigate to the Network tab. With the Network tab open, cause the website to load external data, for example by pressing a date on the "calendar bar".
Here you will notice many external requests, but we're actually looking only for XHR calls. Click on the XHR button next to the "Filter" text field. That should result in only one request being shown.
Unfortunately for us, the response only contains HTML. Also, the API calls are protected - they require a PHP session ID and some sort of token (__gsh), or they fail. So, going back to step 1, it seems our only solution is to use regular expressions to find the text between "report_html":"<div class and </div></div></div> in the source code, if you're interested in today's date only. If you want to get the contents for tomorrow or any other date, you will need to either fetch the page, save the cookies, and find the token to supply to the request before making it, or use something like puppeteer or pyppeteer (since you've mentioned BS4) and load the webpage in that. If you aren't fetching the data that often, you should be fine overall.
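If you go the puppeteer route, a minimal sketch might look something like this (the URL and class name are from your question; the waiting strategy and everything else are assumptions):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait until the network goes quiet so the in-page JavaScript has rendered.
  await page.goto('https://ssnovohradska.edupage.org/substitution/?', { waitUntil: 'networkidle0' });
  // Grab the rendered substitution sections, the same ones bs4 couldn't see.
  const sections = await page.$$eval('.section.print-nobreak', els => els.map(el => el.outerHTML));
  console.log(sections.join('\n'));
  await browser.close();
})();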
Imagine a page that shows a newsfeed. As soon as a user requests said page, we go to the database (or whatever) and get the newsfeed. After the page is already loaded, we also have new news items added dynamically (through ajax/json). This means we effectively have two mechanisms to build a newsfeed: one with our server-side language for the initial page request, and one with JavaScript for any new items.
This is hard to maintain (because when something changes, we have to change both the JS mechanism and the Server side mechanism).
What is a good solution for this? And why? I've come up with the following scenarios:
Giving javascript an initial set, somewhere in the html, and letting it build the initial view when the document is ready;
Letting javascript do an ajax request on document ready to get the initial data; or
Keeping it as described above, having a JS version and an SS version.
I'm leaning towards the first scenario, and for that I have a follow-up question: how do you give JS the dataset? In a hidden div or something?
Doing one more AJAX request to get the data isn't really costly and lets you have one simple architecture. This is a big benefit.
But another benefit you seem to forget is that by always serving the same static resources, you let them be cached.
It seems to me there's no benefit in embedding data in your initial page. Use only one scheme, AJAX, and do an initial request.
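To illustrate the single-scheme idea, here is a rough sketch; the /api/news endpoint, the polling interval, and the markup are invented for the example:

function renderItems(items) {
  // One rendering path used for both the initial load and later updates.
  const feed = document.getElementById('newsfeed');
  for (const item of items) {
    const li = document.createElement('li');
    li.textContent = item.title;
    feed.appendChild(li);
  }
}

async function loadFeed(since) {
  // Hypothetical endpoint; "since" narrows the request to newer items only.
  const url = since ? '/api/news?since=' + since : '/api/news';
  const res = await fetch(url);
  renderItems(await res.json());
}

document.addEventListener('DOMContentLoaded', function () {
  loadFeed();                                              // initial request on document ready
  setInterval(() => loadFeed(Date.now() - 60000), 60000);  // poll for newer items
});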
Use a separate news provider that the page loads, serving the data as-is. This keeps things simple and loads quickly enough to be available nearly as fast as any embedded-but-hidden data set.
Internet Explorer (with default settings, which I generally assume will be in effect on the desktops of the Great Unwashed) seems to dislike the idea of accepting attachment content in an HTTP response if the corresponding request wasn't made directly from a user action (like a "click" handler, or a native form submit). There are probably more details and nuances, but that's the basic behavior that's frustrating me.
It seems to me that this situation is common: the user interface in front of some downloadable content — say, a prepared PDF report — allows for some options and inputs to be used in the creation of the content. Now, as with all forms that allow the user to stipulate how an application does something, it's possible that the input will be erroneous. Not always, but sometimes.
Thus there's a dilemma. If the client tries to do something fancy, like run an AJAX transaction to let the server vet the form contents, and then resubmit to get the download, IE won't like that. It won't like it because the actual HTTP transaction that carries the attachment back will happen not in the original user-action event handler, but in the AJAX completion callback. Worse, since the IE security bar seems to think that the solution to all one's problems is to simply reload the outer page from its original URL, its invitation to the user to go ahead and download the suspicious content won't even work.
The other option is to just have the form fire away. The server checks the parameters, and if there's anything wrong it responds with the form-container page, peppered appropriately with error messages. If the form contents are OK, it generates the content and ships it back in the HTTP response as an attached file. In this case (I think), IE is happy because the content was apparently directly requested by the user (which is, by the way, a ridiculously flimsy way to tell good content from bad content). This is great, but the problem now is that the client environment (that is, the code on my page) can't tell that the download worked, so the form is still just sitting there. If my form is in some sort of dialog, then I really need to close that up when the operation is complete — really, that's one of the motivations for doing it the AJAX way.
It seems to me that the only thing to do is equip the form dialogs with messaging that says something like, "Close this when your download begins." That really seems lame to me because it's an example of a "please push this button for me" interface: ideally, my own code should be able to push the button when it's appropriate. A key thing that I don't know is whether there's any way for client code to detect that form submission has resulted in an attachment download. I've never heard of a way to detect that, but that'd break the impasse for me.
I take it you're submitting the form with a different target window; hence the form staying in place.
There are several options.
Keep the submit button disabled and do ongoing validation in the background, polling the form for changes to fields and then firing off the validation request for a field as it changes. When the form is in a valid state, enable the button; when it isn't, disable the button. This isn't perfect, as there will tend to be a delay, but it may be good enough for whatever you're doing.
Do basic validation that doesn't require round-trips to the server in a handler for the form's submit event, then submit the form and remove it (or possibly just hide it). If the further validation on the server detects a problem, it can return a page that uses JavaScript to tell the original window to re-display the form.
Use a session cookie and a unique form ID (the current time from new Date().getTime() would do); when the form is submitted, disable its submit button but keep it visible until the response comes back. Make the response set a session cookie with that ID indicating success/failure. Have the window containing the form poll for the cookie every second or so and act on the result when it sees it. (I've never done this last one; I'm not immediately seeing why it wouldn't work. There's a rough sketch after this list.)
I expect there are about a dozen other ways to skin this cat, but those are three that came to mind.
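Here's a rough sketch of that third idea, assuming the server echoes the token back in a cookie on the attachment (or error-page) response; all of the names here are hypothetical:

function submitWithDownloadToken(form) {
  // Unique form ID; the server is assumed to set a cookie named
  // "download_<token>" with value "ok" or "fail" when it responds.
  var token = String(new Date().getTime());
  form.elements['downloadToken'].value = token;   // hypothetical hidden input
  var button = form.querySelector('[type=submit]');
  button.disabled = true;
  form.submit();

  var poll = setInterval(function () {
    var match = document.cookie.match(new RegExp('download_' + token + '=([^;]+)'));
    if (!match) return;                           // response not seen yet; keep polling
    clearInterval(poll);
    button.disabled = false;
    if (match[1] === 'ok') {
      var dialog = form.closest('.dialog');       // hypothetical dialog container
      if (dialog) dialog.style.display = 'none';  // download started; close the form
    } else {
      alert('The download could not be generated; please check the form.');
    }
  }, 1000);
}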
(Edit) If you're not submitting to a different target, you might want to go ahead and do that -- to a hidden iframe on the same page. That (possibly combined with the above or other answers) might help you get the user experience you're looking for.
There's a whole number of really good reasons IE does this, and I'm sure it's not something anyone would argue with - so the main objective is to get around it somehow to make things better for your users.
Sometimes it's worth re-thinking how things are done. Perhaps disable the button, use JavaScript to check when all the fields are filled out, and fire off an AJAX request once they are. If the AJAX validation was successful, enable the button. This is but one suggestion; I'm sure there will be more...
Edit: more...
Do a simple submission (non-AJAX), and if the checks fail, send a page back rather than an attachment. The page sent back could contain all the information originally submitted (plus whatever error message applies) so the user doesn't need to fill out the entire form again. And I'm also sure there will be more ideas...
Edit: more...
I'm sure you've seen this type of thing before - and yes, it is an extra click (not ideal, but not hard)... an "if your download fails, click here" link. In this case, do it as you want to do it, but add a new link/button to the page when the AJAX returns, so that if the download failed, the user can submit the already-validated form from a "direct user action". And I'm sure I'll think of more (or someone else will)...
I have been fighting a similar issue for a while. In my case, posting to a hidden iframe didn't work if my web app was embedded in an iframe on another site (third-party cookie issues) unless our site was added to the Trusted Sites list.
I have found that I could break up the download into a POST and GET sequence. The POST returns a short-lived GUID that can be used in a GET request to initiate the download. The POST can do the form validation as well as return the GUID in a successful response. Once the client has the GUID, you can set the src property of a hidden iframe element to the download URL. The browser sees the 'Content-Disposition: attachment' header and gives the user a download ribbon to download the file.
So far it appears to work in all the latest browsers. Unfortunately, it requires you to modify your server-side API for downloading the file.
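For illustration, the client side of that POST/GET handshake might look something like this (the endpoints and field names are made up; the real ones depend on your API):

async function startDownload(form) {
  // POST: validate the form and stage the file; the server returns a GUID.
  const res = await fetch('/api/reports', { method: 'POST', body: new FormData(form) });
  if (!res.ok) {
    alert('Validation failed; please check the form.');
    return;
  }
  const { guid } = await res.json();              // short-lived download token

  // GET: point a hidden iframe at the download URL; the
  // Content-Disposition: attachment header triggers the browser's
  // download UI without navigating the page.
  let frame = document.getElementById('downloadFrame');
  if (!frame) {
    frame = document.createElement('iframe');
    frame.id = 'downloadFrame';
    frame.style.display = 'none';
    document.body.appendChild(frame);
  }
  frame.src = '/api/reports/' + encodeURIComponent(guid);
}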