Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am presently studying JavaScript coding practice on Codecademy. When testing code in Codecademy I use console.log to output strings to the built-in console inside Codecademy, and the code works fine. The first issue is this: when I test the same piece of code in the Dreamweaver text editor and open it in a browser, it prints nothing; I have to change it to document.write for it to work.
Next Part
I then read somewhere that using document.write in production code is not recommended! Can somebody explain this?
Next Part
I was at a free introductory JavaScript meeting a few days ago. At this meeting it was suggested that using something like prompt("La di da"); is not recommended in production work.
If anybody has the time and energy to explain why these things are built into JavaScript but not recommended for use, or why they do not work when used, I would be very grateful.
Codecademy emulates a console in their web application. Press F12 in most browsers and you'll get the developer tools, which include a built-in console; that is where console.* calls (including console.log()) send their output.
Like I said, Codecademy will have some JavaScript of their own which catches these calls and, to make their tutorials easier to use, outputs them somewhere easier for you to see.
Dreamweaver, however, won't be doing this, which is why you're not seeing anything.
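As a rough illustration only (this is not Codecademy's actual code, and the "output" element id is made up), a page can wrap console.log so its output is also mirrored into an on-page element:

// Minimal sketch: mirror console.log output into a <pre id="output"> element.
var originalLog = console.log;
console.log = function () {
    var args = Array.prototype.slice.call(arguments);
    document.getElementById('output').textContent += args.join(' ') + '\n';
    originalLog.apply(console, args); // still log to the real DevTools console
};

console.log('Hello from the page'); // now visible both on the page and in DevTools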
There is nothing inherently wrong with using document.write. However, it behaves differently depending on whether the page has finished loading or not, and there are generally friendlier and more useful alternatives, such as document.getElementById(), for targeting where to direct output.
For more info, see Why is document.write considered a "bad practice"?
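For example (a minimal sketch; the "result" id is an assumed placeholder element on the page):

// While the page is still being parsed, document.write inserts text where the
// script runs; called after the page has loaded, it replaces the whole document.
document.write('Hello');

// Friendlier alternative: write into a specific element instead
// (assumes the page contains <div id="result"></div>).
document.getElementById('result').textContent = 'Hello';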
As with document.write, there is nothing wrong with prompt(), confirm(), etc.; Stack Overflow itself uses confirm() on its websites. The downside is that they cannot be styled, and prompt(), for example, is restricted to asking for one thing at a time.
Modal windows, however (such as jQuery UI's dialog, Bootstrap's modal, or various lightbox plugins), can be.
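For example, here is a sketch contrasting the two (it assumes jQuery and jQuery UI are loaded, and the "confirm-box" element id is made up):

// Native dialog: simple, but cannot be styled and blocks the page.
if (confirm('Delete this item?')) {
    // ... delete it ...
}

// jQuery UI dialog: themeable and non-blocking
// (assumes <div id="confirm-box">Delete this item?</div> exists on the page).
$('#confirm-box').dialog({
    modal: true,
    buttons: {
        'Delete': function () { /* ... delete it ... */ $(this).dialog('close'); },
        'Cancel': function () { $(this).dialog('close'); }
    }
});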
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I'm currently studying web development. I don't know jQuery yet, but I have a little basic knowledge of JavaScript, HTML and CSS.
I've been looking at some examples on GitHub to improve my skills, and I've found this project:
https://github.com/stewilondanga/editables
I understand the theory perfectly, but I do not know how to put it into practice. I would appreciate any similar (simplified) examples, and some guidance on how to convert the code generated by the JavaScript into an HTML5 table.
Any example would be appreciated. Thanks for your attention!
First of all, jQuery does not generate code. It's a framework: you load it into a web page, and then you can use it from JavaScript code in that page.
I suggest you start by looking at the source of https://stewilondanga.github.io/editables/, if an editable table is what you need. There are more general frameworks for this, e.g. Aloha.
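If you just want the basic idea, here is a minimal sketch, independent of that project (the ids and data are made up), of an editable HTML5 table whose contents can be read back into JavaScript:

<!-- Cells are editable in place thanks to the contenteditable attribute. -->
<table id="scores">
  <tr><th>Name</th><th>Score</th></tr>
  <tr><td contenteditable="true">Alice</td><td contenteditable="true">10</td></tr>
  <tr><td contenteditable="true">Bob</td><td contenteditable="true">7</td></tr>
</table>

<script>
// Read the current table contents back into a plain JavaScript array of rows.
function tableToArray(id) {
    var rows = document.querySelectorAll('#' + id + ' tr');
    return Array.prototype.map.call(rows, function (row) {
        return Array.prototype.map.call(row.cells, function (cell) {
            return cell.textContent.trim();
        });
    });
}
console.log(tableToArray('scores'));
</script>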
To try it yourself, I'd suggest you bite the bullet and equip yourself with some kind of web server, be it on a server somewhere or on your local machine, so you can easily try out things like this, copy the sources, alter the code, etc., and quickly hit reload in your browser.
While it may seem easier to run a local server and point your browser at http://localhost/something, IMHO it also takes more tinkering to get browsers to embrace that fully, and you don't need the extra grief while you're already learning all those new concepts. If you want to tackle this seriously, consider getting a hosting service or a small VPS somewhere. If you don't know how to do that, get help with that first and get it out of the way; it'll save you much grief.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I'm very new to Python and programming in general, but I have enrolled in a few courses to improve my knowledge. It seems it's quite important to have a goal in mind when learning, and one of mine is to successfully scrape and manipulate sports data.
I would like to scrape the results from https://www.britishhorseracing.com/racing/results/ but it looks like it's dynamically loading data via JS:
There looks to be a LOT of data here: results going back roughly 20 years, plus multiple races for each racecourse on each day. From what I've read, Selenium and BeautifulSoup may offer some solutions, but before I start experimenting I wanted to check how realistic this goal is, whether it's even achievable with how the website structures the data, and to get some pointers on how to start.
Any help would be hugely appreciated.
Thanks
I'm not too familiar with Selenium or BeautifulSoup, but there are other JavaScript-based web scrapers. Some I know are NightmareJS, PhantomJS and ZombieJS (all horror related, haha). NightmareJS runs on an Electron (Chromium) instance, PhantomJS is a headless WebKit browser scriptable with JavaScript, and ZombieJS is a pure Node solution. I personally would recommend NightmareJS.
However, if you need to run NightmareJS on a server, that is a whole different ball park: NightmareJS requires a graphical environment. There are tools that allow NightmareJS to be run from a plain terminal session (for example a virtual framebuffer such as Xvfb). If you would rather avoid that, then you should be fine installing PhantomJS on the server and using that.
NightmareJS also has a scroll method that would probably trigger the rest of the data to load.
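Something along these lines (a sketch only; the wait times, scroll distance, and the assumption that scrolling triggers more results to load are guesses about how the BHA page behaves):

const Nightmare = require('nightmare');
const nightmare = Nightmare({ show: false });

nightmare
    .goto('https://www.britishhorseracing.com/racing/results/')
    .wait(3000)              // give the dynamically loaded content a moment to appear
    .scrollTo(10000, 0)      // scroll down to trigger any lazy-loaded results
    .wait(2000)
    .evaluate(function () {
        return document.body.innerHTML;   // grab the rendered markup for later parsing
    })
    .end()
    .then(function (html) {
        console.log(html.length, 'characters of rendered HTML');
    })
    .catch(function (err) {
        console.error('Scrape failed:', err);
    });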
Here is an issue found on GitHub; some solutions are provided there.
If you would still rather use something like Selenium with Python, I'm pretty sure there is documentation describing how to scroll a page.
I was originally going to suggest using the API call that the BHA site makes (you can see it in the browser's developer network tools); however, from a quick look it appears the API requires some authentication.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
Folks! I'm wondering where to start to understand why a JavaScript alert renders this way in Chrome. Could it be missing JavaScript in the PHP source code? Where should I start to understand and diagnose the problem?
My Script
"<script type='text/javascript'>
    $(document).ready(function () {
        $('#menu2').click(function (event) {
            event.preventDefault();
            alert('Text');
        });
    });
</script>";
Result in Chrome: (screenshot)
Result in Mozilla: (screenshot)
The reason the default pop-ups in Chrome don't look good is that they aren't considered very important, so there's no reason for the developers at Google to spend a lot of time designing and building beautiful ones. If you look at the pop-ups in most other browsers (to the best of my knowledge), they will look similar.
Your second screenshot looks like some sort of modified version (possibly Bootstrap?) and has absolutely nothing to do with the default pop-up.
So to answer your question: no, there is no missing JavaScript or PHP source code. It's just a design choice on Google's part to focus resources on more important areas.
If you want to change the look, you can't; it's part of the browser, not part of the website. But if you really need a better-looking one (and I would strongly recommend you look into different options, as pop-ups are bad UX), http://jqueryui.com/dialog/ will be able to help.
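For example, the click handler from your question could open a styled dialog instead of a native alert. This is only a sketch: it assumes jQuery and jQuery UI are loaded and that a <div id="message">Text</div> (a made-up id) exists on the page.

$(document).ready(function () {
    $('#menu2').click(function (event) {
        event.preventDefault();
        // jQuery UI dialog instead of alert(); its appearance can be themed with CSS.
        $('#message').dialog({
            modal: true,
            title: 'Notice',
            buttons: { OK: function () { $(this).dialog('close'); } }
        });
    });
});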
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have some limited experience with web scraping using tools like Beautiful Soup and Nokogiri.
My approach thus far when looking for information is to first inspect the HTML elements and CSS classes, then apply the appropriate selectors. While this works, slight differences or changes between websites render the code useless. Also, there have been situations where sites simply don't put useful class or id attributes on their HTML elements, so I once had to resort to the hacky approach of selecting on the element's style attribute.
How would one devise a scraper that works across multiple sites? I'm aware that the solution depends on the context, but is there a general good practice for doing it? I was actually asked this question in an interview once and had no idea.
I have tried googling, but much of what I found doesn't go past the basics, and I don't know where else to look. Any help would be appreciated.
It's not clear from your question what exactly you are trying to accomplish. If you want the main content of a page (as in an article), you should try Goose, which should give you a leg up. You can also try looking for conventional web-page metadata, such as meta tags.
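As an illustration of the meta-tag idea (shown here as browser-side JavaScript; the same selectors can be used from Beautiful Soup or Nokogiri, and not every site publishes these tags):

// Conventional metadata that many sites expose regardless of their page layout.
function readMeta(property) {
    var tag = document.querySelector('meta[property="' + property + '"]');
    return tag ? tag.getAttribute('content') : null;
}

var article = {
    title: readMeta('og:title') || document.title,
    description: readMeta('og:description'),
    image: readMeta('og:image')
};
console.log(article);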
Either way, you should remember that this is the World Wild Web: HTML is a very forgiving language, which lets people design pages that are very hard for a machine to read. Even big sites sometimes have their own proprietary breaks from convention, which forces exceptions in your code in order to read them. One site's logic may also conflict with conventional logic, or with that of other major sites.
This means that your code will probably consist of a lot of special cases and exceptions.
My suggestion is to keep sample pages from the sites you want to scrape, and have a unit test which iterates over them and verifies the scraping results. This way, each time you find a new quirk you can add it to your collection, and be certain that if a change you made broke some other site's scraping, you would know about it.
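A minimal sketch of that idea in Node (the directory layout, file names and the extract() module are hypothetical placeholders; the same pattern works just as well with pytest and Beautiful Soup):

// test/scrape.test.js -- run with: node test/scrape.test.js
const fs = require('fs');
const path = require('path');
const assert = require('assert');
const extract = require('../lib/extract');   // your site-specific scraping logic (hypothetical)

const fixtureDir = path.join(__dirname, 'fixtures');

// Each fixture is a saved HTML page plus an .expected.json file containing
// the values the scraper should produce for it.
fs.readdirSync(fixtureDir)
    .filter(function (name) { return name.endsWith('.html'); })
    .forEach(function (name) {
        const html = fs.readFileSync(path.join(fixtureDir, name), 'utf8');
        const expected = JSON.parse(
            fs.readFileSync(path.join(fixtureDir, name.replace('.html', '.expected.json')), 'utf8')
        );
        assert.deepStrictEqual(extract(html), expected, 'scraping regression in ' + name);
    });

console.log('All fixture pages scraped as expected.');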
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
I am trying to evaluate a very large JavaScript file. The file has been compressed with a JavaScript compressor and was hard to understand, so I decompressed it using the JSFormat package for the Sublime Text editor. The code is now easy to read; however, when I run it in the browser, it breaks. Why does this happen and what can I do to prevent it?
If the JavaScript in question runs in a web browser and works in Chrome, consider decompressing it using Chrome's built-in JavaScript beautifier, "Pretty print".
You can access the pretty-printing feature by navigating to the developer tools' script (Sources) panel and clicking the {} curly brackets in the bottom left corner; if they're blue, the feature is on. Chrome's formatting routines are probably more robust than the Sublime Text package's, so you might stand a better chance of getting working code out of it.
If by following the steps above you actually do manage to get working, cleanly formatted code, you can satisfy your curiosity by running the output of both code formatting engines through a diff program.