Get specific data from an external domain - JavaScript

http://www.color-hex.com/color-palette/35967
Using JavaScript/jQuery, I want to get the colors from the above color palette website. The only API I found seemed limited.
Answers suggesting APIs or other palette-sharing sites are accepted as well; APIs are preferred.
Edit:
I found a promising API: http://www.colourlovers.com/api
Though, being a bit of a noob, I do not know exactly how I'm supposed to use it without an explicit JavaScript example :'(

Based on the structure of the page:
$("table.table tr td a").each((i,e) => console.log($(e).html()))
should output the list of five hex colors representing the palette. However, without knowing how you plan to use the information, how to obtain the page's HTML in the first place remains the open question.
Have a look at examples using $.get() and $.parseHTML() to pull the page's data and then manipulate the resulting DOM.
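As a rough sketch of how those two calls could combine (assuming the palette page permits cross-origin requests, which in practice usually means CORS headers or a proxy on your own domain; the selector mirrors the one above):
$.get('http://www.color-hex.com/color-palette/35967', function (html) {
  // parse the raw markup into DOM nodes without executing its scripts
  var nodes = $.parseHTML(html);
  // collect the five hex codes from the palette table
  var colors = $(nodes).find('table.table tr td a').map(function () {
    return $(this).text().trim();
  }).get();
  console.log(colors);
});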

Is it faster to load data as json or from a html file?

The title phrases it badly, so here's a longer description:
I have an application that exports data in HTML format (500 rows, 20 columns).
It looks terrible, with lots of useless columns.
I want to use something like DataTables to make a more usable table, i.e. paging/sorting/filtering/hiding columns.
The option I'm trying first is to insert the table from the exported HTML file using jQuery's .load() function, then loop through the table deleting/modifying columns.
This seems very slow (I suspect my looping and searching), so I'm looking for improvements.
One idea is to pre-convert my exported HTML file to JSON (using Notepad++ macros or something like that) and then build the table I want from that JSON file.
Any opinions on whether I can expect a large performance boost, or potential problems to look out for?
Many thanks / Colm
JSON should be faster: once it's loaded it's ready to go, without all of the text parsing you would need to do with an HTML file. There are also lots of jQuery add-ons that make the data easy to work with once it is in JSON.
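As a minimal sketch of that route (the file name, table id, and column names are placeholders, and this uses the current DataTables API):
$.getJSON('export.json', function (rows) {
  $('#report').DataTable({
    data: rows, // e.g. [{ name: '...', date: '...' }, ...]
    columns: [
      { data: 'name', title: 'Name' },
      { data: 'date', title: 'Date' }
    ]
  });
});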
I think this is not about which loads data faster but about which solution is better for your problem. DataTables is very flexible and can load from different sources. Take a look at "Data Sources" and "Server-side processing" in the examples: http://datatables.net/examples/
DataTables mostly uses the JSON format. To process your data you need to find the best approach: convert your exported HTML file up front, process the file with JavaScript to convert the data (jQuery can help you here), etc.
This page gives some real-world examples of loading data as JSON vs. data in an HTML table. Fairly conclusive; see the post from sd_zuo from July 2010: a fourfold increase in speed loading from JSON and then building just the table that you want to display.
Granted, the page deals specifically with the slowness of the innerHTML function in IE8, but I think I'll give it a go in JSON and see how it compares across a couple of browsers.
P.S. This page gives good advice on fast creation of HTML using raw JavaScript, only using jQuery to insert one full row at a time, as sketched below.
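A rough illustration of that string-building technique (the row data and table id are hypothetical):
var cells = ['Alice', '2010-07-01', 'OK']; // hypothetical row data
var html = '<tr>';
for (var i = 0; i < cells.length; i++) {
  html += '<td>' + cells[i] + '</td>'; // build the row as one string
}
html += '</tr>';
$('#report tbody').append(html); // a single jQuery insertion per full row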

Retrieve posts based on labels in Blogger using GData

Is it possible to use the GData JavaScript API, or any other JavaScript API, to retrieve the list of blog posts based on labels?
My usage case:
Each blog post has a label that marks its category. Some posts are labelled with 'Summary' plus the category they belong to.
I want to be able to display the summary of MyCategory (label) on the label's page, e.g. http://myblog.blogspot.com/search/label/MyCategory
Is it possible to retrieve the list of blog posts matching 'Summary' and 'MyCategory'?
UPDATE:
More details:
it is a blog I have edit access to
the JS can be placed on Google Sites or inside the blog's HTML
the blog has 18k+ posts, so listing all posts and filtering is not an option
myblog.blogspot referred to any Blogger blog, not an actual one; I was just talking about Blogger's label-based filter.
I've read and re-read this question and the blogspot link a couple of times. It's difficult to understand.
I think it would help if you gave some more information:
where do you want to place this JavaScript? I mean: is it going to be placed on the same blog? I'm asking because this determines cross-site security requirements.
I have a strong feeling this is actually a question where you want a cross-domain request (loading data from a different domain/server (blogspot.com) that you do not control); otherwise you'd be playing with 'Access-Control-Allow-Origin' on the server side.
Will this script be located in an online or a local (X)HTML source?
Could you please provide a more elaborate example (or sample) of an existing list that contains these labels? Or do you want to crawl a blog like a spider/index robot?
If the above assumptions are correct, the first part of your problem is retrieving cross-domain data (which is hard nowadays using simple solutions like XMLHttpRequest, aka AJAX).
You could then look at your own server-side scripts (PHP) to fetch this data and send it (pre-parsed) to your browser application (effectively a simple proxy located on your own domain).
I have also heard of using a Java object (or Silverlight, or Flash, which nowadays also suffers from cross-domain security restrictions) to get around this modern-day cross-domain security.
You could then embed one or more of these objects (which retrieve the source) and communicate with them through JavaScript. A variation of this technique is also often used for cross-browser multiple file uploads.
There is a big chance there is already a solution (object) to this part of your problem here on Stack Overflow.
If you fix this first part, the second part simply comes down to parsing (with a regex, for example) the retrieved 'label' data and building new links from it to fetch the 'summary' content you were after, using the same data-retrieval technique that got you the labels list in the first place.
Is this what you are after?
UPDATE:
For pure JavaScript/JSON there is an excellent topic here on SO.
Should you go with Java, you could look at this.
In PHP you would use file_get_contents() or file_get_html(). See also this topic on SO.
UPDATE 2: the accepted ANSWER (out of the comments below):
On Google's Blogger 2.0 developer docs you can find: RetrievingWithQuery.
Quote:
/category
Specifies categories (also known as labels) to filter the feed results. For example, blogger.com/feeds/blogID/posts/default/-/Fritz/Laurie returns entries with both the labels Fritz and Laurie.
You can also find a working piece of JavaScript that uses this technique over here: list-recent-posts-by-label
Now you can simply continue 'AJAX'ing your summaries out of this filtered list.
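For the asker's case, a minimal sketch of that query (the blog address and callback name are placeholders; the feed is requested as JSON-in-script to sidestep the cross-domain restrictions discussed above):
function handleFeed(json) {
  var entries = json.feed.entry || [];
  for (var i = 0; i < entries.length; i++) {
    console.log(entries[i].title.$t); // each entry also carries its links and summary
  }
}
// request posts carrying BOTH the 'Summary' and 'MyCategory' labels
var s = document.createElement('script');
s.src = 'http://myblog.blogspot.com/feeds/posts/default/-/Summary/MyCategory'
      + '?alt=json-in-script&callback=handleFeed';
document.body.appendChild(s);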
Good luck!

Does Google index content generated using JavaScript?

Does Google index content generated using JavaScript? I'm using this function to write the text:
document.write(String.fromCharCode(...))
Something like this:
document.write(String.fromCharCode(60,112,62,65,100,100,32,100,101,115,99,114,105,112,116,105,111,110,32,102,111,114,32,121,111,117,114,32,65,114,116,105,99,108,101,32,102,114,111,109,32,104,101,114,101,46,60,47,112,62,10));
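// the character codes above decode to: <p>Add description for your Article from here.</p>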
And if Google doesn't index JavaScript, will it regard this code as malicious, since this function is also used to generate malicious JavaScript? Of course I'm not using it maliciously.
Thanks in advance.
Yes/No
Google WILL index content if you give it some hints. For example, you'd need to use the #! URL format, and your URLs need to resolve WITHOUT the #!, like Twitter:
http://twitter.com/#!/oscargodson and http://twitter.com/oscargodson both work. Google sees a link to the first and forwards on to the second.
For random bits of JS, though? Most likely not. Google doesn't give out specific details of its algorithm. It quietly switched to indexing PDFs, Flash, Docs, and more after previously saying it didn't. With the rise of JS, I wouldn't be surprised if it officially did so sometime soon.
Here are Google's docs on it:
http://code.google.com/web/ajaxcrawling/
Yes. As of May 2014, they publicized their move:
http://googlewebmastercentral.blogspot.com.es/2014/05/understanding-web-pages-better.html

Handling large grid datasets in JavaScript

What are some of the better solutions for handling large datasets (100K rows) on the client with JavaScript? In particular, if you have multi-column sort and search capabilities, how do you handle fetching (and pre-fetching) the data, client-side model binding (for display), and caching the data?
I would imagine a good solution would do some thoughtful work in the background. For instance, if the table initially displays N items, it might fetch 2N items, return the data to the user, and then fetch the next 2N items in the background (even if the user hasn't requested them). As the user makes search/sort changes, it would throw that cache out (or maybe keep the initial base case cached) and repeat the same process.
Can you share the best solutions you have seen?
Thanks
Use a jQuery table plugin like DataTables: http://datatables.net/
It supports server-side processing for sorting, filtering, and paging. And it includes pipelining support to prefetch the next x pages of records: http://www.datatables.net/examples/server_side/pipeline.html
Actually the DataTables plugin works 4 different ways:
1. With an HTML table, so you could send down a bunch of HTML and then have all the sorting, filtering, and paging work client-side.
2. With a JavaScript array, so you could send down a 2D array and let it create the table from there.
3. Ajax source - which is not really applicable to you.
4. Server-side, where you send data in JSON format to an empty table and let DataTables take it from there, as sketched below.
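A minimal sketch of that fourth mode, using the current DataTables API (the endpoint, table id, and column names are placeholders):
$('#grid').DataTable({
  serverSide: true,  // sorting, filtering, and paging happen on the server
  ajax: '/api/rows', // endpoint returning the JSON envelope DataTables expects
  columns: [
    { data: 'id' },
    { data: 'name' }
  ]
});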
SlickGrid does exactly what you're looking for. (Demo)
Using the AJAX data store, SlickGrid can handle millions of rows without flinching.
Since you tagged this with Ext JS, I'll point you to Ext.ux.LiveGrid if you haven't already seen it. The source is available, so you might have a look and see how they've addressed this issue. This is a popular and widely-used extension in the Ext JS world.
With that said, I personally think (virtually) loading that much data is useless as a user experience. Manually pulling a scrollbar around (jumping hundreds of records per pixel) is a far inferior experience to simply typing what you want. I'd much prefer some robust filtering/searching instead of presenting that much data to the user.
What if you went to Google and instead of a search box, it just loaded the entire internet into one long virtual list that you had to scroll through to find your site... :)
It depends on how the data will be used.
For a large dataset where the browser's Find function was adequate, just returning a straight HTML table was effective. It takes a while to load, but the display is responsive on older, slower clients, and you never have to worry about it breaking.
When the client did the sorting and searching, and we weren't showing the entire table at once, I had the server send tab-delimited tables through XMLHttpRequest, parsed them in the browser with rows = responseText.split('\n'), and updated the display with repeated calls to $('node').innerHTML = 'blah'. The JS engine can store long strings pretty efficiently. That ran a lot faster on the client than showing, hiding, and rearranging DOM nodes; creating and destroying DOM nodes on the fly turned out to be really slow. Splitting each line into fields on demand seems to work; I haven't experimented with that degree of freedom.
I've never tried the obvious pre-fetch & background trick, because these other methods worked well enough.
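A condensed sketch of that technique (the URL and element id are placeholders):
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var rows = xhr.responseText.split('\n');
    var html = '';
    for (var i = 0; i < rows.length; i++) {
      if (!rows[i]) continue; // skip the trailing empty line
      // split each line into fields on demand
      html += '<tr><td>' + rows[i].split('\t').join('</td><td>') + '</td></tr>';
    }
    // one innerHTML assignment instead of many DOM node operations
    document.getElementById('grid-body').innerHTML = html;
  }
};
xhr.open('GET', '/data.tsv', true);
xhr.send();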
Check out this comprehensive list of data grids and spreadsheets.
For filtering/sorting/pagination purposes you may be interested in the excellent Handsontable, or in DataTables as a free alternative.
If you simply need to display a huge list without any additional features, Clusterize.js should be sufficient.

XML and JavaScript: the right tool for the job?

For years I've been reading about XML and I have just not quite grokked it. Most documents I see about it simply explain the syntax (extraordinarily easy to understand) and say that it's portable: I've worked with Unix my whole life, so the idea of putting things in plain text to be portable is hardly revolutionary. My specific question is that I have a document (my CV) that I would like to present to web visitors in several formats: as a webpage, as a PDF, or even as plain text. Are XML and JavaScript the right approach to take?
What I need is for the document to be easily editable, with conversion and general upkeep kept easy. For example, when I publish a paper, I'd like it to take less than five minutes to add the info and have everything follow automatically from there.
Give me your opinions: I also use LaTeX compulsively, so my current approach has been just to keep my CV in LaTeX and convert it to a web page using LaTeXML. However, I sort of have the feeling that, with everybody jumping up and down about XML and JavaScript, there might be something good to learn here.
I would also like to simplify maintaining my homepage by not duplicating the same footer on every single page that I set up.
Thanks,
Joel
Edit: I'll also take any book recommendations!
I think this is a slight misunderstanding of the combination of JavaScript and XML.
XML, in and of itself, is an excellent means of representing data. It's largely human-readable, and easily parsed with libraries in nearly every programming language. That is the main benefit of XML.
Using XML with JavaScript is certainly a solution, but I think it's a matter of the question you're asking. JavaScript can parse XML, and allow you to obtain and manipulate data from your XML document. If you want to grab data from a server without reloading your HTML page (synchronously or asynchronously), then using JavaScript and XML is a valid way to do that.
If you want to, however, display your XML as a webpage, you would likely be better off using XML and XSLT [wikipedia], or perhaps PHP and XPath, to transform the document into browser-readable HTML. On the other hand, you could use nearly any language to convert the XML to a plain-text file, rich text file, or store it in a normalized database.
To sum up, XML is a great way to store data, because it can be used in so many different ways, and by so many different languages. It's an answer to many different questions; you just have to figure out which questions you're asking.
To elaborate on my comment
The transformation to whatever output you desire depends on how you store your CV on your server and whether you have the possibility to process it there. If you store it in XML, you can transform it to the desired (binary) output using server-based tools; for PHP that would be PDF and Word (on a Windows server platform), for example. XML is interesting from a markup point of view since it makes clear where the table of contents, headers, lists of experience and so on are to be found.
JavaScript cannot transform something into PDF or Word; that has to be done on the server. What JavaScript can do is fetch text from the server in XML or JSON using AJAX and manipulate it into what the user sees on the screen. For XML that can be done with XSL(T) too. If you want to use JavaScript for self-education purposes, JSON is very nice, since it is in my opinion more readable than XML and it creates a populated JavaScript object with the least work.
Footer in JavaScript: in the page, include
<script type="text/javascript" src="footer.js"></script>
and in footer.js you can, for example, do
var footerText = 'Here goes whatever you want';
document.write(footerText);
Comparison between XML and JSON
I've had a webpage with browser-side XSLT transformation up and running for years. It's a playground, and only some of the words are in German. See how easy it is to build this at heese.net/test: you can switch between "Beispiel" (= example) and XSL, and the source code of the page in the iframe is the XML. You can do this server-side with 3 lines of PHP code.
On JavaScript: you can use it alongside XSLT, and I show this on my site, but the two can't interact. First the XSLT builds an HTML page out of your XML data, and only after this job is completely done does the JavaScript in the resulting HTML document begin to work.
Parsing XML with Javascript is a different task.
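For completeness, a minimal sketch of such a browser-side transformation (cv.xml and cv.xsl are placeholder file names, assumed to be served from the same origin):
Promise.all([
  fetch('cv.xml').then(function (r) { return r.text(); }),
  fetch('cv.xsl').then(function (r) { return r.text(); })
]).then(function (texts) {
  var parser = new DOMParser();
  var xml = parser.parseFromString(texts[0], 'application/xml');
  var xsl = parser.parseFromString(texts[1], 'application/xml');
  var proc = new XSLTProcessor(); // the browser's XSLT engine
  proc.importStylesheet(xsl);
  // transform the XML and append the generated HTML to the page
  document.body.appendChild(proc.transformToFragment(xml, document));
});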
