I've been playing with the multiple-address lookup API from blockchain.info (documented here: https://blockchain.info/api/blockchain_api). I had my code working earlier in the day, but bizarrely it has stopped.
The eventual goal is to write a little jQuery library that searches the DOM for bitcoin addresses stored as data attributes, inserts the final balance into each element, and polls to keep the page updated.
The original problem I ran into while developing it was that it's a cross-origin (CORS) ajax request. I adjusted the query per the blockchain.info API documents and added cors=true, and it then seemed to work fine, but now it doesn't want to work at all again. I don't see how changing computers would affect this kind of request.
Here's my code on JSFiddle: http://jsfiddle.net/SlyFoxy12/9mr7L/7/
My primary code is:
(function ($) {
    var methods = {
        init: function (data, options) {
            // put your init logic here.
        },
        query_addresses: function (addresses) {
            var addresses_implode = addresses.join("|");
            $.getJSON("http://blockchain.info/multiaddr?cors=true&active=" + addresses_implode, function (data) {
                $.each(data.addresses, function (index) {
                    $('#output').append(" " + data.addresses[index].final_balance);
                });
            });
        }
    };
    $.fn.bitstrap = function () {
        var addresses = new Array();
        $('[data-xbt-address]').each(function () {
            $(this).text($(this).data('xbtAddress'));
            addresses.push($(this).data('xbtAddress'));
        });
        methods.query_addresses(addresses);
    }
}(jQuery));

$().ready(function () {
    $().bitstrap();
});
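For anyone reproducing this: the selectors above assume elements that carry an address in a data-xbt-address attribute (one element per address) and an element with id="output" where query_addresses() appends the balances; the actual page markup isn't shown here.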
OK, it turns out it's somehow an issue with Chrome. I've tried it in Safari and it works again, so it must have been a different version of Chrome on the other computer I used.
There seems to be more info about it here: https://code.google.com/p/chromium/issues/detail?id=104920
I'm building a very simple scraper to get the 'now playing' info from an online radio station I like to listen to.
It's stored in a simple p element on their site:
[screenshot: the data's location in the page HTML, inside a p element with class js-playing-now]
Using the standard apify/web-scraper I run into a strange issue: the scraping sometimes works and sometimes doesn't, using this code:
async function pageFunction(context) {
    const { request, log, jQuery } = context;
    const $ = jQuery;
    const nowPlaying = $('p.js-playing-now').text();
    return {
        nowPlaying
    };
}
If the scraper works I get this result:
[{"nowPlaying": "Hangover Hotline - hosted by Lamebrane"}]
But if it doesn't I get this:
[{"nowPlaying": ""}]
And there is only a 5 minute difference between the two scrapes. The website doesn't change, the data is always presented in the same way. I tried checking all the boxes to circumvent security and different mixes of options (Use Chrome, Use Stealth, Ignore SSL errors, Ignore CORS and CSP) but that doesn't seem to fix it unfortunately.
Any suggestions on how I can get this scraping task to constantly return the data I need?
It would be great if you could attach the URL; it would help me find the problem.
With the information you provided, I guess that the data you want is loaded asynchronously. You can use the context.waitFor() function.
async function pageFunction(context) {
    const { request, log, jQuery } = context;
    const $ = jQuery;
    await context.waitFor(() => !!$('p.js-playing-now').text());
    const nowPlaying = $('p.js-playing-now').text();
    return {
        nowPlaying
    };
}
You can pass a function to wait for, and it will wait until the result of that function is true. You can check the docs.
I have been using this JSON ticker for the last month. It has been working like a charm, but today it stopped working; does anyone know what could have gone wrong here?
$(function () {
    startRefresh();
});

function startRefresh() {
    setTimeout(startRefresh, 10000);
    var turl = 'https://btc-e.com/api/2/ltc_btc/ticker';
    $.getJSON('http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20json%20where%20url%3D%22' + encodeURIComponent(turl) + '%22&format=json', function (data) {
        jQuery('#ticker').html(data['query'].results.ticker.last);
        jQuery('#ticker').append(' BTC');
    });
}
http://jsfiddle.net/marcetin/9FHp3/4/
Here is the same example but with the Cryptsy API, and it works well:
http://jsfiddle.net/marcetin/P2t9R/2/
I checked out https://btc-e.com/api/2/ltc_btc/ticker and got JSON back, so the issue is not with that site.
I checked out your code, and aside from being a little dirty, there was nothing that would keep it from pulling that service.
So the issue seems to be on Yahoo's side. Perhaps that API is no longer available through Yahoo.
I have cleaned up (and commented) your code: http://jsfiddle.net/9FHp3/27/
// Function for pulling the JSON
function startRefresh() {
    // This is the API URL
    var turl = 'https://btc-e.com/api/2/ltc_btc/ticker';
    // This requests the API URL through Yahoo's YQL service
    $.getJSON('http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20json%20where%20url%3D%22' + encodeURIComponent(turl) + '%22&format=json', function (data) {
        // Write the latest price to the page
        $('#ticker').html(data['query'].results.ticker.last + ' BTC');
    });
}
// Do the initial pull
startRefresh();
// Refresh every 10000 ms
setInterval(startRefresh, 10000);
However, you should really be pulling REST APIs from server-side code such as PHP. Unless they are offered via JSONP or CORS, they are not really intended for cross-domain client-side scripts.
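As an illustration of that suggestion (not part of the original answer; the answer mentions PHP, but any backend works the same way), here is a minimal sketch of a server-side proxy in Node.js that fetches the ticker itself and re-serves it from your own origin:

// proxy.js - hypothetical minimal proxy; fetches the ticker server-side,
// where the browser's cross-origin restrictions do not apply.
var http = require('http');
var https = require('https');

http.createServer(function (req, res) {
    https.get('https://btc-e.com/api/2/ltc_btc/ticker', function (apiRes) {
        var body = '';
        apiRes.on('data', function (chunk) { body += chunk; });
        apiRes.on('end', function () {
            // Re-serve the JSON, optionally with a CORS header for other origins
            res.writeHead(200, {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            });
            res.end(body);
        });
    });
}).listen(8080);

The page would then call $.getJSON('http://localhost:8080/', ...) (or a path on your own domain) instead of going through YQL.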
I hope this helps!
I am writing a notification application based on DHTMLxScheduler.
I would like to know more about doing CRUD with IndexedDB for DHTMLxScheduler.
As far as I know, the following website shows an excellent example:
http://www.codeproject.com/Articles/594924/Build-Calendar-App-for-Windows-8-with-dhtmlxSchedu
However, the data store is not persistent, and the application freezes during multi-touch events.
Can anyone help point me towards the code needed for CRUD against its default IndexedDB store using the following handlers?
scheduler.attachEvent("onEventDeleted",
function(event_id,event_object){
//add event to the data store
});
scheduler.attachEvent("onEventChanged", function(event_id, event_object){
//update event in the data store
});
scheduler.attachEvent("onEventAdded", function(event_id, event_object){
//delete event from the data store
});
The following example shows how to integrate with IndexedDB:
http://www.dotnetcurry.com/ShowArticle.aspx?ID=868
However, it uses a different framework, while the original scheduler sample always uses callbacks to detect changes.
Thanks a lot for your help!
IndexedDB takes a relatively large amount of code for CRUD operations, so the tutorial was deliberately simplified in order not to overload it with implementation details.
There is also a complete example with working CRUD; check the '/samples/CalendarApp/' folder of this package:
http://dhtmlx.com/x/download/regular/dhtmlxScheduler_windows.zip
As for the multi-touch issue, it will most probably be fixed soon. The current version of the package is based on dhtmlxScheduler 3.7; we are going to update it to 4.0, which has improvements for Windows-based touch devices.
And here is an example of database handling, similar to how it's done in the app from the dhtmlx site.
// connect to IndexedDB and fire the callback on success
function connect(callback) {
    try {
        var db = null;
        var req = window.indexedDB.open("SchedulerApp", 1);
        req.onsuccess = function (ev) {
            db = ev.target.result;
            if (callback) // fire a callback on connect
                callback(db);
        };
        req.onupgradeneeded = function (ev) {
            // The event is fired when connecting to a new database, or on version change.
            // This is the only place for defining the database structure (object stores).
            var db = ev.target.result;
            if (!db.objectStoreNames.contains("events")) {
                // create the data store, set 'id' as an auto-incrementing key
                var events = db.createObjectStore("events", { keyPath: "id", autoIncrement: true });
            }
        };
    } catch (e) {
        // connection failed, nothing to do
    }
}
//add js object to the database and fire callback on success
function insertEvent(data, callback) {
    connect(function (db) {
        var store = db.transaction("events", "readwrite").objectStore("events");
        var updated = store.add(data);
        updated.onsuccess = function (res) {
            callback(res.target.result);
        };
    });
}
// use all defined above with the dhtmlxScheduler
// when user adds an event into the scheduler - it will be saved to the database
scheduler.attachEvent("onEventAdded", function (id) {
    var ev = copyEvent(scheduler.getEvent(id)); //where copyEvent is a helper function for deep copying
    delete ev.id; //real id will be assigned by the database
    insertEvent(ev, function (newId) {
        scheduler.changeEventId(id, newId); //update event id in the app
    });
    return true;
});
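The update and delete cases the question asks about are not covered above; a sketch following the same pattern (reusing the connect() helper, and assuming the scheduler event id matches the database key after changeEventId) might look like this:

// update a js object in the database (assumes data.id is the store key)
function updateEvent(data, callback) {
    connect(function (db) {
        var store = db.transaction("events", "readwrite").objectStore("events");
        var updated = store.put(data); // put() overwrites the record with the same key
        updated.onsuccess = function (res) {
            if (callback) callback(res.target.result);
        };
    });
}

// delete a js object from the database by key
function deleteEvent(id, callback) {
    connect(function (db) {
        var store = db.transaction("events", "readwrite").objectStore("events");
        var removed = store.delete(id);
        removed.onsuccess = function () {
            if (callback) callback();
        };
    });
}

scheduler.attachEvent("onEventChanged", function (id) {
    updateEvent(copyEvent(scheduler.getEvent(id)));
    return true;
});

scheduler.attachEvent("onEventDeleted", function (id) {
    deleteEvent(id);
    return true;
});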
However, I can't guarantee it will work right away, as I can't test the code at the moment.
I'd also suggest you check these articles on MSDN.
FYI, I work for DHTMLX
I'm developing an add-on for the first time. It puts a little widget in the status bar that displays the number of unread Google Reader items. To accommodate this, the add-on process queries the Google Reader API every minute and passes the response to the widget. When I run cfx test I get this error:
Error: The page has been destroyed and can no longer be used.
I made sure to catch the widget's detach event and stop the refresh timer in response, but I'm still seeing the error. What am I doing wrong? Here's the relevant code:
// main.js - Main entry point
const tabs = require('tabs');
const widgets = require('widget');
const data = require('self').data;
const timers = require("timers");
const Request = require("request").Request;

function refreshUnreadCount() {
    // Put in Google Reader API request
    Request({
        url: "https://www.google.com/reader/api/0/unread-count?output=json",
        onComplete: function (response) {
            // Ignore response if we encountered a 404 (e.g. user isn't logged in)
            // or a different HTTP error.
            // TODO: Can I make this work when third-party cookies are disabled?
            if (response.status == 200) {
                monitorWidget.postMessage(response.json);
            } else {
                monitorWidget.postMessage(null);
            }
        }
    }).get();
}

var monitorWidget = widgets.Widget({
    // Mandatory widget ID string
    id: "greader-monitor",
    // A required string description of the widget used for
    // accessibility, title bars, and error reporting.
    label: "GReader Monitor",
    contentURL: data.url("widget.html"),
    contentScriptFile: [data.url("jquery-1.7.2.min.js"), data.url("widget.js")],
    onClick: function () {
        // Open Google Reader when the widget is clicked.
        tabs.open("https://www.google.com/reader/view/");
    },
    onAttach: function (worker) {
        // If the widget's inner width changes, reflect that in the GUI
        worker.port.on("widthReported", function (newWidth) {
            worker.width = newWidth;
        });
        var refreshTimer = timers.setInterval(refreshUnreadCount, 60000);
        // If the monitor widget is destroyed, make sure the timer gets cancelled.
        worker.on("detach", function () {
            timers.clearInterval(refreshTimer);
        });
        refreshUnreadCount();
    }
});
// widget.js - Status bar widget script

// Every so often, we'll receive the updated item feed. It's our job
// to parse it.
self.on("message", function (json) {
    if (json == null) {
        $("span#counter").attr("class", "");
        $("span#counter").text("N/A");
    } else {
        var newTotal = 0;
        for (var item in json.unreadcounts) {
            newTotal += json.unreadcounts[item].count;
        }
        // Since the cumulative reading list count is a separate part of the
        // unread count info, we have to divide the total by 2.
        newTotal /= 2;
        $("span#counter").text(newTotal);
        // Update style
        if (newTotal > 0)
            $("span#counter").attr("class", "newitems");
        else
            $("span#counter").attr("class", "");
    }
    // Reports the current width of the widget
    self.port.emit("widthReported", $("div#widget").width());
});
Edit: I've uploaded the project in its entirety to this GitHub repository.
I think if you use the method monitorWidget.port.emit("widthReported", response.json); you can fire the event. It is the second way to communicate between the add-on script and the content script.
Reference for the port communication
Reference for the communication with postMessage
I guess that this message comes up when you call monitorWidget.postMessage() in refreshUnreadCount(). The obvious cause for it would be: while you make sure to call refreshUnreadCount() only when the worker is still active, this function will do an asynchronous request which might take a while. So by the time this request completes the worker might be destroyed already.
One solution would be to pass the worker as a parameter to refreshUnreadCount(). It could then add its own detach listener (remove it when the request is done) and ignore the response if the worker was detached while the request was performed.
function refreshUnreadCount(worker) {
    var detached = false;
    function onDetach() {
        detached = true;
    }
    worker.on("detach", onDetach);

    Request({
        ...
        onComplete: function (response) {
            worker.removeListener("detach", onDetach);
            if (detached)
                return; // Nothing to update with our data

            ...
        }
    }).get();
}
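For completeness (this wiring is not spelled out in the answer), the onAttach handler from the question would then pass the worker along, for example via a closure, roughly like this:

// In main.js (sketch): other widget options stay as in the question
var monitorWidget = widgets.Widget({
    // ... same widget options as above ...
    onAttach: function (worker) {
        worker.port.on("widthReported", function (newWidth) {
            worker.width = newWidth;
        });
        // Wrap the call so refreshUnreadCount() receives this window's worker
        var refreshTimer = timers.setInterval(function () {
            refreshUnreadCount(worker);
        }, 60000);
        worker.on("detach", function () {
            timers.clearInterval(refreshTimer);
        });
        refreshUnreadCount(worker);
    }
});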
Then again, using try..catch to detect this situation and suppress the error would probably be simpler - but not exactly a clean solution.
I've just seen your message on IRC, thanks for reporting your issues.
You are facing an internal bug in the SDK. I've opened a bug about that here.
You should definitely keep the first version of your code, where you send messages to the widget, i.e. widget.postMessage (instead of worker.postMessage). Then we will have to fix the bug I linked to in order to make your code just work!
I also suggest you move the setInterval call to the top level; otherwise you will start multiple intervals and requests, one per window, since the attach event is fired for each new Firefox window.
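A rough sketch of that suggestion (assuming the rest of main.js stays as posted above):

// Start a single top-level timer instead of one per window
var refreshTimer = timers.setInterval(refreshUnreadCount, 60000);

var monitorWidget = widgets.Widget({
    // ... same widget options as above ...
    onAttach: function (worker) {
        worker.port.on("widthReported", function (newWidth) {
            worker.width = newWidth;
        });
        refreshUnreadCount();
    }
});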
Or, not precisely "executing," but updating a function that exists before the response with a function returned in the response.
Step 1
I have an HTML5 page that describes a location.
On the server side this page includes a ColdFusion file, "MapFunction.cfm" (which is used for consistent mapping across the whole site).
MapFunction.cfm outputs a javascript function "loadMap" mixed with the HTML.
loadMap() contains all the javascript needed to place a Bing map of the location on the page.
Some javascript in a separate js file actually calls loadMap().
This works when the page is first loaded.
Step 2
Search & results stuff is all fine too. Nothing needs to be done with the map here.
Step 3
When a search result is clicked, the result detail is loaded asynchronously via a jQuery $.get() request.
It returns mixed HTML and javascript which I use jQuery to traverse through.
With the jQuery objects I update specific areas of the page to show different details.
One of the areas I need to update is the map. That part isn't working.
What I'm working with is mixed HTML and Javascript that is identical in both Step 1 and Step 3:
<section id="mod-map" class="module mod-map">
<header class="mod-head">
Map <span class="arrow"></span>
</header>
<div id="map" class="mod-body">
<div id="cmMap" style="position:relative;width:369px;height:303px;"></div>
<script type="text/javascript" id="cfLoadMap">
// ...some global variables are defined to use...
function loadMap()
{
// ...Bing/Virtual Earth Map Stuff...
// This part here is unique to each location's detail
var propertypoint = new VELatLong(parseFloat(36.707756),parseFloat(-78.74204));
// ...More Bing/Virtual Earth Map Stuff...
// This part here is unique to each location's detail
var label = "<div class=\"wrapper\"><img onerror=\"replaceImage(this);\" src=\"noimage.jpg\" width=\"100\" class=\"thumb\" alt=\"\" /><div class=\"caption\"><br />City<br /> State, 12345</div></div>";
// ...More Bing/Virtual Earth Map Stuff...
}
</script>
</div>
</section>
Now, in Step 3 loadMap() does get called again, but it just refreshes the map to the same location. The loadMap() function as the browser knows it doesn't get updated with the one retrieved via ajax.
That updated block of mixed HTML & javascript above does get successfully added to the page after each ajax call. It is placed right where it originally is, but with different coordinates and captions where indicated by the comments above. The ajax callback looks like (slightly simplified):
$.get(urlToLoad, {}, function (data, status, request) {
    var newData = $(innerShiv(data, false)),
        newModules = newData.find(".module");
    // (innerShiv is used to make HTML5 tags work in IE. It's possible I'm going a little
    // overboard with using it, but I had a lot of issues with IE. :-))
    newModules.each(function (i) {
        var thisId = "#" + $(this).attr("id"),
            thisBody = $(this).find(".mod-body").html(),
            toReplaceAll = $("body").find(thisId),
            toReplaceBody = toReplaceAll.find(".mod-body");
        // These variables are used to choose how to add content in different ways based on thisId.
        // Below is the one the map area is subject to.
        toReplaceBody.html(innerShiv(thisBody));
    }); // each

    // Various things including loadMap() get called/re-initiated/etc. here
}, "html"); // get
This works in Firefox 3.6, but nowhere else I've tested (Opera 11, IE 7, Chrome 8).
I have done this before in a similar situation with dynamically PHP-generated JavaScript written to a separate js file; $.getScript works great there. But this is mixed into the HTML of the ajax response.
I've been looking and have found and tried the following (among other things):
Attempted Solutions
1. var myScript = new Function($('script#cfLoadMap', data).text()); myScript();
2. eval(newData.text());
3. eval(newData.find("#cfLoadMap").text());
4. $("head").append(newData.find("#cfLoadMap"));
None of these so far seem to be doing any good.
I know there are a few other ways this could theoretically be done. But as it stands at the moment, I don't have the ability to change much of anything except what I do with the mixed HTML & JavaScript response. I need a solution where:
The details will be updated via ajax.
The javascript will be mixed in with the HTML.
The javascript will be a javascript function generated dynamically by ColdFusion.
Very similar questions have been asked and resolved before, so I hope this can be done. However, none of the solutions I've found are working for me. I might be making a mistake or missing something, or maybe it's just different when it's a function?
Any help would be appreciated.
Most likely it's that functions in JavaScript can be declared in non-global scope, so when you're inserting the <script> tag, jQuery is evaling it, but not replacing the original function (as you noticed).
A fix for this would be to change how you declare the function from this:
function loadMap()
{
    ...
}
to this:
window.loadMap = function loadMap() {
    ...
};
That way the top-level loadMap will always be the latest one that came down from the server.
You may want to consider not modifying the client code this way as it can make debugging trickier, but that's totally up to you. Hopefully this answer works for you either way.
It suddenly started working with the following code:
$.get(urlToLoad, {}, function (data, status, request) {
    var safeData = $(innerShiv(data, false)),
        newModules = safeData.find(".module"),
        newScript = safeData.find("script#cfLoadMap");

    // Update each module
    newModules.each(function (i) {
        var jqoThis = $(this),
            thisId = "#" + jqoThis.attr("id"),
            newModule = jqoThis,
            newModBody = jqoThis.find(".mod-body"),
            curModule = $("body").find(thisId),
            curModBody = curModule.find(".mod-body");
        // Varies by id, this one is used by the map area.
        curModBody.html(innerShiv(newModBody.html()));
    }); // each

    // Make sure plugins are bound to new content
    $("body").oneTime(100, function () {
        // Various things get initiated here

        // Maps -- this one works: Chrome, Firefox, IE7, Opera
        $("head").append(newScript);

        // Maps -- these did not work
        /*
        // Firefox only (but Firefox always works)
        runScript = new Function(newScript.text());
        runScript();
        */
        /*
        // Firefox only (but Firefox always works)
        eval(newScript.text());
        */
    }); // oneTime
}, "html"); // get
One thing I did notice for sure was that without innerShiv, in all my browsers, $(data).find("script#cfLoadMap").text() was blank -- which I did not expect.
However, I don't really see how this is different from what I had tried before, which failed. If someone spots a substantive difference, please let me know for future reference.
(Note: it doesn't seem to make a difference that the map bit is placed inside the timeout; it works just as well above it.)