Basically, what I'm trying to do is pass a parameter through the URL to the PHP code. However, it seems that inside the body of the message event handler I can't change the source. Here's the code:
var source = new EventSource("get_message.php?latest_chat_ID=0");
var i = 0;
$(source).on("message", function (event) {
    var data = event.originalEvent.data;
    ++i;
    source = new EventSource("get_message.php?latest_chat_ID=" + i);
    // $.post("get_message.php", {latest_chat_ID: 0}, function (data, status) {});
    $("#messages").html($("#messages").html() + data);
});
I was wondering -
How do I rectify this problem?
Are there other ways to send data to a PHP page? (I contemplated using the jQuery $.post() function, but won't that execute the script twice - once from firing the EventSource event and once from the $.post() request?)
I also understand that alternative technologies exist, such as WebSockets and libraries such as node.js, that are better suited for bidirectional communication. However, most of my base code is written with an SSE implementation in mind, and I'd like to stick to that.
If you want to continue using SSE, I think you'll need to rewrite what you have along the lines of what is below. The reason your current code doesn't work is that you are only ever listening to the first EventSource and just changing the variable; jQuery does not re-bind the handler when you reassign it. Plus, I would probably skip jQuery for this, since it's going to try and cache things you don't want cached.
var listenToServer = function(i) {
        // close the previous connection before opening a new one
        if (source) {
            source.close();
        }
        source = new EventSource("get_message.php?latest_chat_ID=" + i);
        source.onmessage = function(event) {
            // no jQuery wrapper here, so the data is on the event itself
            var data = event.data;
            $messages.html($messages.html() + data);
            listenToServer(i + 1);
        };
    },
    $messages = $('#messages'),
    source;
listenToServer(0);
I also went ahead and cached $('#messages') so you're not creating new jQuery objects over and over, and left source outside of the function so that you don't have to worry as much about garbage collection of the various EventSources.
Related
I'm experimenting with IndexedDB. Now everything is asynchronous, and that hurts my brain a lot.
I created an object like this:
var application = {};
application.indexedDB = {};
application.indexedDB.db = null;

application.indexedDB.open = function() {
    var dbName = "application";
    var dbVersion = 1;
    var openRequest = indexedDB.open(dbName, dbVersion);

    openRequest.onupgradeneeded = function(e) {
        console.log("Upgrading your DB (" + dbName + ", v" + dbVersion + ")...");
        var thisDB = e.target.result;
        if (!thisDB.objectStoreNames.contains("journal")) {
            thisDB.createObjectStore("journal", {keyPath: "id"});
        }
    };

    openRequest.onsuccess = function(e) {
        console.log("Opened DB (" + dbName + ", v" + dbVersion + ")");
        application.indexedDB.db = e.target.result;
    };

    openRequest.onerror = function(e) {
        console.log("Error");
        console.dir(e);
    };
};
Now I am able to open the DB connection with application.indexedDB.open(). Then I added another function to my object:
application.indexedDB.addItemToTable = function(item, table) {
    var transaction = application.indexedDB.db.transaction([table], "readwrite");
    var store = transaction.objectStore(table);

    // Perform the add
    var request = store.add(item);

    request.onerror = function(e) {
        console.log("Error", e.target.error.name);
        // some type of error handler
    };

    request.onsuccess = function(e) {
        console.log("Woot! Did it");
    };
};
My call sequence then looks like this:
application.indexedDB.open()
application.indexedDB.addItemToTable(item, "journal")
But this doesn't work: because open() is asynchronous, application.indexedDB.db is not yet available when I access it in the addItemToTable function.
How does a JavaScript developer solve this?
I was following this tutorial: http://code.tutsplus.com/tutorials/working-with-indexeddb--net-34673 and now I have some problems with the examples.
For example, he creates the HTML output directly in the onsuccess handler (in the "Read More Data" section). In my eyes this is bad coding, because the view has nothing to do with the DB-reading part, isn't it? But that leads to my question: how the heck can you return something from the onsuccess handler?
Adding callback functions gets complicated, especially when I want to read some data and then use that result set to fetch more data. It's hard to describe exactly what I mean.
I made a little fiddle - maybe it clarifies things: http://jsfiddle.net/kb8nuby6/
Thank you
You don't need to use someone else's extra program, but you will need to learn about asynchronous JavaScript (AJAX) before using IndexedDB; there is no way to avoid it. You can learn about asynchronous code without learning about IndexedDB. For example, look at how XMLHttpRequest works. Learn about setTimeout and setInterval. Maybe learn about requestAnimationFrame. If you know Node.js, review process.nextTick. Learn about how functions are first-class. Learn about the idea of using a callback function. Learn about continuation-passing style.
You will probably not get the answer you are looking for to this question. If anything, this is a duplicate of the several thousand other questions on Stack Overflow about asynchronous programming in JavaScript. It is not even that specific to IndexedDB. Take a look at some of the numerous other questions about asynchronous JS.
Maybe this gets you started:
var a;
setTimeout(function() { a = 1; }, 10);
console.log('The value of a is %s', a);
Figure out why that did not work. If you do, you will be much closer to finding the answer to this question.
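For contrast, here is a minimal callback-based sketch of the same snippet; getA is just an illustrative name, not part of any API:
function getA(callback) {
    // hand the value to the callback once it exists,
    // instead of reading it before the timeout has fired
    setTimeout(function() { callback(1); }, 10);
}

getA(function(a) {
    console.log('The value of a is %s', a); // logs 1
});
IndexedDB's onsuccess handlers work the same way: the only place the result reliably exists is inside the callback.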
The pattern I commonly adopt is to defer all database operations until the connection is ready. It is similar in concept to $.ready in jQuery.
You will find that as the app ages, you accumulate many schema versions and need to upgrade data as well, so a lot of logic lives in the database connection itself.
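As a rough sketch of what that tends to look like (the store and index names are illustrative, based on the question's code), onupgradeneeded accumulates one migration step per schema version:
openRequest.onupgradeneeded = function(e) {
    var db = e.target.result;
    if (e.oldVersion < 1) {
        // v1: initial schema
        db.createObjectStore("journal", {keyPath: "id"});
    }
    if (e.oldVersion < 2) {
        // v2: add an index, reusing the versionchange transaction
        e.target.transaction.objectStore("journal").createIndex("by_date", "date");
    }
};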
You can use a callback queue if you need to use the database before it is ready. Here is a snippet from Google Analytics using a command queue:
// Creates an initial ga() function. The queued commands will be executed once analytics.js loads.
i[r] = i[r] || function() {
    (i[r].q = i[r].q || []).push(arguments)
},
Basically, you execute these queued callbacks once the database is connected.
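Applied to the code from the question, a minimal sketch of that queue pattern could look like this; the ready() helper and queue array are illustrative, not an existing API:
application.indexedDB.queue = [];

// run the callback now if the DB is open, otherwise queue it
application.indexedDB.ready = function(callback) {
    if (application.indexedDB.db) {
        callback();
    } else {
        application.indexedDB.queue.push(callback);
    }
};

// in openRequest.onsuccess, after setting application.indexedDB.db:
//     application.indexedDB.queue.forEach(function(cb) { cb(); });
//     application.indexedDB.queue.length = 0;
Usage then becomes:
application.indexedDB.open();
application.indexedDB.ready(function() {
    application.indexedDB.addItemToTable(item, "journal");
});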
I highly recommend checking out my own library, ydn-db. It has all these concepts built in.
I'm working on a small extension with the Firefox Add-on SDK that has to alter the content of DOM elements in pages. I'm using PageMod to add the content script and register some events, and to some of them I want to pass along a callback function, like this:
main.js
pageMod.PageMod({
    include: "*",
    contentScriptWhen: 'ready',
    contentScriptFile: [self.data.url("my_content_script.js")],
    onAttach: function(worker) {
        worker.port.on("processElement", function(elementSrc, callback) {
            doSomeProcessingViaJsctypes();
            callback("http://someUrl/image.png");
        });
    }
});
my_content_script.js
var elements = document.getElementsByTagName("img");
var elementsLength = elements.length;

for (var i = 0; i < elementsLength; i++) {
    (function(obj) {
        obj.setAttribute("data-processed", "true");
        self.port.emit("processElement", obj.src, function(newSrc) {
            console.log("replaced " + obj.src);
            obj.src = newSrc;
        });
    })(elements[i]);
}
The error
TypeError: callback is not a function
Stack trace:
.onAttach/<#resource://gre/modules/XPIProvider.jsm -> file:///c:/users/sebast~1/appdata/local/temp/tmpjprtpo.mozrunner/extensions/jid1-gWyqTW27PXeXmA#jetpack/bootstrap.js -> resource://gre/modules/commonjs/toolkit/loader.js -> resource://jid1-gwyqtw27pxexma-at-jetpack/testextension/lib/main.js:53
I can't seem to find anything on the matter on the web. I need this approach since the processing takes a bit of time and depends on a .dll file so I can't call it from the content script.
If I were to process the element and then call worker.port.emit(), I would have to iterate through the entire tree again to identify the element and change its src attribute. That would take a long time and add extra loops for each img in the document.
I've thought about generating a unique class name and appending it to the element's classes and then calling getElementsByClassName(). I have not tested this but it seems to me that it would take the same amount of time as the process I described above.
Any suggestions would be appreciated.
EDIT: I have found this answer on a different question. Wladimir Palant suggests using window-utils to get the activeBrowserWindow and then iterate through its content.
He also mentions that
these low-level APIs aren't guaranteed to be stable and the window-utils module isn't even fully documented
Has anything changed since then? I was also wondering whether you can get the same content attribute using tabs, and whether you can identify the tab from which a worker sent a self.port.emit().
When using messaging between content-scripts and your modules (main.js), you can only pass data around that is JSON-serializable.
Passing <img>.src should be OK, as this is a string, and therefore JSON-serializable.
Your code breaks because of the callback function you're trying to pass, since a function is not JSON-serializable (just as whole DOM nodes are not).
Also, .emit and .on use only the first argument as the message payload.
Instead of a callback, you'll have to actually emit another message back to the content script after you did your processing. And since you cannot pass DOM elements, you'll need to keep track of what DOM element belongs to what message.
Alright, here is an example of how I'd do it.
First main.js:
const self = require("sdk/self");
const {PageMod} = require("sdk/page-mod");

function processImage(src) {
    return src + " dummy";
}

PageMod({
    include: "*",
    contentScriptWhen: 'ready',
    contentScriptFile: [self.data.url("content.js")],
    onAttach: function(worker) {
        worker.port.on("processImage", function(data) {
            worker.port.emit("processedImage", {
                job: data.job,
                newSrc: processImage(data.src)
            });
        });
    }
});
In my design, each processImage message has a job associated with it (see the content script), which main.js considers opaque and just posts back verbatim with the response.
Now, data/content.js, aka. my content script:
var jobs = new Map();
var jobCounter = 0;

self.port.on("processedImage", function(data) {
    var img = jobs.get(data.job);
    jobs.delete(data.job);
    var newSrc = data.newSrc;
    console.log("supposed to replace", img.src, "with", newSrc);
});

for (var img of document.querySelectorAll("img")) {
    var job = jobCounter++; // new job number
    jobs.set(job, img); // remember which element this job belongs to
    self.port.emit("processImage", {
        job: job,
        src: img.src
    });
}
So, essentially, for each image we create a job number (it could be a UUID or whatever instead, but an incrementing counter is good enough for our use case) and put the DOM image associated with that job number into a map to keep track of it.
After that, we just post the message to main.js.
The processedImage handler then receives back the job number and the new source, uses the job number and the jobs map to get back the DOM element, removes it from the map again (we don't want to leak stuff), and does whatever processing is required; in this example it just logs.
I use the following sample program to append media files, but I get an "Uncaught InvalidStateError: An attempt was made to use an object that is not, or is no longer, usable" error the first time the code hits mediaSource.sourceBuffers[0].appendBuffer(mediaSegment). I am using Chrome 31.0.1650.57. Can anyone advise me on how to resolve this?
https://github.com/jbochi/media-source-playground/blob/master/test.html
I have made the following modifications to append the files.
var buffer_len = 0;

function HaveMoreMediaSegments() {
    //return false; //return buffers.length > 0;
    return buffers.length > buffer_len;
}

// var GetNextMediaSegment = GetInitializationSegment;
function GetNextMediaSegment() {
    var buffer = buffers[buffer_len];
    buffers = buffers.slice(1);
    buffer_len = buffer_len + 1;
    return buffer;
}
And changed
mediaSource.sourceBuffers[0].append(mediaSegment);
to
mediaSource.sourceBuffers[0].appendBuffer(mediaSegment);
And
sourceBuffer.append(initSegment);
to
sourceBuffer.appendBuffer(initSegment);
This is because the append method is not working in my environment.
And I use the sourceopen event instead of webkitsourceopen inside window.setTimeout():
mediaSource.addEventListener('sourceopen', onSourceOpen.bind(this, video));
The problem is that after you append data, the SourceBuffer instance becomes temporarily unusable while it's working. During this time, the SourceBuffer's updating property will be set to true, so it's easy to check for.
But probably the simplest way to deal with this is to listen to the updateend event, and just queue up all your buffers and only append them when the SourceBuffer tells you it's ready for a new one. Like this:
// store the buffers until you're ready for them
var queue = [];

// whatever normally would have called appendBuffer(buffer) can
// now just call queue.push(buffer) instead

sourceBuffer.addEventListener('updateend', function() {
    if (queue.length) {
        sourceBuffer.appendBuffer(queue.shift());
    }
}, false);
Keep in mind that in order for the first event to fire, you need to append the first buffer manually, rather than pushing to queue. After that, just dump everything into the array.
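If you'd rather not special-case that first append, a small helper along these lines covers both situations (a sketch, assuming the queue and sourceBuffer variables from above):
// append immediately when the SourceBuffer is idle and nothing is
// queued; otherwise keep the segment queued so order is preserved
function appendSegment(buffer) {
    if (sourceBuffer.updating || queue.length > 0) {
        queue.push(buffer);
    } else {
        sourceBuffer.appendBuffer(buffer);
    }
}
Every producer then calls appendSegment(), and the updateend listener above drains the queue.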
Have you found the answer? Same problem here :(
I have a temporary solution, which is to set a setTimeout for each buffer append, since you have to wait for the previous buffer to finish updating (sourceBuffer.updating == false). The code may look like this:
var i = 0;
stream.on('data', function (buffer) {
    setTimeout(function() {
        sourceBuffer.appendBuffer(buffer);
    }, (i++) * 200);
});
So I've been stuck on this for quite a while. I asked a similar question here: How exactly does done() work and how can I loop executions inside done()?
but I guess my problem has changed a bit.
So the thing is, I'm loading a lot of streams and it's taking a while to process them all. To make up for that, I want to at least display the tweets that have already been processed on my webpage while continuing to process the stream of tweets at the same time.
loadTweets: function(username) {
    $.ajax({
        url: '/api/1.0/tweetsForUsername.php?username=' + username
    }).done(function (data) {
        var json = jQuery.parseJSON(data);
        var jsonTweets = json['tweets'];
        $.Mustache.load('/mustaches.php', function() {
            for (var i = 0; i < jsonTweets.length; i++) {
                var tweet = jsonTweets[i];
                var optional_id = '_user_tweets';
                $('#all-user-tweets').mustache('tweets_tweet', { tweet: tweet, optional_id: optional_id });
                configureTweetSentiment(tweet);
                configureTweetView(tweet);
            }
        });
    });
}
This is pretty much the structure of my code right now. I guess the problem is the for loop, because nothing displays until the loop is done. So I have two questions.
How can I get the stream of tweets to display on my website as they're processed?
How can I make sure the Mustache.load() is only executed once while doing this?
The problem is that UI manipulation and JS operations all run in the same thread. To solve this, use a setTimeout/setInterval so that the JS operations are queued behind the pending UI updates. You can also pass a parameter for the time interval (around 4 ms) so that browsers with a slower JS engine can perform smoothly as well.
...
var i = 0;
var timer = setInterval(function() {
    var tweet = jsonTweets[i++];
    var optional_id = '_user_tweets';
    $('#all-user-tweets').mustache('tweets_tweet', {
        tweet: tweet,
        optional_id: optional_id
    });
    configureTweetSentiment(tweet);
    configureTweetView(tweet);
    if (i === jsonTweets.length) {
        clearInterval(timer);
    }
}, 4); // Interval between loading tweets
...
NOTE
The solution is based on the following assumptions -
You are manipulating the DOM with the configureTweetSentiment and configureTweetView methods.
Ideally the solution provided above is not the best one. Instead, you should build all the HTML in JavaScript first and append the final HTML string to a div at the end; you would see a drastic change in performance (seriously!). See the sketch after this list.
You don't want to use web workers because they are not supported in old browsers. If that's not the case and you are not manipulating the DOM in the configure methods, then web workers are the way to go for data-intensive operations.
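Here is the promised sketch of the "build first, append once" approach, assuming the jQuery-Mustache plugin's $.Mustache.render() is available to render a template to a string:
var parts = [];
for (var i = 0; i < jsonTweets.length; i++) {
    // render each tweet to an HTML string; no DOM writes yet
    parts.push($.Mustache.render('tweets_tweet', {
        tweet: jsonTweets[i],
        optional_id: '_user_tweets'
    }));
}
// one DOM write instead of one per tweet
$('#all-user-tweets').append(parts.join(''));
The per-tweet configure calls would still have to run after the markup is in place.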
I have a C# web server which provides some HTML/JS/CSS and images to a browser-based client.
I create the page dynamically using javascript, filling images and divs while receiving messages from a websocket C# server.
Sometimes, I receive a message from the websocket and I try to access a resource in the webserver which might be protected.
I use some custom HTTP status codes for this (say, 406), because otherwise the browser pops up a dialog box for digest auth login.
I was wondering whether there is a way to channel all 406 responses into the same function, say
RegisterHttpEventHandler(406, myFunction);
I guess I could just wrap every request/dynamic load in try/catch blocks, although I'd love a cleaner way of doing this.
EDIT
Here is an example of the workflow I have implemented so far, which works fine.
// WebSocket definition
var conn = new WebSocket('ws://' + addressIP);

// WebSocket receiver callback
conn.onmessage = function (event) {
    // my messages are JSON-formatted
    var mObj = JSON.parse(event.data);
    // check if we have something nice
    if (mObj.message == "I have got a nice image for you") {
        showImageInANiceDiv(mObj.ImageUrl);
    }
};

// dynamically load an image
function showImageInANiceDiv(imgUrl) {
    var imgHtml = wrapUrlInErrorHandler(imgUrl);
    $("#imageBox").html(imgHtml);
}

// function to wrap everything in an error handler
function wrapUrlInErrorHandler(Url) {
    return '<img src="' + Url + '" onerror="errorHandler(\'' + Url + '\');"/>';
}

// function to handle errors
function errorHandler(imgUrl) {
    console.log("this guy triggered an error: " + imgUrl);
}
the onerror does not tell me what failed, so I have to make an XHR just to find it out. That's a minor thing
You could first try the XHR: if it fails, you know what happened; if it succeeds, you can display the image from cache (see also here). And of course you could also make it call some globally-defined custom hooks for special status codes - yet you will need to call them manually; there is no pre-defined global event source.
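A minimal sketch of that XHR-first idea; loadImageChecked and the statusHooks map are illustrative names, not an existing API:
function loadImageChecked(url, statusHooks) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function() {
        if (xhr.status === 200) {
            // the browser has now cached the image, so this
            // displays it without a second round trip
            showImageInANiceDiv(url);
        } else if (statusHooks && statusHooks[xhr.status]) {
            statusHooks[xhr.status](url); // e.g. { 406: myFunction }
        }
    };
    xhr.send();
}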
I'd like to have something like a document.onerror handler to avoid using the wrapUrlInErrorHandler function every time I have to put an image in the page
That's impossible. The error event (MDN, MSDN) does not bubble; it fires directly and only on the <img> element. However, you could get around that ugly wrapUrlInErrorHandler if you didn't use inline event attributes, but (traditional) advanced event handling:
function showImageInANiceDiv(imgUrl) {
    var img = new Image();
    img.onerror = errorHandler; // attach the handler before setting src
    img.src = imgUrl;
    $("#imageBox").empty().append(img);
}
// function to handle errors
function errorHandler(event) {
    var img = event.target || event.srcElement; // or use the "this" keyword
    console.log("this guy triggered an " + event.type + ": " + img.src);
}