I am trying to stream MP3 data from my server to the client side using Ajax. The server sends 50 kilobytes per request. I wrote two functions: one that fetches the MP3 data and one that plays it. The first function takes the 50 kilobytes, decodes them, stores the decoded data in an array, and then calls itself recursively. The second function starts playing as soon as the first element of the array is filled with data. The problem is that this works for the first 50 kilobytes only, then it fails. What I want is to keep my buffer_seg function running until the server signals that there is no more data to send, and to keep my play() function playing until there are no more elements in the array.
Here are my two functions:
function buffer_seg() {
    // starts a new request
    buff_req = new XMLHttpRequest();
    // request attributes
    var method = 'GET';
    var url = '/buffer.php?request=buffer&offset=' + offset;
    var async = true;
    // set attributes
    buff_req.open(method, url, async);
    buff_req.responseType = 'arraybuffer';
    // keeps loading until something is received
    if (!loaded) {
        change_icon();
        buffering = true;
    }
    buff_req.onload = function() {
        segment = buff_req.response;
        // if the whole file was already buffered
        if (segment.byteLength == 4) {
            return true;
        } else if (segment.byteLength == 3) {
            return false;
        }
        // sets the new offset
        if (offset == -1) {
            offset = BUFFER_SIZE;
        } else {
            offset += BUFFER_SIZE;
        }
        // decodes mp3 data and adds it to the array
        audioContext.decodeAudioData(segment, function(decoded) {
            buffer.push(decoded);
            debugger;
            if (index == 0) {
                play();
            }
        });
    }
    buff_req.send();
    buffer_seg();
}
Second function:
function play() {
    // checks if the end of buffer has been reached
    if (index == buffer.length) {
        loaded = false;
        change_icon();
        if (buffer_seg == false) {
            stop();
            change_icon();
            return false;
        }
    }
    loaded = true;
    change_icon();
    // new buffer source
    var src = audioContext.createBufferSource();
    src.buffer = buffer[index++];
    // connects
    src.connect(audioContext.destination);
    src.start(time);
    time += src.buffer.duration;
    src.onended = function() {
        src.disconnect(audioContext.destination);
        play();
    }
}
The recursive call to buffer_seg is in the main body of buffer_seg, not in the onload callback, so it happens immediately, not, as you seem to intend, after a response is received. This also means that the recursive call is unconditional, when it should depend on whether the previous response indicated that more data would be available. If this isn't outright crashing your browser, I'm not sure why not. It also means that chunks of streamed audio could be pushed into the buffer out of order.
So to start I'd look at moving the recursive call to the end of the onload handler, after the check for end of stream.
In the second function, what do you intend if (buffer_seg == false) to do? This condition will never be met: buffer_seg is a function, so it never compares equal to false. Are you thinking this is a way to see the last return value from buffer_seg? That's not how it works. Perhaps you should have a variable that both functions can see, which buffer_seg can set and play can test, or something like that.
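To sketch what that restructuring could look like (an outline only, not drop-in code; moreData is a hypothetical shared flag that both functions can see):
var moreData = true; // hypothetical shared flag: buffer_seg sets it, play tests it

function buffer_seg() {
    buff_req = new XMLHttpRequest();
    buff_req.open('GET', '/buffer.php?request=buffer&offset=' + offset, true);
    buff_req.responseType = 'arraybuffer';
    buff_req.onload = function() {
        segment = buff_req.response;
        // your existing end-of-stream sentinels
        if (segment.byteLength == 3 || segment.byteLength == 4) {
            moreData = false; // let play() know the stream is done
            return;
        }
        offset = (offset == -1) ? BUFFER_SIZE : offset + BUFFER_SIZE;
        audioContext.decodeAudioData(segment, function(decoded) {
            buffer.push(decoded);
            if (index == 0) {
                play();
            }
            buffer_seg(); // recurse only after this response has been handled
        });
    };
    buff_req.send();
}
play() would then test moreData where it currently tests buffer_seg == false.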
I am trying to get the buffered parts of a media element (specifically I try to get it from a video element, but I want it to work for audio too), but when I use the start() or end() functions with some offset (for example, 0), the log shows the following error:
IndexSizeError: Index or size is negative or greater than the allowed amount
What's wrong with my code?
var mediaelement = function(e) {
    return e.buffered.start(0);
}
console.log(mediaelement(document.querySelector('video')));
Most probably, there is nothing buffered yet when you call your function.
You should add a check to e.buffered.length before calling TimeRanges.start() or TimeRanges.end():
function getbufferedstart(el) {
    if (el.buffered.length) {
        return el.buffered.start(0);
    }
    else {
        return 'avoided a throw';
    }
}

var a = new Audio('https://dl.dropboxusercontent.com/s/8c9m92u1euqnkaz/GershwinWhiteman-RhapsodyInBluePart1.mp3');
a.onloadedmetadata = onwehavebufferedsomething;
console.log(getbufferedstart(a)); // avoided
console.log('really ?');
a.buffered.start(0); // Error

function onwehavebufferedsomething(evt) {
    console.log("now it's ok");
    console.log(getbufferedstart(a)); // 0
}
I have an XMLHttpRequest with a progress event handler that is requesting a chunked page which continuously sends additional message chunks. If I do not set a responseType, I can access the response property of the XMLHttpRequest in each progress event and handle the additional message chunk. The problem with this approach is that the browser must keep the entire response in memory, and eventually the browser will crash due to this memory waste.
So, I tried a responseType of arraybuffer in the hope that I could slice the buffer to prevent the previous excessive memory waste. Unfortunately, the progress event handler is no longer capable of reading the response property of the XMLHttpRequest at that point. The event parameter of the progress event does not contain the buffer, either. Here is a short, self-contained example of my attempt at this (written for node.js):
var http = require('http');

// -- The server.
http.createServer(function(req, res) {
    if (req.url === '/stream') return serverStream(res);
    serverMain(res);
}).listen(3000);

// -- The server functions to send an HTML page with the client code, or a stream.
function serverMain(res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write('<html><body>Hello World</body><script>');
    res.end(client.toString() + ';client();</script></html>');
}

function serverStream(res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    setInterval(function() {
        res.write('Hello World<br />\n');
    }, 1000);
}

// -- The client code which runs in the browser.
function client() {
    var xhr = new XMLHttpRequest();
    xhr.addEventListener('progress', function() {
        if (!xhr.response) return console.log('progress without response :-(');
        console.log('progress: ' + xhr.response.byteLength);
    }, false);
    xhr.open('GET', '/stream', true);
    xhr.responseType = 'arraybuffer';
    xhr.send();
}
The progress event handler has no access to the response I wanted. How can I handle the message chunks in the browser in a memory-efficient way? Please do not suggest a WebSocket. I do not wish to use one just to process a read-only stream of message chunks.
XMLHttpRequest doesn't really seem designed for this kind of usage. The obvious solution is polling, which is a popular use of XMLHttpRequest, but I'm guessing you don't want to miss data from your stream that would slip between the calls.
To my question "Can the 'real' data chunks be identified in some way or is it basically random data?", you answered: "With some effort, the chunks could be identified by adding an event-id of sorts on the server side."
Based on this premise, I propose:
The idea: cooperating concurrent listeners
1. Connect to the stream and set up the progress listener (referred to as listenerA()).
2. When a chunk arrives, process it and output it. Keep a reference to the ids of both the first and last chunk received by listenerA(). Count how many chunks listenerA() has received.
3. After listenerA() has received a certain number of chunks, spawn another "thread" (connection + listener, listenerB()) doing steps 1 and 2 in parallel to the first one, but keep the processed data in a buffer instead of outputting it.
4. When listenerA() receives the chunk with the same id as the first chunk received by listenerB(), send a signal to listenerB(), drop the first connection and kill listenerA().
5. When listenerB() receives the termination signal from listenerA(), dump the buffer to the output and keep processing normally.
6. Have listenerB() spawn listenerC() on the same conditions as before.
7. Keep repeating with as many connections + listeners as necessary.
By using two overlapping connections, you can prevent the possible loss of chunks that would result from dropping a single connection and then reconnecting.
Notes
This assumes the data stream is the same for all connections and doesn't introduce some individualized settings.
Depending on the output rate of the stream and the connection delay, the buffer dump during the transition from one connection to another might be noticeable.
You could also measure the total response size rather than the chunk count to decide when to switch to a new connection.
It might be necessary to keep a complete list of chunk ids to compare against, rather than just the first and last one, because we can't guarantee the timing of the overlap.
The responseType of XMLHttpRequest must be set to its default value of "" or "text" to return text; other response types will not return a partial response. See https://xhr.spec.whatwg.org/#the-response-attribute
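For example, a minimal sketch of reading just the newly arrived part of a text response on each progress event (the same trick the proof of concept below uses):
// Sketch: with responseType "" or "text", xhr.response grows as data arrives,
// so each progress event can read just the new tail.
var xhr = new XMLHttpRequest();
var readSoFar = 0;
xhr.addEventListener('progress', function() {
    var newChunk = xhr.response.substr(readSoFar);
    readSoFar = xhr.response.length;
    console.log('new data: ' + newChunk);
}, false);
xhr.open('GET', '/stream', true);
xhr.responseType = 'text';
xhr.send();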
Test server in node.js
The following code is a node.js server that outputs a consistent stream of elements for testing purposes. You can open multiple connections to it; the output will be the same across sessions, minus possible server lag.
http://localhost:5500/stream
will return data where id is an incremented number
http://localhost:5500/streamRandom
will return data where id is a random 40 characters long string. This is meant to test a scenario where the id can not be relied upon for ordering the data.
var crypto = require('crypto');

// init + update nodeId
var nodeId = 0;
var nodeIdRand = '0000000000000000000000000000000000000000';
setInterval(function() {
    // regular id
    ++nodeId;
    // random id
    nodeIdRand = crypto.createHash('sha1').update(nodeId.toString()).digest('hex');
}, 1000);

// create server (port 5500)
var http = require('http');
http.createServer(function(req, res) {
    if (req.url === '/stream') {
        return serverStream(res);
    }
    else if (req.url === '/streamRandom') {
        return serverStream(res, true);
    }
}).listen(5500);

// serve nodeId
function serverStream(res, rand) {
    // headers
    res.writeHead(200, {
        'Content-Type' : 'text/plain',
        'Access-Control-Allow-Origin' : '*',
    });
    // remember last served id
    var last = null;
    // output interval
    setInterval(function() {
        // output on new node
        if (last != nodeId) {
            res.write('[node id="' + (rand ? nodeIdRand : nodeId) + '"]');
            last = nodeId;
        }
    }, 250);
}
Proof of concept, using aforementioned node.js server code
<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
</head>
<body>
    <button id="stop">stop</button>
    <div id="output"></div>
    <script>
    /*
        Listening to a never ending page load (http stream) without running out of
        memory by using concurrent overlapping connections to prevent loss of data,
        using only XMLHttpRequest, under the condition that the data can be identified.

        listen arguments
            url             url of the http stream
            chunkMax        number of chunks to receive before switching to a new connection

        listen properties
            output          a reference to a DOM element with id "output"
            queue           an array filled with non-duplicate received chunks and metadata
            lastFetcherId   an incrementing number used to assign an id to new fetchers
            fetchers        an array listing all active fetchers

        listen methods
            fire        internal use    fire an event
            stop        external use    stop all connections
            fetch       internal use    start a new connection
            fetchRun    internal use    initialize a new fetcher object

        Usage
            var myListen = new listen('http://localhost:5500/streamRandom', 20);
                will listen to url "http://localhost:5500/streamRandom"
                will switch connections every 20 chunks
            myListen.stop()
                will stop all connections in myListen
    */
    function listen(url, chunkMax) {
        // main ref
        var that = this;
        // output element
        that.output = document.getElementById('output');
        // main queue
        that.queue = [];
        // last fetcher id
        that.lastFetcherId = 0;
        // list of fetchers
        that.fetchers = [];

        //********************************************************* event dispatcher
        that.fire = function(name, data) {
            document.dispatchEvent(new CustomEvent(name, {'detail':data}));
        }

        //******************************************************** kill all fetchers
        that.stop = function() {
            that.fire('fetch-kill', -1);
        }

        //************************************************************** url fetcher
        that.fetch = function(fetchId, url, fetchRef) {
            //console.log('start fetcher #'+fetchId);
            var len = 0;
            var xhr = new XMLHttpRequest();
            var cb_progress;
            var cb_kill;

            // progress listener
            xhr.addEventListener('progress', cb_progress = function(e) {
                // extract chunk data
                var chunkData = xhr.response.substr(len);
                // chunk id
                var chunkId = chunkData.match(/id="([a-z0-9]+)"/)[1];
                // update response end point
                len = xhr.response.length;
                // signal end of chunk processing
                that.fire('chunk-ready', {
                    'fetchId'   : fetchId,
                    'fetchRef'  : fetchRef,
                    'chunkId'   : chunkId,
                    'chunkData' : chunkData,
                });
            }, false);

            // kill switch
            document.addEventListener('fetch-kill', cb_kill = function(e) {
                // kill this fetcher or all fetchers (-1)
                if(e.detail == fetchId || e.detail == -1) {
                    //console.log('kill fetcher #'+fetchId);
                    xhr.removeEventListener('progress', cb_progress);
                    document.removeEventListener('fetch-kill', cb_kill);
                    xhr.abort();
                    that.fetchers.shift(); // remove oldest fetcher from list
                    xhr = null;
                }
            }, false);

            // go
            xhr.open('GET', url, true);
            xhr.responseType = 'text';
            xhr.send();
        };

        //****************************************************** start a new fetcher
        that.fetchRun = function() {
            // new id
            var id = ++that.lastFetcherId;
            //console.log('create fetcher #'+id);
            // create fetcher with new id
            var fetchRef = {
                'id'           : id,    // self id
                'queue'        : [],    // internal queue
                'chunksIds'    : [],    // retrieved ids, also used to count
                'hasSuccessor' : false, // keep track of next fetcher spawn
                'ignoreId'     : null,  // when set, ignore chunks until this id is received (this id included)
            };
            that.fetchers.push(fetchRef);
            // run fetcher
            that.fetch(id, url, fetchRef);
        };

        //************************************************ a fetcher returns a chunk
        document.addEventListener('chunk-ready', function(e) {
            // shorthand
            var f = e.detail;

            // ignore flag is not set, process chunk
            if(f.fetchRef.ignoreId == null) {
                // store chunk id
                f.fetchRef.chunksIds.push(f.chunkId);
                // create queue item
                var queueItem = {'id':f.chunkId, 'data':f.chunkData};
                // chunk is received from oldest fetcher
                if(f.fetchId == that.fetchers[0].id) {
                    // send to main queue
                    that.queue.push(queueItem);
                    // signal queue insertion
                    that.fire('queue-new');
                }
                // not oldest fetcher
                else {
                    // use fetcher internal queue
                    f.fetchRef.queue.push(queueItem);
                }
            }
            // ignore flag is set, current chunk id is the one to ignore
            else if(f.fetchRef.ignoreId == f.chunkId) {
                // disable ignore flag
                f.fetchRef.ignoreId = null;
            }

            //******************** check chunks count for fetcher, threshold reached
            if(f.fetchRef.chunksIds.length >= chunkMax && !f.fetchRef.hasSuccessor) {
                // remember the spawn
                f.fetchRef.hasSuccessor = true;
                // spawn new fetcher
                that.fetchRun();
            }

            /***********************************************************************
                check if the first chunk of the second oldest fetcher exists in the
                oldest fetcher.
                If true, then they overlap and we can kill the oldest fetcher
            ***********************************************************************/
            if(
                // is this the oldest fetcher ?
                f.fetchId == that.fetchers[0].id
                // is there a successor ?
                && that.fetchers[1]
                // has the oldest fetcher received the first chunk of its successor ?
                && that.fetchers[0].chunksIds.indexOf(
                    that.fetchers[1].chunksIds[0]
                ) > -1
            ) {
                // get index of last chunk of the oldest fetcher within successor queue
                var lastChunkId = that.fetchers[0].chunksIds[that.fetchers[0].chunksIds.length-1];
                var lastChunkIndex = that.fetchers[1].chunksIds.indexOf(lastChunkId);

                // successor has not reached its parent's last chunk
                if(lastChunkIndex < 0) {
                    // discard whole queue
                    that.fetchers[1].queue = [];
                    that.fetchers[1].chunksIds = [];
                    // set ignore id in successor to discard future duplicates
                    that.fetchers[1].ignoreId = lastChunkId;
                }
                // there is overlap
                else {
                    /**
                    console.log('trimming queue start: '+that.fetchers[1].queue.length
                        +" "+(lastChunkIndex+1)
                        +" "+(that.fetchers[1].queue.length-1)
                    );
                    /**/
                    var trimStart = lastChunkIndex+1;
                    var trimEnd = that.fetchers[1].queue.length-1;
                    // trim queue
                    that.fetchers[1].queue = that.fetchers[1].queue.splice(trimStart, trimEnd);
                    that.fetchers[1].chunksIds = that.fetchers[1].chunksIds.splice(trimStart, trimEnd);
                    //console.log('trimming queue end: '+that.fetchers[1].queue.length);
                }

                // kill oldest fetcher
                that.fire('fetch-kill', that.fetchers[0].id);
            }
        }, false);

        //***************************************************** main queue processor
        document.addEventListener('queue-new', function(e) {
            // process chunks in queue
            while(that.queue.length > 0) {
                // get chunk and remove from queue
                var chunk = that.queue.shift();
                // output item to document
                if(that.output) {
                    that.output.innerHTML += "<br />"+chunk.data;
                }
            }
        }, false);

        //****************************************************** start first fetcher
        that.fetchRun();
    };

    // run
    var process = new listen('http://localhost:5500/streamRandom', 20);
    // bind global kill switch to button
    document.getElementById('stop').addEventListener('click', process.stop, false);
    </script>
</body>
</html>
I have a script which does an Ajax call to receive notifications from some page. The data received in the callback function is an integer (0, 1, 2, 3, ...), which is the number of rows selected by the query. Please find the query I am using in the description below.
I am using the following JS :
function myFunction() {
    $.get("AjaxServicesNoty.aspx", function (data) {
        var recievedCount = data;
        var existingCount = $("lblEventCount").text();
        if (existingCount == "") {
            $(".lblEventCount").html(recievedCount);
            $(".lblAcceptedCount").html(recievedCount);
            var sound = new Audio("Sound/notificatonSound.wav"); // buffers automatically when created
            sound.play();
        }
        else if (parseInt(recievedCount) > parseInt(existingCount)) {
            $(".lblEventCount").html(recievedCount);
            $(".lblAcceptedCount").html(recievedCount);
            var sound = new Audio("Sound/notificatonSound.wav");
            sound.play();
        }
        else {
            $(".lblEventCount").html(existingCount);
            $(".lblAcceptedCount").html(existingCount);
            // var sound = new Audio("Sound/notificatonSound.wav");
            // sound.play();
        }
    });
}
setInterval(myFunction, 3000);
The sound plays fine now. But because setInterval() calls myFunction every three seconds, the beep keeps playing every three seconds even though, according to my if-else conditions, it shouldn't. It should play only when there are new updates.
I have used the following query:
public int getAcceptedCount(int CustomerID)
{
    string query = @"SELECT Status
                     FROM tblEventService
                     WHERE EventID IN (SELECT EventID FROM tblEvent WHERE (CustomerID = @cid)) AND (Status = 'Accepted' OR Status = 'Rejected')";
    List<SqlParameter> lstP = new List<SqlParameter>();
    lstP.Add(new SqlParameter("@cid", CustomerID));
    DataTable dt = DBUtility.SelectData(query, lstP);
    if (dt.Rows.Count > 0)
    {
        return dt.Rows.Count;
    }
    else
    {
        return 0;
    }
}
The query selects all the rows whose Status is Accepted or Rejected. So if there are any new Accepted or Rejected events, the count will increase. And thus the JS should play the notification sound!
What am I doing wrong here?
I think the problem in your code is this if statement: if (existingCount == ""). If your existingCount is supposed to be an integer you don't have to test this; otherwise, whenever your existingCount equals 0 your sound will be played.
But if you can sometimes get an empty string, then you can optimise your code by removing the first condition and parsing all the data before you start testing:
function myFunction() {
    $.get("AjaxServicesNoty.aspx", function (data) {
        var recievedCount = parseInt(data);
        var existingCount = parseInt($(".lblEventCount").text());
        if (recievedCount > existingCount) {
            $(".lblEventCount").html(recievedCount);
            $(".lblAcceptedCount").html(recievedCount);
            var sound = new Audio("Sound/notificatonSound.wav");
            sound.play();
        } else {
            $(".lblEventCount").html(existingCount);
            $(".lblAcceptedCount").html(existingCount);
        }
    });
}
setInterval(myFunction, 3000);
The browser is looking for the sound file at http://localhost:40825/EventManagement/Sound/notificationSound.wav. It sounds like the path is just wrong: the way you're referencing it in your JS looks for a folder named Sound inside the folder the JS file is located in (EventManagement).
Where exactly is this Sound folder located on your machine? If it's a sibling of the EventManagement folder, try new Audio("../Sound/notificatonSound.wav");. Otherwise, I'd just move that folder and audio inside the EventManagement folder :)
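If the Sound folder actually sits at the site root (an assumption about your layout), a root-relative path sidesteps the relative-resolution question entirely:
// Assumes /Sound/notificatonSound.wav exists at the site root.
// A leading slash resolves from the root, regardless of where the JS file lives.
var sound = new Audio("/Sound/notificatonSound.wav");
sound.play();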
My Chrome app has a function that asks for a file to be loaded by another function, checks that the function has set a flag signifying success (External.curFile.lodd), then attempts to process it. My problem is that the flags are not set the first time I call the function, but when I call it a second time the flags are already set.
I had a feeling this has to do with Chrome file functions being asynchronous, so I had the first function idle for a bit while the file loads. The first load never succeeds, no matter how long I wait, but the second load always does!
Calling Function:
function load_by_lines_from_cur_dir(fileName, context) { // determine the 'meaning' of a file line by line, return last 'meaning', otherwise 'null'
    var curLineMeaning = null;
    var lastLineValid = true;
    External.read_file_in_load_path(fileName); // 'External' loads 'fileName' and reads lines, REPLacement does not see this file
    // This is a dirty workaround that accounts for the fact that 'DirectoryEntry.getFile' is asynchronous, thus pre-parsing checks fail until loaded
    var counter = 0, maxLoops = 10;
    nuClock();
    do {
        sleep(500);
        counter++;
        preDebug.innerText += '\r\nLoop:' + counter + " , " + time_since_last();
    } while (!External.curFile.lodd && (counter < maxLoops)); // idle and check if file loaded, 5000ms max
    preDebug.innerText += '\r\nLoaded?:' + External.curFile.lodd;
    preDebug.innerText += '\r\nLines?:' + External.curFile.lins;
    if (External.curFile.lodd) { // The last load operation was successful, attempt to parse and interpret each line
        // parse and interpret lines, storing each meaning in 'curLineMeaning', until last line is reached
        while (!External.curFile.rEOF) {
            curLineMeaning = meaning(s(External.readln_from_current_file()), context);
            preDebug.innerText += '\r\nNext Line?: ' + External.curFile.lnnm;
            preDebug.innerText += '\r\nEOF?: ' + External.curFile.rEOF;
        }
    } // else, return 'null'
    return curLineMeaning; // return the result of the last form
}
which calls the following:
External.read_file_in_load_path = function(nameStr) { // Read the lines of 'nameStr' into 'External.curFile.lins'
    External.curPath.objt.getFile( // call 'DirectoryEntry.getFile' to fetch a file in that directory
        nameStr,
        {create: false},
        function(fileEntry) { // action to perform on the fetched file, success
            External.curFile.name = nameStr; // store the file name for later use
            External.curFile.objt = fileEntry; // store the 'FileEntry' for later use
            External.curFile.objt.file(function(file) { // Returns 'File' object associated with selected file. Use this to read the file's content.
                var reader = new FileReader();
                reader.onload = function(e) {
                    External.curFile.lodd = true; // File load success
                };
                reader.onloadend = function(e) {
                    //var contents = e.target.result;
                    // URL, split string into lines: http://stackoverflow.com/questions/12371970/read-text-file-using-filereader
                    External.curFile.lins = e.target.result.split('\n'); // split the string result into individual lines
                };
                reader.readAsText(file);
                External.curFile.lnnm = 0; // Set current line to 0 for the newly-loaded file
                External.curFile.rEOF = false; // Reset EOF flag
                // let's try a message instead of a flag ...
                /*chrome.runtime.sendMessage({greeting: "hello"}, function(response) {
                    console.log(response.farewell);
                });*/
            });
        },
        function(e) { External.curFile.lodd = false; } // There was an error
    );
};
This app is a dialect of Scheme. It's important that the app knows whether the source file has been loaded or not.
I didn't read through all of your code, but you can't kick off an asynchronous activity and then busy-wait for it to complete, because JavaScript is single threaded. No matter what's happened, the asynchronous function won't be executed until the script completes its current processing. In other words, asynchronous does not imply concurrent.
Generally speaking, if task A is to be performed after asynchronous task B completes, you should execute A from the completion callback for B. That's the straightforward, safe way to do it. Any shortcut, to achieve better responsiveness or to simplify the code, is going to have dependency or race-condition problems, and will require lots of horsing around to get right. Even then, it will be hard to prove that the code operates correctly on all platforms in all circumstances.
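As a rough sketch of what that looks like with the code in the question (the onDone callback parameter is my addition), read_file_in_load_path could accept the follow-up work as a callback and invoke it once the file content is actually available:
// Sketch only: pass the continuation in as a callback instead of polling a flag.
External.read_file_in_load_path = function(nameStr, onDone) {
    External.curPath.objt.getFile(nameStr, {create: false},
        function(fileEntry) {
            fileEntry.file(function(file) {
                var reader = new FileReader();
                reader.onload = function(e) {
                    External.curFile.lins = e.target.result.split('\n');
                    External.curFile.lnnm = 0;
                    External.curFile.rEOF = false;
                    External.curFile.lodd = true;
                    onDone(true); // the file is ready; parsing can start now
                };
                reader.onerror = function(e) {
                    External.curFile.lodd = false;
                    onDone(false);
                };
                reader.readAsText(file);
            });
        },
        function(e) {
            External.curFile.lodd = false;
            onDone(false); // getFile failed
        }
    );
};

// The caller moves its parsing loop into the callback, with no busy-wait:
External.read_file_in_load_path(fileName, function(loaded) {
    if (loaded) {
        while (!External.curFile.rEOF) {
            curLineMeaning = meaning(s(External.readln_from_current_file()), context);
        }
    }
});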
I've been messing with the YouTube Javascript API recently, but I've run into a problem (as always!).
I need a function to return an array of information about a video:
[0: Title, 1: Description, 2: Publish Date, 3: Thumbnail URL]
The function takes the id of a video then does a video.list with that id. Here is that function:
function getVidInfo(VidId) {
    var vidRequest;
    var vidRequestResponse;
    var returnArray = new Array(4);
    vidRequest = gapi.client.youtube.videos.list({
        part: 'snippet',
        id: VidId
    });
    vidRequest.execute(function(response) {
        if (response.pageInfo.totalResults != 0) {
            returnArray[0] = response.result.items[0].snippet.title;
            returnArray[1] = response.result.items[0].snippet.description;
            returnArray[2] = response.result.items[0].snippet.publishedAt;
            // Check for HD thumbnail
            if (response.result.items[0].snippet.thumbnails.maxres.url) {
                returnArray[3] = response.result.items[0].snippet.thumbnails.maxres.url;
            }
            else {
                returnArray[3] = response.result.items[0].snippet.thumbnails.standard.url;
            }
            console.log(returnArray); // Will log desired return array
        }
    });
    return returnArray; // Not returning desired array
}
As you can see from the comments, the array is being set correctly; however, the function is not returning that value.
What have I tried?
Using an external variable being set from a function called in vidRequest.execute()
Returning from vidRequest.execute()
Putting response into a variable and then assigning the array (Gave me an error about pageInfo being undefined)
Notes
It appears to be an asynchronous problem
I need to keep getVidInfo()
It definitely gets called when the Google API loads
It appears to work on the initial load, but a refresh breaks it
All info is logged to the console at the moment
Full Code
index.html
<!DOCTYPE html>
<html>
<head>
    <title>YT Test</title>
    <!--My Scripts-->
    <script src="test.js"></script>
</head>
<body>
    <!-- Load google api last -->
    <script src="https://apis.google.com/js/client.js?onload=googleApiClientReady"></script>
</body>
</html>
test.js
var apiKey = "[YOUR API KEY]"; // I did set this to my API key
var latestVidUrl;
var request;
var vidId;
var vidInfo;

function googleApiClientReady() {
    console.log("Google api loaded");
    gapi.client.setApiKey(apiKey);
    gapi.client.load('youtube', 'v3', function() {
        request = gapi.client.youtube.search.list({
            part: 'id',
            channelId: 'UCOYWgypDktXdb-HfZnSMK6A',
            maxResults: 1,
            type: 'video',
            order: 'date'
        });
        request.execute(function(response) {
            if (response.pageInfo.totalResults != 0) {
                vidId = response.result.items[0].id.videoId;
                //console.log(vidId);
                vidInfo = getVidInfo(vidId);
                console.log(vidInfo);
            }
        });
    });
}

function getEmbedCode(id) {
    var baseURL = "http://www.youtube.com/watch?v=";
    return baseURL + id.toString();
}

function getVidInfo(VidId) {
    var vidRequest;
    var vidRequestResponse;
    var returnArray = new Array(4);
    vidRequest = gapi.client.youtube.videos.list({
        part: 'snippet',
        id: VidId
    });
    vidRequest.execute(function(response) {
        if (response.pageInfo.totalResults != 0) {
            returnArray[0] = response.result.items[0].snippet.title;
            returnArray[1] = response.result.items[0].snippet.description;
            returnArray[2] = response.result.items[0].snippet.publishedAt;
            // Check for HD thumbnail
            if (response.result.items[0].snippet.thumbnails.maxres.url) {
                returnArray[3] = response.result.items[0].snippet.thumbnails.maxres.url;
            }
            else {
                returnArray[3] = response.result.items[0].snippet.thumbnails.standard.url;
            }
            console.log(returnArray); // Will log desired return array
        }
    });
    return returnArray; // Not returning desired array
}
I think the problem is that you are returning returnArray before it has been filled. To clarify: even though you have return returnArray at the end, the actual request is still being handled while the code keeps going. So when the response finally arrives and the callback handles it, it writes the data correctly to the log, but the function has already returned returnArray earlier. Without testing whether this works, you could probably just add a polling loop to wait until returnArray is not null, as long as you never expect it to be null. Maybe something like:
while (returnArray == null) {
    ;
}
return returnArray;
I'll just edit this to clarify what I mean:
function getVidInfo(VidId) {
    var vidRequest;
    var vidRequestResponse;
    var returnArray = new Array(4);
    vidRequest = gapi.client.youtube.videos.list({
        part: 'snippet',
        id: VidId
    });
    vidRequest.execute(function(response) {
        if (response.pageInfo.totalResults != 0) {
            returnArray[0] = response.result.items[0].snippet.title;
            returnArray[1] = response.result.items[0].snippet.description;
            returnArray[2] = response.result.items[0].snippet.publishedAt;
            // Check for HD thumbnail
            if (response.result.items[0].snippet.thumbnails.maxres.url) {
                returnArray[3] = response.result.items[0].snippet.thumbnails.maxres.url;
            }
            else {
                returnArray[3] = response.result.items[0].snippet.thumbnails.standard.url;
            }
            console.log(returnArray); // Will log desired return array
        }
    });
    while (returnArray == null) { // Create busy loop to wait for value
        ;
    }
    return returnArray;
}
The execute function is asynchronous; thus, it hasn't completed by the time you return the returnArray array, and so an empty array gets sent back instead. (If you have the console open you'll see that's the case: the empty array comes back and gets logged, and then a second or so later the logging within the callback happens.) This is one of the biggest obstacles in asynchronous programming, and with the YouTube APIs it used to be that the only way around it was to nest your callbacks in multiple levels (i.e. not have it as a separate function that returns a value), or what I like to affectionately term callback inception. You could go that route (moving all the code from your getVidInfo function up into the callback of the request where you get the ID), but that gets very messy. Luckily, the API client very recently introduced features that make solving this problem a whole lot easier: the gapi client is now Promises/A+ compliant.
So basically, all request objects can now return a Promise object instead of utilizing a callback function, and you can chain them all together so they all get processed and resolved in the order you need them to. Note that this promise object very slightly changes the JSON structure of the response packet: parameters such as pageInfo are children of the result attribute rather than siblings (you'll see in the sample code below what I mean). This will also greatly simplify your code, so you could do something like this:
var apiKey = "[YOUR API KEY]";

function googleApiClientReady() {
    console.log("Google api loaded");
    gapi.client.setApiKey(apiKey);
    gapi.client.load('youtube', 'v3', function() {
        var request = gapi.client.youtube.search.list({
            part: 'id',
            channelId: 'UCOYWgypDktXdb-HfZnSMK6A',
            maxResults: 1,
            type: 'video',
            order: 'date'
        }).then(function(response) {
            if (response.result.pageInfo.totalResults != 0) { // note how pageInfo is now a child of response.result ... this is because the promise object is structured a bit differently
                return response.result.items[0].id.videoId;
            }
        }).then(function(vidId) {
            return gapi.client.youtube.videos.list({
                part: 'snippet',
                id: vidId
            });
        }).then(function(response) {
            var returnArray = Array();
            if (response.result.pageInfo.totalResults != 0) {
                returnArray[0] = response.result.items[0].snippet.title;
                returnArray[1] = response.result.items[0].snippet.description;
                returnArray[2] = response.result.items[0].snippet.publishedAt;
                // Check for HD thumbnail
                if (response.result.items[0].snippet.thumbnails.maxres.url) {
                    returnArray[3] = response.result.items[0].snippet.thumbnails.maxres.url;
                }
                else {
                    returnArray[3] = response.result.items[0].snippet.thumbnails.standard.url;
                }
            }
            return returnArray;
        }).then(function(returnArray) {
            console.log(returnArray);
        });
    });
}
This architecture also greatly helps with error handling: you can construct additional anonymous functions to pass as the 2nd argument in each then call, to be executed when the API throws an error of some sort. Because each of the calls returns a promise, you can, in the final call, use returnArray however you need, and it will wait until all the pieces are resolved before executing.
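For instance, a sketch of a rejection handler on a single call (the shape of reason here is an assumption, not a documented gapi format):
// Illustrative only: the 2nd argument to then() runs if the call fails.
gapi.client.youtube.videos.list({
    part: 'snippet',
    id: 'SOME_VIDEO_ID' // hypothetical id
}).then(function(response) {
    console.log(response.result.items[0].snippet.title);
}, function(reason) {
    console.log('videos.list failed:', reason);
});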