I am working on a project where my Electron App interacts with a physical device using serial commands, via serialport. The app sends a string to the device, the device executes the command (which can take ~30s) and then sends back a string to signify completion and results from that operation.
My goal is to automate a series of actions. For that, basically the following needs to be done asynchronously, so that the render thread doesn't get blocked:
Start a loop
Send a string to the device
Wait until a specific response comes back
Tell the render thread about the response, so it can update the UI
Afterwards, repeat with the next string.
Actually, multiple different commands need to be sent in each loop cycle, and between each one the app has to wait for a specific string from the device.
This is kind of related to my last question, What's the correct way to run a function asynchronously in Electron?. From that, I know I should use web workers to run something asynchronously. However, my plan turned out to involve more problems than I anticipated, and I wanted to ask what would be a good way to implement this, having the whole plan in mind and not just a certain aspect of it.
I am especially not sure how to make the worker work with serialport. The serial device it needs to interact with is a child of the render process, so sending commands will probably be done over web worker messages. But I have no idea how to make the worker wait for a specific response from the device.
(Since this question is of a more general nature, I am unsure whether I should provide some code snippets. If this is too general, I can try to write some pseudocode to make my problem clearer.)
I would go for a promise-based approach like this:
let promiseChain = Promise.resolve();

// "event" stands for whatever emitter delivers the responses from the device.
const waitForEvent = function () {
    return new Promise(resolve => {
        event.once("someEvent", eventData => {
            resolve(eventData);
        });
    });
};

while (someLoopCondition) {
    promiseChain = promiseChain
        .then(() => sendToSerialPort(someString)) // wrap in a function so it runs in sequence, not immediately
        .then(waitForEvent)
        .then(result => {
            updateUI(result);
        });
}
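If you're using node-serialport, waitForEvent can listen on the port's parser instead of a generic emitter. Here's a rough sketch of how I'd wire it up, assuming a recent serialport with the ReadlineParser; the device path, baud rate, delimiter and the "expected" completion string are placeholders you'd replace with your own:

const { SerialPort } = require('serialport');
const { ReadlineParser } = require('@serialport/parser-readline');

const port = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 9600 });
const parser = port.pipe(new ReadlineParser({ delimiter: '\r\n' }));

// Resolves once the command has been written to the device.
function sendToSerialPort(command) {
    return new Promise((resolve, reject) => {
        port.write(command + '\r\n', err => (err ? reject(err) : resolve()));
    });
}

// Resolves with the first line from the device that contains `expected`.
function waitForResponse(expected) {
    return new Promise(resolve => {
        const onLine = line => {
            if (line.includes(expected)) {
                parser.off('data', onLine);
                resolve(line);
            }
        };
        parser.on('data', onLine);
    });
}

Each loop iteration then becomes sendToSerialPort(command) followed by waitForResponse(whatever completion string your device sends), chained exactly like above.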
This is directly related to the answer in Google Apps Script - possible charts types.
I am trying to extend the top answer by deploying it as a webapp instead of an add-on, and also to pass URL arguments to the app script.
Everything is exactly the same as the linked example above, except that I stripped out the addon code and put in the most basic webapp code by adding a doGet(e) function.
/*
//if I manually specify the values in the script, it works fine
var sheetRange = "A1:D20"; // standard range to gather data
var sheetTabName = "Sheet1"; //name of the tab in the spreadsheet to look for. must be unique
var spreadsheetId = '1CKQTQYXgt3YgnUXu0YHFeMcG5sMh99sj293oKRFVp4M'; //spreadsheet ID
*/

var sheetRange;
var sheetTabName;
var spreadsheetId;

function doGet(e) {
    //but if I try to load the arguments from the URL, it doesn't work
    //these values never get set here
    sheetRange = e.parameter.sheetRange;
    sheetTabName = e.parameter.sheetTabName;
    spreadsheetId = e.parameter.spreadsheetId;
    Logger.log("This never gets run %s %s %s", sheetRange, sheetTabName, spreadsheetId);

    //but this template gets made
    var template = HtmlService.createTemplateFromFile('BubbleEx')
        .evaluate()
        .setSandboxMode(HtmlService.SandboxMode.IFRAME)
        .setWidth(800)
        .setHeight(600);
    Logger.log("Why doesn't this get printed at least?");

    //and returned
    return template;
}

function getSpreadsheetData() {
    Logger.log("This does get run!\nSpreadsheetId is: %s\nSheetRange is: %s\nSheetTabName is: %s", spreadsheetId, sheetRange, sheetTabName);
    var sheet = SpreadsheetApp.openById(spreadsheetId);
    var data = sheet.getSheetByName(sheetTabName).getRange(sheetRange).getValues();
    return (data.length > 1) ? data : null;
}
Clearly I'm missing something fundamental about the order of execution here. Something about the way the HTML interacts with the script causes it to finish before certain parts of Code.gs run. I'm really new to using GAS as a deployed web app, so any/all help is greatly appreciated. Thanks!
Here's the preformatted link (with the included arguments) I'm trying to use. The sheet is publicly viewable with a link:
https://script.google.com/a/macros/edmonton.ca/s/AKfycbxMbCG3p-zdoJReIS6jRHnLK3J-XsI1Zm_BFvfz_UQ/dev?spreadsheetId=1CKQTQYXgt3YgnUXu0YHFeMcG5sMh99sj293oKRFVp4M&sheetTabName=Sheet1&sheetRange=A1%3AD20
Visualization in a Web App
I expanded on the linked question in a blog post a while back, with a dashboard example as a web app. (The code for the dashboard is in the blog, I won't bother repeating it here.)
Logger usage
Comments you've left in your code imply that the conclusions you're making about what-has-run-when are based on whether or not logs have shown up. If only it were that easy!
Unfortunately, the Logger is an unreliable tool when used for debugging a web app or other asynchronous operations. Surprise! It's the subject of another blog post of mine.
The Logger can be extended by using the BetterLog library and a few simple utility functions, so that you can generate logs from the client side as well as from asynchronous server-side calls.
Why aren't those globals working?
Order of execution isn't the issue - rather it's about how global variables behave between execution instances.
When you've set spreadsheetId in your doGet() function, its content is available to the whole script, but only for the duration of that instance's execution. In the following diagram, I've illustrated the communication between a few of the pieces of your solution. Each asynchronous call to a Google Apps Script function creates a new, independent execution instance of your script. Each instance has its own copy of the script's global variables.
The upshot of this is that the spreadsheetId value you set in doGet() isn't available to getSpreadsheetData() when it is invoked by the google.script.run call in the client-side JavaScript. The variable exists as a symbol only - it isn't always the same piece of computer memory. (It might not even be on the same physical computer.)
If you want to "set" some "global" variables to survive between instances, you can use a persistent storage method such as the Properties Service. In your example, though, you would want to be careful with this; if two users were accessing the Web App at the same time, the last one in would overwrite values previously set by the earlier user.
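For completeness, a minimal sketch of what the Properties Service approach would look like (with the concurrency caveat above in mind; saveParams / loadParams are just illustrative helper names):

// Stores the URL parameters so another execution instance can read them.
// Note: concurrent users of the web app would overwrite each other here.
function saveParams(e) {
    PropertiesService.getScriptProperties().setProperties({
        spreadsheetId: e.parameter.spreadsheetId,
        sheetTabName: e.parameter.sheetTabName,
        sheetRange: e.parameter.sheetRange
    });
}

function loadParams() {
    return PropertiesService.getScriptProperties().getProperties();
}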
A more appropriate way to handle this would be to explicitly pass the "globals" via the html template. (If you create a new Google Apps Script using the demo "Web App" template, you'll see an example of this.)
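Applied to the code in your question, that idea looks roughly like this (a sketch, not a drop-in replacement; I've kept your file and parameter names):

function doGet(e) {
    var template = HtmlService.createTemplateFromFile('BubbleEx');
    // Attach the URL parameters to the template so the client side can read them.
    template.spreadsheetId = e.parameter.spreadsheetId;
    template.sheetTabName = e.parameter.sheetTabName;
    template.sheetRange = e.parameter.sheetRange;
    return template.evaluate()
        .setSandboxMode(HtmlService.SandboxMode.IFRAME);
}

// The server function now receives the values as arguments instead of globals.
function getSpreadsheetData(spreadsheetId, sheetTabName, sheetRange) {
    var sheet = SpreadsheetApp.openById(spreadsheetId);
    var data = sheet.getSheetByName(sheetTabName).getRange(sheetRange).getValues();
    return (data.length > 1) ? data : null;
}

Inside BubbleEx.html, printing scriptlets hand the values back to the server call, something like google.script.run.withSuccessHandler(yourHandler).getSpreadsheetData('<?= spreadsheetId ?>', '<?= sheetTabName ?>', '<?= sheetRange ?>');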
I have a program written in C++ that reads values from this board. Anyway, that part is not important. What I have is data that is constantly changing, and I would like to graph that data. I was hoping to use a web browser to display it, since there are so many open-source graphs and charts out there written in JavaScript. So my problem is how to send data to the browser from my C++ program.
I already investigated this: UDP is not available in browsers yet, so I will have to use TCP. TCP WebSockets are not that fast, so I was thinking about using HTML5 localStorage instead. By that I mean having my C++ program write to the localStorage database while JavaScript waits for that value to appear, inventing some sort of protocol to make that work. localStorage is really fast, for example:
<script type="text/javascript">
    var counter = 0;
    window.onload = function () {
        function Test() {
            counter++;
            localStorage.p = counter + ""; // perform write
            var read = localStorage.p;     // perform read
            if (read == "5000")
                alert((new Date() - now)); // shows 45
            else
                Test();                    // loop again
        }
        var now = new Date();
        Test();
    }
</script>
That script takes 54 milliseconds, and it reads AND writes 5000 times! So instead of creating a plug-in for the browser, next time I will just implement some sort of protocol that exchanges information through localStorage. For example, I could have the browser wait for a variable x to exist. Once it exists, the browser creates a variable y to notify the C++ program that it is ready to receive data, and so on. localStorage is just a SQLite database located at C:\Users\[USER]\AppData\Local\Google\Chrome\User Data\Default\Local Storage.
I haven't seen anyone online use this approach. Maybe it is too dangerous, or maybe SQLite can't handle multiple threads that well and I would be wasting my time creating this program.
So should I start implementing this protocol? Should I use WebSockets? Or should I give https://stackoverflow.com/a/10219977/637142 a try?
I would go with Node.js as middleware between your C++ program and the browser. Instead of using WebSockets directly (been there, done that), go with http://socket.io/ - it will make your life much easier :)
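To make that concrete, here's a rough sketch of the middleware (assuming a current socket.io; the ports and the "reading" event name are made up): the C++ program writes lines to a local TCP socket, and Node relays each line to every connected browser.

const net = require('net');
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer, { cors: { origin: '*' } });
httpServer.listen(3000);

// Local TCP endpoint the C++ program connects to.
net.createServer(socket => {
    socket.on('data', chunk => {
        // Relay every reading to all connected browser clients.
        io.emit('reading', chunk.toString().trim());
    });
}).listen(4000, '127.0.0.1');

In the browser it's just socket.on('reading', value => updateChart(value)) with the socket.io client script, where updateChart is whatever feeds your charting library.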
I need users to be able to post data from a single page browser application (SPA) to me, but I can't put server-side code on the host.
Is there a web service that I can use for this? I looked at Amazon SQS (simple queue service) but I can't call their REST APIs from within the browser due to cross origin policy.
I favour ease of development over robustness right now, so even just receiving an email would be fine. I'm not sure that the site is even going to catch on. If it does, then I'll develop a server-side component and move hosts.
Not only are there web services, but nowadays there are robust systems that provide a way to run some server-side logic for your applications. They are called BaaS, or Backend as a Service, providers, and they usually provide a backbone for your front-end applications.
Although they have multiple uses, I'm going to list the most common in my opinion:
For mobile applications - Instead of having to learn an API for each device you code for, you can use a standard platform to store logic and data for your application.
For prototyping - If you want to create a slick application but don't want to code all the backend logic for the data (and deal with all the operations and system administration that represents), a BaaS provider lets you build the simplest CRUD applications you can imagine with nothing but good front-end skills. Some BaaS even allow you to bind Reduce algorithms to the calls you perform against their API.
For web applications - When PaaS (Platform as a Service) came to town to ease the job of backend developers by removing the hassle of system administration and operations, it was only logical that the same would eventually happen to the backend itself. There are many clones that showcase the real power of this strategy.
All of this is amazing, but I have yet to name any of them. I'm going to list the ones I know best and have actually used in projects. There are probably many more, but as far as I know, these have satisfied most of my needs for whichever of the uses mentioned above.
Parse.com
Parse's most outstanding features target mobile devices; however, nowadays Parse offers an incredible number of APIs that let you use it as a full-featured backend service for JavaScript, Android and even Windows 8 applications (the Windows 8 SDK was introduced a few months ago this year).
What does Parse code look like in JavaScript?
Parse works through classes and objects (ain't that beautiful?), so you first create a specific class (this can be done through JavaScript, REST or even the Data Browser manager) and then add objects to that class.
First, add Parse as a script tag:
<script type="text/javascript" src="http://www.parsecdn.com/js/parse-1.1.15.min.js"></script>
Then, through a given Application ID and a Javascript Key, initialize Parse.
Parse.initialize("APPLICATION_ID", "JAVASCRIPT_KEY");
From there, it's all object manipulation
var Person = Parse.Object.extend("Person"); // Person is a class *cof* uppercase *cof*
var personObject = new Person();
personObject.save({name: "John"}, {
    success: function(object) {
        console.log("The object with the data " + JSON.stringify(object) + " was saved successfully.");
    },
    error: function(model, error) {
        console.log("There was an error! The following model and error object were provided by the Server");
        console.log(model);
        console.log(error);
    }
});
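If you later want to read those submissions back (say, from a private admin page), a query over the same class looks roughly like this; this is sketched against the classic Parse JavaScript SDK shown above:

var Person = Parse.Object.extend("Person");
var query = new Parse.Query(Person);
query.equalTo("name", "John"); // optional filter; remove it to fetch everything
query.find({
    success: function(results) {
        // results is an array of Parse.Object instances
        results.forEach(function(person) {
            console.log("Found: " + person.get("name"));
        });
    },
    error: function(error) {
        console.log("Query failed: " + error.message);
    }
});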
What about authentication and security?
Parse has a User-based authentication system, which pretty much allows you to store a base of users that can manipulate the data. If you tie the data to User information, you can ensure that only a given user can manipulate specific data. Plus, in the settings of your Parse application, you can specify that no clients are allowed to create classes, to ensure no unnecessary calls are performed.
Did you REALLY use it in a web application?
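A rough idea of what the User system looks like, again sketched against the classic SDK (the credentials are placeholders):

var user = new Parse.User();
user.set("username", "fred");
user.set("password", "aSecretPassword");
user.set("email", "fred@example.com");

user.signUp(null, {
    success: function(user) {
        // The user is now logged in; subsequent saves can be restricted to this user.
        console.log("Signed up as " + user.get("username"));
    },
    error: function(user, error) {
        console.log("Sign-up failed: " + error.message);
    }
});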
Yes, it was my tool of choice for a medium fidelity prototype.
Firebase.com
Firebase's main feature is the ability to add Real Time behavior to your application without all the hassle. You don't need a MeteorJS server to bring push notifications to your software. If you know JavaScript, you are halfway to bringing Real Time magic to your users.
What does Firebase code look like in JavaScript?
Firebase works in a REST fashion, and I think they do an amazing job structuring the Glory of REST. As a good example, look at the following Resource structure in Firebase:
https://SampleChat.firebaseIO-demo.com/users/fred/name/first
You don't need to be a rocket scientist to see that this retrieves the first name of the user "Fred", given there's at least one -usually there would be a UUID instead of a name, but hey, it's an example, give me a break-.
In order to start using Firebase, as with Parse, add their CDN JavaScript:
<script type='text/javascript' src='https://cdn.firebase.com/v0/firebase.js'></script>
Now, create a reference object that will allow you to consume the Firebase API
var myRootRef = new Firebase('https://myprojectname.firebaseIO-demo.com/');
From there, you can create a bunch of neat applications.
var USERS_LOCATION = 'https://SampleChat.firebaseIO-demo.com/users';
var userId = "Fred"; // Username
var usersRef = new Firebase(USERS_LOCATION);
usersRef.child(userId).once('value', function(snapshot) {
    var exists = (snapshot.val() !== null);
    if (exists) {
        console.log("Username " + userId + " is part of our database");
    } else {
        console.log("We have no register of the username " + userId);
    }
});
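For your use case (visitors posting data to you), a write is just as short. This is a sketch against the same early Firebase JavaScript API; the location and field names are made up:

var submissionsRef = new Firebase('https://myprojectname.firebaseIO-demo.com/submissions');

// push() creates a chronologically ordered child with a unique ID,
// so concurrent users never overwrite each other's submissions.
submissionsRef.push({
    name: "Fred",
    message: "Hello from the SPA",
    postedAt: new Date().toISOString()
});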
What about authentication and security?
You are in luck! Firebase released their Security API about two weeks ago! I have yet to explore it, but I'm sure it fills most of the gaps that allowed random people to use your reference for their own purposes.
Did you REALLY use it in a web application?
Eeehm... ok, no. I used it in a Chrome Extension! It's still in progress, but it's going to be a Real Time chat inside a Chrome Extension. Ain't that cool? Fine. I find it cool. Anyway, you can browse more awesome Firebase examples on their examples page.
What's the magic of these services? If you've read up on Dependency Injection and Mock Object Testing, you'll know that at some point you could completely replace either of these services with your own REST web service provider.
Since these services were created to be used inside any application, they are CORS ready. As stated before, I have successfully used both of them from multiple domains without any issue (I'm even trying to use Firebase in a Chrome Extension, and I'm sure I will succeed soon).
Both Parse and Firebase have Data Browser managers, which means that you can see the data you are manipulating through a simple web browser. As a final disclaimer, I have no relationship with either of those services other than the fact that James Taplin (Firebase Co-founder) was amazing enough to lend me some Beta access to Firebase.
You actually CAN use SQS from the browser, even without CORS, as long as you only need the browser to send messages, not receive them. Warning: this is a kludge that would make my CS professors cry.
When you perform a GET request via JavaScript, the browser will always send the request; however, you'll only get access to the response if it came from the same origin (protocol, host, port). This is your ticket to ride, since messages can be posted to an SQS queue with just a GET, and who really cares about the response anyway?
Assuming you're using jquery, your queue is https://sqs.us-east-1.amazonaws.com/71717171/myqueue, and allows anyone to post a message, the following will post a message with the body "HITHERE" to the queue:
$.ajax({
    url: 'https://sqs.us-east-1.amazonaws.com/71717171/myqueue' +
         '?Action=SendMessage' +
         '&Version=2012-11-05' +
         '&MessageBody=HITHERE'
});
There'll be an error in the console saying that the request failed, but the message will show up in the queue anyway.
Have you considered JSONP? That is one way of calling cross-domain scripts from javascript without running into the same origin policy. You're going to have to set up some script somewhere to send you the data, though. Javascript just isn't up to the task.
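If you do go the JSONP route, the mechanics are roughly the following sketch; the endpoint URL and callback name are made up, and whichever service you point it at has to wrap its reply in that callback:

// The payload travels out as query parameters on a <script> tag, which is
// exempt from the same-origin policy; the endpoint replies with
// receiveAck({...}) so the browser gets an acknowledgement.
function sendViaJsonp(data) {
    window.receiveAck = function (response) {
        console.log('server acknowledged:', response);
    };
    var script = document.createElement('script');
    script.src = 'https://example.com/collect' +
        '?callback=receiveAck' +
        '&payload=' + encodeURIComponent(JSON.stringify(data));
    document.head.appendChild(script);
}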
Depending on what kind of data you want to send, and what you're going to do with it, one way of solving it would be to post the data to a Google Spreadsheet using Ajax. It's a bit tricky to accomplish, though. Here is another Stack Overflow question about it.
If presentation isn't that important you can just have an embedded Google Spreadsheet Form.
What about mailto:youremail@goeshere.com? ihihi
In the meantime, you could turn on a free host like Altervista or Heroku or something similar and connect to their servers; if I remember correctly, these free services let you run a sort of personal web service and push Ajax requests to it. Their servers are slow on free accounts, but I think that's enough if you don't have much user traffic; otherwise you should move to a better VPS, hosting or cloud solution.
Maybe CouchDB can provide what you're after. IrisCouch provides free CouchDB instances. Lock it down so that users can't view documents and have a sensible validation function and you've got yourself an easy RESTful place to stick your data in.
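Locking it down is mostly a matter of a validate_doc_update function in a design document. A minimal sketch of the idea (the required "payload" field is just an example):

// Rejects edits/deletes and any new document without a payload field.
function (newDoc, oldDoc, userCtx) {
    if (oldDoc) {
        throw({ forbidden: 'Existing documents cannot be modified.' });
    }
    if (!newDoc.payload) {
        throw({ forbidden: 'A payload field is required.' });
    }
}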
I have a dashboard screen that needs to make about 20 AJAX requests on load, each returning a different statistics. In total, it takes about 10 seconds for all the requests to come back. However during that 10 seconds, the UI is pretty much locked.
I recall reading a JS book by Nick Zakas that described techniques for maintaining UI responsiveness during intensive operations (using timers). I'm wondering if there is a similar technique for dealing with my situation?
*I'm trying to avoid combining the AJAX calls for a number of reasons
$(".report").each(function(){
var container = $(this)
var stat = $(this).attr('id')
var cache = db.getItem(stat)
if(cache != null && cacheOn)
{
container.find(".value").html(cache)
}
else
{
$.ajax({
url: "/admin/" + stat,
cache: false,
success: function(value){
container.find(".value").html(value.stat)
db.setItem(stat, value.stat);
db.setItem("lastUpdate", new Date().getTime())
}
});
}
})
If you have access to jQuery, you can utilize the $.Deferred object to make multiple async calls simultaneously and perform a callback when they all resolve.
http://api.jquery.com/category/deferred-object/
http://api.jquery.com/deferred.promise/
If each of these callbacks are making modifications to the DOM, you should store the changes in some temporary location (such as in-memory DOM objects) and then append them all at once. DOM manipulation calls are very time consuming.
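Applied to the code in your question, the idea looks roughly like this (a sketch that skips your caching branch; the "/admin/<stat>" URLs and ".report" markup are yours, the rest is illustrative): fire all the requests at once, stash each result, and touch the DOM in a single pass once everything has settled.

var requests = $(".report").map(function () {
    var container = $(this);
    var stat = this.id;
    return $.ajax({ url: "/admin/" + stat, cache: false })
        .done(function (value) {
            // Stash the result on the element; don't write to the DOM yet.
            container.data("pendingValue", value.stat);
        });
}).get();

$.when.apply($, requests).always(function () {
    // Single pass over the DOM after every call has settled.
    $(".report").each(function () {
        var container = $(this);
        container.find(".value").html(container.data("pendingValue"));
    });
});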
I've had similar problems working heavily with SharePoint web services - you often need to pull data from multiple sources to generate input for a single process.
To solve it I embedded this kind of functionality into my AJAX abstraction library. You can easily define a request which will trigger a set of handlers when complete. However each request can be defined with multiple http calls. Here's the component (and detailed documentation):
DPAJAX at DepressedPress.com
This simple example creates one request with three calls and then passes that information, in the call order, to a single handler:
// The handler function
function AddUp(Nums) { alert(Nums[1] + Nums[2] + Nums[3]) };
// Create the pool
myPool = DP_AJAX.createPool();
// Create the request
myRequest = DP_AJAX.createRequest(AddUp);
// Add the calls to the request
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [5,10]);
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [4,6]);
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [7,13]);
// Add the request to the pool
myPool.addRequest(myRequest);
Note that unlike many of the other solutions provided this method does not force single threading of the calls being made - each will still run as quickly (or as slowly) as the environment allows but the single handler will only be called when all are complete. It also supports the setting of timeout values and retry attempts if your service is a little flakey.
In your case you could make a single request (or group related requests - for example a quick "most needed" request and a longer-running "nice to have" request) to call all your data and display it all at the same time (or in chunks if multiple requests) when complete. You can also specifically set the number of background objects/threads to utilize which might help with your performance issues.
I've found it insanely useful (and incredibly simple to understand from a code perspective). No more chaining, no more counting calls and saving output. Just "set it and forget it".
Oh - concerning your lockups - are you, by any chance, testing this on a local development platform (running the requests against a server on the same machine as the browser)? If so it may simply be that the machine itself is working on your requests and not at all indicative of an actual browser issue.