How can I slow down readings I get from a QR Scanner? - javascript

This is my first post, greetings to readers.
I'm fairly new to coding, and I've got this code implemented on my frontend; a successful scan sends a GET request to my Python API to fetch data from the database. But this script scans the QR code a few times a second (and not only that, it submits each time too).
So my question is: how could I slow it down a bit, say with a 2-second timeout after a successful scan?
function onScanSuccess(decodedText, decodedResult) {
  // Handle on success condition with the decoded text or result.
  console.log(`Scan result: ${decodedText}`, decodedResult);
  $('#search').val(decodedText);
  $('#frm').submit();
}

var html5QrcodeScanner = new Html5QrcodeScanner(
  "reader", { fps: 10, qrbox: 250 });
html5QrcodeScanner.render(onScanSuccess);
Edit: I should have said that I didn't write this, and I have no idea how to do timeouts in TypeScript or JavaScript, or even where to put one.
Thank you for your time :)

This is taken directly from the Html5QrcodeScanner example. On success, it will update the result, and if there's no new result scanned, it won't update the result:
var resultContainer = document.getElementById('qr-reader-results');
var lastResult, countResults = 0;

function onScanSuccess(decodedText, decodedResult) {
  if (decodedText !== lastResult) {
    ++countResults;
    lastResult = decodedText;
    // Handle on success condition with the decoded message.
    console.log(`Scan result ${decodedText}`, decodedResult);
  }
}

var html5QrcodeScanner = new Html5QrcodeScanner(
  "qr-reader", { fps: 10, qrbox: 250 });
html5QrcodeScanner.render(onScanSuccess);
But this won't stop your device from scanning; it just won't update the result. From my understanding of your question that would be sufficient, but if you want to stop the camera/scanning process altogether after a successful scan, you can go into a slightly more advanced part of the library:
import { Html5Qrcode } from "html5-qrcode";

const html5QrCode = new Html5Qrcode("reader");
// Note: stop() only succeeds on a scanner that was previously started.
html5QrCode.stop().then((ignore) => {
  // QR Code scanning is stopped.
}).catch((err) => {
  // Stop failed, handle it.
});
Doing this also means that you need to implement the whole process yourself in "pro mode"; you can refer to the library's pro mode documentation for that.
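To get the 2-second cooldown the question asks for without stopping the camera, a time-based guard around the success callback is usually enough. This is a minimal sketch, not part of the library's API; the 2000 ms value comes from the question, and `makeThrottledHandler` is an invented helper name:

```javascript
// Wrap a scan handler so results arriving less than `cooldownMs`
// after the last accepted one are silently ignored.
function makeThrottledHandler(handler, cooldownMs) {
  let lastAccepted = 0;
  return function (decodedText, decodedResult) {
    const now = Date.now();
    if (now - lastAccepted < cooldownMs) return; // still cooling down
    lastAccepted = now;
    handler(decodedText, decodedResult);
  };
}

// Usage with the scanner from the question:
// html5QrcodeScanner.render(makeThrottledHandler(onScanSuccess, 2000));
```

The form is still submitted on the first scan; repeat detections within the cooldown window simply do nothing.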

Related

How to efficiently stream a real-time chart from a local data file

Complete noob picking up Node.js over the last few days here, and I've gotten myself into big trouble, it looks like. I currently have a working Node.js + Express server instance, running on a Raspberry Pi, acting as a web interface for a local data acquisition script ("the DAQ"). When executed, the script writes data to a local file on the Pi in .csv format, appending in real time every second.
My Node app is a simple web interface to start (on-click) the data acquisition script, as well as to plot previously acquired data logs, and visualize the actively being collected data in real time. Plotting of old logs was simple, and I wrote a JS function (using Plotly + d3) to read a local csv file via AJAX call, and plot it - using this script as a starting point, but using the logs served by express rather than an external file.
When I went to translate this into a real-time plot, I started out using the setInterval() method to update the graph periodically, based on other examples. After dealing with a few unwanted recursion issues, and adjusting the interval to a more reasonable setting, I eliminated the memory/traffic issues which were crashing the browser after a minute or two, and things are mostly stable.
However, I need help with one thing primarily:
Improving the efficiency of my first-attempt approach: the acquisition script absolutely needs to write to file every second, but considering that a typical run might last 1-2 weeks, the file being requested on every interval loop will quickly balloon in size. I'm completely new to Node/Express, so I'm sure there's a much better way of doing the real-time rendering aspect of this; that's the real issue here. Any pointers toward a better way to go about this would be massively helpful!
Right now, the killDAQ() call issued by the "Stop" button kills the underlying python process writing out the data to disk. Is there a way to hook into using that same button click to also terminate the setInterval() loop updating the graph? There's no need for it to be updated any longer after the data acquisition has been stopped so having the single click do double duty would be ideal. I think that setting up a listener or res/req approach would be an option, but pointers in the right direction would be massively helpful.
(Edit: I solved #2, using global window. variables. It's a hack, but it seems to work:
window.refreshIntervalId = setInterval(foo);
...
clearInterval(window.refreshIntervalId);
)
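The `window.refreshIntervalId` hack from the edit works; a slightly tidier variant that avoids globals is to wrap the timer in a small controller object. A sketch (the names `makePoller`, `start`, and `stop` are illustrative, not from the original code):

```javascript
// Keep the interval ID in a closure so start/stop logic lives in
// one place instead of on the window object.
function makePoller(fn, intervalMs) {
  let id = null;
  return {
    start() {
      if (id === null) id = setInterval(fn, intervalMs);
    },
    stop() {
      if (id !== null) { clearInterval(id); id = null; }
    },
    running() { return id !== null; }
  };
}

// Usage: start the poller when the DAQ starts, and call
// poller.stop() inside killDAQ() so one click does double duty.
// const poller = makePoller(() => callPlotly('log.csv'), 5000);
```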
Thanks so much for the help!
MWE:
html (using Pug as a template engine):
doctype html
html
  body.default
    .container-fluid
      .row
        .col-md-5
          .row.text-center
            .col-md-6
              button#start_button(type="button", onclick="makeCallToDAQ()") Start Acquisition
            .col-md-6
              button#stop_button(type="button", onclick="killDAQ()") Stop Acquisition
        .col-md-7
          #myDAQDiv(style='width: 980px; height: 500px;')
javascript (start/stop acquisition):
function makeCallToDAQ() {
  fetch('/start_daq', {
    // call to app to start the acquisition script
  })
  .then(console.log(dateTime))
  .then(function(response) {
    console.log(response)
    setInterval(function(){ callPlotly(dateTime.concat('.csv')); }, 5000);
  });
}

function killDAQ() {
  fetch('/stop_daq')
    // kills the process
    .then(function(response) {
      // Use the response sent here
      alert('DAQ has stopped!')
    })
}
javascript (call to Plotly for plotting):
function callPlotly(filename) {
  var csv_filename = filename;
  console.log(csv_filename)

  function makeplot(csv_filename) {
    // Read data via AJAX call and grab header names
    var headerNames = [];
    d3.csv(csv_filename, function(error, data) {
      headerNames = d3.keys(data[0]);
      processData(data, headerNames)
    });
  };

  function processData(allRows, headerNames) {
    // Plot data from relevant columns
    var plotDiv = document.getElementById("plot");
    var traces = [{
      x: x,
      y: y
    }];
    Plotly.newPlot('myDAQDiv', traces, plotting_options);
  };

  makeplot(filename);
}
node.js (the actual Node app):
// Start the DAQ
app.use(express.json());
var isDaqRunning = true;
var pythonPID = 0;
const { spawn } = require('child_process')
var process;

app.post('/start_daq', function(req, res) {
  isDaqRunning = true;
  // Call the python script here.
  const process = spawn('python', ['../private/BIC_script.py', arg1, arg2])
  pythonPID = process.pid;
  process.stdout.on('data', (myData) => {
    res.send("Done!")
  })
  process.stderr.on('data', (myErr) => {
    // If anything gets written to stderr, it'll be in the myErr variable
  })
  res.status(200).send(); //.json(result);
})

// Stop the DAQ
app.get('/stop_daq', function(req, res) {
  isDaqRunning = false;
  process.on('close', (code, signal) => {
    console.log(
      `child process terminated due to receipt of signal ${signal}`);
  });
  // Send SIGTERM to process
  process.kill('SIGTERM');
  res.status(200).send();
})

How to manually insert js code on each load of page via the console of the browser

I am preparing JavaScript code that shows a random number to the user, as follows: if the user spends more than two minutes before moving to the next web page, or if the current page has the GET parameter "&source", the random number is replaced by another one. Otherwise, the same random number is displayed on all the web pages.
The problem is that the JavaScript code has to be executed manually from the browser console on each page load: I need to prepare code that can be integrated into any web page from the console.
Is there any difference from the normal case (including the script with <script></script>)?
Thanks for posting! In future posts, please try to provide some code or an example of something you've tried previously.
Anyway, here is a brief example of a script that will check for an existing number, check whether a &source parameter is set, begin the timer if there isn't one, and generate a new number if the timer finishes or the parameter is set.
To save the information between pages, you should consider using window.localStorage. This will allow you to check for and save the number to be used on later loads.
Note that this snippet isn't going to work until you bring it into your own page. Also, as @Sorin-Vladu mentioned, you'll have to use a browser extension if you don't have access to modify the pages you're running the script on.
const timeout = 120000

// This can be replaced by your manual execution
window.onload = () => {
  start()
}

function start() {
  // Attempt to pull the code from storage
  let code = localStorage.getItem('code')
  console.log(code)
  // Get the URL parameters
  let urlParams = new URLSearchParams(window.location.search)
  // Check to see if the source parameter exists
  if (!urlParams.has('source')) {
    // If not, begin the timer
    setTimeout(() => {
      setCode()
    }, timeout)
  } else {
    setCode()
  }
}

function setCode() {
  const code = Math.floor(Math.random() * 1000000)
  localStorage.setItem('code', code)
  console.log(code)
}
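One caveat with the setTimeout approach above: the timer dies as soon as the user navigates away, so a two-minute stay spread across a navigation can be missed. A variant that survives page loads is to store a timestamp next to the code and compare it on the next load. This sketch shows only the decision logic; the function name is invented, and the two-minute constant comes from the question:

```javascript
const TWO_MINUTES = 120000;

// Decide whether a stored code can be reused.
// `stored` is { code, savedAt } parsed from localStorage (or null on
// first visit), `now` is Date.now(), `hasSource` says whether the
// &source parameter is present.
function nextCode(stored, now, hasSource) {
  const expired = !stored || now - stored.savedAt > TWO_MINUTES;
  if (hasSource || expired) {
    return { code: Math.floor(Math.random() * 1000000), savedAt: now };
  }
  return stored; // keep showing the same number
}

// On each load you would call nextCode(...) with the parsed
// localStorage entry and write the result back to localStorage.
```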

Is there any way to determine if a nodejs childprocess wants input or is just sending feedback?

I had a little free time, so I decided to rewrite all my bash scripts in JavaScript (Node.js, ES6) with child processes. Everything went smoothly until I wanted to automate user input.
Yes, you can automate the user input. But there is one problem: you can't determine whether a given data event is feedback or a request for input. At least, I can't find a way to do it.
So basically you can do this:
// new Spawn.
let spawn = require('child_process').spawn;
// new ufw process.
let ufw = spawn('ufw', ['enable']);

// Use defined input.
ufw.stdin.setEncoding('utf-8');
ufw.stdout.pipe(process.stdout);
ufw.stdin.write('y\n');

// Event Standard Out.
ufw.stdout.on('data', (data) => {
  console.log(data.toString('utf8'));
});

// Event Standard Error.
ufw.stderr.on('data', (err) => {
  // Log error.
  console.log(err);
});

// When job is finished (with or without error) it ends up here.
ufw.on('close', (code) => {
  // Check if there were errors.
  if (code !== 0) console.log('Exited with code: ' + code.toString());
  // End input stream.
  ufw.stdin.end();
});
The above example works totally fine, but there are two things giving me a headache:
Will ufw.stdin.write('y\n'); wait until it is needed, and what happens if I have multiple inputs? For example 'yes', 'yes', 'no'. Do I have to write three lines of stdin.write()?
Isn't the position where I use ufw.stdin.write('y\n'); a little confusing? I thought I needed to provide the input after the prompt requests it, so I changed my code so that stdin.write() would run at the right time, which makes sense, right? However, the only place to check when the 'right' time is, is in the stdout.on('data', callback) event. That makes things a little difficult, since I need to know whether the prompt is asking for user input or not...
Here is my code which I think is totally wrong:
// new Spawn.
let spawn = require('child_process').spawn;
// new ufw process.
let ufw = spawn('ufw', ['enable']);

// Event Standard Out.
ufw.stdout.on('data', (data) => {
  console.log(data.toString('utf8'));
  // Use defined input.
  ufw.stdin.setEncoding('utf-8');
  ufw.stdout.pipe(process.stdout);
  ufw.stdin.write('y\n');
});

// Event Standard Error.
ufw.stderr.on('data', (err) => {
  // Log error.
  console.log(err);
});

// When job is finished (with or without error) it ends up here.
ufw.on('close', (code) => {
  // Check if there were errors.
  if (code !== 0) console.log('Exited with code: ' + code.toString());
  // End input stream.
  ufw.stdin.end();
});
My major misunderstanding is when to use stdin for (automated) user input and where to place it in my code so it runs at the right time, for example if I have multiple inputs for something like mysql_secure_installation.
So I was wondering if it is possible, and it seems not. I posted an issue for Node which ended up being closed: https://github.com/nodejs/node/issues/16214
I am asking for a way to determine if the current process is waiting for an input.
There isn't one. I think you have wrong expectations about pipe I/O, because that's simply not how it works.
Talking about expectations, check out expect. There is probably a node.js port if you look around.
I'll close this out because it's not implementable as a feature, and as a question nodejs/help is the more appropriate place.
So if anyone has the same problem I had: you can simply write multiple lines into stdin up front and use them as predefined values. Keep in mind that this will eventually break if the expected inputs change or are reordered in future updates:
// new Spawn.
let spawn = require('child_process').spawn;
// new msqlsec process.
let msqlsec = spawn('mysql_secure_installation', ['']);

// Arguments as Array.
let inputArgs = ['password', 'n', 'y', 'y', 'y', 'y'];

// Set correct encodings for logging.
msqlsec.stdin.setEncoding('utf-8');
msqlsec.stdout.setEncoding('utf-8');
msqlsec.stderr.setEncoding('utf-8');

// Use defined input and write a line for each of them.
for (let a = 0; a < inputArgs.length; a++) {
  msqlsec.stdin.write(inputArgs[a] + '\n');
}

// Event Standard Out.
msqlsec.stdout.on('data', (data) => {
  console.log(data.toString('utf8'));
});

// Event Standard Error.
msqlsec.stderr.on('data', (err) => {
  // Log error.
  console.log(err);
});

// When job is finished (with or without error) it ends up here.
msqlsec.on('close', (code) => {
  // Check if there were errors.
  if (code !== 0) console.log('Exited with code: ' + code.toString());
  // Close input to writable stream.
  msqlsec.stdin.end();
});
For the sake of completeness, if someone wants to fill in the user input manually, you can simply start the given process like this:
// new msqlsec process.
let msqlsec = spawn('mysql_secure_installation', [''], { stdio: 'inherit', shell: true });

TransactionInactiveError with subsequent put calls

I can't figure out if I'm doing something wrong or if I'm just pushing it too hard.
I'm trying to sync ~70000 records from my online db to IndexedDB in combination with EventSource and a Worker.
So I get 2000 records per package and then use the following code to store them in IndexedDB:
eventSource.addEventListener('package', function(e) {
  var data = JSON.parse(e.data);
  putData(data.type, data.records);
});

function putData(storeName, data) {
  var store = db.transaction([storeName], 'readwrite').objectStore(storeName);
  return new Promise(function(resolve, reject) {
    putRecord(data, store, 0);
    store.transaction.oncomplete = resolve;
    store.transaction.onerror = reject;
  });
}

function putRecord(data, store, recordIndex) {
  if (recordIndex < data.length) {
    var req = store.put(data[recordIndex]);
    req.onsuccess = function(e) {
      recordIndex += 1;
      putRecord(data, store, recordIndex);
    };
    req.onerror = function() {
      self.postMessage(this.error.name);
      recordIndex += 1;
      putRecord(data, store, recordIndex);
    };
  }
}
It all works for about ~10,000 records; I didn't really test where the limit is, though. I suspect that at some point there are too many transactions in parallel, which makes a single transaction very slow and causes trouble because of some timeout. According to the dev tools, the 70,000 records are around 20 MB.
Complete error:
Uncaught TransactionInactiveError: Failed to execute 'put' on
'IDBObjectStore': The transaction has finished.
Any ideas?
I don't see an obvious error in your code, but you can make it much simpler and faster. There's no need to wait for the success of a previous put() to issue a second put() request.
function putData(storeName, data) {
  var store = db.transaction([storeName], 'readwrite').objectStore(storeName);
  return new Promise(function(resolve, reject) {
    for (var i = 0; i < data.length; ++i) {
      var req = store.put(data[i]);
      req.onerror = function(e) {
        self.postMessage(e.target.error.name);
      };
    }
    store.transaction.oncomplete = resolve;
    store.transaction.onerror = reject;
  });
}
It is possible that the error you are seeing is because the browser has implemented an arbitrary time limit on the transaction. But again, your code looks correct, including the use of Promises (which are tricky with IDB, but so far as I can tell you're doing it correctly!)
If this is still occurring I second the comment to file a bug against the browser(s) with a stand-alone repro. (If it's happening in Chrome I'd be happy to take a look.)
I think this is due to the implementation. If you read the specs, a transaction must keep a list of all the requests made in it. When the transaction is committed, all these changes are persisted; otherwise the transaction is aborted. Specs
Perhaps the maximum request list in your case is 1000 requests. You can easily test that by trying to insert 1001 records. So my guess is that when the 1000th request is reached, the transaction is set to inactive.
Maybe change your strategy: only make 1000 requests per transaction, and start a new transaction when the previous one has completed.
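If you do go the batching route suggested above, the bookkeeping is just slicing the incoming records into fixed-size chunks and opening one transaction per chunk. A sketch of the slicing half, which is independent of IndexedDB (the batch size of 1000 is the answer's guess, not a documented limit, and `toBatches` is an invented helper):

```javascript
// Split `records` into batches of at most `size` items; each batch
// would then get its own readwrite transaction.
function toBatches(records, size) {
  const batches = [];
  for (let i = 0; i < records.length; i += size) {
    batches.push(records.slice(i, i + size));
  }
  return batches;
}

// Each batch is then written in sequence, e.g.:
// for (const batch of toBatches(allRecords, 1000)) {
//   await putData(storeName, batch); // one transaction per batch
// }
```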

Error: The page has been destroyed and can no longer be used

I'm developing an add-on for the first time. It puts a little widget in the status bar that displays the number of unread Google Reader items. To accommodate this, the add-on process queries the Google Reader API every minute and passes the response to the widget. When I run cfx test I get this error:
Error: The page has been destroyed and can no longer be used.
I made sure to catch the widget's detach event and stop the refresh timer in response, but I'm still seeing the error. What am I doing wrong? Here's the relevant code:
// main.js - Main entry point
const tabs = require('tabs');
const widgets = require('widget');
const data = require('self').data;
const timers = require("timers");
const Request = require("request").Request;

function refreshUnreadCount() {
  // Put in Google Reader API request
  Request({
    url: "https://www.google.com/reader/api/0/unread-count?output=json",
    onComplete: function(response) {
      // Ignore response if we encountered a 404 (e.g. user isn't logged in)
      // or a different HTTP error.
      // TODO: Can I make this work when third-party cookies are disabled?
      if (response.status == 200) {
        monitorWidget.postMessage(response.json);
      } else {
        monitorWidget.postMessage(null);
      }
    }
  }).get();
}

var monitorWidget = widgets.Widget({
  // Mandatory widget ID string
  id: "greader-monitor",
  // A required string description of the widget used for
  // accessibility, title bars, and error reporting.
  label: "GReader Monitor",
  contentURL: data.url("widget.html"),
  contentScriptFile: [data.url("jquery-1.7.2.min.js"), data.url("widget.js")],
  onClick: function() {
    // Open Google Reader when the widget is clicked.
    tabs.open("https://www.google.com/reader/view/");
  },
  onAttach: function(worker) {
    // If the widget's inner width changes, reflect that in the GUI
    worker.port.on("widthReported", function(newWidth) {
      worker.width = newWidth;
    });
    var refreshTimer = timers.setInterval(refreshUnreadCount, 60000);
    // If the monitor widget is destroyed, make sure the timer gets cancelled.
    worker.on("detach", function() {
      timers.clearInterval(refreshTimer);
    });
    refreshUnreadCount();
  }
});
// widget.js - Status bar widget script
// Every so often, we'll receive the updated item feed. It's our job
// to parse it.
self.on("message", function(json) {
  if (json == null) {
    $("span#counter").attr("class", "");
    $("span#counter").text("N/A");
  } else {
    var newTotal = 0;
    for (var item in json.unreadcounts) {
      newTotal += json.unreadcounts[item].count;
    }
    // Since the cumulative reading list count is a separate part of the
    // unread count info, we have to divide the total by 2.
    newTotal /= 2;
    $("span#counter").text(newTotal);
    // Update style
    if (newTotal > 0)
      $("span#counter").attr("class", "newitems");
    else
      $("span#counter").attr("class", "");
  }
  // Reports the current width of the widget
  self.port.emit("widthReported", $("div#widget").width());
});
Edit: I've uploaded the project in its entirety to this GitHub repository.
I think if you use the method monitorWidget.port.emit("widthReported", response.json); you can fire the event. It's the second way to communicate between the content script and the add-on script.
Reference for port communication
Reference for communication with postMessage
I guess that this message comes up when you call monitorWidget.postMessage() in refreshUnreadCount(). The obvious cause would be: while you make sure to call refreshUnreadCount() only when the worker is still active, this function makes an asynchronous request which might take a while, so by the time the request completes, the worker might already be destroyed.
One solution would be to pass the worker as a parameter to refreshUnreadCount(). It could then add its own detach listener (removing it when the request is done) and ignore the response if the worker was detached while the request was in flight.
function refreshUnreadCount(worker) {
  var detached = false;
  function onDetach() {
    detached = true;
  }
  worker.on("detach", onDetach);

  Request({
    ...
    onComplete: function(response) {
      worker.removeListener("detach", onDetach);
      if (detached)
        return; // Nothing to update with our data
      ...
    }
  }).get();
}
Then again, using try..catch to detect this situation and suppress the error would probably be simpler - but not exactly a clean solution.
I've just seen your message on IRC; thanks for reporting your issues.
You are facing an internal bug in the SDK. I've opened a bug about that here.
You should definitely keep the first version of your code, where you send messages to the widget, i.e. widget.postMessage (instead of worker.postMessage). Then we will have to fix the bug I linked to in order to make your code work!
I also suggest you move the setInterval to the top level; otherwise you will fire multiple intervals and requests, one per window, since the attach event is fired for each new Firefox window.
