So I'm making a web app that involves thousands of API queries. Since the API has a limit on the number of queries I can send it per day, I was wondering if I could simply run the query loop a single time and then write the resulting objects to an array in a new file.
Is this possible?
You want to make the calls once, create a cache, and then use the cache instead of calling the API again.
Are you on the client side or the server side in JS?
Client side will be tricky, but server side is easy:
Files can serve as a cache, and so can a DB or lots of other tools (memcached, etc.).
Sure, just send the array to JSON.stringify() and write it to a file.
If you are using Node.js it would look something like this:
const fs = require('fs');

function writeResponse(resp, cb) {
    // Serialize the object (pretty-printed) and cache it on disk.
    fs.writeFile('response.json', JSON.stringify(resp, null, 2), function (err) {
        if (err) console.log(err);
        if (cb) cb();
    });
}
If you are in a browser you can use the Web Storage API, which allows storage of key/value pairs up to about 10 MB. If that doesn't work, maybe write a quick Node.js server that works as a caching proxy. A quick Google search suggests that you might be able to find one ready to deploy.
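For example, a minimal caching proxy in Node.js might look something like this (a sketch, not production code; the upstream URL and cache policy are placeholders):

// Minimal caching proxy sketch: the first request for a path hits the upstream
// API; every later request for the same path is served from memory.
const http = require('http');
const https = require('https');

const cache = {}; // path -> response body (in-memory; use a file/DB to persist)
const UPSTREAM = 'https://api.example.com'; // placeholder upstream API

http.createServer((req, res) => {
    if (cache[req.url]) {
        res.end(cache[req.url]); // cache hit: no upstream query spent
        return;
    }
    https.get(UPSTREAM + req.url, upstream => {
        let body = '';
        upstream.on('data', chunk => body += chunk);
        upstream.on('end', () => {
            cache[req.url] = body; // cache miss: store for next time
            res.end(body);
        });
    });
}).listen(8080);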
You could probably use local storage, which is accessible across your domain and will remain on the user's computer indefinitely. Perhaps something like this:
function getData() {
    // localStorage only holds strings, so serialize/deserialize with JSON.
    var data = localStorage.getItem("myData");
    if (data === null) {
        data = makeQuery();
        localStorage.setItem("myData", JSON.stringify(data));
        return data;
    }
    return JSON.parse(data);
}
I'm trying to send multiple files from the client to the NodeJS server using WebSockets.
To send one file, I currently do the following:
// Client
let upload = document.getElementById('upload');
let button = document.getElementById('send'); // hypothetical send button
button.onclick = async function() {
    let file = upload.files[0];
    let byteFile = await getAsByteArray(file);
    socket.send(byteFile);
};
async function getAsByteArray(file) {
    return new Uint8Array(await readFile(file));
}

function readFile(file) {
    return new Promise((resolve, reject) => {
        let reader = new FileReader();
        reader.addEventListener("loadend", e => resolve(e.target.result));
        reader.addEventListener("error", reject);
        reader.readAsArrayBuffer(file);
    });
}
// Server
ws.on('message', function incoming(message) {
    // This returns a buffer, which is what I'm looking for when working with a single file.
    console.log(message);
});
This works great for one file. I'm able to use the buffer and process the file as I would like. To send two files, my thought was to convert each file to a Uint8Array (as I did for the single file) and push each one to an array like so:
// Client
let filesArray = [];
let files = upload.files; // Grab uploaded Manifests
for (let file of files) {
    let byteFile = await getAsByteArray(file);
    filesArray.push(byteFile);
}
socket.send(filesArray);
In the same way as with one file, the server returns a buffer for the array that was sent; however, I'm not sure how to work with it. I need each file to be its own buffer in order to work with them. Am I taking the wrong approach here? Or am I just missing some conversion that would let me work with each file?
This works great for one file.
Not really, unless it is supposed to be used in some very simplistic setup, probably on a network isolated from the internet.
You literally send a sequence of bytes to the server, which reads it, and then what is it going to do with it? Save it to disk? Without validating? But how can it validate a random sequence of bytes when it has no hint about what it is? Secondly, where will it save it? Under what name? You didn't send any metadata like a filename. Is it supposed to generate a random name for it? How will the user know that this is his file? Heck, as it is, you don't even know who sent that file (no authentication). Finally, what about security? Can I open a WebSocket connection to your server and spam it with arbitrary sequences of data, effectively killing it? You probably need some authentication, but even with it, can any user spam such uploads? Maybe you additionally need tokens with timeouts for that (but then you have to think about how your server will issue such tokens).
I need each file to be its own buffer in order to work with them.
No, you don't. The bare minimum you need is (1) the ability to send files with metadata from the client and (2) the ability to read files with metadata on the server side. You most likely need some authentication mechanism as well. Typically you would use classical HTTP for that, which I strongly encourage you to utilize.
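For comparison, a plain-HTTP upload is only a few lines, since multipart/form-data carries the filename and content type for you (a minimal sketch; the '/upload' endpoint is a placeholder):

// Client: the browser handles framing and metadata with multipart/form-data.
async function uploadFiles(files) {
    const form = new FormData();
    for (const file of files) {
        form.append('files', file, file.name); // filename travels with the bytes
    }
    await fetch('/upload', { method: 'POST', body: form }); // placeholder endpoint
}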
If you want to stick with WebSockets, then you have to implement those already well established mechanisms by yourself. So here's how I would do that:
(1) Define a custom protocol on top of WebSocket. Each frame should have a structure, for example: the first two bytes indicate the "size of the command", and the next X bytes (where X is those two bytes interpreted as a 16-bit integer) contain the command as a string. On the server side you read that command, map it to some handler, and run the appropriate action. The data that the command should process is the remaining bytes of the frame.
(2) Setup authentication. Not in the scope of this answer, just indicating it is crucial. I'm putting this after (1) because you can reuse the protocol for that.
(3) Whenever you want to upload a file: send a "SEND" command to the server. In the same frame, after the "SEND" command, put the metadata (file name, size, content type, etc.); you can encode it as JSON prefixed with its length. Afterwards, put the content of the file itself.
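A minimal sketch of such framing (the command names, metadata layout, and `handlers` map are assumptions, not a standard):

// Client: build a frame = [2-byte command length][command][payload] and send it.
function sendFrame(socket, command, payload) {
    const cmdBytes = new TextEncoder().encode(command);
    const frame = new Uint8Array(2 + cmdBytes.length + payload.length);
    new DataView(frame.buffer).setUint16(0, cmdBytes.length); // big-endian length
    frame.set(cmdBytes, 2);
    frame.set(payload, 2 + cmdBytes.length);
    socket.send(frame);
}

// Server (Node.js 'ws'): parse the frame and dispatch to a handler.
ws.on('message', buffer => {
    const cmdLen = buffer.readUInt16BE(0);
    const command = buffer.toString('utf8', 2, 2 + cmdLen);
    const data = buffer.slice(2 + cmdLen); // e.g. length-prefixed JSON metadata + file bytes
    handlers[command](data); // 'handlers' is a hypothetical command -> function map
});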
This solution should obviously be refined with the (mentioned earlier) tokens. For proper responsiveness and concurrency, you should probably split large files into separate WebSocket frames (which complicates the design a lot).
Anyway, as you can see, the topic is far from trivial and requires lots of experience. And it is basically reimplementing what HTTP does anyway. Again: I strongly suggest you use plain old HTTP.
Send each buffer in a separate message:
button.onclick = async function() {
    // FileList has no forEach, and await needs an async context, so use for...of.
    for (const file of upload.files) {
        socket.send(await getAsByteArray(file));
    }
};
What is the standard way of serving and storing dynamic images in Meteor?
PS. I cannot use any package that uses publish or subscribe. So any use of Mongo APIs like Collection.find() on the client has to be served by calling some method on the server.
I am not sure what you mean by "dynamic images". From where I'm standing, here are some options.
The easy way: you store and serve them directly from your public directory. If you have an image in /public/images/lolcat.png you can load it using the url /images/lolcat.png. However, be warned about one thing: as long as you are in development mode, the server will reload each time you add or modify one of your public assets.
The less easy way, you can use nginx to serve your content. See more info about it here: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-meteor-js-application-on-ubuntu-14-04-with-nginx
The tricky way, you fork a package like file-collection and you replace the publications with server methods returning a cursor (I never did that but I assume it is possible).
You would then call your method like below, and use the result in place of the cursor.
Meteor.call("meteorMethod", dataObject, function(error, result){
if(error){
console.log("error", error);
}
if(result){
}
});
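On the server side, the corresponding method might look something like this (a hypothetical sketch; `MyFiles` is a placeholder collection, and since a raw cursor can't be serialized over a method call, it returns fetched documents):

// Server: hypothetical method standing in for the forked package's publication.
Meteor.methods({
    meteorMethod: function (dataObject) {
        // A cursor can't travel over a DDP method result, so fetch plain documents.
        return MyFiles.find({ owner: dataObject.owner }).fetch();
    }
});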
I have a simple app built using Node, Express, and Socket.io on the server side. My page queries my API when it needs to retrieve data that will not change, and uses WebSockets for getting live updates from the server for dynamic data. The app allows a single person, the "Supervisor", to send questions to any number of "Users" (unauthenticated) and view their answers as they trickle in. The Users send their data to the server using a POST request, and it is streamed to the Supervisor over a WebSocket. The server stores user data in a simple array, and uses an ES6 Map from the items in the array (users) to objects containing each user's questions and answers, like this:
class User {
    constructor(id) {
        this.id = id; // keep the socket id so each user can be identified later
    }
}
let users = [], qa = new Map();

io.on('connection', socket => {
    let user = new User(socket.id);
    users.push(user);
    qa.set(user, {});
    socket.on('question-answered', ({id, answer}) => {
        let questionData = qa.get(user);
        questionData[id] = answer;
        qa.set(user, questionData);
    });
});
This is obviously a very primitive way of handling data, but I don't see the need for additional complexity. The data doesn't need to persist across server crashes or restarts (the user's questions and answers are also stored in localStorage), and MongoDB and even Redis just seem like overkill for this kind of data.
So my question is, am I going about this the right way? Are there any points I'm missing? I just want a simple way to store data in memory and be able to access it through client-side GET requests and socket.io. Thank you for any help.
If an array and a map give you the kind of access you need to the data, you don't need crash persistence, and you have enough memory to hold the data, then you're done.
There is no need for more than that unless your needs (query, persistence, performance, multi-user, crash recovery, backup, etc...) require something more complicated. A simple cliché applies here: if it ain't broke, it don't need fixing.
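If it helps, exposing that in-memory data to client-side GET requests is only a few lines in Express; a minimal sketch (assuming the `app`, `users`, and `qa` objects from the question; the `/answers` route is hypothetical):

// Hypothetical route returning every user's answers as JSON.
app.get('/answers', (req, res) => {
    // A Map doesn't serialize to JSON directly, so project it into plain objects.
    res.json(users.map(user => qa.get(user)));
});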
I am trying to create an app on Parse. That app uses data, but in order to make the data storage more secure, you would not want the clients to be able to modify it. Instead, only the server should be able to modify data.
So far, I haven't seen any options for how to achieve that, except by using user/role-based authentication, and that is something I'd rather avoid, because it is the environment, not the user or role, that I would like the data access to depend on.
Are there any ways to do that?
Turn off write access for everyone on each class, then in your Cloud Code use the master key which lets you bypass permissions.
You could use a beforeSave handler in Cloud Code...
Parse.Cloud.beforeSave('myClassName', function(req, res) {
    if (req.master) {
        res.success();
    } else {
        res.error('Cannot change this data.');
    }
});
Then only requests made using the Master Key can alter this data.
In other places in Cloud Code, you can pass this option for individual requests like this:
obj.save(null, { useMasterKey: true });
Or turn it on for actions that follow:
Parse.Cloud.useMasterKey();
I am creating a Chrome app with plain JavaScript and Bootstrap. Initially the user has to give some data, and the app keeps that data and reuses it the next times. I think I have to store that data in a file. Can I use plain JavaScript file I/O to do that, or is there a special way to do it Chrome-style? (Because some features are disabled in packaged apps.)
You're right that localStorage is not enabled for Chrome packaged apps. However, depending on what kind of data you're managing, there are two APIs that should work.
chrome.storage.local is a general key-value store that will save data on the local machine (chrome.storage.sync is an identical API that will also synchronize the data between a user's devices, but I wouldn't recommend it for large files).
The API is simple to use:
chrome.storage.local.set({myKey: "myValue"}, function() {
    if (!chrome.runtime.lastError) {
        console.log("The value has been stored!");
    } else {
        console.error("There was an error!");
    }
});

chrome.storage.local.get("myKey", function(data) {
    if (!chrome.runtime.lastError) {
        console.log("The value is " + data.myKey);
    } else {
        console.error("There was an error!");
    }
});
(If you're using chrome.storage.sync, then you probably also want to add a listener to the chrome.storage.onChanged event, to know when data was changed from another location)
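That listener might look something like this (a minimal sketch, watching the `myKey` value from above):

// Fires whenever stored data changes, including changes synced from other devices.
chrome.storage.onChanged.addListener(function(changes, areaName) {
    if (areaName === 'sync' && changes.myKey) {
        console.log('myKey changed from', changes.myKey.oldValue,
                    'to', changes.myKey.newValue);
    }
});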
The other way may be what you're thinking of as "plain javascript file IO" -- the W3C File System API is supported by packaged apps. You can request storage space from the user and store actual files that you can read and write in JavaScript. There's a good introduction to it here.
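As a rough illustration of that API (a sketch; the quota size, file name, and contents are arbitrary):

// Request a 5 MB persistent sandboxed filesystem and write a small file into it.
window.webkitRequestFileSystem(window.PERSISTENT, 5 * 1024 * 1024, function(fs) {
    fs.root.getFile('settings.json', {create: true}, function(entry) {
        entry.createWriter(function(writer) {
            writer.onwriteend = function() { console.log('saved'); };
            writer.write(new Blob([JSON.stringify({theme: 'dark'})],
                                  {type: 'application/json'}));
        });
    });
}, function(err) { console.error(err); });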