Write data to a local file with JavaScript/AngularJS

I'm a beginner in JS development. What I want to do is write data to a file in local storage.
In my application I read raw data and save it in MongoDB as key-value pairs. At the same time I want to write that data to a file in local storage.
This code is used to read the file line by line. I get structured data in "event". What I want to do is write that data to a file in local storage.
var lines = readDetails.split('\n');
for (var line = 0; line < lines.length - 1; line++) {
    var FileContent = "";
    var linesSpace = lines[line].split(' ');
    // tokens at index 3 and beyond form the details text
    for (var y = 3; y <= linesSpace.length - 1; y++) {
        FileContent += linesSpace[y];
        FileContent += " ";
    }
    var event = {
        dat: linesSpace[0],    // date token
        tim: linesSpace[1],    // time token
        details: FileContent,  // remainder of the line
    };
}
If this is not clear enough, please ask me questions.
Thanks.

ngStorage is the best I've found; it's super easy to use.
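For example, a minimal sketch of persisting the question's "event" objects with ngStorage (the module and controller names here are hypothetical):
// Sketch assuming ngStorage is loaded on the page. $localStorage
// transparently persists its properties to the browser's localStorage.
angular.module('app', ['ngStorage'])
    .controller('EventCtrl', function ($localStorage) {
        $localStorage.events = $localStorage.events || [];
        // push one of the "event" objects built in the question's loop
        $localStorage.events.push({ dat: '01/01/16', tim: '12:00', details: 'example details' });
    });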

Related

How to input a value from Scanner and output the value to a txt file in Java?

I'm struggling to take input from a Scanner in Java and write it to a txt file. While I can smoothly read data using try {} catch {}, I cannot write the data from a Scanner to the txt file. I can easily write data to a txt file using PrintWriter, but that's not my goal... According to the assignment scenario, I have to create a system that takes input values and stores the data in a text file, which is what I'm struggling with.
Please help me with this problem, and provide me with a solution...
This is my first Java project. Thanks.
import java.io.FileWriter;
import java.io.IOException;
import java.util.Scanner;

Scanner sc = new Scanner(System.in);
String data = sc.nextLine(); // taking input from the user
// use try-with-resources so the writer is closed and system resources are released automatically
try (FileWriter myWriter = new FileWriter("filename.txt")) {
    myWriter.write(data); // writing into the file
} catch (IOException e) {
    e.printStackTrace();
}
As you say, you have successfully read (and maybe also manipulated) the data. Let's assume you have it ready to be written out as a String data, and that you also have a String filename holding the file's intended name.
You can then do the following:
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.nio.file.Paths;

// generate the File object
File f = Paths.get("./" + filename).toFile();
f.delete(); // remove a previously existing file -- equivalent to an overwrite
// try-with-resources closes the writer automatically, so no explicit close() is needed
try (BufferedWriter wr = new BufferedWriter(new FileWriter(f))) {
    wr.append(data); // add the data to the write buffer
    wr.flush();      // write the buffered data out to the file
} catch (Exception e) {
    e.printStackTrace();
}

Google Scripts: XML parsing errors

I have a Google Apps Script that locates a specific .zip folder on a server, extracts the files, and takes a specific .xml file to be processed. My problem is getting this file into the proper format.
The applicable snippet of code:
var dir = UrlFetchApp.fetch(url);
var b = dir.getBlob();
var files = Utilities.unzip(b);
var vesselDataBlob;
for (var i = 0; i < files.length; i++) {
    if (files[i].getName().equals("dat/vesselDataMod.xml")) { // finds the file with the appropriate name
        vesselDataBlob = files[i];
        break;
    }
}
var vesselData = vesselDataBlob.getDataAsString(); // returns the FULL document as a string
var data = XmlService.parse(vesselData); // throws the error
vesselData is in XML format, and vesselData.getContentType() returns "text/xml".
However, I'm struggling to find a way to parse the data. XmlService.parse(vesselData) throws an error: "Content is not allowed in prolog." I tried using DOMParser, which also throws an error. Is there something wrong with how I've set up my code? Is the data not actually in XML format?
The obvious difference between what most people probably do and my situation is that I'm pulling a file from a zipped folder instead of straight from a website. That's not the problem, though; I've tried just using an XML file uploaded to Drive, and the same problem occurs.
I can set up string manipulation to get the data I need, but I'd rather not go through the effort if someone can help out. Thanks!
I've been using this snippet of XML for debugging:
<?xml version="1.0" encoding="UTF-8"?>
<vessel_data version="2.1">
    <hullRace ID="0" name="TNS" keys="player">
        <taunt immunity="Yadayada" text="More yadayada"/>
    </hullRace>
</vessel_data>
The following function works for me with a very simple zip file. I recommend that you try getDataAsString("UTF-8") and see if that resolves the issue.
function test() {
    var f = DriveApp.getFilesByName("ingest.zip").next();
    var files = Utilities.unzip(f.getBlob());
    for (var i = 0; i < files.length; i++) {
        var ff = files[i];
        if (/\.xml$/.test(ff.getName())) {
            var s = XmlService.parse(ff.getDataAsString());
            Logger.log(s);
            s = XmlService.parse(ff.getDataAsString("UTF-8"));
            Logger.log(s);
            break;
        }
    }
}
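If the explicit encoding alone doesn't help, a common cause of "Content is not allowed in prolog" is a byte-order mark at the start of the file. Here is a small sketch (my assumption, not part of the original answer) that strips it before parsing:
function parseXmlBlob(blob) {
    // Assumption: a leading BOM (\uFEFF) is what makes XmlService choke;
    // strip it before handing the string to the parser.
    var text = blob.getDataAsString("UTF-8").replace(/^\uFEFF/, '');
    return XmlService.parse(text);
}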
I put your XML file into a gist (as XML, not zip) and it parses.
function test2() {
    var f = UrlFetchApp.fetch("...gisturl.../test.xml").getBlob();
    var s = XmlService.parse(f.getDataAsString());
    Logger.log(s.getDescendants().length);
}
Unfortunately, I am now having trouble getting Utilities.unzip() to run on a zip file uploaded to Google Drive. Hopefully another user will give you a better solution.

Loading a Random Caption from a text file using JavaScript and Displaying via HTML

I am trying to load a random caption every time my page is loaded. I have a separate text file, and each line contains a string. I am new to both HTML and JavaScript, as you will see.
HTML:
<div class="centerpiece">
    <h1>DEL NORTE BANQUEST</h1>
    <p class="caption"><script src="js/caption.js"></script><script>getCaption();</script></p>
    <a class="btn" id="browse-videos-button" href="#video-list">Browse Videos<br><img src="img/arrow-down.svg" style="width:15px;height:15px;"></a>
</div>
Javascript:
function getCaption()
{
    var txtFile = "text/captions.txt"
    var file = new File(txtFile);
    file.open("r"); // open file with read access
    var str = "";
    var numLines = 0; // to get the range of lines in the file
    while (!file.eof)
    {
        // read each line of text
        numLines += 1;
    }
    file.close();
    file.open("r");
    var selectLine = Math.getRandomInt(0, numLines); // get the correct line number
    var currentLine = 0;
    while (selectLine != currentLine)
    {
        currentLine += 1;
    }
    if (selectLine = currentLine)
    {
        str = file.readln();
    }
    file.close();
    return str;
}
Text in Source File:
We talked yesterday
Freshman boys!
5/10
I'm having a heart attack *pounds chest super hard
The site is for my high school cross country team, in case the text file was confusing.
I am unfamiliar with most syntax, and I could not tell whether, when iterating through the file with a loop, I needed to reset somehow, which is why I opened and closed the file twice. Here is a jsfiddle of the specific caption I am trying to change and what my function is in JavaScript:
https://jsfiddle.net/7cre9qqj/
If you need more code to work with, please let me know, and don't hold back on critiques if it looks like a mess; I am trying to learn, after all! Thank you for your help!
The File API allows access to the file system on the client side, so it's not really suited to what you want to do. It's also only allowed to be used in very specific circumstances.
A simple solution is to just run an AJAX request to populate your quote. The AJAX call can read the file on your server; then it's simple to split the contents of the file by line and pick a random line to display. Since you're open to jQuery, the code is pretty simple:
$.get("text/captions.txt")).then(function(data) {
var lines = data.split('\n');
var index = Math.floor(Math.random() * lines.length);
$("#quote").html(lines[index]);
});
Here's a fiddle that demonstrates it in full; every time it runs it will load a random quote: https://jsfiddle.net/s1w8x4ff/
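For reference, here is the same idea without jQuery, as a sketch using the Fetch API; the ".caption" selector is taken from the question's markup:
fetch('text/captions.txt')
    .then(function (response) { return response.text(); })
    .then(function (data) {
        var lines = data.split('\n');
        var index = Math.floor(Math.random() * lines.length); // pick a random line
        document.querySelector('.caption').textContent = lines[index];
    });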

Running out of memory writing to a file in NodeJS

I'm processing a very large amount of data that I'm manipulating and storing in a file. I iterate over the dataset, and then I want to store it all in a JSON file.
My initial method, using fs and storing it all in an object before dumping it, didn't work, as I was running out of memory and it became extremely slow.
I'm now using fs.createWriteStream, but as far as I can tell it's still storing it all in memory.
I want the data to be written object by object to the file, unless someone can recommend a better way of doing it.
Part of my code:
// Top of the file
var wstream = fs.createWriteStream('mydata.json');
...
// In a loop
let JSONtoWrite = {}
JSONtoWrite[entry.word] = wordData
wstream.write(JSON.stringify(JSONtoWrite))
...
// Outside my loop (when memory is probably maxed out)
wstream.end()
I think I'm using Streams wrong, can someone tell me how to write all this data to a file without running out of memory? Every example I find online relates to reading a stream in but because of the calculations I'm doing on the data, I can't use a readable stream. I need to add to this file sequentially.
The problem is that you're not waiting for the data to be flushed to the filesystem; instead you keep throwing new data at the stream synchronously in a tight loop.
Here's a piece of pseudocode that should work for you:
// Top of the file
const fs = require('fs');
const wstream = fs.createWriteStream('mydata.json');
// I'm not sure how you're getting the data; let's say you have it all in an object
const entry = {};
const words = Object.keys(entry);
function writeCB(index) {
    if (index >= words.length) {
        wstream.end();
        return;
    }
    const JSONtoWrite = {};
    JSONtoWrite[words[index]] = entry[words[index]];
    // the callback fires once this chunk has been handed off,
    // so the next write only starts after the previous one completes
    wstream.write(JSON.stringify(JSONtoWrite), function () {
        writeCB(index + 1);
    });
}
writeCB(0);
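As an alternative sketch (my addition, not part of the original answer), you can lean on the stream's built-in backpressure signal instead: write() returns false when the internal buffer is full, and the 'drain' event tells you when to resume:
// Sketch: respect backpressure via write()'s return value
function writeAll(words, entry, wstream, done) {
    let i = 0;
    function next() {
        while (i < words.length) {
            const obj = {};
            obj[words[i]] = entry[words[i]];
            i++;
            if (!wstream.write(JSON.stringify(obj))) {
                wstream.once('drain', next); // buffer full: resume once it empties
                return;
            }
        }
        wstream.end(done); // close the stream; done is called on 'finish'
    }
    next();
}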
You should wrap your data source in a readable stream too. I don't know what your source is, but you have to make sure it does not load all your data into memory.
For example, assuming your data set comes from another file where JSON objects are separated by end-of-line characters, you could create a Read stream as follows:
const Readable = require('stream').Readable;

class JSONReader extends Readable {
    constructor(options = {}) {
        super(options);
        this._source = options.source; // the source stream
        this._buffer = '';
        // read whenever the source is ready
        this._source.on('readable', () => this.read());
    }
    _read(size) {
        if (this._buffer.length === 0) {
            // read more from the source when the buffer is empty
            const chunk = this._source.read();
            if (chunk !== null) this._buffer += chunk;
        }
        const lineIndex = this._buffer.indexOf('\n'); // find the end of line
        if (lineIndex !== -1) { // we have an end of line and therefore a complete object
            const line = this._buffer.slice(0, lineIndex); // the characters belonging to the object
            this._buffer = this._buffer.slice(lineIndex + 1);
            if (line) {
                const result = JSON.parse(line); // validates the line as proper JSON
                this.push(JSON.stringify(result)); // push to the internal read queue
            }
        }
    }
}
Now you can use it like this:
const source = fs.createReadStream('mySourceFile');
const reader = new JSONReader({source});
const target = fs.createWriteStream('myTargetFile');
reader.pipe(target);
Then you'll have a better memory flow, since data is only pulled from the source as fast as it can be written out.
Please note that the above example is adapted from the excellent Node.js in Practice book.

pdf.js failing on getDocument

Browser: Chrome
Environment: Grails app on localhost
I'm running a Grails app on localhost (and I know there's an issue with pdf.js and the local file system), so instead of using a file: URL, which I know would fail, I'm passing in a typed JavaScript array, and it's still failing. To be precise, it's not telling me anything except "Warning: Setting up fake worker." and then it does nothing.
this.base64ToBinary = function (dataURI) {
    var BASE64_MARKER = ';base64,';
    var base64Index = dataURI.indexOf(BASE64_MARKER) + BASE64_MARKER.length;
    var base64 = dataURI.substring(base64Index);
    var raw = window.atob(base64); // decode base64 to a binary string
    var rawLength = raw.length;
    var array = new Uint8Array(new ArrayBuffer(rawLength));
    for (var i = 0; i < rawLength; i++) {
        array[i] = raw.charCodeAt(i);
    }
    return array;
};
PDFJS.disableWorker = true; // due to CORS
// I convert some base64 data to binary data here, which comes back correctly
var data = utilities.base64ToBinary(result);
PDFJS.getDocument(data).then(function (pdf) {
    // nothing console logs or reaches here
    console.log(pdf);
}).catch(function (error) {
    // no error message is logged either
    console.log("Error occurred", error);
});
I'm wondering if I just don't have it set up correctly. Can I use this library purely on the client side by just including pdf.js, or do I need to include viewer.js too? I also noticed the compatibility file... the setup isn't very clear, and this example works (FIDDLE) while mine doesn't, and I'm not understanding the difference. Also, if I use the URL supplied in that example, it says the same thing.
I get to answer my own question:
The documentation isn't clear at all. If you don't define PDFJS.workerSrc to point to the correct pdf.worker.js file, then pdf.js tries to figure out the correct src path to the file and load it itself.
Their method for doing this, however, is pretty sketchy:
if (!PDFJS.workerSrc && typeof document !== 'undefined') {
    // workerSrc is not set -- using last script url to define default location
    PDFJS.workerSrc = (function () {
        'use strict';
        var scriptTagContainer = document.body ||
            document.getElementsByTagName('head')[0];
        var pdfjsSrc = scriptTagContainer.lastChild.src;
        return pdfjsSrc && pdfjsSrc.replace(/\.js$/i, '.worker.js');
    })();
}
They only grab the last script tag in the container and assume that its src is the right one from which to load the file, instead of searching all the script tags for the src that contains "pdf.js" and using that as the correct one (a sketch of that approach is below).
Instead, they should just make it clear and require that you do in fact point PDFJS.workerSrc = "(your path)/pdf.worker.js".
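For illustration, here is a hedged sketch of the more defensive detection described above (my sketch, not pdf.js code):
// Sketch: scan every <script> tag for the one that loaded pdf.js,
// and derive the worker path from its src
var scripts = document.getElementsByTagName('script');
for (var i = 0; i < scripts.length; i++) {
    if (/pdf\.js$/i.test(scripts[i].src)) {
        PDFJS.workerSrc = scripts[i].src.replace(/\.js$/i, '.worker.js');
        break;
    }
}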
Here is the short answer: define PDFJS.workerSrc at the beginning of your code.
PDFJS.workerSrc = "(your path)/pdf.worker.js"
See the example in the documentation: https://mozilla.github.io/pdf.js/examples/#interactive-examples
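Putting it together, a minimal sketch (the worker path here is an assumption; point it at wherever pdf.worker.js is actually served):
// Set workerSrc before the first getDocument() call
PDFJS.workerSrc = "js/pdf.worker.js"; // hypothetical path -- adjust for your app
PDFJS.getDocument(data).then(function (pdf) {
    console.log("Loaded PDF with " + pdf.numPages + " pages");
});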
