How do I reuse the result arrays of Papa Parse? - javascript

I was reluctant to use Papa Parse, but now I realize how powerful it is. I am using Papa Parse on a local file, but I don't know how to use the results. I want to combine the result array with another array and then sort it highest to lowest based on a certain element. console.log doesn't work. From what I have researched, it may have something to do with a callback function, but I am stuck on how to write that callback with Papa Parse. Thanks for any advice.
This is my output:
Finished input (async).
Time: 43.90000000000873
Arguments(3)
0:
data:
Array(1136) [0 … 99]
0: (9) [
"CONTENT TYPE", "TITLE", "ABBR", "ISSN",
"e-ISSN", "PUBLICATION RANGE: START",
"PUBLICATION RANGE: LATEST PUBLISHED",
"SHORTCUT URL", "ARCHIVE URL"
]
1: (9) [
"Journals", "ACM Computing Surveys ",
"ACM Comput. Surv.", "0360-0300", "1557-7341",
"Volume 1 Issue 1 (March 1969)",
"Volume 46 Issue 1 (October 2013)",
"http://dl.acm.org/citation.cfm?id=J204",
"http://dl.acm.org/citation.cfm?id=J204&picked=prox"
]

Based on a conversation with you, it appears you're trying to retrofit the Papa Parse demo for your own needs. Below is a stripped-down code snippet that should be drop-in ready for your project and will get you started.
document.addEventListener('DOMContentLoaded', () => {
  const file = document.getElementById('file');
  file.addEventListener('change', () => {
    Papa.parse(file.files[0], {
      complete: function(results) {
        // Here you can do something with results.data
        console.log("Finished:", results.data);
      }
    });
  });
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/PapaParse/4.6.2/papaparse.js"></script>
<input type="file" id="file" />
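Since the stated goal is to combine the parsed rows with another array and then sort highest to lowest on one element, here is a minimal sketch of what could go inside that complete callback. otherRows and the column index are illustrative assumptions, not part of the original answer:
Papa.parse(file.files[0], {
  complete: function(results) {
    // Hypothetical second array with the same row shape as results.data
    const otherRows = [];

    // Drop the header row, merge the two arrays, then sort highest to
    // lowest on one column (index 3 is just a placeholder; use the
    // numeric column you actually care about)
    const combined = results.data.slice(1).concat(otherRows);
    combined.sort((a, b) => Number(b[3]) - Number(a[3]));

    console.log(combined);
  }
});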
Original Answer
Since I suspect you're loading your local CSV file from the file system, and not from an upload form, you'll need to use download: true to make it work.
Papa.parse('data.csv', {
  download: true,
  complete: function(results) {
    console.log("Finished:", results.data);
  }
});
Technically, when loading local files, you're supposed to supply Papa.parse with a File object. This is a snippet from the MDN File API documentation:
File objects are generally retrieved from a FileList object returned
as a result of a user selecting files using the input element
If, of course, you're running this in NodeJS, then you'd just do the following:
const fs = require('fs');
const Papa = require('papaparse');

const csv = fs.createReadStream('data.csv');
Papa.parse(csv, {
  complete: function(results) {
    console.log("Finished:", results);
  }
});
Documentation
https://www.papaparse.com/docs#local-files
https://developer.mozilla.org/en-US/docs/Web/API/File

Related

Remove unwanted columns from CSV file using Papaparse

I have a situation where a user can upload a CSV file. This CSV file contains a lot of data, but I am only interested in 2 columns (ID and Date). At the moment, I am parsing the CSV using Papaparse:
Papa.parse(ev.data, {
  delimiter: "",
  newline: "",
  quoteChar: '"',
  header: true,
  error: function(err, file, inputElem, reason) { },
  complete: function(results) {
    this.parsed_csv = results.data;
  }
});
When this runs, this.parsed_csv holds objects of data keyed by the field names. So if I JSON.stringify it, the output is something like this:
[
  {
    "ID": 123456,
    "Date": "2012-01-01",
    "Irrelevant_Column_1": 123,
    "Irrelevant_Column_2": 234,
    "Irrelevant_Column_3": 345,
    "Irrelevant_Column_4": 456
  },
  ...
]
So my main question is: how can I get rid of the columns I don't need and produce a new CSV containing only the ID and Date columns?
Thanks
One thing I realised: is there a way to use dynamic variables? For instance, I am letting users select the columns they want to map, so now I need to do something like this:
let ID = this.selectedIdCol;
this.parsed_csv = results.data.map(element => ({ID: element.ID, Date: element.Date}));
However, it is saying that ID is unused. Thanks
let data = [
  {
    "ID": 123456,
    "Date": "2012-01-01",
    "Irrelevant_Column_1": 123,
    "Irrelevant_Column_2": 234,
    "Irrelevant_Column_3": 345,
    "Irrelevant_Column_4": 456
  },
  ...
]
Just produce the result using the following code:
data = data.map(element => ({ID: element.ID, Date: element.Date}))
Now you have the desired columns; from these you can generate a new CSV.
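For the follow-up about dynamic column names, here is a minimal sketch using computed property names; selectedIdCol and selectedDateCol are assumed to hold the user-selected header names, and Papa.unparse (part of Papa Parse) turns the rows back into CSV text:
// Assumed to come from the user's column selection
const idKey = this.selectedIdCol;     // e.g. "ID"
const dateKey = this.selectedDateCol; // e.g. "Date"

// A computed property name ([idKey]) uses the variable's value as the
// key, which avoids the "ID is unused" problem of a literal key
const trimmed = data.map(element => ({
  [idKey]: element[idKey],
  [dateKey]: element[dateKey]
}));

// Convert the trimmed rows back into a CSV string
const csv = Papa.unparse(trimmed);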
As Serrurier pointed out above, you should use the step/chunk callbacks to trim the data during parsing, rather than mapping over it afterwards, because by the time complete fires the whole data set is already in memory.
// _.pick comes from lodash
PapaParse.parse(file, {
  skipEmptyLines: true,
  header: true,
  step: (results, parser) => {
    results.data = _.pick(results.data, ['column1', 'column2']);
    return results;
  }
});
Note that if you are loading a huge file, you will have the whole file in memory right after parsing. Moreover, it may freeze the browser due to the heavy workload. You can avoid that by reading and discarding columns:
row by row
chunk by chunk.
You should read Papaparse's FAQ before implementing that. To sum up, you will store the required columns by extracting them inside the step or chunk callbacks.
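A minimal sketch of the row-by-row variant; note the version caveat in the comment, since with header: true Papa Parse 5 passes one row object to step, while 4.x wraps it in a one-element array:
const rows = [];

Papa.parse(file, {
  header: true,
  skipEmptyLines: true,
  step: (results) => {
    // Normalise across Papa Parse 4.x (array) and 5 (single object)
    const row = Array.isArray(results.data) ? results.data[0] : results.data;
    // Keep only the two columns; the rest of the row is discarded
    rows.push({ ID: row.ID, Date: row.Date });
  },
  complete: () => {
    // rows now holds just ID and Date for every record
    console.log(Papa.unparse(rows));
  }
});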

How to export json object into excel using javascript or jquery?

I wanted to export my JSON object into an Excel file. I have searched Google and tried the following, but I am unable to export my object data into an Excel file on clicking the button; it downloads, but without any columns/data (i.e. an empty file is downloaded).
I am not sure what the problem is. Please, someone help me with this. Thanks in advance!
Created Plnkr.
html:
<div id="dvjson"></div>
<br><button type="button" id='DLtoExcel'>
Download CSV
</button>
js:
$(document).ready(function() {
  // If I give the JSON object below, the file downloads with the
  // columns/data, so that case is fine:
  // var testjsondata = [{"number": 123}];

  // If I give an object like the one below, an empty file downloads.
  // It is not working, but it should also work:
  // var testjsondata = {"number": 123};

  // And the following object format should also work:
  var testjsondata = {
    "test": {
      "name": "abc",
      "address": [{
        "number": "12345",
        "street": "xyz"
      }]
    },
    "mynumber": 12
  };

  var $btnDLtoExcel = $('#DLtoExcel');
  $btnDLtoExcel.on('click', function () {
    $("#dvjson").excelexportjs({
      containerid: "dvjson",
      datatype: 'json',
      dataset: testjsondata,
      columns: getColumns(testjsondata)
    });
  });

  console.log(testjsondata);
});
Any other solutions or libraries are also welcome; my JSON object is a plain object, not an array.
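The question's own experiment hints at the cause: [{"number": 123}] works while the bare object does not, which suggests the plugin expects an array of flat row objects. A hedged workaround sketch (excelexportjs and getColumns come from the question's Plnkr; nested values such as the test object would still need to be flattened into scalar columns):
// Wrap a plain object in an array, since the plugin appears to expect
// an array of row objects (an assumption based on the working case)
function toRows(json) {
  return Array.isArray(json) ? json : [json];
}

var dataset = toRows(testjsondata);
$("#dvjson").excelexportjs({
  containerid: "dvjson",
  datatype: 'json',
  dataset: dataset,
  columns: getColumns(dataset)
});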

How do you use require() within a map function?

Looking at the docs for CouchDB 1.6.1 here, there is mention that you can use the JS require(path) function. How do you do this? The documentation says path is "A CommonJS module path started from design document root".
My design doc is called _design/data. I have uploaded an attachment to this design doc called test.js, which can be accessed at /_design/data/test.js, and contains the following code:
exports.stuff = function() {
  this.getMsg = (function() {
    return 'hi';
  })();
}
But the following code in my map function:
function(doc) {
  try {
    var x = require('test.js');
  } catch (e) {
    emit('error', e);
  }
}
results in this error:
["error", "invalid_require_path", "Object has no property \"test.js\". {\"views\":{\"lib\":null},\"_module_cache\":{}}"]
It looks like require is looking up the path as a property of the design document object... but I don't understand why it can't find it if that's the case.
Looking at this link, which describes this feature in an older version of CouchDB, it says:
However, in the upcoming CouchDB 1.1.x views will be able to require modules provided they exist below the 'views' property (eg, 'views/lib/module')
And gives the following code example:
{
  "_id": "_design/example",
  "lib": {
    // modules here would not be accessible from view functions
  },
  "views": {
    "lib": {
      // this module is accessible from view functions
      "module": "exports.test = 'asdf';"
    },
    "commonjs": {
      "map": function (doc) {
        var val = require('views/lib/module').test;
        emit(doc._id, val);
      }
    }
  }
}
But this did not work for me on CouchDB 1.6.1. I get the error:
{message: "mod.current is null", fileName: "/usr/share/couchdb/server/main.js", lineNumber: 1137, stack: "([object Array],[object Object])#/usr/share/couchdb/server/main.js:1137\n([object Array],[object Object])#/usr/share/couchdb/server/main.js:1143\n([object Array],[object Object],[object Object])#/usr/share/couchdb/server/main.js:1143\n(\"views/lib/module\")#/usr/share/couchdb/server/main.js:1173\n([object Object])#undefined:3\n([object Object])#/usr/share/couchdb/server/main.js:1394\n()#/usr/share/couchdb/server/main.js:1562\n#/usr/share/couchdb/server/main.js:1573\n"
In your question you didn't provide the function as a string. It's not too easy to spot, but you must stringify functions before storing them in CouchDB (manually or by using .toString()). Caolan has that error in the post that you linked.
Using this example:
views: {
  lib: {
    foo: "exports.bar = 42;"
  },
  test: {
    map: "function(doc) { emit(doc._id, require('views/lib/foo').bar); }"
  }
}
Found in older CouchDB docs here: https://wiki.apache.org/couchdb/CommonJS_Modules
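Tying the answers together, here is a minimal sketch of building such a design document in JavaScript, stringifying the map function as the answer above requires (saving it, e.g. with an HTTP PUT to /db/_design/example, is assumed and not shown in the original):
// The map function is written as a normal function...
var mapFn = function (doc) {
  emit(doc._id, require('views/lib/foo').bar);
};

// ...but must be stored as a string in the design document
var ddoc = {
  _id: '_design/example',
  views: {
    lib: {
      foo: "exports.bar = 42;"
    },
    test: {
      map: mapFn.toString()
    }
  }
};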
I got an example working. Not sure what the difference was, really... I was running 'temp' views instead of saving them, but I don't know why that would have affected the require statement.

How to create a file object with a path in NodeJS?

I want to know if it is possible to create a file object (name, size, data, ...) in NodeJS from the path of an existing file. I know that it is possible on the client side, but I see nothing for NodeJS.
In other words, I want the same function to work in NodeJS:
function srcToFile(src, fileName, mimeType) {
  return (fetch(src)
    .then(function(res) { return res.arrayBuffer(); })
    .then(function(buf) { return new File([buf], fileName, {type: mimeType}); })
  );
}

srcToFile('/images/logo.png', 'logo.png', 'image/png')
  .then(function(file) {
    console.log(file);
  });
And the output will be like:
File {name: "logo.png", lastModified: 1491465000541, lastModifiedDate: Thu Apr 06 2017 09:50:00 GMT+0200 (Paris, Madrid (heure d’été)), webkitRelativePath: "", size: 49029, type:"image/png"…}
For those that are looking for a solution to this problem, I created an npm package to make it easier to retrieve files using Node's file system and convert them to JS File objects:
https://www.npmjs.com/package/get-file-object-from-local-path
This solves the lack of interoperability between Node's fs file system (which the browser doesn't have access to), and the browser's File object type, which Node cannot create.
3 steps are required:
Get the file data in the Node instance and construct a LocalFileData object from it
Send the created LocalFileData object to the client
Convert the LocalFileData object to a File object in the browser.
// Within Node.js (the require below assumes the package's named exports)
const { LocalFileData } = require('get-file-object-from-local-path');
const fileData = new LocalFileData('path/to/file.txt');

// Within browser code, after sending fileData over (e.g. as JSON):
// import { constructFileFromLocalFileData } from 'get-file-object-from-local-path'
const file = constructFileFromLocalFileData(fileData);
So, I searched through the File System API and other possibilities and found nothing.
I decided to create my own file-like object with JSON.
const fs = require('fs');
const path = require('path');
const mmm = require('mmmagic'); // MIME-type detection library used below

var imagePath = path.join('/images', 'logo.png');

if (fs.statSync(imagePath)) {
  var bitmap = fs.readFileSync(imagePath); // already returns a Buffer
  var bufferImage = Buffer.from(bitmap);

  var Magic = mmm.Magic;
  var magic = new Magic(mmm.MAGIC_MIME_TYPE);
  magic.detectFile(imagePath, function(err, result) {
    if (err) throw err;
    var datas = [{"buffer": bufferImage, "mimetype": result, "originalname": path.basename(imagePath)}];
    var JsonDatas = JSON.parse(JSON.stringify(datas));
    log.notice(JsonDatas); // log is the author's logger; console.log works too
  });
}
The output :
{
  buffer: {
    type: 'Buffer',
    data: [
      255,
      216,
      255
      ... 24908 more items,
      [length]: 25008
    ]
  },
  mimetype: 'image/png',
  originalname: 'logo.png'
}
I think it is probably not the best solution, but it gives me what I want. If you have a better solution, you are welcome to share it.
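As a possible better solution on current platforms: newer Node versions ship a built-in File class, so a sketch of the question's browser function could look like the following. This assumes roughly Node 20+, where File is exported from node:buffer; older versions would need the workarounds above:
const { File } = require('node:buffer');
const fs = require('node:fs/promises');

// Same shape as the browser srcToFile, but reading from disk
async function srcToFile(src, fileName, mimeType) {
  // A Buffer is a Uint8Array, so it is a valid File part
  const buf = await fs.readFile(src);
  return new File([buf], fileName, { type: mimeType });
}

srcToFile('/images/logo.png', 'logo.png', 'image/png')
  .then(file => console.log(file.name, file.size, file.type));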
You can use arrayBuffer (that's what I did to make a downloadable PDF) or createReadStream / createWriteStream under fs (the FileSystem module).

Typeahead and Bloodhound shows unrelated suggestions when 'remote' is used

When using Typeahead/Bloodhound with a remote option, and the local/prefetch results are under the "limit" (5), the suggestions shown are not related to the input. It looks like it's just showing the top of the result set, up to 5 entries.
Photo: 'Love' is the expected result, everything else is unrelated:
My code:
var keywords = [
  {"value": "Ambient"}, {"value": "Blues"}, {"value": "Cinematic"}, {"value": "Classical"}, {"value": "Country"},
  {"value": "Electronic"}, {"value": "Holiday"}, {"value": "Jazz"}, {"value": "Lounge"}, {"value": "Folk"},
  {"value": "Hip Hop"}, {"value": "Indie"}, {"value": "Pop"}, {"value": "Post Rock"}, {"value": "Rock"},
  {"value": "Singer-Songwriter"}, {"value": "Soul"}, {"value": "World"}, {"value": "Happy"}, {"value": "Sad"},
  {"value": "Love"}, {"value": "Angry"}, {"value": "Joy"}, {"value": "Delight"}, {"value": "Light"},
  {"value": "Dark"}, {"value": "Religious"}, {"value": "Driving"}, {"value": "Excited"}, {"value": "Yummy"},
  {"value": "Delicious"}, {"value": "Fun"}, {"value": "Rage"}, {"value": "Hard"}, {"value": "Soft"}
];

// Instantiate the Bloodhound suggestion engine
var keywordsEngine = new Bloodhound({
  datumTokenizer: function (datum) {
    return Bloodhound.tokenizers.whitespace(datum.value);
  },
  queryTokenizer: Bloodhound.tokenizers.whitespace,
  local: keywords,
  remote: {
    url: '/stub/keywords.json',
    filter: function (keywords) {
      // Map the remote source JSON array to a JavaScript object array
      return $.map(keywords, function (keyword) {
        return {
          value: keyword.value
        };
      });
    }
  },
  prefetch: {
    url: '/stub/keywords.json',
    filter: function (keywords) {
      // Map the remote source JSON array to a JavaScript object array
      return $.map(keywords, function (keyword) {
        return {
          value: keyword.value
        };
      });
    }
  }
});

// kicks off the loading/processing of `local` and `prefetch`
keywordsEngine.initialize();

$('#keyword-search-input').typeahead({
  hint: true,
  highlight: true,
  minLength: 1
},
{
  name: 'keyword',
  displayKey: 'value',
  // `ttAdapter` wraps the suggestion engine in an adapter that
  // is compatible with the typeahead jQuery plugin
  source: keywordsEngine.ttAdapter()
});
Upon further research, I think I need to filter remote suggestions manually, according to this thread on the Github Issues for Typeahead.js:
"So the idea is I guess that the data returned from remote should already be filtered by the remote, so no further filtering is done on that."
https://github.com/twitter/typeahead.js/issues/148
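In other words, the remote endpoint has to receive the query and filter on the server. A minimal sketch of the client side, assuming typeahead.js/Bloodhound 0.10.x, where %QUERY in the remote URL is replaced with the current input (a static keywords.json cannot react to it, so /search below assumes a server-side endpoint):
var keywordsEngine = new Bloodhound({
  datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'),
  queryTokenizer: Bloodhound.tokenizers.whitespace,
  local: keywords,
  remote: {
    // %QUERY is substituted with the user's input, so the server
    // returns only matching suggestions
    url: '/search?q=%QUERY'
  }
});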
I wish to go deeper into this question for future reference. Bear in mind that I am not a JavaScript expert, or any expert for that matter. The Bloodhound engine does not, on its own, accommodate constant dynamic interaction between the search input and a remote URL. Because of this, if you are using a static JSON file, the typeahead search box will only display the limit: if limit: 10, then the first 10 records of the JSON data are displayed, and the result does not change as the user types. Only the first record gets a suggestion based on user prompts, which is trivial.
However, if the remote source runs a query (e.g. a database query) that fetches the required results, as in this example, then the search box will be filled with relevant results each time it is populated.
So what if you have a large JSON file, generated from some database, and would rather not use prefetch? Obviously, for speed and efficiency, you will need to use remote. Using a PHP script, you would need to do something like:
$key = $_GET['key'];

$con = mysqli_connect("localhost", "root", "");
$db = mysqli_select_db($con, "database_name");

// Escape the user input before interpolating it into the query
// (better yet, use a prepared statement)
$key = mysqli_real_escape_string($con, $key);
$query = mysqli_query($con, "SELECT * FROM table WHERE column LIKE '%{$key}%'");

$rows = array();
while ($row = mysqli_fetch_assoc($query)) {
    $rows[] = $row;
}
echo json_encode($rows);
Here you are getting the value of the search parameter via GET, and since you have a connection to the database, your search pool will always be hydrated with relevant results as the user types.
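The client side then only needs to point the remote URL at that script, reusing the %QUERY substitution shown earlier (search.php is an assumed filename for the script above):
var engine = new Bloodhound({
  datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'),
  queryTokenizer: Bloodhound.tokenizers.whitespace,
  remote: {
    // The PHP script reads $_GET['key'] and returns matching rows as JSON
    url: 'search.php?key=%QUERY'
  }
});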
