Sample JSON data file:
{
  "Includes": {
    "Employees": {
      "14": {
        "name": "john",
        "age": 12,
        "activity": {
          "Count": 3502,
          "RatingValue": 5
        }
      },
      "17": {
        "name": "smith",
        "age": 23,
        "activity": {
          "Count": 232,
          "RatingValue": 5
        }
      }
    }
  }
}
The following JS was written to retrieve the nested documents and store them in an array:
var result = [];
db.details.find().forEach(function(doc) {
  var Employees = doc.Includes.Employees;
  if (Employees) {
    for (var key in Employees) {
      var Employee = Employees[key];
      var item = [];
      item.push(key);
      item.push(Employee.name);
      item.push(Employee.age);
      item.push(Employee.activity.Count);
      item.push(Employee.activity.RatingValue);
      result.push(item.join(","));
    }
  }
});
print(result);
How can we store the output of the array in a CSV with 2 rows (since the data contains 2 rows), by using mongoexport? The CSV output must be:
14,john,12,3502,5
17,smith,23,232,5
var csv=""; //this is the output file
for(var i=0;i<result.length;i++){//loop through output
csv+=result[i]+"\n"; //append the text and a newline
}
window.open('data:text/csv;' + (window.btoa?'base64,'+btoa(csv):csv)); //open in a new window, chrome will automatically download since it is a csv.
Change that final print(result); to the following:
print(result.join("\n"));
Then call your script and direct the output to a CSV file like so:
mongo --quiet "full-path-to-script.js" > "full-path-to-output.csv"
Note: The --quiet arg suppresses the standard Mongo header output (shell version and initial database).
I created a details collection, added your JSON document to it, and running the modified script resulted in the following CSV file content:
14,john,12,3502,5
17,smith,23,232,5
If you want a CSV header row as well, see my nearly identical answer to your nearly identical question here: https://stackoverflow.com/a/26310323/3212415
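For reference, a minimal sketch of printing a header row first in the same script (the column names here are my assumptions, matching the fields pushed above):
// Prepend an assumed header row before printing; adjust the names as needed.
result.unshift("id,name,age,count,ratingValue");
print(result.join("\n"));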
I have been working on the backend of my app. At this point, it can access all data in a database and output it. I'm trying to implement some queries so that the user can filter the content that is returned. My DAL/DAO looks like this:
let mflix // Creates a variable used to store a ref to our DB

class MflixDAO {
  static async injectDB(conn) {
    if (mflix) {
      return
    }
    try {
      mflix = await conn.db(process.env.JD_NS).collection("movies")
    } catch (e) {
      console.error('Unable to establish a collection handle in mflixDAO: ' + e)
    }
  }

  // Creates a query to fetch data from the collection/table in the DB
  static async getMovies({
    filters = null,
    page = 0,
    moviesPerPage = 20,
  } = {}) {
    let query
    if (filters) {
      // Code
      if ("year" in filters) {
        query = { "year": { $eq: filters["year"] } }
      }
      // Code
    }
    // Cursor represents the returned data
    let cursor
    try {
      cursor = await mflix.find(query)
    } catch (e) {
      console.error('Unable to issue find command ' + e)
      return { moviesList: [], totalNumMovies: 0 }
    }
    const displayCursor = cursor.limit(moviesPerPage).skip(moviesPerPage * page)
    try {
      const moviesList = await displayCursor.toArray() // Puts data in an array
      const totalNumMovies = await mflix.countDocuments(query) // Gets total number of documents
      return { moviesList, totalNumMovies }
    } catch (e) {
      console.error('Unable to convert cursor to array or problem counting documents ' + e)
      return { moviesList: [], totalNumMovies: 0 }
    }
  }
}

export default MflixDAO
Just so you know, I am using a sample database from MongoDB Atlas and Postman to test HTTP requests. All the data follows JSON format.
Anyway, when I execute a basic GET request, the program runs without any problems and all the data outputs as expected. However, if I execute something along the lines of
GET http://localhost:5000/api/v1/mflix?year=1903
Then moviesList returns an empty array [], but no error message.
After debugging, I suspect the problem lies either at cursor = await mflix.find(query) or at displayCursor = cursor.limit(moviesPerPage).skip(moviesPerPage * page), but the call stacks for those methods are so complex that I don't know what to even look for.
Any suggestions?
Edit: Here is an example of the document I am trying to access:
{
  "_id": "573a1390f29313caabcd42e8",
  "plot": "A group of bandits stage a brazen train hold-up, only to find a determined posse hot on their heels.",
  "genres": [
    "Short",
    "Western"
  ],
  "runtime": 11,
  "cast": [
    "A.C. Abadie",
    "Gilbert M. 'Broncho Billy' Anderson",
    "George Barnes",
    "Justus D. Barnes"
  ],
  "poster": "https://m.media-amazon.com/images/M/MV5BMTU3NjE5NzYtYTYyNS00MDVmLWIwYjgtMmYwYWIxZDYyNzU2XkEyXkFqcGdeQXVyNzQzNzQxNzI#._V1_SY1000_SX677_AL_.jpg",
  "title": "The Great Train Robbery",
  "fullplot": "Among the earliest existing films in American cinema - notable as the first film that presented a narrative story to tell - it depicts a group of cowboy outlaws who hold up a train and rob the passengers. They are then pursued by a Sheriff's posse. Several scenes have color included - all hand tinted.",
  "languages": [
    "English"
  ],
  "released": "1903-12-01T00:00:00.000Z",
  "directors": [
    "Edwin S. Porter"
  ],
  "rated": "TV-G",
  "awards": {
    "wins": 1,
    "nominations": 0,
    "text": "1 win."
  },
  "lastupdated": "2015-08-13 00:27:59.177000000",
  "year": 1903,
  "imdb": {
    "rating": 7.4,
    "votes": 9847,
    "id": 439
  },
  "countries": [
    "USA"
  ],
  "type": "movie",
  "tomatoes": {
    "viewer": {
      "rating": 3.7,
      "numReviews": 2559,
      "meter": 75
    },
    "fresh": 6,
    "critic": {
      "rating": 7.6,
      "numReviews": 6,
      "meter": 100
    },
    "rotten": 0,
    "lastUpdated": "2015-08-08T19:16:10.000Z"
  },
  "num_mflix_comments": 0
}
EDIT: It seems to be a datatype problem. When I request data by a string/varchar type field, the program returns the documents that contain that value. Example:
Input:
GET localhost:5000/api/v1/mflix?rated=TV-G
Output:
{
  "_id": "XXXXXXXXXX",
  // Data
  "rated": "TV-G"
  // Data
}
EDIT: It seems the problem has nothing to do with anything I've posted up to this point. The problem is in this piece of code:
let filters = {}
if (req.query.year) {
  filters.year = req.query.year // This line needs to be changed
}
const { moviesList, totalNumMovies } = await MflixDAO.getMovies({
  filters,
  page,
  moviesPerPage,
})
I will explain in the answer below.
OK, so the problem, as it turns out, is that when I make an HTTP request, the requested value is passed as a string. So in
GET http://localhost:5000/api/v1/mflix?year=1903
the value of year is registered by the program as a string. In other words, the DAO ends up looking for "1903" instead of 1903. Naturally, year = "1903" does not exist. To fix this, the line filters.year = req.query.year must be changed to filters.year = parseInt(req.query.year).
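Put together, a minimal sketch of the corrected filter-building code (same names as the snippet above):
let filters = {}
if (req.query.year) {
  // Coerce the query-string value ("1903") to the number 1903
  // so it matches the numeric year field in the documents.
  filters.year = parseInt(req.query.year)
}
const { moviesList, totalNumMovies } = await MflixDAO.getMovies({
  filters,
  page,
  moviesPerPage,
})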
I created code that exports data to an Excel-readable file. The code is working fine without errors, but my problem is that the Excel file has a lot of empty spaces.
As you can see in the image, the data for row 123 is put in column 7, not in the expected column.
Here's my code for exporting the data:
export function download() {
  var header = [];
  var finalData = []
  var group = [
    { "group_name": "123" },
    { "group_name": "123b" },
    { "group_name": "123ef" },
    { "group_name": "Accounts Payable" },
    { "group_name": "ADG JET TEAM" },
    { "group_name": "001 Approval" }
  ]
  var member = [
    { "001 Approval": "083817 - Ranjeet Kumar (ranjeet.kumar3#concentrix.com)" },
    { "001 Approval": "C01747 - Abid Shaikh (abid.shaikh1#concentrix.com)" },
    { "001 Approval": "C01747 - Abid Shaikh (abid.shaikh1#concentrix.com)" },
    { "123b": "C01747 - Abid Shaikh (abid.shaikh1#concentrix.com)" },
    { "123ef": "C01747 - Abid Shaikh (abid.shaikh1#concentrix.com)" }
  ]
  group.forEach(data => {
    header.push(data.group_name)
  })
  finalData.push(header)
  header.forEach(headerData => {
    var temp = []
    member.forEach(memberData => {
      if (headerData === Object.keys(memberData)[0]) {
        temp.push(memberData[Object.keys(memberData)[0]])
      } else {
        temp.push("")
      }
    })
    finalData.push(temp)
  })
  exportToCsv('export.csv', finalData)
}
The exportToCsv code is from here: https://jsfiddle.net/jossef/m3rrLzk0/
Open your CSV in a text editor and check the output: what are you using as the separator and delimiter? This can be a badly formatted CSV (fields not surrounded by quotes, for example), or a problem with Excel configuration (CSV delimiters and separators).
If you have data like this:
field1, field2, field 3 supercool, this is a phrase, ops
it can be a problem; it should be something similar to:
"field1", "field2", "field 3 supercool", "this is a phrase, ops"
In addition, try to open your CSV with Google Sheets (Docs), which will try to recognize the delimiters and separators automatically, and see if it works.
A common problem is that your CSV is separated by spaces or commas, but a phrase can contain a space or a comma, which will be interpreted as a separator and break the whole document.
It can be useful to take a look at that link: Write a string containing commas and double quotes to CSV
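As a rough sketch of that advice, a field can be quoted and its embedded quotes doubled before it is written to the CSV (this helper is my own illustration, not part of the linked fiddle):
// Quote a field if it contains a comma, a quote, or a newline,
// doubling any embedded quotes (standard CSV escaping).
function escapeCsvField(value) {
  var s = String(value);
  if (/[",\n]/.test(s)) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s;
}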
I have a situation where a user can upload a CSV file. This CSV file contains a lot of data, but I am only interested in 2 columns (ID and Date). At the moment, I am parsing the CSV using Papaparse:
Papa.parse(ev.data, {
  delimiter: "",
  newline: "",
  quoteChar: '"',
  header: true,
  error: function(err, file, inputElem, reason) { },
  complete: function(results) {
    this.parsed_csv = results.data;
  }
});
When this runs, this.parsed_csv holds objects keyed by the field names. So if I JSON.stringify it, the output is something like this:
[
  {
    "ID": 123456,
    "Date": "2012-01-01",
    "Irrelevant_Column_1": 123,
    "Irrelevant_Column_2": 234,
    "Irrelevant_Column_3": 345,
    "Irrelevant_Column_4": 456
  },
  ...
]
So my main question is: how can I get rid of the columns I don't need and produce a new CSV containing only the ID and Date columns?
Thanks
One thing I realised: is there a way to use dynamic variables? For instance, I am letting users select the columns they want to map. Now I need to do something like this:
let ID = this.selectedIdCol;
this.parsed_csv = results.data.map(element => ({ID: element.ID, Date: element.Date}));
It is saying that ID is unused, however. Thanks
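As an aside, building keys from variables needs JavaScript's computed property names, roughly like this sketch (selectedDateCol is a hypothetical second selection):
let ID = this.selectedIdCol;        // e.g. "ID"
let DateCol = this.selectedDateCol; // hypothetical: the user's chosen date column
// [ID] evaluates the variable, so its value becomes the object key.
this.parsed_csv = results.data.map(element => ({
  [ID]: element[ID],
  [DateCol]: element[DateCol]
}));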
let data = [
  {
    "ID": 123456,
    "Date": "2012-01-01",
    "Irrelevant_Column_1": 123,
    "Irrelevant_Column_2": 234,
    "Irrelevant_Column_3": 345,
    "Irrelevant_Column_4": 456
  },
  ...
]
Just produce the result by using the following code:
data = data.map(element => ({ID: element.ID, Date: element.Date}))
Now you have the desired columns; please generate a new CSV from these columns.
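For the CSV-generation step, one option is PapaParse's own unparse helper, which turns the array of objects back into CSV text (a sketch, assuming PapaParse is already loaded):
// Object keys become the header row; values become the data rows.
var csv = Papa.unparse(data);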
As Serrurier pointed out above, you should use the step/chunk callbacks to alter the data during parsing rather than mapping it after the parse, since by then the whole data set is already in memory.
// _ here is lodash; pick keeps only the listed columns for each row.
PapaParse.parse(file, {
  skipEmptyLines: true,
  header: true,
  step: (results, parser) => {
    results.data = _.pick(results.data, ['column1', 'column2']);
    return results;
  }
});
Note that if you are loading a huge file, you will have the whole file in memory right after parsing. Moreover, it may freeze the browser due to the heavy workload. You can avoid that by reading and discarding columns:
row by row
chunk by chunk.
You should read Papaparse's FAQ before implementing that. To sum up, you will store the required columns by extracting them in the step or chunk callbacks.
I'm new to Firebase. I would like to create an app (using Angular and the AngularFire library) that shows the current prices of some wares. I have a list of all available wares in the Firebase Realtime Database, in the following format:
"warehouse": {
"wares": {
"id1": {
"id": "id1",
"name": "name1",
"price": "0.99"
},
"id2": {
"id": "id2",
"name": "name2",
"price": "15.00"
},
... //much more stuff
}
}
I'm using ngrx in my app, so I think I can load all the wares into the store as an object rather than a list, for state tree normalization. I wanted to load the wares into the store this way:
this.db.object('warehouse/wares').valueChanges();
The problem is that the wares' prices will be refreshed every 5 minutes. The number of wares is huge (about 3000 items), so one response weighs about 700 kB. I know that this way I will exceed the data download limit in a short time.
I want to limit the loaded data to what is interesting for the user, so every user will be able to choose wares. I will store these choices in the following way:
"users": {
"user1": {
"id": "user1",
"wares": {
"id1": {
"order": 1
},
"id27": {
"order": 2
},
"id533": {
"order": 3
}
},
"waresIds": ["id1", "id27", "id533"]
}
}
And my question is:
Is there a way to get wares based on the current user's waresIds? I mean, is there a way to get only the wares whose ids are in a given array? E.g.
"wares": {
"id1": {
"id": "id1",
"name": "name1",
"price": "0.99"
},
"id27": {
"id": "id27",
"name": "name27",
"price": "0.19"
},
"id533": {
"id": "id533",
"name": "name533",
"price": "1.19"
}
}
for a query like:
this.db.object('warehouse/wares').contains(["id1", "id27", "id533"]).valueChanges();
I saw query operators in AngularFire like equalTo etc., but they all work on lists. I'm totally confused. Is there anyone who can help me? Maybe I'm making mistakes in the design of the app structure. If so, I am asking for clarification.
Because you are saving the ids inside the user, try it this way:
wares: Observable<any[]>;

// inside ngOnInit or a function
this.wares = this.db.list('users/currentUserId/wares').snapshotChanges().map(changes => {
  return changes.map(c => {
    const id = c.payload.key; // gets ids under users/wares/ids..
    let wares = [];
    // now get the wares
    this.db.list('warehouse/wares', ref => ref.orderByChild('id').equalTo(id)).valueChanges().subscribe(res => {
      res.forEach(data => {
        wares.push(data);
      })
    });
    return wares;
  });
});
There are two things you can do. I don't believe Firebase allows you to query for multiple equal values at once. You can, however, loop over the array of ids and query for each one directly.
I am assuming you have already queried for waresIds and stored those IDs in an array named idArray:
for (const id of idArray) {
  database.ref('warehouse/wares').orderByChild('id').equalTo(id).once('value').then((snapshot) => {
    console.log(snapshot.val());
  });
}
In order to use the above query efficiently, you'll have to index your data on id.
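For reference, that index would look roughly like this in the Realtime Database rules (a sketch; adjust the path to your own structure):
{
  "rules": {
    "warehouse": {
      "wares": {
        ".indexOn": ["id"]
      }
    }
  }
}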
Your second option would be to use the child_changed event to get only the updated data after your initial fetch. This should cut down drastically on the amount of data you need to download.
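A minimal sketch of that, using the Firebase JS SDK's child_changed event:
// Fires once per ware whose data changes after the initial load.
firebase.database().ref('warehouse/wares').on('child_changed', function(snapshot) {
  console.log('updated ware:', snapshot.val());
});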
Yes, you can get exactly the data that you want in Firebase.
See the official Firebase documentation about filtering data.
You need to get each waresID:
var waresID = // logic to get waresID
var userId = // logic to get userId
var ref = firebase.database().ref("wares/" + userId).child(waresID);
ref.once("value")
  .then(function(snapshot) {
    console.log(snapshot.val());
  });
This will return only the data related to that waresID and userId.
Note: this is JavaScript code; I hope it will work for you.
OK, so I am programming a web operating system using JS, with JSON for the file system. I have been looking online for JSON tutorials for about a week now, but I cannot find anything on writing JSON files from a web page. I need to create new objects in the file, not change existing ones. Here is my code so far:
{"/": {
"Users/": {
"Guest/": {
"bla.txt": {
"content":
"This is a test text file"
}
},
"Admin/": {
"html.html": {
"content":
"yo"
}
}
},
"bin/": {
"ls": {
"man": "Lists the contents of a directory a files<br/>Usage: ls"
},
"cd": {
"man": "Changes your directory<br/>Usage: cd <directory>"
},
"fun": {
"man": "outputs a word an amount of times<br/>Usage: fun <word> <times>"
},
"help": {
"man": "shows a list of commands<br/>Usage: help"
},
"clear": {
"man": "Clears the terminal<br/>Usage: clear"
},
"cat": {
"man": "prints content of a file<br/>Usage: cat <filename>"
}
},
"usr/": {
"bin/": {
},
"dev/": {
}
}
}}
I think the better solution is to stringify your JSON, encode it with base64, and then send it to a server-side script (a PHP page, for instance) which can save the file. See:
var json = JSON.stringify(myJson);
var encoded = btoa(json);
You can use Ajax to send it:
var xhr = new XMLHttpRequest();
xhr.open('POST', 'myServerPage.php', true);
xhr.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
// encodeURIComponent is needed because base64 output can contain '+' and '=',
// which are special characters in URL-encoded form data.
xhr.send('json=' + encodeURIComponent(encoded));
And on the server side:
$decoded = base64_decode($_POST['json']);
$jsonFile = fopen('myJson.json', 'w+');
fwrite($jsonFile, $decoded);
fclose($jsonFile);
I'd take the "/"s off the keys; then you can split on "/" and walk the tree by shifting values off the result. For example, the following code will create the full path if it doesn't already exist, while preserving the folder and its contents if it does.
var fs = {
  "bin": {
    "mkdir": function(inPath) {
      // Gets rid of the initial empty string due to the leading /
      var path = inPath.split("/").slice(1);
      var curFolder = fs;
      while (path.length) {
        curFolder[path[0]] = curFolder[path[0]] || {};
        curFolder = curFolder[path.shift()];
      }
    }
  }
}

fs.bin.mkdir("/foo/bar");
console.log(JSON.stringify(fs, function(key, val) {
  if (key === 'mkdir') return undefined;
  return val;
}, 2));
Output:
{
  "bin": {},
  "foo": {
    "bar": {}
  }
}
As others have mentioned, rather than building the JSON object by hand with strings, building it through code and then using JSON.stringify to get the final result would likely be simpler and avoid syntax errors (and frustration).
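For instance, a minimal sketch of building part of the structure above in code and serializing it once (the file contents are taken from the question):
// Build the file-system tree as a plain object, then serialize it.
var fsTree = { "/": { "Users/": {}, "bin/": {}, "usr/": {} } };
fsTree["/"]["Users/"]["Guest/"] = {
  "bla.txt": { "content": "This is a test text file" }
};
var json = JSON.stringify(fsTree, null, 2); // 2-space indented JSON string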