I want to convert a JSON string to YAML format in JavaScript. I have been searching Google for the last two days but haven't found any solution or library.
There are answers available for Java, but not for JavaScript.
Suppose I have a JSON string like this:
{
  "json": [
    "fat and rigid"
  ],
  "yaml": [
    "skinny and flexible"
  ],
  "object": {
    "array": [
      {
        "null_value": null
      },
      {
        "boolean": true
      },
      {
        "integer": 1
      }
    ]
  }
}
which I want converted to YAML:
json:
  - fat and rigid
yaml:
  - skinny and flexible
object:
  array:
    - null_value:
    - boolean: true
    - integer: 1
There is an online converter at http://www.json2yaml.com/, but how can I do the conversion in JavaScript?
You can use the yaml NPM package.
const YAML = require('yaml');

const jsonObject = {
  version: "1.0.0",
  dependencies: {
    yaml: "^1.10.0"
  },
  package: {
    exclude: [".idea/**", ".gitignore"]
  }
};

const doc = new YAML.Document();
doc.contents = jsonObject;
console.log(doc.toString());
Output
version: 1.0.0
dependencies:
  yaml: ^1.10.0
package:
  exclude:
    - .idea/**
    - .gitignore
Use the 'js-yaml' npm package! That's the one that is officially recognized by yaml.org. (If you want to be extra sure and this post is outdated, check yaml.org yourself to see which package it recommends.) I initially used 'json2yaml' instead and got strange parsing behavior when converting JSON (as a string) to YAML.
If someone still wants to convert JSON to YAML, you may use this JavaScript library:
https://www.npmjs.com/package/json2yaml
I packaged the answer into a bash script.
#!/bin/bash
# convert-json-to-yaml.sh

if [[ "$1" == "" ]]; then
  echo "You must provide a JSON file as an argument, i.e. ./convert-json-to-yaml.sh your_file.json"
  exit 1
fi

jsonFile=$1
yamlFile="$1.yaml"

if [[ "$2" != "" ]]; then
  yamlFile=$2
fi

python -c 'import sys, yaml, json; yaml.safe_dump(json.load(sys.stdin), sys.stdout, default_flow_style=False)' < "${jsonFile}" > "${yamlFile}"
Here is how to do it :)
import * as YAML from 'yaml';
const YAMLfile = YAML.stringify(JSONFILE);
And if you want to write it out as a YAML file, you can do that with fs:
import * as fs from 'fs';
fs.writeFileSync('./fileName.yaml', YAMLfile);
I'm using @babel/parser to read in JavaScript, modify it, and regenerate it with babel-generator. I need to do this while retaining the JavaScript's formatting, but it's being re-formatted. I can't see anything in the parser or generator options, or in the docs, that would stop it from doing this, so I'm a bit lost. Here's what I have so far.
import { parse } from '@babel/parser';
import generate from 'babel-generator';

var file = parse(script, {
  sourceType: 'module',
  ranges: true,
  tokens: true,
  plugins: ['nullishCoalescingOperator', 'typescript']
});

var output = generate(file, {
  retainLines: true,
  comments: true
});

console.log(output.code);
The test script I am passing it is below;
import This from 'that'
import That from './this'
export default (() => {
  return Object.assign(
    {},
    This,
    {
      That
    }
  )
})()
However, generate returns the script below:
import This from 'that'
import That from './this'
export default (() => {
  return Object.assign(
    {},
    This,
    {
      That });
})();
As you can see, it's changing the indentation of the return, collapsing the closing }) of the Object.assign, and adding semicolons. It will be reading scripts in lots of different formats and styles that I can't change, and I ideally don't want to have to provide an eslint file. Is there any way to get it to print the code back out with exactly the same formatting?
EDIT: the AST nodes contain the loc and range values, but they seem to be ignored. I'm not even sure whether the parser is recording them incorrectly or whether generate is ignoring them.
"loc": {
"start": {
"line": 10,
"column": 8
},
"end": {
"line": 10,
"column": 10
}
},
"range": [
319,
321
],
You need to use jscodeshift instead of Babel. That library is more suitable for this scenario:
https://github.com/facebook/jscodeshift
I have a problem with parsing an XML file.
I want to remove strings with characters like \t\n.
XML File: http://ftp.thinkimmo.com/home/immoanzeigen24/immo.xml
{
  trim: true,
  normalize: true,
  attrValueProcessors: [cleanValue, name => name],
  valueProcessors: [cleanValue, name => name]
}
cleanValue:
const cleanValue = value => {
  return value.toString().trim().replace("\t", "atest");
};
I tried cleaning it with a lot of regexes I've found online, but the value always stays like the following:
"verwaltung_objekt": {
"objektadresse_freigeben": "0",
"verfuegbar_ab": "nachaasjkdhkjshadjkashdAbsprache",
"bisdatum": "2016-01-15",
"min_mietdauer": "\n\t\t\t\t",
"max_mietdauer": "\n\t\t\t\t",
}
This is a difficult one!
I'd suggest following a simple strategy and pre-processing the xml data before you parse it.
This should resolve your issue at least.
If you just do something like:
function trimXml(xml) {
  return xml.replace(/>\s+</g, "><");
}

xml = trimXml(xml);
Then parse the trimmed xml data. You should see the output now looks like so:
"verwaltung_objekt": [
{
"objektadresse_freigeben": [
"1"
],
"abdatum": [
"2017-03-01"
],
"min_mietdauer": [
""
],
"max_mietdauer": [
""
]
}
],
Which is a bit more like what you want!
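The pre-processing step is easy to sanity-check in isolation (the sample string below is made up; only whitespace-only text nodes between tags are affected):

```javascript
// Strip inter-tag whitespace such as \n and \t before handing the XML to the parser.
function trimXml(xml) {
  return xml.replace(/>\s+</g, "><");
}

const sample = "<obj>\n\t<min_mietdauer>\n\t\t\t\t</min_mietdauer>\n</obj>";
console.log(trimXml(sample)); // whitespace-only text nodes disappear
```

Text nodes with real content, like <a> hi </a>, are untouched, because the regex only matches runs of pure whitespace sitting directly between a closing > and an opening <.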
I have downloaded my twitter data archive and have been provided with a .js file. My understanding is that this is a JavaScript file and not a JSON file (is that right?). How do I convert this to a JSON file (so I can then convert it to a Pandas df)? Or how do I import it into Python and convert it to a Pandas df?
I can find posts that describe how to import JSON data and convert it to a Pandas df but haven't found anything on importing .js files.
The file I want to import is called tweet.js and I want to import it into a variable called twitter_data
UPDATE:
Here is a snapshot of the first few rows of the file, to the end of the first 'tweet' object (I've added indentation for ease of reading but the \n are actually included in the string in the .js file):
'window.YTD.tweet.part0 = [ {\n
  "tweet" : {\n
    "retweeted" : false,\n
    "source" : "Twitter for iPhone",\n
    "entities" : {\n
      "hashtags" : [],\n
      "symbols" : [],\n
      "user_mentions" : [],\n
      "urls" : []\n },\n
    "display_text_range" : [ "0", "152" ],\n
    "favorite_count" : "1",\n
    "in_reply_to_status_id_str" : "1276854543486197760",\n
    "id_str" : "1277154367104262145",\n
    "in_reply_to_user_id" : "2735246778",\n
    "truncated" : false,\n
    "retweet_count" : "0",\n
    "id" : "1277154367104262145",\n
    "in_reply_to_status_id" : "1276854543486197760",\n
    "created_at" : "Sun Jun 28 08:18:24 +0000 2020",\n
    "favorited" : false,\n
    "full_text" : "#ThisUser #thatuser Yesterday I learned how to use pipelines and gridsearch with python",\n
    "lang" : "en",\n
    "in_reply_to_screen_name" : "ThisUser",\n
    "in_reply_to_user_id_str" : "2735246778"\n }\n},
I want to get all the "tweet" objects and for each of those I'm only interested in the "full_text" sub-object.
I have now read in the .js file into my Jupyter Notebook using the following code:
with open('tweet.js') as js_file:
    twitter_data = js_file.read()
Now I need to convert it to a dataframe of tweets. I tried running json.loads(twitter_data) but this results in the following error:
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
This is probably not the best solution, but it would work if you have one .js file or multiple files with the same structure.
For example you have this .js file:
data = """
window.data = [
{"some unimportant things": []},
{"jsonObjectYouNeed": [
{"text": "hello", "date": "2020-06-28"},
{"text": "stack", "date": "2020-06-28"},
{"text": "overflow", "date": "2020-06-28"}
]
}]
"""
You can extract the JSON object from the .js file using Python string-processing methods: .partition, slicing, and the others you can find in the string methods documentation:
parts = data.partition('{"jsonObjectYouNeed')
json_data = parts[1] + parts[2]
json_data = json_data[:-2]
Then you can convert it into python object using json.loads and load it to the pd.DataFrame:
import json
import pandas as pd

data = json.loads(json_data)['jsonObjectYouNeed']
df = pd.json_normalize(data)
df
Result:
       text        date
0     hello  2020-06-28
1     stack  2020-06-28
2  overflow  2020-06-28
I'm trying to convert a BSON file generated by a python script into a Javascript object. I am using https://www.npmjs.com/package/bson for the BSON package and using XMLHttpRequest to load the file.
In case it matters, this is my package-lock.json entry for bson.
"bson": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/bson/-/bson-4.0.4.tgz",
"integrity": "sha512-Ioi3TD0/1V3aI8+hPfC56TetYmzfq2H07jJa9A1lKTxWsFtHtYdLMGMXjtGEg9v0f72NSM07diRQEUNYhLupIA==",
"requires": {
"buffer": "^5.1.0",
"long": "^4.0.0"
},
"dependencies": {
"buffer": {
"version": "5.6.0",
"resolved": "https://registry.npmjs.org/buffer/-/buffer-5.6.0.tgz",
"integrity": "sha512-/gDYp/UtU0eA1ys8bOs9J6a+E/KWIY+DZ+Q2WESNUA0jFRsJOc0SNUO6xJ5SGA1xueg3NL65W6s+NY5l9cunuw==",
"requires": {
"base64-js": "^1.0.2",
"ieee754": "^1.1.4"
}
}
}
}
The generated file is produced by a very simple Python program:
import bson  # provided by pymongo

data = {
    'key0': 'a',
    'key1': [1, 2, 3],
    'key2': 'b',
    'key3': [
        {'k0': 'random', 'k1': 'string', 'k2': 'to use', 'k3': 3.145},
        {'k0': 'other', 'k1': 'values', 'k2': 'here', 'k3': 0.0001}
    ]
}

with open('test.bson', 'wb') as fp:
    encoded = bson.encode(data)
    fp.write(encoded)
The package being used for the python is pymongo (Version 3.10.1).
NOTE: I've updated the data dict. The first version worked fine when I used Dekel's solution. However, my actual data doesn't work. I modified it, and now it doesn't work with this error:
Uncaught Error: buffer length 199 must === bson size 253
deserialize$1
I can load the file; however, I cannot figure out the correct BSON calls to use in JS to turn it into a JavaScript object. I'm met with wrong-type errors (needs a Buffer), transpilation errors, or exceptions.
My code looks like the following (it uses Dekel's deserialize in his answer).
import { deserialize } from 'bson'

let xmlHttp = new XMLHttpRequest();
xmlHttp.onreadystatechange = () => {
  if (xmlHttp.status == 200 && xmlHttp.readyState == 4) {
    const buf = Buffer.from(xmlHttp.responseText, 'binary');
    const dat = deserialize(buf, {});
    console.log(dat);
  }
};
xmlHttp.open("GET", 'assets/test.bson');
xmlHttp.send();
If I don't pass the {} as the second argument to deserialize, it results in:
TS2554: Expected 2 arguments, but got 1
I am using Webpack and Typescript for development.
It's not clear whether the data is incorrect from the point of view of the JS BSON implementation or whether I am calling the JS BSON API incorrectly.
I can decode the file in python and bsondump also properly decodes the file.
I've created a GitHub repo which has more details as well as the test data. https://github.com/mobileben/test-bson-js
Some other details discovered.
When converting to a Buffer, I must include binary as the encoding, or it will not work right.
This JSON (note it is represented as a Python dict) {"key3": [{"k0": "random", "k1": "string", "k2": "to use", "k3": 3.145}, {"k0": "other", "k1": "values", "k2": "here", "k3": 0.0001}]} causes the uncaught error where the sizes don't match.
Float values, when using a dict that does not trigger the uncaught error, come out wrong. If I use integer values, they are fine.
For the last item
{"key3": {"k9": "here", "k0": 1, "k1": 2, "k2": 3}}
Will work. It results in (on the JS-side)
{"key3":{"k9":"here","k0":1,"k1":2,"k2":3}}
However
{"key3": {"k9": "here", "k0": 0.1, "k1": 0.2, "k2": 0.3}}
Results in (on the JS side)
{"key3":{"k9":"here","k0":1.8745098039215684,"k1":1.8745098039215684,"k2":1.825}}
Running bsondump on the same file yields:
{"key3":{"k9":"here","k0":{"$numberDouble":"0.1"},"k1":{"$numberDouble":"0.2"},"k2":{"$numberDouble":"0.3"}}}
Assuming you have the content of the file inside the data variable, you can use the BSON lib as follows:
import { deserialize } from 'bson';
import data from '!!raw-loader!assets/test.bson'
console.log(deserialize(Buffer.from(data)))
If you are using XMLHttpRequest:
import { deserialize } from 'bson';

let xmlHttp = new XMLHttpRequest();
xmlHttp.onreadystatechange = () => {
  if (xmlHttp.status == 200 && xmlHttp.readyState == 4) {
    console.log(deserialize(Buffer.from(xmlHttp.responseText)));
  }
};
xmlHttp.open("GET", 'assets/test.bson');
xmlHttp.send();
Suppose plugin c is only supported in recent versions of Node.
What would be the best way to load it conditionally?
module.exports = {
  plugins: [
    require("a"),
    require("b"),
    [require("c"), { default: false }] // only if node version > 0.11
  ]
};
Make sure you add the semver package as a dependency, then:
var semver = require("semver");

var plugins = [
  require("a"),
  require("b"),
];

if (semver.gt(process.version, "0.11.0")) {
  plugins.push(require("c"));
}

module.exports = {
  plugins: plugins
};
This code checks the Node version using process.version and appends the plugin to the list if it is supported.
If you want to make sure the major part of the version number is 0 and the minor part is greater than 11, you could use this:
var sem_ver = process.version.replace(/[^0-9\.]/g, '').split('.');

if (parseInt(sem_ver[0], 10) == 0 && parseInt(sem_ver[1], 10) > 11) {
  // load it
}
What you want to do is check the process object. According to the documentation, it will give you an object like so:
console.log(process.versions);
{ http_parser: '1.0',
  node: '0.10.4',
  v8: '3.14.5.8',
  ares: '1.9.0-DEV',
  uv: '0.10.3',
  zlib: '1.2.3',
  modules: '11',
  openssl: '1.0.1e' }
Then simply pull the node property out of that object and use it in a conditional in your config.
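For example, parsing the node property out of process.versions (a minimal sketch; supportsC is a hypothetical name):

```javascript
// process.versions.node looks like "0.10.4" or "18.17.0".
const [major, minor] = process.versions.node.split('.').map(Number);

// Plugin c needs node > 0.11 per the question.
const supportsC = major > 0 || (major === 0 && minor > 11);
console.log(supportsC);
```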
You could use process.version:
var subver = parseFloat(process.version.replace(/v/, ''));

module.exports = {
  plugins: [
    require("a"),
    require("b"),
  ].concat(subver > 0.11 ? [[require("c"), { default: false }]] : [])
};