Situation
I am trying to load multiple XML files (located on the server) without having to hard-code the file names. For this I am trying to use the d3.queue library https://github.com/d3/d3-queue.
I have adapted the XML-to-force-layout example (https://bl.ocks.org/mbostock/1080941) to my own needs, but there is one crucial flaw: I need to manually type in the name of the XML file that I want to load...
Reproduce
Given (adjusted example from http://learnjsdata.com/read_data.html) :
queue()
  .defer(d3.xml, "/mappings/Customer.hbm.xml")
  .defer(d3.xml, "/mappings/Actor.hbm.xml")
  .await(analyze);

function analyze(error, Customer, Actor) {
  if (error) { console.log(error); }
  // do stuff with Customer data, do stuff with Actor data
}
And given my implementation of processing an XML file:
d3.xml("mappings/Customer.hbm.xml", "application/xml", function(error, xml) {
  if (error) throw error;
  // do stuff with the data retrieved from Customer.hbm.xml
});
Question
How do I combine the two snippets above in such a way that I don't have to hard-code the locations of the XML files and can still pass all the parameters to the analyze function? Any nudge in the right direction would be much appreciated.
In pseudocode I have tried something like the following (but I can't get it to work):
a function to get all the names of the XML files from the mappings folder (probably with the Node.js fs.readdir or fs.readdirSync methods, but I am unsure of how that would work exactly)
for each xml: .defer(d3.xml, nameOfXml)
pass all the found names as parameters to the analyze function
In Java I would have done this with varargs, but I don't know how to do it in JS.
There are really two parts to this question:
How do I get a list of server-side files to client-side JavaScript?
The short answer is you don't, without a server-side API that can return that list. Depending on which backend you are using, you write a method that returns a JSON array of the files in your target directory. You call this first, get the response, and then process them all with queue:
d3.json('/get/list/of/xml/files', function(error, fileArray) {
  var q = d3.queue();
  fileArray.forEach(function(d) {
    q = q.defer(d3.xml, d);
  });
  q.await(analyze);
});
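For the server side, the exact implementation depends on your stack. As a rough sketch only, assuming a Node.js/Express backend and that the files live in a local mappings folder (both assumptions, not something d3 gives you), the listing endpoint could look roughly like this:
// Sketch only: assumes Express and a local "mappings" directory; adjust to your backend.
var express = require('express');
var fs = require('fs');
var path = require('path');

var app = express();

app.get('/get/list/of/xml/files', function(req, res) {
  fs.readdir(path.join(__dirname, 'mappings'), function(err, files) {
    if (err) { return res.status(500).json({ error: err.message }); }
    // keep only .xml files and return paths the client can pass straight to d3.xml
    var xmlFiles = files
      .filter(function(f) { return path.extname(f) === '.xml'; })
      .map(function(f) { return '/mappings/' + f; });
    res.json(xmlFiles);
  });
});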
How do I process a variable number of arguments in JavaScript?
This is actually very well supported in JavaScript.
function analyze(error) {
  if (error) { console.log(error); }
  // skip index 0, that's the error variable
  for (var i = 1; i < arguments.length; i++) {
    var xml = arguments[i];
    // do stuff with each xml document
  }
}
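If you can target ES2015, rest parameters are the closer analogue to Java's varargs; a minimal alternative sketch of the same function:
// Alternative sketch using ES2015 rest parameters instead of the arguments object
function analyze(error, ...xmls) {
  if (error) { console.log(error); }
  xmls.forEach(function(xml) {
    // do stuff with each xml document
  });
}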
Related
Please excuse the rookie question as I'm not a programmer :)
We're using Pentaho 8
I'm looking for a way to have JavaScript or Java read a directory and return the file names of any files that are older than a date that will be provided by a Pentaho parameter.
Here is what I currently have using a Modified Java Script Value step that only lists the directory contents:
var _getAllFilesFromFolder = function(dir) {
  var filesystem = require("fs");
  var results = [];
  filesystem.readdirSync(dir).forEach(function(file) {
    file = dir + '\\' + file; // the backslash has to be escaped
    var stat = filesystem.statSync(file);
    if (stat && stat.isDirectory()) {
      results = results.concat(_getAllFilesFromFolder(file));
    } else {
      results.push(file);
    }
  });
  return results;
};
Is JavaScript/Java the right way to do this?
There's a step called "Get file names". You just need to provide the path you want to poll. It also allows doing so recursively and showing only filenames that match a given filter, and the Filters tab lets you show only folders, only files, or both.
nsousa's answer would be the easiest: after you get your file list you can use a Filter rows step on the lastmodifiedtime returned from Get file names. That's 2 steps, or 3 if you want to format the returned date/time into something easier to sort/filter on. This is the approach I use, and it is generally faster than the transformations can keep up with.
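If you did want to stay in the Modified Java Script Value step instead, a rough sketch of filtering by modification date could build on the function above. Note this assumes a Node-style fs module is actually available in your scripting step (which may not be the case in Pentaho's engine) and that cutoffDate arrives as a Date built from your Pentaho parameter:
// Sketch only: reuses _getAllFilesFromFolder from above and filters on mtime.
// cutoffDate is assumed to be a Date derived from the Pentaho parameter.
var filesystem = require("fs");

function getFilesOlderThan(dir, cutoffDate) {
  return _getAllFilesFromFolder(dir).filter(function(file) {
    var stat = filesystem.statSync(file);
    return stat.mtime < cutoffDate;
  });
}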
I have another interesting case which I have never faced before, so I'm asking help from SO community and also share my experience with it.
The case || What we have:
A .csv file (exported from another SQL DB) with the following structure
(headers):
ID,SpellID,Reagent[0],Reagent[1..6],Reagent[7],ReagentCount[0],ReagentCount[1..6],ReagentCount[7]
You could also check the full .csv data file here, at my Dropbox.
My gist on GitHub, which helps you understand how mongoimport works.
What we need:
I'd like to receive the following structure (schema) to import into a MongoDB collection:
ID(Number),SpellID(Number),Reagent(Array),ReagentCount(Array)
6,898,[878],[1]
with ID, SpellID, and two arrays: in the first we store all Reagent IDs, like [0,1,2,3,4,5,6,7], from all the Reagent[n] columns, and the second array has the same length and represents the quantities of those Reagent IDs, from all the ReagentCount[n] columns.
OR
Transposed objects with the following structure (schema):
ID(Number),SpellID(Number),ReagentID(Number),Quantity/Count(Number)
80,2675,1,2
80,2675,134,15
80,2675,14,45
As you may see, the difference between the first example and this one is that every document in the collection represents one ReagentID and its quantity for a SpellID. So if one Spell_ID has N different reagents, there will be N documents in the collection, because, as we know, there can't be more than 7 unique Reagent_IDs belonging to one Spell_ID according to our .csv file.
I am working on this problem right now with the help of Node.js and npm i csv (or any other module for parsing CSV files), just to make my CSV file available for importing into my DB via mongoose. I'll be very thankful to anyone who can provide any relevant contribution to this case. But anyway, I will solve this problem eventually and share my solution in this question.
As for the first variant, I guess there should be a one-time script for mongoimport that could concat all the columns from Reagent[n] & ReagentCount[n] into two separate arrays like I mentioned above, via --fields, but unfortunately I don't know how, and there are no examples on SO or in the official Mongo docs relevant to it. So if you have enough experience with mongoimport, feel free to share it.
Finally, I solved my problem the way I wanted to, but without using mongoimport.
I used npm i csv and wrote a function for parsing my CSV file. In short:
const fs = require('fs');   // needed for readFileSync
const csv = require('csv'); // npm i csv

async function FuncName (path) {
  try {
    let eva = fs.readFileSync(path, 'utf8');
    csv.parse(eva, async function(err, data) {
      // console.log(data[0]); // this is the header row, if it exists
      for (let i = 1; i < data.length; i++) { // start from 1 because 0 is the header row; without headers, start from 0
        console.log(data[i][34]); // where i is the row number and 34 is the column index
      }
    });
  } catch (err) {
    console.log(err);
  }
}
It loops over the CSV file and exposes the data as arrays, which lets you operate on them however you want.
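Building on that parsing loop, a rough sketch of reshaping each row into the first target schema (ID, SpellID, Reagent array, ReagentCount array) and saving it through mongoose could look like the following. The column indexes and the Spell model/schema are assumptions about the actual .csv layout, so adjust them to the real file:
// Sketch only: column positions and the Spell model/schema are assumptions, not taken from the real file.
// Assumes mongoose.connect(...) has already been called elsewhere.
const fs = require('fs');
const csv = require('csv');
const mongoose = require('mongoose');

const Spell = mongoose.model('Spell', new mongoose.Schema({
  ID: Number,
  SpellID: Number,
  Reagent: [Number],
  ReagentCount: [Number]
}));

function importSpells(path) {
  const eva = fs.readFileSync(path, 'utf8');
  csv.parse(eva, async function(err, data) {
    if (err) { return console.log(err); }
    for (let i = 1; i < data.length; i++) { // skip the header row
      const row = data[i];
      const doc = { ID: Number(row[0]), SpellID: Number(row[1]), Reagent: [], ReagentCount: [] };
      // assumed layout: Reagent[0..7] in columns 2..9, ReagentCount[0..7] in columns 10..17
      for (let j = 0; j < 8; j++) {
        const reagent = Number(row[2 + j]);
        if (reagent) { // skip empty/zero reagent slots
          doc.Reagent.push(reagent);
          doc.ReagentCount.push(Number(row[10 + j]));
        }
      }
      await Spell.create(doc);
    }
  });
}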
I am running an Angular app and I am seeing the strangest effect ever...
I am calling a backend which returns JSON. I parse that JSON and build an object structure client side. It works perfectly in dev, but the exact same code behaves strangely on prod. See the inline code comments for hints. The only thing I can think of is that the data comes back different from prod...
I can't see what's wrong, as it's the exact same code, and it's driving me completely nuts, probably the worst thing I have ever seen in 10+ years of programming!
Basically the JSON structure is a list of objects; every object has a reference ID, and several objects are correlated by the same reference ID. I need a structure where I can access all objects with the same reference ID.
Maybe I'll make a fool of myself here, but I really can't see it... I just ran the data through two JSON validators and both say the data is valid.
app.factory('ItemService', ['ItemProvider', function(itemProvider) {
  var itemSrv;
  var obj_by_id = {}; // create empty object

  itemSrv = function(callback) {
    itemProvider.get_data()
      .success(function(data) { // callback ok, data has the json content
        for (var i = 0; i < data.length; i++) {
          var obj = data[i]; // the current object from the json
          if (!(obj.identificador in obj_by_id)) {
            obj_by_id[obj.identificador] = {}; // build a key in the object if not already available
          }
          obj_by_id[obj.identificador][obj.campo_nombre] = obj; // assign the object by keys
          console.log(obj_by_id); // HERE obj_by_id is ALWAYS EMPTY!!!! BUT ONLY ON PROD!!! On dev it works fine...
        }
        callback(obj_by_id); // Here I would get the whole structure, but it's empty on prod...
      })
      .error(function(data) {
        console.log("Error getting data from server");
      });
  };

  // factory function body that constructs shinyNewServiceInstance
  return itemSrv;
}]);
EDIT: console.log(data) right after success, on request
dev:
http://imgur.com/10aQsx2,rJJw2bb#0
prod:
http://imgur.com/10aQsx2,rJJw2bb#1
EDIT2: you can have a look at the returned data here (will remove this link later!): http://bibliolabs.cc/milfs/api.php/ison?id=2
I am concerned about all those \u unicode chars, could that be an issue?
Questions
How do I serve a JavaScript file dynamically? Specifically, the script keeps most of its body, but with some variables changeable (imagine an HTML Jade template, but for pure JavaScript).
Scenario
When a user or browser (an HTTP GET in general) visits /file.js passing the parameter api, e.g. /file.js?api=123456, I would like to output pure JavaScript where I can take that 123456 and put it inside my code, dynamically. The Content-Type is application/javascript.
Sample:
var api = #{req.query.api}; //Pseudo
//The rest of my javascripts template
...
From my main .js file, I have set up the route:
app.get('/file.js', function(req, res) {
  // Pseudo code that I would like to achieve
  var name = req.query.name;
  res.render('out_put_javascript_file_from_jade_file.jade', { name: name });
});
So when a person visits /file.js, the script file will be rendered differently based on the parameter api passed in the URL. The only dynamic way I can think of is using Jade, but it doesn't allow pure JavaScript templates. I believe there must be other solutions.
Please excuse my explanation. The problem is somewhat like this: How to generate a pure JavaScript file with Jade
If you want to do something quick and dirty, then you can do something like this (based on your example in the comments).
App init - read the .js template file and cache it:
// this should be async, but hey, not teaching you that part here yet
var fs = require('fs');
var fileJs = fs.readFileSync('file.js.template', 'utf8'); // read as a string so .replace works below
File.js:
(function() {
  $(window).on('load', function() {
    alert('Your api key is API_KEY_CONST');
  });
})();
Request:
GET /api/file.js?key=123
Router:
app.get('/api/file.js', function(req, res) {
  var key = req.query.key;
  key = fetchKeyFromDBSync(); // just to make it easier here, no async.
  var out = fileJs.replace('API_KEY_CONST', key); // replace the placeholder string, not an undefined variable
  res.setHeader('content-type', 'text/javascript');
  res.write(out);
  res.end();
});
Now, this is really dumb and you should not try it at home, but it simply demonstrates how to do what you wanted.
Edit:
Depending on the file length, you might perform a bit better if you put the chunks of the file into an array, like:
var fileChunks = ['(function(){ blablabla;', 'var myAPIKey=', 'KEY_PLACEHOLDER', '; alert (myAPIKey);', '})()']
So later when you're resolving it with the real API key, you join the file.
fileChunks[2] = '12345';
var responseData = fileChunks.join('');
res.write(responseData);
But your last-used API key is then held in the array. Not quite future proof, but it should work if you need something quick.
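If that shared state bothers you, a slightly safer variant (still just a sketch) is to leave the placeholder chunk untouched and build the joined string per request:
// Sketch: build the response per request instead of mutating the shared fileChunks array
app.get('/api/file.js', function(req, res) {
  var key = req.query.key;
  var out = fileChunks
    .map(function(chunk) { return chunk === 'KEY_PLACEHOLDER' ? key : chunk; })
    .join('');
  res.setHeader('content-type', 'text/javascript');
  res.end(out);
});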
I am learning jQuery with an MVC3 book. I find that JSON data is really easy to use, but it may not be safe.
Consider the scenario: say I have a CRM with sensitive customer information. Ajax returns a JSON array as search results. The search textbox's Ajax autocomplete also returns a JSON array of sensitive keywords from the database, and so on. They all use the GET method.
However, it is said that the GET method has vulnerabilities when passing around JSON array data:
http://haacked.com/archive/2009/06/25/json-hijacking.aspx
http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx
How do you jQuery experts out there go about fixing this issue? Please help.
--- EDIT: ---
@Gren. Awesome. Thank you. Based on your tips, here is what I figured out.
The normal autocomplete returns a JSON array,
and a modified one wraps the array in a JSON object.
Here is the code, assuming we have a global List named txtlst in the controller...
// normal one
public JsonResult AutoCompleteHelper1(string term) {
    //if (!Request.IsAjaxRequest()) return null;
    var lst = txtlst.Where(s => s.StartsWith(term)).ToList();
    var res = lst.Select(x => new { value = x }).ToList();
    return Json(res, JsonRequestBehavior.AllowGet);
}

//mod one
public JsonResult AutoCompleteHelper2(string term) {
    //if (!Request.IsAjaxRequest()) return null;
    var lst = txtlst.Where(s => s.StartsWith(term)).ToList();
    var res = lst.Select(x => new { value = x }).ToList();
    return Json(new { wrapper = res, name = "wrapper" }, JsonRequestBehavior.AllowGet);
}
}
and then in the .cshtml file...
<p>Auto Complete Example</p>
<input type="text" name="q" id="MyInput1" data-autocomplete-source="#Url.Action("AutoCompleteHelper1", "Home")"/>
<input type="text" name="q" id="MyInput2" data-autocomplete-source="#Url.Action("AutoCompleteHelper2", "Home")" />
and then in the .js file...
$(document).ready(function () {
  // normal autocomplete
  $("#MyInput1").autocomplete({ source: $("#MyInput1").attr("data-autocomplete-source") });
  // mod autocomplete with a wrap
  $("#MyInput2").autocomplete({
    source: function (req, add) {
      $.getJSON($("#MyInput2").attr("data-autocomplete-source"), req, function (data) {
        var suggestions = [];
        $.each(data.wrapper, function (i, o) {
          suggestions.push(o.value);
        });
        add(suggestions);
      });
    }
  });
});
--- EDIT 2: ---
Please ignore the comments telling me to use POST. They have not read the linked blog posts or do not understand the issue.
The other option is to wrap your JSON arrays within JSON objects. The article and the comments on it answer this question.
Edit:
From the article:
The fact that this is a JSON array is important. It turns out that a script that contains a JSON array is a valid JavaScript script and can thus be executed. A script that just contains a JSON object is not a valid JavaScript file.
If you wrap your JSON array in an object, {"myJsonArray":[{"name":"sensitive"},{"name":"data"}]}, an HTML script tag would not be able to execute it.
The security of an Ajax/JSONP/JSON call is exactly the same as the security of an HTTP call, since Ajax requests are HTTP requests. Nothing changes in how you handle it: you make sure the user is logged in and can access the information.
If you are worried about data being cached, use POST or set the proper no-cache headers on the backend.
EDIT:
One thing you can do to prevent JSON from being read is the infinite-loop trick: stick an infinite loop in front of the response, which means your own Ajax call has to strip it before using the data.
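As a rough sketch of that trick (the while(1); prefix and the URL are just illustrative): the server prepends the loop to the JSON, and your own client fetches the raw text and strips it before parsing:
// Sketch only: assumes the backend prefixes its JSON responses with "while(1);"
$.ajax({
  url: '/api/sensitive-data', // hypothetical endpoint
  dataType: 'text',           // fetch as raw text so jQuery does not try to parse it
  success: function (raw) {
    var data = JSON.parse(raw.replace(/^while\(1\);/, ''));
    // use data here
  }
});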
You can use keys; a third-party site would not have the keys needed to validate the request.
You can check referrers.
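For the key idea, one common client-side variant (a sketch, not part of the MVC code above) is to send an application-specific header with every Ajax request and have the backend reject requests without it; a plain cross-site script tag cannot set custom headers, which is also the spirit of the commented-out Request.IsAjaxRequest() check earlier.
// Sketch: "X-My-Api-Key" and apiKey are hypothetical; the backend must validate the header.
var apiKey = 'value rendered into the page by the server';
$.ajaxSetup({
  headers: { 'X-My-Api-Key': apiKey }
});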