Scraping data from an interactive chart - javascript

I am trying to retrieve data which is generating a chart in javascript on this page:
https://www.energy-charts.de/price.htm
I found the SVG elements which draw the lines, with their M and L path commands, but I don't know where to start looking for the JavaScript array which holds the actual data. (I am assuming there must be an array somewhere.)
I am thankful for any tips and hints on where I need to start looking for this data.

Short Answer
The data is stored in a JSON file at https://www.energy-charts.de/price/week_2019_21.json
Long Answer
If you open developer tools (F12), you can see a load of console.logs from the file price.js. Most of them are of no use, but the line "got chartTitle from JSON! _chartTitle: Electricity production and spot prices in Germany in week 21 2019" looks like it could be of use to us.
Opening up price.js and searching for "got chartTitle", I found a function named createChart, which appears to be loading JSON files. I am assuming these will be returned from an API of some sort and not stored directly in JS files.
Scrolling up from "got chartTitle", I noticed this line:
d3.json(filepath, function(error, json) {
To me, this is loading JSON from a file path. Searching for "filepath", I found it declared as a global variable. Typing filepath into the JavaScript console shows that its value is "./price/week_2019_21.json", so navigating to that URL (https://www.energy-charts.de/price/week_2019_21.json) should give you the data you are looking for!
This URL is calculated in the following code block:
if (defaultweek < 10) {
    filepath = "./price/week_" + defaultyear + "_0" + defaultweek + ".json"; // default file on first load
} else {
    filepath = "./price/week_" + defaultyear + "_" + defaultweek + ".json"; // default file on first load
}
The default values are set in energy-charts_default.js.
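If you want to pull the data programmatically rather than navigating to the URL by hand, here is a minimal sketch (assuming Node 18+ or a browser, where fetch is global; the zero-padding mirrors the if/else block from price.js quoted above):

async function fetchWeekData(year, week) {
    const paddedWeek = week < 10 ? "0" + week : String(week);
    const url = `https://www.energy-charts.de/price/week_${year}_${paddedWeek}.json`;
    const response = await fetch(url);
    if (!response.ok) throw new Error("Request failed: " + response.status);
    return response.json(); // the JSON structure is the site's own; inspect it before relying on any keys
}

fetchWeekData(2019, 21).then(data => console.log(data));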
Hope this helps!

Related

How to filter out non-json documents in MarkLogic?

I have a lot of data loaded in my database, where some of the loaded documents are not JSON files but just binary files. Correct data looks like this: "/foo/bar/1.json", but the incorrect data is in the format "/foo/bar/*". Is there a mechanism in MarkLogic, using JavaScript, where I can filter out this junk data and delete it?
PS: I'm unable to extract files with mlcp that have a "?" in the URI, and maybe that is why I get this error when I try to reload the data. Any way to fix that extract along with this?
If all of the document URIs contain a ? and are in that directory, then you could use cts.uriMatch()
declareUpdate();
for (const uri of cts.uriMatch('/foo/bar/*?*')) {
    xdmp.documentDelete(uri);
}
Alternatively, if you are looking to find the binary() documents, you can apply the format-binary option to a cts.search() with a cts.directoryQuery() and then delete them.
declareUpdate();
for (const doc of cts.search(cts.directoryQuery("/foo/bar/"), ['format-binary'])) {
    xdmp.documentDelete(fn.baseUri(doc));
}
They are probably being persisted as binary because there is no recognizable file extension when the URI ends with a question mark and querystring parameter values, i.e. 1.json?foo=bar instead of 1.json.
It is difficult to diagnose and troubleshoot without seeing what your MLCP job configs are and knowing more about what you are doing to load the data.
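Either way, before deleting anything it can help to dry-run the query and inspect the URIs that would be affected. A minimal sketch along the lines of the snippets above (run in Query Console without declareUpdate, so nothing is modified):

const uris = [];
for (const doc of cts.search(cts.directoryQuery("/foo/bar/"), ['format-binary'])) {
    uris.push(fn.baseUri(doc)); // collect the URI of each binary document
}
uris; // the last expression is what Query Console displays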

MongoImport csv combine/concat various columns to one array for import

I have another interesting case which I have never faced before, so I'm asking the SO community for help and will also share my experience with it.
The case || What we have:
A csv file (exported from other SQL DB) with such structure
(headers):
ID,SpellID,Reagent[0],Reagent[1..6],Reagent[7],ReagentCount[0],ReagentCount[1..6],ReagentCount[7]
You could also check a full CSV data file at my Dropbox.
My gist on GitHub, which helps you understand how MongoImport works.
What we need:
I'd like to receive such structure(schema) to import it into MongoDB collection:
ID(Number),SpellID(Number),Reagent(Array),ReagentCount(Array)
6,898,[878],[1]
with ID, SpellID, and two arrays: the first stores all Reagent IDs (e.g. [0,1,2,3,4,5,6,7]) from all the Reagent[n] columns, and the second is an array of the same length that represents the quantity of each Reagent ID, taken from all the ReagentCount[n] columns.
OR
A transposed objects with such structure (schema):
ID(Number),SpellID(Number),ReagentID(Number),Quantity/Count(Number)
80,2675,1,2
80,2675,134,15
80,2675,14,45
As you may see, the difference between the first example and this one is that every document in the collection represents one ReagentID and its quantity for a SpellID. So if one Spell_ID has N different reagents, there will be N documents in the collection; we know there can't be more than 7 unique Reagent_IDs belonging to one Spell_ID according to our CSV file.
I am working on this problem right now with the help of Node.js and npm i csv (or any other module for parsing CSV files), just to make my CSV file available for importing into my DB via mongoose. I'll be very thankful to anyone who can provide any relevant contribution to this case. Either way, I will solve this problem eventually and share my solution in this question.
As for the first variant, I guess there should be a one-time script for MongoImport that could concat all the columns from Reagent[n] & ReagentCount[n] into two separate arrays like I mentioned above, via -fields, but unfortunately I don't know how, and there are no examples relevant to it on SO or in the official Mongo docs. So if you have enough experience with MongoImport, feel free to share it.
Finally, I solved my problem the way I wanted to, but without using mongoimport.
I used npm i csv and wrote a function for parsing my CSV file. In short:
const fs = require('fs');
const csv = require('csv');

async function FuncName(path) {
    try {
        let eva = fs.readFileSync(path, 'utf8');
        csv.parse(eva, async function(err, data) {
            // data[0] holds the headers, if they exist
            for (let i = 1; i < data.length; i++) { // start from 1 because row 0 is the headers; without headers, start from 0
                console.log(data[i][34]); // i is the row number and 34 is the column index
            }
        });
    } catch (err) {
        console.log(err);
    }
}
It loops over the CSV file and exposes the data as an array of rows, which lets you work with the values however you want.
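From there, building the first target schema is a plain mapping step. Here is a sketch (my own addition, not part of the original solution): the column offsets assume the header order shown in the question (ID, SpellID, eight Reagent[n] columns, eight ReagentCount[n] columns), so adjust them to the real file.

function rowToDoc(row) {
    const reagents = row.slice(2, 10).map(Number);  // assumed to be Reagent[0..7]
    const counts = row.slice(10, 18).map(Number);   // assumed to be ReagentCount[0..7]
    const doc = { ID: Number(row[0]), SpellID: Number(row[1]), Reagent: [], ReagentCount: [] };
    for (let j = 0; j < reagents.length; j++) {
        if (reagents[j]) { // skip empty reagent slots, keeping the two arrays aligned
            doc.Reagent.push(reagents[j]);
            doc.ReagentCount.push(counts[j]);
        }
    }
    return doc;
}

Calling rowToDoc(data[i]) inside the loop above yields documents ready to insert via mongoose.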

Python Flask data feed from Pandas Dataframe, dynamically define with unique endpoint

Hi, I am building a web app with Flask (Python). I've got a problem here:
@app.route('/analytics/signals/<ticker_url>')
def analytics_signals_com_page(ticker_url):
    all_ticker = full_list
    ticker_name = com_name
    ticker = ticker_url.upper()
    pricerec = sp500[ticker_url.upper()].tolist()
    timerec = sp500[ticker_url.upper()].index.tolist()
    return render_template('company.html', all_ticker=all_ticker, ticker_name=ticker_name, ticker=ticker, pricerec=pricerec, timerec=timerec)
Here I am defining company pages based on the ticker in the URL, so each page will contain different content. Everything is fine up to ticker = ticker_url.upper(); it works perfectly. But pricerec and timerec cause problems.
sp500 is a pandas DataFrame whose columns are companies like "AAPL", "GOOG", "MSFT", and so forth (505 companies); the index holds timestamps, and the values are the prices at each time.
So for pricerec, I take the ticker_url, use it to get the specific company's prices, and turn them into a list. And timerec takes the index (the timestamps) and turns it into a list. I am passing these two variables into the company.html page.
But it causes an internal server error, and I do not know why.
My expectation was that when a user clicks a button that links to "~/analytics/signals/aapl", the company.html page would contain pricerec and timerec for me to draw a graph. But it didn't work like that; it causes an internal server error. I defined those two variables in the JavaScript as well, like I did for the other variables (all_ticker, ticker_name, and ticker).
Can anyone help me with this issue?
Thanks!
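Without the traceback it is hard to say what causes the 500 (the Flask debug log will show the exact line), but one thing worth sketching is how the two lists get embedded in company.html's script block. A hypothetical sketch, assuming the view first converts the timestamps to plain strings (e.g. timerec = [str(t) for t in sp500[ticker].index] in the route) so that nothing non-serializable reaches the template:

<script>
    // company.html sketch (illustrative, not the asker's actual template).
    // tojson is Flask's built-in Jinja filter for safely emitting JSON into JS.
    var pricerec = {{ pricerec | tojson }};
    var timerec = {{ timerec | tojson }};
</script>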

Dygraph not plotting graphs using csv link

I am not able to plot a graph using dygraphs with a CSV link.
If I use this jsfiddle, http://jsfiddle.net/eM2Mg/, it works. When I replace the data with the link, it just shows an empty graph. I tested in the debugger tool and I do get a proper response from the file. If I plot the graph using the same data from the file, but add the data in the JavaScript as static content like in the jsfiddle example, it works.
Things I tried:
1. I tried .txt, .csv, and a file without an extension; nothing worked.
2. I tried different data; whenever I insert the data statically in the JavaScript it works, so the data in the URL is definitely not incorrect.
3. When I checked the response for the URL in the debugger tool, I got the correct response.
HTML code:
<div id="graph"></div>
JavaScript:
g = new Dygraph(document.getElementById("graph"),
    // For possible data formats, see http://dygraphs.com/data.html
    "https://files.fm/down.php?i=8v88usam&n=testing_file_2.txt",
    {});
Your dates are in the wrong format; see the Dygraphs Data Format documentation.
Here are some valid date formats for CSV:
2009-07-12
2009/07/12
2009/07/12 12
2009/07/12 12:34
2009/07/12 12:34:56
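If you cannot change the file itself, you can fix the dates client-side before handing the CSV to Dygraph, since Dygraph also accepts raw CSV text as its data argument. A sketch, assuming a browser with fetch and that the first column holds the date; the reformatDate helper is hypothetical and must be adapted to whatever format the file actually uses:

function reformatDate(d) {
    // hypothetical: convert "DD/MM/YYYY" to "YYYY/MM/DD"; adjust to your actual format
    var parts = d.split("/");
    return parts[2] + "/" + parts[1] + "/" + parts[0];
}

fetch("https://files.fm/down.php?i=8v88usam&n=testing_file_2.txt")
    .then(function(r) { return r.text(); })
    .then(function(csv) {
        var fixed = csv.split("\n").map(function(line, i) {
            if (i === 0) return line; // keep the header row as-is
            var cols = line.split(",");
            cols[0] = reformatDate(cols[0]); // rewrite the date column
            return cols.join(",");
        }).join("\n");
        new Dygraph(document.getElementById("graph"), fixed, {});
    });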

How do I get CSV data into NetSuite?

I've got an update to my question.
What I really wanted to know was this:
How do I get CSV data into NetSuite?
Well, it seems I use the CSV import tool to create a mapping, and then use this call to import the CSV: nlapiSubmitCSVImport(nlobjCSVImport).
Now my question is: How do I iterate through the object?!
That gets me halfway: I get the CSV data, but I can't seem to find out how to iterate through it in order to manipulate the data. This is, of course, the whole point of a scheduled script.
This is really driving me mad.
@Robert H
I can think of a million reasons why you'd want to import data from a CSV. Billing, for instance, or the various reports on data any company keeps. I wouldn't want to keep this in the file cabinet, nor would I really want to keep the file at all. I just want the data; I want to manipulate it and I want to enter it.
Solution Steps:
Step 1: To upload a CSV file, we have to use a Suitelet script.
(Note: file - this field type is available only for Suitelets and will appear on the main tab of the Suitelet page. Setting the field type to file adds a file upload widget to the page.)
var fileField = form.addField('custpage_file', 'file', 'Select CSV File'); // adds a file upload widget to the form
var fileId = nlapiSubmitFile(file); // saves the uploaded file to the file cabinet and returns its internal id
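For context, a sketch of where that file object might come from in the Suitelet's POST handler (request.getFile and nlobjFile.setFolder are standard SuiteScript 1.0 calls; the folder id is purely illustrative):

// In the Suitelet's POST branch: retrieve the uploaded file from the form field
var file = request.getFile('custpage_file');
file.setFolder(123); // illustrative file-cabinet folder internal id
var fileId = nlapiSubmitFile(file);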
Step 2: Let's prepare to call a Restlet script and pass the file id to it.
var recordObj = new Object();
recordObj.fileId = fileId;
// Format input for Restlets for the JSON content type
var recordText = JSON.stringify(recordObj);//stringifying JSON
// Setting up the URL of the Restlet
var url = 'https://rest.na1.netsuite.com/app/site/hosting/restlet.nl?script=108&deploy=1';
// Setting up the headers for passing the credentials
var headers = new Array();
headers['Content-Type'] = 'application/json';
headers['Authorization'] = 'NLAuth nlauth_email=amit.kumar2@mindfiresolutions.com, nlauth_signature=*password*, nlauth_account=TSTDRV****, nlauth_role=3';
(Note: nlapiCreateCSVImport: This API is only supported for bundle installation scripts, scheduled scripts, and RESTlets)
Let's call the Restlet using nlapiRequestURL:
// Calling Restlet
var output = nlapiRequestURL(url, recordText, headers, null, "POST");
Step 3: Create a mapping using Import CSV Records, available at Setup > Import/Export > Import CSV Records.
Step 4: Inside the Restlet script, fetch the file id from the Restlet parameters. Use the nlapiCreateCSVImport() API and set its mapping to the mapping id created in step 3. Set the CSV file using the setPrimaryFile() function.
var primaryFile = nlapiLoadFile(datain.fileId);
var job = nlapiCreateCSVImport();
job.setMapping(mappingFileId); // Set the mapping
// Set File
job.setPrimaryFile(primaryFile.getValue()); // Fetches the content of the file and sets it.
Submit using nlapiSubmitCSVImport().
nlapiSubmitCSVImport(job); // We are done
There is another way we can get around this, although it is neither preferable nor something I would suggest (as it consumes a lot of API calls if you have a large number of records in your CSV file).
Let's say we don't want to use the nlapiCreateCSVImport API, so let's continue from step 4.
Just fetch the file Id as we did earlier, load the file, and get its contents.
var fileContent = primaryFile.getValue();
Split the lines of the file, then subsequently split the words and store the values into separate arrays.
var splitLine = fileContent.split("\n"); // splitting the file on the basis of lines
for (var lines = 1; lines < splitLine.length; lines++) { // start at 1 to skip the header row
    var words = splitLine[lines].split(","); // words stores all the values on a line
    for (var word = 0; word < words.length; word++) {
        nlapiLogExecution("DEBUG", "Words:", words[word]);
    }
}
Note: Make sure you don't have an additional blank line in your CSV file.
Finally, create the record and set field values from the arrays that we created above.
var myRec = nlapiCreateRecord('cashsale'); // Here you create the record of your choice
myRec.setFieldValue('entity', arrCustomerId[i]); // For example, arrCustomerId is an array of customer ID.
var submitRec = nlapiSubmitRecord(myRec); // and we are done
Fellow NetSuite user here. I've been using SuiteScripts for a while now, but I never saw the nlobjCSVImport object nor nlapiSubmitCSVImport. I looked in the documentation; it shows up, but there is no page describing the details. Care to share where you got the doc from?
With the doc for the CSVImport object, I might be able to provide some more help.
P.S. I tried posting this message as a comment, but the "Add comment" link didn't show up for some reason. Still new to SOF.
CSV to JSON:
To convert a CSV file to a JSON object or datatable, see https://code.google.com/p/jquery-csv/
If you know the structure of the CSV file, just do a for loop and map the fields to the corresponding setFieldValue calls.
Should be pretty straightforward.
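A minimal sketch of that loop, reusing the calls already shown above (the field names and column order are illustrative assumptions, not taken from the original answer):

// Assumes fileContent holds the CSV text loaded from the file cabinet.
var lines = fileContent.split("\n");
for (var i = 1; i < lines.length; i++) { // skip the header row
    var cols = lines[i].split(",");
    if (cols.length < 2) continue; // guard against trailing blank lines
    var rec = nlapiCreateRecord('cashsale'); // record type of your choice
    rec.setFieldValue('entity', cols[0]); // illustrative column-to-field mapping
    rec.setFieldValue('memo', cols[1]);
    nlapiSubmitRecord(rec);
}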
