When using Polkadot-JS App, a new account is saved in the polkadot-accounts directory with the following file format:
{"encoded":"DBSPyMN8LAWsoejHHMfqU2/m5YxxeGtD0HJpzQzyY44AgAAAAQAAAAgAAADug5JmpYwzc9oxUJXUY2zIWkZFQRtqoS3lgu/wdhRSLPwx5TKjaMRrIqrrSyO3uxRytoPmKT5wTDV/zGyh2S9xwozzqhQhBmJG7TJwA+oNqDKpQpj6cooWNSivRzKmpqPaMO+af0LPPOREvVGHESwBSf+xHMZ5fISIG97xbXhGXV89Oo4iEBSwpIhjz6+u7AiVtF2akpyxLkhqiIXC","encoding":{"content":["pkcs8","sr25519"],"type":["scrypt","xsalsa20-poly1305"],"version":"3"},"address":"14BrSo66a3zvZmTksQFBmtZYEHvyWGJEBQypVfDynPXbsz2T","meta":{"genesisHash":"0x91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3","isHardware":false,"name":"POLKY123TESTONLY","tags":[],"whenCreated":1665704506323}}
I would like to achieve the same result using the polkadot-js API.
When I add an account to the keyring, it is not there the next time I run the script. Is there some way to store all accounts in a file and then load them all when creating the keyring object?
const keyring = new Keyring({type: 'sr25519'});
It seems odd to me that there isn't an obvious way to do this. How does the App work? It loads all the accounts in the polkadot-accounts directory, so why is there no API call to achieve this? Or is there?
Also, if I wanted to create an account and then easily load it into the Polkadot-JS App, I would need it in the format above, so surely there must be a way to save an account in that format?
Basically, I'm looking for a method to create a backup file the same way the App does.
It seems like there should be a simple function for this.
To generate a backup file similar to the one created by the Polkadot-JS App, you need to use:
console.log(JSON.stringify(keyring.toJson(pair.address)));
Also, when creating the key pair, you need to add the genesisHash metadata:
const pair = keyring.addFromUri(mnemonic, { name: 'POLKA_KEY_1', genesisHash:'0x91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3', isHardware:false}, 'sr25519');
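Putting it all together, a rough sketch of the full "create backup file" / "restore from backup" round trip could look like the following (the file name, password and mnemonic handling here are illustrative choices, not something the App prescribes):

const fs = require('fs');
const { Keyring } = require('@polkadot/keyring');
const { cryptoWaitReady, mnemonicGenerate } = require('@polkadot/util-crypto');

async function main() {
  await cryptoWaitReady(); // the WASM crypto must be ready before sr25519 pairs can be created

  const keyring = new Keyring({ type: 'sr25519' });
  const mnemonic = mnemonicGenerate();
  const pair = keyring.addFromUri(mnemonic, {
    name: 'POLKA_KEY_1',
    genesisHash: '0x91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3',
    isHardware: false
  }, 'sr25519');

  // "Create backup file": encrypt the pair with a password and write the JSON to disk.
  // The resulting file has the same shape as the App export (encoded/encoding/address/meta).
  const backup = keyring.toJson(pair.address, 'my-password');
  fs.writeFileSync(`${pair.address}.json`, JSON.stringify(backup));

  // In a later run: restore the account from the backup file and unlock it for signing.
  const json = JSON.parse(fs.readFileSync(`${pair.address}.json`, 'utf8'));
  const restored = keyring.addFromJson(json);
  restored.unlock('my-password');
}

main().catch(console.error);

The exported JSON file should then also be importable into the Polkadot-JS App through its restore-from-JSON option.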
Imagine a website where users can generate content via JS.
For example:
The user clicks a button.
It sends a request to our API (not the user's API).
The API returns an object with specific fields.
We show a select with options defined by the user's code, or some calculated result based on the data we sent.
The idea is to give the user the ability to edit the visible content (using our structures; we know beforehand which fields in the returned object do what).
First solution, "developed" in 5 minutes:
The user clicks a button.
It sends all required data as context to our API.
We fetch the user-defined code from the database.
// here is the code which we write (not user) and we know this code is safe
const APP_CONTEXT = parseInput(); // this can be parameters from command line
const ourLibrary = require('ourLibrary');
// APP_CONTEXT is variable which contains data from frontend. We control data inside APP_CONTEXT, user can not write to it
// here is user defined code
const someVar = APP_CONTEXT['fieldDescribedInOurDocumentation'];
const anotherVar = APP_CONTEXT['anotherFieldFromDocumentation'];
ourLibrary.sendToFrontend(someVar + anotherVar);
In this very simple example, once the user clicks the button, we send a request to our API, the user's code is executed, and we show the result of the execution. ourLibrary abstracts away how the handling is completed.
The main problem, as I see it, is security. I am thinking about using a restricted Node.js process: no network access, no filesystem access.
Is it possible to deny any import/require in a Node.js process? I want to let the user call only built-in JS functions (Math.min, Math.max, new Date(), +, -), declare functions, and so on, so that it works like a sophisticated calculator. We also need a way to send the result back to the frontend, for example via RabbitMQ + Node.js + WebSockets; a simple console.log would also do if that is a problem.
A possible solution (not secure, of course) using the Node.js interpreter, where we execute the interpreter every time the action is required:
const APP_CONTEXT = parseInput();
const ourLibrary = require('ourLibrary');
const usersCode = getUsersCode();
eval(usersCode);
Inside usersCode the user calls ourLibrary.sendToFrontend to produce the result. But this solution allows the user to use any built-in Node.js functionality, like const fs = require('fs'). Of course access will also be restricted at the OS level (SELinux or similar), but can I configure Node.js to run as a plain JS interpreter? Or is there some other JS interpreter that is safe to use? Safe means: only arithmetic, the Date and Math functions and so on; no filesystem access, no network access.
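For illustration, the kind of restricted evaluation I have in mind could be sketched with Node's built-in vm module, which runs code against a context object we construct, so require, process and the filesystem are simply not in scope unless we expose them. (As far as I know, the Node docs state that vm is not a security mechanism on its own, so the OS-level restrictions would still be needed; the allow-list of globals below is just an assumption.)

const vm = require('vm');

const APP_CONTEXT = parseInput();   // data we control, coming from the frontend
const usersCode = getUsersCode();   // user-defined code fetched from the database

const results = [];
const sandbox = {
  // only what we explicitly expose is visible to the user's code
  Math: Math,
  Date: Date,
  APP_CONTEXT: APP_CONTEXT,
  ourLibrary: { sendToFrontend: function (value) { results.push(value); } }
};

// the timeout guards against runaway loops in the user's code
vm.runInNewContext(usersCode, sandbox, { timeout: 100 });

console.log(results); // forward to the frontend, e.g. via RabbitMQ + WebSockets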
I successfully followed Microsoft's tutorial to create an extension.
I'm trying to get all the work items of a certain sprint, but to be honest, I'm lost...
I'm not sure what to look for - I have the VSS object, with which I can require additional services (such as TFS/WorkItemTracking/Services or TFS/WorkItemTracking/RestClient).
I found some examples like this one, but couldn't find an API to retrieve or query work items.
Do I need a JS object for that, or is it accomplished via some REST call?
You are nearly there.
You need the WIT RestClient (assuming you are using TypeScript):
import { WorkItemTrackingHttpClient, getClient } from "TFS/WorkItemTracking/RestClient";
With that you can do
const witClient = getClient() as WorkItemTrackingHttpClient;
and then
const result = await witClient.queryByWiql({ query: query });
The WorkItemTrackingHttpClient is all you need to manipulate work items.
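Putting the pieces together, a minimal sketch for pulling all work items of a sprint via a WIQL query could look like this (the project name and iteration path are placeholders you would replace with your own):

import { WorkItemTrackingHttpClient, getClient } from "TFS/WorkItemTracking/RestClient";

async function getSprintWorkItems(): Promise<void> {
    const witClient = getClient() as WorkItemTrackingHttpClient;

    // WIQL query filtered on the sprint's iteration path (placeholder values)
    const query =
        "SELECT [System.Id] FROM WorkItems " +
        "WHERE [System.TeamProject] = 'MyProject' " +
        "AND [System.IterationPath] = 'MyProject\\Sprint 1'";

    const result = await witClient.queryByWiql({ query: query });

    // The query result only contains work item references; fetch the full items by id
    const ids = result.workItems.map(wi => wi.id);
    const workItems = await witClient.getWorkItems(ids);
    workItems.forEach(wi => console.log(wi.id, wi.fields["System.Title"]));
}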
EDIT: You could also have a look at the new SDK and API. Unfortunately it's lacking a lot on the documentation side, although there are some samples.
To query work items, you can also check this page for the WorkItemTrackingHttpClient2_2 client API.
IPromise<Contracts.WorkItemQueryResult> queryById(id, project, team)
IPromise<Contracts.WorkItemQueryResult> queryByWiql(wiql, project, team)
There is also an example on the Microsoft docs site of how to get the WorkItemTrackingHttpClient and call the API.
I'm a relatively new JavaScript programmer. I'm experimenting with the Marvel API (I need to access the images for a project) and having a little trouble wrapping my head around the requirements.
As I understand it, you need to pass a hash and a ts (timestamp, I presume) when calling the API from a server-side app. But I don't see in the documentation that this is required when using a client-side app.
I tried to do some basic endpoint testing with Insomnia and I receive the message "You must provide a hash.". Apparently I need the hash for client-side access as well?
I have seen some NodeJS examples that show you how to generate the hash (for example, https://www.raymondcamden.com/2014/02/02/Examples-of-the-Marvel-API), but nothing for the client side (that I could find). I also don't know how I would generate this within Insomnia (or Postman). Any pointers in the right direction would be appreciated.
I'd also like to ask what role the authorized domains play when accessing the Marvel API from a local machine. Do I need to add localhost to this list?
Thanks for any help!
Follow these steps:
Pick an API endpoint. eg: https://gateway.marvel.com:443/v1/public/characters
Use a query value for ts. ts can be a timestamp or any other long string.
eg: ts=thesoer
Generate an MD5 hash of ts+privatekey+publickey through code or, preferably, online. eg: md5(ts + privKey + pubKey)
For md5 hash: http://www.md5.cz/
Join the dots. URL?ts=val&apikey=key&hash=md5Hash.
eg. https://gateway.marvel.com:443/v1/public/characters?ts=thesoer&apikey=001ac6c73378bbfff488a36141458af2&hash=72e5ed53d1398abb831c3ceec263f18b
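The same steps in code, as a minimal sketch using Node's built-in crypto module (the keys are placeholders; since the hash is computed from your private key, you would normally do this on a server rather than in browser code):

const crypto = require('crypto');

const publicKey = 'your_public_key';    // placeholder
const privateKey = 'your_private_key';  // placeholder

const ts = Date.now().toString();
// hash = md5(ts + privateKey + publicKey), exactly as in the steps above
const hash = crypto.createHash('md5').update(ts + privateKey + publicKey).digest('hex');

const url = `https://gateway.marvel.com/v1/public/characters?ts=${ts}&apikey=${publicKey}&hash=${hash}`;
console.log(url); // paste into Insomnia/Postman, or request it with fetch()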
Add a pre-request script to your Postman collection:
var pubkey = "your_public_key";
var pvtkey = "your_private_key";
var ts = new Date().getTime();
pm.environment.set("ts", ts)
pm.environment.set("apikey", pubkey)
var message = ts+pvtkey+pubkey;
var a = CryptoJS.MD5(message);
pm.environment.set("hash", a.toString())
And then you can make your calls like this:
https://gateway.marvel.com/v1/public/characters?ts={{ts}}&apikey={{apikey}}&hash={{hash}}
See this collection.
Regarding your authorized domains, add your public IP.
I'm currently using AWS's JavaScript SDK to launch custom EC2 instances, and so far so good.
But now, I need these instances to be able to run some tasks when they are created, for example, clone a repo from Github, install a software stack and configure some services.
This is meant to emulate a similar behaviour I have for local virtual machine deployment. In this case, I run some provisioning scripts with Ansible that get the job done.
For my use case, which would be the best option amongst AWS's different services to achieve this using AWS's Javascript SDK?
Is there any way I could have a template script to which I pass some variables obtained at runtime, to execute some tasks on the instance I just created? I read about user-data, but I can't figure out how it ties in with AWS's SDK. Also, it doesn't seem to be customisable.
At the end of the day, I think I need a way to use the SDK to do this:
"On the newly created instance, run this script that is stored in such place, replacing these
placeholder values in the script with these I'm giving you now"
Any hints?
As Mark B. stated, UserData is the way to go for executing commands at instance launch. Since you tagged the question with javascript, here's an example of passing it in the ec2.runInstances call:
let AWS = require('aws-sdk')
let ec2 = new AWS.EC2({region: 'YOUR_REGION'})
// Example commands to create a folder, a file and delete it
let commands = [
  '#!/usr/bin/env bash',
  'mkdir /home/ubuntu/test',
  'touch /home/ubuntu/test/examplefile',
  'rm -rf /home/ubuntu/test'
];
let params = {
  // ...YOUR PARAMS HERE...
  // UserData must be Base64-encoded for the user-data interpreter to execute it on first boot
  UserData: Buffer.from(commands.join('\n')).toString('base64')
};
ec2.runInstances(params).promise().then(res => { console.log(res); });
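To cover the "template script with runtime values" part of the question, a common approach is to keep the provisioning script as a template and interpolate the runtime values before Base64-encoding it. A minimal sketch (the repository URL, paths and variable names are made up for illustration):

// Values only known at runtime (illustrative)
const repoUrl = 'https://github.com/my-org/my-repo.git';
const environment = 'staging';

// Fill the placeholders of the provisioning script before handing it over as UserData
const userDataScript = [
  '#!/usr/bin/env bash',
  `git clone ${repoUrl} /opt/app`,
  `echo "ENV=${environment}" >> /opt/app/.env`,
  'cd /opt/app && ./install.sh'
].join('\n');

const templatedParams = {
  // ...same launch parameters as above...
  UserData: Buffer.from(userDataScript).toString('base64')
};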
When you launch the new instances you can provide the user-data at that time, in the same AWS SDK/API call. That's the best place to put any server initialization code.
The only other way to kick off a script on the instance via the SDK is via the SSM service's Run Command feature. But that requires the instance to already have the AWS SSM agent installed. This is great for remote server administration, but user-data is more appropriate for initializing an instance on first boot.
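If you do go the SSM Run Command route for tasks after launch, a minimal sketch with the JavaScript SDK might look like this (the region, instance id and commands are placeholders, and the instance needs the SSM agent plus an instance profile that permits SSM):

const AWS = require('aws-sdk');
const ssm = new AWS.SSM({ region: 'YOUR_REGION' });

// Run an ad-hoc shell script on an already-running, SSM-managed instance
ssm.sendCommand({
  DocumentName: 'AWS-RunShellScript',
  InstanceIds: ['i-0123456789abcdef0'],            // placeholder instance id
  Parameters: { commands: ['echo hello from SSM'] }
}).promise().then(res => console.log(res.Command.CommandId));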
I am using Parse.com with my iPhone app.
I ran into a problem earlier where I was trying to add the currently logged-in user to another user's PFRelation key/column called "friendsRelation", which is basically the friends list.
The only problem is that you are not allowed to save changes to any user besides the one that is currently logged in.
I then learned that there is a workaround: using the "master key" with Parse Cloud Code.
I ended up adding the code here to my Parse Cloud Code: https://stackoverflow.com/a/18651564/3344977
This works great and I can successfully test this and add an NSString to a string column/key in the Parse database.
However, I do not know how to modify the Parse Cloud Code to let me add a user to another user's PFRelation column/key.
I have been trying everything for the past two hours with the Parse Cloud Code I linked to above and could not get anything to work. Then I realized that my problem is with the actual Cloud Code, not with how I'm calling it from Xcode, because, as I said, I can successfully add an NSString to a string column for testing purposes.
My problem is that I do not know JavaScript and don't understand the syntax, so I don't know how to change the Cloud Code, which is written in JavaScript.
I need to edit the Parse Cloud Code that I linked to above, and which I will also paste at the end of this question, so that I can add the currently logged-in PFUser object to another user's PFRelation key/column.
The code that I would use to do this in objective-c would be:
[friendsRelation addObject:user];
So I am pretty sure it is the same as just adding an object to an array, but as I said, I don't know how to modify the Parse Cloud Code because it's in JavaScript.
Here is the Parse Cloud Code:
Parse.Cloud.define('editUser', function(request, response) {
    var userId = request.params.userId,
        newColText = request.params.newColText;

    var User = Parse.Object.extend('_User'),
        user = new User({ objectId: userId });

    user.set('new_col', newColText);

    Parse.Cloud.useMasterKey();
    user.save().then(function(user) {
        response.success(user);
    }, function(error) {
        response.error(error);
    });
});
And here is how I would call it from Xcode using Objective-C:
[PFCloud callFunction:@"editUser" withParameters:@{
    @"userId": @"someuseridhere",
    @"newColText": @"new text!"
}];
Now it just needs to be modified for adding the current PFUser to another user's PFRelation column/key, which I am pretty sure is technically just adding an object to an array.
This should be fairly simple for someone familiar with javascript, so I really appreciate the help.
Thank you.
I would recommend that you rethink your data model and extract the followings out of the user table. When you plan a data model, especially for a NoSQL database, you should think about your queries first and plan your structure around them. This is especially true for mobile applications, as server connections are costly and often introduce latency issues if your app performs lots of connections.
Storing followings in the user class makes it easy to find who a person is following. But how would you solve the task of finding all users who follow YOU? You would have to check every user to see if you are in their followings relation. That would not be an efficient query, and it does not scale well.
When planning a social application, you should build for scalability. I don't know what kind of social app you are building, but imagine if the app went ballistic and became a rapidly growing success. If you didn't build for scalability, it would quickly fall apart, and you would risk losing everything because the app suddenly became sluggish and therefore unusable (people have almost zero tolerance for waiting on mobile apps).
Forget previous priorities about consistency and normalization, and design for scalability.
For storing followings and followers, use a separate "table" (Parse class) for each of those two. For each user, store an array of all usernames (or objectIds) they follow. Do the same for followers. This means that when YOU choose to follow someone, TWO tables need to be updated: you add the other user's username to the array of who you follow (in the followings table), and you also add YOUR username to the array in the other user's row in the followers table.
Using this method, getting a list of followers and followings is extremely fast.
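As a rough illustration of that idea in Parse Cloud Code (the class names Followings and Followers, the users array column, and the assumption that each user already has a row in both classes are all choices made for this sketch):

Parse.Cloud.define('follow', function(request, response) {
    var followerId = request.user.id,            // the currently logged-in user
        followeeId = request.params.followeeId;  // the user being followed

    Parse.Cloud.useMasterKey();

    // Add the followee to the follower's row in the Followings class...
    var followingsQuery = new Parse.Query('Followings');
    followingsQuery.equalTo('userId', followerId);
    followingsQuery.first().then(function(followings) {
        followings.addUnique('users', followeeId);
        return followings.save();
    }).then(function() {
        // ...and add the follower to the followee's row in the Followers class
        var followersQuery = new Parse.Query('Followers');
        followersQuery.equalTo('userId', followeeId);
        return followersQuery.first();
    }).then(function(followers) {
        followers.addUnique('users', followerId);
        return followers.save();
    }).then(function() {
        response.success('ok');
    }, function(error) {
        response.error(error);
    });
});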
Have a look at this example implementation of Twitter for the Cassandra NoSQL database:
https://github.com/twissandra/twissandra