Can I send transactions async using polkadot-js - javascript

I have walked through the official document and found a page about how to transfer using polkadot-js https://polkadot.js.org/docs/api/examples/promise/make-transfer
const transfer = api.tx.balances.transfer(BOB, 12345);
const hash = await transfer.signAndSend(alice);
I want to know if I can split the signAndSend method into two steps and execute them on different machines, e.g. compute the signature on a client machine, in the browser:
const transfer = api.tx.balances.transfer(BOB, 12345);
const signature = await transfer.signAsync(alice);
and then, on the server side, send the transfer transaction:
const mockSigner = createMockSigner(signature); // signature is computed on the client side and sent to the server over HTTP
const transfer = api.tx.balances.transfer(BOB, 12345);
const res = transfer.send({signer: mockSigner});
The above example doesn't work; I just want to convey the idea of signing and sending on different machines.

Signing a transaction on one computer and sending it from a second computer is definitely possible.
PolkadotJS Tools contains a method for building and signing a transaction offline. You can find the source here. Please note that building the transaction in the browser will still require access to a polkadot node (the endpoint in the code).
The signer sendOffline command has the exact same API, but will not
broadcast the transaction. submit and sendOffline must be connected to
a node to fetch the current metadata and construct a valid
transaction.
Therefore, you'd need to run a light client in the browser in order to have access to current block information, or attach to some other node endpoint outside the browser.

The offline sign version:
https://gist.github.com/xcaptain/4d190232411dcf27441d9fadd7ff6988
The online sign version:
const transfer = api.tx.balances.transfer(BOB, 12345);
const signedExtrinsic = (await transfer.signAsync(alice)).toJSON();
await api.rpc.author.submitExtrinsic(signedExtrinsic);
I don't know what the difference is, but they both work.
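To make the split between the two machines concrete: the client can serialize the signed extrinsic to a hex string with toHex(), send it over HTTP, and the server can pass that hex straight to api.rpc.author.submitExtrinsic. The polkadot-js calls are shown in comments (they need a live api instance); isSignedTxHex is a made-up helper the server could use to sanity-check the payload before submitting.

```javascript
// Client side (needs an api instance and a keypair):
//   const transfer = api.tx.balances.transfer(BOB, 12345);
//   const signed = await transfer.signAsync(alice);
//   const txHex = signed.toHex(); // "0x..." string, safe to POST over HTTP
//
// Server side (needs its own api instance against the same chain):
//   const txHash = await api.rpc.author.submitExtrinsic(txHex);

// Hypothetical server-side sanity check before submitting:
// a serialized extrinsic is a 0x-prefixed hex string.
function isSignedTxHex(payload) {
  return typeof payload === "string" && /^0x[0-9a-fA-F]+$/.test(payload);
}
```

Note that both machines need metadata from a node to construct and decode the transaction, which is why the answer above mentions needing a node endpoint on each side.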

Related

AWS fetching multiple images

I have the following question:
In my first scenario I have an S3 bucket. I can use the Storage API from the Amplify SDK
const result = await Storage.put("profile.png", profileImage, { level: "protected" });
and later I can get it
const result = await Storage.get("profile.png", { level: "protected" });
Everything works fine. Everybody can read it, but only I can write/delete/update it.
But now here's my question...
In my application a user can be the admin of a group. He can see all group members.
What if he fetches the list of all users? Let's say he fetches 10 users. Does this mean I need to make 10 requests, one per image? This also means I need to save the other users' IDs somewhere.
Is this the correct way?
According to the docs Storage.get returns a pre-signed URL. A pre-signed URL is for a single object so yes, for 10 users you'd need to make 10 get requests.
However, there may be alternatives depending on your app requirements. For example, a common approach for S3 is to use a key prefix to group items. You could prepend the group-name/group-id for all profile pictures of people within that group.
S3 allows listing keys by a prefix, which is the Storage.list call. E.g. you could use a format like s3://my-bucket/{group-id}/{user-id}/profile.png then for the admin user you can get all items in the group with a call like const result = await Storage.list('{group-id}/');
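A sketch of that key layout: profileKey is a made-up helper that builds the per-user key, and the Amplify Storage calls (which require a configured aws-amplify app) are shown in comments for shape only.

```javascript
// Hypothetical helper: build the S3 key for a user's profile picture
// under the group-prefixed layout described above.
function profileKey(groupId, userId) {
  return `${groupId}/${userId}/profile.png`;
}

// Admin view: list every profile picture in the group with one call,
// then fetch a pre-signed URL per returned item:
//   const items = await Storage.list(`${groupId}/`);
//   const urls = await Promise.all(items.map((item) => Storage.get(item.key)));
```

You still end up with one Storage.get per image (each pre-signed URL covers a single object), but the single list call removes the need to know every user ID up front.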

Socket.io client initialization doesn't seem to be working as documented (4.x)

My socket.io server is hosted on virtualized hardware, exposed at an endpoint of example.com/<id>/socket.io/
And socket.io by default will try to connect to example.com/socket.io/ so I need to customize the endpoint used. According to the docs for 4.x:
In the examples above, the client will connect to the main namespace.
Using only the main namespace should be sufficient for most use cases,
but you can specify the namespace with:
// same origin version
const socket = io("/admin");
// cross origin version
const socket = io("https://server-domain.com/admin");
So I went with:
const socket = io("/<id>/socket.io");
and also tried:
const socket = io("/<id>/socket.io/");
and
const socket = io("https://example.com/<id>/socket.io");
and
const socket = io("https://example.com/<id>/socket.io/");
But the failing polling messages show that it's trying to connect to the default https://example.com/socket.io/.
I can verify that the socket server is running at the correct endpoint because visiting https://example.com/<id>/socket.io/ serves a message: {"code":0,"message":"Transport unknown"} and https://example.com/<id>/socket.io/socket.io.js serves Socket.IO v4.0.0
What can I do to further debug or fix this?
Edit: Tried with v3 and experienced the same result.
What you're trying to do is to get Socket.IO to connect to a different path, not a namespace. In short, you should use the path option, like so:
// The string passed as the first argument is the Namespace to use,
// not the path to connect to.
const socket = io("/", {
  // Set the <id> to your actual path ID.
  path: "/<id>/socket.io/"
});
Read more about Socket.IO Namespaces here.

Embeddable js interpreter for user's code?

Imagine a website where a user can generate content via JS.
For example:
The user clicks a button.
It requests our API (not the user's API).
The API returns an object with specific fields.
We show a select with user-defined options generated by the user's code, or some calculated result based on the data we sent.
The idea is to give the user the ability to edit visible content (using our structures; we know beforehand which fields in the returned object do what).
First solution, "developed" in 5 minutes:
The user clicks a button.
It sends all required data as context to our API.
We fetch the user's defined code from the database.
// here is the code which we write (not user) and we know this code is safe
const APP_CONTEXT = parseInput(); // this can be parameters from command line
const ourLibrary = require('ourLibrary');
// APP_CONTEXT is variable which contains data from frontend. We control data inside APP_CONTEXT, user can not write to it
// here is user defined code
const someVar = APP_CONTEXT['fieldDescribedInOurDocumentation'];
const anotherVar = APP_CONTEXT['anotherFieldFromDocumentation'];
ourLibrary.sendToFrontend(someVar + anotherVar);
In this very simple example, once the user clicked the button we sent a request to our API, the user's code was executed, and we showed the result of the execution. ourLibrary abstracts how the handling is completed.
The main problem, as I see it, is security. I'm thinking about using a restricted Node.js process: no network access, no filesystem access.
Is it possible to deny any import/require in a Node.js process? I want to let the user only call built-in JS functions (Math.min, Math.max, new Date(), +, -), declare functions, and so on, so it works like a sophisticated calculator. We should also have the ability to send the result back to the frontend, for example via rabbitmq + nodejs + websockets. We can use a simple console.log if the former is a problem.
A possible solution (not secure, of course) using the Node.js interpreter. We execute the interpreter every time the action is required.
const APP_CONTEXT = parseInput();
const ourLibrary = require('ourLibrary');
const usersCode = getUsersCode();
eval(usersCode);
Inside usersCode they use ourLibrary.sendToFrontend to produce the result. But this solution allows the user to use any built-in Node.js functionality, like const fs = require('fs'). Of course access will be restricted at the OS level (SELinux or similar), but can I configure/set up Node.js to run as a plain JS interpreter? Or maybe there is some other JS interpreter that is safe to use? Safe means: only arithmetic, the Date function, Math functions and so on; no filesystem access, no network access.

Using IdentityServer3 with a standalone EXE (without using 'redirect_uri'?)

I have a web site I'm developing that uses IdentityServer3 for authentication, and it also returns an access token to enable access to my APIs. All works well. I'm now rewriting the web site as a standalone Windows EXE, written in React and Electron to make it a standalone application. My problem is: I call the 'authorize' endpoint on my IdentityServer3 server, but I do not know what to put in the 'redirect_uri' required by IdentityServer3, since my standalone EXE has no URI address.
Is there a way to use IdentityServer3 as an API, where I can send the 'authorize' URL to IdentityServer3 and have it return the access token as a response to the API call?
You can do as Kirk Larkin says - use the Client Credentials flow.
The following code is in .NET:
var client = new TokenClient(
    BaseAddress + "/connect/token",
    "clientId",
    "clientSecret");
var result = client.RequestClientCredentialsAsync(scope: "my.api").Result;
var accessToken = result.AccessToken;
Where BaseAddress is your IDS address.
Of course you will have to register your client in the IDS clients list with the appropriate flow (Client Credentials); the scope is optional, but I guess you will need one.
Then accessing a protected API is fairly easy:
var client = new HttpClient();
client.SetBearerToken(accessToken);
var result = client.GetStringAsync("https://protectedapiaddress").Result;
EDIT: For JavaScript approach:
This and this look like a working solution, but I've tried neither of them

How to make sure that only a specific domain can query from your REST api?

I have an app that has a REST api. I want it so that the only requests that can be made to the REST api are ones originating from the app itself. How can I do that?
I am using a node.js+express server too.
EDIT: the app is fully a public web app.
You can set the Access-Control-Allow-Origin header on your responses. This tells browsers to only allow pages from the given origin to read responses from your API (note that CORS is enforced by the browser, so it won't stop non-browser clients):
response.set('Access-Control-Allow-Origin', 'domain.tld');
EDIT: If you're really keen to guard against web scraping, you could add a function to double-check the client's Origin header.
function checkOrigin (origin) {
  return origin === "https://your.domain.tld";
}
/* Handling it in a request handler */
if (checkOrigin(request.headers.origin)) {
  // Let the client get the thing from the API
} else {
  response.statusCode = 403;
  response.end("Send them an error that they're not allowed to use the API");
}
The above example should work with the built-in HTTP/HTTPS modules, and should also work for Express, if I'm not mistaken. Keep in mind that the Origin header is set by the client, so it can be spoofed by anything other than a browser.
EDIT 2: To back my claim up that it should also work for Express, I found this quotation at their documentation;
The req (request) and res (response) are the exact same objects that Node provides, so you can invoke req.pipe(), req.on('data', callback), and anything else you would do without Express involved.
I would recommend using an API key from the client. CORS filters are too easy to circumvent.
For a simple approach to securing an API with Node.js, see: How to implement a secure REST API with node.js
Overview from above post:
Because users can CREATE resources (aka POST/PUT actions) you need to secure your API. You can use OAuth or you can build your own solution, but keep in mind that any solution can be broken if the password is really easy to discover. The basic idea is to authenticate users using a username, a password and a token, aka the apitoken. This apitoken can be generated using node-uuid, and the password can be hashed using pbkdf2.
Then you need to save the session somewhere. If you save it in memory in a plain object, the session will be destroyed when you kill and reboot the server. It's also not scalable: if you use haproxy to load-balance between machines, or simply use workers, the session state will be stored in a single process, so if the same user is redirected to another process/machine it will need to authenticate again. Therefore you need to store the session in a common place. This is typically done using redis.
When the user is authenticated (username+password+apitoken), generate another token for the session, aka the accesstoken, again with node-uuid. Send the user the accesstoken and the userid. The userid (key) and the accesstoken (value) are stored in redis with an expire time, e.g. 1h.
Now, every time the user does any operation using the REST API, it will need to send the userid and the accesstoken.
