Evaporate.js upload file with x-amz-security-token: SignatureDoesNotMatch

I want to upload a file with evaporate.js and crypto-js, using an x-amz-security-token:
import * as Evaporate from 'evaporate';
import * as crypto from "crypto-js";

Evaporate.create({
  aws_key: <ACCESS_KEY>,
  bucket: 'my-bucket',
  awsRegion: 'eu-west',
  computeContentMd5: true,
  cryptoMd5Method: data => crypto.algo.MD5.create().update(String.fromCharCode.apply(null, new Uint32Array(data))).finalize().toString(crypto.enc.Base64),
  cryptoHexEncodedHash256: (data) => crypto.algo.SHA256.create().update(data as string).finalize().toString(crypto.enc.Hex),
  logging: true,
  maxConcurrentParts: 5,
  customAuthMethod: (signParams: object, signHeaders: object, stringToSign: string, signatureDateTime: string, canonicalRequest: string): Promise<string> => {
    const stringToSignDecoded = decodeURIComponent(stringToSign);
    const requestScope = stringToSignDecoded.split("\n")[2];
    const [date, region, service, signatureType] = requestScope.split("/");
    const round1 = crypto.HmacSHA256(`AWS4${signParams['secret_key']}`, date);
    const round2 = crypto.HmacSHA256(round1, region);
    const round3 = crypto.HmacSHA256(round2, service);
    const round4 = crypto.HmacSHA256(round3, signatureType);
    const final = crypto.HmacSHA256(round4, stringToSignDecoded);
    return Promise.resolve(final.toString(crypto.enc.Hex));
  },
  signParams: { secretKey: <SECRET_KEY> },
  partSize: 1024 * 1024 * 6
}).then(
  (evaporate) => {
    evaporate.add({
      name: 'my-key',
      file: file, // file from <input type="file" />
      xAmzHeadersCommon: { 'x-amz-security-token': <SECURITY_TOKEN> },
      xAmzHeadersAtInitiate: { 'x-amz-security-token': <SECURITY_TOKEN> },
    }).then(() => console.log('complete'));
  },
  (error) => console.error(error)
);
but it produces this output:
AWS Code: SignatureDoesNotMatch, Message: The request signature we calculated does not match the signature you provided. Check your key and signing method. status: 403
What am I doing wrong?
SIDE NOTE
These are the versions I'm using on the browser side:
{
  "crypto-js": "^4.1.1",
  "evaporate": "^2.1.4"
}

You have your crypto.HmacSHA256 parameters reversed; they should all be the other way around. I've been bashing my head against a wall trying to get Evaporate 2.x to work for the last week, and it's been very frustrating.
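In crypto-js the argument order is HmacSHA256(message, key), so the signing rounds from the question become (a sketch using the question's own variable names):

const round1 = crypto.HmacSHA256(date, `AWS4${signParams['secret_key']}`);
const round2 = crypto.HmacSHA256(region, round1);
const round3 = crypto.HmacSHA256(service, round2);
const round4 = crypto.HmacSHA256(signatureType, round3);
const final = crypto.HmacSHA256(stringToSignDecoded, round4);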
I tried your code above and looked over all the docs and forum posts related to this, and I think using Cognito for this auth either doesn't work or isn't obvious in how it's supposed to work, even though the AWS docs suggest it's possible.
In the end I went with Lambda authentication, and finally got it working last night after rigorously examining every bit of what was going on, having seen much misinformation about how to use various crypto libraries to sign this stuff. Reading the docs also helped get me on the right path as to how the crypto needs to work; they give example inputs and outputs so you can test for sure that your crypto methods work as AWS expects them to:
https://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
Tasks 1, 2 and 3 are especially important to read and understand.
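For example, you can check a crypto-js key derivation against the worked example in Task 3 of those docs. A sketch (the sample secret and expected hex are the ones published in the AWS docs; verify them against the page):

import * as crypto from "crypto-js";

// Sample values from the AWS SigV4 docs' signing-key example:
const secret = "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY";
const kDate = crypto.HmacSHA256("20150830", "AWS4" + secret);
const kRegion = crypto.HmacSHA256("us-east-1", kDate);
const kService = crypto.HmacSHA256("iam", kRegion);
const kSigning = crypto.HmacSHA256("aws4_request", kService);

console.log(kSigning.toString(crypto.enc.Hex));
// Expected per the docs:
// c4afb1cc5771d871763a393e44b703571b55cc28424d1a5e86da6ed3c154a4b9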

Related

Why is messaging().sendToDevice sometimes not working?

I'm using the following code to send a notification from one device to another using FCM. Everything works fine until it reaches return admin.messaging().sendToDevice(...). The 'Token ID: ' log displays the token ID of the receiver, but when I pass the token_id variable to the sendToDevice function, the notification is not triggered, and therefore not sent. Can someone tell me what's wrong?
var firebase = require("firebase-admin");
var serviceAccount = require("./julla-tutorial.json");

console.log("enter in then Firebase Api");

const firebaseToken = [
  'e0T6j1AiRjaa7IXweJniJq:APA91bHNznSHSIey08s-C-c3gchci6wepvhP1QxQyYbmZ8LySI3wnu64iW7Q23GhA6VCdc4yodZoCFOgynfAb5C8O8VE81OcSv_LL-K3ET1IKGZ_6h35n-_q5EKFtfJWlzOqZr4IvpiB',
  'dNWnSqyCQbufzv1JutNEWr:APA91bFcI9FDyRxHRBEcdw4791X0e-V0k1FjXcSstUA67l94hSojMRCd6LWr2b57azNEt3z_XLwLljMX4u2mc9cZDrAVm55Mw9CHGyue-09KofWnnHNR9XWBibc4T76xOV_DWX7T2RvW',
  'cq65rtuaTCKGk5lHk7UabN:APA91bFR3kAArg6lhuBq7ktNuBk7Z9MXXk3PskqhYa8CgNaEl6MX4TQ5lo35d6XhnCQ4fEkCkyZ_j08evxE9Y4oVCRTEdqsrkccCVTE8Di47lfmDR3i1NdoL3re9oLw6F_uNsnvRoQcq'
];

firebase.initializeApp({
  credential: firebase.credential.cert(serviceAccount)
});

const payload = {
  notification: {
    title: 'Demo 2345',
    body: 'dfghj',
    sound: 'default',
    color: 'yellow',
    android_channel_id: 'default',
    channel_id: 'default'
  },
  data: { id: 'broadcast', channelId: 'default' }
};

const options = {
  priority: 'high',
  timeToLive: 60 * 60 * 24, // 1 day
};

console.log('------payload---', payload);
console.log('-----TOKEN_Array----', firebaseToken);
console.log('-------options-----', options);

firebase.messaging().sendToDevice(firebaseToken, payload, options).then(function (response) {
  console.log('--------response', response);
}).catch(function (error) {
  console.log('-------reject', error); // was `reject`, which is undefined in this scope
});
It looks like you did not change the code from this tutorial:
https://medium.com/@jullainc/firebase-push-notifications-to-mobile-devices-using-nodejs-7d514e10dd4
You will need to change the second line of code:
var serviceAccount = require("./julla-tutorial.json");
so that it points to your own firebase-push-admin.json file, which holds the private keys registering your backend app with the Firebase Cloud Messaging API. You can download this file from the Firebase console, as mentioned in the article above.
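For example (the file name below is hypothetical; use the key file you actually downloaded):

// Hypothetical file name; download your own key from the Firebase console
// (Project settings > Service accounts > Generate new private key):
var serviceAccount = require("./firebase-push-admin.json");

firebase.initializeApp({
  credential: firebase.credential.cert(serviceAccount)
});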
I recommend hiding this file from your git history by adding it to .gitignore, so you don't accidentally push your private keys to a public repo.
In addition to the link above, I will link another resource that helped me implement Firebase push notifications in a Node.js backend app:
https://izaanjahangir.medium.com/setting-schedule-push-notification-using-node-js-and-mongodb-95f73c00fc2e
https://github.com/izaanjahangir/schedule-push-notification-nodejs
Further, I will also link another repo where I am currently working on a fully functional Firebase push notification implementation. Maybe it helps to see some actual example code:
https://gitlab.com/fiehra/plants-backend

How to correctly configure unit tests for Probot with Scheduler Extension?

I am using the following minimal probot app and am trying to write Mocha unit tests for it.
Unfortunately, it results in the error below, which indicates that some of my setup for the private key or security tokens is not being picked up.
I assume that the configuration with my .env file is correct, since I do not get the same error when I start the probot via probot-run.js.
Are there any extra steps needed to configure probot when used with Mocha?
Any suggestions on why the use of the scheduler extension may result in such an issue would be great.
Code and error below:
app.ts
import createScheduler from "probot-scheduler";
import { Application } from "probot";

export = (app: Application) => {
  createScheduler(app, {
    delay: !!process.env.DISABLE_DELAY, // delay is enabled on first run
    interval: 24 * 60 * 60 * 1000 // 1 day
  });

  app.on("schedule.repository", async function (context) {
    app.log.info("schedule.repository");
    const result = await context.github.pullRequests.list({ owner: "owner", repo: "test" });
    app.log.info(result);
  });
};
test.ts
import createApp from "../src/app";
import nock from "nock";
import { Probot } from "probot";

nock.disableNetConnect();

describe("my scenario", function () {
  let probot: Probot;

  beforeEach(function () {
    probot = new Probot({});
    const app = probot.load(createApp);
  });

  it("basic feature", async function () {
    await probot.receive({ name: "schedule.repository", payload: { action: "foo" } });
  });
});
This unfortunately results in the following error:
Error: secretOrPrivateKey must have a value
at Object.module.exports [as sign] (node_modules/jsonwebtoken/sign.js:101:20)
at Application.app (node_modules/probot/lib/github-app.js:15:39)
at Application.<anonymous> (node_modules/probot/lib/application.js:260:72)
at step (node_modules/probot/lib/application.js:40:23)
at Object.next (node_modules/probot/lib/application.js:21:53)
It turns out that new Probot({}), as suggested in the documentation, initializes the Probot object without any parameters (the given options object {} is, after all, empty).
To avoid the error, one can provide the information manually:
new Probot({
  cert: "...",
  secret: "...",
  id: 12345
});
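If you would rather reuse the values from your .env in the tests, you can read them in manually before constructing the instance. A sketch (the variable names follow probot's usual .env conventions; verify them for your version):

import { readFileSync } from "fs";
import { Probot } from "probot";

// Assumes APP_ID, PRIVATE_KEY_PATH and WEBHOOK_SECRET are set, e.g. via
// require("dotenv").config() in the test setup:
const probot = new Probot({
  id: Number(process.env.APP_ID),
  cert: readFileSync(process.env.PRIVATE_KEY_PATH as string, "utf8"),
  secret: process.env.WEBHOOK_SECRET
});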

Google Speech API - Server Unavailable Error for long audio files

I'm using Google's nodejs-speech package to call the longRunningRecognize endpoint/function of Google's Speech API.
I've used both v1 and v1p1beta1, and run into an error with longer files. (48 minutes is the longest I've tried; 15 minutes causes the same problem, though 3 minutes does not.) I've tried both the promise pattern and splitting the request into two parts: one to start the longRunningRecognize process, and the other to check on the results after waiting. The error is shown below the code samples for both.
Example promise version of request:
import speech from '@google-cloud/speech';

const client = new speech.v1p1beta1.SpeechClient();

const audio = {
  uri: 'gs://my-bucket/file.m4a'
};

const config = {
  encoding: 'AMR_WB',
  sampleRateHertz: 16000,
  languageCode: 'en-US',
  enableWordTimeOffsets: true,
  enableSpeakerDiarization: true
};

const request = {
  audio,
  config
};

client.longRunningRecognize(request)
  .then(data => {
    const operation = data[0];
    return operation.promise();
  })
  .then(data => {
    const response = data[0];
    const results = response.results;
    const transcription = results
      .filter(result => result.alternatives)
      .map(result => result.alternatives[0].transcript)
      .join('\n');
    console.log(transcription);
  })
  .catch(error => {
    console.error(error);
  });
(I've since closed the tab with the results, but I think this returned an error object that just said { error: { code: 13 } }, which matches the more descriptive error below.)
Separately, I've tried a version where, instead of chaining promises to get the final transcription result, I collect the name from the operation and make a separate request to get the result.
Here's that request code:
... // Skipping setup
client.longRunningRecognize(request)
  .then(data => {
    const operation = data[0];
    console.log(operation.latestResponse.name);
  })
  .catch(error => {
    console.error(error);
  });
When I hit the relevant endpoint (https://speech.googleapis.com/v1p1beta1/operations/81703347042341321989?key=ABCD12345) before it's had time to process, I get this:
{
  "name": "81703347042341321989",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1p1beta1.LongRunningRecognizeMetadata",
    "startTime": "2018-08-16T19:33:26.166942Z",
    "lastUpdateTime": "2018-08-16T19:41:31.456861Z"
  }
}
Once it's fully processed, though, I've been running into this:
{
  "name": "81703347042341321989",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1p1beta1.LongRunningRecognizeMetadata",
    "progressPercent": 100,
    "startTime": "2018-08-16T17:20:28.772208Z",
    "lastUpdateTime": "2018-08-16T17:44:40.868144Z"
  },
  "done": true,
  "error": {
    "code": 13,
    "message": "Server unavailable, please try again later."
  }
}
I've tried shorter audio files (3 minutes, same format and encoding), and both of the processes above worked.
Any idea what's going on?
A possible workaround is changing the audio format to FLAC, the recommended encoding type for the Cloud Speech-to-Text API due to its lossless compression.
For reference, this can be done using sox, through the following command:
sox file.m4a --rate 16k --bits 16 --channels 1 file.flac
Additionally, this error may also happen when there is a long period of silence at the beginning of the audio. In this case, the file can be trimmed by specifying, after trim, the number of seconds the audio should skip at the beginning and at the end of the file:
sox input.m4a --rate 16k --bits 16 --channels 1 output.flac trim 20 5
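After converting, the encoding in the recognize config should be changed to match. A sketch based on the question's config (the FLAC URI is hypothetical):

const config = {
  encoding: 'FLAC', // matches the sox output above: 16 kHz, 16-bit, mono
  sampleRateHertz: 16000,
  languageCode: 'en-US',
  enableWordTimeOffsets: true,
  enableSpeakerDiarization: true
};

const audio = {
  uri: 'gs://my-bucket/file.flac' // the converted file, uploaded to the bucket
};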

Workbox Service Worker with Background Sync module

I've been using Workbox for a few days, and I correctly set it up to generate a service worker from a source file rather than letting Workbox generate it for me.
It works OK, but now I'm trying to include the workbox-background-sync module to store some failed POST requests, and I can't get it to work.
After running the SW, I get "Uncaught TypeError: Cannot read property 'QueuePlugin' of undefined" on the first backgroundSync line (line 9 below). This is the generated service worker file:
importScripts('workbox-sw.prod.v2.1.2.js');
importScripts('workbox-background-sync.prod.v2.0.3.js');

const workbox = new WorkboxSW({
  skipWaiting: true,
  clientsClaim: true
});

let bgQueue = new workbox.backgroundSync.QueuePlugin({
  callbacks: {
    replayDidSucceed: async (hash, res) => {
      self.registration.showNotification('Background sync demo', {
        body: 'Product has been purchased.',
        icon: '/images/shop-icon-384.png',
      });
    },
    replayDidFail: (hash) => {},
    requestWillEnqueue: (reqData) => {},
    requestWillDequeue: (reqData) => {},
  },
});

const requestWrapper = new workbox.runtimeCaching.RequestWrapper({
  plugins: [bgQueue],
});

const route = new workbox.routing.RegExpRoute({
  regExp: new RegExp('^https://jsonplaceholder.typicode.com'),
  handler: new workbox.runtimeCaching.NetworkOnly({ requestWrapper }),
});

const router = new workbox.routing.Router();
router.registerRoute({ route });

workbox.router.registerRoute(
  new RegExp('.*\/api/catalog/available'),
  workbox.strategies.networkFirst()
);

workbox.router.registerRoute(
  new RegExp('.*\/api/user'),
  workbox.strategies.networkFirst()
);

workbox.router.registerRoute(
  new RegExp('.*\/api/security-element'),
  workbox.strategies.networkOnly()
);

workbox.precache([]);
I've tried loading it with workbox.loadModule('workbox-background-sync'), a workaround I found on GitHub, but it still doesn't work. I also tried self.workbox = new WorkboxSW(), with the same fate.
P.S.: Is there a way to hook in a function after a strategy like networkFirst has failed and is about to respond from the cache? If I'm getting a cached response, I'd like to tell the user by modifying the incoming response and handling it later in Vue, for example. Thanks for reading!
I solved it. It was actually pretty silly: my const workbox was shadowing the workbox.backgroundSync namespace I was trying to instantiate from. I fixed it by simply renaming the const workbox to const workboxSW.
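In other words, the const workbox instance was shadowing the global workbox namespace that workbox-background-sync.prod attaches QueuePlugin to. A sketch of just the renamed lines (instance calls like router and strategies would then go through workboxSW instead):

importScripts('workbox-sw.prod.v2.1.2.js');
importScripts('workbox-background-sync.prod.v2.0.3.js');

// Renamed so it no longer shadows the global `workbox` namespace:
const workboxSW = new WorkboxSW({
  skipWaiting: true,
  clientsClaim: true
});

// Now resolves against the global namespace from the imported script:
let bgQueue = new workbox.backgroundSync.QueuePlugin({ /* callbacks as above */ });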

Getting PUT routes to work in Angular

I'm seeking some wisdom from the Angular community. I am working on a simple project using the MEAN stack. I have set up my back-end API, and everything is working as expected. Using Postman, I observe the expected behavior for both the GET and PUT routes that retrieve/update a single value, a high score, which is saved in its own document in its own collection in a MongoDB database. So far so good.
Where things go off track is when I try to access the PUT endpoint from within Angular. Accessing the GET endpoint is no problem, and retrieving and displaying data works smoothly. However, after considerable reading and searching, I am still unable to properly access the PUT endpoint and update the high-score data when that event is triggered by gameplay. Below are the snippets of code that I believe to be relevant, for reference.
BACK-END CODE:
SCHEMA:
const _scoreSchema = {
  name: { type: String, required: true },
  value: { type: Number, "default": 0 }
};
ROUTES:
router
  .route('/api/score/:highScore')
  .put(scoreController.setHighScore);
CONTROLLER:
static setHighScore(req, res) {
  scoreDAO
    .setHighScore(req.params.highScore)
    .then(highScore => res.status(200).json(highScore))
    .catch(error => res.status(400).json(error));
}
DAO:
scoreSchema.statics.setHighScore = (value) => {
  return new Promise((resolve, reject) => {
    score
      .findOneAndUpdate(
        { "name": "highScore" },
        { $set: { "value": value } }
      )
      .exec(function (err, response) {
        err ? reject(err)
            : resolve(response);
      });
  });
};
ANGULAR CODE:
CONTROLLER:
private _updateHighScore(newHighScore): void {
  console.log('score to be updated to:', newHighScore);
  this._gameService
    .updateHighScore(newHighScore);
}
SERVICE:
updateHighScore(newHighScore: Number): Observable<any> {
  console.log(newHighScore);
  let url = '/api/score/' + newHighScore;
  let _scoreStringified = JSON.stringify({ value: newHighScore });
  let headers = new Headers();
  headers.append("Content-Type", "application/json");
  return this._http
    .put(url, _scoreStringified, { headers })
    .map((r) => r.json());
}
Note that the console.log(newHighScore) in the last block of code above correctly prints the value of the new high score to be updated; it's just not being written to the database.
The conceptual question with PUT routes in Angular is this: if the API is already set up to receive all the information it needs to successfully update the database (via the route param), why is it required to supply all of this information again in the Angular .put() call? It seems like reinventing the wheel rather than really utilizing the robust API endpoint that was already created. Said differently, before digging into the docs I naively expected something like .put(url) to be all that was required to call the API, so what is the missing link in my logic?
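For reference, the Http.put signature is put(url: string, body: any, options?: RequestOptionsArgs), so a body argument is required even when the endpoint takes everything from the route param; an empty body satisfies it. A sketch (noting also that Http methods return cold Observables, so no request is sent until something subscribes):

// Body may be empty when the route param already carries the data:
updateHighScore(newHighScore: Number): Observable<any> {
  const url = '/api/score/' + newHighScore;
  return this._http.put(url, null).map((r) => r.json());
}

// The request only fires once subscribed:
this._gameService.updateHighScore(42).subscribe(
  (score) => console.log('updated', score),
  (error) => console.error(error)
);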
Thanks!
