I'm using Google's nodejs-speech package to use the longRunningRecognize endpoint/function in Google's Speech API.
I've used both v1 and v1p1beta1, and I run into an error with longer files (48 minutes is the longest I've tried; 15 minutes causes the same problem, though 3 minutes does not). I've tried both the promise pattern and separating the request into two parts -- one to start the longRunningRecognize process, and the other to check on the results after waiting. The error is shown below the code samples for both.
Example promise version of request:
import speech from '@google-cloud/speech';

const client = new speech.v1p1beta1.SpeechClient();

const audio = {
  uri: 'gs://my-bucket/file.m4a'
};

const config = {
  encoding: 'AMR_WB',
  sampleRateHertz: 16000,
  languageCode: 'en-US',
  enableWordTimeOffsets: true,
  enableSpeakerDiarization: true
};

const request = {
  audio,
  config
};

client.longRunningRecognize(request)
  .then(data => {
    const operation = data[0];
    return operation.promise();
  })
  .then(data => {
    const response = data[0];
    const results = response.results;
    const transcription = results
      .filter(result => result.alternatives)
      .map(result => result.alternatives[0].transcript)
      .join('\n');
    console.log(transcription);
  })
  .catch(error => {
    console.error(error);
  });
(I've since closed the tab with the results, but I think this returned an error object that just said { error: { code: 13 } }, which matches the more descriptive error below.)
Separately, I've tried a version where, instead of chaining promises to get the final transcription result, I collect the name from the operation and make a separate request to get the result.
Here's that request code:
... // Skipping setup
client.longRunningRecognize(request)
  .then(data => {
    const operation = data[0];
    console.log(operation.latestResponse.name);
  })
  .catch(error => {
    console.error(error);
  });
When I hit the relevant endpoint (https://speech.googleapis.com/v1p1beta1/operations/81703347042341321989?key=ABCD12345) before it's had time to process, I get this:
{
  "name": "81703347042341321989",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1p1beta1.LongRunningRecognizeMetadata",
    "startTime": "2018-08-16T19:33:26.166942Z",
    "lastUpdateTime": "2018-08-16T19:41:31.456861Z"
  }
}
Once it's fully processed, though, I've been running into this:
{
  "name": "81703347042341321989",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1p1beta1.LongRunningRecognizeMetadata",
    "progressPercent": 100,
    "startTime": "2018-08-16T17:20:28.772208Z",
    "lastUpdateTime": "2018-08-16T17:44:40.868144Z"
  },
  "done": true,
  "error": {
    "code": 13,
    "message": "Server unavailable, please try again later."
  }
}
I've tried with shorter audio files (3 mins, same format and encoding), and the above processes both worked.
Any idea what's going on?
A possible workaround is changing the audio format to FLAC, which is the recommended encoding type for the Cloud Speech-to-Text API due to its lossless compression.
For reference, this can be done using sox, through the following command:
sox file.m4a --rate 16k --bits 16 --channels 1 file.flac
Additionally, this error may also happen when there is a long period of silence at the beginning. In this case, the audio file can be trimmed with sox's trim effect: the first number after trim is how many seconds to skip at the beginning of the file, and the second is the duration (in seconds) to keep:
sox input.m4a --rate 16k --bits 16 --channels 1 output.flac trim 20 5
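After converting, the request config would change to match. A minimal sketch, assuming the FLAC file has been uploaded to the same bucket (the file.flac path is illustrative):

const config = {
  encoding: 'FLAC',        // lossless encoding recommended for Speech-to-Text
  sampleRateHertz: 16000,  // matches the sox --rate 16k conversion above
  languageCode: 'en-US',
  enableWordTimeOffsets: true,
  enableSpeakerDiarization: true
};

const audio = {
  uri: 'gs://my-bucket/file.flac' // illustrative; use your own bucket/object
};

client.longRunningRecognize({ audio, config })
  .then(([operation]) => operation.promise())
  .then(([response]) => console.log(response.results));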
I want to upload a file with evaporate.js and crypto-js using x-amz-security-token:
import * as Evaporate from 'evaporate';
import * as crypto from "crypto-js";

Evaporate.create({
  aws_key: <ACCESS_KEY>,
  bucket: 'my-bucket',
  awsRegion: 'eu-west',
  computeContentMd5: true,
  cryptoMd5Method: data => crypto.algo.MD5.create().update(String.fromCharCode.apply(null, new Uint32Array(data))).finalize().toString(crypto.enc.Base64),
  cryptoHexEncodedHash256: (data) => crypto.algo.SHA256.create().update(data as string).finalize().toString(crypto.enc.Hex),
  logging: true,
  maxConcurrentParts: 5,
  customAuthMethod: (signParams: object, signHeaders: object, stringToSign: string, signatureDateTime: string, canonicalRequest: string): Promise<string> => {
    const stringToSignDecoded = decodeURIComponent(stringToSign)
    const requestScope = stringToSignDecoded.split("\n")[2];
    const [date, region, service, signatureType] = requestScope.split("/");
    const round1 = crypto.HmacSHA256(`AWS4${signParams['secret_key']}`, date);
    const round2 = crypto.HmacSHA256(round1, region);
    const round3 = crypto.HmacSHA256(round2, service);
    const round4 = crypto.HmacSHA256(round3, signatureType);
    const final = crypto.HmacSHA256(round4, stringToSignDecoded);
    return Promise.resolve(final.toString(crypto.enc.Hex));
  },
  signParams: { secretKey: <SECRET_KEY> },
  partSize: 1024 * 1024 * 6
}).then((evaporate) => {
  evaporate.add({
    name: 'my-key',
    file: file, // file from <input type="file" />
    xAmzHeadersCommon: { 'x-amz-security-token': <SECURITY_TOKEN> },
    xAmzHeadersAtInitiate: { 'x-amz-security-token': <SECURITY_TOKEN> },
  }).then(() => console.log('complete'));
},
(error) => console.error(error)
);
but it produces this output:
AWS Code: SignatureDoesNotMatch, Message: The request signature we calculated does not match the signature you provided. Check your key and signing method. Status: 403
What am I doing wrong?
SIDE NOTE
These are the versions I'm using on the browser side:
{
  "crypto-js": "^4.1.1",
  "evaporate": "^2.1.4"
}
You have your crypto.HmacSHA256 parameters reversed; they should all be the other way around. I've been bashing my head against a wall trying to get evaporate 2.x to work for the last week, and it's been very frustrating.
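For reference, crypto-js takes the message first and the key second (HmacSHA256(message, key)), so under that assumption the corrected SigV4 key-derivation chain from the code above would look roughly like this (a sketch of just the reversed calls, not a full customAuthMethod):

// crypto-js is HmacSHA256(message, key): the derived key is always the
// second argument at each step of the SigV4 key derivation.
const kDate = crypto.HmacSHA256(date, `AWS4${signParams['secret_key']}`);
const kRegion = crypto.HmacSHA256(region, kDate);
const kService = crypto.HmacSHA256(service, kRegion);
const kSigning = crypto.HmacSHA256(signatureType, kService); // "aws4_request"
const signature = crypto.HmacSHA256(stringToSignDecoded, kSigning);
return Promise.resolve(signature.toString(crypto.enc.Hex));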
I tried your code above and looked over all the docs and forum posts related to this, and I think using Cognito for this auth either just doesn't work or it isn't obvious how it's supposed to work, even though the AWS docs suggest it's possible.
In the end I went with Lambda authentication and finally got it working, after seeing much misinformation about how to use various crypto libraries to sign this stuff. I got it working last night after rigorously examining every bit of what was going on. Reading the docs also helped get me on the right path as to how the crypto needs to work; they give example inputs and outputs so you can verify that your crypto methods work exactly as AWS expects them to:
https://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
Tasks 1, 2 and 3 are especially important to read and understand.
I have a problem surrounding the difference between pipeline execution and my local environment.
I'm intercepting a request using the following command:
cy.intercept(
  {
    method: "GET",
    pathname: env.path,
    query: {
      dateFrom: "2020-12-31T22:00:00.000Z",
      dateTo: "2022-01-01T21:59:59.000Z",
    },
  },
  (req) => {
    expect(req.url).to.include(
      "&dateFrom=2020-12-31T22%3A00%3A00.000Z",
      "Date From included"
    );
    expect(req.url).to.include(
      "&dateTo=2022-01-01T21%3A59%3A59.000Z",
      "Date to included"
    );
  }
).as("filterByDates");
Executing it in my local environment is fine, but when I run it in the pipeline the test always fails: server time is UTC by default, so the time that is sent is not what the assertion expects.
Now I'm thinking about how to approach this, because the time input is not made by me but by the "datePicker" plugin, which always inputs the machine time (server/environment). So the question is: what is a good approach to this issue? (I'm using day.js.)
Do I convert each time input to UTC?
Do I intercept the request and make it UTC-2 for the request?
Or should I just ignore everything after "T", including hours/minutes/seconds? I'm highly against that, though.
I'll really be glad to get responses. Thanks in advance.
Yes, you have to convert the expected values to a timezone in common with the test environment.
But the test also has to know what environment it runs in.
const dayjs = require('dayjs')
var utc = require('dayjs/plugin/utc')
dayjs.extend(utc)
...
cy.intercept(
  ...
  (req) => {
    let expectedDateFrom = "2020-12-31T22:00:00.000Z";
    const isCI = Cypress.env('CI');
    if (isCI) {
      expectedDateFrom = dayjs(expectedDateFrom).utc().format()
    }
    const encodedDateFrom = encodeURI(expectedDateFrom)
    expect(req.url).to.include(`&dateFrom=${encodedDateFrom}`, "Date From included");
Since you have two dates, maybe a conversion function?
const toEnvFormat = (expected) => {
  const isCI = Cypress.env('CI');
  if (isCI) {
    expected = dayjs(expected).utc().format()
  }
  return expected;
}

cy.intercept(
  ...
  (req) => {
    const expectedDateFrom = toEnvFormat("2020-12-31T22:00:00.000Z");
    const encodedDateFrom = encodeURI(expectedDateFrom)
    expect(req.url).to.include(`&dateFrom=${encodedDateFrom}`, "Date From included");
Ref: dayjs UTC plugin
I've found a solution: just add the time zone to the scripts in package.json in the root folder, like so:
"scripts": {
  "start": "TZ=Europe/Sofia npx cypress open",
  "run": "TZ=Europe/Sofia npx cypress run --browser chrome"
},
I'm a React Native developer, and I'm integrating the Lazada Open Platform with a React Native app through Node.js. I cannot generate an access token.
My code is:
const LazadaAPI = require('lazada-open-platform-sdk')
const aLazadaAPI = new LazadaAPI('118985', 'MXbPesO8hJXZFoQNRBMaJAfQPYHdKgwu ', 'SINGAPORE')
// console.log('aLazadaAPIWithToken', aLazadaAPI.generateAccessToken)

const authCode = '0_118985_zUFFF5x0Wal7NNNRKPQFVjSZ2236' // replace valid authCode here

const params = {
  code: authCode
}

const response = aLazadaAPI
  .generateAccessToken(params)
  .then(response => console.log(JSON.stringify(response, null, 4)))
  .catch(error => console.log(JSON.stringify(error, null, 4)))
I'm getting this error:
{
  "type": "ISV",
  "code": "IncompleteSignature",
  "message": "The request signature does not conform to lazada standards",
  "request_id": "0b86d3f015889470213992399"
}
Have you checked whether your developer profile is active? You need a developer account before you can request APIs. Every developer account needs approval by the Lazada Open Platform, and under it each category requires further approval. This process takes a couple of days.
You need to generate the auth code from:
https://auth.lazada.com/oauth/authorize?response_type=code&force_auth=true&redirect_uri=${app call back url}&client_id=${appkey}
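To illustrate that flow (a sketch; the callback URL is a placeholder, and the code must come fresh from the redirect, since OAuth auth codes are short-lived and single-use):

// Send the user to the authorization URL; Lazada redirects back to your
// registered callback URL with a ?code=... query parameter.
const appKey = '118985';
const redirectUri = encodeURIComponent('https://example.com/lazada/callback'); // placeholder
const authUrl = `https://auth.lazada.com/oauth/authorize?response_type=code&force_auth=true&redirect_uri=${redirectUri}&client_id=${appKey}`;

// Then exchange the freshly received code for an access token right away;
// a stale hard-coded code will fail.
aLazadaAPI.generateAccessToken({ code: freshCodeFromCallback })
  .then(response => console.log(response))
  .catch(error => console.error(error));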
I want to read a JSON file from the API every 11 seconds and display it in the interface.
In my case:
the interface server is running at http://localhost:8080/
the API at http://localhost:8088/route (and I need to refresh it every 11 seconds because the parameters change)
and in route.js:
var i = 0;
var delayInms = 11000;
var myVar = setInterval(TempFunction, 1000);

function TempFunction() {
  router.get('/', (req, res, next) => {
    var text = [
      {"carspeed": [233+i, 445+i, 223+i, 444+i, 234+i]},
    ];
    console.log(text);
    res.status(200).json(text);
  });
  window.location.reload(true);
  i++;
}
THE PROBLEM is that I get this error:
ReferenceError: window is not defined
I have another question:
to read the JSON (which is updated at http://localhost:8088/route every 11 seconds) I did this
in car.vue:
<template>
.
.
<ul>
<li v-for="todo of todos" :key="todo.id">{{todo.text}}</li>
</ul>
.
.
</template>
followed by :
<script>
import axios from 'axios';

const WorkersURL = "http://localhost:8088/route";

export default {
  data: () => ({
    drawer: false,
    todos: []
  }),
  async created()
  {
    try
    {
      const res = await axios.get(WorkersURL);
      this.todos = res.data;
    }
    catch(e)
    {
      console.error(e)
    }
  }
}
</script>
AND THE SECOND PROBLEM: it doesn't read the JSON file from http://localhost:8088/route
You'll need to make sure that you are enabling your server to be hit from web pages running at a different host/domain/port. In your case, the server is running on a different port than the webpage itself, so you can't make XHR (which is what Axios is doing) calls successfully because CORS (Cross-Origin Resource Sharing) is not enabled by default. Your server will need to set the appropriate headers to allow that. Specifically, the Access-Control-Allow-Origin header. See https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
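For instance, one minimal way to enable this on an Express server (a sketch using the common cors middleware package; applying it globally and restricting it to the interface's origin are choices, not requirements):

const express = require('express');
const cors = require('cors');

const app = express();

// Allow cross-origin XHR calls from the interface server only.
app.use(cors({ origin: 'http://localhost:8080' }));

app.use('/route', require('./route')); // illustrative mount of the router
app.listen(8088);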
Second, to update your client every 11 seconds there are a few choices. The simplest would be to make your call via axios every 11 seconds using setInterval:
async created()
{
  try
  {
    const res = await axios.get(WorkersURL);
    this.todos = res.data;
    // set a timer to do this again every 11 seconds
    setInterval(() => {
      axios.get(WorkersURL).then((res) => {
        this.todos = res.data;
      });
    }, 11000);
  }
  catch(e)
  {
    console.error(e)
  }
}
There are a couple of options that are more advanced, such as server-sent events (see https://github.com/mdn/dom-examples/tree/master/server-sent-events) or websockets (https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API). Both of these options allow you to control the interval on the server instead of the client. There are some things to consider when setting up your server for this, so the setInterval option is probably best in your case.
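For completeness, here is a minimal server-sent events sketch under the same assumptions (Express router on port 8088, pushing every 11 seconds; the /stream path and the sample payload are illustrative):

// server side: push updated data every 11 seconds over one long-lived response
router.get('/stream', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });
  const timer = setInterval(() => {
    const text = [{ carspeed: [233, 445, 223, 444, 234] }];
    res.write(`data: ${JSON.stringify(text)}\n\n`); // SSE message framing
  }, 11000);
  req.on('close', () => clearInterval(timer)); // stop when the client disconnects
});

// client side (e.g. inside created(), so that `this` is the Vue instance):
const source = new EventSource('http://localhost:8088/route/stream');
source.onmessage = (event) => {
  this.todos = JSON.parse(event.data);
};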
I tried to use the JavaScript MediaUploader.js to upload a YouTube video to my own account, and for some reason I got this error in the onError function:
"errors": [
{
"domain": "youtube.quota",
"reason": "quotaExceeded",
"message": "The request cannot be completed because you have exceeded your \u003ca href=\"/youtube/v3/getting-started#quota\"\u003equota\u003c/a\u003e."
}
],
"code": 403,
"message": "The request cannot be completed because you have exceeded your \u003ca href=\"/youtube/v3/getting-started#quota\"\u003equota\u003c/a\u003e."
I only tested a few times today, but got this strange error.
var signinCallback = function (tokens, file) {
  console.log("signinCallback tokens: ", tokens);
  if (tokens.accessToken) { //tokens.access_token
    console.log("signinCallback tokens.accessToken: ", tokens.accessToken);
    var metadata = {
      id: "101",
      snippet: {
        "title": "Test video upload",
        "description": "Description of uploaded video",
        "categoryId": "22", //22
        "tags": ["test tag1", "test tag2"],
      },
      status: {
        "privacyStatus": "private",
        "embeddable": true,
        "license": "youtube"
      }
    };
    console.log("signinCallback Object.keys(metadata).join(','): ", Object.keys(metadata).join(','));
    var options = {
      url: 'https://www.googleapis.com/upload/youtube/v3/videos?part=snippet%2Cstatus&key=<my api key>',
      file: file,
      token: tokens.accessToken,
      metadata: metadata,
      contentType: 'application/octet-stream', //"video/*",
      params: {
        part: Object.keys(metadata).join(',')
      },
      onError: function (data) {
        var message = data;
        // Assuming the error is raised by the YouTube API, data will be
        // a JSON string with error.message set. That may not be the
        // only time onError will be raised, though.
        try {
          console.log("signinCallback onError data: ", data);
          if (data != "Not Found") {
            var errorResponse = JSON.parse(data);
            message = errorResponse.error.message;
            console.log("signinCallback onError message: ", message);
            console.log("signinCallback onError errorResponse: ", errorResponse);
          } else {
          }
        } finally {
          console.log("signinCallback error.... ");
        }
      }.bind(this),
      onProgress: function (data) {
        var currentTime = Date.now();
        var bytesUploaded = data.loaded;
        var totalBytes = data.total;
        // The times are in millis, so we need to divide by 1000 to get seconds.
        var bytesPerSecond = bytesUploaded / ((currentTime - this.uploadStartTime) / 1000);
        var estimatedSecondsRemaining = (totalBytes - bytesUploaded) / bytesPerSecond;
        var percentageComplete = (bytesUploaded * 100) / totalBytes;
        console.log("signinCallback onProgress bytesUploaded, totalBytes: ", bytesUploaded, totalBytes);
        console.log("signinCallback onProgress percentageComplete: ", percentageComplete);
      }.bind(this),
      onComplete: function (data) {
        console.log("signinCallback onComplete data: ", data);
        var uploadResponse = JSON.parse(data);
        this.videoId = uploadResponse.id;
        //this.pollForVideoStatus();
      }.bind(this)
    };
    MediaUpload.videoUploader(options);
  }
};
I checked my quota in the developer console, and my quota limit is so big that there is no way I exceeded it. For example, I have a total of 89 queries today, and my quota limit is 10,000 queries/day.
Expected: upload my video to my youtube account successfully.
Actual results: quotaExceeded
Corrupt Google Developer Project - create a new one
I am disappointed in Google that this was the case for me.
I had the same issue, no usage at all but "quota exceeded" response. My solution was to create a new project. I guess it's because something changed internally over time and wasn't applied correctly to (at least my) already existing project...
I had stopped using AWS for several reasons and thought Google Cloud would be a refreshing experience but this shows me Google treats existing projects as badly as new products that it kills off. Strike one against Google.
https://github.com/googleapis/google-api-nodejs-client/issues/2263#issuecomment-741892605
YouTube does not give you 10,000 queries a day; it gives you 10,000 units a day, and a query can cost multiple units, depending on what you're doing:
A simple read operation that only retrieves the ID of each returned resource has a cost of approximately 1 unit.
A write operation has a cost of approximately 50 units.
A video upload has a cost of approximately 1600 units.
If your 89 queries contain video uploads or write operations, then that would explain your issue.
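For illustration (hypothetical numbers, not your actual breakdown): 6 video uploads alone cost 6 × 1600 = 9,600 units, so 6 uploads plus ten or so write operations (10 × 50 = 500 units) would already exceed the 10,000-unit daily quota while registering only a handful of "queries".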
More Information:
https://developers.google.com/youtube/v3/getting-started#quota