I tried to use the JavaScript MediaUploader.js to upload a YouTube video to my own account, but for some reason I got this error in the onError function:
"errors": [
{
"domain": "youtube.quota",
"reason": "quotaExceeded",
"message": "The request cannot be completed because you have exceeded your \u003ca href=\"/youtube/v3/getting-started#quota\"\u003equota\u003c/a\u003e."
}
],
"code": 403,
"message": "The request cannot be completed because you have exceeded your \u003ca href=\"/youtube/v3/getting-started#quota\"\u003equota\u003c/a\u003e."
I only tested a few times today, but got this strange error.
var signinCallback = function (tokens, file) {
  console.log("signinCallback tokens: ", tokens);
  if (tokens.accessToken) {
    console.log("signinCallback tokens.accessToken: ", tokens.accessToken);
    var metadata = {
      id: "101",
      snippet: {
        "title": "Test video upload",
        "description": "Description of uploaded video",
        "categoryId": "22",
        "tags": ["test tag1", "test tag2"]
      },
      status: {
        "privacyStatus": "private",
        "embeddable": true,
        "license": "youtube"
      }
    };
    console.log("signinCallback Object.keys(metadata).join(','): ", Object.keys(metadata).join(','));
    var options = {
      url: 'https://www.googleapis.com/upload/youtube/v3/videos?part=snippet%2Cstatus&key=<my api key>',
      file: file,
      token: tokens.accessToken,
      metadata: metadata,
      contentType: 'application/octet-stream', // "video/*"
      params: {
        part: Object.keys(metadata).join(',')
      },
      onError: function (data) {
        var message = data;
        // Assuming the error is raised by the YouTube API, data will be
        // a JSON string with error.message set. That may not be the
        // only time onError will be raised, though.
        try {
          console.log("signinCallback onError data: ", data);
          if (data != "Not Found") {
            var errorResponse = JSON.parse(data);
            message = errorResponse.error.message;
            console.log("signinCallback onError message: ", message);
            console.log("signinCallback onError errorResponse: ", errorResponse);
          }
        } catch (e) {
          // data was not JSON; keep the raw string as the message.
          console.log("signinCallback onError parse failure: ", e);
        } finally {
          console.log("signinCallback error....");
        }
      }.bind(this),
      onProgress: function (data) {
        var currentTime = Date.now();
        var bytesUploaded = data.loaded;
        var totalBytes = data.total;
        // The times are in millis, so we need to divide by 1000 to get seconds.
        var bytesPerSecond = bytesUploaded / ((currentTime - this.uploadStartTime) / 1000);
        var estimatedSecondsRemaining = (totalBytes - bytesUploaded) / bytesPerSecond;
        var percentageComplete = (bytesUploaded * 100) / totalBytes;
        console.log("signinCallback onProgress bytesUploaded, totalBytes: ", bytesUploaded, totalBytes);
        console.log("signinCallback onProgress percentageComplete: ", percentageComplete);
      }.bind(this),
      onComplete: function (data) {
        console.log("signinCallback onComplete data: ", data);
        var uploadResponse = JSON.parse(data);
        this.videoId = uploadResponse.id;
        //this.pollForVideoStatus();
      }.bind(this)
    };
    MediaUpload.videoUploader(options);
  }
};
I checked my quota in the developer console, and my limit is so big there is no way I exceeded it: I made a total of 89 queries today, and my quota limit is 10,000 queries/day.
Expected: upload my video to my YouTube account successfully.
Actual results: quotaExceeded
Corrupt Google Developer Project - create a new one
I had the same issue: no usage at all, but a "quota exceeded" response. My solution was to create a new project. I guess something changed internally over time and wasn't applied correctly to (at least my) already existing project...
I am disappointed in Google that this was the case for me. I had stopped using AWS for several reasons and thought Google Cloud would be a refreshing experience, but this shows me Google treats existing projects as badly as the new products it kills off. Strike one against Google.
https://github.com/googleapis/google-api-nodejs-client/issues/2263#issuecomment-741892605
YouTube does not give you 10,000 queries a day; it gives you 10,000 units a day, and a single query can cost multiple units, depending on what you're doing:
A simple read operation that only retrieves the ID of each returned resource has a cost of approximately 1 unit.
A write operation has a cost of approximately 50 units.
A video upload has a cost of approximately 1600 units.
If your 89 queries include video uploads or write operations, that would explain your issue.
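For example, if just 6 of those 89 requests were video uploads, that alone is 6 × 1600 = 9,600 units; a handful of additional write or read calls would push you past the 10,000-unit daily limit even though you only made 89 requests.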
More Information:
https://developers.google.com/youtube/v3/getting-started#quota
I didn't find a better way to word the title, sorry.
I'm building a table of crypto markets (Vue/Vuetify v-data-table). Data is kept in a Vuex store.
I'm connecting to the (Laravel/Echo) API via Vue-Native-Websocket:
Vue.use(VueNativeSock, url, {
  format: 'json',
  store: store,
  connectManually: true,
  reconnection: true,
  reconnectionAttempts: 5,
  reconnectionDelay: 3000,
})
After opening the page in question, I subscribe to the update method:
subscribeMarketUpdates() {
  let marketsToSubscribe = []
  const data = this.$store.state.exchange.markets
  const marketKeys = Object.keys(data)
  for (let i = 0; i < marketKeys.length; i++) {
    marketsToSubscribe = marketsToSubscribe.concat(data[marketKeys[i]].map(market => market.name))
  }
  this.$store.dispatch('sendExchangeSocketMessage', {
    "method": "state.subscribe",
    "params": marketsToSubscribe, // <- all currently present markets
    "id": Date.now(),
  })
},
Then I get back a success message, followed by the requested market update messages (one update message PER MARKET!).
I handle them as follows:
SOCKET_ONMESSAGE (state, message) {
  state.exchangeSocket.message = message
  if (message.method === 'state.update') {
    this.commit('exchange/UPDATE_MARKET_STATE', { params: message.params, BigNumber })
  }
}
UPDATE_MARKET_STATE(state, payload) {
  let { params, BigNumber } = payload
  console.log('updating market ', params[0])
  let coin = params[0].match(/(^.*)-/)[1]
  let market = params[0].match(/\w+$/)[0]
  let updatedValues = params[1]
  let resultIndex = state.markets[market].findIndex((e) => e.ticker === coin)
  let result = state.markets[market][resultIndex]
  result.amount = new BigNumber(updatedValues.amount).toFixed(8)
  result.change = new BigNumber(updatedValues.change).toFixed(8)
  result.change_pct = new BigNumber(updatedValues.change_pct).toFixed(4)
  result.close = new BigNumber(updatedValues.close).toFixed(8)
  result.high = new BigNumber(updatedValues.high).toFixed(8)
  result.last = new BigNumber(updatedValues.last).toFixed(8)
  result.low = new BigNumber(updatedValues.low).toFixed(8)
  result.open = new BigNumber(updatedValues.open).toFixed(8)
  result.period = updatedValues.period
  result.volume = updatedValues.volume
  state.markets[market][resultIndex] = result
}
It kind of works, but I noticed it being painfully slow.
A console.time around the commit call reveals 400-700 ms (!) per execution.
Each line inside UPDATE_MARKET_STATE reports 0.02 ms, though.
At first I thought something was wrong on the backend, but the guy working on that assured me that all of the messages are sent within a couple of ms. There has to be something on my end that I'm not understanding. This is the backend log:
[2022-02-24 14:47:38.733082] [29576] [trace]aw_state.c:206(on_timer): send request to 127.0.0.1:1234, cmd: 301, sequence: 2544786, params: ["ETH-BTC", 86400]
[2022-02-24 14:47:38.733096] [29576] [trace]aw_state.c:206(on_timer): send request to 127.0.0.1:1234, cmd: 301, sequence: 2544787, params: ["404-BTC", 86400]
[2022-02-24 14:47:38.733108] [29576] [trace]aw_state.c:206(on_timer): send request to 127.0.0.1:1234, cmd: 301, sequence: 2544788, params: ["2GIVE-BTC", 86400]
[2022-02-24 14:47:38.733121] [29576] [trace]aw_state.c:206(on_timer): send request to 127.0.0.1:1234, cmd: 301, sequence: 2544789, params: ["$PAC-BTC", 86400]
[2022-02-24 14:47:38.733132] [29576] [trace]aw_state.c:206(on_timer): send request to 127.0.0.1:1234, cmd: 301, sequence: 2544790, params: ["1337-BTC", 86400]
(note all of the messages being sent nearly instantly)
The table displaying the data seems to lock up too (I can only scroll the table once each second, and it coincides with a console.log I made, so it really seems something is blocking, even though I can't see anything that could be the culprit).
17:36:03.863 actions.js?4221:19 updating market 1337-BTC
17:36:04.646 actions.js?4221:19 updating market 2GIVE-BTC
17:36:05.421 actions.js?4221:19 updating market 2X2-BTC
17:36:06.264 actions.js?4221:19 updating market 404-BTC
17:36:07.030 actions.js?4221:19 updating market ABJ-BTC
17:36:07.795 actions.js?4221:19 updating market ACP-BTC
17:36:08.520 actions.js?4221:19 updating market ADC-BTC
17:36:09.415 actions.js?4221:19 updating market AERM-BTC
17:36:10.170 actions.js?4221:19 updating market ALEX-BTC
17:36:10.977 actions.js?4221:19 updating market AMBER-BTC
(note the update being called roughly each second, even though the ws messages are coming in much faster)
So my assumption is: there's something wrong in how I update my Vuex store. Something is blocking there, but I can't figure out what exactly; I don't see anything that could possibly cause this lock-up.
(I've also tried handling the updates in an action (calling the mutation from there) rather than in a mutation, with the same result, so here you see the mutation code.)
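A pattern that may be worth ruling out, as a sketch (assuming Vue 2): plain array index assignment like state.markets[market][resultIndex] = result is not observed by Vue 2's reactivity system, so the row is effectively updated through the ten individual property writes above it instead. Building the new row off to the side and swapping it in with a single Vue.set call reduces the mutation to one reactive change per message:

import Vue from 'vue'

// Sketch of the same mutation with a single reactive write per update.
UPDATE_MARKET_STATE(state, payload) {
  const { params, BigNumber } = payload
  const coin = params[0].match(/(^.*)-/)[1]
  const market = params[0].match(/\w+$/)[0]
  const updatedValues = params[1]
  const resultIndex = state.markets[market].findIndex((e) => e.ticker === coin)
  // Build the updated row as a fresh object...
  const updated = Object.assign({}, state.markets[market][resultIndex], {
    amount: new BigNumber(updatedValues.amount).toFixed(8),
    change: new BigNumber(updatedValues.change).toFixed(8),
    change_pct: new BigNumber(updatedValues.change_pct).toFixed(4),
    close: new BigNumber(updatedValues.close).toFixed(8),
    high: new BigNumber(updatedValues.high).toFixed(8),
    last: new BigNumber(updatedValues.last).toFixed(8),
    low: new BigNumber(updatedValues.low).toFixed(8),
    open: new BigNumber(updatedValues.open).toFixed(8),
    period: updatedValues.period,
    volume: updatedValues.volume
  })
  // ...then swap it in reactively in one step.
  Vue.set(state.markets[market], resultIndex, updated)
}

Whether this is the actual bottleneck is a separate question, but it narrows the reactive surface to one write per incoming message.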
I hope somebody can spot my error, because I can't :/
Thanks for any help.
I'm using Google's nodejs-speech package to call the longRunningRecognize method of Google's Speech API.
I've used both v1 and v1p1beta1, and run into an error with longer files (48 mins is as long as I've tried; 15 mins causes the same problem, though 3 mins does not). I've tried both the promise pattern and separating the request into two parts -- one to start the longRunningRecognize process, and the other to check on results after waiting. The error is shown below the code samples for both.
Example promise version of request:
import speech from '@google-cloud/speech';

const client = new speech.v1p1beta1.SpeechClient();

const audio = {
  uri: 'gs://my-bucket/file.m4a'
};

const config = {
  encoding: 'AMR_WB',
  sampleRateHertz: 16000,
  languageCode: 'en-US',
  enableWordTimeOffsets: true,
  enableSpeakerDiarization: true
};

const request = {
  audio,
  config
};

client.longRunningRecognize(request)
  .then(data => {
    const operation = data[0];
    return operation.promise();
  })
  .then(data => {
    const response = data[0];
    const results = response.results;
    const transcription = results
      .filter(result => result.alternatives)
      .map(result => result.alternatives[0].transcript)
      .join('\n');
    console.log(transcription);
  })
  .catch(error => {
    console.error(error);
  });
(I've since closed the tab with the results, but I think this returned an error object that just said { error: { code: 13 } }, which matches the below, more descriptive error).
Separately, I've tried a version where instead of chaining promises to get the final transcription result, I collect the name from the operation, and make a separate request to get the result.
Here's that request code:
... // Skipping setup
client.longRunningRecognize(request)
  .then(data => {
    const operation = data[0];
    console.log(operation.latestResponse.name);
  })
  .catch(error => {
    console.error(error);
  });
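Instead of hitting the REST endpoint by hand, the operation can also be looked up by name from Node; a sketch, assuming the gax-based client's operationsClient surface, where savedName holds the name logged above:

// Sketch: look the long-running operation up later by its saved name.
client.operationsClient.getOperation({ name: savedName })
  .then(responses => {
    const op = responses[0];
    if (op.done) {
      console.log(op.error || op.response);
    } else {
      console.log('still running: ', op.metadata);
    }
  });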
When I hit the relevant endpoint (https://speech.googleapis.com/v1p1beta1/operations/81703347042341321989?key=ABCD12345) before it's had time to process, I get this:
{
  "name": "81703347042341321989",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1p1beta1.LongRunningRecognizeMetadata",
    "startTime": "2018-08-16T19:33:26.166942Z",
    "lastUpdateTime": "2018-08-16T19:41:31.456861Z"
  }
}
Once it's fully processed, though, I've been running into this:
{
  "name": "81703347042341321989",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1p1beta1.LongRunningRecognizeMetadata",
    "progressPercent": 100,
    "startTime": "2018-08-16T17:20:28.772208Z",
    "lastUpdateTime": "2018-08-16T17:44:40.868144Z"
  },
  "done": true,
  "error": {
    "code": 13,
    "message": "Server unavailable, please try again later."
  }
}
I've tried with shorter audio files (3 mins, same format and encoding), and the above processes both worked.
Any idea what's going on?
A possible workaround is changing the audio format to FLAC, which is the recommended encoding type for the Cloud Speech-to-Text API due to its lossless compression.
For reference, this can be done using sox, through the following command:
sox file.m4a --rate 16k --bits 16 --channels 1 file.flac
Additionally, this error may also happen when there is a long period of silence at the beginning. In this case, the audio file can be trimmed by giving trim the number of seconds to skip at the beginning and, as a negative offset, the point to stop at relative to the end of the file:
sox input.m4a --rate 16k --bits 16 --channels 1 output.flac trim 20 -5
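If you do convert to FLAC, the encoding in the recognition config has to change to match; a minimal sketch, assuming the 16 kHz mono output produced by the sox commands above (the bucket path is a placeholder):

// Config for the converted file (FLAC, 16 kHz, mono, per the sox flags above).
const config = {
  encoding: 'FLAC',
  sampleRateHertz: 16000,
  languageCode: 'en-US',
  enableWordTimeOffsets: true,
  enableSpeakerDiarization: true
};
const audio = {
  uri: 'gs://my-bucket/file.flac' // placeholder: the converted upload
};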
To preface, I have Google Cloud Print working through Apps Script. I have OAuth2 set up, and I was able to set up a Cloud Print API call that prints a single file in my Google Drive to a printer on my Cloud Print.
With that said, I'm looking for a way to automate my script so that when a document is placed in a specific folder in my Google Drive, it prints automatically. I've searched around and was unable to find anything similar. Here's my starting point (which was found in a very helpful tutorial):
function printGoogleDocument(docId, docTitle) {
  // For notes on ticket options see https://developers.google.com/cloud-print/docs/cdd?hl=en
  var ticket = {
    version: "1.0",
    print: {
      color: {
        type: "STANDARD_COLOR"
      },
      duplex: {
        type: "NO_DUPLEX"
      }
    }
  };
  var payload = {
    "printerid": myPrinterId,
    "content": docId,
    "title": docTitle,
    "contentType": "google.kix", // allows you to print Google Docs
    "ticket": JSON.stringify(ticket)
  };
  var response = UrlFetchApp.fetch('https://www.google.com/cloudprint/submit', {
    method: "POST",
    payload: payload,
    headers: {
      Authorization: 'Bearer ' + getCloudPrintService().getAccessToken()
    },
    "muteHttpExceptions": true
  });
  // If successful, should show a job here: https://www.google.com/cloudprint/#jobs
  response = JSON.parse(response);
  if (response.success) {
    Logger.log("%s", response.message);
  } else {
    Logger.log("Error Code: %s %s", response.errorCode, response.message);
  }
  return response;
}
So when I fill in my docId and printer ID, it works fine for a single document. But like I said, I'm trying to automate this based on new files in a Drive folder. Any suggestions?
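One possible direction (a sketch under assumptions, not a tested solution): Apps Script has no built-in "file added to folder" trigger, so the usual workaround is a time-driven trigger that polls the folder and prints anything it hasn't seen before, remembering handled files in Script Properties. FOLDER_ID below is a placeholder for the watched folder, and printGoogleDocument is the function above:

var FOLDER_ID = 'YOUR_FOLDER_ID'; // placeholder: the watched Drive folder

// Run once to install a trigger that polls the folder every 5 minutes.
function installPrintTrigger() {
  ScriptApp.newTrigger('printNewFiles')
    .timeBased()
    .everyMinutes(5)
    .create();
}

// Print any file in the folder that hasn't been printed yet.
function printNewFiles() {
  var printed = PropertiesService.getScriptProperties();
  var files = DriveApp.getFolderById(FOLDER_ID).getFiles();
  while (files.hasNext()) {
    var file = files.next();
    if (!printed.getProperty(file.getId())) {
      printGoogleDocument(file.getId(), file.getName());
      printed.setProperty(file.getId(), new Date().toISOString());
    }
  }
}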
We use the FileOpener2 plugin for Cordova to open a downloaded .apk file from our servers. Recently, we found that Android 6.0 or higher devices throw an exception only during the file-open step. We were able to trace this down to the cordova.js file, where the posted exception occurs. We have yet to find a cause or a fix, but have put a workaround in place. Any info would be amazing so we can keep our in-app self-updating process working on all Android devices.
Code (Working on Android <= 6.0):
// we need to access LocalFileSystem
window.requestFileSystem(LocalFileSystem.PERSISTENT, 5 * 1024 * 1024, function (fs) {
  // Show user that download is occurring
  $("#toast").dxToast({
    message: "Downloading please wait..",
    type: "warning",
    visible: true,
    displayTime: 20000
  });
  // we will save file in .. Download/OURAPPNAME.apk
  var filePath = cordova.file.externalRootDirectory + '/Download/' + "OURAPPNAME.apk";
  var fileTransfer = new FileTransfer();
  var uri = encodeURI(appDownloadURL);
  fileTransfer.download(uri, filePath, function (entry) {
    // Show user that download is done/install is about to happen
    $("#toast").dxToast({
      message: "Download complete! Launching...",
      type: "success",
      visible: true,
      displayTime: 2000
    });
    // Use pwlin's fileOpener2 plugin to let the system open the .apk
    cordova.plugins.fileOpener2.open(
      entry.toURL(),
      'application/vnd.android.package-archive',
      {
        error: function (e) {
          window.open(appDownloadURL, "_system");
        },
        success: function () { console.log('file opened successfully'); }
      }
    );
  },
  function (error) {
    // Show user that the download had an error
    $("#toast").dxToast({
      message: error.message,
      type: "error",
      displayTime: 5000
    });
  },
  false);
})
Debugging Information:
THIS IS NOT OUR CODE, BUT APACHE/CORDOVA CODE
Problem File: cordova.js
function androidExec(success, fail, service, action, args) {
  // argsJson - "["file:///storage/emulated/0/download/OURAPPNAME.apk","application/vnd.android.package-archive"]"
  // callbackId - FileOpener21362683899
  // action - open
  // service - FileOpener2
  // bridgesecret - 1334209170
  // msgs = "230 F09 FileOpener21362683899 sAttempt to invoke virtual method 'android.content.res.XmlResourceParser android.content.pm.PackageItemInfo.loadXmlMetaData(android.content.pm.PackageManager, java.lang.String)' on a null object reference"
  var msgs = nativeApiProvider.get().exec(bridgeSecret, service, action, callbackId, argsJson);
  // If argsJson was received by Java as null, try again with the PROMPT bridge mode.
  // This happens in rare circumstances, such as when certain Unicode characters are passed over the bridge on a Galaxy S2. See CB-2666.
  if (jsToNativeBridgeMode == jsToNativeModes.JS_OBJECT && msgs === "#Null arguments.") {
    androidExec.setJsToNativeBridgeMode(jsToNativeModes.PROMPT);
    androidExec(success, fail, service, action, args);
    androidExec.setJsToNativeBridgeMode(jsToNativeModes.JS_OBJECT);
  } else if (msgs) {
    messagesFromNative.push(msgs);
    // Always process async to avoid exceptions messing up stack.
    nextTick(processMessages);
  }
}
I'm testing some sample code. It has always worked, but suddenly I get:
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "dailyLimitExceededUnreg",
        "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup.",
        "extendedHelp": "https://code.google.com/apis/console"
      }
    ],
    "code": 403,
    "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup."
  }
}
Again, it has ALWAYS worked, and nothing changed. I know about setting up the developer console and all that; I would like to know the cause of this issue.
This is my script:
gapi.client.init({
  'apiKey': 'xxxxxxxx',
  'discoveryDocs': ["https://www.googleapis.com/discovery/v1/apis/calendar/v3/rest"],
  'clientId': 'xxxx.apps.googleusercontent.com',
  'scope': 'https://www.googleapis.com/auth/calendar.readonly https://www.googleapis.com/auth/calendar',
}).then(function() {
  gapi.client.calendar.events.list({
    'calendarId': 'primary',
    'timeMin': (new Date()).toISOString(),
    'showDeleted': false,
    'singleEvents': true,
    'maxResults': 10,
    'orderBy': 'startTime' // from input
  }).then(function(response) {
    var events = response.result.items;
    if (events.length > 0) {
      for (var i = 0; i < events.length; i++) {
        var event = events[i];
        var when = event.start.dateTime;
        if (!when) {
          when = event.start.date;
        }
        appendPre(event.summary + ' (' + when + ') created at ' + event.created);
      }
    } else {
      appendPre('No upcoming events found.');
    }
  });
});

function appendPre(message) {
  var pre = document.getElementById('content');
  var textContent = document.createTextNode(message + '\n');
  pre.appendChild(textContent);
}
Even if you are not authenticating to Calendar as a user, you should create a client project and attach your key to requests so that Google has a project to "bill" the quota usage against. This will prevent these kinds of issues in the future. See Google's help article, but the general steps would be:
1) Create a Google API Project at https://console.developers.google.com.
2) Enable Calendar API for the project.
3) Get the API key under API Manager > Credentials.
4) Include the key as a parameter for all your Calendar API requests. E.g.
GET https://www.googleapis.com/calendar/v3/calendars/calendarId/events?key={your_key}
Solved with the "https://www.googleapis.com/auth/calendar.readonly" scope! It works again without any changes. Maybe it needs some time, but "https://www.googleapis.com/auth/calendar" is still not working.