How to handle DeviceNotRegistered error using expo-server-sdk-node - javascript

I built a push notification system on my backend using expo-server-sdk-node. When I want to send notifications, I look up the expoPushToken in my database. The docs state that the following error should be handled:
DeviceNotRegistered: the device cannot receive push notifications
anymore and you should stop sending messages to the corresponding Expo
push token.
However, I am unsure how to handle this error, since there is no direct pushToken field available in the error object. See the following example:
[{
  status: 'error',
  message: '"ExponentPushToken[XXXXXXXXXXXXXXX]" is not a registered push notification recipient',
  details: { error: 'DeviceNotRegistered' }
}]
This device should now be removed from my database, but to do that I need the ExponentPushToken[XXXXXXXXXXXXXXX] value. And because the notifications are sent in batches I lose the reference to the user. What is the proper way to do this?
I thought of the following two ways:
1: Just split the message on `"` and filter out the token value, but this depends on the exact error message format.
2: Loop over all my pushTokens and find the one the message includes, but this would mean looping over an excessive number of users every time a send fails.
Any recommendations?
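As an aside, option 1 can be made slightly more robust by matching the documented token shape with a regex instead of splitting on quotes. This is only a sketch, and it still assumes the token appears verbatim in the error message:

```javascript
// Extract an Expo push token from a ticket's error message.
// Assumes the token appears verbatim as ExponentPushToken[...] in the text.
function extractExpoToken(message) {
  const match = /ExponentPushToken\[[^\]]+\]/.exec(message);
  return match ? match[0] : null;
}

const ticket = {
  status: 'error',
  message: '"ExponentPushToken[XXXXXXXXXXXXXXX]" is not a registered push notification recipient',
  details: { error: 'DeviceNotRegistered' },
};

console.log(extractExpoToken(ticket.message)); // → 'ExponentPushToken[XXXXXXXXXXXXXXX]'
```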

I faced the same issue, and here's what I did.
Consider this code:
for (let chunk of chunks) {
  try {
    let ticketChunk = await expo.sendPushNotificationsAsync(chunk);
    console.log(ticketChunk);
    tickets.push(...ticketChunk);
    // If a ticket contains an error code, it is in ticket.details.error
  } catch (error) {
    console.error(error);
  }
}
Once I send a batch of notifications (most likely 100 at a time), I loop through the tickets and check for ticket.status === 'error' and ticket.details.error === 'DeviceNotRegistered'. Since the tickets come back in the same order as the notifications were sent, I can use the current index of the tickets loop to access the token at the same index in the chunk I sent:
for (let chunk of chunks) {
  try {
    let ticketChunk = await expo.sendPushNotificationsAsync(chunk);
    tickets.push(...ticketChunk);
    // Tickets come back in the same order as the messages in `chunk`
    let ticketIndex = 0;
    for (let ticket of ticketChunk) {
      if (ticket.status === 'error' && ticket.details.error === 'DeviceNotRegistered') {
        // Get the expo token from `chunk` using `ticketIndex`
        // Unsubscribe the token or do whatever you want to
      }
      ticketIndex++;
    }
  } catch (error) {
    console.error(error);
  }
}
NB: The code might contain syntax errors; it's the idea I am trying to get across. I did the same thing with PHP.
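The index mapping can also be isolated into a small helper. A minimal sketch in plain JavaScript, assuming each message in the chunk carries its push token in a `to` field (as in the Expo push message format) and that tickets are in send order:

```javascript
// Given the messages sent in one chunk and the tickets returned for it,
// collect the push tokens whose devices are no longer registered.
// Tickets are assumed to be in the same order as the messages.
function tokensToRemove(chunk, ticketChunk) {
  const dead = [];
  ticketChunk.forEach((ticket, i) => {
    if (ticket.status === 'error' && ticket.details && ticket.details.error === 'DeviceNotRegistered') {
      dead.push(chunk[i].to);
    }
  });
  return dead;
}

const chunk = [
  { to: 'ExponentPushToken[aaa]', body: 'hi' },
  { to: 'ExponentPushToken[bbb]', body: 'hi' },
];
const ticketChunk = [
  { status: 'ok', id: 'xxx' },
  { status: 'error', message: '…', details: { error: 'DeviceNotRegistered' } },
];

console.log(tokensToRemove(chunk, ticketChunk)); // → [ 'ExponentPushToken[bbb]' ]
```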

It's not documented behaviour (at least I didn't find it in the documentation), but in the ticket error response I can see the expoPushToken in the details object (see the attached screenshot).

Related

'RestConnection Commit failed with error' when trying to write to firestore

I am getting the following error when trying to write to Firestore. This is done in JavaScript (React). Can anyone tell me what this is and how I can fix it?
#firebase/firestore: Firestore (8.3.1): RestConnection Commit failed with error: {"code":"failed-precondition","name":"FirebaseError"} url: https://firestore.googleapis.com/v1/projects/{project name}/databases/(default)/documents:commit request: {"writes":[{"update":{"name":"projects/{project name}/databases/(default)/documents/teams/T22yKl1ERQSlfuZNitrvs2vRjSJ2/team-analytics/T22yKl1ERQSlfuZNitrvs2vRjSJ2-Dec-22-2021","fields":{"homePageViews":{"integerValue":"3"},"timeModified":{"timestampValue":"2021-12-22T09:32:00.000000000Z"}}},"updateMask":{"fieldPaths":["homePageViews","timeModified"]},"currentDocument":{"updateTime":"2021-12-22T09:23:08.916511000Z"}}]}
My code that is trying to access Firestore is shown below:
return db.runTransaction(async (transaction) => {
  const analyticsDoc = await transaction.get(analyticsReference);
  if (analyticsDoc.exists) {
    const analytics: any = analyticsDoc.data();
    return transaction.update(analyticsReference, {
      homePageViews: analytics.homePageViews + 1,
      timeModified: getCurrentDateTime(),
    });
  }
  const newAnalytics: AnalyticsObject = {
    totalViews: 0,
    homePageViews: 1,
    timeModified: getCurrentDateTime(),
  };
  return transaction.set(analyticsReference, newAnalytics);
});
I am also getting the following error in my console:
POST https://firestore.googleapis.com/v1/projects/optimx-sports/databases/(default)/documents:commit 400
Edit: After digging in more, I think it might be because I am sending two transactions to the same document simultaneously. Could this error be caused by that?
Below are a few points you can check:
In Cloud Firestore, you can only update a single document about once per second, which might be too low for some high-traffic applications. Have a look at the Firestore documentation.
You can also try the request with Postman to verify that you can access the data.
Another way is combining the two commits into one.
The issue was that I was sending two transaction commits to one Firestore document within a second. The second commit was raising the above error. I fixed it by combining the two commits.
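The fix can be sketched in plain JavaScript: instead of issuing two writes, merge the pending field updates into a single payload and commit once. `coalesceUpdates` is a hypothetical helper, not a Firestore API; later values win per field:

```javascript
// Merge several pending field-update objects into one write payload so
// only a single commit hits the document (later values win per field).
function coalesceUpdates(...updates) {
  return Object.assign({}, ...updates);
}

const first = { homePageViews: 3, timeModified: '2021-12-22T09:32:00Z' };
const second = { totalViews: 7, timeModified: '2021-12-22T09:32:01Z' };

console.log(coalesceUpdates(first, second));
// → { homePageViews: 3, timeModified: '2021-12-22T09:32:01Z', totalViews: 7 }
```

The merged object can then be passed to a single transaction.update call, keeping the write rate under the one-write-per-second limit for the document.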

Wix currentMember.getMember() returning undefined

I have a custom Wix profile page (made with Velo) that displays the user's information. To get that information, I have to query a database, and to query that database I have to get the currently logged in user's login email address. Fortunately, Wix has an API for that, currentMember. However, using currentMember.getMember() is returning undefined on the profile page. But after I reload the profile page, it returns an object with the correct member data (including the email I need). Why is this happening? Another thing I noticed is that I'm getting the following error in the console:
[screenshot of the console error]
I was thinking that since the error said the URL was preloaded, perhaps the page loaded before I logged in, and thus the member object being returned is undefined, since the user hadn't logged in yet.
Here is the code I used to log the member object to the console:
import { currentMember } from 'wix-members';

$w.onReady(function () {
  currentMember.getMember()
    .then((member) => {
      console.log(member);
    });
});
And this was logging undefined, but when I reload, it gives the correct info.
I had a similar issue where a member logs in, but the returned member object was not available until the page was refreshed.
I solved this with the wix-members onLogin() API. Below is the code I hacked together.
import { authentication, currentMember } from 'wix-members';
import { local } from 'wix-storage';

authentication.onLogin((memberInfo) => {
  const memberId = memberInfo.id;
  if (memberId) {
    console.log("MEMBER ID: " + memberId);
    local.setItem("auth", memberId);
  } else {
    currentMember.getMember()
      .then((member) => {
        if (member) {
          console.log("MEMBER ID: " + member._id);
          local.setItem("auth", member._id);
        } else {
          console.log("NOT LOGGED IN!");
        }
      })
      .catch((error) => {
        console.error(error);
      });
  }
});
I had to use an if / else because the memberInfo.id property always appears to be undefined. I left it in the code (for my use case) in case it begins to work as expected again.
I used this in the masterPage.js onReady().

messageDelete event and partials discord.js v12

Let's say that I have a messageDelete event that I want to get the data from. Since I am required to have the Channel and Message partials for the messageDelete event to fire, the message object passed to the messageDelete event will be partial.
When I try to fetch this message, it returns an error saying that the fetched message is unknown.
So how can I get information like the content etc. from the deleted message?
My current code:
client.on("messageDelete", async message => {
  if (message.partial) await message.fetch() // this is where the error occurs
  console.log(message.content) // will only work on non-partial messages
})
Is there any way around this? It would be useful to get the information from past deleted messages.
EDIT
Toasty recommended that I use the audit logs, so I tried the following code:
client.on("messageDelete", async message => {
  console.log(message) // seeing what's available in the return
  if (message.partial) console.log("message was partial") // checking if the message was partial to compare with non-partial messages
  if (message.guild) {
    const fLogs = await message.guild.fetchAuditLogs({ limit: 1, type: "MESSAGE_DELETE" }) // getting audit logs
    const log = fLogs.entries.first()
    let { executor, target } = log
    console.log("Message deleted by " + executor.tag + " in " + target) // responding
  }
})
Output:
message was partial
Message deleted by CT-1409 "Echo"#0093 in 606323576714559489
So I can get the who and (sort of) the what of the message that was deleted.
I still cannot get the rest of the message information; if I try to fetch the message with the target id, it gives me Unknown Message again. But when I logged the message object to start with, I noticed that a decent amount of information was already present, which may mean some data is still accessible from a partial message. I don't know how much, but perhaps enough for what I need.
The reason you can't get any information about the deleted message is probably that the message... has been deleted.
As you can read here, this is not possible and would also be a violation of the rules and a violation of the privacy of users.
But...
...if you have a command for deleting messages, you could get the information of the message before you delete it and do stuff with it
Alternatively, you could work with the audit logs
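Beyond the audit logs, a common workaround (my own suggestion, not official discord.js guidance) is to keep your own bounded cache of recent messages, populated on messageCreate, so the content is still on hand when messageDelete fires with only a partial. A minimal sketch in plain JavaScript:

```javascript
// A bounded message cache: remembers the last `limit` messages so their
// content can be recovered when a deletion event only carries an id.
class MessageCache {
  constructor(limit = 1000) {
    this.limit = limit;
    this.map = new Map(); // Map preserves insertion order
  }
  remember(id, content) {
    if (this.map.size >= this.limit) {
      // evict the oldest entry
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(id, content);
  }
  recall(id) {
    return this.map.get(id); // undefined if never seen or already evicted
  }
}

const cache = new MessageCache(2);
cache.remember('1', 'first');
cache.remember('2', 'second');
cache.remember('3', 'third'); // evicts '1'

console.log(cache.recall('3')); // → 'third'
console.log(cache.recall('1')); // → undefined
```

In a bot you would call cache.remember(message.id, message.content) in a messageCreate handler and cache.recall(message.id) in messageDelete. This only covers messages sent while the bot was running, and note that persisting deleted message content may raise the privacy concerns mentioned above.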

Firebase Realtime Database - Determine if user has access to path

I have updated my Firebase Realtime Database access rules, and have noticed that some clients now try to access paths they do not have access to. This is OK, but my problem is that my code stops after being unable to read a restricted node.
I see below error in my console, and then loading of subsequent data stops:
permission_denied at /notes/no-access-node
I begin by collecting access nodes from /access_notes/uid and continue to read all data from /notes/noteId.
My code for collecting notes from the database below:
//*** SUBSCRIPTION */
database.ref(`access_notes/${uid}`).on('value', (myNoteAccessSnaps) => {
  let subscrPromises = []
  let collectedNotes = {}
  // Collect all notes we have access to
  myNoteAccessSnaps.forEach((accessSnap) => {
    const noteId = accessSnap.key
    subscrPromises.push(
      database.ref(`notes/${noteId}`)
        .once('value', (noteSnap) => {
          const noteData = noteSnap.val()
          const note = { id: noteSnap.key, ...noteData }
          collectedNotes[note.id] = note
        },
        (error) => {
          console.warn('Note does not exist or no access', error)
        })
    )
  })
  Promise.all(subscrPromises)
    .then(() => {
      const notesArray = Object.values(collectedNotes)
      ...
    })
    .catch((error) => { console.error(error); return Promise.resolve(true) })
})
I do not want the client to halt on permission_denied!
Is there a way to see if the user has access to a node /notes/no_access_note without raising an error?
Kind regards /K
I do not want the client to halt on permission_denied!
You're using Promise.all, which MDN documents as:
Promise.all() will reject immediately upon any of the input promises rejecting.
You may want to look at Promise.allSettled(), which MDN documents as:
[Promise.allSettled()] is typically used when you have multiple asynchronous tasks that are not dependent on one another to complete successfully, or you'd always like to know the result of each promise.
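Applied to the code above, the idea is to let every read settle and keep only the fulfilled ones. A minimal sketch with plain promises standing in for the database reads:

```javascript
// Simulate a batch of note reads where one is denied: allSettled lets the
// permitted reads succeed instead of rejecting the whole batch.
const reads = [
  Promise.resolve({ id: 'note-1', text: 'hello' }),
  Promise.reject(new Error('permission_denied at /notes/no-access-node')),
  Promise.resolve({ id: 'note-3', text: 'world' }),
];

Promise.allSettled(reads).then((results) => {
  const notes = results
    .filter((r) => r.status === 'fulfilled')
    .map((r) => r.value);
  console.log(notes.map((n) => n.id)); // → [ 'note-1', 'note-3' ]
});
```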
Is there a way to see if the user has access to a node /notes/no_access_note without raising an error?
As far as I know the SDK always logs data access permissions errors and this cannot be suppressed.
Trying to access data that the user doesn't have access to is considered a programming error in Firebase. In normal operation your code should ensure that it never encounters such an error.
This means that your data access should follow the happy path of accessing data it knows it has access to. So you store the list of the notes the user has access to, and then from that list access each individual note.
So in your situation I'd recommend finding out why you're trying to read a note the user doesn't have access to, instead of trying to hide the message from the console.

Insert an array of documents into a model

Here's the relevant code:
    var Results = mongoose.model('Results', resultsSchema);
    var results_array = [];
    _.each(matches, function(match) {
      var results = new Results({
        id: match.match_id,
        ... // more attributes
      });
      results_array.push(results);
    });
    callback(results_array);
  });
}
], function(results_array) {
  results_array.insert(function(err) {
    // error handling
Naturally, I get a "no method found" error for results_array. However, I'm not sure what else to call the method on.
In other functions I'm passing through the equivalent of the results variable here, which is a mongoose object and has the insert method available.
How can I insert an array of documents here?
** Edit **
function(results_array) {
  async.eachLimit(results_array, 20, function(result, callback) {
    result.save(function(err) {
      callback(err);
    });
  }, function(err) {
    if (err) {
      if (err.code == 11000) {
        return res.status(409);
      }
      return next(err);
    }
    res.status(200).end();
  });
});
So what's happening:
When I clear the collection, this works fine. However, when I resend this request, I never get a response.
This happens because my schema does not allow duplicates of the data coming in from the JSON response. So when I resend the request, it gets the same data as the first request and thus responds with an error. This is what I believe status code 409 deals with.
Is there a typo somewhere in my implementation?
Edit 2
Error code coming out:
{ [MongoError: insertDocument :: caused by :: 11000 E11000 duplicate key error index:
test.results.$_id_ dup key: { : 1931559 }]
name: 'MongoError',
code: 11000,
err: 'insertDocument :: caused by :: 11000 E11000 duplicate key error index:
test.results.$_id_ dup key: { : 1931559 }' }
So this is as expected: Mongo is responding with an 11000 error, complaining that this is a duplicate key.
Edit 3
if (err.code == 11000) {
return res.status(409).end();
}
This seems to have fixed the problem. Is this a band-aid fix though?
You seem to be trying to insert multiple documents at once here, so you actually have a few options.
Firstly, there is no .insert() method in mongoose, as it is replaced by wrappers such as .save() and .create(). The most basic approach here is to just call "save" on each document you have created, also employing the async library to implement some flow control so everything doesn't just queue up:
async.eachLimit(results_array, 20, function(result, callback) {
  result.save(function(err) {
    callback(err);
  });
}, function(err) {
  // process when complete or on error
});
Another option is .create(), which can take a list of objects as its arguments and simply inserts each one as the document is created:
Results.create(results_array,function(err) {
});
That would actually be with "raw" objects though, as they are all essentially cast as a mongoose document first. You can ask for the documents back as additional arguments in the callback signature, but constructing that is likely overkill.
Either way those shake out, the "async" form will process the documents in parallel and the "create" form will do so in sequence, but both effectively issue one "insert" to the database for each document that is created.
For true Bulk functionality you presently need to address the underlying driver methods, and the best place is with the Bulk Operations API:
mongoose.connection.on("open", function(err, conn) {

  var bulk = Results.collection.initializeUnorderedBulkOp();
  var count = 0;

  async.eachSeries(results_array, function(result, callback) {
    bulk.insert(result);
    count++;

    if (count % 1000 == 0) {
      bulk.execute(function(err, response) {
        // maybe check response
        bulk = Results.collection.initializeUnorderedBulkOp();
        callback(err);
      });
    } else {
      callback();
    }
  }, function(err) {
    // called when done
    // Check if there are still writes queued
    if (count % 1000 != 0)
      bulk.execute(function(err, response) {
        // maybe check response
      });
  });
});
Again the array here is raw objects rather than those cast as a mongoose document. There is no validation or other mongoose schema logic implemented here as this is just a basic driver method and does not know about such things.
While the array is processed in series, the above shows that a write operation will only actually be sent to the server once every 1000 entries processed or when the end is reached. So this truly does send everything to the server at once.
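The flush-every-1000 pattern can be isolated from the driver specifics. A minimal sketch in plain JavaScript, where the hypothetical `executeBatch` callback stands in for bulk.execute:

```javascript
// Accumulate items and flush them in batches of `size`, flushing any
// remainder at the end -- the same shape as the bulk insert loop above.
function processInBatches(items, size, executeBatch) {
  let batch = [];
  for (const item of items) {
    batch.push(item);
    if (batch.length === size) {
      executeBatch(batch);
      batch = [];
    }
  }
  if (batch.length > 0) executeBatch(batch); // flush the queued remainder
}

const sent = [];
processInBatches([1, 2, 3, 4, 5], 2, (batch) => sent.push(batch.slice()));
console.log(sent); // → [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```

The final flush mirrors the `count % 1000 != 0` check in the bulk code: without it, any writes queued after the last full batch would never reach the server.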
Unordered operations mean that err would normally not be set; rather, the "response" document would contain any errors that might have occurred. If you want this to fail on the first error, use .initializeOrderedBulkOp() instead.
The care to take here is that you must be sure a connection is open before accessing these methods in this way. Mongoose looks after the connection with its own methods, so where a method such as .save() is reached in your code before the actual connection is made to the database, it is "queued" in a sense, awaiting this event.
So either make sure that some other "mongoose" operation has completed first, or otherwise ensure that your application logic works within such a case where the connection is sure to be made. This is simulated in the example by placing the code within the "connection open" event.
It depends on what you really want to do. Each case has its uses, with the last of course being the fastest possible way to do this, as there are limited "write" and "return result" conversations going back and forth with the server.
