I am trying to set up a retry system in my RabbitMQ setup. To achieve this, I have to declare some exchanges and queues and create the necessary bindings between them. So I have something like this:
await subscriber.createExchange('TTL-PAYMENTS', 'direct')
await subscriber.createExchange('DLX-PAYMENTS', 'fanout')
await subscriber.createQueues('payments-retry-1-30s', 'DLX-PAYMENTS', 30000)
await subscriber.createQueues('payments-retry-2-50s', 'DLX-PAYMENTS', 50000)
await subscriber.bindExchanges('payments', 'DLX-PAYMETS')
await subscriber.bindExchanges('payments-retry-1-30s', 'TTL-PAYMETS', 'retry-1')
await subscriber.bindExchanges('payments-retry-2-50s', 'TTL-PAYMETS', 'retry-2')
But when I start up the server, I get this error
Error: Channel closed by server: 404 (NOT-FOUND) with message "NOT_FOUND - no exchange 'DLX-PAYMETS' in vhost '/'"
When I check my RabbitMQ management dashboard, I can see that the exchanges and necessary queues are already created. From my understanding, exchange declaration is idempotent, meaning that if the exchange doesn't exist it will be created, and if it does exist, as long as none of its attributes change, it should remain the same. So I am quite confused as to why this is happening. What could I be doing wrong, and how can I rectify this issue? Thank you very much!
Looks like there's a typo in the binding calls: DLX-PAYMETS (and TTL-PAYMETS) are missing the N. You created exchanges named DLX-PAYMENTS and TTL-PAYMENTS, so the bindings reference exchanges that don't exist.
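With the names made consistent (keeping your helper methods exactly as in your snippet), the binding calls would be:
await subscriber.bindExchanges('payments', 'DLX-PAYMENTS')
await subscriber.bindExchanges('payments-retry-1-30s', 'TTL-PAYMENTS', 'retry-1')
await subscriber.bindExchanges('payments-retry-2-50s', 'TTL-PAYMENTS', 'retry-2')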
I have the below code
const SimpleStorageFactory = await ethers.getContractFactory(
"SimpleStorage"
)
const simpleStorage = await SimpleStorageFactory.deploy()
await simpleStorage.deployed()
await simpleStorage.deployTransaction.wait(6)
I understand that Hardhat's getContractFactory is automatically supplied the ABI, bytecode and the specified private key for signing transactions. After this is done, I am assuming that SimpleStorageFactory.deploy() is the same as in ethers and deploys the contract to the blockchain, and then I am waiting for 6 network confirmations. However, I am confused about why there is an await simpleStorage.deployed() and what it does.
I have tried reading the documentation on hardhat but have not found an answer to this.
The await simpleStorage.deployed() line of code is waiting for the SimpleStorage contract to be deployed to the Ethereum network. When a contract is deployed, it is uploaded to the network and made available for interaction.
The deployed() method returns a Promise that resolves once the deployment transaction has been mined, and it resolves to the deployed Contract instance itself. Information about the deployment transaction, such as the transaction hash, block number, and contract address, is available on simpleStorage.deployTransaction and on the receipt returned by its wait() method.
This method is useful because it allows you to ensure that the contract has been deployed and is available for interaction before you try to call any of its methods or access its state. Without this line, the rest of the code might execute before the contract is fully deployed, which could result in errors.
The await simpleStorage.deployTransaction.wait(6) line of code waits for the deployment transaction to have six block confirmations on the Ethereum blockchain. This is done to ensure that the transaction has sufficient confirmations, which makes it very unlikely to be reversed.
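To make that concrete, here is the same flow from the question with comments, assuming ethers v5 semantics (as bundled with Hardhat at the time); treat it as a sketch rather than the exact library internals:
const SimpleStorageFactory = await ethers.getContractFactory("SimpleStorage")

// Sends the deployment transaction and resolves once it has been submitted,
// returning a Contract object with deployTransaction populated.
const simpleStorage = await SimpleStorageFactory.deploy()

// Resolves once the deployment transaction has been mined (one confirmation),
// so the contract is on-chain and safe to interact with.
await simpleStorage.deployed()

// Waits until the deployment transaction has six confirmations and returns
// the transaction receipt (hash, block number, contract address, gas used, ...).
const receipt = await simpleStorage.deployTransaction.wait(6)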
Separately, the hardhat-deploy plugin adds a mechanism to deploy contracts to any network, keeping track of them and replicating the same environment for testing.
The main feature I am building is a link-shortening app: when someone enters a long URL, it returns a short URL, and when a user clicks the short link, the app looks up the long URL in the DB and redirects to it.
In the meantime, I also want to record the click count and the clicking user's OS.
This is the code I am currently using:
app.get('/:shortUrl', async (req, res) => {
  const shortUrl = await ShortUrl.findOne({short: req.params.shortUrl})
  if (shortUrl == null) return res.sendStatus(404)
  res.redirect(shortUrl.full)
})
findOne looks up the long URL in the database using the short ID. I am using MongoDB here.
My questions are :
Are there multiple redirect methods in JS?
Does this method work if there is a high load?
Any other methods I can use to achieve the same result?
What other factors matter for redirect time?
What is 'No Redirection Tracking'?
This is a really long question; thanks to those who invest their time in it.
Your code is OK; the only limitations are where you run it and MongoDB.
I have created analytics-tracking apps handling billions of rows per day.
I suggest you run your Node code on an AWS Elastic Beanstalk app. It has low latency and scales with your needs.
You also need to put Redis between your requests and MongoDB: call MongoDB only if the data is not yet in Redis. MongoDB has more read limitations than a plain Redis instance.
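As a rough sketch of that cache-aside pattern (assuming the node-redis v4 client and your existing ShortUrl model; the key name and TTL are illustrative):
const { createClient } = require('redis') // assumes the node-redis v4 client
const redisClient = createClient()
redisClient.connect() // in a real app, await this during startup

app.get('/:shortUrl', async (req, res) => {
  const key = `short:${req.params.shortUrl}`

  // 1. Try Redis first.
  let full = await redisClient.get(key)

  // 2. On a cache miss, fall back to MongoDB and populate the cache.
  if (!full) {
    const doc = await ShortUrl.findOne({ short: req.params.shortUrl })
    if (doc == null) return res.sendStatus(404)
    full = doc.full
    await redisClient.set(key, full, { EX: 3600 }) // cache for one hour
  }

  res.redirect(full)
})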
Are there multiple redirect methods in JS?
First off, there are no redirect methods in JavaScript itself. res.redirect() is a feature of the Express HTTP framework that runs in Node.js. It is the only redirect method built into Express, though all a redirect response consists of is a 3xx (often 302) HTTP response status and a Location header set to the redirect target. You can code that manually just as well as you can use res.redirect() in Express.
You can look at the res.redirect() code in Express here.
The main things it does are set the location header with this:
this.location(address)
And set the http status (which defaults to 302) with this:
this.statusCode = status;
The rest of the code deals with handling variable arguments, supporting an older design of the API, and sending a body in either plain text or HTML (neither of which is required).
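For illustration, a hand-rolled equivalent of the happy path in your route (skipping the argument handling and body generation that res.redirect() also does) could look like this:
app.get('/:shortUrl', async (req, res) => {
  const shortUrl = await ShortUrl.findOne({ short: req.params.shortUrl })
  if (shortUrl == null) return res.sendStatus(404)

  // Same effect as res.redirect(shortUrl.full):
  res.statusCode = 302                      // temporary redirect (use 301 for permanent)
  res.setHeader('Location', shortUrl.full)  // where the browser should go next
  res.end()
})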
Does this method work if there is a high load?
res.redirect() works just fine at a high load. The bottleneck in your code is probably this line of code:
const shortUrl = await ShortUrl.findOne({short: req.params.shortUrl})
How high a scale that goes to depends upon a whole bunch of things about your database, its configuration, hardware, setup, etc. You should probably just test how many requests/sec of this kind your current database can handle.
Any other methods I can use to achieve the same result?
Sure there are. But, you will have to use some data store to look up the shortUrl to find the long url and you will have to create a 302 response somehow. As said earlier, the scale you can achieve will depend entirely upon your database.
What other factors matter for redirect time?
This is pretty much covered above (hint: it's all about the database).
What is 'No Redirection Tracking'?
You can read about it here on MDN.
We are running an application on Firestore and have a simple trigger: when an order's details are created or updated, some of that information should be written back to the parent order document.
The function for this has the following code:
export const updateOrderDetails = functions
  .region(FUNCTION_REGION)
  .firestore.document("orders/{orderId}/details/pickupAndDropoff")
  .onWrite(async (change, context) => {
    return await admin
      .firestore()
      .collection("orders")
      .doc(context.params.orderId)
      .set({ pickupAndDropoff: change.after.data() }, { merge: true });
  });
It was working fine before, but now roughly every third execution is delayed at random, sometimes by a few minutes. In the Cloud Functions logs we see normal execution times (<200 ms), so it seems the trigger runs after a huge pause.
What's worse, from time to time our change.after.data() is undefined, even though we never delete anything; it's just updates and creates.
It was working fine and we have not changed anything since last week, but now these unexpected delays have started. We've also checked the Firebase status page, and there are no reported incidents for the Cloud Functions service. What could be the cause of this?
The problem can be due to a monotonically increasing orderId being passed as the parameter here:
...
.collection("orders")
.doc(context.params.orderId)
...
Can you check whether the orderId passed here is monotonically increasing with each request? That can lead to hotspots, which impact latency.
To explain: I think the write rate must be changing at different days and times, as user traffic or load-testing requests vary, which creates this unexpected behaviour. At a low write rate, the requests work as expected most of the time. At a high write rate, the requests hit the hotspotting situation described in the Firestore documentation, resulting in delays (latency issues).
Here is the relevant link to the Firestore best-practices documentation.
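If the order IDs do turn out to be sequential, one mitigation from that documentation is to let Firestore generate the document IDs when the orders are first created. This is a hypothetical creation path, separate from the trigger in the question:
// Hypothetical code where order documents are first created:
// calling doc() with no argument gives an auto-generated, well-distributed ID
// instead of a monotonically increasing one.
const orderRef = admin.firestore().collection("orders").doc();
await orderRef.set({
  createdAt: admin.firestore.FieldValue.serverTimestamp(),
  // ...other order fields...
});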
Thanks to Frank van Puffelen's suggestion, we sent this question directly to Firebase support, and after their internal investigation we got a reply from the engineering team that it was in fact an infrastructure malfunction.
The reply I got from them was:
I escalated the issue to recover more information. So far it appears that there was an issue with pub/sub delivering and creating the event. The Firestore team is also communicating with the pub/sub team to investigate the issue and prevent future incidents.
It seems that a good way to deal with such problems is to write directly to the Firebase support team quickly, because, as they mentioned in the automatic reply I got after sending a support ticket:
For Firebase outages not listed on the status dashboard, we'll respond within 4 hours.
which seems to be the best option.
I have a subscriber which pushes data into queues. The messages look like this:
{
  "Content": {
    "_id": "5ceya67bbsbag3",
    "dataset": {
      "upper": {},
      "lower": {}
    }
  }
}
Now a new message can be pushed with the same content id but different data. In that case, I want to delete the old message with the same id, or replace it, so that only the latest message is retained.
I have not found a direct solution for this in RabbitMQ. Please guide me on how to do this.
I have already gone through some posts.
Post 1
Post 2
What you are trying to achieve cannot be trivially solved with RabbitMQ (or rather the AMQP protocol).
RabbitMQ queues are simple FIFO queues and don't offer any means of access to the elements beyond publishing at one end and consuming from the other.
Therefore, the only way to "update" an already existing message without relying on another service would be to fetch all the messages until you find the one you are interested in, discard it, and publish the new one along with the other messages you fetched.
Overall, the recommendation when using RabbitMQ with regard to message duplication is to make consumption idempotent. In other words, consuming two messages deemed to be the same should lead to the same outcome.
One way to achieve idempotency is to rely on a secondary cache where you store the message identifiers and their validity. Once a consumer fetches a new message from RabbitMQ, it would check the cache to see if it's a valid message or not and act accordingly.
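A rough sketch of that idea, assuming amqplib for the consumer, node-redis for the cache, and a publisher that stamps each message with a version header (all of these are illustrative choices, not part of the question's setup):
const amqp = require('amqplib')           // assumed AMQP client
const { createClient } = require('redis') // assumed cache client

async function startConsumer() {
  const redis = createClient()
  await redis.connect()

  const conn = await amqp.connect('amqp://localhost')
  const channel = await conn.createChannel()
  await channel.assertQueue('content-updates')

  await channel.consume('content-updates', async (msg) => {
    const { Content } = JSON.parse(msg.content.toString())

    // Version carried by this message vs. the last version processed for this _id.
    const version = Number(msg.properties.headers.version)
    const lastProcessed = Number(await redis.get(`content:${Content._id}`)) || 0

    if (version <= lastProcessed) {
      // Stale or duplicate: acknowledge and drop, which makes consumption idempotent.
      channel.ack(msg)
      return
    }

    // ...apply the update to your datastore here...

    await redis.set(`content:${Content._id}`, String(version))
    channel.ack(msg)
  })
}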
I think this is a slightly wrong way to use RabbitMQ.
Only immutable tasks (ones not intended to change) should be put into queues for a worker to consume.
An alternative way to implement your particular task (sketched below) is:
Just push immutable data into the queue: { "content" : { "_id" : "5ceya67bbsbag3"} .. }
Store the mutable data in a DB (Mongo) or an in-memory DB (something like Redis is suggested here).
Whenever an update is needed, update it in the DB.
Let your worker fetch the required data from the DB using the "_id" reference.
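A minimal sketch of that flow (amqplib and Redis are illustrative choices here, as are the queue and function names):
// Publisher side: only the immutable reference goes onto the queue;
// the latest mutable data lives in the DB/Redis, keyed by _id.
async function publishContentChanged(channel, redis, content) {
  await redis.set(`content:${content._id}`, JSON.stringify(content))
  channel.sendToQueue('content-updates', Buffer.from(JSON.stringify({ _id: content._id })))
}

// Worker side: resolve the _id to whatever data is current at processing time.
async function handleMessage(channel, redis, msg) {
  const { _id } = JSON.parse(msg.content.toString())
  const latest = JSON.parse(await redis.get(`content:${_id}`))
  // ...process `latest`; even if several queued messages carry the same _id,
  // each one sees the most recent data...
  channel.ack(msg)
}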
I am not sure removing a message is a good idea, if your requirement is to update the data as it comes so that the latest data is always maintained for the same id. The other thing is that, as messages are consumed in order, the last message's data will always be the one applied. So I don't see an issue here with RabbitMQ.
I have looked this issue up but could not find any sufficient information about it.
The Google Cloud Spanner client libraries handle sessions automatically, and the limit is 10,000 sessions per node; no problem so far.
I have a microservice application which also uses Google Cloud Functions. I am doing some specific database jobs in Cloud Functions and I'm calling those functions continuously. After a little while, Cloud Spanner starts to throw this error:
Too many active sessions in database, limit is 10000. Increase the node count to allow more sessions.
I know about the limits, but there is no operation that should cause my app to exceed them.
After I noticed this, I have two questions which I could not find any answer to:
1- Does Cloud Functions create a new session for every call? (I am using an HTTP trigger.)
Here is what I have done so far:
1- Here is an example Cloud Functions declaration of mine:
exports.myFunction = function myFunction(req, res) {}
Before I realized this issue, I was declaring my database instance outside of this scope:
const db = Spanner({projectId: '[my-project]'}).instance('[my-cs-instance]').database('[my-database]');
exports.myFunction = function myFunction(req, res) {}
After this issue, I put it inside the function scope like this, and close the database session once I'm done:
exports.myFunction = function myFunction(req, res) {
  const db = Spanner({projectId: '[my-project]'}).instance('[my-cs-instance]').database('[my-database]');
  // codes
  db.close();
}
That didn't change anything; it still exceeds the session limit after a while.
Do you have any experience what causes this? Is this related to Cloud Functions or Cloud Spanner itself?
2- If every transaction object uses one connection at a time, what happens in this scenario?
I have a REST endpoint apart from these Cloud Functions. It creates a database instance when it starts listening for HTTP requests, and I do not create any other instance during its lifecycle. At that endpoint, I do CRUD operations and use transactions, and they all use the same instance which I created at the start of the process. My experience is:
Sometimes transactions or other CRUD operations run with a bit of delay, but this does not happen all the time.
My question is:
Is that because, when a transaction starts, it locks the connection so that all other operations have to wait until it ends? If so, should I create independent database instances for transactions on that endpoint?
Thanks in advance
This has now been fixed per the issue opened at #89 and the fix at #91, and it is logged as #71987137 in the Google Issue Tracker.
If the issue persists, please report it on the Google issue tracker and they will re-open it to examine.