I have been looking for hours for an answer to my question. I think this question (Send email to localhost smtp server in Node.js) is almost the same, but the answer didn't help me.
So I have two VMs running CentOS 7, and I want VM 1 to send an email to VM 2. Here is what I have done:
VM 1: Postfix is installed and configured with: relayhost = [ip of vm 2]:25
VM 2: Port 25 is open. Here is the code that listens on port 25:
'use strict'
const fs = require('fs-extra')
const SMTPServer = require('smtp-server').SMTPServer
const simpleParser = require('mailparser').simpleParser
const date = require('date-and-time')
const builder = require('xmlbuilder') // 9.0.4

const say = msg => {
  console.log(msg)
}

const server = new SMTPServer({
  logger: true,
  //secure: true,
  //authOptional: true,
  //disabledCommands: ['AUTH'],
  disabledCommands: ['AUTH', 'STARTTLS'],
  // By default only PLAIN and LOGIN are enabled
  authMethods: ['PLAIN', 'LOGIN', 'CRAM-MD5'],
  onConnect: function (session, callback) {
    say('hello')
    callback()
  },
  onData: function (stream, session, callback) {
    say('received')
    callback()
  },
})

server.on('error', err => {
  say(`Error ${err.message}`)
})

server.listen(25)
From VM 2: When I send a mail to localhost with sendmail:
sendmail ldecaudin@localhost < email.txt
I can see the "hello" and the "received".
From VM 1: When I send a mail with sendmail (which is automatically relayed by Postfix to my VM 2) with:
sendmail ldecaudin@[192.168.56.101] < email.txt
I only see the "hello", so the connection works, but the "onData" function is never reached, so I can't get the stream I need.
Also, from my VM 1, I have a Node script that sends mails using Nodemailer, and that code works. When I use it, I get the same result ("hello" but no "received"), but there are 2 connections instead of one.
So I am totally lost. I tried adding an "onAuth" function and trying many options of "new SMTPServer".
I wonder if this is a problem with a port (but port 25 is open!), or maybe I forgot to configure something in Postfix. I don't know.
Can you help me please?
Thanks a lot in advance!
I might be late to answer, but I faced a similar issue a couple of hours before writing this answer. Answering in the hope that somebody will find it helpful in the future...
As per the SMTP standard, the client sends the MAIL FROM and RCPT TO commands before actually sending the body. In this Node.js library, the handler methods for those commands (MAIL FROM -> onMailFrom(...) and RCPT TO -> onRcptTo(...)) get called and ARE EXPECTED TO CALL callback() in order to send a success response on the connected SMTP socket.
In my case, I had written empty handlers and was not calling the callback in onMailFrom and onRcptTo. This was creating the problem.
Calling the callbacks from the above handlers fixed it.
Code sample when I faced the issue:
...
onMailFrom: (address, session, callback) => {},
onRcptTo: (address, session, callback) => {},
...
Code sample when I called callback:
...
onMailFrom: (address, session, callback) => {
  return callback();
},
onRcptTo: (address, session, callback) => {
  return callback();
},
...
A note: do check the other handler functions as well if this solution doesn't work. Debugging is what helped me identify the problem in my case.
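To make the pattern explicit, here is a minimal, self-contained sketch (not the exact code from either post above, so treat it as an illustration of the idea) where every handler, including onData, consumes its input and then calls callback():

const SMTPServer = require('smtp-server').SMTPServer

const server = new SMTPServer({
  disabledCommands: ['AUTH', 'STARTTLS'],
  onConnect(session, callback) {
    return callback() // accept the connection
  },
  onMailFrom(address, session, callback) {
    return callback() // accept the sender
  },
  onRcptTo(address, session, callback) {
    return callback() // accept the recipient
  },
  onData(stream, session, callback) {
    let raw = ''
    stream.on('data', chunk => { raw += chunk })
    stream.on('end', () => {
      console.log('received %d bytes', raw.length)
      return callback() // tell the client the message was accepted
    })
  }
})

server.listen(25)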
I'm having a problem trying to get a service URL discovered by Eureka.
I'm using eureka-js-client to connect to Eureka, and for testing purposes I've created two microservices, which I've called ms1 and ms2.
What I've tried is:
Start the Eureka server to allow services to register with it.
Start ms1 and register it with Eureka.
Start ms2, register it with Eureka, and get ms1's URL.
To accomplish this I've launched the Eureka server as a Spring Boot app using @EnableEurekaServer. This part works fine: I can access http://localhost:8761/ and see the dashboard.
Then, in my microservices I have this configuration:
this._client = new Eureka({
  instance: {
    app: 'ms1',
    instanceId: 'ms1',
    hostName: 'localhost',
    ipAddr: '127.0.0.1',
    statusPageUrl: `http://localhost:${port ? port : this._port}`,
    healthCheckUrl: `http://localhost:${port ? port : this._port}/health`,
    port: {
      '$': port ? port : this._port,
      '@enabled': true,
    },
    vipAddress: 'myvip',
    dataCenterInfo: {
      '@class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
      name: 'MyOwn',
    },
  },
  eureka: {
    host: 'localhost',
    port: 8761,
    servicePath: '/eureka/apps/',
  },
})
And the same for ms2, changing the name.
When I run the project it outputs registered with eureka: ms1/ms1 and the services seem to be registered in Eureka correctly:
But now the problem is getting the URL of one of the two services. From either of the two services, if I try to get the Eureka instances I always get an empty list.
I have this code:
let instances: any = this.getClient().getInstancesByAppId(microserviceName);
let instance = null;
let url = '';
if (instances != null && instances.length > 0) {
  instance = instances[0];
  let protocol = instance.securePort["@enabled"] == "true" ? "https" : "http";
  url = `${protocol}//${instance.ipAddr}:${instance.port.$}/`;
}
Where in "microserviceName" variable I've tried:
"ms1"
"MS1"
"ms1/ms1"
But the response is always an empty array with this output:
Unable to retrieve instances for appId: ms1
So, what's the problem? Have I missed something? I think the flow is correct:
Start Eureka server.
Register services into server.
Look for instances in the server.
Thanks in advance.
Finally I solved my own issue. The lookup code was fine (ms2 was able to find ms1 using the code I posted), so the problem was the following:
My ms2 file was like this:
EurekaService.getClient().start()
EurekaService.getUrl('ms1')
EurekaService.getClient()?.stop()
And it seems like EurekaService.getClient().start() does not block until it has finished (or until the client is available, or whatever), so the client is not up yet and can't find the ms1 instance.
Note that the getUrl() method contains the code provided in the OP:
let instances: any = this.getClient().getInstancesByAppId(microserviceName);
let instance = null;
...
So I've changed the code like this:
start()

async function start() {
  EurekaService.getClient().start()
  await new Promise(f => setTimeout(f, 1000));
  const url = EurekaService.getUrl('ms1')
  console.log("url = ", url)
  EurekaService.getClient()?.stop()
}
And it works perfectly; the output log is:
registered with eureka: ms2/ms2
url = http//127.0.0.1:8002/
de-registered with eureka: ms2/ms2
So, the start method is not async, so I can't use await or .then() on it; I have to set a timeout and wait for it to complete.
I don't know if there is a better way to do this, or whether, by the nature of the architecture, you simply can't control when the client is available.
By the way, for me, 1 second timeout is enough.
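For what it's worth, the versions of eureka-js-client I have seen also accept a completion callback in start(); assuming yours does too (please verify, this is an assumption), you could wrap it in a Promise instead of guessing a timeout. EurekaService and getUrl() below are the helpers from the post above:

async function start() {
  // Assumption: client.start(cb) invokes cb(error) once registration finishes.
  await new Promise((resolve, reject) => {
    EurekaService.getClient().start(err => (err ? reject(err) : resolve()))
  })

  const url = EurekaService.getUrl('ms1')
  console.log('url = ', url)

  EurekaService.getClient()?.stop()
}

start()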
I'm currently attempting to set up an XMPP client using Stanza.js:
https://github.com/legastero/stanza
I have a working server that accepts connections from a Gajim client, however when attempting to connect using Stanza.js's client.connect method, the server opens a websocket connection, but no events for authentication or session start ever fire.
The server logs do not show any plaintext password authentication attempts.
How can I actually see any of the Stanza logs to debug this issue?
import * as XMPP from 'stanza';

const config = {
  credentials: { jid: '[jid]', password: '[password]' },
  transports: { websocket: '[socketurl]', bosh: false },
};
const client = XMPP.createClient(config);

client.on('raw:*', (data) => {
  console.log('data', data);
});

client.connect();
The onconnect event does fire, but it is the only event that fires.
Is there a way to manually trigger authentication that isn't described in the documentation?
The raw event handler should be able to give you the logging you want, but in your code sample your callback has the wrong signature. Try the following.
client.on('raw:*', (direction, data) => {
  console.log(direction, data)
})
For reference, the docs state that the callback for the raw data event handler is
(direction: incoming | outgoing, data: string) => void
So the data that you are looking for is in the second argument, but your callback only has one argument (just the direction string "incoming" or "outgoing", although you have named the argument "data").
Once you fix the logging, I expect you will see that the stream immediately terminates with a stream error. Your config is incorrect: the jid and password should be top-level fields. Review the Stanza sample code; in the options for createClient there is no credentials object. Try the following:
const config = { jid: '[jid]', password: '[password]', transports: {websocket: '[socketurl]', bosh: false} };
Since your username and password are hidden inside an incorrect credentials object, Stanza does not see them, so you are effectively trying to connect with no username and password and no authentication is even attempted.
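Putting both fixes together, here is a sketch of what I would try (the session:started and auth:failed event names are how I remember them from the Stanza docs, so double-check them against the version you use):

import * as XMPP from 'stanza';

const client = XMPP.createClient({
  jid: '[jid]',                 // top-level, not inside a credentials object
  password: '[password]',
  transports: { websocket: '[socketurl]', bosh: false },
});

client.on('raw:*', (direction, data) => {
  console.log(direction, data); // the payload is the second argument
});

client.on('session:started', () => console.log('session started'));
client.on('auth:failed', () => console.log('authentication failed'));

client.connect();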
This issue happened to be caused by a configuration problem: the Jabber server was using plain authentication. Adding an additional line to the client definition file helped:
client.sasl.disable('X-OAUTH2')
Also adding
client.on('*', console.log)
offered more complete server logs.
How can I actually see any of the stanza logs to debug this issue?
If the connection is not encrypted, you can sniff the XMPP traffic with tools like
sudo tcpflow -i lo -Cg port 5222
You can force ejabberd to not allow encryption so that your clients don't use it, and then you can read the network traffic.
Alternatively, in ejabberd.yml you can set this, but it will probably generate a lot of log messages:
loglevel: debug
I am trying to send a message from one connected node server to the other using the following code in server.js and server1.js:
const hyperswarm = require('hyperswarm')
const crypto = require('crypto')

const swarm = hyperswarm()

// look for peers listed under this topic
const topic = crypto.createHash('sha256')
  .update('mycoolstuff')
  .digest()

swarm.join(topic, {
  lookup: true,   // find & connect to peers
  announce: true  // optional - announce self as a connection target
})

swarm.on('connection', (socket, details) => {
  //console.log('new connection!', details)
  // you can now use the socket as a stream, eg:
  process.stdin.pipe(socket).pipe(process.stdout)
})
The problem is that the message from one terminal is duplicated on the other.
For example, if I type the following in server.js's terminal:
test 123
I get the following in server1.js's:
test 123
test 123
. . . and vice versa
I can work around this by setting one of the two servers to not announce:
swarm.join(topic, {
  lookup: true,    // find & connect to peers
  announce: false  // <--------- don't announce, stops duplicates
})
But I would prefer that both servers announce.
What am I misunderstanding about sockets, stdin, or hyperswarm here?
Well, I found my own answer inside the node_modules folder for hyperswarm, in the file called example.js.
I added the following:
const {
  priority,
  status,
  retries,
  peer,
  client
} = details

if (client) process.stdin.pipe(socket)
else socket.pipe(process.stdout)
Which solved my problem.
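For context, here is roughly how the whole connection handler looks with that check. My understanding (an assumption on my part, not something taken from the hyperswarm docs) is that when both peers announce and look up, each side dials the other, so you typically end up with two sockets between the same pair of peers; piping stdin into both of them is what echoes every line twice, and splitting the pipe by role makes each direction use only one socket:

swarm.on('connection', (socket, details) => {
  const { client } = details

  if (client) {
    // connection we initiated: only send our keystrokes over it
    process.stdin.pipe(socket)
  } else {
    // connection the remote peer initiated: only print what arrives
    socket.pipe(process.stdout)
  }
})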
I followed this tutorial:
Running the CC26xx Contiki Examples
but instead of using the cc26xx-demo I used the cc26xx-web-demo and successfully managed to get everything up and running. I can access the 6lbr web page, and when I access the sensorTag page I see an MQTT configuration page as shown:
and if I click index in the sensorTag page (pic above) I get to see the data:
The question is: how can I write a simple Node.js file that uses the MQTT broker information to grab all the sensorTag sensor data and save it in a local object?
I tried to run this example but no luck:
var mqtt = require('mqtt')
client = mqtt.createClient(1883, '192.168.1.109');
client.subscribe(/* what do I write here? */);
client.on('message', function(topic, message) { console.log(message); });
I don't know what I'm doing wrong
UPDATE:
mqtt configuration page :
javascript file :
and I run the js with node and listen on port 1883:
tcpdump seems to detect MQTT packets on port 1883, but I can't seem to console.log the sensor data when I run the js file with node.
I went on the contiki wiki and came across this info
"You can also subscribe to topics and receive commands, but this will only work if you use "Org ID" != 'quickstart'. Thus, if you provide a different Org ID (do not forget the auth token!), the device will subscribe to:
iot-2/cmd/+/fmt/json"
Does this mean that the topic to subscribe to is quickstart? But even if that's so, I used '+/#', which subscribes to all topics, and still got nothing printed on the console.
Hope this works for you:
var mqtt = require('mqtt');
var fs = require('fs');

var options = { port: PORT, host: HOST }; /* user, password and other authentication options go here, if any */

var client = mqtt.connect('HOST', options);

client.on('connect', function () {
  client.subscribe("topic, command or data");
  client.publish("topic, command or data", data, function () {
  });
});

client.on('error', function () { });
client.on('offline', function () { });

client.on('message', function (topic, message) {
  console.log(message.toString());
});
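In case it helps for the cc26xx-web-demo specifically, here is a sketch that collects readings into a local object. The broker address is the one from the question, and iot-2/evt/status/fmt/json is, as far as I remember, the demo's default publish topic; both are assumptions you should verify against your MQTT configuration page:

var mqtt = require('mqtt');

// Assumed broker address (from the question) and assumed default publish
// topic of the cc26xx-web-demo; adjust both to match your config page.
var client = mqtt.connect('mqtt://192.168.1.109:1883');
var latest = {}; // local object holding the most recent sensor readings

client.on('connect', function () {
  client.subscribe('iot-2/evt/status/fmt/json'); // or '#' to see every topic
});

client.on('message', function (topic, message) {
  try {
    latest = JSON.parse(message.toString()); // the demo publishes JSON payloads
    console.log(topic, latest);
  } catch (e) {
    console.log('non-JSON payload on', topic, ':', message.toString());
  }
});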
I'm using Nodemailer to send mailings from my NodeJS / Express server. Instead of sending the mail directly, I want to wait 20 minutes before sending it. I think this feels more personal than sending the mail directly.
But I have no idea how to achieve this. I guess I don't need something like a NodeJS cron job such as the NodeCron package, or do I?
router.post('/', (req, res) => {
  const transporter = nodemailer.createTransport(smtpTransport({
    host: 'smtp.gmail.com',
    port: 465,
    auth: {
      user: 'noreply@domain.nl',
      pass: 'pass123'
    }
  }));

  const mailOptions = {
    from: `"${req.body.name}" <${req.body.email}>`,
    to: 'info@domain.nl',
    subject: 'Form send',
    html: `Content`
  };

  transporter.sendMail(mailOptions, (error, info) => {
    if (error) return res.status(500).json({ responseText: error });
    res.status(200).json({ responseText: 'Message send!' });
  });
});
My router looks as shown above. So when the post route is called, I want the request to wait 20 minutes. Instead of a cron job I want to execute the post just once, but with a bit of a delay. Any suggestions on how to do this?
Well, some folks may come here and tell you to use an external queue system and bla bla... But you could simply use plain old JavaScript to schedule the sending 20*60*1000 milliseconds into the future to get things started. :)
There's however a problem with your code: you're waiting for the mailer to succeed before sending the 200 'Message sent' response to the user. Call me a madman, but I'm pretty sure the user won't be staring at the browser window for 20 minutes, so you'll probably have to answer as soon as possible and then schedule the mail. Modifying your code:
router.post('/', (req, res) => {
  const DELAY = 20 * 60 * 1000 // min * secs * milliseconds

  const transporter = nodemailer.createTransport(smtpTransport({
    host: 'smtp.gmail.com',
    port: 465,
    auth: {
      user: 'noreply@domain.nl',
      pass: 'pass123'
    }
  }));

  const mailOptions = {
    from: `"${req.body.name}" <${req.body.email}>`,
    to: 'info@domain.nl',
    subject: 'Form send',
    html: `Content`
  };

  res.status(200).json({ responseText: 'Message queued for delivery' });

  setTimeout(function () {
    transporter.sendMail(mailOptions, (error, info) => {
      if (error)
        console.log('Mail failed!! :(')
      else
        console.log('Mail sent to ' + mailOptions.to)
    })
  }, DELAY);
});
There are however many possible flaws in this solution. If you're expecting big traffic on that endpoint, you could end up with many scheduled callbacks piling up in memory. In addition, if something fails, the user of course won't be able to know.
If this is a big / serious project, consider using that cron job package, or an external storage mechanism where you can queue these "pending" messages (Redis would do, and it's incredibly simple), and have a different process read tasks from there and perform the email sending.
EDIT: saw some more things in your code.
1) You probably don't need to create a new transport inside your POST handler; create it outside and reuse it.
2) In addition to the mentioned problems, if your server crashes no email will ever be sent.
3) If you still want to do it in a single Node.js app, instead of scheduling an email on every request to this endpoint, you'd be better off storing the email data (from, to, subject, body) somewhere and scheduling a function every 20 minutes that gets all pending emails, sends them one by one, and then reschedules itself to re-run 20 minutes later. This will keep your memory usage low. A server crash would still lose all pending emails, but if you add Redis into the mix then you can simply grab all pending emails from Redis when your app starts. A rough sketch of this idea follows.
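Something along these lines, for example (an in-memory sketch only; pendingEmails, queueEmail and flushPendingEmails are made-up names, and transporter is the one you already create above):

const pendingEmails = []

// Called from the POST handler instead of scheduling a setTimeout per request.
function queueEmail(mailOptions) {
  pendingEmails.push(mailOptions)
}

function flushPendingEmails() {
  const batch = pendingEmails.splice(0, pendingEmails.length)
  batch.forEach(mailOptions => {
    transporter.sendMail(mailOptions, error => {
      if (error) console.log('Mail failed:', error.message)
      else console.log('Mail sent to ' + mailOptions.to)
    })
  })
  // reschedule the next run 20 minutes from now
  setTimeout(flushPendingEmails, 20 * 60 * 1000)
}

setTimeout(flushPendingEmails, 20 * 60 * 1000)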
Probably too much for an answer, sorry if it wasn't needed! :)
I think CharlieBrown's answer is correct, and since I had two answers in mind while reading the question, I thank him for simplifying my answer to be the alternative to his.
setTimeout is actually a good idea, but it has a drawback: if there is any reason to stop the server code (server restart, module installation, file management, etc.), the callbacks scheduled by setTimeout will not be executed and some users will not receive their emails.
If the problem above is serious enough, then you might want to store the scheduled emails in a database or in Redis and use a cron job to periodically check the email set and send the emails if there are any. A minimal sketch of that idea follows.
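A rough sketch, assuming the node-cron package mentioned in the question and a hypothetical storage layer (getPendingEmails and markAsSent are placeholder names, not real APIs; transporter is the Nodemailer transport from the question):

const cron = require('node-cron')

// Every 20 minutes, pull queued messages from storage and try to send them.
cron.schedule('*/20 * * * *', async () => {
  const pending = await getPendingEmails() // hypothetical: read from DB / Redis
  for (const mailOptions of pending) {
    try {
      await transporter.sendMail(mailOptions)
      await markAsSent(mailOptions)          // hypothetical: remove from the queue
    } catch (err) {
      console.log('Mail failed, will retry on the next run:', err.message)
    }
  }
})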
I think that either this answer or CharlieBrown's should suffice for you, depending on your preferences and needs.