Exception from WebAuthn Authentication API with Yubikey - javascript

I'm working on adding WebAuthn support to a newly-minted web site and am running into a problem during the navigator.credentials.get() call. The client is Firefox 85.0 on Fedora 33. In case it matters, the server is Apache httpd on Fedora 33. The token is either a Yubikey 4 or a Yubikey 5NFC (the results are the same). This is the function making the API call. Obviously the credential IDs hard-coded here are for testing, not part of the final product:
function handleUserAuthenticationResponse(r) {
    var cid1 = {type: "public-key", id: base64ToArrayBuffer("gL0Ig10uA2tn8L0kn2L9FoGqIPSrqvc1lLBwgQhcVDa200b1P94kPv94T6O1bDZyYRrfLbTrLRsubDxuYUxHCg==")};
    var cid2 = {type: "public-key", id: base64ToArrayBuffer("tjW1RPqtAJm69I/qeV7eRFJx6h87J3NPeJ/hhbkjttcCc2BWHQ2v2nueoKBSGabw1eYsT8S+lhJv1l1mYWX+Uw==")};
    var options = {
        rpID: "http://localhost",
        challenge: base64ToArrayBuffer(r.challenge),
        allowCredentials: [cid1, cid2],
        timeout: 60000
    };
    if (!window.PublicKeyCredential) {
        throw new Error("Unable to access credentials interface");
    }
    navigator.credentials.get({"publicKey": options})
        .then(assertion => handleTokenAssertion(assertion))
        .catch(e => {console.log("Error fetching token assertion:", e);});
}
function base64ToArrayBuffer(base64) {
    var binary_string = window.atob(base64);
    var len = binary_string.length;
    var bytes = new Uint8Array(len);
    for (var i = 0; i < len; i++) {
        bytes[i] = binary_string.charCodeAt(i);
    }
    return bytes.buffer;
}
function handleTokenAssertion(a) {
    alert("Got an assertion!");
}
Everything seems to work, the Yubikey LED blinks, I press the touchpad, but then I get back an exception:
Error fetching token assertion: DOMException: An attempt was made to use an object that is not, or is no longer, usable
This seems to be a bit of a Firefox catch-all. It could indicate that the token doesn't match one of the entries in allowCredentials[], or perhaps other things; it's hard to tell. The FIDO2 credential on the Yubikey was created with the fido2-cred(1) tool packaged with the libfido2 source. In this case the credentialId is from the fido2-cred -M output:
CuCEGL10uPhBmNCY4NsGaAz0gir/68UMGFQn0pfb6tc=
http://localhost
fido-u2f
WMSGBbi6CMINQvnkVRUYcYltDg3pgFlihvtzbRHuwBPipEEAAAAAAAAAAAAAAAAAAAAAAAAAAABAgL0Ig10uA2tn8L0kn2L9FoGqIPSrqvc1lLBwgQhcVDa200b1P94kPv94T6O1bDZyYRrfLbTrLRsubDxuYUxHCqUBAgMmIAEhWCA5itRRCBO0lnsztvPvI1waVZLBCZ1XMJjOvlN2oZmBCyJYILFaRjThs5Paj1sOp81iID1LpUBYHJhp4dizC0eI/RrE
gL0Ig10uA2tn8L0kn2L9FoGqIPSrqvc1lLBwgQhcVDa200b1P94kPv94T6O1bDZyYRrfLbTrLRsubDxuYUxHCg==
MEQCIFfs8PagKhNnDgzxfurVzdkTDVTT6ixKk0ak/2qrbSPUAiAf64w390rX1cyY58JgSC/Ac97w6TLcYKuqxOSn5lxV0g==
<long assertion certificate>
You can see the credentialId on line 5, and that it matches cid1 in the Javascript function. Furthermore, if I request an assertion from the token using this credentialId and all else identical (except the challenge) with fido2-assert -G, everything works fine: I get the assertion and it verifies correctly using fido2-assert -V.
Without a more meaningful diagnostic it's hard to know what to try, so I thought I would ask here and see if anyone has any hints. Perhaps I've made some basic error with either Javascript or the credentials API?
Thanks!
UPDATE: One possibility I thought might be worth trying was removing the scheme from the RP ID but that made no difference.
UPDATE: Looking at the firefox source code, the error is apparently NS_ERROR_DOM_INVALID_STATE_ERR, which covers several different situations but in this case is most likely a translation of U2F_ERROR_INVALID_STATE (in dom/webauthn/U2FHIDTokenManager.h). U2F_ERROR_INVALID_STATE, in turn, is defined in third_party/rust/authenticator/src/u2fhid-capi.h as a simple numerical value (3), with no indication of where the value came from. Perhaps it's defined by the underlying HID driver for the Yubikey, but it's not clear what driver that corresponds to. The hunt continues...
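Side note for anyone else chasing one of these opaque errors: the DOMException's name property is usually more specific than the message text Firefox prints. A minimal sketch of reporting it from the catch handler above, assuming the failure surfaces as a standard DOMException:

navigator.credentials.get({"publicKey": options})
    .then(assertion => handleTokenAssertion(assertion))
    .catch(e => {
        // e.name is something like "InvalidStateError" or "NotAllowedError",
        // which narrows things down more than the generic message text.
        console.log("Error fetching token assertion:", e.name, "-", e.message);
    });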

It turns out that the problem was indeed the format of the relying party ID. Based on example code from the net (which may have worked with other browsers or versions of the code?), I initially used the full scheme://domain format for the rpID (so in my code above, http://localhost), but it turns out that what's needed is just the domain (localhost). Modifying the rpID in this way allows the assertion process to succeed.
Initially I thought this did not work, but it turned out that I'd simply forgotten to commit the change. Having belatedly done that, it works.
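For reference, a minimal sketch of the corrected options object (note that the property name defined by the WebAuthn API is rpId, and its value is the bare domain, with no scheme):

var options = {
    rpId: "localhost",
    challenge: base64ToArrayBuffer(r.challenge),
    allowCredentials: [cid1, cid2],
    timeout: 60000
};

navigator.credentials.get({"publicKey": options})
    .then(assertion => handleTokenAssertion(assertion))
    .catch(e => {console.log("Error fetching token assertion:", e);});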

Why does jquery return different values than the values submitted?

Update:
Please see the answer noted below as, ultimately, the problem had nothing to do with jQuery.
=============
Issue:
I submit an object to jquery to convert into a serialized string that will become part of a "POST" request to a server, and the data returned from the serialization request is different than the data sent on many occasions.
An example:
The JavaScript code that implements the server POST request:
function send_data(gpg_data) {
    var query_string;
    query_string = '?' + $.param(gpg_data, traditional = true);
    console.log('gpg_data =', gpg_data);
    console.log('query_string =', query_string);
    $.post(server_address + query_string);
    return;
}
This is the structure sent to the jquery param() function.
(copied from the browser console in developer mode.)
gpg_data =
{controller_status: 'Connected', motion_state: 'Stopped', angle_dir: 'Stopped', time_stamp: 21442, x_axis: 0, …}
angle_dir: "Stopped"
controller_status: "Connected"
force: 0
head_enable: 0
head_x_axis: 0
head_y_axis: 0
motion_state: "Stopped"
time_stamp: 21490
trigger_1: 0
trigger_2: 0
x_axis: 0
y_axis: "0.00"
. . . and the returned "query string" was:
query_string = ?controller_status=Connected&motion_state=Stopped&angle_dir=Stopped&time_stamp=21282&x_axis=0&y_axis=0.00&head_x_axis=0&head_y_axis=0&force=0&trigger_1=1&trigger_2=1&head_enable=0
The data received by the server is:
ImmutableMultiDict([('controller_status', 'Connected'), ('motion_state', 'Stopped'), ('angle_dir', 'Stopped'), ('time_stamp', '21282'), ('x_axis', '0'), ('y_axis', '0.00'), ('head_x_axis', '0'), ('head_y_axis', '0'), ('force', '0'), ('trigger_1', '1'), ('trigger_2', '1'), ('head_enable', '0')])
For example, note that "trigger_1" comes back as 1 when the value in the object sent was zero.
I have tried passing traditional = true to revert to an earlier style of query handling, as some articles suggested, but that did not work. I tried this with jQuery 3.2 and 3.6.
I am not sure exactly how jquery manages to munge the data so I have no idea where to look.
I have looked at my script and at the unpacked jquery code, and I can make no sense out of why or how it does what it does.
Any help understanding this would be appreciated.
P.S.
web searches on "troubleshooting jquery" returned very complex replies that had more to do with editing e-commerce web pages with fancy buttons and logins than with simply serializing data.
P.P.S.
I am tempted to just chuck the jquery and write my own serialization routine. (grrrr!)
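If you do end up rolling your own, a minimal sketch using the built-in URLSearchParams API would look something like this (serialize_gpg_data is a hypothetical name, not part of the code above):

// Hypothetical replacement for $.param(): serialize a flat object of
// strings/numbers into a query string with the standard URLSearchParams.
function serialize_gpg_data(gpg_data) {
    var params = new URLSearchParams();
    Object.keys(gpg_data).forEach(function (key) {
        params.append(key, gpg_data[key]);
    });
    return '?' + params.toString();
}

// Usage inside send_data(): query_string = serialize_gpg_data(gpg_data);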
===================
Update:
As requested, a link to the browser-side context.
To run: unpack the zip file in a folder somewhere and attach an analog joystick/gamepad to any USB port, then launch index.html in a local browser.  Note that a purely digital gamepad - with buttons only or with a joystick that acts like four buttons - won't work.
You will want to try moving joystick axes 1 and 2, (programmatically axes 0 and 1) and use the first (0th) trigger button.
You will get a zillion CORS errors and it will complain bitterly that it cannot reach the server, but the server side context requires a GoPiGo-3 robot running GoPiGo O/S 3.0.1, so I did not include it.
Note: This does not work in Firefox as Firefox absolutely requires a "secure context" to use the Gamepad API.  It does work in the current version of Chrome, (Version 97.0.4692.99 (Official Build) (64-bit)), but throws warnings about requiring a secure context.
Please also note that I have tried everything I know to troubleshoot the offending JavaScript, but debugging code that depends on real-time event handling in a browser is something I have not figured out how to do, despite continuous searching and effort. Any advice on how to do this would be appreciated!
======================
Update:
Researching debugging JavaScript in Chrome disclosed an interesting tidbit:
Including the line // @ts-check as the first line of the JavaScript file turns on additional "linting"-style checks which, mostly, were a question of adding "var" to the beginning of variable declarations.
However. . . .
There was one comment it made:
gopigo3_joystick.x_axis = Number.parseFloat((jsdata.axes[0]).toFixed(2));
gopigo3_joystick.y_axis = Number.parseFloat(jsdata.axes[1]).toFixed(2);
It complained that I could not assign a string to gopigo3_joystick.y_axis (or something like that), and I was scratching my head - that was one of the pesky problems I was trying to solve!
If you look closely at that second line, you will notice I forgot a pair of parentheses; that second line should look like this:
gopigo3_joystick.y_axis = Number.parseFloat((jsdata.axes[1]).toFixed(2));
Problem solved - at least with respect to that problem.
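To make the difference concrete, here is what each form of that expression actually produces (a small illustrative snippet, not from the original code):

var axisValue = 0.004999;

// With the parentheses: toFixed(2) runs first and produces the string "0.00",
// then parseFloat turns it back into the number 0.
Number.parseFloat((axisValue).toFixed(2));   // -> 0 (a number)

// Without them: parseFloat runs first, and toFixed(2) converts the result
// back into a string.
Number.parseFloat(axisValue).toFixed(2);     // -> "0.00" (a string)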
I figured it out and it had nothing to do with jquery.
Apparently two things are true:
The state of the gpg_data object's structure is "computed" (a snapshot is taken) the first time the JavaScript engine sees the structure, and that is the state that gets saved, even though the values may change later on. In other words, the value shown is likely totally bogus.
Note: This may only be true for Chrome. Previous experiments with Firefox showed that these structures were updated each time they were encountered and the values seen in the console were valid. Since Firefox now absolutely requires a secure context to use the gamepad API, I could not use Firefox for debugging.
I was trying to be "too clever". Given the following code snippet:
function is_something_happening(old_time, gopigo3_joystick) {
    if (gopigo3_joystick.trigger_1 == 1 || gopigo3_joystick.head_enable == 1) {
        if (old_time != Number.parseFloat((gopigo3_joystick.time_stamp).toFixed(0))) {
            send_data(gopigo3_joystick);
            old_time = gopigo3_joystick.time_stamp;
        }
    }
    return;
}
The idea behind this particular construction was to determine if "something interesting" is happening, where "something interesting" is defined as:
A keypress, (handled separately)
A joystick movement if either the primary trigger or the pinky trigger is pressed.
Movement without any trigger pressed is ignored so that if the user accidentally bumps against the joystick, the robot doesn't go running around.
Therefore the joystick data only gets updated if the trigger is pressed. In other words, trigger "release" events - the trigger is now = 0 - are not recorded.
The combination of these two things - Chrome taking a "snapshot" of object values once and once only (or not keeping them current) and the trigger value persisting - led me to believe that jQuery was the problem, since the values appeared to be different on each side of the jQuery call.
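One way to sidestep the console-snapshot ambiguity when debugging this kind of thing is to log a serialized copy of the object, which freezes the values at the moment of the call (a simple sketch, assuming the object is plain JSON-serializable data):

// Logs an immutable snapshot of gopigo3_joystick as it is at this instant,
// rather than a live reference that the console may re-read later.
console.log('joystick snapshot:', JSON.parse(JSON.stringify(gopigo3_joystick)));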

IOREDIS - Error Trying to Migrate from Redis to KeyDB

We had been using Redis for quite a while until we came to the conclusion that moving to KeyDB might be a good choice for its features.
Environment
OS: Centos 7
NodeJs: v12.18.0
Redis: v6.0.5
Targeted KeyDB: v0.0.0 (git:1069d0b4) // keydb-cli -v showed this. Installed Using Docker.
ioredis: v4.17.3
pm2: v4.2.1 // used for clustering my application.
Background
Referring to the KeyDB documentation, KeyDB is compatible with the latest version of Redis.
KeyDB remains fully compatible with Redis modules API and protocol. As such a migration from Redis to KeyDB is very simple and would be similar to what you would expect migrating in a Redis to Redis scenario. https://docs.keydb.dev/docs/migration/
In the same page they provide a list of redis clients which are compatible with KeyDB. The list contains ioredis which I'm using.
KeyDB is compatible with all Redis clients as listed here so this should not be a concern. Simply use your client as you would with Redis.
https://docs.keydb.dev/docs/migration/
Problem
As the documentation says, I should be able to migrate in a few hours. Well, that was not the case! At least not for me! I spent the last 3 days searching the internet for a solution and came to the conclusion that I should write to Stack Overflow :)
The issue is somewhat interesting. The client actually works with KeyDB, and the process does set and retrieve keys (though it may be losing some data around the error; I'm not sure). But about 10% of the time it gives me the following error, and then continues to work again after a while. As I'm using Redis for storing sessions and other data in my production environment, I cannot risk ignoring such a persistent error.
error: message=write EPIPE, stack=Error: write EPIPE
./app-error-1.log:37: at WriteWrap.onWriteComplete [as oncomplete] (internal/stream_base_commons.js:92:16), errno=EPIPE, code=EPIPE, syscall=write
I searched nearly the whole internet for this error, but nobody provides a solution or an explanation of what is going wrong.
Luckily the process sometimes shows a stack trace for the error. It points to lib/redis/index.ts:711 inside the ioredis code, a line whose purpose I can't make out.
(stream || this.stream).write(command.toWritable());
https://github.com/luin/ioredis/blob/master/lib/redis/index.ts#L711
I found some issues on the ioredis GitHub repository mentioning an EPIPE error, but most of them were about error-handling details and all were marked as resolved.
I also found some general EPIPE errors on Google (most of them about socket.io, which is not something I use).
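For what it's worth, ioredis does let you observe these connection-level errors and control reconnection instead of letting them bubble up unhandled. A minimal sketch (the connection details are placeholders, and this is not presented as a fix for the underlying EPIPE):

const Redis = require('ioredis');

// Placeholder host/port; point this at your KeyDB instance.
const client = new Redis({
    host: '127.0.0.1',
    port: 6379,
    // Reconnect with a capped back-off instead of giving up.
    retryStrategy: (times) => Math.min(times * 100, 2000),
});

// Log socket-level errors (EPIPE, ECONNRESET, ...) instead of crashing.
client.on('error', (err) => console.error('ioredis error:', err.message));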
Wrap Up
What is wrong with this thing?
Since no one wrote an answer by the end of the bounty, I am writing up my experience of solving the issue for people who run into this error later.
Please note that this is not a canonical answer; it is rather a workaround.
I'll start by sharing what was happening.
We were trying to migrate from a Redis server hosting nearly 600 000 keys. The standard migration process was taking a lot of time to transfer that many keys from Redis to KeyDB, so I went with a different approach.
Our KeyDB runs on two Active-Active replica servers. I will provide the link for those who are wondering how this setup works:
https://medium.com/faun/failover-redis-like-cluster-from-two-masters-with-keydb-9ab8e806b66c
The solution was to rebuild our Redis data using some MongoDB aggregation and do the writes to KeyDB in batches.
Here is a simulation (not exactly the source, and I did not test it for syntax errors):
const startPoint =
    (Number.parseInt(process.env.NODE_APP_INSTANCE) || 0) * 40000;
let skip = 0 + startPoint;
let limit = 1000;
let results = await SomeMongooseSchema.find({someQueries}).limit(1000).skip(skip);
let counter = 0;

while (results.length) {
    if (counter > 39) break;
    for (const res of results) {
        const item = {
            key: '',
            value: ''
        };
        // do some build ups on item
        ...
        // end
        app.ioRedisClient.set(item.key, item.value);
    }
    counter++;
    skip = counter * limit + startPoint; // was "i * limit", but "i" is not defined here
    results = await SomeMongooseSchema.find({someQueries}).limit(limit).skip(skip);
}
Running this code in 16 processes with pm2 set all the keys in KeyDB in about 45 minutes (compared to 4-5 hours):
pm2 start app.js -i 16
When we ran the code against a Redis server it worked fine, but against KeyDB it gave the following error:
error: message=write EPIPE, stack=Error: write EPIPE
./app-error-1.log:37: at WriteWrap.onWriteComplete [as oncomplete] (internal/stream_base_commons.js:92:16), errno=EPIPE, code=EPIPE, syscall=write
First I tuned the code to use a transaction instead of setting every key separately, and added a 1-second gap after each batch of 1000 operations. The code changed to the following:
const startPoint =
    (Number.parseInt(process.env.NODE_APP_INSTANCE) || 0) * 40000;
let skip = 0 + startPoint;
let limit = 1000;
let results = await SomeMongooseSchema.find({someQueries}).limit(1000).skip(skip);
let counter = 0;

while (results.length) {
    if (counter > 39) break;
    // Build a fresh MULTI batch for each page of results
    // (moved inside the loop so a batch is not reused after exec()).
    const batch = app.ioredisClient.multi();
    for (const res of results) {
        const item = {
            key: '',
            value: ''
        };
        // do some build ups on item
        ...
        // end
        batch.set(item.key, item.value);
    }
    counter++;
    await batch.exec();
    await sleep(); // assumed helper: resolves after roughly 1 second
    skip = counter * limit + startPoint; // was "i * limit", but "i" is not defined here
    results = await SomeMongooseSchema.find({someQueries}).limit(limit).skip(skip);
}
This reduced both the error rate and the run time, which dropped to 20 minutes. But the error still persisted.
I suspected that the error might be due to some permission problem in the Docker build, so I asked our server administrator to check and, if possible, remove the Docker version and install KeyDB from the rpm repository:
https://download.keydb.dev/packages/rpm/centos7/x86_64/
Once that was done, it worked. All the errors vanished, and we successfully managed to migrate in 20 minutes.
I don't consider this a real answer, but it should be useful for some experts to figure out what was actually wrong.

Jimp.read creating error - zlib binding closed

I am working using Node and Discord.js, making edits to the source code of a Discord bot for a client. As such I won't be able to provide the full source files as the vast majority of the code isn't mine and I'd rather not release the client's code just in case - but I'll be posting the snippets written by me that are relevant to the question.
The task involves making the bot generate an image highlighting the 'daily' items in the Fortnite game shop. Basically, a background/template image, which will have the images of various shop items overlayed onto boxes in the template image. To accomplish this, I've been attempting to use Jimp for the image manipulation/generation involved. However, I've run across a strange issue that only seems to be a problem when the images came from the API that provides the Fortnite item pngs.
This API returns (among other things) a URL to the image, which was what I initially tried to use to read from with Jimp. (Note that I can't actually provide any links to the API docs as it's in a closed beta; I only have access to it because the client gave me their token so I could work on it.) Jimp.read is meant to take an img URL that it processes into a Jimp image - and this seems to work fine when I use an image URL from any other source. When feeding it the URL from this API, though, it causes an exception which console.logs as:
AssertionError [ERR_ASSERTION]: zlib binding closed
(followed by the rest of a stack trace, which I'll post in full down below).
I've been beating my head against a wall for several hours now trying to break through this: Googling, trying to create workarounds, and trying out alternative libraries, but I still haven't been able to get anywhere. I tried loading the image into a Buffer and feeding that into Jimp.read, but I get the exact same error, word for word. I also tried using the new Jimp( ... ) constructor instead, but that didn't work either.
I've also been Googling to try and find an answer but the zlib binding closed error seems to be extremely uncommon and there were very few mentions of it in any context, and no mentions of it in relation to Jimp that I could find. Googling "zlib binding closed" within quotes provided me only 19 results period. If nothing else, if anyone knows what this error means, that would help me have a better idea where to look to fix it.
I've tried looking into alternatives to the Jimp library, but as far as JavaScript image-manipulation libraries go, the Canvas API requires a DOM, and I just couldn't get Caman to install.
I don't generally ask things on StackOverflow but I couldn't find instances of this problem anywhere. Possible solutions or even just explanations of what the error could mean would be extremely helpful, also if anyone has suggestions for a good alternative to Jimp in the case I can't fix this.
(Code snippets/stack traces below, I probably missed some important stuff since I'm tired and completely brainfried from working on this, so let me know if you need anything else from me)
URL Attempt:
Jimp.read("https://image.fnbr.co/outfit/5b90ec38262b40c2dcc98379/icon.png")
.then(image => {
message.channel.send("jimp", {
file: image
});
})
.catch(err => {
console.log(err);
});") // Should just return a URL string
.then(image => {
message.channel.send("jimp", {
file: image
});
})
.catch(err => {
console.log(err);
});
Buffer Attempt:
request.get("https://image.fnbr.co/outfit/5b90ec38262b40c2dcc98379/icon.png", function(error, response, body) {
if (!error && response.statusCode == 200) {
var buffer = new Buffer(body);
Jimp.read(buffer)
.then(image => {
message.channel.send("jimp", {
file: image
});
})
.catch(err => {
console.log(err);
});
} else {
console.log("8(");
}
});
^ Ultimately the above will be getting the image URLs based on the 'daily' results from the shop but for now I'm just trying to get them to work on a hard-coded URL. All URLs from that API follow the same format as the one used here.
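One detail worth knowing about the buffer attempt: by default the request library decodes the response body as a string, so wrapping it in a Buffer afterwards can corrupt binary PNG data. Passing encoding: null asks request for a raw Buffer instead. A sketch of that variation (offered as something to try, not a confirmed fix for the zlib error):

// Ask request for the raw bytes instead of a decoded string,
// then hand the Buffer straight to Jimp.
request.get({ url: "https://image.fnbr.co/outfit/5b90ec38262b40c2dcc98379/icon.png", encoding: null }, function(error, response, body) {
    if (!error && response.statusCode == 200) {
        Jimp.read(body)   // body is already a Buffer here
            .then(image => {
                message.channel.send("jimp", { file: image });
            })
            .catch(err => console.log(err));
    }
});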
Full console.log(err) Console Output:
{ AssertionError [ERR_ASSERTION]: zlib binding closed
at Inflate._processChunk (C:\Users\(user)\Documents\dev\(project)\node_modules\pngjs\lib\sync-inflate.js:108:3)
at zlibBufferSync (C:\Users\(user)\Documents\dev\(project)\node_modules\pngjs\lib\sync-inflate.js:151:17)
at inflateSync (C:\Users\(user)\Documents\dev\(project)\node_modules\pngjs\lib\sync-inflate.js:155:10)
at module.exports (C:\Users\(user)\Documents\dev\(project)\node_modules\pngjs\lib\parser-sync.js:79:20)
at Object.exports.read [as image/png] (C:\Users\(user)\Documents\dev\(project)\node_modules\pngjs\lib\png-sync.js:10:10)
at Jimp.parseBitmap (C:\Users\(user)\Documents\dev\(project)\node_modules\@jimp\core\dist\utils\image-bitmap.js:117:53)
at new Jimp (C:\Users\(user)\Documents\dev\(project)\node_modules\@jimp\core\dist\index.js:425:32)
at _construct (C:\Users\(user)\Documents\dev\(project)\node_modules\@jimp\core\dist\index.js:100:393)
at C:\Users\(user)\Documents\dev\(project)\node_modules\@jimp\core\dist\index.js:885:5
at Promise (<anonymous>)
generatedMessage: false,
name: 'AssertionError [ERR_ASSERTION]',
code: 'ERR_ASSERTION',
actual: undefined,
expected: true,
operator: '==',
methodName: 'constructor' }
Managing packages with npm, running from a Windows computer.
I was having the same issue.
In my case I was testing with Node 10.15.3, but when I packaged my app with pkg, the embedded Node version was 8. Since zlib is not available in that version, the assert failed. That is why I was seeing the error.
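If you suspect a packaged binary is embedding a different Node version than the one you developed against, a quick sanity check is to log the runtime version from inside the app itself:

// Prints the Node.js version the running binary actually embeds,
// e.g. "v8.x" versus the v10.15.3 used during development.
console.log('Running on Node', process.version);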

SignalR client calls fail for certain languages

I've been working on a project for a client and had a lot of fun integrating SignalR into the system.
Everything seems to work really well and the client is really excited about how the SignalR gives true real-time feedback for their application.
For the most part everything has gone swimmingly, however I've come into a strange issue I simply cannot pin down.
Everything works great for the following locales:
en-US
en-GB
it
nl
However these languages simply never get a callback from the hub:
fr
de
es
en-ZW - we use English Zimbabwe to check all the strings are translated.
I can step through the code right up until Clients.Client(ConnectionId).update(Result); (where ConnectionId is the correct connection ID and Result is the object ready to be serialized). With the first four languages this goes flawlessly and my JavaScript method fires with the expected output.
On the last four languages however, the method is fired, but nothing comes through to the other side. Nothing. Zip.
If I replace the Strings.fr.resx file with the default Strings.resx then my site functions as expected, but since the Strings.en-ZW.resx file is identical to Strings.resx (only each string is wrapped in [()]) I doubt that is the issue. I also tried using the fr locale with all unicode translations (`, é, â, etc) removed, but that didn't help.
I've been going over this for almost a full day now and found nothing that would indicate the issue, and the fact that en works fine and en-ZW does not really confuses me.
Anyone have any suggestions?
Hub method:
public class ClientHub : Hub
{
    [...]
    protected void UpdateRecords(List<Int32> ChangedValues)
    {
        using (var database = new DbContext())
        {
            foreach (Record R in database.Records.Where(Rc => ChangedValues.Contains(Rc.Id)))
            {
                SignalRFormattedRecord Serialized = new SignalRFormattedRecord(R);
                foreach (SavedFilter Filter in SavedFilters.ByRecord(R))
                {
                    // Next line is always called.
                    Clients.Client(Filter.ConnectionId).updateRow(Serialized);
                }
            }
        }
    }
    [...]
}
Javascript:
$.connection.clientHub.updateRow = function(value) {
    debugger;
    // update code works in all languages except FR, DE, ES and en-ZW.
};
$.connection.start();
Turns out the filtering system wasn't language agnostic where it should have been, and I was getting false positives due to dangling connections during debug.
I feel quite stupid now.

Node.js, Cygwin and Socket.io walk into a bar... Node.js throws ENOBUFS and everyone dies

I'm hoping someone here can help me out; I'm not having much luck figuring this out myself. I'm running node.js version 0.3.1 on Cygwin, using Connect and Socket.io. I seem to be having some random problems with DNS or something, I haven't quite figured it out. The end result is that the server runs fine, but when a browser attempts to connect to it the initial HTTP request works, Socket.io connects, and then the server dies (output below).
I don't think it has anything to do with the HTTP request, because the server gets a lot of data posted to it, and it was receiving requests and responding right up until my connection killed it. I've googled around and the closest thing I've found is DNS being set improperly. It's a network program meant to run only on an internal network, so I've set the nameserver x.x.x.x in my /etc/resolv.conf to the internal DNS. I've also added nameserver 8.8.8.8 in addition. I'm not sure what else to check, but I would be grateful for any help.
In node.exe.stackdump
Exception: STATUS_ACCESS_VIOLATION at eip=610C51B9
eax=00000000 ebx=00000001 ecx=00000000 edx=00000308 esi=00000000 edi=010FCCB0
ebp=010FCAEC esp=010FCAC4 program=\\?\E:\cygwin\usr\local\bin\node.exe, pid 3296, thread unknown (0xBEC)
cs=0023 ds=002B es=002B fs=0053 gs=002B ss=002B
Stack trace:
Frame Function Args
010FCAEC 610C51B9 (00000000, 00000000, 00000000, 00000000)
010FCBFC 610C5B55 (00000000, 00000000, 00000000, 00000000)
010FCCBC 610C693A (FFFFFFFF, FFFFFFFF, 750334F3, FFFFFFFE)
010FCD0C 61027CB2 (00000002, F4B994D5, 010FCE64, 00000002)
010FCD98 76306B59 (00000002, 010FCDD4, 763069A4, 00000002)
End of stack trace
Node Output:
node.js:50
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: ENOBUFS, No buffer space available
at doConnect (net.js:642:19)
at net.js:803:9
at dns.js:166:30
at IOWatcher.callback (dns.js:48:15)
EDIT
I'm hitting an LDAP server using http.createClient immediately after a client connects in order to get information, and that seems to be where the ENOBUFS problem originates. I've edited the source to include && errno != ENOBUFS, which now prevents the server from dying; however, the LDAP request still isn't working, and I'm not sure what would cause that. As I mentioned, this is an internal-only application, so I set the DNS servers in /etc/resolv.conf to the DNS servers that are applied to the host machine. Not sure if this is part of the issue?
EDIT 2
Here's some output from gdb --args ./node_g --debug ../myscript.js. I'm not sure if this is related to ENOBUFS, however, as it seems to be disconnecting immediately after connection with Socket.io
[New thread 672.0x100]
Error: dll starting at 0x76e30000 not found.
Error: dll starting at 0x76250000 not found.
Error: dll starting at 0x76e30000 not found.
Error: dll starting at 0x76f50000 not found.
[New thread 672.0xc90]
[New thread 672.0x448]
debugger listening on port 5858
[New thread 672.0xbf4]
14 Jan 18:48:57 - socket.io ready - accepting connections
[New thread 672.0xed4]
[New thread 672.0xd68]
[New thread 672.0x1244]
[New thread 672.0xf14]
14 Jan 18:49:02 - Initializing client with transport "websocket"
assertion "b[1] == 0" failed: file "../src/node.cc", line 933, function: ssize_t
node::DecodeWrite(char*, size_t, v8::Handle<v8::Value>, node::encoding)
Program received signal SIGABRT, Aborted.
0x7724f861 in ntdll!RtlUpdateClonedSRWLock ()
from /cygdrive/c/Windows/system32/ntdll.dll
(gdb) backtrace
#0 0x7724f861 in ntdll!RtlUpdateClonedSRWLock ()
from /cygdrive/c/Windows/system32/ntdll.dll
#1 0x7724f861 in ntdll!RtlUpdateClonedSRWLock ()
from /cygdrive/c/Windows/system32/ntdll.dll
#2 0x75030816 in WaitForSingleObjectEx ()
from /cygdrive/c/Windows/syswow64/KernelBase.dll
#3 0x0000035c in ?? ()
#4 0x00000000 in ?? ()
(gdb)
OK, I dug around a bit, and after your second edit I found this bug on the issue list.
It doesn't state whether this was encountered under Cygwin or not, but the error it is hitting leads down to this piece of code:
uint16_t *twobytebuf = new uint16_t[buflen];

str->Write(twobytebuf, 0, buflen, String::HINT_MANY_WRITES_EXPECTED);

for (size_t i = 0; i < buflen; i++) {
    unsigned char *b = reinterpret_cast<unsigned char*>(&twobytebuf[i]);
    assert(b[1] == 0); // this assertion fails
    buf[i] = b[0];
}
From what I can read (with my rusty C), it creates a new uint16_t array, writes the contents of the V8 string into it, and then checks that the conversion did not produce any values outside the range 0-255; that is exactly the check that fails here.
I couldn't find anything regarding whether this is a V8 issue or not.
Since the code was added in this commit, the only thing I can suggest is to try pulling the tree from a commit before that code was added, since all versions after it contain the crashing code.
If that works, I would recommend filing another bug report on the Node.js issue list, although I may do that myself later today.
Very hard to answer this one, but +1 for the subject line.
Node.js comes with a test suite alongside the main build; have you run it? I had built Node successfully, but because I'd omitted OpenSSL my web socket tests were failing. Installing it and rebuilding fixed the problem, and the test projects helped me diagnose the issue.
I suggest doing "make test" as http://nodejs.org/#download describes.
