IOREDIS - Error Trying to Migrate from Redis to KeyDB - javascript

We had been using Redis for quite a long time until we came to the conclusion that moving to KeyDB might be a good choice for its features.
Environment
OS: CentOS 7
NodeJs: v12.18.0
Redis: v6.0.5
Targeted KeyDB: v0.0.0 (git:1069d0b4) // keydb-cli -v showed this. Installed using Docker.
ioredis: v4.17.3
pm2: v4.2.1 // used for clustering my application.
Background
Referring to the KeyDB documentation, KeyDB is compatible with the latest version of Redis.
KeyDB remains fully compatible with Redis modules API and protocol. As such a migration from Redis to KeyDB is very simple and would be similar to what you would expect migrating in a Redis to Redis scenario. https://docs.keydb.dev/docs/migration/
On the same page they provide a list of Redis clients that are compatible with KeyDB. The list contains ioredis, which I'm using.
KeyDB is compatible with all Redis clients as listed here so this should not be a concern. Simply use your client as you would with Redis.
https://docs.keydb.dev/docs/migration/
Problem
As said in the documentation, I should be able to migrate to KeyDB in a few hours. Well, that was not the case! At least not for me. I spent the last 3 days searching the internet for a solution, and finally concluded that I should write to Stack Overflow :)
The issue is somewhat interesting. The client actually works with KeyDB, and the process does set and retrieve keys (I'm not sure, but it may lose some data during the error). But about 10% of the time it gives me the following error, and then continues to work again after a while. As I'm using Redis for storing sessions and other data in my production environment, I cannot risk ignoring such a persistent error.
error: message=write EPIPE, stack=Error: write EPIPE
./app-error-1.log:37: at WriteWrap.onWriteComplete [as oncomplete] (internal/stream_base_commons.js:92:16), errno=EPIPE, code=EPIPE, syscall=write
I searched nearly all of the internet for this error, but no one provides a solution or an explanation of what is going wrong.
Luckily the process "sometimes" shows a stack trace for the error. It points to lib/redis/index.ts:711 inside the ioredis code, which I have no idea what it does.
(stream || this.stream).write(command.toWritable());
https://github.com/luin/ioredis/blob/master/lib/redis/index.ts#L711
I found some issues in the ioredis GitHub repository mentioning EPIPE errors, but most of them were about error handling and all were marked as resolved.
I also found some general EPIPE errors on Google (most of them about socket.io, which is not something I use).
Wrap Up
What is wrong with this thing?

Since no one wrote an answer by the end of the bounty, I am writing up my experience of solving the issue for people who hit this error later.
Please note that this is not a canonical answer; it is rather a workaround.
I'll start by sharing what was happening.
We were trying to migrate from a Redis server hosting nearly 600,000 keys. The standard migration process was taking a lot of time to transfer that amount of keys from Redis to KeyDB, so I came up with a different solution.
Our KeyDB runs on two Active-Active replica servers. Here is a link for those wondering how this setup works:
https://medium.com/faun/failover-redis-like-cluster-from-two-masters-with-keydb-9ab8e806b66c
The solution was to rebuild our Redis data using some MongoDB aggregation and to run batch operations against KeyDB.
Here is a simulation (not exactly the source; I also did not test it for syntax errors):
const startPoint =
  (Number.parseInt(process.env.NODE_APP_INSTANCE) || 0) * 40000;
let skip = startPoint;
const limit = 1000;
let results = await SomeMongooseSchema.find({someQueries}).limit(limit).skip(skip);
let counter = 0;
while (results.length) {
  if (counter > 39) break;
  for (const res of results) {
    const item = {
      key: '',
      value: ''
    };
    // do some build-up on item
    ...
    app.ioRedisClient.set(item.key, item.value);
  }
  counter++;
  skip = counter * limit + startPoint; // advance by one page per pass
  results = await SomeMongooseSchema.find({someQueries}).limit(limit).skip(skip);
}
Running this code in 16 processes using pm2 sets all the keys in KeyDB in about 45 minutes (compared to 4-5 hours).
pm2 start app.js -i 16
When we ran the code against a Redis server it worked, but against KeyDB it gave the following error:
error: message=write EPIPE, stack=Error: write EPIPE
./app-error-1.log:37: at WriteWrap.onWriteComplete [as oncomplete] (internal/stream_base_commons.js:92:16), errno=EPIPE, code=EPIPE, syscall=write
First, I tuned the code to build a transaction instead of setting every key separately, and added a 1-second gap between each batch of 1000 operations. The code changed as follows:
const startPoint =
  (Number.parseInt(process.env.NODE_APP_INSTANCE) || 0) * 40000;
let skip = startPoint;
const limit = 1000;
let results = await SomeMongooseSchema.find({someQueries}).limit(limit).skip(skip);
let counter = 0;
while (results.length) {
  if (counter > 39) break;
  const batch = app.ioRedisClient.multi(); // fresh transaction per batch
  for (const res of results) {
    const item = {
      key: '',
      value: ''
    };
    // do some build-up on item
    ...
    batch.set(item.key, item.value);
  }
  counter++;
  await batch.exec();
  await sleep();
  skip = counter * limit + startPoint;
  results = await SomeMongooseSchema.find({someQueries}).limit(limit).skip(skip);
}
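The sleep() helper used above is not shown in the original; a minimal version (my own sketch, assuming a plain millisecond delay with a 1-second default) could be:

```javascript
// Hypothetical helper assumed by the loop above:
// resolves after `ms` milliseconds (defaults to the 1-second gap described).
const sleep = (ms = 1000) => new Promise((resolve) => setTimeout(resolve, ms));
```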
This reduced the error rate, as well as the operation time, to 20 minutes. But the error still persisted.
I suspected that the error might be due to some permission issue in the Docker version, so I asked our server administrator to check and, if possible, remove the Docker version and install from the rpm repository:
https://download.keydb.dev/packages/rpm/centos7/x86_64/
He did, and it worked. All the errors vanished and we successfully managed to migrate in 20 minutes.
I don't consider this a real answer, but it should be useful for some experts to find out what was wrong.

Related

Exception from WebAuthn Authentication API with Yubikey

I'm working on adding WebAuthn support to a newly-minted web site and am running into a problem during the navigator.credentials.get() call. The client is Firefox 85.0 on Fedora 33. In case it matters, the server is Apache httpd on Fedora 33. The token is either a Yubikey 4 or a Yubikey 5NFC (the results are the same). This is the function making the API call. Obviously the credential IDs hard-coded here are for testing, not part of the final product:
function handleUserAuthenticationResponse(r) {
    var cid1 = {type: "public-key", id: base64ToArrayBuffer("gL0Ig10uA2tn8L0kn2L9FoGqIPSrqvc1lLBwgQhcVDa200b1P94kPv94T6O1bDZyYRrfLbTrLRsubDxuYUxHCg==")};
    var cid2 = {type: "public-key", id: base64ToArrayBuffer("tjW1RPqtAJm69I/qeV7eRFJx6h87J3NPeJ/hhbkjttcCc2BWHQ2v2nueoKBSGabw1eYsT8S+lhJv1l1mYWX+Uw==")};
    var options = {
        rpID: "http://localhost",
        challenge: base64ToArrayBuffer(r.challenge),
        allowCredentials: [cid1, cid2],
        timeout: 60000
    };
    if (!window.PublicKeyCredential) {
        throw new Error("Unable to access credentials interface");
    }
    navigator.credentials.get({"publicKey": options})
        .then(assertion => handleTokenAssertion(assertion))
        .catch(e => {console.log("Error fetching token assertion:", e);});
}
function base64ToArrayBuffer(base64) {
    var binary_string = window.atob(base64);
    var len = binary_string.length;
    var bytes = new Uint8Array(len);
    for (var i = 0; i < len; i++) {
        bytes[i] = binary_string.charCodeAt(i);
    }
    return bytes.buffer;
}
function handleTokenAssertion(a) {
    alert("Got an assertion!");
}
Everything seems to work, the Yubikey LED blinks, I press the touchpad, but then I get back an exception:
Error fetching token assertion: DOMException: An attempt was made to use an object that is not, or is no longer, usable
This seems to be a bit of a Firefox catch-all. It could indicate that the token doesn't match one of the allowCredentials[], or perhaps other things; it's hard to tell. The FIDO2 credential on the Yubikey was created with the fido2-cred(1) tool packaged with the libfido2 source. In this case the credentialId is from the fido2-cred -M output:
CuCEGL10uPhBmNCY4NsGaAz0gir/68UMGFQn0pfb6tc=
http://localhost
fido-u2f
WMSGBbi6CMINQvnkVRUYcYltDg3pgFlihvtzbRHuwBPipEEAAAAAAAAAAAAAAAAAAAAAAAAAAABAgL0Ig10uA2tn8L0kn2L9FoGqIPSrqvc1lLBwgQhcVDa200b1P94kPv94T6O1bDZyYRrfLbTrLRsubDxuYUxHCqUBAgMmIAEhWCA5itRRCBO0lnsztvPvI1waVZLBCZ1XMJjOvlN2oZmBCyJYILFaRjThs5Paj1sOp81iID1LpUBYHJhp4dizC0eI/RrE
gL0Ig10uA2tn8L0kn2L9FoGqIPSrqvc1lLBwgQhcVDa200b1P94kPv94T6O1bDZyYRrfLbTrLRsubDxuYUxHCg==
MEQCIFfs8PagKhNnDgzxfurVzdkTDVTT6ixKk0ak/2qrbSPUAiAf64w390rX1cyY58JgSC/Ac97w6TLcYKuqxOSn5lxV0g==
<long assertion certificate>
You can see the credentialId on line 5, and that it matches cid1 in the JavaScript function. Furthermore, if I request an assertion from the token using this credentialId and all else identical (except the challenge) with fido2-assert -G, everything works fine: I get the assertion and it verifies correctly using fido2-assert -V.
Without a more meaningful diagnostic it's hard to know what to try, so I thought I would ask here and see if anyone has any hints. Perhaps I've made some basic error with either the JavaScript or the credentials API?
Thanks!
UPDATE: One possibility I thought might be worth trying was removing the scheme from the RP ID but that made no difference.
UPDATE: Looking at the firefox source code, the error is apparently NS_ERROR_DOM_INVALID_STATE_ERR, which covers several different situations but in this case is most likely a translation of U2F_ERROR_INVALID_STATE (in dom/webauthn/U2FHIDTokenManager.h). U2F_ERROR_INVALID_STATE, in turn, is defined in third_party/rust/authenticator/src/u2fhid-capi.h as a simple numerical value (3), with no indication of where the value came from. Perhaps it's defined by the underlying HID driver for the Yubikey, but it's not clear what driver that corresponds to. The hunt continues...
It turns out that the problem was indeed the format of the relying party ID. Based on example code from the net (which may have worked with other browsers or versions of the code?), I initially used the full scheme://domain format for the rpID (so in my code above, http://localhost), but it turns out that what's needed is just the domain (localhost). Modifying the rpID in this way allows the assertion process to succeed.
Initially I thought this did not work, but it turned out that I'd simply forgotten to commit the change. Having belatedly done that, it works.
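For readers hitting the same error, the fix boils down to passing a bare domain as the relying-party ID (spelled rpId in the WebAuthn spec). A sketch of the corrected options, with placeholder values for everything not shown here:

```javascript
// Sketch of the corrected request options: the relying-party ID must be a
// bare domain, not a full origin with a scheme.
const options = {
  rpId: "localhost",              // was "http://localhost" — drop the scheme
  challenge: new Uint8Array(32),  // placeholder; real code uses the server's challenge
  allowCredentials: [],           // the credential descriptors shown above
  timeout: 60000
};
```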

AWS IoT Core - How to enable infinite reconnection after connection loss (MQTT)

I am using aws-iot-device-sdk-js-v2, primarily on a Raspberry Pi and, for testing purposes, on Windows 10.
I am connecting to AWS like this:
const client_bootstrap = new io.ClientBootstrap();
let config_builder = iot.AwsIotMqttConnectionConfigBuilder.new_mtls_builder_from_path(config.aws.cert_path, config.aws.key_path);
config_builder.with_certificate_authority_from_path(undefined, config.aws.root_ca_path);
config_builder.with_clean_session(false);
config_builder.with_client_id(config.aws.thing_name);
config_builder.with_endpoint(config.aws.iot_endpoint);
const mqttConfig = config_builder.build();
const client = new mqtt.MqttClient(client_bootstrap);
connection = client.new_connection(mqttConfig);
shadow = new iotshadow.IotShadowClient(connection);
thingName = config.aws.thing_name;
return connection.connect();
Everything works perfectly while there is a connection. If I disconnect from the internet, after some time (20-30 seconds) I get an interrupt event with the following error:
CrtError: aws-c-io: AWS_IO_SOCKET_CLOSED, socket is closed.
at MqttClientConnection._on_connection_interrupted (/home/mstupar/device/node_modules/aws-crt/dist/native/mqtt.js:334:32)
at /home/mstupar/device/node_modules/aws-crt/dist/native/mqtt.js:114:113 {
error: 1051,
error_code: 1051,
error_name: 'AWS_IO_SOCKET_CLOSED'
}
If I reconnect to the internet after this interrupt, I get an unhandled exception with the following stack trace:
################################################################################
Resolved stacktrace:
################################################################################
0x00007fc3193b781b: ?? ??:0
0x00000000000b1383: s_print_stack_trace at module.c:?
0x00000000000128a0: __restore_rt at ??:?
0x00007fc31c4bef47: ?? ??:0
0x00007fc31c4c08b1: ?? ??:0
node() [0x95c589]
node(napi_acquire_threadsafe_function+0x27) [0x9cf1e7]
0x00007fc3192f1931: ?? ??:0
0x00000000000b4306: s_on_connection_resumed at mqtt_client_connection.c:?
0x00000000000de34c: s_packet_handler_connack at client_channel_handler.c:?
0x00000000000dee9e: s_process_read_message at client_channel_handler.c:?
0x000000000011269d: s_s2n_handler_process_read_message at s2n_tls_channel_handler.c:?
0x0000000000113bbe: s_do_read at socket_channel_handler.c:?
0x00000000001141f2: s_on_readable_notification at socket_channel_handler.c:?
0x0000000000110a5a: s_on_socket_io_event at socket.c:?
0x000000000010bb9a: s_main_loop at epoll_event_loop.c:?
0x0000000000177c1b: thread_fn at thread.c:?
0x00000000000076db: start_thread at ??:?
0x00007fc31c5a1a3f: ?? ??:0
################################################################################
Raw stacktrace:
################################################################################
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(aws_backtrace_print+0x4b) [0x7fc3193b781b]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0xb1383) [0x7fc3192f1383]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x128a0) [0x7fc31c8928a0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7) [0x7fc31c4bef47]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x141) [0x7fc31c4c08b1]
node() [0x95c589]
node(napi_acquire_threadsafe_function+0x27) [0x9cf1e7]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(aws_napi_queue_threadsafe_function+0x11) [0x7fc3192f1931]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0xb4306) [0x7fc3192f4306]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0xde34c) [0x7fc31931e34c]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0xdee9e) [0x7fc31931ee9e]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0x11269d) [0x7fc31935269d]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0x113bbe) [0x7fc319353bbe]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0x1141f2) [0x7fc3193541f2]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0x110a5a) [0x7fc319350a5a]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0x10bb9a) [0x7fc31934bb9a]
/home/mstupar/device/node_modules/aws-crt/dist/bin/linux-x64/aws-crt-nodejs.node(+0x177c1b) [0x7fc3193b7c1b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7fc31c8876db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7fc31c5a1a3f]
Is it possible to avoid AWS_IO_SOCKET_CLOSED with additional configuration?
How should I properly reconnect? Is there some automatic way, like socket.io has?
The device should be able to run for at least a day without a restart, and multiple connection losses may occur in that time. Restarting the process with systemd is not a good solution for this case, because I am running other modules in the same process that should not be affected by a connection loss.
Is there any additional configuration needed for MqttClientConnection?
Should I call some function from the interrupt handler to resume the connection manually?
Thanks in advance
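No answer is recorded here, but one common pattern (my own sketch, not an aws-iot-device-sdk-js-v2 API) is to listen for the connection's interrupt event and, if the SDK's built-in resume never fires, retry connect() yourself with exponential backoff. The delay schedule is easy to isolate and test on its own:

```javascript
// Generic exponential backoff with a cap: attempt 0 → baseMs, doubling each
// retry, never exceeding maxMs. Wire this to connection.connect() retries.
// (Helper names and defaults are mine, not part of the SDK.)
function backoffDelay(attempt, baseMs = 1000, maxMs = 60000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```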

firebase functions get error stream removed onCreate firestore event

I'm worried about this error in Firebase Functions.
I have a SendGrid dispatch in this function:
exports.mailDealings = functions.firestore
  .document('dealings/current')
  .onCreate(event => {
    // send mail
    const msg = {...};
    // attach the catch to the send itself and return the promise
    // (onCreate() returns a CloudFunction, not a promise, so it has no .catch)
    return sgMail.send(msg)
      .catch(result => {
        console.error("sendgrid error", result);
      });
  });
I was able to execute this successfully before, but on one call the function gave me the error below:
{ Error: Stream removed
at ClientReadableStream._emitStatusIfDone (/user_code/node_modules/firebase-admin/node_modules/grpc/src/client.js:255:19)
at ClientReadableStream._receiveStatus (/user_code/node_modules/firebase-admin/node_modules/grpc/src/client.js:233:8)
at /user_code/node_modules/firebase-admin/node_modules/grpc/src/client.js:705:12 code: 2, metadata: Metadata { _internal_repr: {} } }
There should be an automatic retry for this error, right?
Or at least a method to make this easy, like result.retry(1000)?
Same problem here (in many functions, and randomly) for the last 3-4 days.
Apparently it disappears after a deploy... for a few minutes.
Error: Stream removed
at ClientReadableStream._emitStatusIfDone (/user_code/node_modules/firebase-admin/node_modules/grpc/src/client.js:255:19)
at ClientReadableStream._receiveStatus (/user_code/node_modules/firebase-admin/node_modules/grpc/src/client.js:233:8)
at /user_code/node_modules/firebase-admin/node_modules/grpc/src/client.js:705:12
From the Google Groups discussion on this bug:
Hello all, Sebastian from the Firestore SDK team here. We believe this
issue is related to the recent update of the GRPC Client SDK and have
been running tests with GRPC 1.7.1. So far, we have not been able to
reproduce this issue with this older GRPC version.
@google-cloud/firestore is now at 0.10.1. If you update your
dependencies, you will be able to pull in this release.
Thanks for your patience.
Sebastian
This seems to have fixed the issue for me!
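If anyone wants to pull in the release mentioned above explicitly, something like this in package.json should do it (the version number is the one quoted in the Google Groups post; treat the exact range as an assumption):

```json
{
  "dependencies": {
    "@google-cloud/firestore": "^0.10.1"
  }
}
```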

How can I properly end or destroy a Speaker instance without getting `illegal hardware instruction` error?

I have a script for playing a remote mp3 source through the Speaker module, which is working fine. However, if I want to stop playing the mp3 stream, I encounter two issues:
If I stop streaming the remote source, e.g. by calling stream.pause() as in the code below, then stdout is flooded with a warning:
[../deps/mpg123/src/output/coreaudio.c:81] warning: Didn't have any audio data in callback (buffer underflow)
The warning in itself makes sense, because I'm not providing it with any data anymore, but it outputs it very frequently, which is a big issue because I want to use it in a CLI app.
If I attempt to end the speaker by calling speaker.end() as in the code below, then I get the following error:
[1] 8950 illegal hardware instruction node index.js
I have been unable to find anything regarding this error besides some Raspberry Pi threads, and I'm quite confused as to what is causing the illegal hardware instruction.
How can I properly handle this issue? Can I pipe the buffer-underflow warning to something like /dev/null, or is that a poor way of handling it? Or can I end/destroy the speaker instance in another way?
I'm running Node v7.2.0, npm v4.0.3 and Speaker v0.3.0 on macOS v10.12.1.
const request = require('request')
const lame = require('lame')
const Speaker = require('speaker')
var decoder = new lame.Decoder()
var speaker = new Speaker()
decoder.pipe(speaker)
var req = request.get(url_to_remote_mp3)
var stream = req.pipe(decoder)
setTimeout(() => {
  stream.pause()
  stream.unpipe()
  speaker.end()
}, 1000)
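On the /dev/null idea above: assuming the underflow warning is written to stderr (it comes from the native mpg123 code, so this is a guess), a blunt workaround is to silence stderr while keeping stdout for the CLI output:

```shell
# Hide the repeated buffer-underflow warning (assumes it goes to stderr);
# stdout, where the CLI output lives, is left untouched.
node index.js 2>/dev/null
```

This hides all stderr output, including real errors, so it is a stopgap rather than a fix.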
I think the problem might be in coreaudio.c (guesswork here). It seems like the buffer's written count gets large when you flush it. My best guess is that the usleep() time in write_coreaudio() is miscalculated after the buffer is flushed: the program may be sleeping too long, so the buffer gets too big to play.
It is unclear to me why the usleep time is calculated the way it is.
Anyway... this worked for me:
Change node_modules/speaker/deps/mpg123/src/output/coreaudio.c:258
- usleep( (FIFO_DURATION/2) * 1000000 );
+ usleep( (FIFO_DURATION/2) * 100000 );
Then in the node_modules/speaker/ directory, run:
node-gyp build
And you should be able to use:
const request = require('request')
const lame = require('lame')
const Speaker = require('speaker')
const url = 'http://aasr-stream.iha.dk:9870/dtr.mp3'
const decoder = new lame.Decoder()
let speaker = new Speaker()
decoder.pipe(speaker)
const req = request.get(url)
var stream = req.pipe(decoder)
setTimeout(() => {
  console.log('Closing Speaker')
  speaker.close()
}, 2000)
setTimeout(() => {
  console.log('Closing Node')
}, 4000)
... works for me (no errors, stops when expected).
I hope it works for you too.
You will want to use GitHub issues for that module and the library it uses, to make sure no one else has already solved the problem; if not, open a bug describing it so other module users can reference it, since that is the first place they will look.
A "good" way of handling it will probably involve modifying the native part of the speaker module, unless you just have a sound-library version mismatch.

Node.js, Cygwin and Socket.io walk into a bar... Node.js throws ENOBUFS and everyone dies

I'm hoping someone here can help me out; I'm not having much luck figuring this out myself. I'm running node.js version 0.3.1 on Cygwin. I'm using Connect and Socket.io. I seem to be having some random problems with DNS or something; I haven't quite figured it out. The end result is that the server runs fine, but when a browser attempts to connect to it the initial HTTP request works, Socket.io connects, and then the server dies (output below).
I don't think it has anything to do with the HTTP request, because the server gets a lot of data posted to it, and it was receiving requests and responding right up until the connection that killed it. I've googled around, and the closest thing I've found is DNS being set improperly. It's a network program meant to run only on an internal network, so I've set nameserver x.x.x.x in my /etc/resolv.conf to the internal DNS. I've also added nameserver 8.8.8.8 in addition. I'm not sure what else to check, but would be grateful for any help.
In node.exe.stackdump
Exception: STATUS_ACCESS_VIOLATION at eip=610C51B9
eax=00000000 ebx=00000001 ecx=00000000 edx=00000308 esi=00000000 edi=010FCCB0
ebp=010FCAEC esp=010FCAC4 program=\\?\E:\cygwin\usr\local\bin\node.exe, pid 3296, thread unknown (0xBEC)
cs=0023 ds=002B es=002B fs=0053 gs=002B ss=002B
Stack trace:
Frame Function Args
010FCAEC 610C51B9 (00000000, 00000000, 00000000, 00000000)
010FCBFC 610C5B55 (00000000, 00000000, 00000000, 00000000)
010FCCBC 610C693A (FFFFFFFF, FFFFFFFF, 750334F3, FFFFFFFE)
010FCD0C 61027CB2 (00000002, F4B994D5, 010FCE64, 00000002)
010FCD98 76306B59 (00000002, 010FCDD4, 763069A4, 00000002)
End of stack trace
Node Output:
node.js:50
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: ENOBUFS, No buffer space available
at doConnect (net.js:642:19)
at net.js:803:9
at dns.js:166:30
at IOWatcher.callback (dns.js:48:15)
EDIT
I'm hitting an LDAP server using http.createClient immediately after a client connects to get information, and that seems to be where the ENOBUFS problem originates. I've edited the source to include && errno != ENOBUFS, which now prevents the server from dying; however, the LDAP request now isn't working, and I'm not sure what problem would cause that. As I mentioned, this is an internal-only application, so I set the DNS servers in /etc/resolv.conf to the DNS servers that are applied to the host machine. Not sure if this is part of the issue?
EDIT 2
Here's some output from gdb --args ./node_g --debug ../myscript.js. I'm not sure if this is related to ENOBUFS, however, as it seems to disconnect immediately after connecting with Socket.io.
[New thread 672.0x100]
Error: dll starting at 0x76e30000 not found.
Error: dll starting at 0x76250000 not found.
Error: dll starting at 0x76e30000 not found.
Error: dll starting at 0x76f50000 not found.
[New thread 672.0xc90]
[New thread 672.0x448]
debugger listening on port 5858
[New thread 672.0xbf4]
14 Jan 18:48:57 - socket.io ready - accepting connections
[New thread 672.0xed4]
[New thread 672.0xd68]
[New thread 672.0x1244]
[New thread 672.0xf14]
14 Jan 18:49:02 - Initializing client with transport "websocket"
assertion "b[1] == 0" failed: file "../src/node.cc", line 933, function: ssize_t
node::DecodeWrite(char*, size_t, v8::Handle<v8::Value>, node::encoding)
Program received signal SIGABRT, Aborted.
0x7724f861 in ntdll!RtlUpdateClonedSRWLock ()
from /cygdrive/c/Windows/system32/ntdll.dll
(gdb) backtrace
#0 0x7724f861 in ntdll!RtlUpdateClonedSRWLock ()
from /cygdrive/c/Windows/system32/ntdll.dll
#1 0x7724f861 in ntdll!RtlUpdateClonedSRWLock ()
from /cygdrive/c/Windows/system32/ntdll.dll
#2 0x75030816 in WaitForSingleObjectEx ()
from /cygdrive/c/Windows/syswow64/KernelBase.dll
#3 0x0000035c in ?? ()
#4 0x00000000 in ?? ()
(gdb)
OK, I dug around a bit, and after your second edit I found this bug in the issue list.
It doesn't state whether this was encountered under Cygwin or not, but the error it hits leads down to this piece of code:
uint16_t* twobytebuf = new uint16_t[buflen];
str->Write(twobytebuf, 0, buflen, String::HINT_MANY_WRITES_EXPECTED);
for (size_t i = 0; i < buflen; i++) {
  unsigned char *b = reinterpret_cast<unsigned char*>(&twobytebuf[i]);
  assert(b[1] == 0); // this assertion fails
  buf[i] = b[0];
}
From what I can read (with my rusty C), it creates a new uint16_t array, writes the contents of the V8 string into it, and then asserts that the cast did not produce any values outside the range 0-255; that is exactly the assertion that fails here.
I couldn't find anything regarding whether this is a V8 issue or not.
Since the code was added in this commit, the only thing I can suggest is to try pulling the tree from a commit before the code was added, since all versions after that contain the crashing code.
If that works, I would recommend you file another bug report on the Node.js issue list, although I may do this later today.
Very hard to answer this one but +1 for the subject line.
Node.js comes with a test suite along with the main build; have you run it? I had built Node successfully, but because I'd omitted OpenSSL my WebSocket tests were failing. Installing it and rebuilding fixed that. The test projects helped me diagnose the issue.
I suggest doing "make test", as http://nodejs.org/#download describes.
