Replies: 2 comments 2 replies
-
Backpressure in uWS is a linear allocation per socket, so if you shove something like 1 GB in there, it's going to be slower than your queue. But for small messages like in signalling, it probably performs better. I would keep both solutions and benchmark them in production to see if there are significant differences. The problem with your queue solution is that it will slow everything down via the GC.
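The tradeoff above (uWS's per-socket buffer vs. a JS-side queue that feeds the GC) can be sketched without the library. This is a minimal illustration, not the uWS API: `FakeSocket` is a hypothetical stand-in whose `send()` returns false once buffered bytes exceed the backpressure limit, and `drain(n)` simulates the kernel flushing bytes (where uWS would fire the `drain` handler).

```javascript
// Sketch of a JS-side overflow queue in front of a socket with bounded
// backpressure. FakeSocket is a hypothetical stand-in for a uWS WebSocket.
class FakeSocket {
  constructor(maxBackpressure) {
    this.maxBackpressure = maxBackpressure;
    this.buffered = 0;
    this.sent = [];
  }
  getBufferedAmount() { return this.buffered; }
  send(msg) {
    // Refuse the write once it would exceed the backpressure limit.
    if (this.buffered + msg.length > this.maxBackpressure) return false;
    this.buffered += msg.length;
    this.sent.push(msg);
    return true;
  }
  // Simulate the kernel draining n bytes (uWS would then call drain()).
  drain(n) { this.buffered = Math.max(0, this.buffered - n); }
}

// Overflow queue: push on failed send, flush on drain. The queued copies
// are exactly the allocations that create GC pressure at high message rates.
function sendOrQueue(sock, queue, msg) {
  if (!sock.send(msg)) queue.push(msg);
}
function flushQueue(sock, queue) {
  while (queue.length && sock.send(queue[0])) queue.shift();
}

const sock = new FakeSocket(8); // 8-byte backpressure limit
const queue = [];
sendOrQueue(sock, queue, "aaaa"); // fits (buffered = 4)
sendOrQueue(sock, queue, "bbbb"); // fits (buffered = 8)
sendOrQueue(sock, queue, "cccc"); // over limit, so it is queued
sock.drain(8);                    // socket flushed by the "kernel"
flushQueue(sock, queue);          // queued message now goes out
```

With small signalling messages the queue stays short and rarely fills, which is why benchmarking in production is the honest way to compare the two designs.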
-
I have no idea, because I have no idea what kind of signalling you are doing. It's TCP based, so it follows TCP's first-in-first-out ordering. SHARED_COMPRESSOR is more involved than I want to repeat here; you can read https://deepwiki.com/uNetworking/uWebSockets/5.2-websocket-compression
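For orientation, compression is selected per route in the `ws` options. A hedged sketch of what enabling the shared compressor looks like, assuming the documented uWebSockets.js options (this is a configuration fragment and needs the native `uWebSockets.js` addon to actually run):

```javascript
// Sketch only: requires the uWebSockets.js native addon to run.
const uWS = require('uWebSockets.js');

uWS.App().ws('/*', {
  // SHARED_COMPRESSOR shares one deflate sliding window across all sockets,
  // trading compression ratio for much lower per-socket memory than the
  // DEDICATED_COMPRESSOR variants.
  compression: uWS.SHARED_COMPRESSOR,
  maxPayloadLength: 4 * 1024,
  idleTimeout: 30,
  message: (ws, msg, isBinary) => {
    // Third argument opts this frame into compression.
    ws.send(msg, isBinary, true);
  }
}).listen(9001, () => {});
```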
-
We are implementing a signaling system. Signal and parameter sizes range from 50 bytes to 4 KB.
Two implementations; which one is better?
const app = uWS.App({
}).ws('/*', {
  compression: uWS.DISABLED,
  maxBackpressure: 5 * 1024, // 5 KB
  idleTimeout: 30,
  open: (ws) => {
    ws.queue = [];
  },
  message: (ws, msg, isBinary) => {
    const ok = ws.send(msg, isBinary);
    if (!ok) {
      // msg's ArrayBuffer is only valid during this callback,
      // so copy it before queueing.
      ws.queue.push({ data: msg.slice(0), isBinary });
    }
  },
  drain: (ws) => {
    while (ws.queue.length && ws.getBufferedAmount() < 5 * 1024) {
      const { data, isBinary } = ws.queue.shift();
      if (!ws.send(data, isBinary)) {
        ws.queue.unshift({ data, isBinary });
        break;
      }
    }
  }
});
const app = uWS.App({
}).ws('/*', {
  compression: uWS.DISABLED,
  maxBackpressure: 5 * 1024, // 5 KB
  idleTimeout: 30,
  open: (ws) => {
  },
  message: (ws, msg, isBinary) => {
    if (ws.getBufferedAmount() < 5 * 1024) {
      ws.send(msg, isBinary);
    }
    // otherwise the message is dropped
  },
  drain: (ws) => {
  }
});
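The second variant silently drops anything that arrives while the buffer is over the limit, which is usually acceptable for telemetry but risky for signalling, where a lost signal can desynchronize peers. A tiny pure-JS illustration of that drop policy (again using a hypothetical `FakeSocket` in place of a uWS WebSocket):

```javascript
// Illustrates the drop-when-full policy of the second variant.
// FakeSocket is a hypothetical stand-in for a uWS WebSocket.
const LIMIT = 5 * 1024;

class FakeSocket {
  constructor() { this.buffered = 0; this.sent = 0; this.dropped = 0; }
  getBufferedAmount() { return this.buffered; }
  send(msg) { this.buffered += msg.length; this.sent++; return true; }
}

function forwardOrDrop(sock, msg) {
  if (sock.getBufferedAmount() < LIMIT) {
    sock.send(msg);
  } else {
    sock.dropped++; // message silently lost
  }
}

const sock = new FakeSocket();
const msg = "x".repeat(4 * 1024); // a 4 KB signal, the upper end above
forwardOrDrop(sock, msg);         // 0 < 5 KB: sent
forwardOrDrop(sock, msg);         // 4 KB < 5 KB: sent
forwardOrDrop(sock, msg);         // 8 KB >= 5 KB: dropped
```

If dropped signals are unacceptable, the first variant's queue (or closing the socket when the queue itself grows too large) is the safer policy.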