Hopefully this sheds some light on why I designed the protocol around channels.
Channel multiplexing lets logically separate concerns share a single encrypted session without the overhead of establishing multiple connections. The main use
cases:
Prioritization and QoS
- High-priority control messages (e.g. stop/pause commands) on a dedicated channel so they aren't queued behind large data payloads
- Real-time telemetry on one channel, bulk file transfer on another — each can have independent flow control
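A minimal sketch of the prioritization idea, assuming a hypothetical sender that keeps one queue per channel and drains them in priority order (the class and method names here are illustrative, not wiresocket's API):

```python
from collections import deque

# Hypothetical sender: per-channel queues drained in priority order, so a
# control frame (channel 0) never waits behind bulk data (channel 1).
class PrioritySender:
    def __init__(self, priority_order):
        self.priority_order = priority_order            # e.g. [0, 1]: channel 0 wins
        self.queues = {ch: deque() for ch in priority_order}

    def enqueue(self, channel, payload):
        self.queues[channel].append(payload)

    def next_frame(self):
        # Always pick the highest-priority non-empty channel.
        for ch in self.priority_order:
            if self.queues[ch]:
                return ch, self.queues[ch].popleft()
        return None

s = PrioritySender([0, 1])
s.enqueue(1, b"big-data-chunk")      # queued first...
s.enqueue(0, b"STOP")                # ...but the control frame jumps ahead
```

Even though the data chunk was enqueued first, `next_frame()` returns the channel-0 control frame before it.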
Topic / stream isolation
- Pub/sub: each topic maps to a channel; subscribers receive only what they're interested in without receiver-side filtering overhead
- Sensor feeds: temperature on ch 1, GPS on ch 2, video frames on ch 3 — consumers subscribe selectively
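The topic-to-channel mapping can be sketched as a receiver-side demultiplexer; this is an assumed shape, not wiresocket's actual interface:

```python
# Hypothetical demultiplexer: each topic is bound to one channel ID, so a
# subscriber receives only its topic with no payload inspection or filtering.
class Demux:
    def __init__(self):
        self.handlers = {}                  # channel ID -> callback

    def subscribe(self, channel, handler):
        self.handlers[channel] = handler

    def on_frame(self, channel, payload):
        handler = self.handlers.get(channel)
        if handler:                         # unsubscribed channels are skipped
            handler(payload)

readings = []
d = Demux()
d.subscribe(1, readings.append)             # temperature on channel 1
d.on_frame(1, b"21.5C")                     # delivered
d.on_frame(2, b"gps-fix")                   # not subscribed: ignored
```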
Request/response multiplexing
- Interleave multiple concurrent RPC calls over one session — each call gets its own channel ID to match replies to requests, similar to HTTP/2 streams
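One way to sketch the reply-matching mechanic, assuming a hypothetical RPC layer in which each in-flight call claims a free channel ID and the reply comes back tagged with the same ID:

```python
# Hypothetical RPC multiplexer: a call claims a channel ID from a free pool;
# the reply carries that ID back, matching responses to requests much like
# HTTP/2 stream IDs.
class RpcMux:
    def __init__(self, channels=range(1, 255)):
        self.free = list(channels)
        self.pending = {}                   # channel ID -> method name

    def call(self, method):
        ch = self.free.pop(0)               # claim a channel for this call
        self.pending[ch] = method
        return ch                           # the request frame is tagged with ch

    def on_reply(self, ch, payload):
        method = self.pending.pop(ch)
        self.free.append(ch)                # channel ID is reusable afterward
        return method, payload

mux = RpcMux()
a = mux.call("get_user")
b = mux.call("get_orders")                  # two concurrent calls, distinct IDs
```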
Backpressure isolation
- A slow consumer on channel 3 doesn't block or drop events on channel 1 — each channel has its own buffer
- Wiresocket's per-channel event buffers mean a full channel drops or blocks on its own, without stalling the other channels
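The isolation property can be demonstrated with a toy model of per-channel bounded buffers; the drop-on-full semantics here are an assumption for illustration, not a description of wiresocket's internals:

```python
from collections import deque

# Toy model: each channel gets its own bounded buffer. When channel 3 fills,
# only channel 3 drops; channel 1 keeps accepting events.
class ChannelBuffers:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = {}

    def push(self, channel, event):
        buf = self.buffers.setdefault(channel, deque())
        if len(buf) >= self.capacity:
            return False                    # drop: only this channel is affected
        buf.append(event)
        return True

bufs = ChannelBuffers(capacity=2)
bufs.push(3, "a"); bufs.push(3, "b")        # channel 3 now full
dropped = not bufs.push(3, "c")             # slow consumer: event dropped
ok = bufs.push(1, "x")                      # channel 1 is unaffected
```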
Versioned or feature-gated streams
- Send on channel 0 for v1 clients, channel 1 for v2 clients — a server can maintain both protocols simultaneously without separate sockets
Lifecycle decoupling
- A control plane (handshake, session management) and data plane (event stream) on separate channels — the control channel can signal teardown without racing with
in-flight data
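The control/data split can be sketched as follows, assuming a teardown message on the control channel that marks the session as closing while in-flight data frames still land (the frame names are illustrative):

```python
# Hypothetical session: a TEARDOWN frame on the control channel flags shutdown
# without discarding data frames that are already in flight on other channels.
class Session:
    def __init__(self, control_channel=0):
        self.control_channel = control_channel
        self.closing = False
        self.delivered = []

    def on_frame(self, channel, payload):
        if channel == self.control_channel and payload == b"TEARDOWN":
            self.closing = True             # signal shutdown, don't drop data
        else:
            self.delivered.append(payload)

s = Session()
s.on_frame(0, b"TEARDOWN")                  # control plane signals teardown
s.on_frame(1, b"last-event")                # in-flight data still delivered
```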
Security boundaries
- Different trust levels or tenants sharing a session can be isolated by channel — the application enforces which channels a principal may read/write without
separate TLS termination
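An application-level enforcement sketch, assuming a simple per-principal ACL of readable and writable channel sets (the tenant names and table shape are invented for the example):

```python
# Hypothetical per-channel ACL: the application checks the table before
# delivering or accepting a frame, so tenants share one encrypted session
# without separate TLS endpoints.
ACL = {
    "tenant-a": {"read": {1, 2}, "write": {1}},
    "tenant-b": {"read": {3},    "write": {3}},
}

def may(principal, op, channel):
    return channel in ACL.get(principal, {}).get(op, set())

allowed = may("tenant-a", "read", 2)        # tenant-a may read channel 2
denied = may("tenant-b", "write", 1)        # tenant-b may not write channel 1
```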
In wiresocket specifically, the design is intentionally minimal: 256 channels (0–255), channel 255 reserved for internal close signals. The application layer owns
the semantics of each channel ID, keeping the protocol unopinionated about how multiplexing is used.
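The 0-255 channel space fits in a single byte, which suggests framing like the following; this layout is illustrative only, and wiresocket's actual wire format may differ:

```python
RESERVED_CLOSE = 255    # channel 255 reserved for internal close signals

# Illustrative framing: a 1-byte channel ID prefix covers the 0-255 space,
# with the reserved close channel rejected for application use.
def encode_frame(channel, payload):
    if not 0 <= channel <= 255:
        raise ValueError("channel must fit in one byte")
    if channel == RESERVED_CLOSE:
        raise ValueError("channel 255 is reserved for close signals")
    return bytes([channel]) + payload

def decode_frame(frame):
    return frame[0], frame[1:]

frame = encode_frame(7, b"hello")
```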