Real-time data streaming

2.4M messages per second.
Zero operational pain.

Conduit moves events between every system in your company in under 3ms. Exactly-once delivery, partition ordering, schema evolution — without the JVM, ZooKeeper, or a 47-page tuning guide.

$ conduit benchmark --producers 8 --msg-size 1KB
Running throughput benchmark...
Warming up brokers (3s)...
Throughput: 2,412,847 msg/sec
Latency p50: 0.8ms
Latency p99: 2.7ms
Latency p999: 4.1ms
Memory: 312MB (10x less than Kafka)
CPU cores: 1
✓ Exactly-once verified | 0 duplicates in 50M messages
$
<3ms
p99 end-to-end latency
2.4M/s
Messages per second per core
10x
Less memory than Kafka
0
Dependencies. Single binary.
Performance

Benchmarks don't lie

Tested on a single c6g.2xlarge instance (8 vCPUs, 16GB RAM). 1KB messages, 8 producers, 3 consumers, replication factor 3. Numbers are medians across 10 runs.

Throughput:
Conduit: 2.4M/s
Kafka: 865K/s
Pulsar: 580K/s
RabbitMQ: 285K/s

Latency (p99):
Conduit: 2.7ms
Kafka: 18ms
Pulsar: 24ms
RabbitMQ: 33ms
Conduit vs Kafka — detailed comparison
Throughput per core: Conduit 2.4M msg/s vs Kafka 108K msg/s
Memory at 1M msg/s: Conduit 312 MB vs Kafka 3.2 GB
Cold start: Conduit 1.2s vs Kafka 28s
Binary size: Conduit 24 MB vs Kafka install 380 MB

Five lines to your first stream

Native Go and Rust clients. Strong types, schema validation built in, zero reflection magic.

Go
package main

import conduit "conduit-io/go-client"

func main() {
  client := conduit.Connect("localhost:9092")
  defer client.Close()

  // Produce with exactly-once semantics
  client.Produce("payments", &conduit.Message{
    Key:   []byte("order-4821"),
    Value: []byte(`{"amount":99.50,"currency":"USD"}`),
  })

  // Consume with automatic offset management
  consumer := client.Subscribe("payments", conduit.ConsumerOpts{
    Group:       "billing-service",
    StartOffset: conduit.Latest,
  })

  for msg := range consumer.Messages() {
    process(msg)
    msg.Ack()
  }
}
Rust
use conduit_client::{Client, Message, Offset, Result};

// An async runtime is required; tokio shown here.
#[tokio::main]
async fn main() -> Result<()> {
    let client = Client::connect("localhost:9092").await?;

    // Produce with exactly-once semantics
    client.produce("payments", Message {
        key: "order-4821".into(),
        value: r#"{"amount":99.50,"currency":"USD"}"#
            .into(),
    }).await?;

    // Consume with automatic offset management
    let mut consumer = client
        .subscribe("payments")
        .group("billing-service")
        .start_offset(Offset::Latest)
        .build().await?;

    while let Some(msg) = consumer.next().await {
        process(&msg).await?;
        msg.ack().await?;
    }
    Ok(())
}
How it works

Event flow, end to end

Single binary. No JVM. No ZooKeeper. Raft consensus built in.

01
Produce
Clients send messages to topic partitions. Conduit batches writes to the commit log with zero-copy semantics. Messages are durable the moment the producer gets an ack.
producer → partition → commit log → ack
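The batching step above can be sketched in a few lines of Go. This is a minimal illustration, not Conduit's actual commit-log code; the `Message` and `batcher` types are hypothetical:

```go
package main

import "fmt"

// Message is a hypothetical stand-in for a produced record.
type Message struct {
	Key, Value []byte
}

// batcher groups messages so the broker issues one commit-log
// append per batch instead of one write per message.
type batcher struct {
	pending []Message
	limit   int
	flushed [][]Message // each element is one commit-log append
}

func (b *batcher) add(m Message) {
	b.pending = append(b.pending, m)
	if len(b.pending) >= b.limit {
		b.flush()
	}
}

func (b *batcher) flush() {
	if len(b.pending) == 0 {
		return
	}
	// Once the append returns, every message in the batch is durable
	// and the producer gets its ack.
	b.flushed = append(b.flushed, b.pending)
	b.pending = nil
}

func main() {
	b := &batcher{limit: 3}
	for i := 0; i < 7; i++ {
		b.add(Message{Key: []byte(fmt.Sprintf("k%d", i))})
	}
	b.flush() // drain the tail
	fmt.Println(len(b.flushed), "commit-log writes for 7 messages")
}
```

Fewer, larger writes are what make batched acks cheap: seven messages cost three appends here, not seven.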
02
Replicate
Raft consensus replicates across brokers. No ZooKeeper, no external coordination. Leader election in <200ms. ISR tracking is automatic.
leader → raft → follower-1, follower-2
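The durability rule behind Raft replication is a majority quorum. A tiny sketch of that arithmetic (illustrative only, not broker code):

```go
package main

import "fmt"

// quorum returns how many acknowledgements a Raft leader must
// collect (counting itself) before a write is committed.
func quorum(replicas int) int {
	return replicas/2 + 1
}

// committed reports whether a write with the given ack count is durable.
func committed(acks, replicas int) bool {
	return acks >= quorum(replicas)
}

func main() {
	// With replication_factor = 3, the leader plus one follower suffice;
	// a lone leader's ack is not enough.
	fmt.Println(quorum(3), committed(2, 3), committed(1, 3))
}
```

This is why a replication factor of 3 tolerates one broker loss without losing committed writes.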
03
Consume
Consumer groups with cooperative rebalancing. Partition assignment in <50ms. Offsets committed atomically. No stop-the-world pauses.
topic → consumer group → partitions[] → process
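Partition assignment within a consumer group can be pictured as a round-robin spread. A simplified sketch, standing in for the cooperative rebalancer rather than reproducing it:

```go
package main

import "fmt"

// assign spreads partitions across group members round-robin,
// a simplified stand-in for cooperative rebalancing.
func assign(partitions int, members []string) map[string][]int {
	out := make(map[string][]int)
	for p := 0; p < partitions; p++ {
		m := members[p%len(members)]
		out[m] = append(out[m], p)
	}
	return out
}

func main() {
	// Six partitions over three billing-service instances:
	// each member owns exactly two.
	fmt.Println(assign(6, []string{"billing-1", "billing-2", "billing-3"}))
}
```

The cooperative part of real rebalancing is that only the partitions that change owner are revoked, so the rest keep consuming with no stop-the-world pause.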
04
Tiered Storage
Hot data on NVMe. Cold data on S3-compatible storage. Seamless reads across tiers. Retention policies per topic.
NVMe (hot) | S3 (cold) | auto-tier
05
Schema Registry
Built-in schema registry. Avro, Protobuf, JSON Schema. Forward and backward compatibility checks on every produce call.
schema.avro → validate → evolve → compat
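Backward compatibility means new readers can still decode old data. A deliberately simplified sketch of that check; real Avro/Protobuf resolution also covers types, defaults, and promotions:

```go
package main

import "fmt"

// backwardCompatible: a new schema can read old records only if every
// field it requires already existed in the old schema. Simplified on
// purpose — field names only, no type rules.
func backwardCompatible(oldFields, newRequired []string) bool {
	have := make(map[string]bool)
	for _, f := range oldFields {
		have[f] = true
	}
	for _, f := range newRequired {
		if !have[f] {
			return false // old records would be missing this field
		}
	}
	return true
}

func main() {
	old := []string{"amount", "currency"}
	fmt.Println(backwardCompatible(old, []string{"amount"}))           // true
	fmt.Println(backwardCompatible(old, []string{"amount", "region"})) // false
}
```

Running a check like this on every produce call is what lets a topic's schema evolve without breaking existing consumers.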
06
Exactly-Once
Idempotent producers + transactional consumers. No duplicates. No data loss. Verified: zero duplicates across 50M messages.
txn.begin → produce → consume → txn.commit
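The idempotent-producer half of exactly-once boils down to per-producer sequence numbers. A minimal sketch of the broker-side dedup, with hypothetical names:

```go
package main

import "fmt"

// dedupe sketches idempotent produce: the broker remembers the highest
// sequence number accepted per producer and drops replays, so a retried
// batch never lands in the log twice.
type dedupe struct {
	lastSeq map[string]int64 // producer ID -> last accepted sequence
}

func (d *dedupe) accept(producer string, seq int64) bool {
	if seq <= d.lastSeq[producer] {
		return false // duplicate or stale retry: drop it
	}
	d.lastSeq[producer] = seq
	return true
}

func main() {
	d := &dedupe{lastSeq: make(map[string]int64)}
	fmt.Println(d.accept("p1", 1)) // true
	fmt.Println(d.accept("p1", 2)) // true
	fmt.Println(d.accept("p1", 2)) // false: retried batch, dropped
}
```

Transactional consumers close the loop on the other side by committing offsets atomically with the processing results.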

One file. That's the whole config.

Sensible defaults for everything. Override only what you need. No XML, no YAML sprawl, no 47-page tuning guide.

Hot reload. Change config without restarting brokers. Topic-level overrides take effect in <100ms.
Typed validation. Config errors caught at startup, not at 3am in production. Every field documented inline.
Environment overrides. CONDUIT_BROKER_PORT=9092 works everywhere. Twelve-factor ready.
Zero dependencies. Single binary. Download, set bind address, start streaming. No JVM, no ZooKeeper, no operator degree.
conduit.toml
# Conduit broker configuration
# Most values have sensible defaults — override only what you need

[broker]
bind = "0.0.0.0:9092"
data_dir = "/var/lib/conduit"
node_id = 1

[cluster]
seeds = ["10.0.1.1:9092", "10.0.1.2:9092"]
replication_factor = 3

[storage]
segment_size = "512MB"
retention = "7d"

[storage.tiered]
enabled = true
backend = "s3"
bucket = "conduit-archive"
tier_after = "24h"

[schema_registry]
enabled = true
compatibility = "backward"

[observability]
metrics_port = 9093
prometheus = true
Observability

Know everything. Fix nothing.

Built-in Prometheus metrics, structured logs, distributed tracing. The dashboard you wished Kafka had.

Throughput — payments Healthy
Current 1,247,392 msg/s
Latency p99 — payments Healthy
Current 2.3ms
Consumer lag — billing-service Healthy
Current 12 messages
Disk I/O — broker-1 Moderate
Current 284 MB/s write

Streaming in 30 seconds

One curl. One binary. No dependencies.

$ curl -fsSL https://get.conduit.io | sh && conduit start
01
Install
Single binary, no JVM. Runs on Linux and macOS, x86-64 and arm64. Docker image at 24MB.
02
Create a topic
conduit topic create payments --partitions 6
03
Start producing
Use the Go or Rust SDK. First message arrives in <3ms. Schema validation built in.