WASI Preview 3 HTTP Proxy in Rust [How-To 2026]
Bottom Line
In May 2026, the practical way to build a WASI Preview 3-style proxy in Rust is to compile the guest against stable WASI Preview 2 interfaces and let a multi-threaded host runtime provide concurrency. That keeps the code deployable today while aligning with the async-and-thread direction of Preview 3.
Key Takeaways
- Rust’s official target is still wasm32-wasip2, not a stable wasm32-wasip3 target.
- The official component-model roadmap says Preview 3 is primarily about async and thread support.
- A usable 2026 pattern is single-request guest logic plus multi-threaded host execution.
- wasi:http/proxy lets one Wasm component act as a compact reverse-proxy handler.
Building a multi-threaded HTTP proxy with WASI in 2026 requires one important correction up front: the standards roadmap and the shipping Rust toolchain are not at the same checkpoint. The official WASI repository still lists Preview 2 as stable, while the component-model repository says the subsequent WASI Preview 3 milestone is mainly about async and thread support. So the practical build is a Preview 2 component running inside a multi-threaded Rust host stack.
Prerequisites
Before you start
- Rust with the official wasm32-wasip2 target
- Wasmtime CLI installed via the official installer
- wkg to fetch WIT dependencies
- Basic comfort with Rust async code and HTTP request/response shapes
rustup target add wasm32-wasip2
curl https://wasmtime.dev/install.sh -sSf | bash
cargo install wkg
Bottom Line
Today’s deployable path is not “spawn Rust threads inside the guest.” It is “compile a small wasi:http/proxy component, then run many requests through a multi-threaded host runtime that is already moving toward WASIp3.”
Step 1: Understand the runtime model
Where Preview 3 fits right now
- The Rust target page documents wasm32-wasip2 as the component-producing target you can use today.
- The Wasmtime WASI docs state that WASIp3 support is experimental, unstable and incomplete.
- The Wasmtime HTTP handler API already exposes both P2 and P3 shapes, which is the strongest signal that the host side is where Preview 3 work is landing first.
The architecture to build
- Write a small Rust component that handles one incoming HTTP request.
- Use wasi:http/proxy for the request boundary.
- Forward the request to an upstream with wstd::http::Client.
- Let the host runtime handle concurrency across many requests and worker threads.
That split matters. It keeps the guest tiny, portable, and capability-scoped, while the host can scale request handling with a multi-threaded executor. Tokio’s official docs show multi_thread as the standard runtime flavor for worker-thread scheduling.
Step 2: Scaffold the component
Create a minimal component project with one Rust source file and one WIT world file.
mkdir wasi-http-proxy
cd wasi-http-proxy
mkdir -p src wit
Cargo.toml
[package]
name = "wasi-http-proxy"
version = "0.1.0"
edition = "2021"
publish = false
[lib]
crate-type = ["cdylib"]
[dependencies]
wstd = "0.6"
wit/world.wit
package techbytes:wasi-http-proxy;

world wasi-http-proxy {
  include wasi:http/proxy@0.2.2;
}
Fetch the WIT packages:
wkg wit fetch
This mirrors the same wasi:http/proxy world used by the official sample-wasi-http-rust project.
Step 3: Write the proxy
The component below does three useful things: exposes a health endpoint, rewrites the upstream URI, and returns a clean 502 when the outbound request fails.
src/lib.rs
use wstd::http::{Body, Client, Error, Request, Response, StatusCode};
use wstd::time::Duration;

const UPSTREAM: &str = "https://postman-echo.com";

#[wstd::http_server]
async fn main(req: Request<Body>) -> Result<Response<Body>, Error> {
    let path = req
        .uri()
        .path_and_query()
        .map(|pq| pq.as_str())
        .unwrap_or("/get");

    if path == "/healthz" {
        return Ok(Response::new("ok\n".to_owned().into()));
    }

    let (parts, body) = req.into_parts();
    let target = format!("{UPSTREAM}{path}");

    let mut builder = Request::builder()
        .method(parts.method)
        .uri(target);

    // Copy headers across, but drop Host so the client sets the upstream's.
    for (name, value) in &parts.headers {
        if name.as_str().eq_ignore_ascii_case("host") {
            continue;
        }
        builder = builder.header(name, value);
    }

    let mut client = Client::new();
    client.set_connect_timeout(Duration::from_secs(2));
    client.set_first_byte_timeout(Duration::from_secs(10));
    client.set_between_bytes_timeout(Duration::from_secs(10));

    let upstream = builder.body(body)?;
    match client.send(upstream).await {
        Ok(mut response) => {
            response
                .headers_mut()
                .insert("x-techbytes-proxy", "wasi-p3-ready".parse().unwrap());
            Ok(response)
        }
        Err(err) => Ok(Response::builder()
            .status(StatusCode::BAD_GATEWAY)
            .body(format!("upstream request failed: {err}\n").into())?),
    }
}
This uses the wstd::http::Client API directly. If you want to run both the Rust file and the WIT file through a formatting pass before publishing internal docs, TechBytes’ Code Formatter is a useful sanity check for mixed component-model snippets.
Step 4: Build and verify
Build the component with the official Rust target, then serve it with Wasmtime’s HTTP mode.
cargo build --release --target wasm32-wasip2
wasmtime serve -Scli -Shttp target/wasm32-wasip2/release/wasi_http_proxy.wasm
The Wasmtime CLI documentation covers the serve subcommand and the -Shttp setting for the wasi:http/proxy world.
Verification and expected output
- Check health:
curl -i http://127.0.0.1:8080/healthz
HTTP/1.1 200 OK
content-length: 3

ok
- Proxy a real upstream request:
curl -i "http://127.0.0.1:8080/get?source=techbytes"
HTTP/1.1 200 OK
x-techbytes-proxy: wasi-p3-ready
content-type: application/json; charset=utf-8
...
{"args":{"source":"techbytes"}, ... }
- Fire several requests in parallel:
seq 1 8 | xargs -I{} -P8 curl -s "http://127.0.0.1:8080/get?i={}" > /dev/null
Expected result:
- All requests return 200.
- The proxy remains responsive while multiple requests are in flight.
- Your concurrency comes from the host runtime, which is the correct 2026 design for a Preview 3-ready deployment path.
Troubleshooting and what’s next
Troubleshooting: top 3 issues
- Target not found: if Rust cannot build the component, confirm you ran rustup target add wasm32-wasip2.
- Missing WIT dependencies: if the build complains about wasi:http/proxy, rerun wkg wit fetch from the project root.
- Bad Gateway from the proxy: outbound DNS, TLS, or egress rules are usually the culprit. When sharing logs with teammates, strip tokens and cookies first with TechBytes’ Data Masking Tool.
What’s next
- Replace the public echo target with a local upstream service and add path-based routing.
- Add request header allowlists and explicit timeout budgets per route.
- Precompile and benchmark the component once your runtime setup is stable.
- Track Wasmtime’s P3 APIs so you can adopt richer async and thread semantics as they move from experimental to stable.
The practical lesson is straightforward: in May 2026, WASI Preview 3 is best treated as an architecture target, not a guest-compilation target. Build on wasm32-wasip2, keep the component narrow, and let the multi-threaded host do the heavy lifting.
Frequently Asked Questions
Is WASI Preview 3 stable on May 1, 2026?
No. The WASI repository still lists Preview 2 as the stable milestone, and Wasmtime documents its WASIp3 support as experimental, unstable and incomplete.
Why does this tutorial compile to wasm32-wasip2 if it is about Preview 3?
Because Rust has no stable wasm32-wasip3 target. Compiling against Preview 2 keeps the component deployable today while the multi-threaded host carries the Preview 3-style concurrency.
Can a Rust WASI component spawn native threads directly today?
Not in this setup. The guest handles one request at a time, concurrency comes from the host runtime, and richer thread semantics are exactly what the Preview 3 milestone is expected to add.
Do I need cargo-component for this proxy?
No. The wasm32-wasip2 target can build a component directly, and the official sample wasi:http project shows the same style of workflow with plain cargo build plus wasmtime serve.
Related Deep-Dives
- Wasm Components & WASI Preview 3 at the Edge [2026]. A roadmap-oriented look at where Preview 3 fits into real edge deployments.
- WebAssembly WASI 2.0 [Deep Dive] Production Edge Guide 2026. A production-focused guide to stable WASI 0.2, components, and runtime choices.
- WebAssembly Component Model 2026: Rust + Go with Wasi-Cloud. A broader component-model walkthrough for engineers building polyglot services.