Rust Webserver Hot Reload
The Problem
Rust is slow as fuck. At compile time at least. This can get pretty annoying while developing something like a webserver.
Well, with a couple of easy tricks we can get a really nice experience that mostly solves the problems caused by Rust’s slow compile times.
Setup
Just to make sure we’re on the same page, let’s start with something simple. We’ll clone the Axum repo and use the hello-world example as our starting point.
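Assuming you’re cloning over HTTPS straight from GitHub (use SSH if that’s more your thing):

```sh
git clone https://github.com/tokio-rs/axum
cd axum
```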
In case any of the next steps don’t work: at the time of writing this, the main branch is at commit `d703e6f97a0156177466b6741be0beac0c83d8c7`. You can run `git reset --hard d703e6f97a0156177466b6741be0beac0c83d8c7; git clean -df` to go back to this point in time.
I’ll assume you already have the Rust toolchain installed if you’re reading this. If you don’t… Google “install rust” I guess.
Now
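hop into the example’s directory (assuming the clone above left you sitting in the repo root):

```sh
cd examples/hello-world
```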
and we’ll be at our hello world example.
Iterating
Noob
Run `cargo run`. Wait a sec and… boom. The server should be running! Enter `127.0.0.1:3000` in your browser and you should see “Hello, World!”.
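For reference, the hello-world example’s `main.rs` is only a handful of lines. It looks roughly like this (sketch from memory; the file in the repo is the source of truth if it has drifted since):

```rust
use axum::{response::Html, routing::get, Router};

#[tokio::main]
async fn main() {
    // One route, one handler.
    let app = Router::new().route("/", get(handler));

    // Listen on the address we just opened in the browser.
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

// This is the string we'll be editing for the rest of the post.
async fn handler() -> Html<&'static str> {
    Html("<h1>Hello, World!</h1>")
}
```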
Now in `src/main.rs`, change the `Hello, World!` string to something else. I’ll do `Hello, World! 2`. Very creative. Save the file and refresh your browser. Nothing happens. You probably knew that unless you’re a real noob.
We ran the server with `cargo run`, so it’s not automatically reloading the server or anything. We have to stop it manually and run it again. We’ll do that just to make sure our change is indeed reflected. `Ctrl+C` in the terminal you ran the server in, and then `cargo run` again. Refresh your browser. If you see “Unable to connect” or similar, refresh a couple more times. It should be pretty quick to come back up, and you’ll see your updated string on the page.
So while we’ll never quite get a Go-like instant reload experience, we can get something much better than the naive way I often see people complaining about online. One of the last two options should make a majority of people pretty happy.
Still a noob but a little less
We can do better than that. We can use `cargo watch` to automatically re-run the server when anything in our workspace changes.
To install the `watch` subcommand, run
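```sh
cargo install cargo-watch
```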
There are other installation options; check the `cargo-watch` repo’s README if you want to do something else.
Now close your server if it’s still running and run
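```sh
cargo watch -x run
```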
Your server should start up pretty quickly again. Refresh your browser and make sure you’re still seeing your updated string. Now go back to `src/main.rs` and update the message again. I’ll do `Hello, World! 3`. Save the file and refresh your browser. The server might still be stopped, but it should start up again shortly and you’ll see the new message.
This is pretty good now. With a small delay, we’ll see our changes reflected in the browser. But if you’re refreshing your browser quickly, you may have spotted the next problem: the server stops running for a moment while it’s recompiling. That may seem like a small issue with this dummy example, but on a real project, compiling can take significantly longer, and a larger webserver may be down for many seconds or even minutes.
Additionally, we can do more than just restart the server with `cargo watch`. We can run cargo’s `check` command and run our tests. In fact, this is probably what you’ll want to do in most cases.
Our real `watch` command might look something like `cargo watch -x "clippy -- -D clippy::all" -x test -x run`. This runs clippy (Rust’s linter); if clippy passes, runs our tests; and if the tests all pass, restarts the server. Even for a simple project with a few tests, this will take at least 10 seconds.
To simulate this, let’s change our `watch` command to something that takes longer than just recompiling this dummy program. Change our watcher to
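something along these lines (anything that burns ten-ish seconds before bringing the server back works; `cargo watch`’s `-s`/`--shell` flag plus a `sleep` is the laziest option I know of):

```sh
cargo watch -s 'sleep 10 && cargo run'
```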
Now go back into `main.rs` and change the message. Save. Refresh your web browser. A bunch of times. You’ll see the server go down for at least 10 seconds every time any file changes. This is what’ll get really annoying. 10 seconds still isn’t so bad, but what about when it takes a minute? Or longer? We can still do much better.
Documentation enjoyer
If you love reading ahead, you may have already seen the next step in the `cargo watch` repo’s README. There’s a great little tool called systemfd that will make it much easier. Taken from its README, `systemfd` is “…a tiny process that opens a bunch of sockets and passes them to another process so that that process can then restart itself without dropping connections.”
To install `systemfd`, run
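```sh
cargo install systemfd
```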
Instead of the `hello-world` project we’ve been in, Axum conveniently already has a dummy project set up to properly use `systemfd`. Head to `axum/examples/auto-reload`.
You’ll see in this folder’s README that you can run `systemfd --no-pid -s http::3000 -- cargo watch -x run` to start the server. We’ll just change this slightly to match our previous command that simulates a longer-running check/test process.
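Sticking with the same sleep stand-in as before (again, any command that takes about as long as your real checks would will do):

```sh
systemfd --no-pid -s http::3000 -- cargo watch -s 'sleep 10 && cargo run'
```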
After 10 seconds our server should be running, and you’ll see a fresh “Hello, World!” message in your browser. Again, update this message to whatever makes you happy and save the file. Refresh your browser.
You’ll notice something different this time. We don’t immediately get the “Unable to connect” message from the server being down. It’ll just do seemingly nothing for a little over 10 seconds, and then the page will refresh with your new message. It’s doing something though. Open up dev tools (`Ctrl+Shift+I` in Chrome and Firefox) and go to the “Network” tab. Change your message in `main.rs`, save the file, and refresh the page once. You’ll see that a GET request is made to `localhost` that takes a little over 10 seconds. In my case, it took 10,842 ms.
This is the magic of `systemfd`. Even though our server isn’t running, `systemfd` is still accepting connections on port 3000, so when our server comes back up, it’s like nothing happened.
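The server’s side of this trick is the listenfd crate: the auto-reload example grabs the socket that `systemfd` passes in and only binds its own if there isn’t one. Its `main.rs` looks roughly like this (sketch from memory; the actual file in the repo is the source of truth):

```rust
use axum::{response::Html, routing::get, Router};
use listenfd::ListenFd;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() {
    let app = Router::new().route("/", get(|| async { Html("<h1>Hello, World!</h1>") }));

    // If systemfd handed us a socket, reuse it; otherwise bind our own.
    let mut listenfd = ListenFd::from_env();
    let listener = match listenfd.take_tcp_listener(0).unwrap() {
        Some(listener) => {
            // tokio needs the inherited std listener in non-blocking mode.
            listener.set_nonblocking(true).unwrap();
            TcpListener::from_std(listener).unwrap()
        }
        None => TcpListener::bind("127.0.0.1:3000").await.unwrap(),
    };

    axum::serve(listener, app).await.unwrap();
}
```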
This is a solid improvement, and it’s definitely less annoying to have our browser wait 10 seconds for a response instead of seeing the “Unable to connect” message. Depending on your style, this might be the best solution for you. But I prefer the next one personally.
Documentation enjoyerer
If you read even further ahead in the `cargo watch` documentation, you’ll have seen an alternative solution to using `systemfd`. The goal is to use two different `cargo watch` commands to minimize the server’s downtime.
The first `cargo watch` will run all your checks and tests. The second one restarts the server. The key is that at the end of the first watcher, we `touch` a file that the second watcher is looking for. Set up this way, the server only gets restarted after all your checks pass, and running those checks doesn’t take the server down.
For some reason, though, the `cargo watch` documentation doesn’t combine this approach with `systemfd`. Doing so gives you the best of both worlds.
Before running our cargo watch command, we’ll have to add our `.trigger` file to `.gitignore` so that a) we don’t accidentally commit it and b) `cargo watch` doesn’t keep rerunning itself every time the file is touched. Run
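something like the following (any way of getting the line into `.gitignore` works):

```sh
echo ".trigger" >> .gitignore
```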
in the `auto-reload` directory.
Now run the first cargo watch command, which only runs our (simulated) long-running checks and then touches `.trigger` when they pass.
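Keeping the same sleep in place of a real clippy/test chain, that could be something like:

```sh
cargo watch -s 'sleep 10 && touch .trigger'
```

In a real project you’d swap the `sleep 10` for your actual clippy/test commands and keep the `touch .trigger` at the end.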
Next, open up a new terminal, and we’ll run a `systemfd` command that will only restart the server when the `.trigger` file is updated.
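This is roughly the second watcher from the `cargo watch` README, wrapped in `systemfd` (double-check the flags against your installed `cargo-watch` version; `--no-gitignore` matters because we just told git to ignore `.trigger`, and `-w` narrows the watch to that one file):

```sh
systemfd --no-pid -s http::3000 -- cargo watch --no-gitignore -w .trigger -x run
```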
And now, go back and update the message in `main.rs`, save, and refresh the browser until you see the new message. You’ll notice a couple things:
- Right after saving the file, your server doesn’t go down. It responds instantly, using the old code with the old message.
- After 10 seconds, the server briefly goes down, but with `systemfd`, your browser will simply show that it’s waiting for a response.
The biggest benefit to this over the last approach is that your server is still running with the old code while your checks are running. Thus, if your checks fail, the server isn’t taken down. The working version is still alive and accepting connections.
The downside is that it can be confusing to have your browser load instantly with responses from the old server while the checks are running. Depending on your preferences, experience, and general workflow, this might not be ideal for you.
Conclusion
In the end, we can’t make Rust compile faster than it already does. Well, we can’t, anyway. The geniuses actually working on that shit are making Rust compile faster and faster with each new version. Maybe one day it’ll be fast enough that it isn’t such a big negative to using Rust. But even so, we’ll never get Go compilation speeds. That’s why this crap even matters in the first place. No Go developer has ever lost sleep over this.
So anyways. If you’ve wasted a bunch of time waiting for your Rust webservers to reload, I hope this helps you.