First Steps in Rust

I started playing around with Rust, and one of the first things we do when learning a new language is write a little "Hello, world" program. So I typed this in:

fn main() {
    println!("Hello, world!");
}

I compiled via rustc main.rs and voilà, my terminal shows "Hello, world!". A quick check on runtime shows it runs more or less instantaneously:

$ time ./main
Hello, world!

real 0m0.001s
user 0m0.000s
sys 0m0.000s

You may be wondering why I bothered to check how fast this was. If I told you that I've spent most of my time on the JVM over the past ten years, you'd know why I'm so paranoid.

But I digress. I looked at the binary size and saw it was 2.5M. Hmmm. My gut says that's a bit big, so I wrote this little C++ program to compare:

#include <iostream>

int main(int argc, char** argv) {
    std::cout << "Hello, world!" << std::endl;
    return 0;
}
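
For what it's worth, this was a plain unoptimized build (hello.cpp is just a placeholder filename):

$ g++ hello.cpp      # no -O flags, default a.out output
$ ls -lh a.out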

Sure enough, the a.out was 17K. Rust must have some option to shrink that, and the docs point at -C prefer-dynamic, which links the standard library dynamically instead of statically baking it in. When I compile the Rust code with that flag, it generates a 17K binary, just like the C++. Yay.
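
Concretely, the two builds were just this (assuming a recent stable rustc; only the codegen flag changes):

$ rustc main.rs                      # static libstd: the ~2.5M binary
$ rustc -C prefer-dynamic main.rs    # dynamic libstd: the ~17K binary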

But now it won’t run since it’s dynamically linked:

./main: error while loading shared libraries: libstd-205127404fcba336.so: cannot open shared object file: No such file or directory

Crapola, where is this library? What else might be missing?

$ ldd main
linux-vdso.so.1 (0x00007ffcc23cc000)
libstd-205127404fcba336.so => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6a9e912000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6a9eb14000)

Oh, just that one. Linux's locate didn't find it, but I remembered seeing a ~/.cargo directory somewhere.

find ~/.cargo/ -name "libstd*"

Nope. How about:

find ~/.rustup -name "libstd*"
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/wasm32-unknown-unknown/lib/libstd-077104c061bb2ffc.rlib
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.so
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.rlib
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/analysis/libstd-205127404fcba336.json
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.so
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-sys-internals.html
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-thread-internals.html
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-io-internals.html

There you are, you wascally wabbit! But I don't know enough about Rust to say if this is the recommended approach:

LD_LIBRARY_PATH=./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib ./main
Hello, world!

But who cares, that worked!
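
If exporting LD_LIBRARY_PATH every time gets old, my understanding is that you can bake the toolchain's library directory into the binary as an rpath at link time, something like this (untested by me beyond hello world; the path is the one find turned up above):

$ rustc -C prefer-dynamic \
    -C link-arg=-Wl,-rpath,$HOME/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib \
    main.rs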

CPU Cooling

tl;dr:

Make sure your fans point the right way and that you apply thermal paste properly.

When I bought my PC, I stuck an AMD 3900X 12-core/24-thread processor in there and used the STOCK heat sink and fan. Out of the box, the processor ran anywhere from 40-60° C and I was pretty happy. Then I added a GPU and stuck a few MORE fans into the case (at the top). When I was finished, the CPU was regularly overheating, spiking anywhere from 75-100° C.

This was annoying since my GPU was reading a relatively cool 30-40° C.

I was ready to drop $200 on a water cooler, which I know would bring it down to 40° C (or less) even under heavy use. But I obviously didn't want to spend the money on that, since my upgrade list includes a second M.2 card and a second bank of 32GB of RAM.

I was talking to my son and he said that I'd put the top fans in wrong. He said the best practice is to draw air out from the top. I knew this was the best practice too, but I decided against it because I wanted to see for myself, and my intuition said that blowing cold outside air right onto the CPU heat sink was the way to go, since those topside chassis fans are right next to it.

Turns out I was probably wrong.

Another thing: to install those topside fans, I needed to re-seat my CPU heat sink since it was "in the way". When I did this, I believe I didn't use enough thermal paste, since I used up the last bit of a syringe and had no more.

To make a long story short, I re-applied thermal paste and flipped the fans to blow out, and now the PC idles around 45° C and heats to about 80° C max under heavy load. That's still hot, but lowering the peak temp by 20° C seems like a big deal for a minimal investment in thermal paste and a little elbow grease flipping those fans.

I wrote a simple Java program to saturate my CPUs. It just increments a counter, so my gut tells me that lots of transistors in the CPU are unused and the program may not heat up the CPU much even though the OS is reporting 100% utilization. It's possible a program that accesses more memory or exercises more registers/logic gates on the die could theoretically make the CPU busier and thus heat up more. If anyone knows about this, I'd love to learn more.
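
For the curious, the saturation program was roughly this shape (a from-memory sketch; BurnCpu is just a placeholder name, and the only interesting part is one spinning thread per logical core):

// Sketch of a trivial CPU burner: one thread per logical core,
// each doing nothing but incrementing a local counter.
public class BurnCpu {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        Thread[] threads = new Thread[cores];
        for (int i = 0; i < cores; i++) {
            threads[i] = new Thread(() -> {
                long counter = 0;
                while (true) {
                    counter++;   // pure ALU work; barely touches memory or cache
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();            // never returns; kill the process with Ctrl-C
        }
    }
}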

I may still spring for a water-cooler down the road, but if this configuration remains stable then I guess I’m happy with it.

More Kubernetes: WSL2, Hyper-V, Docker, and X11

Most programming I do at my day job is done via PuTTY and IntelliJ. The latter runs on Linux, and I run MobaXterm as an X11 server. I don't use Moba's terminal because the chrome around it is way too gizmo-ey, like I'm at an ugly sweater party or something.

PuTTY is ideal because it’s so minimal and it has all the features I need. I actually use PuTTY Tray, which is a fork that has a few more options than the standard PuTTY. IntelliJ works great over X11 and MobaXterm is pretty snappy on our local network, even from our datacenter to our office.

That said, at home I don't have access to a fancy $70k server running dual Intel Xeon Platinum processors with 512GB of RAM and 1.6TB of enterprise-grade, northbridge-connected NVMe drives. I have a 12-core/24-thread AMD Ryzen 3900X and Windows 10 Pro for Workstations. But Windows is so developer-friendly these days that it gives me a ton of options for development.

You can do Linux a few ways, two of which are running a full VM via Hyper-V or using the Windows Subsystem for Linux (WSL). In the very latest fast-ring builds of Windows, you can enable WSL2, which is an overhaul of the subsystem that's a lot faster and more compatible with Linux binaries.

WSL2

To get WSL2, I joined the Windows Insider program and set updates to the “Fast Ring”. Right now I’m running Windows 10 Pro for Workstations Build 19025.vb_release.191112-1414, or whatever. Then I followed the steps here to enable WSL2. Yippee!
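
From memory, the gist (run from an elevated PowerShell) was something like this; Ubuntu is just the distro I happen to have installed:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
# reboot, then:
wsl --set-default-version 2
wsl --set-version Ubuntu 2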

Docker

Docker isn't available in WSL2 by default, and I'd prefer it was since I like to stay in a bash prompt over PowerShell or cmd.exe. Luckily, the latest Docker for Windows Edge version has an option to expose Docker to WSL2. It's in Settings > Resources > WSL Integration. To check things are working, run wsl -l -v:

PS C:\Users\nic> wsl -l -v
  NAME                   STATE           VERSION
* Ubuntu                 Running         2
  docker-desktop         Running         2
  docker-desktop-data    Running         2

X11

I wanted to see if I could get IntelliJ to run under WSL/X11, and my initial attempts were met with utter failure. IntelliJ wouldn't start up no matter what I set my DISPLAY variable to. localhost:0 (or localhost:0.0) didn't work, nor 127.0.0.1:0, nor the eth0 IP reported by ifconfig, nor the Windows desktop IP. Finally, I looked in /etc/resolv.conf and saw this:

nameserver 172.22.80.1

Setting my DISPLAY to 172.22.80.1:0 worked.
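
Since that address can change across reboots, something like this in ~/.bashrc should keep DISPLAY pointed at the Windows host (assuming WSL2 keeps writing the host's IP as the nameserver in /etc/resolv.conf, which appears to be the case):

export DISPLAY=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf):0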

One interesting observation is that the WSL2 network and my desktop's bridged network to my home router are isolated enough that IntelliJ's license check couldn't detect that I had two copies open: one running natively on Windows and the second running in the WSL context over X11. On a flat network, the second copy would refuse to run, or the first copy would die on launch of the second. I forget!

Kubernetes

So I went through all of this so I could interact with my Kubernetes cluster at the WSL prompt. I set up Rancher as a container running in Docker for Windows. Once Rancher was running, I created three VMs running Ubuntu using Hyper-V and called them node1, node2, and node3. Rancher made it easy to have each VM register itself as a node in a cluster called "Lab". At this point I'm using 27GB of my 32GB of RAM, which is surprising, so I'll probably scale down Docker's and the VMs' RAM. Maybe giving each VM 4GB was a little excessive.
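
For reference, standing Rancher up as a single container is basically the one-liner from Rancher's quick-start docs (ports and image tag per those docs):

$ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher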

I still have two free DIMM slots so I guess it will be time to get to 64GB soon.

The cluster is extremely unstable and I'm in the process of figuring out why. When I reboot the desktop, the VMs (cluster nodes) shut down (along with Rancher's container); after the reboot Rancher comes up and tons of containers in each node come up, but the status is redder than Carrie at her prom.

For some reason, the containers running on each VM node do not auto-start until I invoke Docker somehow, like with sudo docker ps. I'm still researching why this is.
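
One thing on my list is to check what restart policy those node containers were created with; something like this should show it (purely a diagnostic, not a confirmed fix):

$ sudo docker inspect -f '{{.Name}} {{.HostConfig.RestartPolicy.Name}}' $(sudo docker ps -aq)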

Building a Command-Line Toolset Part I – Root Command

Introduction

Like many programmers, I have a deep affinity for the command line. In my head, all good backend systems start with a solid core driven by terminal commands. This core system runs "silently" outside of a GUI context and relies on configuration and signals received at runtime to dictate its behavior. GUIs have a lot of great characteristics, such as contextual linking, but they come later, IMHO.

Continue reading “Building a Command-Line Toolset Part I – Root Command”