Categories
Gaming Programming

pmud – Redis Functions

Introduction

This is another installment in my adventures in writing a MUD in C++, which I'm calling pmud. I abandoned the ostentatious titling "Programming a MUD: Part 19 - Nick Drinks Some Scotch" because such bravado only serves to make me cringe when I read these posts twenty years from now.

Tonight's task was to implement a Redis function in a Lua script and load it into Redis. The purpose of the function is to register an event in the system. The long-term goal is to create as many functions as I have mutation events in the system. And the goal of all these functions is to write their respective content to a Redis stream which serves as the event bus for the game.

What's a Redis function, you ask? It's a function, written in Lua, that runs inside the Redis server itself. Redis supports only one language engine right now, and that's Lua.

There are numerous advantages to having a function live inside the database, but principal among these is the ability to access data at very low latency. After all, nothing needs to leave the Redis process during a function call, unless of course the purpose of the function is to return a bunch of data.

There are other advantages, all explained in the documentation on Redis Functions.

How it went down

I started the evening by reading a bit on the Redis website, reading a bit of the book Mastering Redis by Jeremy Nelson, and poking around in redis-cli.

Then I downloaded two plugins for vscode: one for Lua and one to let me connect to Redis, the Database Client plugin (cweijan.vscode-database-client2). This latter plugin is awesome. It's not awesome because it allows me to easily browse all of the keys in my instance, nor because I can see server status at a glance, nor because I can make changes to the data visually. It's awesome because when I went to use the redis-cli capability, I found myself in front of a paywall. The message from the plugin's author was so delightfully passive-aggressive that I immediately purchased it. I felt the author's pain as he described his rage at receiving negative reviews from internet trolls who complained about some feature not working the way they wanted, trolls with no patience or appreciation for the thousand-plus hours the author likely put into this plugin. In my eyes, the author deserved some cheddar.

So my modest function looks like this:

-- keys[1] names the stream to write to; args arrive as field/value pairs.
local function eventUserLogin(keys, args)
  local stream_name = keys[1]

  -- Sanity-check the field names before touching the stream.
  assert(args[1] == 'username')
  assert(args[3] == 'password')

  -- Append the event; "*" tells Redis to assign the entry id.
  return redis.call('XADD', stream_name, "*", args[1], args[2], args[3], args[4])
end

redis.register_function('eventUserLogin', eventUserLogin)

And to load it, I did this:

cat pmud/scripts/redis-functions/events.lua | redis-cli -h singularity.lan -x function load lua mylib REPLACE

But I got errors for about an hour before I realized the FUNCTION capability is new and only available in Redis 7. The default container I pulled from docker.io was Redis 6. Luckily there were release candidates I was able to pull. I did lose all of my data in the process, which is unfortunate because I had set up my instance to auto-load its data on startup, so something is funky there with the upgrade. This didn't cause me too much pain because I have scripts to load the data back in, and those scripts worked marvelously.

(Previously, Lua could be used, but it was a one-script-at-a-time model.)
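
By "one-script-at-a-time" I mean the EVAL flow, where you ship the Lua source with every call (or load it by SHA via SCRIPT LOAD and then EVALSHA it) instead of registering a named function once. Roughly:

redis-cli EVAL "return redis.call('XADD', KEYS[1], '*', ARGV[1], ARGV[2], ARGV[3], ARGV[4])" 1 events username nick password poop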

Once I was able to load the function, calling it was a cinch (FCALL takes the function name, the number of keys, the keys themselves, and then the remaining args):

fcall eventUserLogin 1 events username nick password poop

Then, reading it:

> xread streams events 0
1) 1) "events"
   2) 1) 1) "1647315387974-0"
         2) 1) "username"
            2) "nick"
            3) "password"
            4) "poop"

That's the first event in the system, yay!

Categories
Gaming MUD Programming

Programming a MUD: Part 2 – CQRS and Event-Sourcing

Introduction

I decided to work towards making the MUD work on an event-sourcing model with CQRS. CQRS stands for Command Query Responsibility Segregation. It's just a fancy way to say that the manner in which we inject data into the system (in this case, as events) can be different from the way in which we read state back out (such as a plain Redis query).
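
To make that concrete, here's a minimal sketch of the split in pmud's terms, assuming hiredis, the events stream from the last post, and a hypothetical characters:nick hash holding the materialized read-side state:

#include <hiredis/hiredis.h>
#include <cstdio>

int main() {
  redisContext* c = redisConnect("singularity.lan", 6379);
  if (c == nullptr || c->err) return 1;

  // Command side: every mutation enters the system as an event on the stream.
  redisReply* r = (redisReply*)redisCommand(
      c, "XADD events * event user-login username %s", "nick");
  if (r) freeReplyObject(r);

  // Query side: reads go against whatever materialized state we keep
  // (here a plain hash), completely independent of how the writes arrived.
  r = (redisReply*)redisCommand(c, "HGET characters:nick level");
  if (r && r->type == REDIS_REPLY_STRING) std::printf("level=%s\n", r->str);
  if (r) freeReplyObject(r);

  redisFree(c);
  return 0;
}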

This change in technical direction requires that I create an event for every kind of state mutation I plan to allow. I’ve never implemented CQRS, so I expect to learn a few things.

Currently, to seed the data, I run a YAML-to-Redis converter I wrote. This is a generic facility that will basically take any YAML and mirror it in Redis. Since Redis doesn't support hierarchical values, I simulate the hierarchy by extending the key names into a kind of path. At the top level we simply have values, lists, and maps. Map entries are scalar.

90) "monsters:monster:blob"
91) "monsters:monster:golem"
92) "items:armor:medium armor"
93) "places:population-center:Karpus"
94) "places:population-centers"
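
To illustrate the mapping (the YAML contents here are hypothetical, but the key shape matches what the converter produces), a fragment like:

monsters:
  monster:
    gargoyle:
      hit-dice: 4d6

ends up mirrored as (roughly) a hash stored at the key monsters:monster:gargoyle, with hit-dice as a scalar field.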

But this won't do in a system based on the event-sourcing model. I can't just mutate the state all at once like that. It would violate event-sourcing principles like being able to replay, and, during replay, have the system process each event against the correct starting state.

For instance, if a character starts out at level 1 with 10hp, and we replay a set of events created during an adventure, the character might be level 2 at the end of those events. If we replay those events but ignore the state changes that came before them (such as the character starting at level 1), the system will process them as if the character had started at level 2. So "spawn character" is an important event: it establishes a new character, and interactions with that character should start from that point.

It's possible I could create an "event" such as "load-yaml-file" whose payload is the YAML file contents, but I think the lack of granularity might prove unworkable.

Instead of injecting a map for a monster:

Monster::Gargoyle
    Hit-dice: 4d6

I'll say something like "add-update-monster", and this event should carry all the data relevant to adding a new monster or updating an existing one.
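
On the stream, that event might look something like this (the field names are just me guessing at a shape, not a settled schema):

XADD events * event add-update-monster name Gargoyle hit-dice 4d6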

This way, as the event makes its way across the system, components have a signal to act on or ignore. This is also what gives a CQRS system its ability to replay.

Redis Streams

Redis has a stream feature which is ideal for modeling these events. For now, I'm going to use the simple XREAD command, which lets any client digest every message in the stream.
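
As a rough sketch (not the actual pmud reader), a hiredis-based consumer that starts from id 0 and digests everything on the stream might look like this:

#include <hiredis/hiredis.h>
#include <cstdio>
#include <string>

int main() {
  redisContext* c = redisConnect("singularity.lan", 6379);
  if (c == nullptr || c->err) return 1;

  std::string last_id = "0";  // start from the beginning of the stream
  for (;;) {
    // Block up to 5 seconds waiting for entries newer than last_id.
    redisReply* r = (redisReply*)redisCommand(
        c, "XREAD BLOCK 5000 STREAMS events %s", last_id.c_str());
    if (r && r->type == REDIS_REPLY_ARRAY && r->elements > 0) {
      // Reply shape: [[stream-name, [[id, [field, value, ...]], ...]]]
      redisReply* entries = r->element[0]->element[1];
      for (size_t i = 0; i < entries->elements; ++i) {
        redisReply* entry = entries->element[i];
        last_id = entry->element[0]->str;  // remember the last id we saw
        std::printf("got event %s\n", last_id.c_str());
      }
    }
    if (r) freeReplyObject(r);
  }
}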

I did some benchmarking.

Hardware:

  • Redis runs on a dedicated Linux box (Intel Core i5-3570K CPU @ 3.40GHz)
  • stream writer/reader programs to send/receive data from Redis (AMD Ryzen 9 3900X 12-Core Processor)

With the source/sink programs running on my Windows desktop, inside a WSL2 Debian container, we get:

~770 messages/sec

But the CPU was more or less idle on both the Windows box and the Redis box, so my network must be the bottleneck. I repeated the test by running the writer/reader programs directly on the Redis Linux box, eliminating the network, and the results were roughly 10x better:

~7,011 messages/sec

I could not get single-threaded writes to go much faster than that, but I did find a bug in the reader, and when I fixed it I was able to receive 100-200k messages/sec, which is great.

Now that I have the technical foundation well understood, it's time to start defining some mutation messages and modeling how they'll appear on the stream.

Based on work so far, I have these events to define first:

  • UserInputReceived
  • UserOutputSent
  • MonsterAddUpdate
  • ItemAddUpdate
  • PlaceAddUpdate

This is probably suboptimal naming, but I’m still new to this, so I expect to refine the event taxonomy over time as I see more events.

More on that later.

Categories
Programming

Building a MUD – Part 1

This is part 1 of N parts where I'll discuss my adventures in building a MUD (Multi-User Dungeon) in C++. I've always wanted to write games but never got around to it for a number of reasons I'll discuss over time. So when I decided to finally try my hand at creating one, I chose the language and tools I'm most familiar with, and which I use in my day-job.

You might be wondering why I chose a MUD over a platformer, or a shooter, or a mobile game of some kind. The reason is that I'm primarily a backend developer, and a MUD seems more or less like a backend project to me. I have a buddy who's a JavaScript/3D expert, and I figure one day I can enhance the protocol to drive the MUD with a bit more flair. It's a bit early for that.

I'm also choosing to make the repo public as I do this for two main reasons:

  • I'm chatting about my progress and I want to send code links to my friends and don't want to bother with adding them as special collaborators
  • I invite comments and criticism on the way I do things because this is a learning project above all else

The Stack

First, the repo is here: https://github.com/NickCody/cpp-game. Be warned, the repo is not clean, as it contains numerous little code projects tucked away in various directories as I played around with ncurses, graph algorithms, the game of life, etc. If you poke around, you may be surprised at how many little things are in there.

Visual Studio Code is the primary tool, and more specifically I started with one of their stock C++ dev containers (described here). I've heavily modified the container's Dockerfile to include the latest tools I could; among these are:

  • Debian Bullseye
  • gcc 10.2 (for C++ 20 compatibility)
  • vim, graphviz, ninja, clang-format, and a few other goodies
  • CAF, the C++ Actor Framework (more on this below)
  • Bazel for builds
  • YAML for configuration
  • Redis as the primary store for user/game data
  • Google Cloud SDK for "production" deployments

Of these, I feel most people might shake their heads at my using an actor library in my C++ code. I'm a huge fan of the actor model, having worked with Akka on Scala for years. The model makes sense to me and I wanted to learn more about this library for C++. I'll be speaking about it in detail in coming posts.

Home Setup

At home, I have my Windows desktop with Docker Desktop and WSL2 installed. I don't find myself in WSL2 much, not directly, but the repo lives there and I spawn the vscode devcontainer from WSL2.

For edit/compile/run cycles, I run the MUD on my desktop. Redis is running on a home Linux server I have tucked under my desk. It's only an Intel Core i5-3570K CPU @ 3.40GHz with 32GB RAM, but it's plenty of power for what I need. This machine is my "QA" environment, where I can test my deploy scripts. I have a version running in the cloud, too, using Google's Cloud SDK.

Why Redis?

There may be better choices for my persistent store, since Redis is primarily a fast in-memory data structure store with persistence capabilities. I was curious to learn it, so I figure if it turns out not to be the right choice, I'll eventually figure out why and then I'll have learned something.

For now, it's pretty amazing. Fast, lean, and no bullshit. I'm using hiredis, which is a bare-bones, thin C client. I considered one of the many "modern" C++ wrappers, but I felt like I could write my own and use it as an abstraction around the storage engine, which might help me move off Redis down the road if I eventually decide it's not the right storage engine for this project.
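
Just to show the shape I have in mind (this is a hypothetical sketch, not the actual pmud code), the wrapper could be as simple as an RAII class that owns the hiredis connection and exposes only the operations the game needs:

#include <hiredis/hiredis.h>
#include <stdexcept>
#include <string>

class Store {
 public:
  Store(const std::string& host, int port)
      : ctx_(redisConnect(host.c_str(), port)) {
    if (ctx_ == nullptr || ctx_->err) throw std::runtime_error("redis connect failed");
  }
  ~Store() { redisFree(ctx_); }
  Store(const Store&) = delete;
  Store& operator=(const Store&) = delete;

  // Store a scalar value under a key.
  void set(const std::string& key, const std::string& value) {
    free_reply(redisCommand(ctx_, "SET %s %s", key.c_str(), value.c_str()));
  }

  // Fetch a scalar value; empty string if missing.
  std::string get(const std::string& key) {
    redisReply* r = (redisReply*)redisCommand(ctx_, "GET %s", key.c_str());
    std::string out = (r && r->type == REDIS_REPLY_STRING) ? r->str : "";
    free_reply(r);
    return out;
  }

 private:
  static void free_reply(void* raw) {
    if (raw) freeReplyObject((redisReply*)raw);
  }
  redisContext* ctx_;
};

If Redis ever turns out not to be the right fit, the rest of the game only ever talks to something like Store, so the swap stays contained.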

Conclusion

That's all for now. I'm not sure what the topic of the next post will be, as there are dozens of topics to discuss, but thanks for reading.

Categories
Programming

First steps in Rust

I started to play around with Rust, and one of the first things we do when learning a new language is write a little hello-world program. So I typed this in:

fn main() {
    println!("Hello, world!");
}

I compiled via rustc main.rs and voilà, my terminal shows hello world. A quick check on runtime shows it runs more or less instantaneously:

$ time ./main
Hello, world!

real 0m0.001s
user 0m0.000s
sys 0m0.000s

You may be wondering why I bothered to see how fast this was and if I told you that I've spent most of my time on the JVM over the past ten years then you'd know why I'm so paranoid.

But I digress. I looked at the binary size and saw it was 2.5M. Hmmm. My gut says that's a bit big so I wrote this little C++ program to compare:

#include <iostream>

int main(int argc, char** argv) {
  std::cout << "Hello, world!" << std::endl;
  return 0;
}

Sure enough, the a.out was 17K. Rust must have some option to slim this down, and the docs point to -C prefer-dynamic, which links the standard library dynamically instead of statically. When I compile the Rust code with it, I get a 17K binary, just like C++. Yay.
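
For reference, the flag goes right on the compile line:

rustc -C prefer-dynamic main.rs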

But now it won't run since it's dynamically linked:

./main: error while loading shared libraries: libstd-205127404fcba336.so: cannot open shared object file: No such file or directory

Crapola, where is this library? What else may be missing?

$ ldd main
linux-vdso.so.1 (0x00007ffcc23cc000)
libstd-205127404fcba336.so => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6a9e912000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6a9eb14000)

Oh, just that one. Linux's locate didn't find it. I thought I saw a ~/.cargo directory somewhere.

find ~/.cargo/ -name "libstd*"

Nope. How about:

find ~/.rustup -name "libstd*"
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/wasm32-unknown-unknown/lib/libstd-077104c061bb2ffc.rlib
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.so
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.rlib
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/analysis/libstd-205127404fcba336.json
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.so
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-sys-internals.html
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-thread-internals.html
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-io-internals.html

There you are, you wascally wabbit! But I don't know enough about Rust to say if this is the recommended approach:

LD_LIBRARY_PATH=./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib ./main
Hello, world!

But who cares, that worked!

Categories
Hardware Programming

CPU Cooling

tl;dr

Make sure your fans point the right way and that you apply thermal paste properly.

When I bought my PC, I stuck an AMD 3900X 12/24-core processor in there and used the STOCK heat sink and fan. Out of the box, the processor ran anywhere from 40-60° C and I was pretty happy. Then I added a GPU and stuck a few MORE fans into the case (at the top). When I was finished, the CPU was regularly overheating, spiking anywhere from 75-100° C.

This was annoying since my GPU was reading a relatively cold 30-40° C.

I was ready to drop $200 for a water cooler, which I know would bring it down to 40° C (or less) even when in heavy use. But I obviously didn't want to spend the money on that since my upgrade list includes a second M.2 card and a second bank of 32GB of RAM.

I was talking to my son and he said that I'd put the top fans in wrong. He said the best practice is to draw air out through the top. I knew this was the best practice too, but I decided against it because I wanted to see for myself; my intuition said that blowing cold outside air right onto the CPU heat sink was the way to go, since those topside chassis fans sit right next to the heat sink.

Turns out I was probably wrong.

Another thing: to install those topside fans, I needed to re-seat my CPU heat sink since it was "in the way". When I did this, I believe I didn't use enough thermal paste, since I used the last bit of a syringe and had no more.

To make a long story short, I re-applied thermal paste and flipped the fans to blow out, and now the PC idles around 45° C and heats to about 80° C max under heavy load. That's still hot, but lowering peak temp by 20° C seems like a big deal for a minimal investment in thermal paste and a little elbow grease flipping those fans.

I wrote a simple Java program to saturate my CPUs. It just increments a counter, so my gut tells me that lots of transistors in the CPU go unused and the program may not heat up the CPU as much as it could, even though the OS reports 100% utilization. It's possible a program that accesses more memory or exercises more registers/logic gates on the die could make the CPU genuinely busier and thus heat up more. If anyone knows about this, I'd love to learn more.
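
For the curious, the idea is basically this, sketched here in C++ rather than Java since that's what the rest of this blog uses: one busy loop per hardware thread, each doing nothing but incrementing a counter.

#include <thread>
#include <vector>

int main() {
  std::vector<std::thread> workers;
  for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i) {
    workers.emplace_back([] {
      // Pure ALU work: touches almost no memory, so the OS reports 100%
      // utilization even though most of the die is doing very little.
      volatile unsigned long long counter = 0;
      for (;;) ++counter;
    });
  }
  for (auto& t : workers) t.join();  // runs until killed (Ctrl-C)
}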

I may still spring for a water-cooler down the road, but if this configuration remains stable then I guess I'm happy with it.