Gaming Illustration

Maps of Primordia – The Lost City of Kharmanus

I love drawing maps. On days when I don’t have the mental energy to code, I try to do something to move my creative energy forward. Today, I took an old map I drew in college and tweaked it a bit in Adobe Fresco on my iPad.

The map is called Kharmanus, or “The Lost City of Kharmanus.” In my fantasy world, Primordia, this is a city that was buried by some traumatic event in the distant past.

Now, I know full well that if a city is “buried,” there is no space for people to walk around, as the “burying” bit tends to close up all open spaces.

But imagine that some great wizard who lived in the city, in his last heroic effort to save his beloved home, worked some magic to ensure that said city, while buried, kept enough “space” between the floor and the overhanging cave ceiling for adventurers to lurk around and steal shit from all of the dead people. I mean, that’s good adventuring backstory if I ever heard it.

And that’s precisely the origin of The Lost City of Kharmanus.

So here is the map. It’s a work in progress, but I hope you like it.

Gaming Programming

pmud – Redis Functions


This is another installment in my adventures in writing a MUD in C++, which I'm calling pmud. I abandoned the ostentatious titling "Programming a MUD: Part 19 - Nick Drinks Some Scotch" because such bravado only serves to make me cringe when I read these posts twenty years from now.

Tonight's task was to implement a Redis function in a Lua script and load it into Redis. The purpose of the function is to register an event in the system. The long-term goal is to create as many functions as I have mutation events in the system. And the goal of all these functions is to write their respective content to a Redis stream which serves as the event bus for the game.

What's a Redis function, you ask? It's a function, written in the Lua language, that runs inside the Redis server itself. Redis supports only one language engine right now, and that's Lua.

There are numerous advantages to having a function live inside the database, but principal among them is the ability to access data at very low latency. After all, nothing needs to leave the Redis process during a function call, unless of course the purpose of the function is to return a bunch of data.

There are other advantages, all explained in the documentation on Redis Functions.

How it went down

I started the evening by reading a bit on the Redis website, reading a bit in the book Mastering Redis by Jeremy Nelson, and poking around in redis-cli.

Then I downloaded two plugins for VS Code: one for Lua, and one to let me connect to Redis, the Database Client plugin (cweijan.vscode-database-client2). This latter plugin is awesome. It's not awesome because it allows me to easily browse all of the keys in my instance, nor because I can see server status at a glance, nor because I can make changes to the data visually. It's awesome because when I went to use its redis-cli capability, I found myself in front of a paywall. The message from the plugin's author was so delightfully passive-aggressive that I immediately purchased it. I felt the author's pain as he described his rage at receiving negative reviews from internet trolls who complained about some feature not working the way they wanted, and who likely had no patience for, or appreciation of, the one-thousand-plus hours the author put into this plugin. In my eyes, the author deserved some cheddar.

So my modest function looks like this:

local function eventUserLogin(keys, args)
  local stream_name = keys[1]
  assert(args[1] == 'username')
  assert(args[3] == 'password')

  return redis.call('XADD', stream_name, '*', args[1], args[2], args[3], args[4])
end

redis.register_function('eventUserLogin', eventUserLogin)

And to load it, I did this:

cat pmud/scripts/redis-functions/events.lua | redis-cli -h singularity.lan -x function load lua mylib REPLACE

But I got errors for about an hour before I realized the FUNCTION capability is new and only in Redis 7. The default container image I pulled was Redis 6. Luckily there were release candidates I was able to pull. I did lose all of my data, which is unfortunate because I had set up my system to auto-load the data on startup, so something is funky with the upgrade. This didn't cause me too much pain, though, because I have scripts to load the data back in, and those scripts worked marvelously.

(Previously, Lua could be used, but it was a one-script-at-a-time model.)

Once I was able to load the function, calling it was a cinch (the 1 tells FCALL that one key name, events, follows; everything after that is passed as an argument):

fcall eventUserLogin 1 events username nick password poop

Then, reading it:

> xread streams events 0
1) 1) "events"
   2) 1) 1) "1647315387974-0"
         2) 1) "username"
            2) "nick"
            3) "password"
            4) "poop"

That's the first event in the system, yay!

Gaming MUD Programming

Programming a MUD: Part 2—CQRS and Event-Sourcing


I decided to work towards making the MUD run on an event-sourcing model with CQRS. CQRS stands for Command Query Responsibility Segregation. It's just a fancy way of saying that the manner in which we inject data into the system (in this case, as events) can be different from the manner in which we read the state (such as with a Redis query).

This change in technical direction requires that I create an event for every kind of state mutation I plan to allow. I’ve never implemented CQRS, so I expect to learn a few things.

Currently, to seed the data, I run a YAML-to-Redis converter I wrote. This is a generic facility that will take basically any YAML and mirror it in Redis. Since Redis doesn’t support hierarchical values, I simulate hierarchy by extending the key names as a kind of path. At the top level we simply have values, lists, and maps. Map entries are scalar.

90) "monsters:monster:blob"
91) "monsters:monster:golem"
92) "items:armor:medium armor"
93) "places:population-center:Karpus"
94) "places:population-centers"
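As an illustration of what that converter does (my own sketch, not the actual pmud code), the flattening can be expressed in a few lines of Python. The nested dict below stands in for a parsed YAML document, and the entries are made up:

```python
# Illustrative sketch of the YAML-to-Redis mirroring described above (not the
# real converter). Nested maps become colon-separated key paths; leaves stay
# scalar values.
def flatten(node, prefix=""):
    """Yield (key, value) pairs with colon-joined path keys."""
    for name, value in node.items():
        key = f"{prefix}:{name}" if prefix else name
        if isinstance(value, dict):
            yield from flatten(value, key)
        else:
            yield key, value

data = {
    "monsters": {"monster": {"blob": {"hit-dice": "4d6"}}},
    "places": {"population-center": {"Karpus": "a city"}},
}

pairs = dict(flatten(data))
# e.g. pairs["monsters:monster:blob:hit-dice"] == "4d6"
```

Each resulting key could then be written to Redis with a plain SET (or HSET for map entries), which is exactly the kind of wholesale mutation the next section argues against.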

But this won’t do in a system based on the event-sourcing model. I can’t just mutate the state all at once like that. It would violate principles of event-sourcing, like being able to replay events and, during replay, have the system process each event against the correct starting state.

For instance, if a character starts out at level 1 with 10hp, and we replay a set of events created during an adventure, the character might be level 2 at the end of those events. If we replay but ignore the character's state changes (such as the fact that they started at level 1), the events will be handled as though the character started at level 2. So spawn-character is an important event: it establishes a new character, and interactions with that character start from that point.
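To make the replay argument concrete, here is a minimal event-sourcing fold in Python. The event names and state shape are made up for illustration; the point is that state is only ever derived by applying events in order from a known starting point:

```python
# Minimal replay sketch: state is derived by folding events in order.
# Event names and fields are illustrative, not pmud's actual taxonomy.
def apply(state, event):
    if event["type"] == "spawn-character":
        # Establishes the character's starting state; replay depends on this.
        state[event["name"]] = {"level": 1, "hp": 10}
    elif event["type"] == "level-up":
        state[event["name"]]["level"] += 1
    return state

events = [
    {"type": "spawn-character", "name": "nick"},
    {"type": "level-up", "name": "nick"},
]

state = {}
for e in events:
    state = apply(state, e)
# Replaying this log from an empty state always yields level 2 for nick;
# drop the spawn event and the replay has no correct starting point.
```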

It’s possible I could create an “event” such as “load-yaml-file” and the event has the yaml file contents, but I think the lack of granularity might prove unworkable.

Instead of injecting a map for a monster:

    Hit-dice: 4d6

I’ll emit an event like “add-update-monster”, and this event should carry all relevant data for adding a monster or updating an existing monster's state.

This way, as the event makes its way across the system, components have a signal to act on or ignore the message. This is the replay capability of a CQRS system.

Redis Streams

Redis has a stream feature that is ideal for modeling these events. For now, I’m going to use the simple XREAD command, which lets any client digest every message in the stream.
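The XREAD pattern boils down to this: each client remembers the ID of the last entry it saw and asks for anything newer. A pure-Python stand-in (no Redis required; the IDs and fields mirror the example output from the previous post) looks like this:

```python
# In-memory stand-in for the XREAD consumption pattern. Real XREAD compares
# IDs as (milliseconds, sequence) pairs; plain string comparison works here
# only because these timestamps happen to have equal length.
stream = [
    ("1647315387974-0", {"username": "nick", "password": "poop"}),
    ("1647315388012-0", {"username": "ann", "password": "hunter2"}),
]

def xread(entries, last_id):
    """Return entries with IDs strictly greater than last_id."""
    return [(eid, fields) for eid, fields in entries if eid > last_id]

first = xread(stream, "0")          # everything from the beginning
later = xread(stream, first[0][0])  # only entries after the first one seen
```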

I did some benchmarking with this setup:


  • Redis runs on a dedicated Linux box (Intel Core i5-3570K CPU @ 3.40GHz)
  • stream writer/reader programs send/receive data from Redis (AMD Ryzen 9 3900X 12-Core Processor)

When the source/sink programs run on my Windows desktop, in a WSL2 Debian container, we get:

~770 messages/sec

But the CPU was more or less idle on both the Windows box and the Redis box, so my network must be the bottleneck. When I run the source/sink programs on the Redis hardware itself, eliminating the network, the results are roughly 10x better:

~7,011 messages/sec

I could not get single-threaded writes to go much faster than that, but I did find a bug in the reader, and when I fixed it, I was able to receive 100-200k messages/sec, which is great.

Now that I have a technical foundation well-understood, it’s time to start defining some mutation messages and modeling how they’ll appear on the stream.

Based on work so far, I have these events to define first:

  • UserInputReceived
  • UserOutputSent
  • MonsterAddUpdate
  • ItemAddUpdate
  • PlaceAddUpdate

This is probably suboptimal naming, but I’m still new to this, so I expect to refine the event taxonomy over time as I see more events.

More on that later.