Categories
Gaming Illustration

Maps of Primordia – The Lost City of Kharmanus

I love drawing maps. On days when I don’t have the mental energy to code, I try to do something that moves my creative energy forward. Today, I took an old map I drew in college and tweaked it a bit in Adobe Fresco on my iPad.

The map is called Kharmanus, or “The Lost City of Kharmanus.” In my fantasy world, Primordia, this is a city that was buried in some traumatic event in the distant past.

Now, I know full well that if a city is “buried” then there is no space for people to walk around, as the “burying” bit tends to close up all open spaces.

But imagine that some great wizard who lived in the city, in his last heroic effort to save his beloved city, worked some magic to ensure that said city, while buried, kept enough “space” between the floor and the overhanging cave ceiling for adventurers to lurk around and steal shit from all of the dead people. I mean, that’s good adventuring backstory if I ever heard it.

And that’s precisely the origin of The Lost City of Kharmanus.

So here is the map. It’s a work in progress, but I hope you like it.

Categories
Gaming Programming

pmud – Redis Functions

Introduction

This is another installment in my adventures in writing a MUD in C++, which I'm calling pmud. I abandoned the ostentatious titling "Programming a MUD: Part 19 - Nick Drinks Some Scotch" because such bravado only serves to make me cringe when I read these posts twenty years from now.

Tonight's task was to implement a Redis function in a Lua script and load it into Redis. The purpose of the function is to register an event in the system. The long-term goal is to create as many functions as I have mutation events in the system. And the goal of all these functions is to write their respective content to a Redis stream which serves as the event bus for the game.

What's a Redis function, you ask? It's a function, written in Lua, that runs inside the Redis server. Redis supports one language engine right now, and that's Lua.

There are numerous advantages to having a function live inside the database, but principal among these is the ability to access data at very low latency. After all, nothing needs to leave the Redis process inside a Redis function, unless of course the purpose of the function is to return a bunch of data.

There are other advantages, all explained in the documentation on Redis Functions.

How it went down

I started the evening by reading a bit on the Redis website, skimming the book Mastering Redis by Jeremy Nelson, and poking around in redis-cli.

Then I downloaded two plugins for vscode, one for Lua and one to allow me to connect to Redis, the Database Client plugin (cweijan.vscode-database-client2). This latter plugin is awesome. It's not awesome because it allows me to easily browse all of the keys in my instance, nor because I can see server status at a glance, nor because I can make changes to the data visually. It's awesome because when I went to use the redis-cli capability, I found myself in front of a paywall. The message from the plugin's author was so delightfully passive-aggressive that I immediately purchased it. I felt the author's pain as he described his rage at receiving negative reviews from internet trolls who complained about some feature that didn't work the way they wanted, and who likely had no patience or appreciation for the one-thousand-plus hours the author put into this plugin. In my eyes, the author deserved some cheddar.

So my modest function looks like this:

local function eventUserLogin(keys, args)
  -- keys[1] is the stream that serves as the event bus
  local stream_name = keys[1]

  -- args arrive as field/value pairs; sanity-check the field names
  assert(args[1] == 'username')
  assert(args[3] == 'password')

  -- append the event to the stream, letting Redis assign the entry id
  return redis.call('XADD', stream_name, "*", args[1], args[2], args[3], args[4])
end

redis.register_function('eventUserLogin', eventUserLogin)

And to load it, I did this:

cat pmud/scripts/redis-functions/events.lua | redis-cli -h singularity.lan -x function load lua mylib REPLACE

But I got errors for about an hour before I realized the "FUNCTION" capability is new and only available in Redis 7. The default container I pulled from docker.io was Redis 6. Luckily there were release candidates I was able to pull. I did lose all of my data, which is unfortunate because I had set up my system to auto-load the data on startup, so something is funky there with the upgrade. This didn't cause me too much pain because I have scripts to load the data back in, and those scripts worked marvelously.

(Previously, Lua could be used, but it was a one-script-at-a-time model.)

Once I was able to load the function, calling it was a cinch:

fcall eventUserLogin 1 events username nick password poop

Then, reading it:

> xread streams events 0
1) 1) "events"
   2) 1) 1) "1647315387974-0"
         2) 1) "username"
            2) "nick"
            3) "password"
            4) "poop"

That's the first event in the system, yay!
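
At some point pmud will be calling this from C++ rather than from redis-cli. Here's roughly what that might look like with hiredis (just a sketch, not actual pmud code; it assumes Redis is reachable on singularity.lan with the function library loaded as above):

// Sketch: invoke the eventUserLogin Redis function from C++ via hiredis.
#include <hiredis/hiredis.h>
#include <cstdio>

int main() {
  redisContext* ctx = redisConnect("singularity.lan", 6379);
  if (ctx == nullptr || ctx->err) {
    std::fprintf(stderr, "Failed to connect to Redis\n");
    return 1;
  }

  // FCALL <function> <numkeys> <key...> <arg...>
  auto* reply = static_cast<redisReply*>(redisCommand(
      ctx, "FCALL eventUserLogin 1 events username %s password %s",
      "nick", "poop"));

  // XADD (and therefore the function) returns the new stream entry id.
  if (reply != nullptr && reply->type == REDIS_REPLY_STRING) {
    std::printf("Appended stream entry id: %s\n", reply->str);
  }

  freeReplyObject(reply);
  redisFree(ctx);
  return 0;
}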

Categories
Gaming MUD Programming

Programming a MUD: Part 2 – CQRS and Event-Sourcing

Introduction

I decided to work towards making the MUD work on an event-sourcing model with CQRS. CQRS stands for Command Query Responsibility Segregation. It’s just a fancy way to say that the manner in which we inject data into the system (in this case, as events) can be different from the way in which one reads the state (such as a Redis query).

This change in technical direction requires that I create an event for every kind of state mutation I plan to allow. I’ve never implemented CQRS, so I expect to learn a few things.

Currently, to seed the data, I run a YAML-to-Redis converter I wrote. This is a generic facility that will take basically any YAML and mirror it in Redis. Since Redis doesn’t support hierarchical values, I simulate them by extending the key names into a kind of path (a rough sketch of the idea follows the key listing below). At the top level we simply have values, lists, and maps. Map entries are scalar.

90) "monsters:monster:blob"
91) "monsters:monster:golem"
92) "items:armor:medium armor"
93) "places:population-center:Karpus"
94) "places:population-centers"

But this won’t do in a system based on the event-sourcing model. I can’t just mutate the state all at once like that. Doing so would violate principles of event-sourcing, like being able to replay events and, during replay, have the system process each event with the correct starting state.

For instance, if a character starts out at level 1 with 10hp, and we replay a set of events created during an adventure, the character might be level 2 at the end of those events. If we choose to replay and ignore the state changes of the character (such as them starting at level 1), then the events will be handled and the system will think the character started at level 2. So spawn-character is an important event: it establishes a new character, and interactions with that character should start from that point.

It’s possible I could create an “event” such as “load-yaml-file” where the event carries the YAML file contents, but I think the lack of granularity might prove unworkable.

Instead of injecting a map for a monster:

Monster::Gargoyle
    Hit-dice: 4d6

I’ll emit an event like “add-update-monster”, and this event should carry all the data relevant to adding or updating a monster’s state.

This way, as the event makes its way across the system, components have a signal to act on or ignore the message. This is what gives a CQRS system its replay ability.

Redis Streams

Redis has a stream feature which is ideal for modeling these events. For now, I’m going to use the simple XREAD function, which allows any client to digest every message in the stream.
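
To make the stream mechanics concrete, here’s a rough sketch of writing and reading one of these mutation events from C++ with hiredis (the event and field names are placeholders, not the final taxonomy):

// Sketch only: append a hypothetical add-update-monster event to the
// "events" stream, then read everything back with XREAD.
#include <hiredis/hiredis.h>
#include <cstdio>

int main() {
  redisContext* ctx = redisConnect("singularity.lan", 6379);
  if (ctx == nullptr || ctx->err) return 1;

  // Write: one stream entry per mutation event, fields as key/value pairs.
  auto* added = static_cast<redisReply*>(redisCommand(
      ctx,
      "XADD events * event-type add-update-monster name %s hit-dice %s",
      "gargoyle", "4d6"));
  freeReplyObject(added);

  // Read: XREAD from id 0 digests every entry currently in the stream.
  auto* read = static_cast<redisReply*>(
      redisCommand(ctx, "XREAD COUNT 100 STREAMS events 0"));
  if (read != nullptr && read->type == REDIS_REPLY_ARRAY) {
    std::printf("streams returned: %zu\n", read->elements);
  }
  freeReplyObject(read);

  redisFree(ctx);
  return 0;
}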

I did some benchmarking

Hardware:

  • Redis runs on a dedicated Linux box (Intel Core i5-3570K CPU @ 3.40GHz)
  • stream writer/reader programs to send/receive data from Redis (AMD Ryzen 9 3900X 12-Core Processor)

When the source/sink programs are running on my Windows desktop, in a WSL2 Debian container, we get:

~ 770 messages/sec

But the CPU was more or less idle on both the Windows box and the Redis box, so my network must be the bottleneck. I repeated the test by running the writer/reader programs directly on the Redis box, eliminating the network, and the results were roughly 10x better:

~ 7,011 messages/sec

I could not get single-threaded writes to be much faster than that, but I did find a bug in the reader, and when I fixed it I was able to receive 100-200k messages/sec, which is great.

Now that I have the technical foundation well understood, it’s time to start defining some mutation messages and modeling how they’ll appear on the stream.

Based on work so far, I have these events to define first:

  • UserInputReceived
  • UserOutputSent
  • MonsterAddUpdate
  • ItemAddUpdate
  • PlaceAddUpdate

This is probably suboptimal naming, but I’m still new to this, so I expect to refine the event taxonomy over time as I see more events.
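
As a starting point, the taxonomy might live in code as something like the enum below (purely illustrative; I expect the names and the shape of this to change):

// Hypothetical first cut at pmud's mutation-event taxonomy. Names mirror
// the list above and will almost certainly be refined over time.
enum class EventType {
  UserInputReceived,
  UserOutputSent,
  MonsterAddUpdate,
  ItemAddUpdate,
  PlaceAddUpdate,
};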

More on that later.

Categories
Programming

Building a MUD – Part 1

This is part 1 of N parts where I'll discuss my adventures in building a MUD (Multi-User Dungeon) in C++. I've always wanted to write games but never got around to it for a number of reasons I'll discuss over time. So when I decided to finally try my hand at creating one, I chose the language and tools I'm most familiar with, and which I use in my day-job.

You might be wondering why I chose a MUD over a platformer, or a shooter, or a mobile game of some kind. The reason is that I'm primarily a backend developer, and a MUD seems more or less like a backend project to me. I have a buddy who's a JavaScript/3D expert, and I figure one day I can enhance the protocol to drive the MUD with a bit more flair, but it's a bit early for that.

I'm also choosing to make the repo public as I do this for two main reasons:

  • I'm chatting about my progress and I want to send code links to my friends and don't want to bother with adding them as special collaborators
  • I invite comments and criticism on the way I do things because this is a learning project above all else

The Stack

First, the repo is here: https://github.com/NickCody/cpp-game. Be warned, the repo is not clean, as it contains numerous little code projects tucked away in various directories as I played around with ncurses, graph algorithms, the game of life, etc. If you poke around, you may be surprised at how many little things are in there.

Visual Studio Code is the primary tool, and more specifically I started with one of their stock C++ dev containers (described here). I've heavily modified the container's Dockerfile to include the latest tools I could; among them:

  • Debian Bullseye
  • gcc 10.2 (for C++ 20 compatibility)
  • vim, graphviz, ninja, clang-format, and a few other goodies
  • The CAF Actor Framework (more on this below)
  • Bazel for builds
  • Yaml for configuration
  • Redis as the primary store for user/game data
  • Google Cloud SDK for "production" deployments

Of these, I suspect most people will shake their heads at why I'm using an actor library for my C++ code. I'm a huge fan of the actor model, having worked with Akka in Scala for years. The model makes sense to me and I wanted to learn more about this library for C++. I'll be speaking about it in detail in coming posts.
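
For a taste of what CAF code looks like, here's a toy actor that echoes a string and quits (illustrative only, not pmud code):

// Toy CAF example: an actor that echoes one string and then stops.
#include <caf/all.hpp>
#include <string>

caf::behavior echo(caf::event_based_actor* self) {
  return {
    [=](const std::string& what) {
      caf::aout(self) << "echo: " << what << std::endl;
      self->quit();  // stop after one message so the program can exit
    }
  };
}

int main() {
  caf::actor_system_config cfg;
  caf::actor_system sys{cfg};
  auto a = sys.spawn(echo);
  caf::anon_send(a, std::string("hello, mud"));
  // the actor_system destructor waits for all actors to finish
  return 0;
}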

Home Setup

At home, I have my Windows desktop with Docker Desktop and WSL2 installed. I don't find myself in WSL2 much, not directly, but the repo lives there and I spawn the vscode dev container from WSL2.

For edit/compile/run cycles, I run the MUD on my desktop. Redis is running on a home Linux server I have tucked under my desk. It's only an Intel Core i5-3570K CPU @ 3.40GHz with 32GB RAM, but it's plenty of power for what I need. This machine is my "QA" environment, where I can test my deploy scripts. I have a version running in the cloud, too, using Google's Cloud SDK.

Why Redis?

There may be better choices for my persistent store, since Redis is primarily a fast in-memory data structure store with persistence capabilities. I was curious to learn it, so I figure if it turns out not to be the right choice, I'll eventually figure out why and then I'll have learned something.

For now, it's pretty amazing. Fast, lean, and no bullshit. I'm using hiredis, which is a bare-bones, thin C client library. I considered one of the many "modern" C++ wrappers, but I felt I could write my own and use it as an abstraction around the storage engine, which might help me move off Redis down the road if I decide it's not the right fit for this project.
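
The shape of the wrapper I have in mind is roughly this (a simplified sketch, not the actual pmud code):

// Rough sketch of the thin storage abstraction: the game code talks to this
// interface, and only this class knows it's hiredis underneath.
#include <hiredis/hiredis.h>
#include <optional>
#include <string>

class Storage {
 public:
  Storage(const std::string& host, int port)
      : ctx_(redisConnect(host.c_str(), port)) {}

  ~Storage() {
    if (ctx_ != nullptr) redisFree(ctx_);
  }

  // Fetch a plain string value, or nullopt if the key is missing.
  std::optional<std::string> get(const std::string& key) {
    if (ctx_ == nullptr || ctx_->err) return std::nullopt;
    auto* reply = static_cast<redisReply*>(
        redisCommand(ctx_, "GET %s", key.c_str()));
    std::optional<std::string> result;
    if (reply != nullptr && reply->type == REDIS_REPLY_STRING) {
      result = std::string(reply->str, reply->len);
    }
    freeReplyObject(reply);
    return result;
  }

 private:
  redisContext* ctx_;
};

If Redis ever stops being the right fit, only this class should have to change.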

Conclusion

That's all for now. I'm not sure what the topic of the next post will be, as there are dozens of topics to discuss, but thanks for reading.

Categories
Hardware Tools

Upgrade Your Storage Every Decade!

I got a home NAS (Network Attached Storage) with a formatted volume size of about 11TB (4x4TB drives in a custom RAID). I've had a few local USB/Firewire-based home storage solutions in the past such as a Drobo, but this time I wanted to go to the network because I thought it would be more flexible. The drivers were:

  • It's good to upgrade storage periodically over your lifetime (see below)
  • The NAS is more flexible than a single-computer USB-based connection
  • It has more storage
  • The NAS device has a lot more features than a simple storage array

More on all of this below.

The model I chose is a Synology DS920+.

Synology DS920+ NAS

The DS920+ has four drive bays and two m.2 slots for a read/write SSD cache. This is a nice feature because you can get inexpensive high-capacity spinning-disk drives and use the SSDs as a read/write cache to mitigate the performance issues you'd normally see on low-rpm spindle-based storage, such as poor parallel read/write performance and slow random access/seek speeds.

I got four Seagate IronWolf 4TB NAS internal hard drives. These have a modest 64MB cache and only spin at 5900rpm. I also filled both m.2 slots with a pair of Samsung 500GB 970 EVO SSDs.

The DS920+ also offered a 4GB RAM expansion, bringing total RAM to 8GB. This is useful since the NAS is really a Linux computer, and my gut says 8GB will give the OS more breathing room for disk cache and for running the various apps and protocol servers it supports.

On that note, the Synology DS920+ has a ton of apps and services and I've only scratched the surface on what I use. Some of the highlights:

  • SMB server (for Windows sharing)
  • AFP/Bonjour server (for Mac)
  • rsync
  • NFS

SMB and AFP are pretty slow protocols and I always hate using them for large file transfers, like the multi-terabyte transfers I need to make to get all of my home videos and photos off my old Drobo. I found a great writeup on the performance of these protocols by Nasim Mansurov, here at the Photography Life blog. These protocols are great for general use, but not for the initial data-loading phase.

Part of my apprehension is not knowing the state of a large transfer, particularly if it's interrupted. If I moved files and the transfer was interrupted, were they really moved? Sometimes I'm dealing with 50k files, and it's not easy to get a warm and fuzzy feeling about whether a large file transfer worked, even if it appeared to finish. Sure, when the copy is done I could compare file counts and byte sizes between source and destination. This would give me some confidence that I can now delete the source directory, but that's not the real problem.

The real problem is managing a large transfer and being able to optimally start/stop/resume it at will. This is trash using a GUI and AFP/SMB. For instance, if I want to resume a copy, do I need to copy the whole thing again? Do I manage the copy by doing one folder at a time and sitting and waiting for it to finish before starting the next folder? LOL, I've been there! Also, what happens when I find destination files that already exist? Walking into my office at 7am to check on the progress of a twelve-hour file transfer, only to find that twenty minutes after I went to bed the system politely prompted me about how I wanted to handle a destination file conflict. I never want to be that angry holding a cup of hot coffee ever again. Screw all that.

The answer, of course, is rsync, a tried-and-true utility that's a foundation of Linux. Since the NAS is a Linux machine, it has an rsync facility. rsync is a single-purpose yet sophisticated piece of software that runs on both ends of a file transfer. The client and server both have access to their respective file systems and can negotiate a large transfer efficiently. If it's good enough for Fortune 500 companies to use in production, it's good enough for home videos of my kids fighting in their diapers.

rsync negotiates file listings and checksums on both sides of the transfer and will always do the right thing in terms of resuming a partial transfer. It's like magic.

To get this to work smoothly, I had to root into the NAS drive and change a few SSH options to allow for a more or less automated transfer. Right now I'm in the middle of what will be a 2+ day transfer of 1.6TB of home videos.

It's Good to Upgrade Storage

On a final note, I wanted to say one simple thing about why I like to upgrade my storage every decade or so. It's based on a few simple points:

  • Hard drives have a life expectancy and they don't last forever
  • New connectors and protocols are constantly getting faster and more robust
  • Old connectors and protocols are always becoming obsolete. If you plan the frequency of your copies properly, you'll always be in an era where both the old and new technology are still around and you can get adapters and whatnot to ensure the transfer is possible
  • I like to play with new things

Ok, the last point is probably the real reason. You figured me out.