Upgrade Your Storage Every Decade!

I got a home NAS (Network Attached Storage) with a formatted volume size of about 11TB (4x4TB drives in a custom RAID). I’ve had a few local USB/Firewire-based home storage solutions in the past such as a Drobo, but this time I wanted to go to the network because I thought it would be more flexible. The drivers were:

  • It's good to upgrade storage periodically over your lifetime (see below)
  • The NAS is more flexible than a single-computer USB-based connection
  • It has more storage
  • The NAS device has a lot more features than a simple storage array

More on all of this below.

The model I chose is a Synology DS920+.

Synology 920+ NAS

The DS920+ has four drive bays and two m.2 slots for a read/write SSD cache. This is a nice feature because you can get inexpensive high-capacity spinning disk drives and use the SSD as read/write cache to mitigate the performance issues you’d normally see on low-rpm spindle-based storage such as horrible parallel reads/writes and slow random access/seek speeds.

I got four Seagate IronWolf 4TB NAS Internal Hard Drive HDDs. These have a modest 64MB cache and only spin at 5900rpm. I also filled both m.2 slots with a pair of Samsung 500GB 970 EVO SSDs.

The DS920+ also offered a 4GB RAM expansion, bringing total RAM to 8GB. This is useful since the NAS is really a Linux computer and my gut says 8GB will give the OS more breathing room for disk cache and running various apps and protocol servers it supports.

On that note, the Synology DS920+ has a ton of apps and services and I’ve only scratched the surface on what I use. Some of the highlights:

  • SMB server (for Windows sharing)
  • AFP/Bonjour server (for Mac)
  • rsync
  • NFS

SMB and AFP are pretty slow protocols and I always hate using them for large file transfers, like the multi-terabyte transfers I need to make to get all of my home videos and photos off my old Drobo. I found a great writeup on the performance of these protocols by Nasim Mansurov, here at the Photography Life blog. These protocols are great for general use, but not for the initial data-loading phase.

Part of my apprehension is not knowing the state of a large transfer, particularly if it's interrupted. If I moved files and the transfer was interrupted, were they really moved? Sometimes I'm dealing with 50k files, and it's not easy to get a warm and fuzzy feeling about whether a large transfer worked, even if it appeared to finish. Sure, when the copy is done I could compare file counts and byte sizes between source and destination (a couple of commands on each side; see the sketch below). That would give me some confidence that I can delete the source directory, but that's not the real problem.
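
The sanity check I have in mind is roughly this (the paths and hostname are placeholders, not my real layout):

# Count files and total size on the source...
find /Volumes/Drobo/HomeVideos -type f | wc -l
du -sh /Volumes/Drobo/HomeVideos

# ...then the same on the destination, and eyeball the numbers.
ssh admin@nas.local 'find /volume1/video/HomeVideos -type f | wc -l; du -sh /volume1/video/HomeVideos'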

The real problem is managing a large transfer and being able to start, stop, and resume it at will. This is trash using a GUI and AFP/SMB. For instance, if I want to resume a copy, do I need to copy the whole thing again? Do I manage the copy one folder at a time, sitting and waiting for each to finish before starting the next? LOL, I've been there! And what happens when destination files already exist? Imagine walking into your office at 7am to check on the progress of a twelve-hour file transfer, only to find that twenty minutes after you went to bed the system politely prompted you about how to handle a destination file conflict. I never want to be that angry holding a cup of hot coffee ever again. Screw all that.

The answer, of course, is rsync, a tried and true utility that's a foundation of Linux. Since the NAS is a Linux machine, it has an rsync facility. rsync is a single-purpose yet sophisticated piece of software that runs on both ends of a file transfer. The client and server each have access to their respective file systems and can negotiate a large transfer efficiently. If it's good enough for Fortune 500 companies to use in production, it's good enough for home videos of my kids fighting in their diapers.

rsync negotiates file listings and checksums on both sides of the transfer and will always do the right thing in terms of resuming a partial transfer. It’s like magic.

To get this to work smoothly, I had to SSH into the NAS as root and change a few SSH options to allow for a more or less automated transfer. Right now I'm in the middle of what will be a 2+ day transfer of 1.6TB of home videos.
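
For the curious, the transfer itself boils down to a single command along these lines (the paths and hostname are placeholders, not my real layout):

# Resumable bulk copy from the old volume to the NAS.
# -a preserves metadata, --partial keeps interrupted files,
# and a re-run only transfers what's missing or changed.
rsync -avh --partial --progress \
    /Volumes/Drobo/HomeVideos/ \
    admin@nas.local:/volume1/video/HomeVideos/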

It’s Good to Upgrade Storage

On a final note, I wanted to say one simple thing about why I like to upgrade my storage every decade or so. It’s based on a few simple points:

  • Hard drives have a life expectancy and they don’t last forever
  • New connectors and protocols are constantly getting faster and more robust
  • Old connectors and protocols are always becoming obsolete. If you plan the frequency of your copies properly, you’ll always be in an era where both the old and new technology are still around and you can get adapters and whatnot to ensure the transfer is possible
  • I like to play with new things

Ok, the last point is probably the real reason. You figured me out.

IntelliJ IDEA falls short on font selection

IntelliJ IDEA has a problem with selecting font variants, like variants on weight and subfamily. The UI doesn't distinguish them and only provides a coarse-grained collection in its Font dropdown. Consider the font selector:

Font selection in IntelliJ IDEA

Notice we have Iosevka and Iosevka SS09. But FontBook shows the truer picture. Look at all of those variants that aren’t selectable in IntelliJ:

Font selection in MacOS FontBook

When you’re running on a non-retina display, such as a 96dpi or any sub-retina resolution (notice the proper use of the term!), fonts can often look horrendous, draining all of your productivity. This is particularly true if you’ve got some form of font quality OCD like I have. On Retina displays, most fonts look amazing and it’s just a matter of style since the fidelity of the font is preserved on these high-dpi displays.

The problem here is IntelliJ doesn’t have a sophisticated enough user interface. Consider iTerm2’s interface, which is top-notch:

Font selection in iTerm2

VSCode has a text-based property system, and I spent a little time figuring out how it worked. The advantage of a dropdown is that you know what you're getting, since the choices are all valid. But when you're entering the font name manually, you might spend some time scratching your head over exactly what name should appear. I tried entering the font name exactly as it appears in the Windows font previewer (I don't have a screenshot handy) and as it appears in FontBook. Here, Iosevka SS09 Extralight Extended:

FontBook’s proper name for a font

But it turns out this works on neither Mac nor Windows. How infuriating. Still, the true power of text and the CLI is that you can tweak things, unlike being handed a fixed list of fonts as in IntelliJ's dropdown. On both Windows and Mac, if you find the TTF file via Show in Finder on Mac or Show in Explorer (I think?) on Windows, you'll see the ttf filename:

Finder’s listing of font variant filenames

So if you enter the filename, without the extension, into VSCode then, voilà!, it works:

VSCode’s font selection using the font filename, sans extension

Note that the filename has dashes, so there's no need to surround the name in single quotes the way 'Courier New' is decorated.

I wonder where this enhancement sits in the JetBrains issue queue? Rather than building a more granular font-selection UI, they could simply allow a text-entry override for font selection. Of course, this makes me wonder if I can hack it to work by going into the system config files and modifying some XML… an effort for another day.

Not to abandon all hacks, I did spend some time fixing this for myself. You can, gulp, follow these steps:

  • Download a utility like TTF Edit
  • Open the TTF file of the variant you want to expose to IntelliJ as a unique font (so it will show up in the dropdown)
  • Modify the metadata
  • Save As to a new filename
  • Load the new file into Windows or Mac, usually by double-clicking on the file

Here is an example of where the fonts are located on my system, which you'll need for TTF Edit's Open dialog. Good luck! Make sure you replace EVERY field of metadata with the new name, since I'm not sure which field is the one IntelliJ uses to distinguish unique fonts in its listing.

Hahahaha, have fun.

First steps in Rust

I started to play around with rust and one of the first things we do when we learn a new language is write a little hello, world project. So I typed this in:

fn main() {
    println!("Hello, world!");
}

I compiled via rustc main.rs and, voilà, my terminal shows "Hello, world!". A quick check on runtime shows it runs more or less instantaneously:

$ time ./main
Hello, world!

real 0m0.001s
user 0m0.000s
sys 0m0.000s

You may be wondering why I bothered to see how fast this was and if I told you that I’ve spent most of my time on the JVM over the past ten years then you’d know why I’m so paranoid.

But I digress. I looked at the binary size and saw it was 2.5M. Hmmm. My gut says that’s a bit big so I wrote this little C++ program to compare:

#include <iostream>

int main(int argc, char** argv) {
    std::cout << "Hello, world!" << std::endl;
    return 0;
}

Sure enough, the a.out was 17K. Rust must have some option to slim this down, and the docs pointed me at -C prefer-dynamic, which links the standard library dynamically instead of statically. When I compile the Rust code with that flag, it generates a 17K binary, just like the C++. Yay.
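
For reference, the invocation looks like this:

# Link against the shared libstd instead of statically bundling it
rustc -C prefer-dynamic main.rs
ls -lh main   # ~17K now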

But now it won’t run since it’s dynamically linked:

./main: error while loading shared libraries: libstd-205127404fcba336.so: cannot open shared object file: No such file or directory

Crapola, where is this library? What else might be missing?

$ ldd main
linux-vdso.so.1 (0x00007ffcc23cc000)
libstd-205127404fcba336.so => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6a9e912000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6a9eb14000)

Oh, just that one. Linux’s locate didn’t find it. I thought I saw a ~/.cargo directory somewhere.

find ~/.cargo/ -name "libstd*"

Nope. How about:

find ~/.rustup -name "libstd*"
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/wasm32-unknown-unknown/lib/libstd-077104c061bb2ffc.rlib
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.so
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.rlib
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/analysis/libstd-205127404fcba336.json
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.so
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-sys-internals.html
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-thread-internals.html
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-io-internals.html

There you are, you wascally rabbit! But I don't know enough about Rust to say whether this is the recommended approach:

LD_LIBRARY_PATH=./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib ./main
Hello, world!

But who cares? It worked!

Through the rabbit hole I found a blank screen

With my home lab, which I affectionately named SINGULARITY, I mostly interact via ssh over my home network. However, I've been tweaking BIOS settings a lot, so I hooked the PC up to my wife's keyboard/video/mouse and was doing a lot of reboots.

But then I stopped tweaking and went back to using the machine as a node on my network. As I sat there doing my work, I started feeling anxious and I didn't know why. I felt as though someone were watching me. Try as I might, I couldn't shake the feeling. It turns out the login prompt was showing and the monitor wasn't turning off! Mercy me!

I run the machine headless, which means it's running at a lower runlevel, so I had no GUI tools or screensaver to set.

Incidentally, I enabled the low runlevel by issuing this simple command:

sudo systemctl set-default multi-user.target

If I wanted the GUI back, I’d just do:

sudo systemctl enable graphical.target --force
sudo systemctl set-default graphical.target

But I digress.

I poked around and didn't find a direct solution. It did seem like the command I wanted was setterm -blank 1, which blanks the screen and puts the monitor into power-save mode. This is exactly what I wanted, but the command vomits if you run it from a remote terminal, like Windows Terminal:

$ setterm -blank 1
setterm: terminal xterm-256color does not support --blank

After a little experimenting, I found that when I’m logged directly into the terminal, $TERM is set to linux. So with this little bit in my .bashrc, I was able to run setterm -blank 1 only when I’m logged in locally:

if [ "$TERM" == "linux" ]; then
setterm -blank 1
elif [ -e /usr/share/terminfo/x/xterm-256color ]; then
export TERM='xterm-256color'
else
export TERM='xterm-color'
fi

Great! Now when I login, a minute of inactivity blanks the monitor!

But the problem was that on reboot, I wouldn't be logged in. So I dug into the login details and found there's a file called /lib/systemd/system/getty@.service that specifies the command used for the login prompt:

ExecStart=-/sbin/agetty -o '-p -- \u' --noclear %I $TERM

Still, I didn’t see a way to inject setterm -blank 1 in there. But agetty does take a -l parameter allowing you to specify a login program. So I created /bin/login2:

#!/usr/bin/sh
setterm -blank 1
/bin/login $*

That’s so ugly I love it! I then invoked this piece of art from agetty:

ExecStart=-/sbin/agetty -l /bin/login2 -o '-p -- \u' --noclear %I $TERM

After a reboot, I found this did not work at all.

Why? Because agetty blocks waiting for the login name. It queries for a login name and then passes it to /bin/login. So setterm -blank 1 inside login2 doesn't get invoked until you enter the login name. Damn.

But wait, there's more! agetty also takes a -a argument: an autologin name! So I descended further into the rabbit hole and created this beauty:

ExecStart=-/sbin/agetty -a nic -l /bin/login2 -o '-p -- \u' --noclear %I $TERM

This bypassed the username prompt and executed setterm -blank 1, but it left me with an awkward prompt for the password, and that filled me with even more anxiety than the original non-blanking screen!

I saw that /bin/login can take a -f argument to force a login. The documentation says "Do not perform authentication, user is preauthenticated." Hahahah, so of course I went ahead and added it. Fuck it, it's a home lab, and this is only a security vulnerability for local access.

Now when I reboot, I basically get logged in automatically. But at that point my .bashrc does the job of calling setterm -blank 1, so what the hell do I need /bin/login2 for?

Sooooo… I started thinking about whether I could pass -f from the original agetty invocation, and it turns out that yes, this is possible. That's what the -o option is for. Now I can get rid of /bin/login2 and just have this:

ExecStart=-/sbin/agetty -a nic -o '-f -p -- \u' --noclear %I $TERM
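
As an aside, I edited /lib/systemd/system/getty@.service in place; an override under /etc/systemd/system would probably survive package upgrades better. Untested on my end, but something like:

sudo systemctl edit --full getty@tty1.service   # change the ExecStart line here
sudo systemctl daemon-reload
sudo systemctl restart getty@tty1.service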

QED

Home Kubernetes Lab

Over the weekend, I created a home lab. I had an aging PC with Windows on it, and I decided to wipe it away and install Ubuntu.

I created a home lab late last year on my main PC, but I didn’t like having all of that infrastructure on a PC where I did general purpose programming and gaming. Right off the bat, Windows hypervisor and the VM’s I’d need for a proper lab took up precious RAM. And there was also the matter of having the VM’s up and running as nodes in my Kubernetes cluster, having Rancher running etc. It all seemed like extra baggage for my PC which I wanted to be as lean as possible and ready to handle large tasks (like running games) using all of the resources in the machine.

So this PC has 16GB of RAM and a 4-core i5 processor. It's unfortunate, because I have 32GB of DIMMs installed in the box. My motherboard says it supports 32GB of RAM, but for some reason I can't get the BIOS to recognize it. It turns out that for a CLI-only Ubuntu box, this is all the power I need. Anyway, figuring out how to use the other 16GB of RAM is an Area of Research! The box has an old SSD I pulled from my old Apple Mac Pro, and the performance of everything is just lightning.

When it came to a hypervisor, I chose KVM. It seemed like the right choice for my needs and didn't take a lot of know-how to get working. This link helped: https://help.ubuntu.com/community/KVM/Installation

When it came to VM's to create, I used this Minimal CD image of Ubuntu: https://help.ubuntu.com/community/Installation/MinimalCD. During the installation of each VM, I chose OpenSSH server only, since that's all I needed. VM installation was still manual and a little time-consuming. At some point I hope to automate it (see the sketch below), but since I only have 16GB of RAM, I'm only creating 2-3 VM's as Kubernetes worker nodes and that will be that.
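
When I do automate it, my guess is it will be a virt-install one-liner along these lines (the name, sizes, and ISO path are placeholders, not what I actually ran):

virt-install \
  --name star03 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom ~/isos/mini.iso \
  --os-variant ubuntu20.04 \
  --graphics none \
  --console pty,target_type=serial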

A few useful installs:

  • sudo apt install libguestfs-tools
  • sudo apt install virtinst
  • sudo apt install libosinfo-bin
  • sudo apt install cpu-checker

I decided to use Rancher as the k8s cluster management layer, since we use that at work and I figured it would be a good idea to get more familiar with it. Obviously the new server needed Docker, and this article was helpful in describing how to install it properly on Ubuntu: https://docs.docker.com/engine/install/ubuntu/. It turns out just saying "apt install docker.io" is not the best choice; you need a few extra steps.
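
From memory, those extra steps amount to adding Docker's own apt repository first; check the linked docs for the current keys and package names rather than trusting my recollection:

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io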

Rancher was up and running without much effort. Rancher runs in a container directly on the host, a server I call SINGULARITY. I have two VM's called star01 and star02 which run as worker nodes. star01 also runs etcd and the control plane. I'll probably create a new VM, star03, soon so I can experiment with deploying software that requires 3+ nodes, but I can probably get by with 2 nodes for a while.

An open item is having star01/02 DNS available on my LAN without using a bridged network. I tried implementing the technique described here, but it didn’t work on my first try. I’ll have to try again. It would probably help if I understood NetworkManager and DNS better. More areas of research!

A few things I’m interested in learning about in the short term:

  • How to create a docker image that manipulates mounts on the host for external storage for configuration. Rancher does this and it’s kind of magic to me right now.
  • Figure out how to have all of my kvm guest hostnames recognized on the host, if not my entire home LAN.
  • How to backup my Rancher and cluster configuration so I can restore it if I want to blow away my lab and start from scratch. This blog post is good documentation if I want to start over, but obviously I’d rather automate away the toil.

Some useful commands:

  • Make sure your kvm guests restart after host reboot: virsh autostart vmName
  • List kvm guests and their IP addresses: virsh net-dhcp-leases default
  • Dump network details: sudo virsh net-dumpxml default
  • Edit network details: sudo virsh net-edit default

Ciao for now.

Falloff Gradients

So I'm sitting there in my office at home, coding up Anton's Triangle Tutorial program in an effort to learn OpenGL. While I do this, the COVID-19 virus is busy multiplying in the world and getting a lot of people sick. Since I'm quarantined, I keep myself busy translating Anton's C/C++ code into Java. I'm using Java/Scala for my project because that's where I've spent the majority of my time over the past ten years, and I was curious how far I could push the platform.

In Java I started with the Lightweight Java Game Library (lwjgl), which has very good OpenGL bindings. A small note: the thin library I'm writing on top of lwjgl is pure Java, but most of my client code is written in Scala. The lwjgl bindings are good because they take advantage of a few features of the JVM that make it straightforward to work with native libraries.

The first of these features is Java NIO. A few weeks ago I went through Jenkov.com’s Java NIO Tutorials since I was a bit rusty on Java NIO. Using Java NIO you can expose native memory to the native OpenGL API’s through a safe JVM interface, which is exactly what the native OpenGL library requires.

The second feature, static imports, was introduced in Java 5 in 2004. Using static imports you can simulate the standard C/C++ #include directive, which exposes variables, functions, and classes to your default namespace without any object or static class qualifiers. So OpenGL code in C++ like this:

while(!glfwWindowShouldClose(window)) {
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glUseProgram(shader_prog);
  glBindVertexArray(vao);
  glDrawArrays(GL_TRIANGLES, 0, 3);
  glfwPollEvents();
  glfwSwapBuffers(window);
}

Looks like this in Java. Yes, they are identical.

while(!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(shader_prog);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glfwPollEvents();
    glfwSwapBuffers(window);
}

Not all of the code is identical, however. Instead of this block of C++:

float points[] = {
   0.0f,  0.5f,  0.0f,
   0.5f, -0.5f,  0.0f,
  -0.5f, -0.5f,  0.0f
};

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);

Your Java code will look like this (assuming you have a similar definition for points):

// Scala
private val points = Array(
     0.0f,  0.75f, 0.0f,
     0.75f, -0.75f, 0.0f,
    -0.75f, -0.75f, 0.0f)

// Java
int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
Buffer buffer = BufferUtils.createFloatBuffer(points.length).put(points);
glBufferData(GL_ARRAY_BUFFER, (FloatBuffer) buffer.flip(), GL_STATIC_DRAW);

The program generates a color triangle and uses a mouse uniform to place a highlight over the cursor. Click on the image below to see the image in its full fidelity.

Default linear fall off gradient

The falloff is linear: the amount of white falls from 1 to 0 in proportion to the distance from the mouse cursor, over roughly 300 pixels. The shader code looks like this:

#version 400

const float PI_2 = 1.57079632679489661923;

uniform vec2 u_resolution;
uniform vec2 u_mouse;

in vec3 color;
out vec4 frag_color;

#define FALLOFF (u_resolution.x / 16.0)

void main() {
    float distMouse = min(FALLOFF, distance(gl_FragCoord.xy, u_mouse));
    float lin_falloff = 1.0-(min(distMouse, FALLOFF) / FALLOFF);
    vec3 finalColor = mix(color, vec3(1.0,1.0,1.0), lin_falloff);
    frag_color = vec4(finalColor, 1.0);
}

In mathematical terms, it looks like this (generated from GNU Octave):

It didn't look so bad in Octave, but in the original triangle image above the edge was too harsh and I wanted to soften it. I tried squaring the falloff, yielding this curve:

But I felt the falloff dropped too abruptly. I decided to try cos(x). Notice how the center bulge is brighter:

Yet when viewed in OpenGL, the edge was still too abrupt, maybe even more so!

Falloff based on cos(x)

So I decided to square cos(x), and notice the beautiful S-curve I was looking for!

So I decided to generate a bunch of curves and try them all out:

Trying a bunch of curves.

In the end, none of the linear falloff variants pleased me and I went with cos(x)^2.

This gave me the best central bulge and a smooth gradient that faded away without any noticeable edge. QED!

Computer Graphics

One of my long-time passions is computer graphics. Being nostalgic, I have been spending some time lately remembering how to do all of this stuff. I do most of my work on the JVM these days so I wanted to see what I could do with it.

So I hunted around and found a ton of options.

Here is a recording I made from the n-body sample code in the aparapi-samples project on GitHub: https://github.com/Syncleus/aparapi-e… The first few seconds are a recording of the original sample code, which was originally presented by Gary Frost from AMD at the 2010 JavaOne Conference. The last part of the video records numerous enhancements I made as I learned aparapi and OpenGL on the JVM.

My next post will talk about some of the API’s I tried before I settled on one I liked. Enjoy!

Fin.

CPU Cooling

tl;dr

Make sure your fans point the right way and you apply thermal paste properly.

When I bought my PC, I stuck an AMD 3900X 12-core/24-thread processor in there and used the STOCK heat sink and fan. Out of the box, the processor ran anywhere from 40-60° C and I was pretty happy. Then I added a GPU and stuck a few MORE fans into the case (at the top). When I was finished, the CPU was regularly overheating, spiking anywhere from 75-100° C.

This was annoying since my GPU was reading a relatively cold 30-40° C.

I was ready to drop $200 for a water cooler, which I knew would bring it down to 40° C (or less) even under heavy use. But I obviously didn't want to spend the money on that, since my upgrade list includes a second M.2 card and a second bank of 32GB of RAM.

I was talking to my son and he said that I'd put the top fans in wrong. He said the best practice is to draw air out through the top. I knew this was a best practice too, but I decided against it because I wanted to see for myself, and my intuition said that blowing cold outside air right onto the CPU heat sink was the way to go, since those topside chassis fans sit right next to it.

Turns out I was probably wrong.

Another thing: to install those topside fans, I needed to re-seat my CPU heat sink since it was "in the way". When I did this, I believe I didn't use enough thermal paste, since I was squeezing out the last bit of a syringe and had no more.

To make a long story short, I re-applied thermal paste and flipped the fans to blow out, and now the PC idles around 45° C and hits about 80° C max under heavy load. That's still hot, but lowering the peak temp by 20° C seems like a big deal for a minimal investment in thermal paste and a little elbow grease flipping those fans.

I wrote a simple Java program to saturate my CPUs. It just increments a counter, so my gut tells me that lots of transistors in the CPU sit unused and the program may not heat the CPU up much even though the OS reports 100% utilization. It's possible that a program that accesses more memory or exercises more registers and logic gates on the die could make the CPU genuinely busier and thus heat up more. If anyone knows about this, I'd love to learn more.
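
For what it's worth, you don't even need Java to pin every core; a throwaway shell loop does the same job (an alternative, not what I actually ran):

# One busy loop per core; watch the temps, then clean up.
for i in $(seq "$(nproc)"); do yes > /dev/null & done
# later:
pkill -x yes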

I may still spring for a water-cooler down the road, but if this configuration remains stable then I guess I’m happy with it.

Easy Glass Buttons and the Greatness of Your Life

Way back in 2005 I wrote this blog post about how to create easy glass buttons in Adobe Illustrator. I remember this post being popular, and to this day it still turns up as one of the top hits on a Google search. I did the search Incognito just now, but I'm curious whether you see the same results or whether the result is machine-learned from my habits.

Go ahead, search, and click on the green button and see if it leads you to my post.

What I love about this post is that I was a relatively young 35-year-old man who still had dreams about doing something big with technology, and I strive to always retain that wonder. Now, almost 15 years later, I didn't do anything big by the standards I set for myself at that time, but I did make a living for my family, raised four gorgeous children, loved a beautiful wife, made some amazing friends, learned a lot of cool Kung Fu, and still manage to stay up until 3am bumping into walls wearing a new VR headset I have no idea how to use.

As I turn the corner on 50, I look at the young man who wrote that post and try to see what made him tick. What I've come to realize is that it's not always about the size or impact of your accomplishment; it's more about the journey. That's not to say I don't harbor disappointment at not doing more, or regret at not working harder. What I strive for is not to be so enveloped by disappointment that I throw my hands up, give up, and become a bitter old man. To this day, I still cling to that feeling I had when I was 35… that the best is yet to come. I hope I always feel that way, regardless of the quantifiable greatness of anything I do or don't do.

VR is here

I mean, virtual reality is here in my home. I bought a Rift S. The price was so good: I got $50 off the $399 list and I had two $50 gift certificates. At $250, it was a no-brainer. Of course, the week before, I went and dropped eleven Benjamins on an Nvidia 2080 Ti graphics card, so my sense of bargain-hunting is a little off-kilter.

Now the “server” I bought for cluster research, Kubernetes administration, and low-latency programming explorations has, as originally envisioned, morphed into a pretty good gaming PC and a suitable VR-ready playground.

I think everyone knows that someday VR will take off, but the take-off date seems to be a moving target. By take off, I mean everyone will have it the way everyone has a smartphone. Well, maybe not everyone believes this, but I sure do.

But the take-off date is similar to AI's take-off date. Everyone knows we'll figure out Artificial General Intelligence one day, but the date of discovery keeps slipping further out. Of course, VR taking off is a lot more likely to happen in the next decade than AGI, but no one really knows.

So while the movie Ready Player One was disappointing, the book by Ernest Cline painted a fantastic vision of a VR future, albeit with a dystopian narrative. I think it will be impossible to create a truly immersive VR experience without extensive AI assisting in world-creation. A human artist, or even a team of human artists, can only place so many trees and boars before they go insane. A computer will need to do that, and do it well.

I’ve wanted to get a VR headset for years, but either the tech didn’t seem ready or the desire to obtain such a magical possession never exceeded my budget for frivolous purchases.

Yet here we are.

One of the tipping points for me was my current crusade to interest my kids in programming. My kids are avid PC gamers, yet VR hasn't been on their radar, so getting some VR gear and forcing it on them seemed like a good idea, or at least a good enough excuse for me to finally get something at home.

Rift S

I chose the Oculus Rift S over the Oculus Quest because, while the mobility of the Quest was tempting, its image fidelity is inferior to the PC-powered Rift. The Vive seems very high-end and expensive, and while I was tempted to get a headset that would stress out my powerful 2080 Ti, the cost was prohibitive. So here I am.

I was pretty bored during setup as the Rift S installation makes you watch these boring safety videos. I mean, yeah, yeah, yeah.

They make you trace out your real-world play area very carefully through their Guardian setup and have a lot of customization options on how to tweak the Guardian system’s sensitivity parameters. This is the way the virtual “box” of your real-world play area is projected into your virtual reality so you don’t smack furniture, your dog, or your child while immersed in the virtual environment du jour. It seemed like unnecessary precautions to me because of course I knew what I was doing and I’m not a dumbass.

Then I played one of the free games, Spider-Man: Far From Home, and nearly fell on my ass because the VR was so disorienting! It was at this time that I remembered my friend, who has a lot of experience in VR, telling me a story about a guy he knew who had a VR-induced accident. Evidently this chap suffered from the same disorientation I felt playing Spider-Man; he lost his balance, fell, and broke his fucking nose!

So now I’m careful and I tell my children to be very careful.

More VR experiences are coming and I can’t wait to write about them.

More Kubernetes: WSL2, Hyper-V, Docker, and X11

Most programming I do at my day job is done via PuTTY and IntelliJ. The latter runs on Linux, and I run MobaXterm as an X11 server. I don't use Moba's terminal because the chrome around it is way too gizmo-ey, like I'm at an ugly sweater party or something.

PuTTY is ideal because it’s so minimal and it has all the features I need. I actually use PuTTY Tray, which is a fork that has a few more options than the standard PuTTY. IntelliJ works great over X11 and MobaXterm is pretty snappy on our local network, even from our datacenter to our office.

That said, at home I don’t have access to a fancy $70k server running dual Intel Platinum Xeon processors with 512GB of RAM and 1.6TB of Enterprise-grade Northbridge-connected NVMe drives. I have a 12/24 core AMD Ryzen 3900X and Windows 10 Pro for Workstations. But Windows is so developer-friendly these days, it gives me a ton of options for development.

You can do Linux a few ways, two of which are in a VM via Hyper-V or using Windows Subsystem for Linux (WSL). In the very latest fast-ring builds of Windows, you can enable WSL2, which is an overhaul of the subsystem that’s a lot faster and more compatible with Linux binaries.

WSL2

To get WSL2, I joined the Windows Insider program and set updates to the “Fast Ring”. Right now I’m running Windows 10 Pro for Workstations Build 19025.vb_release.191112-1414, or whatever. Then I followed the steps here to enable WSL2. Yippee!

Docker

Docker isn't available in WSL2 by default, and I'd prefer it was, since I like to stay in a bash prompt over PowerShell or cmd.exe. Luckily, the latest Docker for Windows Edge version has an option to expose Docker to WSL2. It's in Settings > Resources > WSL Integration. To check things are working, run wsl -l -v:

PS C:\Users\nic> wsl -l -v
  NAME                   STATE           VERSION
* Ubuntu                 Running         2
  docker-desktop         Running         2
  docker-desktop-data    Running         2

X11

I wanted to see if I could get IntelliJ to run under WSL/X11, and my initial attempts were met with utter failure. IntelliJ wouldn't start up no matter what I set my DISPLAY variable to. localhost:0 (or :0.0) didn't work, nor 127.0.0.1:0, nor the eth0 IP reported by ifconfig, nor the Windows desktop IP. Finally, I went into /etc/resolv.conf and saw this:

nameserver 172.22.80.1

Setting my DISPLAY to 172.22.80.1:0 worked.
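
Since that nameserver address can change across reboots, I'll probably end up putting something like this in my WSL .bashrc (assuming the Windows X server keeps allowing connections from the WSL subnet):

# Point DISPLAY at the Windows host; WSL2 writes its IP into resolv.conf
export DISPLAY="$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf):0"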

One interesting observation is that the WSL2 network and my desktop's bridged network to my home router are isolated enough that IntelliJ's license check can't detect that I have two copies open: one running natively on Windows and a second running in the WSL context over X11. On a flat network, the second copy would refuse to run, or the first copy would die on launch of the second. I forget!

Kubernetes

So I went through all of this so I could interact with my Kubernetes cluster from the WSL prompt. I set up Rancher as a container running in Docker for Windows. Once Rancher was running, I created three VM's running Ubuntu using Hyper-V and called them node1, node2, and node3. Rancher made it easy to have each VM register itself as a node in a cluster called "Lab". At this point I'm using 27GB of my 32GB of RAM, which is surprising, so I'll probably scale down Docker's and the VM's RAM. Maybe giving each VM 4GB was a little excessive.

I still have two free DIMM slots so I guess it will be time to get to 64GB soon.

The cluster is extremely unstable and I'm in the process of figuring out why. When I reboot the desktop, the VM's (cluster nodes) shut down (along with Rancher's container); after the reboot Rancher comes up and tons of containers in each node come up, but the status is redder than Carrie at her prom.

For some reason, the containers running on each VM node do not auto-start until I invoke docker somehow, like sudo docker ps. I’m still researching why this is.

New PC Build: Server

I’m putting together a new PC. For myself this time. The last, I don’t know, ten PC’s I’ve built have been for my kids or my kids’ friends. I got fed up and decided to build one of my own.

I was set on AMD this time around, and Windows (more on that later). AMD Threadrippers have great benchmarks and the price point makes it a no-brainer to go the AMD route.

I ordered a motherboard that supports a whopping 4400MHz DDR4 memory. But I eventually want 64GB and I couldn't find 16GB DIMMs at that speed (the motherboard only has 4 slots), so I went with 3600MHz memory. It looks like the Ryzen 9 3900X I'm getting only supports 3200MHz without overclocking, so I suppose this was a good compromise on memory. Here's the parts list:

  • CPU – AMD Ryzen 9 3900X 12-core, 24-Thread
  • Motherboard – ASUS AM4 TUF Gaming X570-Plus
  • Memory – G.SKILL Trident Z Neo Series 32GB (2 x 16GB) 288-Pin RGB DDR4 SDRAM DDR4 3600
  • Storage – SAMSUNG 970 PRO M.2 512GB
  • Power Supply – CORSAIR RMx Series RM850x
  • Case – Corsair Graphite Series 760T Black Full Tower Windowed Case

You may notice a graphics card is missing. This was intentional for the initial build, to reduce costs. A graphics card is also unnecessary because I'm building a server to play around with containers, VM's, and Kubernetes. I'll access it after the install via Remote Desktop from my MacBook Pro.

Choice of OS

I've always thought about doing my day-to-day work on a Linux desktop, and this would be the perfect time, right? Unfortunately, no. In the back of my mind, this rig will get a GPU upgrade one day, and when that happens I'll start doing gaming or VR on it. As such, Windows 10 will give me the most flexibility. And now I see Microsoft has Windows 10 Pro for Workstations, a new flavor of the desktop OS with some features borrowed from recent Windows Server builds:

  • More socket support (up to 4 CPUs)
  • RDMA support
  • Support for 6TB of RAM (from Windows 10 Pro’s 2TB)
  • Support for NVDIMM-N (storage-class RAM, which can recover on power failure)
  • ReFS support
  • SMB-direct support for faster file transfers

Upgrades

  • The motherboard I got supports two M.2 slots so at some point I’ll add a 1TB Samsung EVO Pro for extra storage.
  • Another 32GB of RAM to get me to 64GB
  • Obviously a graphics card, one with a lot of GPU’s, like 2080 Ti maybe? Probably not as it’s pretty expensive.

K8s Configuration?

My plan right now is to use Hyper-V natively, instead of something like VirtualBox, to host Linux VM's. Why? I don't know. I just want to try something new. I read a good article comparing hypervisor performance between Hyper-V, KVM, Xen, and vSphere:

https://www.researchgate.net/publication/242105480_A_Component_Based_Performance_Comparison_of_Four_Hypervisors

It seems like performance varies greatly by workload, and most of these hypervisors are in the same ballpark.

I plan to create 2-3 VM’s and have those be my Kubernetes nodes. On these nodes, I’ll deploy all of the containers I’m interested in configuring and using:

  • Kubernetes core pods
  • Docker image repository
  • Rancher
  • Prometheus
  • Grafana
  • Kafka
  • Gitlab and maybe Azure DevOps

Where am I?

Two years is a long time between posts. I remember a time when I posted more often. I used to think of myself as a blogger. I mean, what happened?

First, the numbers:

  • 4 posts in 2016
  • 3 posts in 2015
  • 4 posts in 2014
  • 0 posts in 2013 (holy moley)
  • 8 posts in 2012

2012 is when I started this blog, nickcoding.com. But I blogged before, way before. In my other blog, the one I maintained for many, many years, the one I started in the early days of blogging, I had a lot more posts.

  • 4 posts in 2014
  • 4 posts in 2013
  • 15 posts in 2012
  • 58 posts in 2011
  • 113 posts in 2010
  • and it goes back all the way to 2004…

Wait, what? I went from 113 posts per year to basically 0. So why?

I guess my life changed a lot. Things seemed easy for a long time. I was young and optimistic. I thought people would want to hear what I had to say. I thought the world was my oyster, that I had unlimited possibilities ahead of me. Then reality set in. Things started to get hard. My view of the world and the opportunity in that world began to shrink. I had mounting responsibilities. I had setbacks. I started to realize life wasn’t such a cakewalk. I started to realize I wasn’t actually good at what I did, or at least there were a lot of people who were a lot better. I went on the defensive and I stayed there. Why would anyone give a shit about what I had to say? So I stopped blogging. Not that blogging is a reliable measure of a person’s worth, but it was something I did and then something I stopped doing and there was a reason for it.

So where do I go from here? Why am I even writing this? The answer is simple. I'm writing for me, not for you (sorry). I always did. It's OK, blogging was always my means of taking notes, in a public way. It was a way for me to document for myself what I was working on, what interested me. Like many bloggers in the early days, I posted my musings because maybe someone would comment on them. Maybe someone would find my tip on managing a photo library useful and thank me in a comment, and then I'd be floating on a cloud all day because I had actually reached someone.

You see, I wrote this book (a novel) and it means a lot to me. It means a lot to me for you to read it. For probably two years I’ve struggled on the editing process but now I realize what I’ve struggled with. I’ve struggled with me.

So how do you beat yourself, anyway? How do you get yourself under control? I don't know, exactly. But what I do know is that I have a growing anxiety. I have a growing discomfort. I have a growing anger and utter dissatisfaction about the gap between what I'm doing and what I want to do.

And still, there is a gap between who I used to be and who I am now. Yet this is a different kind of gap. I used to be naive, hopeful, stupid. I've closed up, dried out, and shriveled into cynicism and self-doubt. But something has been setting me free, and in this freedom I'm shedding all of my baggage. I am cleaning house. I'm feeling hopeful again.

My mother said something amazing to me yesterday. She told me she had a new idea that inspired her. She said if you believe you’re never too old, that it is never too late to begin again, then you are free.

Establishing my groove

Last weekend I attended the annual Writer’s Digest conference in NYC. The event served as an informal deadline for me, to get my revisions done so I could pitch my story to agents of the science fiction genre during the conference’s Pitch Slam event.

So I did that and it was great. I had a number of agents show interest in my story so I began to send out queries. One step closer to rejection letters!

The book is on the long side at 104k words. I’ll be spending the next month or two waiting for more beta reader and agent feedback and ruminating on tweaks that I can make. Like every writer, I hope the feedback isn’t disastrous–necessitating major work–but that’s something I need to anticipate and accept. When it’s time to roll up my sleeves and get the work done, then that’s what I’ll do. That aside, I’ll be researching more agents to query. I’ll start fleshing out my ideas for follow on books. I’ll keep writing.

I need to establish my groove.


On Beta Reading

I've been on both sides of the beta reading spectrum in my journey to write my first novel, The Harvester. Rewarding is the best word I can think of when I consider both the experience of beta reading for others and the experience of working with people who've read my manuscript. My friends have been wonderful, and since I joined a writing group this past January, the past six months have been a whirlwind of discovery.

That said, know that whatever I say here are the words of a beginner. I know nothing. I am learning as I go.


Chinese New Year Kung Fu at the Asia Society in NYC

I woke up at 6:30am, took a shower, and got dressed. I made my wife an omelette, walked the dog, and hopped in my car. I took a 7:48am train into NYC. I live on Long Island so this is about an hour trip on the weekend (slightly faster when I get on an express train on a weekday). I had a nice donut and some espresso at Birch. I got to my Kung Fu school around 9:30am and everything after that is a blur. I got back home at 6pm.

Hing Dai


Ulysses for Writing Prose on Mac

I use the Ulysses app for Mac OS X for all of my writing. When you write in Ulysses you're working with plain text, and you perform basic formatting with Markdown syntax. What I love about Ulysses is how simple it is. Yet the application's simplicity does not mean it's trivial. In fact, it's extremely powerful. I am writing this post to talk about how I use it to write.


3rd Draft

With my third draft nearly complete, I’m beginning the process of putting together 10-15 literary agents to query. In the past I wondered if this process was going to be stressful. Now that I’m almost here, it doesn’t seem stressful at all. It seems exciting.

That said, I haven’t actually submitted a query yet. My best guess is that my heart rate will spike a bit in the moments before I send out those e-mails.

I ask myself if that’s really a bad thing? I don’t think so. Life is richer with a little risk and excitement. I am glad to be writing.

NaNoWriMo 2015 and my goal to finish

When I finished the first draft of my novel back in January, I was pretty happy with myself. Little did I realize that I had a boatload of work ahead of me, and a lot more work than writing the first draft. I was a bit naive about the whole novel-writing endeavor since this is my first time through. I guess that's understandable. Now I'm focusing on finishing my second draft and I'm going to use NaNoWriMo to do it. I mean, it worked last time.

So I'm inventing a new way to count words: when I'm done with my second draft, the word count will be 50k. When I'm half done, I'll be at 25k. Simple.

Go and check out my progress here: http://nanowrimo.org/participants/nickcody/novels/the-harvester-rewrite