Categories
Hardware Tools

Upgrade Your Storage Every Decade!

I got a home NAS (Network Attached Storage) with a formatted volume size of about 11TB (4x4TB drives in a custom RAID). I've had a few local USB/Firewire-based home storage solutions in the past such as a Drobo, but this time I wanted to go to the network because I thought it would be more flexible. The drivers were:

  • It's good to upgrade storage periodically over your lifetime (see below)
  • The NAS is more flexible than a single-computer USB-based connection
  • It has more storage
  • The NAS device has a lot more features than a simple storage array

More on all of this below.

The model I chose is a Synology DS920+.

Synology DS920+ NAS

The DS920+ has four drive bays and two M.2 slots for a read/write SSD cache. This is a nice feature because you can get inexpensive high-capacity spinning disks and use the SSDs as a read/write cache to mitigate the performance issues you'd normally see on low-RPM spindle-based storage: poor parallel reads/writes and slow random access/seek times.

I got four Seagate IronWolf 4TB NAS internal hard drives. These have a modest 64MB cache and spin at only 5900rpm. I also filled both M.2 slots with a pair of Samsung 500GB 970 EVO SSDs.

The DS920+ also offered a 4GB RAM expansion, bringing total RAM to 8GB. This is useful since the NAS is really a Linux computer, and my gut says 8GB will give the OS more breathing room for disk cache and for running the various apps and protocol servers it supports.

On that note, the Synology DS920+ has a ton of apps and services and I've only scratched the surface on what I use. Some of the highlights:

  • SMB server (for Windows sharing)
  • AFP/Bonjour server (for Mac)
  • rsync
  • NFS

SMB and AFP are pretty slow protocols and I always hate using them for large file transfers, like the multi-terabyte transfers I need to make to get all of my home videos and photos off my old Drobo. I found a great writeup on the performance of these protocols by Nasim Mansurov over at the Photography Life blog. These protocols are great for general use, but not for the initial data-loading phase.

Part of my apprehension is not knowing the state of a large transfer, particularly if it's interrupted. If I moved files and the transfer was interrupted, were they really moved? Sometimes I'm dealing with 50k files and it's not easy to get a warm and fuzzy feeling about whether a large transfer worked, even if it appeared to finish. Sure, when the copy is done I could compare file counts and byte sizes between source and destination. That would give me some confidence that I can delete the source directory, but it's not the real problem.

The real problem is managing a large transfer and being able to start/stop/resume it at will. This is trash using a GUI and AFP/SMB. For instance, if I want to resume a copy, do I need to copy the whole thing again? Do I manage the copy one folder at a time, sitting and waiting for each to finish before starting the next? LOL, I've been there! Also, what happens when destination files already exist? I've walked into my office at 7am to check on the progress of a twelve-hour file transfer, only to find that twenty minutes after I went to bed the system politely prompted me about how I wanted to handle a destination file conflict. I never want to be that angry while holding a cup of hot coffee ever again. Screw all that.

The answer of course is rsync, a tried-and-true utility that's a foundation of Linux. Since the NAS is a Linux machine, it has an rsync facility. rsync is a single-purpose yet sophisticated piece of software that runs on both ends of a file transfer. The client and server both have access to their respective file systems and can negotiate a large transfer efficiently. If it's good enough for Fortune 500 companies to use in production, it's good enough for home videos of my kids fighting in their diapers.

rsync negotiates file listings and checksums on both sides of the transfer and will always do the right thing in terms of resuming a partial transfer. It's like magic.

To get this to work smoothly, I had to root into the NAS drive and change a few SSH options to allow for a more or less automated transfer. Right now I'm in the middle of what will be a 2+ day transfer of 1.6TB of home videos.
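For the curious, the command driving that transfer looks something like the sketch below. The source path, user, and hostname are placeholders for my actual setup, but the flags are the important part: -a preserves permissions and timestamps, -h and --progress keep me informed, and --partial keeps partially transferred files around so an interrupted run doesn't start those files over from scratch.

    # dry run first: list what would be copied without touching anything
    rsync -avhn /Volumes/Drobo/HomeVideos/ admin@nas.local:/volume1/video/

    # the real thing, resumable and chatty
    rsync -avh --progress --partial \
        /Volumes/Drobo/HomeVideos/ \
        admin@nas.local:/volume1/video/

If the transfer dies overnight, I just run the same command again and rsync skips everything that already made it across.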

It's Good to Upgrade Storage

On a final note, I wanted to say a few words about why I like to upgrade my storage every decade or so. It comes down to a few simple points:

  • Hard drives have a life expectancy and they don't last forever
  • New connectors and protocols are constantly getting faster and more robust
  • Old connectors and protocols are always becoming obsolete. If you plan the frequency of your copies properly, you'll always be in an era where both the old and new technology are still around and you can get adapters and whatnot to ensure the transfer is possible
  • I like to play with new things

Ok, the last point is probably the real reason. You figured me out.

Categories
Hardware Programming

CPU Cooling

tl;dr:

Make sure your fans point the right way and you apply thermal paste properly.

When I bought my PC, I stuck an AMD 3900X 12-core/24-thread processor in there and used the STOCK heat sink and fan. Out of the box, the processor ran anywhere from 40-60° C and I was pretty happy. Then I added a GPU and stuck a few MORE fans into the case (at the top). When I was finished, the CPU was regularly overheating, spiking anywhere from 75-100° C.

This was annoying since my GPU was reading a relatively cold 30-40° C.

I was ready to drop $200 for a water cooler, which I knew would bring it down to 40° C (or less) even under heavy use. But I obviously didn't want to spend the money on that since my upgrade list includes a second M.2 card and a second bank of 32GB of RAM.

I was talking to my son and he said that I'd put the top fans in wrong. He said the best practice is to draw air out from the top. I knew this was a best practice too, but I decided against it because I wanted to see for myself, and my intuition said that blowing cold outside air right onto the CPU heat sink was the way to go since those topside chassis fans sit right next to it.

Turns out I was probably wrong.

Another thing: to install those topside fans, I needed to re-seat my CPU heat sink since it was "in the way". When I did this, I believe I didn't use enough thermal paste since I used the last bit of a syringe and had no more.

To make a long story short, I re-applied thermal paste and flipped the fans to blow out, and now the PC idles around 45° C and heats to about 80° C max under heavy load. That's still hot, but lowering peak temp by 20° C seems like a big deal for a minimal investment in thermal paste and a little elbow grease flipping those fans.

I wrote a simple Java program to saturate my CPU's cores. It just increments a counter, so my gut tells me that lots of transistors in the CPU go unused and the program may not heat the CPU as much as it could, even though the OS reports 100% utilization. It's possible that a program that accesses more memory, or exercises more of the registers and logic gates on the die, could make the CPU work harder and thus heat up more. If anyone knows about this, I'd love to learn more.
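For reference, here's a minimal sketch of the kind of program I mean. The class name and the one-busy-thread-per-logical-core choice are just my own defaults, nothing scientific:

    // Minimal CPU burner sketch: spin one busy-loop thread per logical core.
    // Each thread just increments a counter forever; stop it with Ctrl+C.
    public class CpuBurner {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            System.out.println("Starting " + cores + " busy threads");
            for (int i = 0; i < cores; i++) {
                new Thread(() -> {
                    long counter = 0;
                    while (true) {
                        counter++; // trivial integer work: pegs the core, exercises little of the die
                    }
                }).start();
            }
            // the non-daemon worker threads keep the JVM alive after main() returns
        }
    }

That's enough to show every core pegged from the OS's point of view, which is all I needed for the temperature test.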

I may still spring for a water-cooler down the road, but if this configuration remains stable then I guess I'm happy with it.

Categories
Hardware Virtual Reality

VR is here

I mean, virtual reality is here in my home. I bought a Rift S. The price was great: I got $50 off the $399 list price and I had two $50 gift certificates. At $250, it was a no-brainer. Of course, the week before I had dropped eleven Benjamins on an Nvidia 2080 Ti graphics card, so my sense of bargain-hunting is a little off-kilter.

Now the "server" I bought for cluster research, Kubernetes administration, and low-latency programming explorations has, as originally envisioned, morphed into a pretty good gaming PC and a suitable VR-ready playground.

I think everyone knows that someday VR will take off, but the take-off date seems to be a moving target. By take off I mean everyone will have it, the way everyone has a smartphone. Well, maybe everyone doesn't believe this, but I sure do.

But VR's take-off date is a lot like AI's take-off date. Everyone knows we'll figure out Artificial General Intelligence one day, but the date of discovery keeps slipping further into the future. Of course, VR taking off is a lot more likely to happen in the next decade than AGI, but no one really knows.

So while the movie Ready Player One was disappointing, the book by Ernest Cline painted a fantastic vision of a VR future, albeit one with a dystopian narrative. I think it will be impossible to create a truly immersive VR experience without extensive AI assisting in world-creation. A human artist, or even a team of human artists, can only place so many trees and boars before they go insane. A computer will need to do that and do it well.

I've wanted to get a VR headset for years, but either the tech didn't seem ready or the desire to obtain such a magical possession never exceeded my budget for frivolous purchases.

Yet here we are.

One of the tipping points for me was my current crusade to interest my kids in programming. My kids are avid PC gamers, yet VR hasn't been on their radar, so getting some VR gear and forcing it on them seemed like a good idea, or at least a good enough excuse for me to finally get something at home.

Rift S

I chose the Oculus Rift S over the Oculus Quest because, while the mobility of the Quest was tempting, its image fidelity is inferior to that of the PC-powered Rift. The Vive seems very high-end and expensive, and while I was tempted to get a headset that would stress my powerful 2080 Ti, the cost was prohibitive. So here I am.

I was pretty bored during setup as the Rift S installation makes you watch these boring safety videos. I mean, yeah, yeah, yeah.

They make you trace out your real-world play area very carefully through their Guardian setup, and there are plenty of options for tweaking the Guardian system's sensitivity. This is how the virtual "box" of your real-world play area is projected into your virtual reality so you don't smack furniture, your dog, or your child while immersed in the virtual environment du jour. It seemed like an unnecessary precaution to me because of course I knew what I was doing and I'm not a dumbass.

Then I played one of the free games, Spider-Man: Far From Home, and nearly fell on my ass because the VR was so disorienting! It was at this point that I remembered my friend, who has a lot of experience in VR, telling me a story about a guy he knew who had a VR-induced accident. Evidently this chap suffered the same disorientation I felt playing Spider-Man, and he lost his balance, fell, and broke his fucking nose!

So now I'm careful and I tell my children to be very careful.

More VR experiences are coming and I can't wait to write about them.

Categories
Hardware

New PC Build: Server

I'm putting together a new PC. For myself this time. The last, I don't know, ten PCs I've built have been for my kids or my kids' friends. I got fed up and decided to build one of my own.

I was set on AMD this time around, and Windows (more on that later). AMD's Ryzen and Threadripper chips have great benchmarks, and the price point makes it a no-brainer to go the AMD route.

I ordered a motherboard that supports a whopping 4400MHz of DDR4 memory. But I want to end up at 64GB and I couldn't find 16GB DIMMs at that speed (the motherboard only has four slots), so I went with 3600MHz memory. It looks like the Ryzen 9 3900X I'm getting only supports 3200MHz without overclocking, so I suppose this was a good compromise on memory. Here's the parts list:

  • CPU - AMD Ryzen 9 3900X 12-core, 24-Thread
  • Motherboard - ASUS AM4 TUF Gaming X570-Plus
  • Memory - G.SKILL Trident Z Neo Series 32GB (2 x 16GB) 288-Pin RGB DDR4 SDRAM DDR4 3600
  • Storage - SAMSUNG 970 PRO M.2 512GB
  • Power Supply - CORSAIR RMx Series RM850x
  • Case - Corsair Graphite Series 760T Black Full Tower Windowed Case

You may notice a graphics card is missing. This was intentional for the initial build to reduce costs. A graphics card is also unnecessary because I'm building a server to play around with containers, VMs, and Kubernetes. I'll access it after the install via Remote Desktop from my MacBook Pro.

Choice of OS

I've always thought about doing my day-to-day work on a Linux desktop, and this would be the perfect time, right? Unfortunately, no. In the back of my mind this rig will get a GPU upgrade one day, and when that happens I'll start doing gaming or VR on it. As such, Windows 10 will give me the most flexibility. And now I see Microsoft has Windows 10 Pro for Workstations, which is a new flavor of desktop OS with some features borrowed from recent Windows Server builds:

  • More socket support (up to 4 CPUs)
  • RDMA support
  • Support for 6TB of RAM (from Windows 10 Pro's 2TB)
  • Support for NVDIMM-N (storage-class memory that retains its contents across a power failure)
  • ReFS support
  • SMB-direct support for faster file transfers

Upgrades

  • The motherboard I got has two M.2 slots, so at some point I'll add a 1TB Samsung EVO Pro for extra storage.
  • Another 32GB of RAM to get me to 64GB
  • Obviously a graphics card, one with a lot of GPU cores. A 2080 Ti maybe? Probably not, as it's pretty expensive.

K8s Configuration?

My plan right now is to use Hyper-V natively, instead of something like VirtualBox, to host Linux VMs. Why? I don't know. I just want to try something new. I read a good article comparing hypervisor performance between Hyper-V, KVM, Xen, and vSphere:

https://www.researchgate.net/publication/242105480_A_Component_Based_Performance_Comparison_of_Four_Hypervisors

It seems like performance varies greatly by workload, and most of these hypervisors are in the same ballpark performance-wise.

I plan to create 2-3 VMs and have those be my Kubernetes nodes (a rough sketch of the bootstrap I have in mind is below the list). On these nodes, I'll deploy all of the containers I'm interested in configuring and using:

  • Kubernetes core pods
  • Docker image repository
  • Rancher
  • Prometheus
  • Grafana
  • Kafka
  • Gitlab and maybe Azure DevOps
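Subject to change once I actually sit down with it, but the bootstrap I have in mind is plain kubeadm on those VMs. A rough sketch, with the control-plane IP, token, and hash below as placeholders:

    # on the first VM (the control-plane node)
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # on each remaining VM, using the token and hash printed by 'kubeadm init'
    sudo kubeadm join 192.168.1.50:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

    # back on the control-plane node (after copying admin.conf to ~/.kube/config
    # as kubeadm's output instructs), confirm the nodes registered
    kubectl get nodes

Rancher, Prometheus, and the rest of the list would then get deployed onto that cluster.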