Categories
Uncategorized

IntelliJ IDEA falls short on font selection

IntelliJ IDEA has a problem with selecting font variants, like variants on weight and subfamily. The UI doesn't distinguish them and only provides a coarse-grained collection in its Font dropdown. Consider the font selector:

Font selection in IntelliJ IDEA

Notice we have Iosevka and Iosevka SS09. But FontBook shows the truer picture. Look at all of those variants that aren't selectable in IntelliJ:

Font selection in MacOS FontBook

When you're running on a non-Retina display, such as 96 dpi or any sub-Retina resolution (notice the proper use of the term!), fonts can often look horrendous, draining all of your productivity. This is particularly true if you've got some form of font-quality OCD like I have. On Retina displays, most fonts look amazing and it's just a matter of style, since the fidelity of the font is preserved on these high-dpi displays.

The problem here is IntelliJ doesn't have a sophisticated enough user interface. Consider iTerm2's interface, which is top-notch:

Font selection in iTerm2

VSCode has a text-based property system, and I spent a little time figuring out how it works. The advantage of a dropdown is that you know what you're getting, since the dropdown choices will all be valid. But when you're entering the font name manually, you might spend some time scratching your head over exactly what name should appear. I tried entering the font name exactly as it appears in the Windows font previewer (don't have a screenshot handy) and as it appears in FontBook. Here, Iosevka SS09 Extralight Extended:

FontBook's proper name for a font

But it turns out this does not work on either the Mac or Windows. How infuriating. Still, the true power of text and the CLI is that you have the ability to tweak things, unlike being presented with a fixed list of fonts as in IntelliJ's dropdown. On both Windows and Mac, if you find the TTF file via Show in Finder on Mac or Show in Explorer (I think?) on Windows, you'll see the ttf filename:

Finder's listing of font variant filenames

So if you enter the filename, without the extension, into VSCode then, voilà!, it works:

VSCode's font selection using the font filename, sans extension

Note that the filename has dashes, so there is no need to surround the name in single quotes the way 'Courier New' is decorated.
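
If you're not sure what the filename is, a quick directory listing will also turn it up. Here's a minimal sketch for the Mac, assuming the fonts were installed per-user under ~/Library/Fonts (adjust the path for a system-wide install):

# The string to paste into VSCode is the filename minus the .ttf extension.
ls ~/Library/Fonts | grep -i iosevka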

I wonder where this enhancement sits in the JetBrains issue queue. Rather than adding a more granular font selection UI, they could simply allow a text-entry override for font selection. Of course, this makes me wonder if I can hack this to work by going into the system config files and modifying some XML file... an effort for another day.

Not to abandon all hacks, I did spend some time fixing this for myself. You can, gulp, follow these steps:

  • Download a utility like TTF Edit
  • Open the TTF file of the variant you want to expose to IntelliJ as a unique font (so it will show up in the dropdown)
  • Modify the metadata
  • Save As to a new filename
  • Load the new file into Windows or Mac, usually by double-clicking on the file

Here is an example of where the fonts are located on my system, which you'll need for TTF Edit's Open dialog. Good luck! Make sure you replace EVERY field of metadata with the new name, since I'm not sure which field is the one IntelliJ uses to distinguish unique fonts in its listing.
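
If you'd rather script the metadata edit than click through a GUI, fonttools ships a ttx utility that can round-trip the name table. A rough sketch, assuming you have fonttools installed and using made-up filenames:

# Dump just the 'name' table to XML, edit the name records by hand, then
# merge the edited table back into a new TTF. I replace every family/full-name
# record, since I'm not sure which one IntelliJ keys off of.
ttx -t name -o iosevka-name.ttx iosevka-ss09-extralightextended.ttf
# ... edit iosevka-name.ttx ...
ttx -m iosevka-ss09-extralightextended.ttf -o iosevka-ss09-unique.ttf iosevka-name.ttx
# Then install the new TTF the usual way, by double-clicking it.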

Hahahaha, have fun.

Categories
Programming

First steps in rust

I started to play around with rust and one of the first things we do when we learn a new language is write a little hello, world project. So I typed this in:

fn main() {
    println!("Hello, world!");
}

I compiled via rustc main.rs and voilà, my terminal shows hello world. A quick check on runtime shows it runs more or less instantaneously:

$ time ./main
Hello, world!

real 0m0.001s
user 0m0.000s
sys 0m0.000s

You may be wondering why I bothered to see how fast this was. If I told you that I've spent most of my time on the JVM over the past ten years, you'd know why I'm so paranoid.

But I digress. I looked at the binary size and saw it was 2.5M. Hmmm. My gut says that's a bit big so I wrote this little C++ program to compare:

#include <iostream>

int main(int argc, char** argv) {
    std::cout << "Hello, world!" << std::endl;
    return 0;
}

Sure enough, the a.out was 17K. Rust must have some option to slim this down, and the docs point at dynamically linking the standard library via -C prefer-dynamic. When I compile the rust code with that flag, it generates a 17K binary, just like the C++. Yay.
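
For the curious, the whole size comparison fits in a few commands. A sketch of roughly what I ran (hello.cpp is just a stand-in name for the C++ file above, and exact sizes will vary by toolchain version):

rustc main.rs && ls -lh main                      # std statically linked: ~2.5M
rustc -C prefer-dynamic main.rs && ls -lh main    # std dynamically linked: ~17K
g++ hello.cpp && ls -lh a.out                     # the C++ comparison: ~17K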

But now it won't run since it's dynamically linked:

./main: error while loading shared libraries: libstd-205127404fcba336.so: cannot open shared object file: No such file or directory

Crapola, where is this library? And what else might be missing?

$ ldd main
linux-vdso.so.1 (0x00007ffcc23cc000)
libstd-205127404fcba336.so => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6a9e912000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6a9eb14000)

Oh, just that one. Linux's locate didn't find it. I thought I saw a ~/.cargo directory somewhere.

find ~/.cargo/ -name "libstd*"

Nope. How about:

find ~/.rustup -name "libstd*"
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/wasm32-unknown-unknown/lib/libstd-077104c061bb2ffc.rlib
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.so
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.rlib
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/analysis/libstd-205127404fcba336.json
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/libstd-205127404fcba336.so
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-sys-internals.html
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-thread-internals.html
./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/doc/rust/html/unstable-book/library-features/libstd-io-internals.html

There you are, you wascally rabbit! But I don't know enough about rust to say if this is the recommended approach:

LD_LIBRARY_PATH=./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib ./main
Hello, world!

But who cares, that worked!
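
A slightly less brittle way to point at that directory, assuming the shared library always lives under the toolchain's sysroot the way the find output above shows, is to ask rustc for the sysroot directly:

# rustc --print sysroot expands to ~/.rustup/toolchains/<toolchain>, and the
# libstd shared object sits in its lib/ directory on my machine.
LD_LIBRARY_PATH="$(rustc --print sysroot)/lib" ./main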

Categories
DevOps

Through the rabbit hole I found a blank screen

I mostly interact with my home lab, which I affectionately named SINGULARITY, via ssh over my home network. However, I've been tweaking BIOS settings a lot, so I hooked the PC up to my wife's keyboard/video/mouse and was doing a lot of reboots.

But then I stopped tweaking and went back to using the machine as a node on my network. As I sat there doing my work I started feeling anxious and didn't know why. I felt as though someone were watching me. Try as I might, I couldn't shake this anxious feeling. It turns out the login prompt was showing and the monitor wasn't turning off! Mercy me!

I run the machine headless, which means it's running at a lower runlevel, so I had no GUI tools or screensaver to set.

Incidentally, I enabled the low runlevel by issuing this simple command:

sudo systemctl set-default multi-user.target

If I wanted the GUI back, I'd just do:

sudo systemctl enable graphical.target --force
sudo systemctl set-default graphical.target

But I digress.

I poked around and didn't find a direct solution. It did seem like the command I wanted to run was setterm -blank 1, which blanks the screen and puts the monitor into power-save mode. This is exactly what I wanted, but the command vomits if you run it from a remote terminal, like Windows Terminal:

$ setterm -blank 1
setterm: terminal xterm-256color does not support --blank

After a little experimenting, I found that when I'm logged directly into the terminal, $TERM is set to linux. So with this little bit in my .bashrc, I was able to run setterm -blank 1 only when I'm logged in locally:

if [ "$TERM" == "linux" ]; then
setterm -blank 1
elif [ -e /usr/share/terminfo/x/xterm-256color ]; then
export TERM='xterm-256color'
else
export TERM='xterm-color'
fi

Great! Now when I log in, a minute of inactivity blanks the monitor!

But the problem here was that on reboot, I wouldn't be logged in. So I dove into the login details and found there's a file called /lib/systemd/system/getty@.service that specifies the command used for the login prompt:

ExecStart=-/sbin/agetty -o '-p -- \u' --noclear %I $TERM

Still, I didn't see a way to inject setterm -blank 1 in there. But agetty does take a -l parameter allowing you to specify a login program. So I created /bin/login2:

#!/usr/bin/sh
setterm -blank 1
/bin/login $*

That's so ugly I love it! I then invoked this piece of art from agetty:

ExecStart=-/sbin/agetty -l /bin/login2 -o '-p -- \u' --noclear %I $TERM

After a reboot, I found this did not work at all.

Why? Because agetty blocks while awaiting the login name. It queries for a login name and then passes it to /bin/login, so setterm -blank 1 within login2 does not get invoked until you enter the login name. Damn.

But wait, there's more! agetty also takes a -a argument, an autologin name! So I descended further into the rabbit hole and created this beauty:

ExecStart=-/sbin/agetty -a nic -l /bin/login2 -o '-p -- \u' --noclear %I $TERM

This bypassed the username prompt and executed setterm -blank 1, but it left me with an awkward prompt for the password, which filled me with even more anxiety than the original non-blanking screen!

I saw that /bin/login can take a -f argument to force a login. The documentation says "Do not perform authentication, user is preauthenticated." Hahahah, so of course I went ahead and added it. Fuck it, it's a home lab and this is only a security vulnerability for local access.

Now when I reboot, I basically get logged in automatically. But at that point, my .bashrc does the job of calling setterm -blank 1, so why the hell do I need /bin/login2?

Sooooo... I started thinking about whether I could pass -f from the original agetty invocation and found that yes, this is possible. That's what the -o option is for. Now I can get rid of /bin/login2 and just have this:

ExecStart=-/sbin/agetty -a nic -o '-f -p -- \u' --noclear %I $TERM
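
One nit: editing /lib/systemd/system/getty@.service directly means a package update can overwrite my change. The same tweak can live in a drop-in override instead; here's a sketch, assuming tty1 is the console getty I care about (your instance name may differ):

sudo mkdir -p /etc/systemd/system/getty@tty1.service.d
sudo tee /etc/systemd/system/getty@tty1.service.d/override.conf <<'EOF'
[Service]
# The empty ExecStart= clears the packaged command before replacing it.
ExecStart=
ExecStart=-/sbin/agetty -a nic -o '-f -p -- \u' --noclear %I $TERM
EOF
sudo systemctl daemon-reload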

QED

Categories
DevOps

Home Kubernetes Lab

Over the weekend, I created a home lab. I had an aging PC that had windows on it and I decided to wipe it away and install Ubuntu.

I created a home lab late last year on my main PC, but I didn't like having all of that infrastructure on a PC where I did general purpose programming and gaming. Right off the bat, the Windows hypervisor and the VMs I'd need for a proper lab took up precious RAM. And there was also the matter of keeping the VMs up and running as nodes in my Kubernetes cluster, keeping Rancher running, etc. It all seemed like extra baggage for a PC that I wanted to be as lean as possible and ready to handle large tasks (like running games) using all of the machine's resources.

So this PC has 16GB of RAM and a 4-core i5 processor. It's unfortunate that I have 32GB of DIMMs installed in the box; my motherboard says it supports 32GB of RAM, but for some reason I can't get the BIOS to recognize all of it. It turns out that for a CLI-only Ubuntu, this is all the power I need. Anyway, figuring out how to use the other 16GB of RAM is an Area of Research! The box has an old SSD I pulled from my old Apple Mac Pro, and the performance of everything is just lightning.

When it came to a hypervisor, I chose kvm. It seemed like the right choice for my needs and didn't take a lot of know-how to get working. This link helped: https://help.ubuntu.com/community/KVM/Installation
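
For the record, the install itself boiled down to a handful of packages from that guide plus a quick virtualization check (double-check the guide for your Ubuntu release; this is just the gist):

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
kvm-ok    # from cpu-checker; confirms the CPU/BIOS support virtualization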

When it came to the VMs to create, I used this Minimal CD image of Ubuntu: https://help.ubuntu.com/community/Installation/MinimalCD. During the installation of each VM, I chose OpenSSH server only, since that's all I needed. VM installation was still manual and a little time-consuming. At some point I hope to automate it (see the virt-install sketch after the list below), but since I only have 16GB of RAM, I'm only creating 2 or 3 VMs as Kubernetes worker nodes and that will be that.

A few useful installs:

  • sudo apt install libguestfs-tools
  • sudo apt install virtinst
  • sudo apt install libosinfo-bin
  • sudo apt install cpu-checker
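
Speaking of virtinst: if I ever get around to automating the VM creation, I expect it would look something like this virt-install sketch (the name, sizes, and ISO path here are made up; I haven't actually scripted it yet):

# Create a small console-only Ubuntu guest from the minimal ISO.
virt-install \
  --name star03 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=40 \
  --cdrom ~/isos/mini.iso \
  --os-variant ubuntu20.04 \
  --graphics none \
  --console pty,target_type=serial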

I decided to use Rancher as the k8s cluster management layer since we use that at work and I figured it would be a good idea to get more familiar with it. Obviously the new server needed docker, and this article was helpful in describing how to get docker installed properly on Ubuntu: https://docs.docker.com/engine/install/ubuntu/. It turns out just saying "apt install docker.io" is not the best choice. You need a few extra steps.
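
From memory of that page, the extra steps amount to adding Docker's own apt repository before installing (the linked page is the authority and may well have changed since; this is just the gist):

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io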

Rancher was up and running without much effort. Rancher runs in a container directly on the host, a server I call SINGULARITY. I have two VMs called star01 and star02 which run as worker nodes; star01 also runs etcd and the control plane. I'll probably create a new VM, star03, soon so I can experiment with deploying software that requires clusters of 3+ nodes, but I can probably get by with 2 nodes for a while.
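
Getting the Rancher container itself going was essentially one docker run, something along the lines of the single-node install Rancher documents (the tag and host path here are placeholders; check their docs for the current incantation):

# Single-node Rancher server, persisting its data to a host directory.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:latest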

An open item is having star01/02 DNS available on my LAN without using a bridged network. I tried implementing the technique described here, but it didn't work on my first try. I'll have to try again. It would probably help if I understood NetworkManager and DNS better. More areas of research!

A few things I'm interested in learning about in the short term:

  • How to create a docker image that manipulates mounts on the host to use external storage for configuration. Rancher does this and it's kind of magic to me right now.
  • Figure out how to have all of my kvm guest hostnames recognized on the host, if not my entire home LAN.
  • How to backup my Rancher and cluster configuration so I can restore it if I want to blow away my lab and start from scratch. This blog post is good documentation if I want to start over, but obviously I'd rather automate away the toil.

Some useful commands:

  • Make sure your kvm guests restart after host reboot: virsh autostart vmName
  • List kvm guests and their IP addresses: virsh net-dhcp-leases default
  • Dump network details: sudo virsh net-dumpxml default
  • Edit network details: sudo virsh net-edit default

Ciao for now.

Categories
Computer Graphics

Falloff Gradients

So I'm sitting there in my office at home, coding up Anton's Triangle Tutorial program in an effort to learn OpenGL. While I do this, the COVID-19 virus is busy multiplying in the world and getting a lot of people sick. But since I'm quarantined, I keep myself busy translating Anton's C/C++ code into Java. I'm using Java/Scala for my project because that's where I've spent the majority of my time in my career over the past ten years, and I was curious how far I could push the platform.

In Java, I started using the Lightweight Java Game Library (lwjgl), which has very good OpenGL bindings. A small note: the thin library I'm writing on top of lwjgl is written in pure Java, but most of my client code is written in Scala. The lwjgl bindings are good because they take advantage of a few features of the JVM that make it straightforward to work with native libraries.

The first of these features is Java NIO. A few weeks ago I went through Jenkov.com's Java NIO tutorials since I was a bit rusty on Java NIO. Using Java NIO you can expose native memory to the OpenGL APIs through a safe JVM interface, which is exactly what the native OpenGL library requires.

The second feature, static imports, was introduced in Java 5 back in 2004. Using static imports you can simulate the standard C/C++ #include directive, which exposes variables, functions, and classes in your default namespace without any object or static-class qualifiers. So, OpenGL code in C++ like this:

while(!glfwWindowShouldClose(window)) {
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glUseProgram(shader_prog);
  glBindVertexArray(vao);
  glDrawArrays(GL_TRIANGLES, 0, 3);
  glfwPollEvents();
  glfwSwapBuffers(window);
}

Looks like this in Java. Yes, they are identical.

while(!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(shader_prog);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glfwPollEvents();
    glfwSwapBuffers(window);
}

Not all of the code is identical, however. Instead of this block of C++:

float points[] = {
   0.0f,  0.5f,  0.0f,
   0.5f, -0.5f,  0.0f,
  -0.5f, -0.5f,  0.0f
};

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);

Your Java code will look like this (assuming you have a similar definition for points):

// Scala
private val points = Array(
     0.0f,  0.75f, 0.0f,
     0.75f, -0.75f, 0.0f,
    -0.75f, -0.75f, 0.0f)

// Java
int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Copy the vertex data into a native NIO buffer, then hand it to OpenGL.
Buffer buffer = BufferUtils.createFloatBuffer(points.length).put(points);
glBufferData(GL_ARRAY_BUFFER, (FloatBuffer) buffer.flip(), GL_STATIC_DRAW);

The program generates a color triangle and uses a mouse uniform to place a highlight over the cursor. Click on the image below to see it in its full fidelity.

Default linear fall off gradient

The falloff is linear: the percentage of white falls from 1 to 0 in proportion to the distance from the mouse cursor, over 300 pixels. The shader code looks like this:

#version 400

const float PI_2 = 1.57079632679489661923;

uniform vec2 u_resolution;
uniform vec2 u_mouse;

in vec3 color;
out vec4 frag_color;

#define FALLOFF (u_resolution.x / 16.0)

void main() {
    float distMouse = min(FALLOFF, distance(gl_FragCoord.xy, u_mouse));
    float lin_falloff = 1.0-(min(distMouse, FALLOFF) / FALLOFF);
    vec3 finalColor = mix(color, vec3(1.0,1.0,1.0), lin_falloff);
    frag_color = vec4(finalColor, 1.0);
}

In mathematical terms, it looks like this (generated from GNU Octave):

This didn't look so bad in Octave, but in the original triangle image above the edge was too harsh and I wanted to soften it. I tried squaring the falloff, yielding this curve:

But I felt the falloff dropped too abruptly. I decided to try cos(x). Notice how the center bulge is brighter:

Yet when viewed in OpenGL, the edge was still too abrupt, maybe even more so!

Falloff based on cos(x)

So I decided to square cos(x), and notice the beautiful S-curve I was looking for!

So I decided to generate a bunch of curves and try them all out:

Trying a bunch of curves.

In the end, none of the linear falloff variants pleased me and I went with cos(x)^2.

This gave me the best central bulge and a smooth gradient that faded away without any noticeable edge. QED!
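
For the record, here are the candidate falloff curves in one place, written in terms of the normalized distance t = distMouse / FALLOFF, and assuming the cos variants map t onto [0, π/2] (presumably what the otherwise-unused PI_2 constant in the shader was for):

\begin{aligned}
f_{\text{linear}}(t)  &= 1 - t \\
f_{\text{squared}}(t) &= (1 - t)^2 \\
f_{\cos}(t)           &= \cos\left(\tfrac{\pi}{2}\,t\right) \\
f_{\cos^2}(t)         &= \cos^2\left(\tfrac{\pi}{2}\,t\right)
\end{aligned}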