Async Rust: Why is it so Fast?

March 27th, 2024

Everyone knows that a web server, for example, will be able to serve up a lot more traffic if it is written using Rust async code rather than straight OS threads. Lots of people have the numbers to prove it.

But I have wondered why, and none of the explanations I have read have explained it to me. They all seem to boil down to a vague “because”.

Well, today, listening to the Rustacean Station podcast episode “Asynchronous Programming in Rust with Carl Fredrik Samson”, I think I figured it out. Yes, they gave more explanation, still not quite satisfying, but it helped, so I hit pause and started talking to myself like a crazy person (handy to be driving alone in cases like this) and I think I get it. Here it is.


I am not experienced with Rust async code. I am still learning, I am certain to get things at least a little wrong, possibly a lot. I’m largely writing this for myself, but I’m putting it in public on the theory that others might find it illuminating even in its errors.

Multitasking Techniques

The problem is how to do multitasking efficiently, so as to get the most work done. Let’s look at different approaches, in sequence, to land at a better understanding of the virtues of Rust’s async abilities.


Processes

This is the oldest of these approaches. Here the operating system uses its god-like abilities to set up individual processes, each with an apparently complete computer at its disposal. Each process runs its own program, can do what it wants, can (attempt to) access any memory location, and can happily crash when it messes up too badly, likely with no other process noticing. Individual processes are so well isolated that unless they go to special effort to communicate with each other, or really load the machine with heavy work, they can be completely ignorant of any other process even existing.

Processes are very general purpose. They are also relatively expensive: for the OS to switch context from one process to another, everything that the process can access has to be changed, and that is work. Doing that work dozens or even hundreds of times a second is no problem, but don’t try to do it hundreds of thousands of times a second. Being general purpose has a cost.


Threads

Threads are more lightweight than processes, but at least on Linux, they are kind of the same thing. The way to start a new process on Linux is to fork a copy of oneself. There will now be two of you, clones of each other, both running at once, which seems a bit wrong and pointless. But you wrote the program they are running, and there is a way for each copy to check which copy it is. One can discover it is the original and the other that it is the copy, and they can do different things in the two cases. A common thing for a new process to do is to start running an entirely new program.

So what is a thread? Well, when forking a Linux process one can choose how much is shared between the old and the new. In the case of a thread, they still share the same memory. Sharing memory means they can cooperate in their work, provided the cooperation is done carefully.

Threads do not share everything. Each thread has its own copy of the CPU registers, and each has its own stack. So we are still pretty general purpose: the different threads could be working on different problems independently, but more likely they cooperate, though that is a lot harder to program correctly.

The primary value of threads isn’t that their context is faster to switch (though threads are a little faster); the motivation for threads is to be able to very efficiently share memory, so programs can take advantage of multiple CPUs for a single purpose.

Green Threads

Green threads (or stackful co-routines) are very much like threads, except they live entirely in userland. Instead of the OS using its god-like powers to switch execution from one thread to another, the switching happens within the process. But the process doesn’t have any god-like powers over itself, other than self-discipline. So green threads need to be written in a slightly special way to make the context switching possible at all. When context switches do happen there is still a similar amount of work needed, but there is a small win: the OS doesn’t have to do it, so there is no context switch into kernel mode. Remember, any system call into the Linux kernel takes longer than a local function call.

So green threads do switch faster than OS threads can, and early Rust, before 1.0, did have some sort of green threads. But they are long gone.

Preemptive vs. Cooperative Multitasking

Time for a little detour. Everything up to this point has been preemptive multitasking. At one moment one program (or thread) is running, then whoosh, the next moment another program or thread is suddenly running instead. The code that is being run, and then not run, doesn’t have any control over this change, nor even any easy knowledge of it.

In cooperative multitasking these unanticipated changes do not happen; rather, cooperative multitasking requires that code regularly yield the CPU and let other code run. If it doesn’t yield, other code can’t run on that CPU. This is both good and bad. The good is that if code has valuable and important work it is doing, it won’t be interrupted. The bad is that if some code is churning away on something unimportant it can keep other code from running. Cooperative multitasking assumes responsible—cooperative—behavior.

Fun fact: The original Macintosh, way back when, before you were born (not all of you, but an awful lot of you), had cooperative multitasking. It worked amazingly well, but a lot of people scoffed; they wanted “real” multitasking, meaning preemptive multitasking. But cooperative multitasking is also real.

Rust Knows More

Async Rust benefits from being Rust.

I like to say that writing a multithreaded program is easy, but maintaining a multithreaded program is mostly impossible. (And writing a multithreaded program of any size takes long enough that some of the effort is effectively maintenance before you are done…which means you are screwed.)

Rust makes multithreaded programming possible (if not easy) by insisting that the program explicitly say a bunch of stuff that in other languages would be in the comments. “Be sure to always use this mutex before accessing that data structure.”, for example.

The result is Rust knows a lot about your program and how all data is shared and not shared. This is key to async Rust.

Rust Async: Cooperative

Rust async is cooperative multitasking. (Mostly: you can also spawn threads and that is no longer just cooperative.)

When you write async Rust code you can be assured that, once your code is executing, the Rust async machinery will not halt it to start running some other async code, except at clear points where you know this might happen.

These context switches can happen whenever you call code that would block; normally this is any code that might wait for IO, such as waiting for network activity, waiting for a disk, or waiting for a user to type something. Or you can explicitly yield with yield_now(), or maybe you have a reason to sleep() for a specific amount of time. But the key point is that execution can only switch from one hunk of code to another at specific, well defined points.

Here async draws on Rust’s requirement that we be precise about what data is where and who has access to it for what purposes. I don’t know the details, but given all that information about data, plus knowledge of all the locations where execution might change, the Rust compiler works out a state machine for how to do all of the possible execution transitions. (I’m a bit amazed that this is possible, but I’ll believe it works, people use it for real work.)

And here is where the efficiency seems to lie: To switch context from one async execution path to another async execution path nothing really has to happen: no stacks need to be swapped out, no CPU register files need to be swapped, no MMU configurations need to be touched, nothing has to happen inside the kernel, rather we are still just running a Rust program, and the Rust program is just doing something else, not that different from branches you explicitly put in your own code.

This also means that the multitasking that happens in a Rust async program is not general purpose the same way that preemptive multitasking is. The Rust compiler looked at your specific code and output the code necessary for your program to run. This isn’t a practical problem, for the compiler is still completely general purpose, but its output is not.

Put another way, Rust’s passion for zero cost abstractions applies here, too. Rust only has to mess with the minimum to start working on something different, and that means it will be fast.

At least I think so. Note I used the weaselly word “just” twice in that paragraph. Always be suspicious of the word “just” (along with the words “exactly” and “simply”, and …).


The same way that Rust doesn’t have a garbage collector, because it largely manages to analyze away the whole problem at compile time, Rust async seems to make context switching costs go away by mostly analyzing away the code that implements context switches, at compile time. This is what makes it fast.

In async Rust there is still a “runtime” (and more than one mutually incompatible runtime option to choose from, annoyingly), and it still needs rather general purpose code that talks to the OS to find out about IO that has unblocked and connect that up with pending awaits in your Rust code, plus a pretty general purpose scheduler that decides what to run next when blocked execution is unblocked. But this runtime is still far smaller than nearly any OS. It comes as source code in a Rust crate, and it is part of what the compiler will be optimizing along with your code. Something to be aware of: when you see an “async” or an “await” in code, a lot of magic is happening in there, more than in other parts of Rust’s syntax.

Stuff I will learn more about as I dig deeper and use async more.



There is a fairness issue I need to bring up: asynchronous, cooperative multitasking has inherent advantages, unrelated to Rust. Other languages can and do pull these tricks, and it works for them, too. Rust maybe can do it better…but async is not entirely a Rust trick.

And another epilogical thought: The scheduler has a lot of information about the program being run. It will have choices in what to schedule (if more than one task becomes unblocked, which should run?). It can maybe put the two together and be smart about what it schedules.

For example, if there happens to be a crossbeam channel that is filled by only one task but drained by many, and if several tasks are waiting to pull work out, it might be really smart to give more priority to the one that puts work in, so as to unblock those ready to drain it. The scheduler has information about every blocked task and can know the dependencies between them. Clever programming in the scheduler might make a really big difference in total performance.

Also, the scheduler is in a position to keep statistics about system performance. To the extent workloads follow patterns it might be able to dynamically tune how it schedules unblocked work. For example, if the code that fills the hypothetical crossbeam channel runs very quickly but the code that drains it takes a lot of time and can be processed in parallel, maybe statistics can reveal that and be used to put more priority on filling the channel.

Some of these sorts of tricks can be, and are, done by OS schedulers too, but an async scheduler runs inside the program and can have a lot more information with which to make such choices. How much of an advantage Rust has here when compared to other async systems, I don’t know. But I bet there is some.

I suspect this is going to be an area of ongoing work, and I suppose it makes it a little less annoying that we have multiple Rust async runtimes, if that multitude allows more innovation to happen.

I’ll mention a final benefit to an async approach: It scales better. Having many thousands of active async tasks in existence is more practical than having many thousands of OS processes or threads running at one time.

Much of this is because the context information for each is inherently smaller, but there is another fairness issue here: the new async systems were specifically written to scale that much. A general purpose OS could also scale up to enormous numbers of processes. Arguably this is true with Erlang now, but okay, Erlang is weird. Had Linux anticipated current RAM capacity and been built from the start to handle orders of magnitude more processes, I bet it could have, too. (But it didn’t.)

©2024 Kent Borg

Using emacs as a Rust IDE

March 1st, 2024

Turns out I don’t like Helix very much. The problem? My fingers know emacs. I hate emacs, but it is what my fingers know.

So I decided to figure out how to get emacs to be a Rust IDE. This was made a little tricky because I had attempted to do so a few years ago, when things were rougher, and the solution was to start over rather than try to fix the old setup. Luckily I do not customize emacs a bunch, so starting over isn’t such a problem. I am not going to get into the details of how I did it, but I installed:

  • rust-analyzer
  • eglot
  • lsp-mode
  • lsp-ui
  • company
  • lsp-treemacs
I don’t know whether that is a sensible collection, but they do some nice stuff. This is really a cheat-sheet for myself, but I put it in public just in case anyone else finds it useful.
(This is a work-in-progress.)
  • mouse over something that has a definition and a popup should appear
  • M-. to see something’s definition
  • M-? to search for occurrences
  • M-, to navigate backwards
  • start typing and auto completion options should appear
  • M-x eglot- TAB to see various commands
  • M-x rust-run-clippy
  • right mouse
  • note the Flymake menu
©2024 Kent Borg

Helix, Terminal-based Rust IDE

February 26th, 2024

I was skimming through the latest 2023 Rust Survey, and I noticed the 5th most popular “editor or IDE setup” is Helix. (Just ahead of emacs, even!) What the heck is Helix?

Helix seems to be:

  • Rust IDE, seems to do other languages, too, but I’m interested in Rust.
  • Heavily inspired by vim,
  • Written in Rust
  • Terminal-based (run remotely on some distant or headless target machine)
  • Open source
Based on vim? Ugh. I long ago learned vi (and vim) just enough to be able to get it to do basic stuff. Intriguing. Let me give it a try.

This post? It started as my own cheat sheet, and I decided it might be useful to others. The goal is to tell you the very basics, enough to use Helix, not enough to get good at it. (I’m not.)


I decided to build it myself; the instructions said:

$ git clone
$ cd helix/
$ cargo install --path helix-term --locked

The “--locked” option gave me a very unnerving warning:

warning: package `cc v1.0.85` in Cargo.lock is yanked in registry `crates-io`, consider running without --locked

But without that option it failed, so I kept the “--locked”.

On my couple-year-old laptop it took about 5 minutes. The first time I tried to build it on a Raspberry Pi Zero W, running inside an emacs shell buffer, it died after filling up all of its 512MB of RAM and 100MB of “swap” before it finished. I’m trying again, not inside emacs; I’ll let you know.


$ ln -Ts $PWD/runtime ~/.config/helix/runtime

That last part is necessary to have syntax highlighting work, etc.

Using Helix, Setting the Stage

I do not find it obvious how to use Helix, but it is based on vim, so what would I expect?

First, how do I even run it? No, typing “helix” will not work. Because that would be too obvious. And because nostalgia and tradition, I suppose, the executable is called “hx”. Run that and you are in. (But do you know how to get back out? Keep reading.)

Second, as it is based on vim (which is based on vi), it is very modal. The design of vi dates back to the beginning of time (1976), when $5,000 (in today’s dollars) would buy an ADM-3A CRT terminal, which could display 24 lines of 80 characters each! It was the hot new technology, and it didn’t even have dedicated arrow keys. The computer mouse, and graphical user interfaces of any sort, had barely been seen in public. And “power users” would scoff at such silliness for many years to come. This is a modal, keyboard-based interface.

In vi there are two enormous modes. You can be in “insert mode”, where typing “hjkl” results in “hjkl” appearing in your document, or you can be in “normal mode” where pressing “hjkl” moves the cursor left, down, up, and then right. Tending to leave you back where you started.

We are about to get very modal.

Insert Mode and Normal Mode

When you first run Helix you will be in normal mode. In the bottom left corner it will say “NOR” indicating NORmal mode, and when you are in insert mode it will say “INS” for INSert mode.

Time for our first cheat sheet items:

  • “i” puts you in insert mode. This is where you can type stuff. (And because current keyboards have arrow keys, move around, too.)
  • Type stuff and it will be inserted into your file.
  • Arrow keys, pageup, pagedown, home, end, backspace, forward delete all do as you would hope.
  • Escape key gets you out of insert mode, back to normal mode, where the letter keys do not type those letters, but do other things. No matter what you are doing in Helix, the escape key seems to generally be a safe thing to press; wherever you are, press it a few times and you will get back to familiar territory.
Everything is organized around these two big modes: normal mode and insert mode.

Command Mode

The normal mode is for single-keystroke stuff. But opening and saving files isn’t single keystroke territory (file names can be long), so there needs to be a command mode that accepts multiple keystrokes.

  • “:” – command mode commands all begin with a colon.
  • “:quit” (or “:q”), followed by a press of enter, will quit; though it will stop if you have unsaved changes.
  • “:q!” will quit without saving changes. If you are in a panic and need to get out (without saving changes) press escape a couple of times, type “:q!”, then enter, and you will be free again.
  • “:write” (or “:w”) will save changes to the current file.
  • “:w somefile.txt” will save to a file called “somefile.txt”.
  • “:open differentfile.txt” (or “:o” …) will open a new or existing file called “differentfile.txt”.
  • “:reload” (or “:rl”) will revert the current buffer, discarding any changes.
  • Escape will get you out of this mini mode, cancel any command you have partially typed, and put you back in the regular normal mode.
  • Escape will get you out of this mini mode and any command you have partially typed, and put you back to the regular normal mode.
In Helix pressing “:” will immediately show you lots of available commands; I don’t know what most of them are. I don’t think there is a way to navigate through these with arrow keys; the text seems to be there merely to guide further typing. As you enter more letters the number of possible completions will be reduced accordingly. This feels like a dangerous mode to me, because other than the name of the command, I can find no on-screen documentation of these commands. Explore here, but maybe carefully.

More Normal Mode: Real Cheat Sheet, Finally

At this point I think you know enough to do the most basic editing. Hooray! But only just barely enough. So press escape to be sure you are back in normal mode (NOR in the corner), and let the cheat sheet begin!

We all make mistakes:

  • u – undo
  • U – redo


Getting around:

  • hjkl – move cursor; pretend those letter keys are labeled ⇠⇣⇡⇢ (those letters might already be under your fingers, so maybe it is worth learning them, or maybe you are on a vintage ADM-3A terminal with no dedicated arrow keys; but probably just use arrow keys for now).
  • w – word forward
  • W – WORD (larger concept of word) forward
  • b – back one word
  • B – back one WORD
  • e – end of word
  • pageup – page up
  • pagedown – page down
  • ctrl-u – half page up
  • ctrl-d – half page down
  • nnnG – goto line nnn
  • 4k (or 4 then up arrow) – move up 4-lines, etc.
  • /xyz – search forward for “xyz”
  • ?xyz – search backwards for “xyz”
Searching is also another mode. While typing your search string Helix will show you the next text that matches what you have typed so far. Press the enter key and the search is done. At that point you are back in normal mode.

Once out of search mode:

  • n – next matching search string.
  • N – previous matching search string.

Making a selection—another mode:

  • v – enter (or leave) select mode
This is a little like holding the mouse button down in a graphical editor: combined with navigation, you can make a selection. A bit like dragging a mouse through text.

Once you have a selection you can use copy and paste:

  • y – copy (yank) selection
  • p – paste after selection
  • P – paste before selection
  • d – delete selection
  • x – shortcut to select a line, without being in selection mode.
  • xx – select two lines, etc.
  • xd – select line, and delete it, etc.
At this point you have enough to use Helix as a basic editor: run and quit the program; type stuff; open, save, and close files; navigate through your file; search; copy and paste; undo and redo. Nothing very fancy. Time for another mode, where they seem to keep all the fancy stuff.

Insert Mode

The simplest thing to do in insert mode is to type text. In addition, those extra keys that didn’t exist in 1976 also do what you would expect: arrow keys, backspace, forward delete, home, end, page up, page down. Beyond them there are a lot of other things that can be done:
  • ctrl-k – delete to end of line.
  • tab – when offered a completion item, move to the next.

Space Bar Mode

When in normal mode, pressing the space bar puts you in an interactive menu world (and the escape key still works to get you out). Press space and cool options appear, each begins with a single letter followed by a little explanatory text, press the space bar again (or press escape or an arrow key) and this menu disappears. Press the key for one of the letters and you will get a new menu. I appreciate that the choices in this menu system appear to be pretty clear and safe to explore. Go take a look.

Here are some of the editor things I’ve discovered in my exploring, and exploring here is nice:
  • f – file picker. Cooler than the “:o” feature of the command mode. Once you are in the list of files, the up and down keys and page up and down keys all work as you might hope.
  • w – window picker. In here you can split your current window horizontally or vertically. The result is tiled panes not overlapping windows, but that’s a good thing. You can close a window pane. You can move from one of these window panes to another. This is starting to turn into a useful editor!
  • b – buffer picker. You can have more than one file open at a time. How many files you have open and how many window panes you have displayed (and what they display) are different things. This editor is getting more useful.
  • y, p – more copy and paste features.
Here are—finally—some of the Rust-specific things I have discovered:
  • s – symbol picker. Choose from the variables, structs, constants, functions, etc., that you have actually defined, rather than mistyping them from memory.
  • S – workspace symbol picker. Like the symbol picker, but seems to only offer the public symbols.
  • r – rename. Seems to work on variables, struct names and members, function names, etc. Cool.
  • / – global search.
  • k – documentation for whatever thing the cursor is in. (To scroll the result use ctrl-u and ctrl-d.)
  • a – perform code action. A bunch of very Rust-specific options that you should explore a little, maybe once all this other stuff has settled in.


So far Helix looks good. By being limited to a keyboard, having no GUI, and being based on vim, it seems a lot more constrained than the other, flashier, disorienting, featuritis-plagued IDEs that I have tried in recent years. The space bar mode is nicely self-documenting, as opposed to all the magic single-key “What did I just do when I bumped that key‽‽” of VS Code or some JetBrains product. At least for me all this makes Helix a lot easier to learn.


Oh, and my native Raspberry Pi Zero W build keeps failing as I try various experiments. I would like this to work because I’m programming some Pi Zero-specific hardware…

Followups: Syntax Highlighting

I hate low contrast, gray-on-gray displays, so to have more theme choices:

$ git clone
$ cd helix-themes
$ ./
$ mkdir ~/.config/helix/themes/
$ cp build/* ~/.config/helix/themes/

Now I can select the theme I want by editing ~/.config/helix/config.toml. Um, I think I got my config.toml from the configuration section of the Helix docs.
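For reference, selecting a theme is a one-line affair at the top of that file; “onedark” here is only an example name, substitute whichever theme you copied into the themes directory:

```toml
# ~/.config/helix/config.toml
theme = "onedark"
```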

©2024 Kent Borg


September 20th, 2023

I’m on Mastodon.

P.S. Comments are broken and have been for some time. Sorry.

Rescheduling Rio’s Carnival?

May 29th, 2022

It’s the end of May, 2022, so I’m a month late in writing this, but still…it’s kind of appropriate. Read on.

This year Rio held their Carnival (as in Mardi Gras or Fat Tuesday), in person! Live! Drinking, music, food, sights and sounds, pickpockets—all that fun stuff! After several years of canceling it due to the Covid-19 pandemic, they were back! Cool.

Except it was rescheduled; it was weeks late.

When I saw that it seemed so WEIRD. How do they move Carnival without also moving Ash Wednesday? And Lent? And Easter?

They moved it to after Easter! What’s the point then? Part of the joy of The Resurrection is knowing Lent is over. Sure, Christ is Risen, that is nice and all, but Lent is about what you give up. The Tuesday before Lent is your last chance to have fun before giving it up. By the time Easter Sunday comes around some people are dying for a cigarette. Lent is over, Hallelujah!

Moving Carnival is like moving the New Hampshire primary to after the convention, because the weather is better that time of year.

Or, like taking conception, pregnancy, and birth, and reordering them on some practical excuse.

The whole point, the reason for drinking too much and partying too hard…is ya not gonna get such treats during Lent, so let’s have so much now as to be sick of them. It’ll be easier that way.

The other reason for this spectacle is to be, well, a spectacle.

Without Lent, isn’t Fat Tuesday mostly just for obese tourists, gawking, drinking, and being unpleasant—as long as they have money?

Tearing Carnival out of the liturgical calendar takes away all the meaning. Makes it just a commercial enterprise. Might as well do something like move all our holidays to the nearest Monday.

-kb, the Kent who writes this on Memorial Day Weekend, which is about mattress sales and the beginning of summer, because the very idea that the very word “memorial” means to preserve a memory…is forgotten.

©2022 Kent Borg

P.S. Comments are broken and have been for some time. Sorry.

Saw a Slashdot headline “Intel Calls Its AI That Detects Student Emotions a Teaching Tool, Others Call it ‘Morally Reprehensible’”

April 18th, 2022

Um, …Yes.

People think the questions of AI ethics arise from dealing with the consequences of what Turing and all those cats invented: Computers. And they are right, to a point. But more it is about dealing with the consequences of what the Sumerians invented: Bureaucracy.

Teachers infer the emotional state of students all the time, so what if a computer program does it, what is the problem? If it is okay for the human to do it, then it is okay for the computer to do it, too—as long as the computer does it competently, as long as the computer does it fairly. Right?


When a teacher looks a student in the eye and infers his/her emotional state, that is an invasion of the student’s privacy. And, as an isolated act, it would be wrong. But it is not an isolated act.

  • The student understands s/he is in a school, has frequently chosen to be there.
  • The teacher is looking into the student’s eyes (or otherwise observing, maybe asking questions of the student), and the student can see that.
  • When the teacher is looking into one student’s eyes s/he can’t also drill into another student’s eyes.
  • And the student can look back into the teacher’s eyes, and the teacher can see that.
  • The student gets to infer, too.

All this is part of a larger human relationship. And that is valuable. (No, not all human relationships are good, but they are pretty much all we got.)

Estimating a student’s emotions is just a small part of what is going on between a teacher and student. In isolation, staring at someone is an aggressive act, one where the “student” might be justified to hit back, but more wisely would turn away from this crazy person, cross the street, and put some distance between them.

Just because a human does thing X, even as a key part of some wonderful accomplishment, doesn’t mean thing X is itself somehow good or even remotely acceptable in other circumstances.

The fundamental problem with Intel’s innovation has little to do with whether they did a good job of implementing this isolated skill; it is more the fact that they implemented it as an isolated feature that can then be deployed in myriad ways. Ways the subject (“student”) maybe has no knowledge of, ways that can drive decisions about the subject that s/he has no ability to influence nor appeal.

When used completely as intended, as part of online “Zoom” learning, this innovation might not be all bad, but the student is no longer in a student/teacher relationship of the sort we think we understand. No, this rapidly evolving online tool that Intel apparently wants a piece of, is redefining what is going on. And God knows what “data driven” innovations will be added next, particularly in the hands of for-profit institutions.

The real problem here is what horrors are possible in the maw of faceless and unanswerable bureaucracy. The fact that the bureaucracy has a shiny new toy is important, but don’t so focus on the toy as to ignore who is using it, and how it will be used as part of a much larger system.

I have seen the movie Schindler’s List only once, it was in first run, a zillion years ago. I do remember it was in black and white, and I happened to see it on a glorious big screen. But I don’t remember a lot of other details. Yes, there were nasty Nazis, terrible dilemmas, sadness and heroism, but it’s all kind of blurry. Except one chilling shot etched in my mind: a typewriter, typing up names. The film hit me between the eyes with that.

Bureaucracies have always kept lists of names, the difference is the typewriter made it more efficient. Every time bureaucracy is handed a more efficient tool, we are at risk of bureaucracy doing what it does at a more industrial scale. As impressive and powerful as the typewriter is, seeing it in the movie also reminded me that IBM sold more powerful information processing equipment to the Nazis. To help them run their bureaucracies more efficiently.

Some of the new computer programs we like to call “AI” are really impressive information processing tools. They are tools that allow bureaucracies to not just possess and sift through vast amounts of data, but tools that allow bureaucracies to make subtle decisions about that data without bothering with the pesky bureaucrat step anymore. No human bureaucrat means bureaucracies can now make complex decisions at scale, at great speed, and therefore make many more decisions than ever before.

Sure, individuals can deploy this technology in bad ways, but then bad teachers can be bad in completely old-fashioned ways; individuals don’t need such a fancy tool to do individual harm. But bureaucracies need to operate at scale.

“AI” ethics isn’t about the “AI”, it is mostly about bureaucracies—government and commercial, public and secret—and ethics in how and what they deploy. Bureaucracies have never been good at ethics.

But right now they are all transfixed by this shiny new toy, looking for what data they already have sitting around, what new data they might collect, and how they could plug it all together, to do things they could never do before.

Read Kafka’s The Castle, then let your imagination roam…


©2022 Kent Borg

P.S. Comments are broken and have been for some time. Sorry.

I Killed a Russian Troll

January 16th, 2022

At least I think he was a Russian troll. Clearly a troll, spreading disinformation, in a way Putin would like. And I’m guessing a “he” based on his demeanor, but I might be wrong about that.

Last Sunday, 9 January 2022, I saw someone on Twitter posting some nonsense, so I replied in what I think was a thoughtful way; I think I made a good point. He replied in a trollish way, I replied, etc.

Then I started to wonder who he was, so I checked. His account was just a few hours old! He followed only one account and had only one follower (the same person, who followed him back). He was still wet behind the ears.

Except he was very good at trolling, he clearly was not new to Twitter.

So my 4th tweet at him was:

Wait. You are just a Russian troll?

And I started asking him questions about his job. (I think some of them were pretty good.)

He never answered my questions directly, but he did reply in ways to try to bolster his supposed Boston location, talking about Dunkin’ and swerving when he drives, and something else easy to Google that I forget.

I wish I had a complete record of our exchanges. Between my off-brand Twitter client on my phone and another copy on my tablet, I still have a lot of his tweets, but not all.

At one point he was trying to be condescending, to put me in a childish position: he asked whether I had been given a lollipop. But he said “lolli”. I have lived in Boston a long time, and I called him on it–that is not how a Bostonian talks. I told him I have heard my next door neighbor say “lolli”, but that my next door neighbor was born in the UK.

He was thorough, he looked at other things I tweeted. In one I replied to a tweet by NPR’s @ElBeardsley. She had just arrived in Kyiv on a reporting trip, I said I had been scheduled to go there, but a pandemic intervened.

My troll replied, repeating the “Kyiv” spelling of the city’s name. I called him on that, said his Boston character would have been more likely to write “Kiev”; I also pointed out that a Russian troll who was actually Russian would more likely spell it “Kiev”. (Writing “Kyiv” is more sensitive to Ukrainians.) I asked him whether he is actually Ukrainian. I asked whether he is worried by Putin’s buildup of troops on their border.

My logic on his name for the city didn’t hold water—he was quoting back what I had written—but he didn’t call me on it. He did accuse me of being the Russian troll, however.

I did rat him out a few times, telling people he was trolling that he was an apparent Russian troll. I praised him for being good at his job, observed that he tried hard to not actually lie. Be disingenuous, sure, that was his job. But he worked hard to not lie.

Yesterday, 15 January, I checked in on him again. He was still trolling, and I squealed on him again. I noticed someone who had blocked both of us had unblocked me. I replied to a previous tweet about the blocking:

Now I noticed I am no longer blocked.

I see you are trolling this morning.

And he blocked me. Or so I thought.

This morning I used a different account to see what he was up to…and the account has been deleted.

I killed him!

If someday in the future whoever was running @bsacamano545 (“bob sacamano”) is in a new line of work and can reminisce about old times, I’d love to talk.

In the meantime, “bob”, be well, stay safe,

-kb, the Kent who bets you don’t look at all like the Twitter avatar that was on the account.

©2022 Kent Borg

P.S. I still want to know which vaccine you got.

P.P.S. Comments are broken and have been for some time. Sorry.

Python is a Great Prototyping Language…but One Should Never Ship a Prototype

May 26th, 2020

I really like how Python lets me start to get things working before everything is working. I can fire up an interactive debugger and immediately start playing with some library I Googled up and think I might need, quickly get it doing stuff, plug it into other code and quickly get the whole thing doing useful stuff.

I can get my Python program in a useful state before I have really decided what I wanted it to do, and well before I have stopped to think hard about the best way to do it.

This kind of exploratory programming is exactly what is needed to develop a prototype. But never “ship” the prototype!

Here is an analogy to the physical world: there are prototyping materials that are easy to work with but are not as durable or economical as materials suited to real manufacturing. For an extreme example, automobile bodies used to be prototyped, at least in part, in modeling clay. And the very properties that make modeling clay good for prototyping make it terrible for manufacturing. (Try taking a clay car to go buy a Christmas tree, strapped on top.)

Similarly, in the case of Python, the key property that makes it good for prototyping makes it terrible for “real” programs: probably the biggest thing that makes Python powerful is precisely that it allows the programmer to defer so many decisions. What kind of parameter does the function take? “A parameter called X!” Not very useful. Even if the parameter is called something like “address_list”, that only hints–it might not actually be a list, maybe the address “list” is in a dictionary and the keys are customer numbers. (Likely.) And even if we really, honestly know the address_list is a Python list–okay, a list of what? Let’s guess dictionaries, Python loves dictionaries. And what will be in the dictionary? Whatever anyone anywhere else in the code might manage to put in there–or remove from there.

And it gets worse: some programmers think it is cool to put “**kwargs” in the parameters, which means we don’t even know what the parameters to the function are! We have to examine every line of code that might call this function to see what the possible parameters are, and even then you will see (you just know it) that some of that code is going to be passing a dictionary that is only known at runtime.

The fact that the programmer doesn’t have to decide what s/he is doing can give the impression that real programming is happening really fast, but it is an illusion. A dangerous and beguiling illusion. Worse, of course, is when such dynamic features are actively abused (see kwargs), but merely deciding to use a simple list, yet having no good way to pin down what is in it, is such a rich place to hide bugs.
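
As a sketch of the kind of code described above–the function and parameter names here are invented for illustration:

```python
def update_addresses(address_list, **kwargs):
    # The name "address_list" only hints: here it is actually a dict
    # keyed by customer number, and **kwargs swallows anything at all,
    # so the signature tells a caller almost nothing.
    if kwargs.get("verbose"):
        print("updating", len(address_list), "addresses")
    for customer_id, address in address_list.items():
        address["street"] = address["street"].title()

# Works -- if you happen to guess that it wants a dict of dicts:
addresses = {1001: {"street": "elm st"}}
update_addresses(addresses, verbose=True)
print(addresses[1001]["street"])  # Elm St

# Pass an actual list and the mistake surfaces only at runtime:
# update_addresses([{"street": "elm st"}])
# AttributeError: 'list' object has no attribute 'items'
```

Nothing in the signature distinguishes the working call from the crashing one; only exercising the code reveals the difference.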

Strongly Typed Language: Python

There is this idea that compiled languages such as C (or C++, all the weaknesses of C without the virtues of being a small and elegant language) are strongly typed but an interpreted language such as Python is not. This is half-right.

In C you have to say what kind of data goes into your variable.

In Python you can put whatever you want in your variable–a string, a boolean, some kind of number, some enormous data structure, a function, or None. Not only can you put whatever you want in there, you can change it at your whim; in one line you might assign your variable an instance of some class, and a couple of lines later (or in a different thread, if it can get a chance to run) set the same variable to 42. Python is very liberal about such things.

But this doesn’t mean Python isn’t strongly typed! It is very strongly typed, it just doesn’t make up its mind about types until the last possible moment, at runtime. Repeatedly. Every time through your loop.
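
A minimal demonstration of both halves of that claim:

```python
x = "hello"    # x holds a str
x = 42         # now an int; Python doesn't object
x = [1, 2, 3]  # now a list -- rebinding is always allowed

# But the typing is strict at the moment of use -- no silent coercion:
try:
    total = "2" + 2
except TypeError as e:
    print(e)  # can only concatenate str (not "int") to str
```

The variable is never pinned to a type, yet the operation is type-checked, strictly, at the instant it runs.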

In fact, Python does almost nothing but constantly check the types of things. It takes much longer to check the types of two variables before adding them than it takes to actually add them. (To check whether they are numbers, whether adding them makes sense, and how to add these particular numbers–assuming they prove to be numbers. Python needs to check a lot before it can do the addition.)
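
One way to see this: Python compiles `a + b` down to a single generic bytecode instruction, and the interpreter re-inspects the operand types every time it executes. A sketch using the standard `dis` module:

```python
import dis

def add(a, b):
    return a + b

dis.dis(add)
# The body is one generic BINARY instruction (BINARY_ADD on older
# Pythons, BINARY_OP on 3.11+): the types of a and b, and therefore
# what "add" even means here, are decided anew on every single call.

print(add(1, 2))       # integer addition
print(add("ab", "c"))  # the very same bytecode now concatenates strings
```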

Deferred Work Doesn’t Go Away

It is presumably important to you that when the Python code runs it does not crash. One would think. In which case, that clever thing of instantiating a class from a variable at one moment and doing arithmetic on 42 the next had better be done right, because the reverse operations will not work. Even doing unclever things, such as misspelling a variable name and accidentally doing arithmetic on a similarly spelled class definition, is a bad idea.
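
A sketch of that second, unclever mistake (the names are invented):

```python
class Count:
    """An innocent class definition sitting in scope."""

count = 10

def bump():
    # Typo: Count instead of count. Python happily accepts this when
    # the file is loaded; the error exists from the moment the line is
    # written but surfaces only if bump() is actually executed.
    return Count + 1

try:
    bump()
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'type' and 'int'
```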

And though Python will catch both of these mistakes if you make them, it will only do so if you exercise the right lines of code with the right (unfortunate) values. And only if the right person is watching in the right way will it do any good.

It is really hard to thoroughly exercise code. And in the case of a very dynamic language like Python the permutations are so great that it really isn’t possible.

Yes, Compilers are Annoying

In statically typed, compiled languages, it is more work to make the compiler happy, but a benefit is the compiler will prevent these sorts of errors. It is less work in total to catch a type problem up-front than to have to do it in the debugger and in vague bug reports from users. Unless you are planning to defer some of the work forever, planning on never finding and fixing some of the bugs…

Yes, Compilers are Inflexible

Yes. And in a good way, if it prevents accidentally doing arithmetic on a class definition.

But what about cases where one needs to be clever? Maybe not so clever as to mess with class definitions at runtime, but something more conventional, such as wanting either a value like 42 or some flag value (such as Python’s None)–isn’t that reasonable?

Yes. And compiled languages allow such things. Some in safe ways, even.

(Some Compilers are Nice)

The Rust compiler is demanding but in exchange lots of bugs simply won’t exist once the compiler is happy.

Rust: Not as slow as Python without being as low-level as C.

Prototypes are Expensive to Operate

I would like to see some hard numbers, but it feels to me like Python must spend a hundred times as much effort constantly checking the runtime type of every bit of data as it does doing real work on that data. Certainly Python is not very efficient, whatever the ratio. How much carbon is released just because of Python?
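
Here is a quick, unscientific sketch of that overhead, comparing a hand-written Python loop against the built-in `sum`, which runs the same loop in C:

```python
import timeit

data = list(range(10_000))

def manual_sum():
    total = 0
    for x in data:
        total += x  # type-checked, dispatched, and boxed on every iteration
    return total

t_manual = timeit.timeit(manual_sum, number=200)
t_builtin = timeit.timeit(lambda: sum(data), number=200)
print(f"manual loop: {t_manual:.4f}s  builtin sum: {t_builtin:.4f}s")
# The builtin is typically several times faster: the per-element type
# checking and loop bookkeeping move from the interpreter into C.
```

The exact ratio varies by machine and Python version, but the gap is entirely interpreter overhead–the arithmetic being done is identical.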

-kb, the Kent who is looking for opportunities to finally get good at Rust.

P.S. Comments are broken and have been for some time. Sorry.

©2020 Kent Borg

Patron Saint of Ham Radio?

May 18th, 2020

Today, 18 May 2020, is Oliver Heaviside’s 170th birthday.



Okay this post will be a bit technical…but only a bit.

One of the giants of science is James Clerk Maxwell; he figured out the science behind how radio waves work. Pretty important stuff. He is immortalized in the famous four Maxwell Equations. Maybe you have heard of them–lots of science-y types have heard of them, but not that many understand them, so don’t feel too left out, this is rarefied territory. (I’m still learning this stuff, and I’m not there yet.) But it seems they are important to the experts who make modern life work.

And Maxwell did them!

Except he didn’t. Maxwell did eight equations, not four. Oliver Heaviside is the one who simplified the eight into four.

People will write otherwise, but it seems to me that it was Heaviside, not Maxwell, who created the whole field of electrical engineering.

In Heaviside’s day the hot new technology was the telegraph. You know, Morse code, dits and dahs. It was slow and cumbersome and expensive, but it could communicate over long distances very rapidly when compared with a horse or ship or even a racing locomotive (that means a train). The slight detail is that the longer the wires got the mushier the signal got. People would try to make up for it by turning up the voltage, and they had other tricks they tried, but it was a seat-of-the-pants, rule-of-thumb world, and don’t ask too many questions because even the most talented “electricians” didn’t really know why one trick might work and why a different did not. The first trans-Atlantic cable was not only very slow but burned out after a very short amount of use. (They turned up the voltage quite a bit.)

Heaviside worked for a telegraph company, and he wanted to figure out how this stuff worked. Precisely how it worked, as in quantifying things. This rubbed some in the industry the wrong way; there was a lot of opposition to quantifying these things. And he did. Both rub people the wrong way and figure these things out. He figured out how to make telegraph wires operate at much faster speeds over much greater distances, without burning them out. The same principles were applied to voice telephone calls, for they had the same problem of the signal getting mushy when the lines got too long, and he explained how to fix that, too.

His solution was to set up what is now called a balanced transmission line. Back when TV antennas were put on roofs, the first kind of cable for connecting them to the TV was a balanced transmission line–300-ohm twin lead, to be precise–a flat cable made of plastic with a wire running down each edge. Don’t tape it directly to your metal antenna mast, it doesn’t like that, but suspended away from metal it is very efficient at getting a very weak signal down to the TV without picking up interference along the way. Heaviside invented the balanced transmission line. And it is useful for a lot more than ancient TV antennas. If you haven’t heard of balanced transmission line but have heard of coax cable (it is perfectly okay to tape coax to the metal TV antenna mast–coax is not as efficient as twin lead, but easier to string), well, Heaviside invented coaxial cable, too.

Around the time telegraph and telephone were still pretty new there was another hot technology: radio. Heaviside was paying attention there, too. At this point sensible people knew the world was not flat but round (spherical, to be pedantic). And people also knew that radio waves travel in straight lines, so radio wouldn’t be useful for long distance communications, right? Wrong. For some frequencies, under the right conditions, radio waves will bounce off the ionosphere and can travel great distances. Heaviside figured this out. He figured out that this should work before people found out that it did work. In fact the layer of the atmosphere (the ionosphere) that does this was originally called the Heaviside Layer.

Back to the Maxwell Equations. The way that Heaviside did all of this was by taking the science and figuring out how to apply it in a precise way to make an engineering discipline: he created electrical engineering. Including inventing new ways of doing the math.

In the early days of Bell Labs they were working magic by taking Heaviside’s work and applying it in a practical way. Some Bell engineers were so impressed with Heaviside’s work, and so indebted to him, that they tried to send him money, but he said no.

At this point Heaviside was old and not rich, yet he said no. Oliver Heaviside could be difficult. Various folk tried to help him, and to the extent it looked like help to him, he said no, even though he needed it. And part of why the Bell Labs engineers were having such a heyday with his work is that, though he might have been brilliant, he didn’t stop to try to make his work easy to understand; it took a while for his work to have full effect.

Back to my claim that he should be the patron saint of ham radio: he made practical much of what the field is built on, and he was a hands-on man. And he died of complications from falling off a ladder.

-kb, AC1HJ

©2020 Kent Borg

P.S. Comments are broken and have been for some time. Sorry.

Why Do They Deny?

September 15th, 2019

I think I have finally figured out something basic about human nature, something that has long puzzled me. I have gone from shaking my head in disbelief to maybe understanding.

Here is the most extreme example:

Why do the people who strenuously argue that Hitler’s death camps never existed also seem to argue that they should have existed?

They are attracted to Hitler; Hitler is most infamous for his genocidal murder; there is clearly something attractive to them in that fact. But even as they are drawn by the infamy, they also go to great efforts to deny it! I mean, they are already going to a very taboo place, why not really go there and MGMGA? (Make Genocidal Mass-murder Great Again!)

Why this strange split?

This doesn’t only happen in the extreme: there is a resurgence of a kinder, gentler (than Nazis) racism these days…but even as these newly outed racists gleefully promote their racism, they also say that they are not racist. They insist! Why?

To get all Star Wars here, I think it is a basic property of “the dark side”. Those who resist it see it as dark and repulsive. But my realization is that for those who embrace “the dark side” it is still dark and repulsive. Being dark and repulsive is somehow part of the appeal.

So in the case of neo-Nazis I think the repulsiveness is also part of the point, but in this case taken to its logical extreme.

And I think that extreme–of murder on an industrial scale, in a network of slave labor camps–is enough to make even Nazis queasy. So they lie. They lie to all of us, as hard as they can, because that is the best way to lie to themselves.

In some extremely dark corner of their already dark souls, they know it is true that it happened, and in some still darker corner they sort of wish to see it again. But first they want to be part of a rampaging mob, they want to be drunk on the high they get in abusing their choice of “the other”, to have violent power over others, laughing with their fellows, being goaded on by their fellows, goading on the others, spreading responsibility. Because they know it is wrong.

It is bad stuff. Few have the stomach to really go there, to go there alone, so they lie to themselves and look for support in others.

A silver lining: there is still some good in most of these people, maybe not much good, but some. (No, Donald Trump, that doesn’t make them “good people”, not on balance.) If they are still capable of being revolted, there is still some good in there.

No, don’t think I am going so far as to assert that sociopaths don’t exist, they do, but most Nazis are not sociopaths, and I suspect most sociopaths are not attracted to Nazis.

My thesis here is that–excepting some pathological, diseased minds–there is good in everyone. Look for it. Try to draw it out. Try to tempt them away from the repulsive “dark side”, for they find it repulsive, too.


©2019 Kent Borg

P.S. Comments are broken and have been for some time. Sorry.