Sunday, March 9, 2025

I Aintn't Dead! I'm even ok, all things considered!

I aintn't dead!

This has been a rough week -- sickness is going through our family, and I have had a handful of days where I had little to no energy to do anything -- but I can happily say I haven't had any days where I wondered if I was going to die.  I've had colds (and other diseases) that were worse than this.

Nonetheless, this little sickness has been severe enough for a daughter to be taken to the emergency room, for a cough that wouldn't let up, resulting in a lot of vomiting.  It's under control now!  And while it didn't get to the level of "scary", it's nonetheless yet another thing eating up energy.

All this has been somewhat of a blow to my efforts to find the right rhythm for my blogging, but then again, it's also given me time to think of how I might want to approach things.  In particular, I am trying to figure out how to balance limited energy with my desires to info-dump my thoughts (some of which have been maturing over a period of years, some of which are picked fresh from last week), and my desires to make and design things.  My goal has been to throw out a blog post every day, and then work on a project for the rest of the day.

When the blog posts are complex, however, this doesn't work very well!  (And it doesn't seem to matter that some of those things are things I've thought about for years!)  I may have to limit those to one per week.  If I want to post more often than that, I think I need to figure out how to get into the habit of throwing out an occasional pithy post.

Meanwhile, I've spent the last two or four days working on translating a cardboard "computer easel" into a FreeCAD model that can hopefully be sent to a manufacturer.  While I want to design my own CAD system from the ground up, I have also been wanting to get familiar with FreeCAD, both because it may prove that my "vision" is redundant, and because I could take lessons from what I didn't like about my experience.  This project has been very helpful to that end!  (Albeit with more than one frustration along the way.)

I've also been thinking occasionally about how I ought to do another Identity Management post.  The first "molecule" I want to discuss isn't necessarily an Identity Management thing, although it draws from the Elements:  it's a data structure!  But it illustrates what can be done with signing hashes.

And I've been pondering the Algebra series I've started.  I intended to throw out a handful of rules and their explanations, with the (yet untested) notion that they'll be useful for getting comfortable with algebra ... but after I described the notion of "symbols", it occurred to me that if I'm going to say a certain symbol is a "number", it would be helpful (and, I daresay, important) to lay a foundation for just how those darn things work, anyway!  And, of course, there's always a balancing act between figuring out how deep a concept should be explored, and how many deep ideas ought to be separate concepts.

For example, when I introduce multiplication, I might have to resist the temptation to dive into the distributive law, because it may make more sense to wait until a bit later, when I have a better place to bring up its motivation.  In particular, I'm coming around to thinking that it's weird to describe why you'd want to multiply "6" with "5+3" when it's pretty obvious that "5+3" should just be "8"!  The motivation becomes obvious, though, when you want to multiply "6" by "x + 3", because now you don't have a nice means to "simplify" things -- it's as simple as you can get it, until you figure out what "x" might be!

Friday, February 28, 2025

Initiating New Projects via NixOS

I have a confession to make!  I kindof get delayed whenever I start a new project.  It can take a little while for me to set things up.

Several years ago, I worked at a cryptocurrency company -- and as a company, we had two goals.  The first was to find a "consensus algorithm" that would be able to approve transactions at about the rate that Visa and MasterCard can -- because waiting a day for a transaction to complete (which is where Bitcoin was at the time) is kindof unacceptable for something that's supposed to be used as currency!  The second?  To explore other things that cryptocurrency can do -- and among them was the possibility of putting software on the cloud, compiling it, and trading it, all managed by cryptocurrency transactions and smart contracts -- and one package management system we were encouraged to investigate for inspiration was NixOS.  And I fell in love with it!

Now, just what is this NixOS thing?  It's actually several things:  a configuration language, a package manager, and an operating system, and maybe a thing or two besides.  As a language, it is weird and complex, and a source of great headaches! -- but the language is also where its power lies:  it can specify exactly what you need, and customize things very precisely.  As a package manager, you can set it up on any Unix-like system and install NixOS packages that will work on that system (which, for Linux, means pretty much anything -- it's a Linux distribution, after all -- but even for Mac OS X, there's a *lot* of stuff available for installation!).  And as an operating system -- well, it's a Linux OS, after all, so you can install it on your computer.

I initially used NixOS as a package manager for Debian and for Mac OS X, but I have "graduated" to installing NixOS on a computer itself.  It was a bit of a challenge, but I don't regret it!  NixOS has  a solution for something that has annoyed me about other Linux distributions (and Mac OS X, too!):  whenever I have a "blank" operating system, I have to remember the applications I installed before -- and while I try to keep notes of what those applications are, sometimes I forget to update the list, and often I have to just "get to work" and run into a situation where I need Package X, but discover it's not installed, and take a moment to install it.

The only advantage to this approach is that, every time I have to install a new version, some of the older software I'm no longer using (often because it was a "one off" for an exploratory workshop or installed out of curiosity) "disappears" simply because I don't get around to re-installing it.

With NixOS, I can specify all the packages I want installed in a single configuration file -- or, if it gets complicated enough, I can break it up into several smaller ones -- and I can also include information on user accounts and preferences for each package!
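
For the curious, here's a minimal sketch of what that looks like -- the package list and the user name are made-up examples, not my actual configuration:

# /etc/nixos/configuration.nix (fragment)
{ config, pkgs, ... }:
{
  # every package I want installed, declared in one place
  environment.systemPackages = with pkgs; [
    git
    vim
    firefox
  ];

  # user accounts and their preferences can be declared here, too
  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];  # "wheel" membership allows sudo by default
  };
}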

Yet, even with NixOS on my system, I still insist on creating a little "shell.nix" whenever I start a new project.  Take my "HIVE" project, for example -- it's intended to be written in Common Lisp, but because it uses OpenGL, I need external GPU drivers and libraries installed as well -- that little "shell.nix" allows me to create a custom command line shell that installs SBCL and these libraries, and even sets up needed environment variables to make sure everything works.  I can specify the version of SBCL I'd like to use, the versions of the libraries, and anything else I might need -- and all these things are independent of the OS I'm currently running!
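
To give a flavor of what I mean, here's a minimal sketch of such a "shell.nix" -- the library list here is illustrative (I'm not copying my actual HIVE file):

{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = with pkgs; [
    sbcl       # the Common Lisp implementation
    libGL      # OpenGL libraries the project links against
    freeglut
  ];
  # environment variables can be set right here -- for example, so the
  # OpenGL shared libraries can be found from inside Lisp:
  LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [ pkgs.libGL pkgs.freeglut ];
}

Running "nix-shell" in the project directory then drops me into a shell where all of this is available, independent of what the host system provides.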

This is much like the "virtual environments" that computer languages like Python and Ruby use, so that you don't get stuck with the out-of-date version of the language that your operating system ships -- or, if your operating system is updated to a newer version, so that you don't get stuck with a project that no longer works because of breaking language changes.  This is particularly valuable when you have a "legacy" project that you don't yet have time to update, and you're wanting to start a new project using a later version.

And this has, interestingly enough, also solved the "temporary package" problem I had before -- I can use a "shell.nix" file to temporarily install an application or two for a particular workshop -- or I can even do something like "nix-shell -p gimp" to drop me into a command line shell where Gimp is temporarily installed -- and once I close that shell, Gimp is no longer available.  (Well, technically, it kindof is still available -- NixOS doesn't automatically delete temporarily-installed applications -- so, assuming I want to use the same version of Gimp I used before, NixOS doesn't necessarily have to reinstall it the next time I use "nix-shell -p gimp".)
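
In case that's hard to picture, the whole interaction looks something like this:

$ gimp --version
bash: gimp: command not found
$ nix-shell -p gimp        # fetches Gimp (or reuses the cached copy)
[nix-shell]$ gimp &        # Gimp is available inside this shell...
[nix-shell]$ exit
$ gimp --version           # ...but not outside of it
bash: gimp: command not found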

So, whenever I embark on a new adventure, one of the first things I do is create (or more likely copy) a "shell.nix" file, and start figuring out what I need for my project.  In the case of a "computer easel", I want to use FreeCAD, which needs Python -- and since I want to keep all the data for running FreeCAD "local" to the project, I had to take some time to figure out how to set up environment variables to inform FreeCAD where my "home", "data", and "tmp" directories were -- and I had to figure out where I wanted them.
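
The relevant part of that "shell.nix" ended up looking something like this sketch -- I'm writing it from memory, so treat the directory layout and variable names as assumptions to check against FreeCAD's documentation:

{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = [ pkgs.freecad ];
  shellHook = ''
    # keep FreeCAD's configuration, data, and scratch space local to the project
    export FREECAD_USER_HOME=$PWD/.freecad/home
    export FREECAD_USER_DATA=$PWD/.freecad/data
    export FREECAD_USER_TEMP=$PWD/.freecad/tmp
    mkdir -p "$FREECAD_USER_HOME" "$FREECAD_USER_DATA" "$FREECAD_USER_TEMP"
  '';
}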

But I have that working now -- and, as a bonus feature, if I wanted to share my FreeCAD configuration with others, I think I just have to share this "shell.nix" file and the above directories.  I'm not 100% sure about that, though, because I'm not 100% certain if I figured out where FreeCAD keeps all of its configuration.

Overall, though, this allows me to have complete-ish control over the setup.  While I have had some surprises over the years, even with NixOS, I have nonetheless appreciated having a single spot where I can maintain a project's dependencies, without having to figure out what's on the particular system I'm currently using!

Wednesday, February 26, 2025

Motivation!

So, for the past few days, I've been trying to overcome a combination of repeated interruptions by various errands and chores, and a strong sense of "Pervasive Drive for Autonomy", where apparently I cannot do something because someone is putting a demand on me, and mentally, I'm not prepared to fulfill people's demands -- even if the person making the demands is me, and the demand is "work on one of those things you know you want to work on, darn it!"

This post isn't about my motivation to do things, though.  This morning, as I was settling in to finally start a project -- deciding to watch a couple of videos before actually starting it, thinking I'm not even sure if I'm going to write a blogpost (or even just post something), because I'm afraid that a blogpost takes away precious energy needed to work on projects -- and besides, I'm not addicted to videos, I can stop any time! -- I came across a YouTube Short, The Problem With Math Textbooks, that resonated greatly with me.

The TL;DW (too long, didn't watch -- wait, isn't this a short video? -- well, maybe it won't be there by the time you internet archaeologists get to this post) summary:

Pure Math textbooks delve right into the axioms, which is a problem, because students are left thinking that we could just pluck axioms from thin air, giving us infinite possibilities.  Where do these axioms come from?  We need to describe the motivation that led to these axioms!

This is, indeed, an approach I've been wanting to take with mathematics for years.  When I took a "Physics for Scientists and Engineers" class as an undergrad, my roommate explained that he was taking the Physics class that didn't use calculus -- and thus, the math was significantly harder! -- and this led me to the conclusion that both physics and calculus would benefit if they're taught as physics gives birth to calculus -- or, perhaps, rather, as both are given life as twins!

But I was initially at a loss as to how to find motivation for everything else -- when I realized I had answered this for myself years ago too!  The motivation comes from the history of mathematics.

  • Euclid's Geometry was motivated by an attempt to standardize the measurement of the Earth (hence the geo of geometry!) -- and its alternatives were motivated by attempts to prove that Euclid's parallel postulate followed from his other axioms, only for it to be discovered later that the alternative geometries have their own physical analogs.
  • Calculus was motivated by physics, and each refinement to the idea by mathematicians like Euler, Riemann, Gauss, and Lebesgue was made to address philosophical concerns, and to refine the techniques.
  • Modern Abstract Algebra was motivated by solving the classical Greek problems of trisecting the angle, doubling the cube, and squaring the circle, using only a straight-edge and compass.
As for everything else?  Well ... I'm not sure if I can tell you ... because I'm not as confident on the history as I'd like to be.  The problem the field of Mathematics has is that, as the math becomes more refined and purified, the older techniques are jettisoned -- and little to no effort is put into understanding the history!  A good example of this is in Calculus itself -- everyone who goes through the mathematics classes knows about epsilon and delta proofs (my blog's nom de plume "Epsilon Given" is taken from this!) -- but far fewer know about Newton's fluxions or Euler's infinite and infinitesimal numbers, among other approaches to the subject.

Indeed, my own understanding of the history of mathematics is a mixture of "The History and Philosophy of Math" class I took in my first year of college, and being self-taught.

To this end, I have spent some time trying to collect older mathematical works by early mathematicians, with the hope of exploring the more "intuitive yet unrefined" approaches to mathematics.  I have Euclid's Elements, a work or two by Archimedes and another mathematician I can't remember, Euler's "Introduction to Algebra", a book of Leibniz's works on calculus, and Sir Isaac Newton's work on calculus, A Treatise of the Method of Fluxions and Infinite Series, With its Application to the Geometry of Curve Lines -- wait, shouldn't that have been Principia?  Well, that was Newton's physics book, published in Latin, but using complex, difficult, and sometimes incorrect "simple" math, because Newton jealously guarded his calculus during his lifetime -- and thus, A Treatise of the Method of Fluxions was published posthumously in English (albeit translated from Latin).

Sometimes the motivation is simply "I don't know.  It seemed like an interesting problem at the time!"  And that, too, is good, because it's a reminder that sometimes we just have to play, and see where our games take us!

Monday, February 24, 2025

Thoughts on Practical Interface Design

Apple has some interesting notions about software interface design that I find amusing -- and also deeply irritating.  I appreciate their reasoning, but the axioms they use as a basis for their reasoning are deeply flawed!

One is "the five closest points to the mouse cursor are where the cursor is, and the four corners of the monitor".  This makes intuitive sense -- because you can "fling" a cursor to a corner, and it will come to a hard stop -- and this is the foundation for putting the menu of the active app at the top of the screen, rather than at the top of the window.

The problem with this, however, is that it only makes sense when the screen is the size of a postcard (which was literally true of the first Macintoshes -- ok, maybe my memory is skewed here, but they were nonetheless rather small), or maybe even the size of a VGA monitor.  But, as I discovered when one of my employers provided me a nice, giant, curved monitor, and a Mac laptop that I could plug into that monitor ... this entire dynamic changes!  When you're in the lower-right corner, this principle puts the menu all the way up in the upper-left corner -- and when you can literally choose between "distance moved on screen" and "distance as the crow flies" to describe the distance one needs to travel, and when it might even be reasonable to describe said distance in "yards" or at least "feet" instead of "inches" -- all of a sudden, this one principle requires me to pick up my mouse several times just to reach the menu.

What's worse, I have come to appreciate a feature under Linux where I can hover my mouse over a partially covered window, and it becomes "active" without being pulled to the top -- such a feature is useful when you have a browser full of helpful information sitting over a command line window where you're trying to use that information.  And while this can be sortof emulated under Mac OS X, it can only be partially emulated, because if you had this feature, and needed to cross several windows to access the menu, the menu would have changed several times by the time you got to it!

Fortunately for me, when I was given a curved screen, my employer also provided me a desktop computer, and promptly had me install Linux, so I was able to install KDE, which gave me the Windows-style convention of "menu on each window".  It's tempting to say that this is Linux's style, or at least KDE's, but it's more accurate to say that the driving force behind Linux user interface design is flexibility.  Indeed, if I preferred MacOS's design decisions, I can easily find them in Gnome, an alternative to KDE.

Another principle is "we have studies that show it's faster to use the mouse than it is to use the keyboard to edit text, but everyone thinks it's faster to use the keyboard".  The "studies" they rely on involve asking random people to do mundane tasks like "go through this paragraph and replace every 'e' with an underscore '_'" -- and, surprise, it's easier and faster to do this with "point and click" than it is to arrow down to each letter, and then make the replacement.

Of course, as an avid user of Vim, I cannot help but ask "Why not just visually select the paragraph, type 's/e/_/gc', manually approve the search-and-replace a couple of times, and then hit 'a' to change all the other matches once you're satisfied with the result?"  The Apple response to this, though, is "We're designing an interface for the 'average' person, not the power user!" -- but the proper response to that is "Yes, it's nice to have simple-yet-painful interfaces for the 'average' person who is going to do anything only once or twice -- but we need to cater to the power users, too, because eventually, in at least some tasks, the 'average' person is going to want to cross over the line into 'power user'!"
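
(For the non-Vim-users, that whole dance looks something like this sketch, with Vim prompting for confirmation at each match:)

vip                " visually select the paragraph under the cursor
:'<,'>s/e/_/gc     " typing : in visual mode inserts the '<,'> range for you
                   " answer y or n for the first match or two, then press a
                   " to apply the substitution to all remaining matches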

So, yeah, I'm not entirely a big fan of Apple's user interface design principles.  They sound good in the abstract, but they have led the designers astray to produce some awful designs!

Since I'm in the process of trying to figure out how to create my own Dual-Quaternion-based 3D CAD-like system, I've given some thought to how things ought to be designed ... and I think the guiding principle I'm most attracted to is flexibility:  Don't try to predict what any particular user is going to need; instead, provide the tools necessary for the user to create and customize their own interface!

The fundamental principles behind all these are perhaps the most important:  first, enable flexibility; second, put as many things under your fingertips as possible; third, it's nice to be able to select anything and everything and copy them; and fourth, don't clobber user data -- and everything is user data.  I also have a couple of principles that I don't yet know how I'll implement -- the fifth is take advantage of the strengths of every input method (for years, the default treatment of touch screens under Linux has been to treat the touch screen as another mouse, which is annoying when you touch the upper half of the touch screen, and it sends the information to the second monitor that isn't a touch screen), as well as a sixth, the user should always have complete control over the program (which will probably require me to figure out how to install and use Real Time Linux).

So, with these principles in mind, I have had the following generic thoughts about the user interfaces I'd like to try to implement:

  • Anything that a user can do, to the practical extent possible, should be captured in undo trees of some sort -- perhaps even made version-controllable,
  • All functions of an application ought to be available to the user, to be bound to any key, or touch, or mouse movement, or gesture, available to the user (as inspired by Emacs),
  • The command line is a special interface:  it allows us to describe what we are trying to do with text, and this enables scripting, as well,
  • For every workflow, it should be possible to work out a "language" that can translate to keys on a keyboard -- much like how Vim approaches text editing -- although that "language" might differ from team to team, or even individual to individual, or project to project,
  • There is only one place closest to the cursor, and that's where the cursor is at this given moment.  It should be simple to pop up a circular menu at that particular point, probably by right-click, which can then open up to other circular menus -- and every such menu should have a "computer" icon on top (at 12 o'clock), an "application" menu just to the right (at 11 o'clock), an "environment" level just to the right at that (10 o'clock), and an exit button at the very center,
  • Perhaps every "icon" should have a Unicode code point, to the extent possible, and every help text, warning, and possibly graphic should be selectable by mouse and copyable to the right medium,
  • It should be possible to create menu-ish panels that consist of easy-to-access functions, information to be watched or examined, and icons to access various things; every such panel should be "locked" into place, with a little "lock" icon, that must be explicitly unlocked before menu items can be added or removed, or the panel itself be moved or resized (I have been both impressed by Ansys's ability to manipulate "default" menu and information panels, and annoyed by how easy it could be to accidentally change them, without knowing how to undo the changes, or even having the option to undo them! -- which, to be fair, I think Ansys has; I just don't remember how to use it),
  • It should be possible to view all panels available at any given moment, even if some of them are hidden, via some sort of "explosion" mechanism that keeps everything in place unless someone moves them (much like MacOS and KDE has for windows, although both seem to randomize what they show, and neither provides ways to organize the windows on the screen in this mode, without mechanisms to preserve these changes between "explosions"),
  • Everything associated with a project should be kept in a structured file format that can be explored by command-line tools, and in particular, as text files where possible, and while automatic strategies can be provided for inserting new things into these files, any changes made by a user need to be respected -- and any changes that a user might make that would break the system (e.g., syntax errors, system display changes, etc) need to be handled gracefully by the system -- so that the user will feel free to experiment without fear of everything coming crashing down (to the extent possible -- we are dealing with complex systems, after all, and we cannot fully understand what we are doing!) -- in other words, as I currently envision it, I intend my projects to be "text editors" that can keep track of and edit "non-text" information.
I think some of these are contradictory, some of them may prove to be impractical, and while I have given these notions a lot of thought over the years (well, some of them -- my menu and panel ideas are relatively new), I do not know how they will work out in practice.

But then, if I knew what I was doing, it wouldn't be research, now, would it?

Friday, February 21, 2025

Identity Management: Introduction to Molecules: How Atoms Come Together

So far, I have covered all the fundamental elements that are needed for the Calculus of Identity Management.  Recall that they are:
  • Randomness, because unpredictability is hard to break,
  • Universal Unique Identifiers (UUIDs) and Nonces, for identifying unique things, and for providing unique tokens,
  • Cryptographic hashes, to check that documents have not been altered, or to keep passwords from prying eyes,
  • Symmetric keys, to pass data between two people, keeping prying eyes from understanding it,
  • Asymmetric keys, to share symmetric keys in a safe-ish way, and to confirm other people's identities,
And, as an honorable mention,
  • Steganography, because it's sometimes fun to hide data under people's noses!  And, er, because it can sometimes be useful to "watermark" pictures!
None of these things, by themselves, provide identity management.  They need to be brought together to do interesting things!  And this is where things get interesting -- because while we only have a small number of tools to manage identities, they come together in rich and complex ways -- in much the same way that atoms are relatively boring, but come together to provide innumerable molecules that make the universe possible.

As I cover these patterns, I won't be doing it in any particular order (beyond my personal whimsy); indeed, as these are sufficiently complex, there might not be an "order" to them that makes sense!

Wednesday, February 19, 2025

Work in Progress: Debugging Sweet Expressions

This is intended to be a brief report, so I'm not going to explain what Sweet Expressions are or why I am interested in them (although such an explanation does, indeed, call for a "Curious Treehouse Musing") -- but in the process of trying to get this library up and running, I ran into a bug: it turns out that Sweet Expressions cannot process a comment that comes right after a regular sexpr.

In the process of trying to wrap my head around the Sweet Expressions source code and figure out where and how comments are processed, I created a little macro to help me see how functions are called.  (The macro relies on two special variables; I've included definitions for them here so the snippet is self-contained:)

(defvar *debug* t)            ;; set to NIL to silence the tracing
(defvar *current-level* '())  ;; stack of indent strings, one per call depth

(defmacro debug-defun (fn-name fn-args &rest fn-body)
  "Like defun but adds debugging info around the function definition."
  (let* ((first-line (car fn-body)) ;; a (declare) form has to stay the first form, so check for one
         (has-declare (and
                        (listp first-line) (eq (car first-line) 'declare))) ;; declare check!
         (body-bits (cdr fn-body)))  ;; the remainder of the body
    `(defun ,fn-name ,fn-args
      ,@(if has-declare `(,first-line))  ;; (declare) form?  Put it here!
      (if *debug*
        (progn
          (push "  " *current-level*) ;; "increment" the level
          (format t "~{~a~}IN FUN ~a~%" *current-level* ',fn-name)))  ;; print out level/name of fn
      (let
          ((function-result (multiple-value-list (progn ;; We need to capture the return value
                                                  ,@(if (not has-declare) `(,first-line)) ;; Not declare form?  first-line goes here!
                                                  ,@body-bits))))  ;; put last of body here
        (if *debug* (pop *current-level*)) ;; "decrement" the level -- we're done calling
        (values-list function-result))))) ;; return the result!

This macro can easily be "deactivated", too, by commenting it out and uncommenting this macro:

(defmacro debug-defun (fn-name fn-args &rest body)
  `(defun ,fn-name ,fn-args
     ;(format t "Hello, ~a~%" ',fn-name)
     ,@body))

But it turns out that this macro probably isn't as helpful as I thought it would be: I still struggle to see what value each symbol has, or what the algorithm is doing, at any one moment. It is a byproduct of how I am used to debugging code: with "print" statements! In my years of using PHP on web servers, Python on Amazon Web Services, and JavaScript, it's very easy (and sometimes necessary! -- you don't have run-time access to an Amazon "Lambda" instance, among other scenarios) to use print statements or logs to show the states of variables.

However, in my previous internship, we were using Qt in a Windows application -- and while I probably could have used logs to get my bearings, and I sometimes even modified labels to confirm that I was looking at the right spot, it was more convenient to put in a breakpoint, attach to the running process, and step through the code. This, in turn, made me wonder how I can do that in Common Lisp -- and even accept that this might be the easiest option for debugging this issue.
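
(For future reference -- mostly my own -- the built-in entry points are BREAK and STEP. A minimal sketch, not taken from the Sweet Expressions code; the function and variable here are made up for illustration:)

(defvar *debug-parse* t)  ;; made-up toggle, just for this example

(defun classify-char (ch)
  "Toy stand-in for a reader internal."
  (when *debug-parse*
    (break "About to classify ~S" ch))  ;; drops into the interactive debugger
  (cond ((char= ch #\;) :comment)
        ((char= ch #\() :open-paren)
        (t :other)))

;; and from the REPL, STEP walks through an evaluation form by form:
;; (step (classify-char #\;))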

Nonetheless, I appreciate creating that macro, even if I'm not going to use it. It was an interesting learning experience, helping me to better understand the power of Common Lisp macros and how to write them!

So, today, I'm going to try to do a "deep dive" into Common Lisp breakpoints and stepping. I also need to figure out how to run the Sweet Expression tests, or figure out my own way to do so -- and I need to create a test or two that would cause "Sweet Expressions" to choke, and then fix the issue.

Also, at the Experimental Aircraft Association, this week is Virtual Ultralight Days! So I am going to try to catch a webinar or two on flying ultralights. (I have a special interest in aviation, and have a particular soft spot for ultralights, because my Grandpa built and flew one.)


Fun fact: For this post, I had to figure out how to insert code into Blogger; I found this blogpost and this help forum to be useful, although I had to figure out on my own that I needed to add the CSS into the blog's HTML, because the "Add CSS" option recommended in the instructions didn't exist...

Tuesday, February 18, 2025

Of Mice, Keyboards, and Men

Back in the ancient days when Intel 486 and Pentium processors were state-of-the-art, monitors were vacuum tubes, 3 1/2" floppy disks were the best option for transferring data between computers, and the best option for connecting computers was dialing into BBSs (Bulletin Board Services) via telephone and shareware called "QModem", I had an interesting realization:  that someday, all this wonderfully advanced -- and quickly advancing! -- computing technology would mature to the point where most people wouldn't feel pressure to upgrade every six months, or lament over the expensive computer that became out of date a month after purchasing it (which always seemed to happen whenever anyone purchased anything).

How did I reach that conclusion?  I realized that keyboards and mice were already there, and it would only be a matter of time before everything else got to that point, too.  It took about two or three decades to do so, but we're at that point today!  Well, monitors can probably advance a little bit to Augmented Reality setups that would essentially make 5-monitor systems portable -- or, for physical monitors, to have full-color movie-capable electronic ink (and now that the patents are close to expiring, maybe we'll finally start seeing advances in that direction!) -- but we have somehow even reached this point for silicon chips!  While I expected this to happen for silicon eventually, we didn't quite reach that point for the reason I expected:  I thought that processors would just get "good enough" to the point where most people wouldn't care, but instead, via the magic of Moore's law, we pushed them to the limits of our knowledge of physics.  I don't think I fully appreciated the role that 3D video games in particular, and the heavy computing needed for engineering and meteorology in general, would play in pushing technology to this point.

So, naturally, knowing about this technology plateau, I've given a lot of thought to how I'd design a keyboard and a mouse.

Wait, what?!?  Why the heck would I spend so much time and energy on stale, mature technology???  Do I expect to significantly advance the state of the art of these input devices?

No, no I don't.  I am interested in keyboards and mice because I want to customize them -- and, perhaps, to make it easier for others to customize their input devices, too.  Granted, this isn't for the faint of heart -- anyone who is perfectly satisfied with their $15 squishy keyboard and plain $10 mouse probably won't be interested in pushing these things to their limits -- but it can be fun for those who wish to explore, and may even make using these things a little more comfortable, particularly for those of us who are using these devices for several hours a day.

When I started working at my last position, the IT Department (aka Joe) set up a nice computer with a GPU, two monitors ... and a spongy keyboard and a simple mouse.  My wife (who was also a co-worker at the time) suggested I ask Joe for a better keyboard and mouse -- but I didn't do so, for two reasons.

First, I have a perfectly serviceable keyboard and mouse that I had chosen both for their features and for their portability.

My keyboard, something like this Keychron K8 with blue key switches, was "only" $50 when I purchased it several years ago -- at the time, I was expecting to pay $200 when I finally decided to pull the trigger on a mechanical keyboard -- and before I pulled that trigger, I purchased a "sampler" of twelve different kinds of keyboard switches, each with their own levels of "tactileness", "hardness", and "clickiness" -- ultimately, I chose the blue keys, because they had a nice internal tactile "click" that lets me know I made contact without having to bottom out, they required only the softest touch to be activated, and they had a nice loud "clickity-clack" sound while typing.

My mouse, a VicTsing Pioneer, "only" cost $35 -- this mouse no longer seems to be available, but as far as I can tell, it is similar to this Rapoo MT760 mouse, which looks like it's $50 -- and it was chosen for its nice horizontal thumb scroller (which I use regularly), a couple of thumb buttons I wish I could figure out how to use better, and an overall pleasant feel -- except when it freezes up (which, unfortunately, it does fairly regularly, particularly when the power is getting low).

Second, to the degree I'm interested in getting a different keyboard and mouse, I want to get something significantly different!  I want a split keyboard with staggered columns instead of staggered rows, with each thumb having half a dozen small keys available (as opposed to just a giant spacebar and a couple of hard-to-reach modifiers), lots of layers, and the ability to attach several devices via Bluetooth and USB so that it could pass everything on to the computer the keyboard is attached to -- essentially, making it as if the keyboard, mouse, and headphones all shared a single dongle or Bluetooth connection -- and I want a mouse that has three push-button horizontal scrollers for my thumb, two buttons each for my two fingers, and a vertical scroll that could be tipped left or right as well as pushed straight down -- or, perhaps even have three vertical scrollers, each being tippable either direction -- and the mouse should have a wireless dongle that can fit into the keyboard.  Lately, I've been wondering if I could make all the buttons pressure sensitive -- allowing any one button to play the role of "Wacom tablet pen", but for mouse movement.

Oh, and I'd like to try a Space Mouse as well!  (Which was potentially relevant for my internship!)  I'm not entirely sure if I'd like it, but I like to experiment -- it's just that I haven't yet been sufficiently immersed in the 3D world to justify the expense!

So, yeah.  I opted to bring my own keyboard and mouse, rather than ask for equipment I would like (and would have had to leave behind, anyway).  I had to buy a Bluetooth USB dongle to get my setup to work, but everything worked fine!  (Except when it didn't.)  I want too much customization to expect it from IT, and for what little customization I do have, I'm comfortable with what I've found so far.

Incidentally, I used to hate Bluetooth, but it's grown on me.  While I would still rather use a USB wireless dongle than Bluetooth, I'm to the point where anything I make that has a dongle, is going to have Bluetooth, too.

Monday, February 17, 2025

Transcendental Computer Languages: Or How I Learned to Stop Worrying and Embrace the Lisp

When it comes to computer languages, there is a mantra popular among software engineers:  "Use the right tool for the job!"  For every project you might want to work on, there's a computer language just right for that project -- what's more, for every aspect of that project, there's the perfect language!  Every software developer should have a dozen or two languages in their toolbox, ready to go, to do everything effortlessly using the best tool possible.

Computer languages do seem to "specialize" for different "use cases" -- Assembly language is a language unique to a given processor; a Systems language is "close to the metal", requiring you to do "bookkeeping" chores by hand, but at least gives you "if/else" branches, "for" loops, and "functions"; and from there, higher-level languages manage memory for you, and have a lot of features built in or easily available as libraries.  If you're doing something that's "throw away after using once", or something so dominated by waiting for user input and internet communication times that the processor is mostly idle, you don't really care if you automatically stop every few moments to clear out some memory -- but if you're working at the inner core of the operating system, you'll want to use a Systems language where every bit of memory is managed by hand, and every moment is timed perfectly ... and where you can "dip into" Assembler if you're doing something particularly esoteric.  This naturally creates a hierarchy of languages, from "low-level" to "high-level", with the most manual at the bottom, and the most hand-holdy at the top.

So you'd use a Systems language when writing an operating system, right?  And you wouldn't use a high-level language when working on the operating system kernel.  And you'd want to use a language specializing in web development for creating and managing websites, or a data processing language to do statistics and analysis, or a UI language for designing user interfaces, or if you want to, well, ....

Yeah, about that ... for all this talk about "The right tool, darn it!" there's an awful lot of "settling" into just a handful of tools.  C for systems programming, PHP and HTML/CSS/JavaScript for web programming, MySQL for databases (put aside that PostgreSQL may be a bit better), Python for data crunching, C++ for some applications ... and I'm not at all convinced that some of these are really the "right tools".  JavaScript, in particular, dominates its space not because it's the world's greatest language for capturing interface design and web interaction, but because it's pretty much the only option offered in its space.  C and C++ dominate where they do mostly because of a mix of inertia and Tradition, tradition!, particularly with languages like Rust on the horizon, promising to offer systems programming with the safety of scripted languages, somewhat.

The funny thing about those languages is that most of them are "Turing complete" -- which is a fancy way of saying that every language can do anything any other language can do -- and because it can take so much time to master a language, and that language environment becomes a comfortable place to live, and it's such a hassle to download and install a new language ... all too often, the "best tool for the job" is simply "the one the project's already written in" or "the one I'm most comfortable with" (or, in the case of companies, "the three or four languages we think most of our software engineers are comfortable with") ... and because many languages are "general purpose", they can do everything you need to do, even if some of what you do is "against the grain" of the language itself.

Yet, despite this, there are all sorts of reasons for learning new languages!  Many years ago, when I was starting out as a web developer and software engineer, I stumbled onto a weird "functional" language called Haskell.  At the time, my brother-in-law asked me "Why are you learning that?  Where are you going to use it?", and I don't think I answered the question well at the time (or even perhaps at all), nor do I think I can coherently answer it right now.  As vague as I have to be about the answer, however, I can at least provide a few suggestions:  it's a new paradigm that, if I understand it, can help me be a better programmer; it's something new and interesting, and I crave variety, because one can't live life solely dedicated to only things that are "useful" (although I probably crave variety more than most people); as a mathematician, I couldn't help but admire its pure mathematics roots (yet, as an exploratory software engineer, I simultaneously despise those roots, figuring one shouldn't need a PhD in mathematics to be comfortable programming!); it's weird, and I love weird things; and who knows, maybe it will prove to be useful after all?

And besides, eventually, it did prove useful!  I spent a few months using Haskell to help implement a "consensus" algorithm for a cryptocurrency ledger.  Also, several years ago, I started seeing weird themes crop up in JavaScript meetups:  algebraic types -- for better and for worse, Haskell-ish things were making their way to the web!

But those use cases were well into the future.  In the meantime, as I was first learning Haskell, I gradually developed the notion of a "transcendental" language -- a language that could reach down into the lower levels of the system, and eke out performance there, yet stretch out to the highest levels of abstraction, and provide simplicity without worrying about all those "fiddly bits" that make low-level programming such a pain.  Ironically, I would shortly conclude that Haskell, as powerful as it may be, isn't quite transcendental:  it's too rigid, it requires too much understanding of higher mathematical ideas, and it's too "bulky" (someone once pointed out to me that the compiler is about a gigabyte in size).

But my exposure to Haskell introduced me to a language that is transcendental.  While Haskell is the first pure functional language, the first functional language came out in 1958, "specializing" in "LISt Processing", created a year after a language specializing in "FORmula TRANslation" came on the scene.  Because Haskell is a relative newcomer compared to these languages, when Haskell enthusiasts extolled the virtues of "functional programming", they relied heavily on essays describing the power of LISP.  And thus, I started down the path of learning about Common Lisp and Scheme.

(I had technically encountered LISP before, in a book comparing four different languages, but the book was so focused on showing how you could do the same things in four vastly different languages, it somehow made the languages seem equally capable, so at the time, I wasn't convinced I ought to explore LISP.  Having said that, it's been years, and I may be misremembering -- or may have misunderstood -- the tone of the book.)

Unlike "FORTRAN", which really was a language that specialized in formula translation (and to the degree it departs from this, there's the saying that "No one knows what language scientists and engineers will use 100 years from now, we just know it's going to be called FORTRAN"), LISP didn't exactly specialize at all.  LISP introduced pretty much everything we expect in a modern language -- functions as first class citizens, garbage collection, symbols, variable and keyword arguments to functions, anonymous functions, native data structures and the means to create new ones -- well, maybe not everything we expect -- things like object-oriented programming and exception handling didn't come until later -- but it managed to literally absorb those things as they came down the pike.  And LISP includes, even now, a few things most other computer languages still don't have, such as macros written in the language itself -- taking advantage of the fact that every program written in LISP is a list of lists, which are essentially trees, which are easy-to-manipulate data structures -- and thus, it's possible, even easy (or at least, easy-ish, because some problems are inherently hard), to create "mini languages" that are just as much a part of the language itself as any other thing you might do with that language.
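
To make that concrete, here is a toy macro of my own devising (not from any library):  a brand-new control construct, added to the language by the language, simply by rewriting the code-as-lists before compilation:

(defmacro repeat-until (test &body body)
  "Run BODY, then run it again until TEST becomes true."
  `(loop ,@body
         (when ,test (return))))

;; It reads as if it had always been part of the language:
(let ((n 0))
  (repeat-until (>= n 3)
    (print (incf n))))   ;; prints 1, 2, 3, then stops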

Thus, LISP has this ability to reach into the very bowels of a system, even resorting to Assembler if it has to, yet it can easily reach the highest heights of abstraction.  LISP is a transcendental language!

LISP isn't the only one, to be sure.  I wondered whether FORTH should be considered one, because it's low-level and doesn't have garbage collection, but a demo of FORTH on an Arduino with 2 kilobytes of memory convinced me it deserves to be included in the category.  I also figure Smalltalk, too, should be included, although it's not well known as something that can reach down to lower levels -- but it's not hard to see how that ability could be added to a flavor of Smalltalk, if it is needed.  Are there others?  I don't know.  I do know, however, that some of my favorite languages -- APL and J, in particular, and Elixir as well -- have ideas I appreciate, but nonetheless don't manage to transcend the differences between different levels of language.  (And for Elixir, I wonder if I'd change my mind if I understood the language better.)

It is interesting to observe that these three transcendental languages have been used to write operating systems -- this is only natural, considering their ability to reach down into the bowels of things -- and these operating systems are known for their flexibility, which is the result of their ability to abstract ideas.  This doesn't necessarily mean that only transcendental languages can be used to write an operating system -- BCPL, C, C++, PL/1, and even Pascal and FORTRAN are low-level counter-examples, and I'd suggest that Python and Ruby (used to build things that aren't quite operating systems, but come close) can be counter-examples as well -- but overall, if a language truly is transcendental, it's going to have a stronger "gravity" towards becoming an operating system than other computer languages.

But then, this is likely a red herring, anyway.  At some point, I think I'll write a post explaining why all languages want to be operating systems!

Saturday, February 15, 2025

Ideas Thrive When They are Free

I have been asked by several people if I'm going to patent my work.  The answer is "no", and I have both philosophical and practical reasons for this answer.

I know this is going to be difficult to believe, but ideas are living things.

Just what does it mean to be alive, anyway?  Living things take energy from the environment, and use it to alter that environment.  They reproduce.  Perhaps they mate (this may be optional, but exceptions seem to be rare).  They grow.  They die -- although dying, in some sense, is probably optional as well.

Plants and animals are obvious examples of living things.  They live, grow, reproduce, take energy from the environment (sunlight for plants, plants and animals for animals), and they die.  But I would propose there are less obvious examples as well:  cars and houses and cities reproduce (via manufacturing) and mate (via engineering), they take fuel, and they alter the environment -- but these things are only living because, much like every living cell of a human has mitochondria living quasi-independently of that cell, every car, house, and city, so long as the humans inside them maintain them, shows signs of life -- and when they are abandoned, they die, no longer taking in fuel or exhaling, no longer being "healed" from wear and tear, rusting away and crumbling from abandonment and neglect.

Can ideas be alive?  They live in people's heads, they affect the actions of humans, when they swirl together they can create new ideas, they can inspire the creation of physical objects (some living, some dead).  And they reproduce -- when a human shares an idea with another human, two humans now have it, then four, then an entire culture -- and when humans no longer pass them along to the next generation, they die, or at least go into hibernation.  Ideas can be dangerous, too, as the existence of Communism has proven, time and time again.

Now, if ideas are living things, it's natural to ask:  What makes them thrive?  What limits their growth, or even kills them?

I would propose that ideas thrive when they are shared, when they are taught, when they are put in books and on the internet, when they are embodied in physical machines ... and they wither away and die in obscurity because no one thought the idea was worth preserving ... or because no one was allowed to share it!

We have a lost work of Archimedes that was discovered by x-raying a centuries-old prayer book -- at some point, a monk decided to reuse a random book for that purpose, because parchment is hard to come by -- and while we don't know what the monk was thinking at the time (it could very well be "we already have four copies of this, and the monastery the next town over has two" -- or it could very simply be "eh, this is just arcane mathematics, no one would care if I used it for this") -- the monk nonetheless decided the world wouldn't miss one less copy of that work, and, ironically enough, accidentally preserved it so that it could be restored in the future, by a culture that would value it!

Now, it so happens we have two entrenched legal traditions that are particularly poisonous to ideas:  patents and copyrights.  Both versions of idea "protection" can snuff out the potential of an idea even before life can be breathed into it.  Thankfully, they eventually expire, so we don't stagnate forever, but we suffer until that happens.

Examples of how patents have hampered ideas are numerous, but one that's dear to my heart is the airplane.  When the Wright brothers successfully made their first flight, they got a patent, and they diligently tried to get everyone to purchase a copy of their first flyer ... and sued everyone who attempted to fly on their own ... including anyone who came overseas from Europe.  They never really got any traction, until Congress decided to buy out their patents, and create the National Advisory Committee for Aeronautics (NACA).  NACA then went on to research the heck out of all things airplane, and make that research free, available to any manufacturer who wished to make an airplane.

Why did Congress resort to such drastic action?  A hint can be found in the etymology of aeronautics terminology:  if you look closely, you'll see that about half of the words originated from French!  Why French?  Because while the Wright brothers were eagerly suing everyone trying to fly an airplane in America, in Europe, the patents didn't apply, so experimentation took off.  It got to the point that, with WWI looming on the horizon, Congress noticed that Europe's airplanes were significantly more advanced than anything in America.  We needed to do something to catch up!  And that "something" was effectively the neutralization of patents for all things aviation.

Another example that's dear:  software, which has always been subject to copyright, and has recently (approximately 1980, if I recall correctly) become subject to patents as well.  But what is the most successful software company in the world?  Microsoft -- a company that spread like a virus because of piracy, just so that people could use their computers.  What is the most successful operating system?  Linux -- it's on pretty much everything but desktop computers -- and it's a system that has grown by explicitly rejecting the protections of copyright.  Patents were harder to deal with, but eventually Linux became popular enough that major companies just adopted the system in their licensing pools, so that no one would have to care about potential liability of using patent-infringing software.  

Rather than a boon for inventors and writers, copyright and patent laws strew minefields across the landscape -- creators always have to be wary of predatory publishing contracts, patent trolls, and accusations of stealing story and picture ideas -- and what's worse, if you're going to sue someone over a violation, you'd better have plenty of money and patience to do so!  All these things are hassles I just don't want to deal with.

What's more, the biggest fear I have in my endeavors isn't that people will steal from me, or pirate my work -- my greatest fear is that I'll languish in obscurity.  Every book that's in a library, every Github repository, every blogpost and yes, every book sale (new or used) is an attempt to get away from that obscurity.  If someone pirates my work, why should I care?  More people get to see it!  If someone copies me, it's because my idea is a good one -- and while someone is copying it, I'm already ahead in refining it, and while someone is trying to push a stolen idea, I'm already working on the next one!

That is why I'm not going to patent anything I do.  I am going to declare everything technical "prior art" (I call this "patent preemption"), any software will be licensed under the MIT Software License (the one that provides maximum liberty), and any non-software material will be licensed under CC0 (the least restrictive version of the Creative Commons licenses).  My only concern about piracy is that companies like Amazon like to demand exclusivity for publishing -- and if they find the work on some pirate's website, that may make them a little temperamental -- but I figure that in those cases, a "Cease and Desist" letter may be called for, if only to show the Amazon bean counters that these things aren't distributed with my permission.  And in environments where I may speak with publishers and authors who have every reason in the world to avoid discussing ideas, I may even request people to sign "Free Disclosure and Use Agreements", where I waive any right to sue over sharing my ideas -- on condition that they don't sue me, either! -- and this, so that we could talk freely about anything, and let the chips fall however they may.

What's particularly ironic about the concerns about story idea "stealing" is that it's not particularly unusual for two different people to get a similar idea independently of each other, at the same time ... and even then, particularly for something as unconstrained as story writing (physics can impose constraints that give the creators fewer options), two different people will have different takes for the same idea!

If you find something inspiring on this blog, or in any of my work, grab it and run with it!  I will take it as a compliment.  It will mean that something I create is interesting to other people!

Friday, February 14, 2025

The Reality of Free Will

Several years ago, I encountered an interview with Sam Harris by Dave Rubin (this may be the interview, but I'm not 100% sure), who made a rather convincing-to-me (at the time, at least) case that there's no such thing as free will.  He talked about determinism, how quantum mechanics isn't enough to save us from that determinism, and that even if we had a "soul", that soul is still telling us what to do -- so we have no free will whatsoever!  He then insisted that if we just embrace this, we can be more forgiving of everyone around us, and in general, live a happier life.

It wasn't "convincing" enough to me to agree with his conclusions, but for a week or so, I kept catching myself thinking "I have to do this because I do not have free will" or "I can't do this because I have no free will" and I started wondering -- what the heck was going on?  Apparently, if free will doesn't really exist, I need the illusion of free will just to function!  When I accepted this, I was able to function normally.

Over the next several months, after encountering others who made similar cases (some of whom I respect to one degree or another), I continued to reflect on what happened that week.   As I tried to puzzle out what happened ... I noticed a certain free will strawman that opponents of the notion of free will fall back on, to discredit the entire notion.  "No one has free will," they say, "because everything is determined by the laws of physics, and so no one can do anything random, and thus free will is impossible."

Just how is this a strawman?  The problem rests in the fact that no one sits down and defines "free will".  Most people generally think of "free will" colloquially as "the ability to make choices" -- and these opponents of free will observe that everything is determined from the beginning of the universe -- and then they conclude that, because we can't do anything random, we have no free will.

But this deserves a little more delving into.  What do people generally mean when they say "the ability to make choices"?  Do they really mean "the ability to act at random"?  I would propose that the answer is "no" -- that most people, when they intuitively think about "free will", are generally thinking that people have the ability to take in their environment, logically think about it, and then make conclusions on how they can alter their own behavior.

Granted, the "logic" involved here might not be the best -- we're naturally pattern-seekers, after all, and our logic isn't always sound -- and we nonetheless have things we cannot do, due to our circumstances -- for example, I cannot sprout wings and fly, nor can I regrow a leg if I lose one, or walk afterward -- but I do have the ability to think about how I might make wings, or create a prosthetic, or get a wheelchair, and then think about how I might use these things, and then act on it.  To the degree that we can do this, limited only by the physics of this world, is the degree we have free will.

For example, a female ferret doesn't have free will when it comes to mating:  if she doesn't mate when in heat, she dies.  That doesn't mean, however, that she has no choices.  She can be trained to do things, if she decides she wants the offered rewards, and if given things to play with, she can investigate them and try them out.

As a human, I have much more free will, because I can sit down and think about mathematics, or work out a story, or sit and think about the conversation I had the other day trying to figure out what I did right or wrong ... and I can seek out help when I need it from others, to fulfill goals I set out for myself.  I can also take the advice of others, and think about how it might apply to me -- and I may try it or not, to see if I like it, or I might recognize something about myself that allows me to conclude the advice is nonsensical for me.

Do computers have free will?  The underlying silicon does not -- the silicon very strictly follows the laws of physics, which have been harnessed to process electrical signals in certain ways.  I have spent some time trying to justify the idea that a thermostat has free will, but now that I realize that free will needs both behavior and the ability to change it, I realize that thermostats don't have free will -- and that the silicon of a computer has as much free will as a thermostat.  Software, however, can have behavior, and the behavior can even be emergent -- and I cannot help but see hints of free will in the fairly recent report that the algorithms set to process satellite images for Google Maps generated random photos to meet a deadline given by the software engineers.  I don't think we can purposely "give" a computer free will -- but I think we'll see it as an emergent property in any system that has sufficiently complex behavior.

It is a mistake to assume that determinism means we cannot have free will:  if we could not depend on ourselves to make the decisions we prefer, could it really be said to be free will?  We need a predictable universe in order to have at least somewhat predictable outcomes, so we can figure out our own preferences -- and we cannot do that if we act randomly for every decision we have to make!  And what's more, there's likely a certain amount of randomness built in to our behavior anyway:  after all, discovering our preferences requires numerous trials and comparisons -- and if the frequency of those randomized trials decreases over time, it is only because we have learned the lessons of the experiments, and choose accordingly.  After all, how many times do I have to try peas to know I hate them with the passion of a thousand suns?  Free will doesn't disappear with this lack of randomization -- on the contrary, developing preferences over time is at the core of free will!

Thus, contrary to the notions of opponents of free will, determinism doesn't cancel out the possibility:  it's a requirement!

So go, follow the advice that free will opponents paradoxically seem to always give, after they make their case:  go do good things, for yourself and others, knowing that you can make a difference in the world!  And don't necessarily expect to be able to change others, for good or ill -- they also have free will, even if they don't see the wisdom of your suggestions!

Thursday, February 13, 2025

Curious Treehouse Musings: An Introduction

I have always wanted to design a computer language.

Growing up, I only had access to BASIC of various flavors -- Atari, IBM, and a weird beast of a computer that had 8" floppy drives and a dumb terminal that my Dad brought home from Sperry Univac -- so, when my Dad noticed I had an interest in games programming, he suggested I learn C -- and I found the fantastic "C++ Primer Plus" and fell in love with the ways this new language made programming more practical!

Shortly after that (and this was the theme of my first year of college, in particular) I hunted down and explored as many languages as I could.  I found Modula-2 on the only Mac in the college computer lab.  I was exposed to Parallel Pascal, and I ran into Forth, Lisp, APL, Snobol, Assembler, Ada, and several other languages.  In the process, I discovered something interesting:  my favorite languages were anything that was as close to C as could be ... and I came to despise anything that didn't resemble C, mostly because they had stupid conventions that drove me nuts (take Modula-2, for example:  ALL CAPS keywords, BEGIN and END blocks, among other irritating issues).  It was bad enough that, when I took a 3-year hiatus from college, and returned to find the department had switched from C++ to something called Java, I was nervous ... until I saw it was pretty much like C!

The only exception to this is Python, which made me breathe a sigh of relief when I first discovered it:  I no longer had to think about linked lists!  Python can get away with departing from C's syntax because Python makes things simpler, and tends to avoid a lot of the superfluous syntax that's as much a hindrance as it is a help.  (In particular, I will never forgive Pascal for its fussy semi-colon rules around the "else" keyword -- what the heck, how does this help with anything?)

As I look back on this, I realize I had unintentionally absorbed certain principles that have influenced my desires and abilities to learn languages:  that syntax is evil, and that the more a language can provide for you (at least in terms of data structures, and to some degree libraries), the better.  Over the years, I have also discovered that precedence of operators itself is evil.  And over time, I have come to realize that I struggle with learning a language when I am not convinced it has something new to teach me, and don't have an outside force compelling me to learn.  (This is why I have been unable to learn Ruby -- I cannot convince myself it's sufficiently different from Python to be interesting to me.)

Now, however, as I have become more familiar with computer languages and environments, I realize that it's probably impossible for me to create a language from scratch, particularly if it isn't substantially different from all the different languages available today!

At the same time, I also recall years ago encountering an interesting question:  "If you were on a desert island, with a single computer, a single language, and all the documentation you want, what language would you choose?"  The answer given made some sense to me at the time -- "I'd choose C, because it would give me performance, and I could always write up my own Lisp system" -- but over the years, I came to realize the folly of this answer, which could be summarized by Greenspun's Tenth Rule:  "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp", along with the corollary "including Common Lisp".

And that's the catch:  a language isn't just syntax!  It's a compiler.  It's an interpreter.  It's libraries.  It's conventions and idioms and optimizations.  It doesn't matter how simple or complex the syntax of any language in general, and Common Lisp in particular, may be:  syntax is only one part of the language!  You need ways to allocate memory (malloc and garbage collection), juggle the scheduling of functions, read and write to disk, and so forth.  Even with a language like C, every compiler offers optimizations no other compiler has -- yet every compiler misses optimizations, too.  In short, any single computer language worth its salt offers so much, it would be a major effort to implement it!  What's worse is that Common Lisp is almost as capable as C for optimization -- you just have to consult the docs, add optional type declarations, and maybe resort to assembly language for the particularly fiddly bits -- but the performance can be eked out, nonetheless!

So, as much as I would like to create my own language, I have to accept I don't have the time and energy to do so -- and thus, I have to choose among the languages that are available to me instead.  And the language I am most attracted to, at this point, is Common Lisp -- it has a weird dynamic of "purity" and "practicality" that appeals to me, and it has a certain flexibility I admire for experimentation -- which means that I have a certain amount of freedom to experiment with language design without having to re-invent an entire language ecosystem!

Yet people for years have been complaining about Common Lisp, how it needs to be "modernized", how parentheses should be eliminated, how it needs up-to-date libraries and tools, and how it is so misunderstood -- everyone thinks it's about "lists" when it's really about "trees", and everybody thinks it's slow, and besides which, "Lisp" is a stupid name, why would anyone want to adopt something that doesn't have a cool name?  It would be like saying you drink "Slug Cola".

With those objections in mind, I decided to start a "Treehouse Initiative".  Originally, it was intended to be a new language in its own right, but at this point in time, I merely want it to be a layer over something well-established.  What's more, however, I have also come to accept that all these objections to Common Lisp are flawed in a major way -- mostly, in that they are matters of opinion.  Thus, I have decided that this Treehouse Initiative shouldn't be an effort to "fix" Common Lisp.  Instead, it is going to be an effort for me to create a language and environment I like.  I will invite other people to join in on the fun, and to create changes they like, too, and to discover and/or create new libraries that would be useful for their projects, as I will for my own.  But I'm not going to pretend that this is the "best" way to do things, or the way that Common Lisp (or anything else, for that matter) needs to be "fixed".

Like the attempts to fix Common Lisp that have come before it, I kindof expect this attempt to stagnate and even fail -- in particular, if I get bored with it, or if no one else finds it interesting, it will almost certainly be doomed -- but if enough people take an interest in this approach, it may very well take on a life of its own.  It will be fun to see what happens!

What kinds of things do I have in mind for this little initiative?  The first, ironically enough, is syntax.  The notion that the parentheses just "disappear" has never been true for me -- and this is true as much for C-style languages as it is for anything else -- indeed, if we ever ran out of curly braces, C-style languages will be in trouble!  While Python's whitespace syntax helps alleviate the need for braces, I've generally thought that commas get in the way as well, and would appreciate a syntax that doesn't use commas for data structures or separators.  Hence, I am in the process of trying to debug the "Sweet Expressions" library, both to fix a bug with comments, and to expand it to include things I appreciate.

The second is libraries.  I would like to use Elixir-style actors and pattern matching, so I need to explore libraries that try to implement these things.  I would also like to figure out how to use the "Cells" library for dataflow management -- which I expect to be helpful when I explore "parameterized dependencies" in computer graphics.  And I need to come to terms with GPU programming in Common Lisp.  I intend to put some effort into keeping track of, and even recommending, libraries I like.

The third is documentation:  I'd like to improve the documentation for these libraries, and get into the habit of writing out notes as I explore things.  As I have explored some of these libraries, I have struggled to understand how to make use of them, and I figure that if I can get good at documenting my discovery process, it will be helpful for other people.

Now, I wanted to have a name for the ideas I want to throw out there -- maybe these kinds of things should be in some sort of forum, so that various topics like these can be discussion points -- but for now, I am just throwing things out on my blog, to see how things will go.  Nonetheless, I'd like a name for them -- I like the humble "Request for Comments" used to propose standards for the Internet, named when the original research group had no idea what they were doing, nor whether there were other researchers who were supposed to supervise them somehow -- but it's also a name already in use.  Similarly, Python has PEPs -- "Python Enhancement Proposals" -- but I don't particularly want to think of these as "enhancements", in no small part because I want to recognize that what might be an "enhancement" to me might be "detrimental" to someone else, and vice versa -- and I figure that, with a language as flexible as Common Lisp, it shouldn't matter what any one person thinks is an "enhancement" or a "detriment" -- they can all co-exist just fine!

So, I decided I'd call these rambling things "Curious Treehouse Musings", and let people figure out for themselves what works, and what doesn't, be it as it may.

Wednesday, February 12, 2025

The Completion of My Internship

Last Thursday, February 6th, I concluded my internship.  I was originally going to take a moment to analyze what I have learned, but I realize I've pretty much covered everything in "The State of the Blogger", and I don't see anything I ought to add from there.  Indeed, of the three options I had, I had come to the point where I accepted this as the "best" option -- I may have been able to continue working as a part-time Intern (which would have been the "second best" option), although it would have been a recipe for perpetual burnout -- but I am very glad I wasn't asked to join full-time, which I considered the worst option for me.

When my internship had come up for review, I was given the option to continue working for two weeks after the internship lapsed.  At the time, I accepted this, wondering "Am I just doing this for the money?  Why don't I cut my losses immediately, and move on to my next adventure?" but as I was completing these two weeks, I came to appreciate the opportunity to wrap things up -- to finish one more project (albeit with dangling bits I didn't have time to fix), and to appreciate the environment I was working in.

Aye, that's the rub:  no matter how ill-fitting I am in an organization, I almost always appreciate the people I'm working with, and I almost always appreciate their missions.  In particular, I have come to realize that functional businesses that successfully provide goods and services are just as noble -- if not more so -- than even the best non-profit organizations, because those goods and services provide value to the customers!  The only problem I've always had with this, though, was that I never really got to work on things that interest me -- the problems I worked on were almost always dictated by the needs of the organization, and often I would be relegated to work on the "mundane" aspects of those problems -- and this, in turn, would lead to burnout, which would cause a drop in performance, which, more often than not, would lead to unemployment ... which would then lead to lousy attempts at networking and eventual job search burnout until I found the next position ... which always started out exciting, but that's merely the start of the cycle!

What's more, I have come to realize that the standard formula for financial success (which can mean anything from having a comfortable home and paying off the bills to becoming a billionaire and starting your own space program) -- regardless of whether you become a full-time employee, an entrepreneur, or a hybrid freelancer -- is simple:  network like mad, figure out what people need, and specialize (ie, do it over and over again) to provide that which people need.  But this formula is out of reach of my abilities.  I cannot network when I cannot initiate conversations with strangers, and I get burned out when I do the same thing over and over again!  (Granted, the first time I do something, it's interesting, but it gets old after a while.)

So, what should I do instead?  As of right now, I'm trying to stabilize my routine:  sleep from midnight to 8 or 9, wake up, study Scriptures, "brain dump" onto this blog, and then work on a personal project.  And I have a lot of projects to work on!  Just a few things, in no particular order, as a sample:

  • Explorations of Common Lisp syntax called Sweet Expressions, to take advantage of whitespace similar to Python,
  • A 3D GPU-accelerated graphics environment, based on dual quaternions instead of matrices, that would hopefully grow into a CAD environment that can be used to design a liquid-salt thorium space station,
  • A command for Bash that will view any type of file, so I don't have to "context switch" between "ls", "cat", "okular", and perhaps even "diff", among other things,
  • A simple operating system for a Propeller processor-based electronic conference badge,
  • An eight-bit stream binary format for generic complex yet structured data,
  • An app to edit a "syzygy" of file formats that combine text, audio, photos, video, location data, and changes (among other things) in a single digitally-notarizable format,
  • A "computer easel" that combines my favorite keyboard, mouse, 2-in-1 laptop, and portable monitor into a single portable environment,
  • An "armadillo" trailer that can expand from a simple box into something that can resemble a camping space or workshop,
  • A table-top role playing system using playing cards (and card-counting!) instead of (or, more likely, in addition to) dice to generate randomness,
  • Airplanes and helicopters of all sorts,
  • A custom keyboard and mouse (designing mice is particularly hard, since it's not nearly as obvious on how to collect the parts).
Perhaps some of these things might be marketable -- I have shown a cardboard prototype of my "computer easel" to a couple of people, who have really liked what I have done -- but I am not particularly interested in finding customers, doing market research, or running a business once the basics of manufacturing are figured out; on the other hand, maybe I'd be interested in figuring out the details for bringing something to production ...

I plan on sharing what I work on, including design notes, as blog entries, often with photos and video; since I don't want to run any businesses (although I'd be more than happy to start them! perhaps on a short-term part-time contractual basis), I hope I can sustain myself on donations, and in particular, I hope that someone who wishes to try to bring one of these things to market would be willing to offer monthly donations.

To do this, I will naturally need to ask for funding, so I have set up a Campaign at GiveSendGo:  The "Trash Panda Arcane Research Center".  Additionally, as I write blog posts, I also intend to collect various posts, perhaps refine them for a bit, and then publish them as e-books or books-on-demand.  Come to think of it, I may even discover it's possible to do "manufacturing on demand", at least for smallish items.  And who knows?  If I get a substantial enough fan base, maybe I can produce merch!

In any case, this is the direction I've been thinking of going in the last days of my internship.  Heck, I've been thinking about doing something like this since at least my college days!  So it will be interesting to see what happens, as I forge ahead in this direction.

Tuesday, February 11, 2025

Identity Management Atoms: Steganography

Steganography is the art of hiding things in plain sight.  While this isn't a cryptographic thing -- indeed, it can be practiced without cryptography at all -- it is nonetheless something interesting enough that I thought it should be included as an "atom" in its own right.

Perhaps the funniest example (to me, anyway) that I have come across is found in the docs describing how to use SSH (ie, Secure Shell, an app used to securely connect with, and transfer data between, remote computers) -- the docs explain that, while SSH can ensure that data transferred between computers will be safe, they cannot prevent data from being "leaked out" by other means -- such as by encoding data in the sizes of data packets sent by SSH.
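As a toy illustration of the principle (and emphatically not how SSH itself works -- this is just a sketch of the idea), here's what a packet-size covert channel could look like in Python:

    # Each "packet" carries innocent filler -- but its length encodes
    # one byte of the secret.  An observer who only sees packet sizes
    # can read the message anyway!
    secret = b"hi"
    packets = [b"X" * byte for byte in secret]       # lengths 104 and 105
    leaked = bytes(len(packet) for packet in packets)
    assert leaked == secret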

A more visual example can be provided by considering a lowly picture format -- a lossless one, say PNG, since lossy compression would destroy our handiwork -- and observing that each pixel consists of four 8-bit parts:  Red, Green, Blue, and an "Alpha" to indicate how transparent that pixel is (because, hey, if you want to fit 24 bits into the standard 32-bit word that most modern computers use for memory, you might as well do something with that extra eight bits!).  Consider the Red byte:  "0000 0000" gives us absolutely no red, "1111 1111" gives us the most intense red, and overall we have 256 individual shades of red to choose from.  Sure, if we consider two shades adjacent to each other -- say, one colored "0000 0000" and one colored "0000 0001", or, for that matter, one colored "1011 1110" and one colored "1011 1111" -- our eyes might tell the difference between the two ... if they were large squares!

But if we make that subtle change for a single pixel, doing those subtle changes only to the four values that make up that pixel, and then surround that dot with eight other dots, it becomes much harder to spot!  And by using this particular technique, each 32-bit pixel gives us 4 bits we can play around with.  That doesn't sound like much, but when you consider that a single photo has millions of pixels, we can suddenly hide lots of data!
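To make this concrete, here's a minimal Python sketch of that low-order-bit trick -- the pixel values and helper names are made up for illustration, and a real implementation would pull its pixels out of an actual image library:

    # Hide one bit in the lowest-order bit of each 8-bit channel of a
    # single RGBA pixel -- a change far too subtle for the eye to catch.
    def hide_bits(pixel, bits):
        # pixel is an (R, G, B, A) tuple; bits is four 0-or-1 values
        return tuple((channel & 0b11111110) | bit
                     for channel, bit in zip(pixel, bits))

    def recover_bits(pixel):
        return [channel & 1 for channel in pixel]

    original = (0b10111110, 0b00000000, 0b11110000, 0b11111111)
    stego = hide_bits(original, [1, 1, 0, 1])    # 4 hidden bits per pixel
    assert recover_bits(stego) == [1, 1, 0, 1]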

Naturally, we can wonder:  "What does this have to do with Identity Management, besides the obvious role of transferring data underneath people's noses?"  The most common use of this technique is to "watermark" visual data, so that someone who publishes an original photo or video can demonstrate later if someone else just copied it.

Steganography has a couple of weaknesses, to be sure.  For one thing, if someone knows what to look for, they can find the data using statistical analysis -- but this starts off an "arms race" of sorts, where the person hiding data uses more sophisticated techniques to hide it, while those who are looking for hidden data use more sophisticated statistical analysis to uncover the information.  One good example of this is to consider how JPEG uses sophisticated trigonometry as part of its format -- and then consider that data can be hidden in the coefficients of each cosine function used.  Another, perhaps simpler, countermeasure is to encrypt the data you wish to hide:  because every camera has subtle variations in every pixel -- heck, even subtle variations from second to second as each pixel sensor reads data! -- encrypted data becomes indistinguishable from the natural "variance" and "noise" that surrounds us.

Another weakness of steganography is -- for watermarks, at least -- that if someone has reason to believe that a photo or video has been invisibly watermarked, and wants to remove that mark, it doesn't matter if the mark is encrypted or not -- all the person has to do to remove the watermark is to embed their own steganographic information in the picture, which would clobber the watermark.  This can even happen accidentally, if the image is merely edited and manipulated before it is re-published.


Monday, February 10, 2025

Identity Management Atoms: Asymmetric Public/Private Keys

Asymmetric keys are the final element we need for Identity Management.  So far, everything we've covered makes it possible to send data secretly, and to confirm that the data we send or receive hasn't been tampered with -- but we cannot share symmetric keys easily, out in the open, where everyone can see them -- we have to share these things privately -- and that's kindof difficult to do on a forum open to the public, such as the internet.

Heck, even if we limit our communications to pencil and paper, it might be nice to share a way for people to reach out and contact us!  If only Alice could pin a key of some sort on that bulletin board, so that Bob can encrypt something and share it with Alice.  That way, Alice wouldn't even have to meet Bob to exchange information privately!

The first algorithm that provided for just this is called the Diffie-Hellman Key Exchange (which some suggest should be called the Diffie-Hellman-Merkle key exchange, to recognize Merkle's role in laying the foundations -- which nonetheless puts aside that a British Intelligence team came up with the same algorithm several years before, but had to keep it classified until much later -- and who knows, maybe it will be found one day in one of the numerous works that Leonhard Euler wrote, leading us to sigh and say "It's a good thing we didn't know about it, because otherwise half of mathematics would be named after him!" -- there's a reason I simultaneously appreciate, and don't worry too much about, making sure everyone gets credit!).

The idea is relatively simple:  Alice and Bob agree on a key $P_\infty$ (recall that the $\infty$ subscript is a reminder that the key is shared by everyone) to use as a basis for communication.  Alice chooses a private key $A_0$ (recall that the $0$ subscript is a reminder to share the key with zero people), and combines this to create a key $A_0 P_\infty$ she publishes publicly; likewise, Bob can share $B_0 P_\infty$ with the world after creating his own private key $B_0$.  To communicate, Alice and Bob combine these publicly shared keys with their private keys, $A_0 (B_0 P_\infty) = B_0 (A_0 P_\infty) = S_0$, and by the magic of modular arithmetic (ie, mathematics I don't want to delve into right now), things mix together to produce an $S_0$ that can then be used to share messages between Alice and Bob.
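If you want to see the machinery without the heavy mathematics, here's the exchange in Python with toy-sized numbers -- utterly insecure at this scale, but the structure is the real thing, and Python's built-in pow(base, exponent, modulus) does the modular arithmetic for us:

    # The public basis P_inf is the pair (p, g); real systems use
    # enormous primes, not these toys.
    p, g = 23, 5

    a = 6                    # Alice's private key, A_0
    b = 15                   # Bob's private key, B_0

    A = pow(g, a, p)         # Alice publishes this (her "A_0 P_inf")
    B = pow(g, b, p)         # Bob publishes this (his "B_0 P_inf")

    # Each combines the other's public value with their own private key:
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob    # both arrive at S_0 (here, 2)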

For what it's worth, Wikipedia has a more colorful explanation -- by literally using colors and color mixing to explain what's going on.

Ok, maybe it's not exactly simple -- and it relies on advanced mathematics to allow for things to cancel out nicely, so that the symmetric key is the same for Bob and Alice.  But it gets the job done, and it's used internally in a lot of internet protocols where sharing messages, rather than confirming identity, is the primary concern.  It's not quite a public-private key system -- but it's a bridge between symmetric keys and asymmetric ones!

Shortly after Diffie-Hellman was made public, Rivest, Shamir, and Adleman created a simpler scheme:  rather than having a public key that everyone uses as a basis for creating private shared secrets, each individual produces their own public and private keys.  Hence, Alice creates $A_0$ for herself, and shares $A_\infty$ with the world, while Bob creates $B_0$ for himself and $B_\infty$ for the world.  If Bob wishes to share a message $M$ with Alice, he encrypts it with Alice's public key, $A_\infty(M)$ -- and if Alice wishes to read it, she applies her private key, $A_0(A_\infty(M)) = M$, which cancels out the encryption, leaving Alice (who, if she's careful, is the only one who has her private key!) the only person in the world besides Bob able to read the message.
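Here's that scheme as a Python sketch, using the classic textbook-sized primes you'll find in most introductions -- real keys are built from primes hundreds of digits long:

    p, q = 61, 53
    n = p * q                   # 3233:  the public modulus
    phi = (p - 1) * (q - 1)     # 3120:  used to derive the private key
    e = 17                      # public exponent -- (n, e) plays the role of A_inf
    d = pow(e, -1, phi)         # 2753:  private exponent, A_0 (Python 3.8+)

    M = 65                      # the message, encoded as a number
    C = pow(M, e, n)            # Bob encrypts with Alice's public key
    assert pow(C, d, n) == M    # Alice decrypts with her private key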

Now, here's the fun thing about asymmetrical encryption:  both keys can be used for encryption -- and the other key decrypts!  If Alice wanted to, she could take a message $M$ and encrypt it with her private key, $A_0(M)$, and then share the result with the world.  If Bob, or the President, or my sister and her darling dachshund, or anyone, really, wanted to read the message, they can -- they just have to apply the public key (also available to the public) to the message:  $A_\infty(A_0(M)) = M$.  But why would Alice want to do this, if the purpose of encryption is to keep unwanted people from reading messages?  Well, when Alice does this, she isn't just sharing the message:  she's reminding the world that, as the world's only holder of the private key $A_0$, she's the only person who could encrypt something that can be decrypted by $A_\infty$.  Thus, you can be fairly certain the message came from her!  This is the basis of cryptographic document signing.
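Continuing the toy sketch above, signing really is just the same operation with the exponents swapped:

    S = pow(M, d, n)            # Alice "encrypts" with her private key
    assert pow(S, e, n) == M    # anyone can check it with her public key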

Of course, if Alice wanted to, she could send Bob a signed message $A_0(M)$ -- and if it's only intended for Bob, she can further encrypt it with Bob's public key, via $B_\infty(A_0(M))$ -- thus, to read this message, Bob would use his private key to "unwrap" the message, $B_0(B_\infty(A_0(M))) = A_0(M)$, which can then be further decrypted by $A_\infty$ to confirm that, not only is the message intended for Bob, but that it can only have been sent from Alice.

Besides RSA algorithms of various strengths, there are now algorithms based on elliptic curves, which (if I understand correctly) may be computationally as fast as symmetric keys, and can also be smaller while offering the same level of security -- because as fantastic as public/private key encryption may be, it's still computationally slow, so it still makes sense to use symmetric keys when you can, and transfer them via public/private key encryption, rather than using public/private keys directly.

As of right now, asymmetric keys have only three weaknesses, two real, and one theoretical.

The first weakness is that you have to make sure you never let other people know your private key -- and that's a challenge, considering how many vulnerabilities have been found in our software! -- but there are schemes for rotating through keys that make this more manageable.

The second weakness is called "Man in the Middle" -- if Eve wanted to listen in on Bob's and Alice's conversation, and she can intercept their traffic, she can create her own public/private key pair $E_0$ and $E_\infty$; if she convinces Bob that $E_\infty$ is Alice's public key, then when Bob tries to send a message to Alice, he will send it as $E_\infty(M)$, which Eve would then decrypt with her private key and re-encrypt with Alice's public key -- $A_\infty(E_0(E_\infty(M))) = A_\infty(M)$ -- which Alice can now decrypt with her private key.  And if Alice tries to respond, and Eve has managed to convince Alice that $E_\infty$ is also Bob's public key, Eve can read all the traffic going back and forth.  To be sure, this requires that Eve captures the stream at the beginning of the conversation, and that she is constantly there as an intermediary between Alice and Bob -- but it is a risk nonetheless, and a real one where something public like the internet is concerned.  There are also strategies to prevent this from happening; indeed, this is why "certificates" are so important for web browsers.

And the third weakness rests on the notion that the products of the large prime numbers used in these schemes are very difficult -- indeed, beyond-the-lifetime-of-the-universe difficult -- to factor, even with the fastest of computers.  Mathematicians have been unable to prove, one way or another, that factoring like this is, indeed, hard -- so we may very well be one surprising, fantastic, and beautiful proof away from the entire security of the internet crumbling -- but mathematicians generally believe that there are no shortcuts to factoring numbers, and we may very well prove that, instead.  Physicists have been hard at work creating quantum computers that can, in theory at least, go through lots of factors all at once -- but it's unclear if engineers will ever overcome the hurdle of noise that plagues quantum mechanics so much, and if so, whether they will be able to gather enough "qubits" to carry out the large number of computations necessary to factor large numbers.  Nonetheless, this threat is serious enough that researchers are working to develop "quantum-proof" algorithms for encryption -- and governments, for that matter, are sucking up as much current communication as they can, with the hopes that someday they'll be able to read everything!


Fun fact!  This is the first blogpost where I formally use $\LaTeX$ to format things!  I originally intended to keep it simple, but I discovered that I really wanted easy-to-format subscripts for the keys.  Plain A_0 just looked ugly!  I found a forum that directed me to https://koutuholi.blogspot.com/2021/04/mathjax.html, which provides an unsupported way to bring the magic of $\LaTeX$ to blogs.

For those not familiar with $\LaTeX$, it is a fantastic document layout system used by mathematically-oriented people to write papers; I personally find the creation of documents using the system to be fantastic, but when I get frustrated with the ASCII mathematical representation, I remind myself ... that $\LaTeX$ is the worst math system out there, except for all the others!  (I particularly despise "equation editors"; they are surprisingly painful to use!)

Monday, January 20, 2025

Identity Management Atoms: Symmetric Keys

So far, nothing I have discussed actually encrypts data.  At best, we have hashes, which take a block of text and produce a short, seemingly random collection of characters -- one specifically designed to prevent anyone from discovering the data that produced it!

While all of this is called "cryptography", there is a major reason I like to call this "Identity Management":  these are the tools that allow us to confirm each other's identities, which is a task that transcends the mere sharing of information.

At some point, however, we're going to want to share data with someone we trust, and we don't want anyone else to read what we send!  To do this, we need some sort of way to scramble data so that, once sent, it can be unscrambled on the other end.  Perhaps the oldest of these is the "symmetric key", something shared between the two communicators beforehand, so that each can unscramble what the other sends.

Perhaps the simplest example is called "Caesar's Cipher", which simply "rotates" letters by a fixed amount -- the classic version shifts by three, so "A" becomes "D", "B" becomes "E", and so forth, while the modern "ROT13" variant shifts by thirteen -- and it isn't a particularly difficult algorithm to crack.  It's easy to imagine a more complicated version, where each letter is assigned to another random letter, but even then, the algorithm is simple enough to crack that it's offered in puzzle books as "cryptograms", to be broken for entertainment.
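Here's the whole algorithm in a few lines of Python -- with a shift of thirteen it becomes ROT13, which conveniently undoes itself:

    def caesar(text, shift):
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)       # leave spaces, digits, etc. alone
        return ''.join(out)

    scrambled = caesar("Attack at dawn", 13)     # -> "Nggnpx ng qnja"
    assert caesar(scrambled, 13) == "Attack at dawn"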

On the other side of this, there's the "one time pad", which is a pad of randomly generated numbers used for encryption.  For each "block" of a message, the sender uses a page, and the receiver needs to know which page was used.  This method is mathematically proven to be impossible to crack -- if you can trust that your random number generator doesn't produce identifiable patterns, if you never use a page more than once (because two messages encrypted with the same page can be used to decrypt each other), and if you can ensure that only you and your confidant hold the pads, where no one else can see them.
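In Python terms, a one-time pad is nothing more than XORing each byte of the message against a byte of the pad -- and XORing the ciphertext against the same page recovers the message (which is also why reusing a page is fatal):

    import secrets

    message = b"MEET AT THE USUAL PLACE"
    page = secrets.token_bytes(len(message))     # one page of the pad

    ciphertext = bytes(m ^ k for m, k in zip(message, page))
    recovered = bytes(c ^ k for c, k in zip(ciphertext, page))
    assert recovered == message
    # Never reuse a page:  XORing two ciphertexts that share a page
    # cancels the pad, leaving the two messages XORed together!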

And this brings us to the weakness of symmetric keys:  how the heck do you get a symmetric key to the people you wish to communicate with, without anyone else getting their dirty little mittens on it, too?  Well, besides meeting face-to-face with each person you wish to communicate with, making sure that each person has their own unique pad, and keeping track of where you are in each pad in your communications ... well, this isn't exactly the best way to distribute keys when you're trying to reach a computer on the other side of the world.

Naturally, there are a number of symmetric ciphers, all essentially designed to scramble data as if with a "one time pad" generated on the fly from a short key, AES being a particularly popular one.  To the best of my knowledge, there aren't any serious concerns about these keys being weak against quantum computers -- they aren't susceptible to the period-finding shortcut that threatens factoring-based schemes, and the best known quantum attack merely halves their effective key length, which doubling the key size counters.

Nope!  To the degree that these keys are weak against quantum computers, it's because they have to be shared!  And they are typically shared by asymmetrical public/private key cryptographic systems -- which are susceptible to quantum computer algorithms -- and which are also the cornerstone of both computer cryptography and identity management in general.

It seems like public/private key pairs would be better than symmetric keys for sharing data -- so why are symmetric keys still used?  It so happens that symmetric keys are far less computationally intensive than asymmetric ones, so they are used to optimize our information sharing.
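To make that concrete, here's what using AES looks like from Python via the third-party "cryptography" package (an assumption on my part -- any mainstream crypto library offers something equivalent), in its popular GCM mode, which both encrypts and detects tampering:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)    # the shared symmetric key
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                       # must be unique per message!
    ciphertext = aesgcm.encrypt(nonce, b"Meet at the usual place", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"Meet at the usual place"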