One of the recent Windows 11 updates announced in its release notes that passkeys are now easier than ever to manage in Windows Settings, so I gave it a go to see whether it is time to go passwordless for websites that support Web Authentication.

After poking around this functionality for a bit, however, I realized that the line between going passwordless and ending up passwordless is not as bold as one would hope. Passkeys are not well documented, rely on data hard-wired into a specific device, and depend on how well tech behemoths such as Microsoft, Google and Apple interact with one another, which is not the strongest trait for many of them.

Intel NUC - a well hidden gem

For years I have been running a small Linux box for various activities, such as recovering hard drives with bad sectors, hosting local source repositories, running a local DNS server, and so on. My original Linux installation went through many hardware upgrades over the years and eventually ran into the end of life for the 32-bit Linux kernel. It was time my Linux installation got a bigger box.

Most of my Linux tasks are not CPU-intensive, so in the past I would recycle old hardware, but this time I decided that switching to 64-bit hardware deserved a quality upgrade and started looking for small PCs with good connectivity options. After researching this for a bit, I thought CompuLab's Fitlet3 would serve as the new home for my Linux setup, but it did not work out quite as well as I hoped.

Build numbers are corporate, bro

Some time ago, in a discussion about incrementing versions in CI builds, I suggested that a release version should be defined by a Product Owner and should not be incremented in CI builds, and that a build number should be used instead to track packages produced by CI pipelines.

One of the participants commented that I must be working in a corporate environment and that most Open Source projects do not have a Product Owner and build numbers are not something that a small team would use. This point of view is incredibly misguided, but it is also, unfortunately, quite widespread.

APC - We don't charge batteries

One of my Back-UPS 550 units started emitting an annoying continuous sound in the middle of the night a couple of weeks ago. Unplugging it did not silence it and I had to disconnect the battery to keep it quiet.

In the morning, I checked my records and it turned out that the battery in this unit had been replaced about a year ago, which was surprising. Nevertheless, I had a spare battery just for such an occasion and thought I wouldn't hear this UPS again for a while, but a day later the same sound came back to haunt me.

The source stands alone

People keep weird stuff in their source repositories. Over the years, I observed mobile development frameworks stored in Mercurial, SQL Server installation media stored in Perforce, network captures stored in SVN, large test datasets stored in pretty much every source repository type, and most certainly a lot of 3rd-party source stored alongside the project source.

The common belief is that a source repository should contain everything that is needed to build, package and test project binaries, and that these processes should not rely on external sources. This sentiment does make sense on its own, but keeping binaries and test data in source repositories simply turns these repositories into oversized junkyards, storing everything but the kitchen sink.

Practical requirements

Requirements remind me of one of The Simpsons episodes, where Bart reads junk mail out loud: "Gas your termites. Freeze your termites. Zap your termites. Save your termites?", and sometimes it feels that writing good requirements is more of an art form than a teachable skill.

In the world of agile development, requirements are often seen as something cavemen used to gather, but in reality user stories serve the exact same purpose as requirements, with the intent of avoiding the black magic that requirements are made of.

This post describes some of the practical uses of well-written requirements that I found for myself and how requirements can make managing projects a bit more straightforward.

Columns have types

I recently needed to import some CSV data into a MongoDB database, and while mongoimport makes it easy to import data from a variety of data sources, importing time stamps proved to be more complicated than I anticipated.

Initially, importing time stamps seemed like a trivial task, given that there is a way to describe a CSV field as a time stamp via the --columnsHaveTypes switch. The example in the docs seemed strange, though, because it didn't look like any date/time format one would expect and is shown simply as 15:04:05.

reMarkable, to a point

Pen and paper were always my favorite tools for technical designs and for capturing ongoing work notes, and I used to buy blue-grid paper notebooks in packs of five to save myself a trip to Staples. I even developed a system to cross-reference paper pages in order to link notes made at different times.

With all those paper notebooks lying around, whenever new note-taking technologies emerged, I rushed to try them out, and it worked out well enough that I haven't bought a paper notebook in years. Mostly because OneNote covers the majority of my note-taking needs now, and partly because some of those new technologies make taking notes as easy as pulling a paper notebook out of the desk drawer.

A reMarkable tablet is a good example of the latter.

OneNote, too many cooks

I always favored pen and paper for initial technical designs, and when the first devices that enabled handwriting recognition emerged, I enthusiastically tried them out. Some of those original applications were quite good, but worked only on one device, like Samsung's S Note, and some worked on several platforms, but captured only pixel images instead of pen strokes.

Eventually, I ended up using OneNote, which works on many devices and has all the features I need, but many of those features are implemented inconsistently across devices, and I can't help but wonder whether OneNote is being developed by multiple independent teams with limited communication channels between them.

File Integrity Tracker (fit)

Last month I ended up copying thousands upon thousands of files while recovering my data from ReFS volumes turned RAW, because Microsoft quietly dropped support for ReFS v1.2 on Windows 10. During file recovery, I was trying to be careful and flushed the volume cache after every significant copy operation, but a couple of times Windows just restarted on its own, and I faced a bit of uncertainty about whether data in all files had safely reached the drive platters.

I had used a couple of file integrity verification tools in the past and thought it would take some time to read all the files, but otherwise would be a fairly simple exercise. However, it turns out that everyday file tools don't work quite as well against a couple of hundred thousand files.

Resilient, until it's not

I have been a big proponent of Storage Spaces in Windows 10 for many years and while redundant storage provided by Storage Spaces is not a replacement for a proper backup, it does provide good protection against individual drive failures and some forms of enclosure failures.

When Windows 10 was just released, in addition to drive redundancy, it also allowed formatting Storage Space volumes as ReFS (Resilient File System), which added a layer of protection against bit rot and sudden power loss because of the way it performs disk writes. Later on, Microsoft removed the ability to format new volumes as ReFS from Windows 10, but existing ReFS volumes remained usable, and I assumed that Microsoft would be respectful of terabytes of data and would warn me when the time came that ReFS would no longer be maintained on Windows 10.

That turned out to be a bad assumption and what followed felt like a gut punch.

From TinyMCE to CKeditor and back

When I wrote the first version of this blogs application in 2008, I initially used TinyMCE for posts and comments, but within a couple of weeks I switched to CKeditor because it handled HTML better and provided server-side support for image uploads. Years have passed since then, and the last version of CKeditor I integrated into this application was v2.6, which was written so extraordinarily well that it continued working for me without any problems for over 10 years.

In the last couple of years, when I started noticing little problems, like Ctrl-B switching back to plain text on its own in Chrome, I decided it was time to upgrade the editor to the latest version. CKeditor had worked so well for me over the years that the thought of checking out alternatives didn't even enter my mind.

Windows 11 - Twice as pretty, half as bright

Last week, after a large Windows update, my laptop popped up an offer to upgrade to Windows 11 before even getting to the sign-in screen. There were only Upgrade and Decline buttons, and while I didn't want to upgrade right at that moment, I didn't want to decline either. I pressed Esc and it continued. No idea if that was the same as declining, but checking Settings > Windows Update confirmed that the offer was still there.

I looked around for Windows 11 upgrade stories and couldn't find anything useful - all articles and posts described the new Windows 11 look and feel and had very little to say about features and general behavior. So, I decided to upgrade over the weekend and check it out for myself.

From ASP.Net to Node.js

I originally wrote this blogs application in 2008 in ASP/JScript, thinking that a JavaScript-like language would age better than VBScript, but soon realized that while that might be true for the language, classic ASP itself didn't have a lot of life left in it. This prompted me to rewrite the blogs in ASP.Net/JScript in 2009. This time I thought my choice of framework was quite clever and would surely outlast my needs for a blog.

ASP.Net has indeed done remarkably well since 2009, but JScript didn't do nearly as well, and Microsoft quietly dropped it from the platform at some point, so my choice of JavaScript as the server-side language for my blogs needed another revision. Needless to say, Node.js was really the only choice to consider, so it was an easy decision.

Version? What's that?

Traditional packaged applications with carefully maintained versions rely on the application version to communicate to users the set of features included in a package and the impact of upgrading from one version to another.

Website applications, on the other hand, are often upgraded by the website operator in their own environments and website users usually have no idea what version of the application is running behind the website UI, even if there is one.

Website applications are centered around user-visible features, which are being continuously developed and deployed to production environments, so grouping features into version levels for such deployments makes very little sense.