Andre's Blog
Perfection is when there is nothing left to take away
Practical requirements

Requirements remind me of one of The Simpsons episodes, where Bart reads out loud junk mail: "Gas your termites. Freeze your termites. Zap your termites. Save your termites?" and it sometimes feels that writing good requirements is more of an art form than a teachable skill.

In the world of agile development, requirements are often seen as something cavemen used to gather, but in reality user stories serve the exact same purpose, with the intent to avoid the black magic that requirements are made of.

This post describes some of the practical uses of well-written requirements that I found for myself, and how requirements can make managing projects a bit more straightforward.

Columns have types

I recently needed to import some CSV data into a MongoDB database and while mongoimport makes it easy to import data from a variety of data sources, importing time stamps proved to be more complicated than I anticipated.

Initially, importing time stamps seemed like a trivial task, given that there is a way to describe a CSV field as a time stamp via the --columnsHaveTypes switch. The example in the docs seemed strange, though, because it didn't look like any date/time format one would expect and is shown as created.date(2006-01-02 15:04:05).
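That odd-looking value is not a sample date at all: mongoimport is written in Go, and Go expresses date layouts by writing out its reference time, Mon Jan 2 15:04:05 MST 2006, in the shape your data takes. As a minimal sketch (the database, collection, field names and file name here are all made up for illustration), an import of a CSV with a string column and a time stamp column could look like this:

```shell
# Hypothetical events.csv:
#   name,created
#   deploy,2023-11-05 08:30:00
#
# The argument to date(...) is a layout pattern, not a literal value:
# it is Go's reference time "2006-01-02 15:04:05" formatted the way
# the values in the CSV column actually look.
mongoimport --db=mydb --collection=events --type=csv \
  --columnsHaveTypes \
  --fields="name.string(),created.date(2006-01-02 15:04:05)" \
  --file=events.csv
```

With --columnsHaveTypes, each entry in --fields takes the form name.type(arg), which is why the plain string column still needs the empty string() suffix.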

reMarkable, to a point

Pen and paper were always my favorite tools for technical designs and for capturing ongoing work notes and I used to buy blue grid paper notebooks in packs of five, to save me a trip to Staples. I even developed a system to cross-reference pages to link notes made at different times.

With all those paper notebooks lying around, whenever new note-taking technologies emerged, I rushed to try them out, and it worked out well enough that I haven't bought a paper notebook in years. Mostly because OneNote covers the majority of my note-taking needs now, and partly because some of those new technologies make taking notes as easy as pulling a paper notebook out of the desk drawer.

A reMarkable tablet is a good example of the latter.

OneNote, too many cooks

I always favored pen and paper for initial technical designs and when the first devices that enabled handwriting recognition emerged, I enthusiastically tried them out. Some of those original applications were quite good, but worked only on one device, like Samsung's S Note, and some worked on several platforms, but captured only pixel images instead of pen strokes.

Eventually, I ended up using OneNote, which works on many devices and has all the features I need, but many of those features are implemented inconsistently across various devices and I can't help but wonder if OneNote is being developed by multiple independent teams with limited communication channels between them.

File Integrity Tracker (fit)

Last month I ended up copying thousands upon thousands of files, while recovering my data from ReFS volumes turned RAW, because Microsoft quietly dropped support for ReFS v1.2 on Windows 10. During file recovery, I was trying to be careful and flushed the volume cache after every significant copy operation, but a couple of times Windows just restarted on its own, and I faced a bit of uncertainty about whether the data in all files had safely reached the drive platters or not.

I used a couple of file integrity verification tools in the past and thought it would take some time to read all files, but otherwise would be a fairly simple exercise. However, it turns out that everyday file tools don't work quite as well against a couple of hundred thousand files.
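The core of any such tool is straightforward, and a minimal sketch helps show where the scale problem lives: hash every file under a directory, keep the digests as a baseline, and later compare a fresh scan against it. This is only an illustration of the idea, not how fit itself is implemented; SHA-256 and the function names are my own choices here.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, reading it in chunks
    so multi-gigabyte files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def scan_tree(root: Path) -> dict[str, str]:
    """Map each file path (relative to root) to its digest."""
    return {
        str(p.relative_to(root)): hash_file(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def verify(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Report files that changed, disappeared, or appeared since the baseline."""
    issues = []
    for name, digest in baseline.items():
        if name not in current:
            issues.append(f"missing: {name}")
        elif current[name] != digest:
            issues.append(f"modified: {name}")
    issues.extend(f"new: {name}" for name in current.keys() - baseline.keys())
    return issues
```

The sketch works fine for a handful of files; with a couple of hundred thousand of them, it is everything around this loop (progress reporting, resumability, storing and diffing the baseline) that everyday tools handle poorly.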

Resilient, until it's not

I have been a big proponent of Storage Spaces in Windows 10 for many years and while redundant storage provided by Storage Spaces is not a replacement for a proper backup, it does provide good protection against individual drive failures and some forms of enclosure failures.

When Windows 10 was just released, in addition to drive redundancy, it also allowed formatting Storage Space volumes as ReFS (Resilient File System), which added a layer of protection against bit rot and sudden power loss because of the way it performs disk writes. Later on, Microsoft removed the ability to format new volumes as ReFS from Windows 10, but existing ReFS volumes remained usable, and I assumed that Microsoft would be respectful of terabytes of data and would warn me that ReFS would no longer be maintained on Windows 10 when the time came.

That turned out to be a bad assumption and what followed felt like a gut punch.

From TinyMCE to CKeditor and back

When I wrote the first version of this blogs application in 2008, I initially used TinyMCE for posts and comments, but within a couple of weeks I switched to CKeditor because it handled HTML better and provided server-side support for image uploads. Years have passed since then and the last version of CKeditor I integrated into this application was v2.6, which was written so extraordinarily well that it continued working for me without any problems for over 10 years.

In the last couple of years, when I started noticing little problems, like Ctrl-B switching back to plain text on its own in Chrome, I decided it was time to upgrade the editor to the latest version. CKeditor worked so well for me over the years that the thought of checking out alternatives didn't even enter my mind.

Windows 11 - Twice as pretty, half as bright

Last week, after a large Windows update, my laptop popped up an offer to upgrade to Windows 11 before even getting to the sign-in screen. There were only Upgrade and Decline buttons, and while I didn't want to upgrade right at that moment, I didn't want to decline either. I pressed Esc and it continued. No idea if that was the same as declining, but checking Settings > Windows Update confirmed that the offer was still there.

I looked around for Windows 11 upgrade stories and couldn't find anything useful - all articles and posts described the new Windows 11 look and feel and had very little to say about features and general behavior. So, I decided to upgrade on the weekend and check it out for myself.

From ASP.Net to Node.js

I originally wrote this blogs application in 2008 in ASP/JScript, thinking that a JavaScript-like language would age better than VBScript, but soon realized that while that might be true for the language, classic ASP itself didn't have a lot of life left in it. This prompted me to rewrite the blogs in ASP.Net/JScript in 2009. This time I thought my choice of framework was quite clever and would surely outlast my need for a blog.

ASP.Net indeed has done remarkably well since 2009, but JScript didn't do nearly as well and Microsoft quietly dropped it from the platform at some point, so my choice of JavaScript as the server side language for my blogs needed another revision. Needless to say, Node.js was really the only choice to consider, so it was an easy decision.

Version? What's that?

Traditional applications rely on carefully maintained version numbers to communicate to users the set of features included in a package and the impact of upgrading from one version to another.

Website applications, on the other hand, are often upgraded by the website operator in their own environments and website users usually have no idea what version of the application is running behind the website UI, even if there is one.

Website applications are centered around user-visible features, which are being continuously developed and deployed to production environments, so grouping features into version levels for such deployments makes very little sense.

Looking for thoughts on HTML reports

HTML reports generated by Stone Steps Webalizer haven't changed much structurally since I forked the original project back in 2004. Current HTML reports use CSS styles wherever possible and have some JavaScript niceties, such as charts rendered in JavaScript and, less known, jumping between reports with Ctrl-Alt-Up/Down, but otherwise they remain the same monthly one-page reports with pre-formatted all-items sub-reports and a single collective index report.

Bitbucket fishing

A few months ago Bitbucket dropped the Stone Steps Webalizer project repository because Mercurial was no longer supported. Along with the source repository, Bitbucket also dropped the wiki, issues, downloads, and some configuration, like components and milestones, even though these had nothing to do with Mercurial.

I moved the project to another hosting service, but about a week ago I needed to follow up on one of the past Bitbucket issues and noticed that some images from the supposedly deleted repository Wiki were still accessible in Bitbucket cloud, which made me think that my repository was just disabled, but not deleted.

Five days till release

One cannot spend a week in software development without hearing how many days are left before some milestone, be that a release or just a sprint demo. Yet, I cannot remember ever hearing that there are only 120 hours left until the upcoming release. Sometimes 24 or 48 hours are tossed around to deliver the same message, but that's just saying that the release is in 1 or 2 days and that the team is expected to work around the clock to get there.

Most issue tracking systems, on the other hand, track work in hours, and one cannot help but wonder how work hours translate into days until that milestone. The simple answer is that they don't, even though most teams keep trying to have that cake and eat it too.
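One reason the translation fails is that a tracked work hour is not a clock hour: once you account for team size and for how few genuinely focused hours a day actually yields, the arithmetic looks more like the sketch below. The 6-hour capacity figure is purely an illustrative assumption, not a measured number.

```python
from math import ceil

def days_until_done(remaining_hours: float,
                    team_size: int,
                    focus_hours_per_day: float = 6.0) -> int:
    """Rough translation of tracked work hours into working days.

    The 6-hour default reflects that nobody produces 8 focused hours
    a day; meetings, reviews and context switches eat the rest.
    """
    daily_capacity = team_size * focus_hours_per_day
    return ceil(remaining_hours / daily_capacity)

# 120 tracked hours is "5 days around the clock" for one heroic person,
# but for a team of 4 at 6 focused hours each it is simply 5 normal
# working days - and for a single developer it is 20.
```

Even this sketch ignores weekends, interruptions and uneven task distribution, which is exactly why hour totals in a tracker never line up with the "days left" number on the whiteboard.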

What is build metadata good for?

Build metadata in Semantic Versioning is a commonly misunderstood concept that often sparks passionate online discussions about whether build metadata should be allowed in package repositories or not, and some of the confusion around this topic even seeps into prominent online services and applications.

CSS font size - it's not what it seems

Ever since I started using a 4K monitor, I noticed that I had to adjust browser zoom for almost every site where I wanted to read content and not just glance at some excerpt. Looking closer at CSS styles on some of those sites, I found that most of them used pixel unit sizes for pretty much everything, including fonts. I was puzzled at first by the widespread use of a technique that cannot compute sizes in a predictable way across various displays and devices, but further investigation confirmed that it was done on the advice of the CSS specification.
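The unpredictability is easy to quantify. A CSS pixel only maps to a fixed physical size through the display's pixel density and whatever device-pixel ratio the OS and browser apply, so the same 16px font lands at very different physical sizes on different monitors. A small back-of-the-envelope calculator (the monitor sizes below are just example numbers):

```python
import math

def physical_size_mm(css_px: float, diagonal_inches: float,
                     res_w: int, res_h: int,
                     device_pixel_ratio: float = 1.0) -> float:
    """Physical size of a CSS length on a given display, assuming the
    OS/browser maps 1 CSS px to device_pixel_ratio device pixels."""
    ppi = math.hypot(res_w, res_h) / diagonal_inches  # device pixels per inch
    device_px = css_px * device_pixel_ratio
    return device_px / ppi * 25.4  # inches -> millimetres

# 16px text on a 24" 1080p monitor at 100% scaling: ~4.4 mm tall glyph box.
# The same 16px on a 27" 4K monitor at 100% scaling: ~2.5 mm - which is
# why sites set in pixels need constant browser-zoom adjustments there.
```

Only when the OS applies scaling (a device-pixel ratio of 2 on that 4K monitor brings 16px back to roughly 5 mm) does pixel sizing land near what designers saw on their own screens, and that is the part site authors cannot control.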