
A simple but useful wiki

I reflected 3 years ago in this blog about the state of the wiki movement, and I even implemented a toy wiki in Scala at some point. If I check these pages, I’ve mentioned or linked to a wiki 90 times –true, Wikipedia is important, but there are other wikis–, so wikis are still a significant part of how I experience the web.

Recently I ended up on the pages of the Oddμ project again (yes, again). As the page says, it is a minimal wiki. And it is exciting and inspiring in its simplicity –although it was simpler when I first learned about it in Alex’s blog–.

Some time ago I mentioned the project to a friend who is perhaps a “hardcore” wiki user –certainly more than me–, and he wasn’t too impressed because of the limitations that come with Oddμ’s simplicity. One of the most significant missing features was edit history. I don’t recall the conversation 100%, so this may not be a direct quote, but: is Oddμ even a wiki?

I mentioned that on Masto, and it was pointed out to me that the original wiki –now a static site with awful navigation– didn’t have edit history initially, and that it was added later on to deal with spam and vandalism.

Someone shared two links to c2’s wiki that are very interesting and that led to this post:

According to the first link, a possible “minimal set of principles” that make a wiki could be:

  • Automatic link generation. You can refer to a page, and if it doesn’t exist, it generates a link for that page to be created.
  • Content editable by all. And that “all” can be a limited set of people, but the idea is about open collaboration.
  • Easy text input. Just a form and not HTML. Could be wiki text, markdown, a WYSIWYG, etc.
  • Back links. It must be possible to easily find which pages link to each page –there is a small sketch of one way to do it right after this list–. This one I hadn’t thought about!
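
As an aside, on a file-based wiki this doesn’t need anything fancy. A minimal sketch, assuming the pages are Markdown files in a pages/ directory and links use the [[PageName]] style:

# back links: list every page that links to SomePage
grep -rlF '[[SomePage]]' pages/

# links to pages that don't exist yet (candidates for "create this page")
grep -rhoE '\[\[[^]]+\]\]' pages/ | sort -u | tr -d '[]' |
while read -r page; do
    [ -f "pages/$page.md" ] || echo "missing: $page"
done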

Then there are additional features that are considered almost required because they improve the experience significantly:

  • Recent changes. Show which pages were edited within some period of time. Although, depending on the wiki, this doesn’t scale. Wikipedia allows you to subscribe to changes to a subset of pages, for example.
  • Find a page. Search support, and it is interesting that it wasn’t in the minimal set!
  • Page differences. See the changes made to a wiki page. Which may sound similar to the history feature we were discussing, but technically this could be the differences between the current version and the previous one.
  • List all the pages. This would be like a site map.

Then there’s another list of features that are offered by some wikis, and I think it is perhaps what we now expect to be the full feature set of any wiki engine. This list includes page history, user names and edit conflict resolution –an important one!–, among others.

The second link is about a competition:

What’s the shortest piece of source you can write that will implement a fully-featured wiki?

And there are links to some proposals, and they are super cute and inspiring. Evidently nothing you would run in “production”, but still a fun “code golfing” exercise.

Which left me thinking that I may want to host my own wiki. I know I tried to do something similar in this blog with my notes. I wrote some content, but then I realised I have been doing a significant amount of sysadmin work recently and I didn’t feel the urge to publicly document any of it. I guess editing a Markdown file and updating the static site isn’t really quick –not a lot of friction, but friction nonetheless–.

I’ve looked at wiki engines in the past, and they generally have something that makes my sysadmin senses tingle, and I end up giving up with “OK, I don’t need a wiki” –that is how the notes idea came about–. They are generally too complicated for my personal needs, or they use technology I don’t like to self-host, and all of that comes with some risk without a clear benefit.

In the end, after all this reading about what “makes a wiki” and getting inspired by Alex and his Oddμ, I’m also thinking that writing my own blog engine was a big part of why I blogged back in the early 2000s, so perhaps that’s what I need: to write my own wiki, and use it!

Back on XMPP

I started using XMPP in the early 2000s, when Microsoft Messenger was the instant messaging app, because good old Microsoft was abusing their market dominance with Windows.

Although there were multi-protocol messaging applications, and some supported Messenger, I was in my free software advocacy phase back then, so obviously I had to use a free protocol. At the time it was called Jabber, until Cisco acquired Jabber Inc.; I can’t remember exactly when, but at some point it became just XMPP.

XMPP was the main reason I stopped using IRC, until the slowness of standardisation and of clients adopting features kind of drove me away. At that point I was essentially using it with family, and a lot of basic features –compared with other protocols– were still missing. On top of that, smartphones were suddenly a thing, but there weren’t good XMPP clients. I wasn’t hosting my own XMPP server anyway, so there wasn’t much to lose and a lot to gain by moving to a closed protocol. Shame on me.

We used Skype for a while, then Google’s Hangouts –or whatever its name was that week–, and finally we settled on Signal as soon as it became “good enough”. And that is essentially what we are still using.

I have mentioned in this blog that I’m back to enjoying IRC, with 5 or 6 channels that are more or less active, and I love the conversation, including the retrogamedev channel that I started. And things are great, although sometimes I miss a Telegram channel focused on MSX –especially when I’m making an MSX game; more on that soon–. I can’t help thinking about XMPP, and how we believed it was the future.

So I have decided to give it another go. Today there are very good clients, like Gajim for desktop and Conversations for Android. There are plenty of full-featured servers where you can make a new account in a couple of minutes –if you don’t want to self-host–, and given that people are more exposed to federation thanks to the Fediverse, maybe there is still a chance for XMPP.

My approach is going to be different than the last time I used it:

  • Instead of using it to chat with my contacts –who are either on a closed platform or already using Signal–, I’m going to treat it a bit like email: you can find my address on my contact page –I won’t put it here in case I change it, so I don’t have to edit this post–, and anybody is free to use it to talk to me –using OMEMO for end-to-end encryption, if possible–.
  • I’m going to try some group chats –which is what channels are called in XMPP–. There is a search engine; although not all channels may be listed there, it is a good starting point. I may not find a place to discuss everything that interests me, but I can always start a channel –like I did with the IRC channel– and see what happens.
  • XMPP can be very similar to IRC, or even Mastodon –and I met plenty of nice people in both places–. I have said it before: IRC is possibly my main social network these days.
  • In a way, history repeats itself. This time it is Discord instead of Messenger, but perhaps by discussing XMPP publicly and using it, the next time a community decides it needs a place to chat, they may not end up using a closed platform.

I know there is also Matrix these days, and it has its pros and cons. It doesn’t look to me like a clear winner in the free and open protocol space. Besides, maybe the answer is not OR but AND –I’m not leaving IRC, by the way–. I may investigate it in the future!

Update 2025-06-01: I’m running my own XMPP server using Prosody. Very easy to set up in Debian!
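
For reference, the rough shape of that setup on Debian –not my exact configuration, and the address is just an example–:

sudo apt install prosody
# add a VirtualHost for your domain in /etc/prosody/prosody.cfg.lua
# (or in a file under /etc/prosody/conf.d/), then:
sudo prosodyctl check config
sudo prosodyctl adduser me@example.net
sudo systemctl restart prosody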

My home server

I had a home server at home (obviously) from around 2001 to 2009, in different shapes and sizes. Mostly old recycled PCs, but also a dedicated “mini PC”: a VIA EPIA-PD6000E at 600MHz with 512MB of RAM. It was fun and I learned a lot about services and system administration –it ran OpenBSD!–, which was very useful in my early career.

But then I moved to a different country and had a VPS or two with a hosting provider, so the home server idea kind of wasn’t a thing anymore. And it makes sense: I have, as I say, a couple of servers, and accounts on some “shared systems” like SDF or ctrl-c. Fine, I’m not going to administer services on those two, but I have been doing it on my own servers since 2002.

Anyway, the thing is that… it lost the magic for me, despite having 3 or 4 Raspberry Pis that, one way or another, ended up in a drawer accumulating dust.

Not always, though. At least one of them, a Pi 1 if I’m not mistaken, got some use as a multimedia centre about 7 years ago. It had composite video output and I had an old TFT TV –that I still use with my old 8-bit systems–, so it was a cool retro experience to have in the kitchen so the kids could watch cartoons.

But the home server idea was dead. Or that’s what I thought, because this week I decided to put to use a Raspberry Pi 2B v1.1 that I think I got as a gift from an employer after being with them for 5 years. Interestingly enough, it is much more powerful than my last home server, much cheaper –even if I had paid for it, which I didn’t–, and silent!

The black case of my rPi 2B

These machines are so cute!

Sure, I have my servers that I administer, with a handful of services. But those are production and I try to be efficient with my free time, so I like them to be stable… and boring. So I don’t experiment or try new things, like I used to do 24 years ago.

Now I have this little black box that I installed in less than 3 minutes –the Raspberry Pi Debian images are very cool–, and then, without thinking much about it, I did the usual setup for a production server.

Then I decided to run unbound on my LAN, which required running my own DHCP server because my ISP’s router doesn’t support handing out your own DNS resolver –I assume to prevent customers from breaking things–. It all came to me so naturally, without effort. And it felt good.
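
The relevant bit is just telling the DHCP server to hand out the Pi as the resolver. As a purely hypothetical example, with dnsmasq doing DHCP and the Pi’s unbound on 192.168.1.2, it would be a few lines in /etc/dnsmasq.conf:

# if dnsmasq runs on the same Pi as unbound, disable its own DNS
port=0
# serve DHCP leases on the LAN
dhcp-range=192.168.1.100,192.168.1.200,12h
# hand out the Pi's unbound as the DNS resolver to DHCP clients
dhcp-option=option:dns-server,192.168.1.2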

I’ve been very pragmatic. I know the SD card will fail eventually –the rPi and the card are around 10 years old–, but given that I can disaster-recover in 20 minutes by reinstalling packages and dropping my configuration files in the right place, I’m not even cloning the SD card; I just check the configuration files into a git repo that I push to one of my VPSes.

And that’s all, for now. I think I’ll run some experiments with cool stuff that I would hesitate to run in one of the production servers, and just have good old fun with Linux.

Playing Skyrim

I used to write more often here about the games I was playing, especially because it was a rare event.

Now things have changed a bit. Essentially because my PC broke –no idea why, which is what happens with modern computers: they just stop working–, and I got a modern-ish replacement.

It isn’t a gaming PC, but it has a Ryzen 9 CPU that allows playing some stuff, including games that aren’t that new but that look amazing to me regardless of not being state of the art any more. For example: The Elder Scrolls V: Skyrim.

Some nice views in the game

Not the best views, but unfortunately I didn't think about taking screenshots until now

When playing new games, I don’t seem to stick with them for more than a handful of sessions, and it feels like real effort! Until I tried Skyrim. I’ve been playing for 45 hours so far, and my interest hasn’t waned at all.

Sure, some of the side quests are awful –like when you stumble upon a bandit hideout and of course you murder everybody–, and the main quest is not super original; but there is something about walking around foraging ingredients to make potions, or visiting new places –not necessarily dungeons–, or going back to my house in Whiterun and talking to the two girls I adopted –I play a woman and they call me mommy– that clicks with me. I had tried Oblivion –the previous game in the series– before this one, but after a few sessions I forgot to keep playing it.

Fighting a dragon

You get to kill a few of these in the game and some people say that after a while it gets old

Also the difficulty seems OK for me, so far. I’m playing on the default setting and only a couple of events felt a bit unbalanced –and there are some caves with vampires that I had to give up on; but some day I will return and they’ll see!–, and another couple of encounters required some creativity. In one of those, my first companion Lydia sadly died, and I refused to reload a save: that’s my experience of Skyrim and I will take it as it is.

Of course everything is scripted, but you still have some agency, and it being an open-world game, I feel like it is my story.

As always, I’m not sure how much I’m going to play, but I don’t feel like quitting for now. I know it is not too original, but considering that I don’t play many new games, Skyrim is probably my favourite of the modern era!

logwatch and systemd/journal

I’m a bit old-fashioned and I like logwatch, so all my servers send me an email every day with a handy summary of what happened on the server. And sometimes I even read those emails! It is probably not as useful as logcheck, but it is easier to use.

Anyway, from the man page:

Logwatch is a customizable, pluggable log-monitoring system. It will go through your logs for a given period of time and make a report in the areas that you wish with the detail that you wish.

I have been using it virtually forever, and I was setting up a new server last weekend, so of course I had to get that daily email. But it turns out a fresh install of Debian 12 comes with systemd-journald –my other servers were upgraded, so they still use the old logging system–, and there are no sshd logs that logwatch can process. At least not in the usual place.

In reality systemd-journald is not that different from what you get with rsyslog, but some of the differences are annoying, like the log being binary, which means you can’t use the text-processing tools you are used to on plain files; you need journalctl. And that is what prevents logwatch from checking sshd’s logs, because there is no auth.log file.

I don’t like the direction most Linux distributions are taking by embracing systemd and its ecosystem, but I trust Debian, even if some decisions are controversial. In theory systemd-journald improves on a few things, but in practice none of those really make a difference to me, and I’m only left with the annoyance of things that used to work and now don’t.

This time I decided to see if I could still use it, instead of just installing rsyslog like on the other servers. And it turns out logwatch can interact with journalctl.

We only have to add a file in /etc/logwatch/conf/services with the name of the service ending in .conf, in this case sshd.conf, with the following content:

LogFile =
LogFile = none
*JournalCtl = "--output=cat --unit=ssh.service"

With LogFile you specify a logfile group, and it is required. You can provide as many entries as you want and they will be merged. We don’t really have a log file, which is why we need to provide two entries: one empty to clear any value, and the other with the magic string none for no logfile group (we could also create a logfile group pointing to an empty log file, but this is cleaner).

Then *JournalCtl refers to a script in /usr/share/logwatch/scripts/ that will interface with journalctl, and will enable logwatch to process the missing logs.

Once the file is in place, you can run logwatch with /etc/cron.daily/00logwatch and you should get your email, including the report of the sshd logs (you can also just run logwatch and get the report on the console, but testing end-to-end is nice in this case).
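
You can also test just this service from the console with something like:

# report only sshd activity for today, printed to stdout
sudo logwatch --service sshd --range today --output stdout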

I assume I will find other cases in which journalctl gets in the way and I may end up installing rsyslog anyway, but for now things work!

The problem of the LLM crawlers

The founder of SourceHut, an open source platform to build software collaboratively –sometimes referred to as a forge–, has written a post on his blog that shows the scale of the problems that bad crawlers feeding AIs are causing:

If you think these crawlers respect robots.txt then you are several assumptions of good faith removed from reality. These bots crawl everything they can find, robots.txt be damned, including expensive endpoints like git blame, every page of every git log, and every commit in every repo, and they do so using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses – mostly residential, in unrelated subnets, each one making no more than one HTTP request over any time period we tried to measure – actively and maliciously adapting and blending in with end-user traffic and avoiding attempts to characterize their behavior or block their traffic.

I know other people suffering from these new crawlers, which are essentially very aggressive –or in some cases likely broken–, don’t follow the rules, and try to disguise themselves as regular traffic. For example, read what Alex has been writing about trying to keep Emacs Wiki online –and a follow up, and another, and it won’t stop, etc–.

As Drew mentions in his post, you can find a lot of system administrators struggling with this, just because they are publicly sharing source code. Exactly as it sounds: because they have decided to distribute open source.

I have been self-hosting my repositories since 2023 and, although it is not really a forge, I managed to make it small scale and work for me by providing a web interface on git.usebox.net. It has “about pages” –rendering the README; see for example the SpaceBeans page–, and together with email and RSS feeds to track releases, it just works.

I noticed some issues on my server last year, but I attributed them to a mistake in my cgit setup, honestly thinking it was less performant than I was expecting, and just configured a cache. Caching is supported by the tool, but being optional I had thought I probably didn’t need it.

Setting up the cache seemed to fix the issue, and I didn’t investigate further. Until a few weeks ago, when I was reading on Mastodon how someone was having a hard time dealing with these bots, and I realised it was probably happening to me too and I just wasn’t paying attention. And that was the case!

Essentially it is what Drew is describing –although much smaller in my case–, and for me it seems to always come from the same IPs owned by Alibaba Cloud –which seems to have its own “Generative AI” product–. The day I checked the logs, I had over 200,000 requests in 24 hours coming from only two IPs.

My first impulse was to check with whois who owned the IPs and, because they belong to a cloud company, block the whole range on the firewall. Then I kept monitoring the situation for a couple of days.

Of course they kept coming, so I kept blocking. At some point there were a few /16, /15, and even some /14 ranges in my block list. That was already 681,504 IPs, all owned by the same cloud company.

Because I have better things to do –really–, I wrote a small script that bans IPs that make what I consider “an abusive number of requests in 24 hours”, and keeps the ban until they stop the abuse for 2 complete days. I don’t think this should affect legitimate users, but if you experience any issues, please contact me and justify why you need that volume of requests!
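
This is not the exact script, but the core idea is simple enough to sketch –assuming an nginx-style access log and a pre-existing nftables set called banned; the real thing also takes care of lifting the ban after the 2 quiet days–:

#!/bin/sh
# count requests per IP in the access log and ban anything above the
# threshold by adding it to an nftables set used by a drop rule
LIMIT=10000
LOG=/var/log/nginx/access.log

awk -v limit="$LIMIT" '{ hits[$1]++ } END { for (ip in hits) if (hits[ip] > limit) print ip }' "$LOG" |
while read -r ip; do
    nft add element inet filter banned "{ $ip }"
done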

I did this on principle, because my forge is very small and I can handle the load; it wasn’t strictly necessary to block these bad actors. But I know people who couldn’t spend time on the problem and had to make all their open source repositories private, which is in my opinion the tragedy of all this: we are sharing code with the rest of the world, and the abuse of these companies trying to profit from it is ruining it for everybody.

I can’t help thinking about the paperclip maximizer.

Using a light theme

Spring is coming and we have more hours of sun, or light at least, so I’ve been suffering headaches and pain around my eyes –likely eye strain–. My solution for this has always been closing the curtains so I can keep doing my work –which requires staring at a computer screen–. It is not living in darkness, but I was avoiding bright light getting into my eyes.

I’ve been using a dark theme in my terminal and my editor –which is what I use most when I’m programming– since forever. I can’t put a date on it, but it has been over 20 years. I always found the black background more comfortable, and my screens are already set to very low brightness.

And over the years dark themes have become more popular, and now virtually everything, from your computer to your phone, can be used with a dark theme; some websites (like this blog!) even support both light and dark themes depending on the settings of the user –the browser chooses the right CSS–.

There is also an aesthetic consideration: it just looks better to me.

But I also know there is some scientific evidence that dark themes aren’t really the best for people with astigmatism –and to some extent myopia–. Essentially, a dark mode requires your pupils to dilate, which can make it harder to focus on the screen, and also results in the foreground content bleeding into the dark background, making it hard to read –especially with small fonts–. And that can lead to eye strain.

For some time now I have had my phone configured to use both dark and light themes depending on the time of day: during daylight hours it uses a light theme, and at night a dark theme. And it turns out I can see the screen better during the day!

So I have been running an experiment with the work laptop –which I use during the day–, and changed everything from a dark theme to a light one. Because I’m using the excellent gruvbox-material, it was very easy to switch my editor.

Screenshot of neovim editing a file using a light theme

Gruvbox Material (light version)

After a couple of days in which I didn’t close the curtains, I think I can feel the benefits: no eye pain or headaches! So I have decided to transition completely, including on my personal machine, with the caveat that I now need to ensure I always have good lighting when I’m using the computer –especially at night–, so the bright light coming out of the screen doesn’t cause other problems.

Perhaps Gruvbox is not the most popular theme, but it is popular enough that you can find settings for most popular applications. In my case there is already a built-in theme in WezTerm, and it was easy to find a theme for tmux.

Then I had to set my desktop to use a light theme, and everything else changed, with the exception of my i3 theme, which for now I’m keeping on the Gruvbox dark theme because it looks great and it is a very small portion of the screen that won’t affect my eyes.

I’ve never been an advocate for dark themes –use whatever works for you and leave me alone!–, and this post is not me recommending a light theme. It works for me and I wanted to share the experience.

Using my own DNS resolver

Many years ago I used to have a home server. It was connected to elxwifi, a metropolitan area network built on WiFi, and also to the Internet.

It was hosting my blog and a few more things, so it kind of made sense to provide some services to the local network, like a good firewall with QoS –back then residential connections didn’t have much upload bandwidth–, an HTTP proxy for caching, and a DNS resolver.

I was reading the other day how reCAPTCHA has wasted 819 million hours of human time and led to billions of dollars in profits by helping Google in their tracking business.

Re-captcha takes a pixel by pixel fingerprint of your browser, a realtime map of everything you do on the internet.

And Cloudflare Protection achieves a similar goal: when you are forced to “prove that you are human”, it is just because they don’t have enough tracking information about you, so… you could be a bot. Because apparently that is what differentiates humans from robots these days?

For whatever reason it bothered me that Mozilla uses DNS over HTTPS with Cloudflare, and although they have a clear privacy policy, big tech has exhausted any trust I had left.

And what about my Internet provider’s DNS resolver? Well, my provider –like many others– implements a DNS hijacking service, so if you try to resolve a domain that doesn’t exist in your browser, they redirect you to a landing page they own. This can be disabled, but we are back to trust –why is this opt-out?–.

I don’t have a 24/7 server at home, so today I’m not going to implement this for my whole local network, but I fancied the experiment on my machine.

Please take into account that this might not be a good idea for you. My PC never leaves my desk and it always uses my home connection, so the use case is not the same as if I was using a laptop on a coffee shop’s free WiFi. I would say using Mozilla’s DoH may be your best option!

I installed Unbound:

Unbound is a validating, recursive, caching DNS resolver. It is designed to be fast and lean and incorporates modern features based on open standards.

My OS is Debian 12, so I just run:

sudo apt install unbound

The configuration is in /etc/unbound and there is a full commented example in /usr/share/doc/unbound/examples/unbound.conf.

I recommend reading the base configuration, but essentially Debian enables remote control on localhost, which is handy to check stats and manage the service using the unbound-control tool as root.

I added a local.conf file in /etc/unbound/unbound.conf.d:

server:
  username: "unbound"
  directory: "/etc/unbound"
  do-ip6: no
  interface: 127.0.0.1
  port: 53

  cache-max-ttl: 14400
  cache-min-ttl: 1800

  hide-identity: yes
  hide-version: yes

I can’t remember if I had to do anything else, but it is managed by systemd, so you can run the usual commands, starting with systemctl status unbound.
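
A quick way to check that it is answering queries (dig comes with Debian’s dnsutils package):

dig @127.0.0.1 debian.org +short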

Then I had to make two changes to use the new resolver:

  1. In Network Manager, I edited my wired connection –yes, I don’t use WiFi on this machine; there is an nmcli equivalent sketched after this list–, setting the method to “Automatic (DHCP) addresses only” and the DNS servers to “127.0.0.1”. Then restart the connection to apply the changes. When all is done, your /etc/resolv.conf should look like this:
# Generated by NetworkManager
nameserver 127.0.0.1
  2. In Firefox, open settings and search for “DNS”. In “Enable DNS over HTTPS using”, select “Off, use your default DNS resolver”.
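
The nmcli equivalent of the Network Manager change should be something like this –assuming the connection is called “Wired connection 1”–:

# ignore the DNS servers offered by DHCP and use the local resolver instead
nmcli connection modify "Wired connection 1" ipv4.ignore-auto-dns yes ipv4.dns 127.0.0.1
nmcli connection up "Wired connection 1"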

And that should be all.

I don’t have any scientific proof, but browsing feels snappier, and I guess it makes sense because for cached name resolutions there is no need to go to Cloudflare at all!

After a bit of browsing you can run unbound-control stats_noreset (the plain stats command resets the counters), and get something like:

thread0.num.queries=26297
thread0.num.queries_ip_ratelimited=0
thread0.num.cachehits=13232
thread0.num.cachemiss=13065
...

There is no need to be an expert to more or less understand what these mean.

It was all very easy, and it took much more time to write this post than to set it all up. This could be a good service to offer to the local network, so perhaps I have finally found a good use for one of my Raspberry Pis!

Web based minigame

Yesterday I published a minigame I made in JavaScript following the ideas of my 7yo son. He drew most of the graphics –I gave him a hand with the slimes’ sprites–, and I had never made a twin-stick shooter type of game, so that was interesting. I hadn’t played any games in the genre either, and I think I now understand why they can be very fun!

It was a good exercise to use the JavaScript codebase with canvas 2D that I put together in 2023, refreshing my knowledge of the language with the shiny new bits from ECMAScript 2016 that didn’t exist –or that I wasn’t aware of– when I made some web games back in 2014, like Flax.

It is still more like a tech demo, but playable –and kind of fun for 10 minutes–, and it helped me make some improvements to the “engine”. The development experience was OK, but the bits I don’t like about JavaScript are still there, including the hit-and-miss performance of canvas 2D on Linux.

I don’t see myself writing larger games in JavaScript, but for small prototypes or jam games –if I was still doing jams!–, it could be a good match.

The source is available in js-twin-shooter.

Reading feeds should be easier

This is a bit of a rant. You can skip it or, depending on your experience, you may come along for the ride nodding.

I have spent a couple of evenings improving my feed reading experience, and I’m not sure why it is so difficult in 2025 to read blogs like we did in the early 2000s. Was it bad back then? It is possible it was!

Let’s set the starting point:

  • I don’t want to use a cloud service to read my blogs. I only read blogs on my main PC, never on my phone –I used to, but I realised I wasn’t really reading them–.
  • Although it is not necessarily linked to the previous point, I don’t want to use a website either. Essentially because I don’t want to self-host a complex application I would have to maintain just to read blogs on my PC. I know I could just run some containers and whatnot on my PC, but that’s over-engineering it.
  • If I’m using a native application, I want it to be native. We used to have those applications; now we have Electron and Flutter, and probably others I don’t want to know about. Let websites be websites, and native applications, well… something else.

I thought it would be easy! There aren’t that many options that are open source, work on Linux, and are still maintained: do you want to download and process content from the Internet with an application that has been untouched for years? Sounds fun!

I started using Liferea over 20 years ago, but a few months ago something happened –can’t remember what, it could have been me!–, and I decided to look around to see what was out there.

And I found NewsFlash. Although not exactly: what I found, thanks to the decadence of the search engines –topic for another day–, was a lot of click-bait sites saying how amazing NewsFlash is. And it really looks good, I agree!

The easiest way of using the latest version is via its official flatpak, which I’m not the biggest fan of, but you can’t have it all. And slowly but surely you realise that all those sites raving about NewsFlash haven’t used it for more than 10 minutes.

Don’t get me wrong. NewsFlash is Open Source and it keeps improving with each release. I have reported bugs, the response times of the main developer are fantastic, and I believe it has a bright future ahead. It is just that currently it isn’t a good match for my needs, and I don’t have the time –or the skills, to be honest– to contribute and make it work for me.

Hopefully I’m not being unfair or too picky, but I hit bugs importing an OPML file –which I resolved by editing the sqlite database by hand, fun!–, exporting to OPML doesn’t seem to overwrite an existing file correctly and can result in a corrupt file –that seems to be fixed already!–, it doesn’t seem to render the fonts I choose –probably Flatpak’s fault–, and there were other small paper cuts I can’t remember right now.

Oh, and the last straw was when it crashed my whole system. I can’t tell 100% what happened because the system didn’t respond and I had to power-cycle, but my i3 status bar was reporting 800MB free of the 32GB of RAM that I have on my PC. Not bad for an application written in Rust!

In the end, I’m back with Liferea. I edited ~/.config/liferea/liferea.css to:

div.content {
    margin: 2em;
    max-width: 50em;
}

body {
    font-family: "Garamond";
    font-size: 16px;
    line-height: 1.5;
    color: rgb(240, 240, 240);
    background-color: rgb(32, 32, 32);
}

And with GTK on a dark theme, it looks decent enough. I’m back in a happy place: the tools I use work and I’m not frustrated any more.

Then perhaps we should take a look at the current state of the feeds out there:

  • RSS is still popular, although it has many limitations. And I know, because I use RSS! At the very least it needs some extensions from Atom, but it is common for a feed to use the post creation date in pubDate instead of the date it was last modified, resulting in updates to the post never showing in the feed reader. I am conflicted on this one: is it RSS’ fault or the reader’s limitation? I don’t know, but unfortunately it is hard to detect. We should be using Atom instead, because it makes a distinction between published and updated –there is an example entry after this list–.
  • It is not common, but some people –including me– like tinkering, and we end up with bad implementations of RSS or Atom –e.g. the images won’t load in the feed reader–. You can contact the author, but let’s be real: people are busy and I didn’t get results, so no images on that blog.
  • Some feeds don’t include the full post. I don’t know why, but back in the day this was because the blog post had ads and those don’t show in the feed. NewsFlash deals with this beautifully, and it turns out Liferea can do it as well –although it is not 100% consistent applying its theme, it is close enough–. This “retrieve the post from the web” approach also works around the two previous problems, but then it feels like we are failing a bit at syndicating content.
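
For illustration, this is what a made-up Atom entry with both dates looks like –a reader can tell the post was edited after it was published–:

<entry>
  <title>Example post</title>
  <id>https://example.net/posts/example</id>
  <published>2025-05-01T10:00:00Z</published>
  <updated>2025-05-20T18:30:00Z</updated>
</entry>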

Finally, it is kind of difficult to find new interesting blogs.

Back in the early 2000s we had blogrolls and comments, and both were an excellent way to build your own community of blogs and find new content organically.

Unfortunately all that was ruined by Google’s dominance in search and ads, and their PageRank, which led to SEO and spam, because links suddenly had value beyond their original intent of linking content. Having comments on your blog wasn’t useful anymore, because they were mostly spam, and most of us removed the functionality.

Similarly, people started to drop their blogrolls because of all that perceived value in the outgoing links. I don’t think it matters anymore, because Google buried blogs deep in their search results, so you may as well have the links. If I stopped having a blogroll it was mostly because the blogs I was reading disappeared. And I lost the community part of blogging.

My blogroll has been back for about a year now, and we’ll see about the community. Go and check it out, you may find blogs that you like.

If you got to this point, I hope it wasn’t too bad. Despite all these arguably small things, I love it and I’m slowly building a list of blogs that I enjoy reading, and I’m even writing about it!