DOS-related jams in 2024

Last year I released two new DOS games (Gold Mine Run! and The Return of Traxtor), both made in the context of a game jam, even if I don’t really do game jams anymore.

Anyway, it was a lot of fun and it pushed me to get up to speed with the DOS platform, remembering in a way what I knew back in the late 90s, and going beyond that. And there are more DOS jams in 2024!

There could be other jams this year, like the DOS Games Jam –which had its first edition in 2020–, but these two jams tick a couple of boxes for me:

  • The target platform must be DOS (in other game jams it is more about the feel, but I prefer going a bit more retro and making a game that runs on the actual hardware).
  • There are some interesting limitations: 8086 code in one of them (meaning: Intel 16-bit), and a COM file in the other (the whole game must fit in 64K; there is a small sketch of that budget below).
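
Out of curiosity, that COM budget is easy to keep an eye on during a build. A minimal sketch in Python –the output file name is hypothetical, and of course none of this is part of the jam’s rules–: a COM executable is loaded at offset 0x100 of a single 64K segment, with the first 256 bytes taken by the PSP.

# hypothetical build check for the COM file limitation
import os
import sys

MAX_COM = 65536 - 256  # 65280 bytes available to code and data

size = os.path.getsize("game.com")  # hypothetical output file name
print(f"game.com: {size} bytes ({MAX_COM - size} bytes left)")
if size > MAX_COM:
    sys.exit("error: game.com doesn't fit in a 64K segment")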

Since I released The Heart of Salamanderland for the Amstrad CPC last month, I’ve been going through some ideas for potential games, and I was suffering a bit of choice paralysis and not starting anything (nor continuing one of the on-hold projects).

When I released Gold Mine Run! I put together a DOS library to make games with DJGPP, which is 32-bit and not useful for either of these two jams. But because I made Traxtor targeting the IBM PC/XT, I have some interesting code that I can reuse.

I’m toying with the idea of making something for the first jam, although I’m not sure what I’m going to target yet. I don’t feel like doing CGA again –too soon, and it is hard to draw nice things I guess–, EGA is such a pain to program, and VGA may not be a great match for the jam’s limitations.

We will see. A month is not a lot of time, but at least this time I’m not starting from scratch like when I streamed the making of Gold Mine Run!

Update (2024-07-13): the DOS games July 2024 jam will be on as well in a couple of days!

Amtix! CPC

Fusion Retro Books, besides publishing very nice retro-computing inspired books, has been reviving some of the old magazines that were popular in the United Kingdom in the 80s. That is Crash for the ZX Spectrum, Zzap!64 for the Commodore 64, and Amtix for the Amstrad CPC.

If I’m not mistaken, it all started with annuals for the Speccy and the Commodore, which were (are?) like a slightly larger magazine in a nice hardback edition, covering games released in the last year –with exceptions: they also cover new games that were released before the annuals were a thing–. Because we were living through a retro boom and the annuals were successful, Fusion got the rights for the magazines and started publishing them in a small A5 format with 60 pages.

It looks like, of the three, the Amstrad CPC has the smallest –or least active– community of users, so when Fusion was touching base trying to find out if there was enough interest to sustain an Amtix revival, I was supportive of the idea –and excited!–. But as I recall it, the response from the community was lukewarm at best.

Hyperdrive was reviewed in Amtix #7

There were different reasons. For example, some people didn’t like the original Amtix, so I thought it was not going to happen. That’s why we can’t have nice things, etc. But it finally happened, and Fusion decided to give it a go with 12 issues and, depending on the support, perhaps keep going.

We just got the 12th and last issue of Amtix CPC, and I’m not surprised.

I do think that the Amstrad community is sometimes weird, and in reality not that different from other retro-communities. We see that with new games, when people are very quick to ask for physical releases, but very often the sales of those editions don’t justify the effort –and I’m not talking about profits but about just breaking even–, or with people looking forward to playing a game while it is in development, but when the game is released… silence.

I find this frustrating, but what can be done? The ones wishing there were more things happening around the Amstrad CPC are not always the ones that will support those initiatives, and being a smaller community than the ones behind the ZX Spectrum or the Commodore 64 (or the MSX!) means that we have fewer options (physical editions, magazines).

Although all this is true, in the case of Amtix, I think there are other reasons at play.

In my opinion the magazine was a bit hit and miss, and this is probably not unique to Amtix. I was subscribed to both Amtix and Crash, and by Amtix issue 7, I decided to cancel my subscription because I wasn’t enjoying reading the magazines.

What didn’t work for me? I guess it was a mix of different things: layout problems, some of the texts were very amateurish, reviews of games that shouldn’t be there –a free game that is not good: why waste pages on it when there are more games to review?–, and in general it felt like they wanted to revive exactly what those magazines were in the 80s without realising that we live in a different time.

Yes, I know. But I said it already, the Amstrad community is sometimes weird, and I’m part of it.

Perhaps it could have been better, or different, but I’m glad we had Amtix back even if it wasn’t perfect. You can still get the magazine from the Fusion Retro Books page on Amtix CPC.

The Heart of Salamanderland is out!

I announced here in December last year that I was working on a new Amstrad CPC game, although I had started the project in November. I’m still quite hesitant to announce a new game until I know I will finish it, despite having done this for almost ten years now.

The game is essentially what I planned: whip action fighting enemies, a good-sized dungeon to navigate, and levers and keys to implement some basic puzzles. Which is always a bit of a refreshing surprise, because very often the finished game is not quite what I had in mind when I started the project.

I took some risks on this one by using an engine I wrote for performance, which requires a lot of memory for a 64K game –essentially 16K for the hardware back buffer–, and that can always affect the size of the resulting game.

I put a lot of time into the encoding of the screens, using a new idea I had never implemented before, based on meta-tiles of variable size –a bit more advanced than what I described here–, and in the end I managed to cram in 6 different enemy types with their own behaviour, a final boss, 55 screens, and 4 music tunes –menu, in-game, game over and boss fight–.
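
The details live in the game’s code, but the core idea can be sketched in a few lines of Python –everything here (the table layout, names and sizes) is illustrative and not the actual data format–: a screen is stored as a short list of meta-tile references instead of a full tile map, and each meta-tile is a rectangular block of tile indices of arbitrary size.

# illustrative meta-tile expansion (not the actual game code)

# hypothetical meta-tile table: id -> (width, height, tile indices)
META_TILES = {
    0: (1, 1, [3]),         # a single decoration tile
    1: (4, 2, [1, 1, 1, 1,  # a 4x2 platform block
               2, 2, 2, 2]),
}

def draw_metatile(tile_map, mt_id, x, y):
    """Expand one meta-tile into the screen's tile map at (x, y)."""
    width, height, tiles = META_TILES[mt_id]
    for row in range(height):
        for col in range(width):
            tile_map[y + row][x + col] = tiles[row * width + col]

# decoding a screen is just expanding its list of meta-tile references
tile_map = [[0] * 20 for _ in range(16)]
for mt_id, x, y in [(1, 2, 10), (1, 10, 8), (0, 0, 15)]:
    draw_metatile(tile_map, mt_id, x, y)

Repeated structures –platforms, walls– are stored once in the table and cost only a few bytes per screen, which is where the savings come from.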

In reality it is not that you have 64K: in this case it was 32000 bytes of usable memory, and I finished the game with 162 bytes left. It was tense when I had to fix a couple of last-minute bugs!

One of the screens of the game

The game is framed in the fantasy world of my 7-year-old son, mixed a little bit with the universe of the Fablehaven novels by Brandon Mull, which happens to be mostly “compatible”.

You could argue that this is not that important, because we all know that what was written on the inlay of the 8-bit games from the 80s was just some filler that may or may not fit the actual game, but in this case it was an important source of inspiration when designing the enemies and the mood of the game. I don’t know if I succeeded, but my son is happy with the result, and that’s the gold standard for Salamanderland!

I made some decisions regarding gameplay that I knew could be controversial: tight jumps –I implemented “coyote time”, so it shouldn’t be that hard–, and the classic “3 lives and game over”. It is an 8-bit game, so these shouldn’t be a problem, should they?
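
For those that haven’t seen the term before: “coyote time” is a short grace period after walking off a ledge during which jumping is still allowed. A minimal sketch of the idea in Python –the names and the frame count are made up, and obviously the actual game doesn’t use Python–:

COYOTE_FRAMES = 6  # grace period in frames after leaving the ground

class Player:
    def __init__(self):
        self.on_ground = False
        self.coyote = 0
        self.jumping = False

    def update(self, jump_pressed):
        if self.on_ground:
            # while standing, the grace period is always topped up
            self.coyote = COYOTE_FRAMES
        elif self.coyote > 0:
            # falling, but still within the grace period
            self.coyote -= 1

        # a jump is allowed on the ground OR during the grace period
        if jump_pressed and (self.on_ground or self.coyote > 0):
            self.coyote = 0
            self.jumping = True  # the real code would set a vertical speed

p = Player()
p.on_ground = True
p.update(jump_pressed=False)  # grounded: grace period refreshed
p.on_ground = False
p.update(jump_pressed=True)   # just walked off a ledge: jump still allowed
print(p.jumping)              # True

The effect is that a jump pressed a few frames “too late” still counts, which softens the tight jumps without touching the level design.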

My hope is that players will persevere and get to enjoy the game for what it is. Of course there will be people that will get frustrated and quit, but that’s something that can always happen, no matter what game you make. If you can’t make everybody happy, be sure it is you who is satisfied with the result.

The game can be downloaded and played for free here: The Heart of Salamanderland, and a physical edition by Poly Play is planned for later this year –more information about that soon!–.

People's expectations regarding Open Source

I was reading “Is This Project Still Maintained?” by Matt Palmer:

There is a delusion that “maintained” open source software comes with entitlements – an expectation that your questions, bug reports, and feature requests will be attended to in some fashion.

I couldn’t agree more, as I have experienced that myself a few times, including some people being a bit too pushy for my personal taste: I accepted patches and features because my project was upstream for them, and it was obviously better if I maintained those changes instead of them keeping track of my project as a soft fork.

Some of those changes I accepted. I recall, for example, scp support in an SFTP proxy for OpenStack Object Storage, despite scp not really being part of SFTP, although I knew I was not going to use that feature and I was very sure I didn’t want to maintain it.

The consequence is that it makes the maintainer less happy and, although I was employed by a company for which I was maintaining that open source project, it didn’t feel right and made me think twice about whether I really wanted to release more open source projects.

I recall reading a blog –I can’t find the link– where the author said that they would never release anything useful, so they wouldn’t get bullied into maintaining it afterwards. It sounds a bit extreme, but it makes a lot of sense.

Matt includes in his post what he calls The Open Source Maintainer’s Manifesto:

I wrote the software in this repo for my own benefit – to solve the problems I had, when I had them. While I could have kept the software to myself, I instead released it publicly, under the terms of an open licence, with the hope that it might be useful to others, but with no guarantees of any kind. Thanks to the generosity of others, it costs me literally nothing for you to use, modify, and redistribute this project, so have at it!

Perhaps I don’t fully share the tone in which he wrote the whole piece, but that’s the gist of it: when I release something as open source, I see it as a gift to everybody, and the licence makes it crystal clear what anyone can expect from it. Yet, it is difficult.

I wasn’t happy with some of the contributions to my ubox MSX lib project and didn’t accept them –which ended in a fork, and that’s OK!–. Other changes I accepted, and I’m still unhappy about it, because that wasn’t really why I released that code as open source after a lot of work to document and prepare the whole thing.

So it may seem I’m making it more complicated to get contributions now that I’m self-hosting the project outside a forge –it was on GitHub, then on GitLab, and now I self-host it–, but in reality I’m just keeping in the open something I made for myself and am sharing with the world.

But it is not really a product, and I don’t want it to be. Thanks!

More GPL and fewer CLAs

I recently read Corporate Open Source is Dead by Jeff Geerling, which elaborates on the fact that there is a noticeable pattern of corporate open source turning proprietary and adding to the list of formerly open source or free software:

2024 is the year Corporate open source—or at least any remaining illusions about it—finally died.

It’s one thing to build a product with a proprietary codebase, and charge for licenses. You can still build communities around that model, and it’s worked for decades.

But it’s totally different when you build your product under an open source license, foster a community of users who then build their own businesses on top of that software, then yoink the license when your revenue is affected.

That’s called a bait-and-switch.

The list of formerly proprietary software that is now open source is longer, but that can be explained because oftentimes a project that is dying commercially becomes open source and community-managed.

And someone shared on Mastodon a post from a couple of years ago, drones run linux: the free software movement isn’t enough by Jes Olson, which mixes probably too many things –including ethics that have never been part of free software, and perhaps they should be–.

Not a verbatim quote, I formatted the text (go and read the original post):

Groups of capital formed, and two libertarians started the open source movement as a corporate-friendly free software alternative.

And they won.

And later on:

The accidental benefits of the free software movement: a global community working asynchronously, sharing code without pay. These important, critical benefits, which were responsible for the absolute dominance of things like gcc, the gnu coreutils, and Linux - have been hopelessly devoured. All they had to do was strip away the pesky moral movement that all of these efficiency gains carried with it - and voilà. Money.

The post is very negative and defeatist and, although I don’t agree with all the points, it resonates with me in a way I wasn’t expecting: from how hard it is not to use non-free software –although less so today than 20 years ago– to how the mainstream mood is aligned with the corporate view of open source –every open source project must be a product, produced industrially and exploitable by businesses–.

Yes, the free software movement was colonized and, in the end, we only have GitHub and permissive licences. And they told us we had won because “even Microsoft is doing open source”. But, did we?

I haven’t given up, yet. Like I said a year ago: write free software.

ubox MSX lib news

I finally moved ubox MSX lib from its GitLab home to my own infra; you can check ubox MSX lib via cgit.

I moved SpaceBeans back in June 2023 –as I wrote in self-hosting git repos for now–, and I kind of put off moving my MSX project because, having more users, I thought it would be harder. But I was wrong!

SpaceBeans has binary artifacts that I have to build and host somewhere, and back on GitLab that was mostly automated and managed by CI (Continuous Integration). And in my mind, ubox MSX lib was that, and more –because of the users–.

I was probably right about the users, but ubox MSX lib doesn’t have a binary that needs to be produced and distributed. Instead, the project’s output is just the source code, and in that case cgit has you covered with the refs view, where you can download a zip or a tar.gz of any of the tags. And that’s all!

So the project is out of GitLab now. I put an announcement on the GitLab repo and archived it, so it is preserved read-only. And, obviously, things are going to work slightly differently:

  • The project can be cloned via https only with: git clone https://git.usebox.net/ubox-msx-lib.
  • The web interface to the repo is provided via cgit: ubox MSX lib tree view.
  • You can subscribe to new releases following this feed in your feed reader.
  • Contributions are now via email, or alternatively you can make them available on a public repo so I can pull from it.

The project home is the same, ubox MSX lib in usebox, with news and the documentation for easy access.

The only thing currently missing is a shared public channel for communication and collaboration, which previously was GitLab’s issues and merge requests. I know things can be more difficult now, for several reasons, but mainly because the way I want to work doesn’t follow the mainstream forge model that you can see in GitHub, GitLab, Codeberg, and others –you can even self-host it with projects like Forgejo–.

Nothing is set in stone: if necessary, I could set up a mailing list somewhere. I’m not doing it for now because it doesn’t look to me like this project gets enough contributions for that resource to be used.

I’m planning a 1.2 release, adapting the project to work with the latest SDCC and its new calling convention, which will be a big change because people using the older SDCC will stay on the current 1.1.14 version.

Meanwhile, there is an active fork by robosoft, in case you want a preview of what 1.2 could look like (it also includes some MSX2-related changes that may interest you). Let’s keep those MSX games coming!

Funco

I like programming languages, and since I attended university many years ago I’ve been attracted to their design and implementation. For example, I talked here about Micro, which I think is my latest complete release on the topic.

But implementing programming languages is complicated, and it takes a long time. That’s OK; however, I think I always make the same mistakes.

To start with, I tend to implement a language that is too big. I try to do everything “the right way(tm)”, which takes even longer, and in the case of compilers, when I get to the parts where I don’t have experience and really should investigate more, I’m overwhelmed and out of energy.

Like, why did I start working on a compiler written in Haskell when I’m still learning Haskell? Not a great idea!

Last week I was busy, tired and frustrated, so one night I started a new project to see what I could do in a few hours, with the following conditions:

  • It doesn’t have to be nice, or well done. For example: error reporting? where we are going we don’t need error reporting!
  • Build an interpreter first, we’ll see if it is worth adding code generation (compiler) later.
  • Use tools that I know well already.
  • Keep everything small, so it is easy to change direction without a lot of refactoring.

And that’s how Funco came to be. It is very small, written in Python (3.10 or later, because I used match), it is only an interpreter, taking from Python everything I could, and the user interface is very rough (raising exceptions on errors!). But it works, and it was very satisfying to write, even if it is not very useful other than helping me solidify what I already knew.

It is inspired by lispy and Atto, and it looks like this:

# example of a factorial function
def fact(n acc)
    if =(n 1)
        acc
    else
        # tagging a tail call with "@" so it can be optimized
        @fact(-(n 1) *(acc n))
    end
end

# main is the entry point of a program
def main()
    display("Running fact(50)...")
    display(fact(50. 1))
end

# output:
# Running fact(50)...
# 3.0414093201713376e+64

It is functional, with no variables, and well… almost everything is a function –I excluded function definition and conditionals to make it easier to use–. It feels very Lisp, and the syntax is a bit Ruby-like (which is useful to get syntax highlighting).

You won’t find anything revolutionary in the code, but that wasn’t the point. I even implemented recursive tail call optimization, because otherwise it wouldn’t be useful at all, given that loops are implemented with recursion. For example:

# recursive for loop
def recfor(n fn)
    if >(n 0)
        fn(n)
        @recfor(-(n 1) fn)
    end
end

def main()
    recfor(10000 display)
end

Because there is no “return”, it is required to tag the tail calls with @ so the interpreter tries to optimise that call, avoiding hitting a stack limit (and improving performance, although speed was never in my plans).
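
This is essentially a trampoline. The details are in Funco’s code, but the trick can be sketched in Python like this –a simplification, not the actual implementation–:

# sketch of optimising tagged tail calls in a tree-walking interpreter:
# instead of evaluating a tagged call recursively, the evaluator returns
# a marker and the caller loops, so the host (Python) stack doesn't grow

class TailCall:
    """Marker produced when a call is tagged as a tail call (the @ tag)."""
    def __init__(self, fn, args):
        self.fn = fn
        self.args = args

def call(fn, args):
    while True:
        result = fn(*args)
        if isinstance(result, TailCall):
            # loop instead of recursing into the next call
            fn, args = result.fn, result.args
        else:
            return result

# a countdown that would hit Python's recursion limit without the loop
def countdown(n):
    return TailCall(countdown, (n - 1,)) if n > 0 else "done"

print(call(countdown, (100000,)))  # no RecursionError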

Perhaps I will put in some more time to add nicer error reporting, just in case I can use Funco as a base for future experiments. Now that I have something small and easy to modify, it shouldn’t be that costly to make small experiments with code generation!

Edit (2024-04-29): I have added more examples. It is a toy language, but there is a bit of a brain teaser in writing these that I like.

Backdoor in upstream xz/liblzma

Long story short: someone added a backdoor to upstream xz/liblzma that will compromise an SSH server under some conditions. And it made it into places, for example xz-utils in Debian unstable and testing (it has since been reverted).

There’s a great in-detail summary by Evan Boehs: Everything I Know About the XZ Backdoor. I recommend reading it, because there is a lot to learn from this situation.

For example: how many single maintainers are out there taking care of vital pieces of open source software, without help, and who may even be in an especially bad place personally?

Alan Cox was commenting on Mastodon:

At a certain level I am amused that probably millions of dollars of careful espionage work has been pissed away by a fraction of a second delay in an exploit.

Far more of a problem though are systems that dynamically assemble stuff from latest versions of things. We can be sure that some maintainers of those thousands of tiny pieces are careful, reliable maintainers funded by various governments who if the call comes will use that trust to flip them for bad causes.

The first part refers to how the backdoor was detected –it made sshd slower and someone was looking–, and the second is one that has been bothering me a lot since I started working on the JVM professionally –with Scala–. It is common practice to update and include lots and lots of dependencies directly from different upstream providers, without appropriate scrutiny. Does it compile? Do your tests pass? I don’t think people really read the changelogs, and we are live!

Which makes sense: it isn’t possible to review all your dependencies, because that is how industrial software is built today. And as we can see, the fact that a distribution like Debian is providing your packages is not bullet-proof –although the issue was in unstable/testing and it never got to stable–, but I generally trust the distribution maintainers to do the right thing. If anything, it is another layer of security.
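
On the “dynamically assemble stuff from latest versions” point, the cheapest mitigation is pinning dependencies to exact, reviewed artifacts, so nothing changes silently between builds. A minimal illustration in Python –the file name and the recorded digest are placeholders–:

# record the SHA-256 of the exact tarball you reviewed, refuse anything else
import hashlib

def sha256_of(path):
    """SHA-256 of a file, read in chunks so big tarballs are fine."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# hypothetical digest, recorded when the tarball was actually reviewed
REVIEWED_SHA256 = "0123456789abcdef..."  # placeholder, not a real digest

if sha256_of("dependency-1.2.3.tar.gz") != REVIEWED_SHA256:
    raise SystemExit("artifact doesn't match the reviewed version")

It doesn’t help if the reviewed artifact was already compromised –as was almost the case with xz–, but at least nobody can flip a dependency under you without a diff to look at.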

It is not a matter of if but when more things like this will happen, be it because maintainers are overstretched and make an honest mistake –see Log4Shell as an example–, or because there’s malicious intent.

Connecting blogs

I wrote about the IndieWeb about two years ago and, as part of that post, I mentioned webmentions as one of the protocols they promote.

I was thinking: how can I introduce the idea as simply as possible? We could compare it to other linkback mechanisms, but then I realised that perhaps not many people remember what trackbacks or pingbacks are.

Webmention is currently a W3C recommendation –not quite a standard yet– that enables cross-site conversations. It is a simple protocol that allows your site to tell a different website that you mentioned one of its posts in a comment, a like, or other types of responses, apparently.

In a way, it is similar to the experience we have on social media: you know when someone replies to you, or likes, or quotes. The difference is that we own our websites, so we move to a distributed and heterogeneous landscape, instead of a centralised and uniform one –all using the same social network–, in which we need to add the glue between us so we can share that information.
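
To give a flavour of how simple the protocol is, here is a hedged sketch of the sender side in Python, using the requests library –it only checks the HTTP Link header, while a complete implementation also looks for rel="webmention" in the HTML and resolves relative URLs–:

# sending a webmention: discover the target's endpoint, then POST a form
# with "source" (your post) and "target" (the post you mentioned)
import requests

def discover_endpoint(target):
    """Find the webmention endpoint advertised in the Link header."""
    response = requests.get(target, timeout=10)
    link = response.links.get("webmention")
    return link["url"] if link else None

def send_webmention(source, target):
    endpoint = discover_endpoint(target)
    if endpoint is None:
        return False  # the target doesn't accept webmentions
    response = requests.post(endpoint,
                             data={"source": source, "target": target})
    # the receiver may process the mention asynchronously, hence 202
    return response.status_code in (200, 201, 202)

# hypothetical URLs
send_webmention("https://example.com/blog/my-post/",
                "https://example.org/their-post/")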

I was thinking about implementing it here, even if it is not going to be easy because this blog is a static website, but then I remembered that my old blog in Spanish –which I closed after 18 years online– had comments support, but very little use. I got some comments over the years, but mostly in the early days –around 2003–.

Comments helped to improve the posts, but they also connected blogs, because it was common that the commenters had a blog as well, and as part of the comment they could provide a link to it. As you can imagine, the idea was eventually perverted –and somewhat ruined– because of spam. Those comments were a cheap way to get incoming links to a website, and that helped with SEO –search engine optimisation– and positioning in the search engines’ results. The spam is bad, of course, but I’m also unhappy with the incentive: those sites wanted traffic because they had ads, and that meant income.

On my blog I had to automatically disable comments on a post after a number of days, include simple captchas, and do fancy stuff with cookies and sessions. Very messy, for little benefit –the occasional comment–. And trackbacks lasted even less than comments: I think I disabled them shortly after finishing my implementation, because not many real people used them and it was only spam.

This blog doesn’t have comments, although you can always send me an email if you want to comment on anything, and some people have done that. Not often, but if I have received a handful of emails, that’s more comments than my old blog had in its last few years.

So I am undecided. Although I read blogs, I don’t seem to quote them often –perhaps I should; I should find my small blogosphere like the one I had in the early 2000s–. Would it be worth it to add support for webmentions here?

Sending them is easy: I can do it as part of the publishing step that renders the posts –in markdown format– using Hugo. Receiving webmentions will require a bit of extra work, and likely some non-intrusive JavaScript to show them in the post somehow. I may give it a go for fun; I can always remove it if it turns out it was a bad idea.

In any case, I have the feeling this is something that should be widely supported, if blogs stand a chance against centralised social media.

Edit (2024-04-24): Alex writes in Micro-blogs:

I tried web-mentions and they didn’t work [how] I wanted them to. Far too few other blogs supported them, and for this blog, I didn’t know what to do with them. I didn’t want to send myself email so I turned them into comments, but mentions aren’t as strong a signal as a comment. Mentions aren’t public but comments are. Mentions don’t need a strong connection to the main article but comments do. By turning mentions into comments, I had made a mistake and the result was frustrating. So I got rid of them.

We had a conversation over email, and it really helped me to think about this.

In those emails I went full circle: from “comments are not the answer” to “it was the comments all along!”. Although, perhaps, better than we had them back in the day. Probably using the Fediverse, so we have notifications and things like that, which was a problem never solved in the early 2000s –that’s my “back in the day” for this topic–.

I’m starting to think that what a blog needs to be connected with other blogs is comments. Alex quotes this take from James:

If your blog then has comments, and likes, and people on other platforms can comment, like, and share posts directly, is it really a blog any more? … Should blogs even have comments and notifications?

And I found this very interesting. The informal definition of a blog I was most familiar with included comments from readers that complemented the post, to the point that when some blogs –especially those with a considerable audience– started to remove them, it was controversial: is that still a blog?

Maybe the likes and sharing posts are a bit too much, especially because if you add follows, that is social media. But comments were part of a blog, and that is where my chat with Alex ended: of course, we just need comments to connect blogs!

Not sure if it would be used enough –unless I find my blogosphere–, or if the spam would make the functionality useless and too costly to operate, but in the meantime –as Alex mentions in his post– we will have to do the work and link to other people’s blogs.

Have a blogroll

About 3 years ago I was wondering here how we could get RSS back to what it was. It was a rhetorical question, and I didn’t provide an answer back then.

This is a topic going around the Fediverse, and the usual suggestions are:

  • Have a personal website (or start a blog).
  • If you use RSS on your website, make it prominent so people know that it is there.
  • Have a blogroll, or a way to recommend the blogs you read.

The last point is important and, without thinking about it, I was neglecting it in this blog.

What is a blogroll exactly? According to Wikipedia:

A list of other blogs that a blogger might recommend by providing links to them (usually in a sidebar list).

It also had a social component of “this blogger reads me and has my blog in their blogroll, so I will add theirs to mine”, which was a way of building your small blogosphere or community of blogs by sharing links. I made a few friends like that, and I still keep in touch with some of them after over 20 years. All because at some point we all had a blog.

Although I still haven’t found how to integrate it in the blog itself –I blame mobile phone support for this, but it is all my fault–, I now have my public blogroll. It is also generated automatically when I post and update the blog, by processing ~/.config/liferea/feedlist.opml –liferea is still my feed reader–, so it should be up to date and reflect exactly what I’m reading.
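
Processing the OPML is straightforward. A simplified sketch of what that step could look like –the real script and its output format may differ–:

# extract a blogroll from a feed reader's OPML export, standard library only
import xml.etree.ElementTree as ET
from pathlib import Path

def blogroll(opml_path):
    """Yield (title, website url) for every feed in the OPML file."""
    tree = ET.parse(opml_path)
    for outline in tree.iter("outline"):
        # feed entries have xmlUrl; folders grouping feeds don't
        if outline.get("xmlUrl"):
            title = outline.get("title") or outline.get("text") or "untitled"
            # prefer the site's URL over the feed's URL when available
            yield title, outline.get("htmlUrl") or outline.get("xmlUrl")

opml = Path.home() / ".config/liferea/feedlist.opml"
for title, url in sorted(blogroll(opml)):
    print(f"- [{title}]({url})")  # e.g. render as a markdown list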

My old blog, which ran from 2003 to 2021, had a blogroll up to late 2009 –according to the Wayback Machine–, with a section in the right column suggesting blogs to read. In 2010 I started learning Python and, as an exercise, I rewrote my old PHP blog engine –using Tornado and Redis, in pure NoSQL hype–. At that point I dropped the blogroll, but I can’t remember the reason.

I have a vague recollection of most blogs I used to read being inactive. Checking that blogroll from 2009, most of those blogs are either gone or stopped posting many years ago. Which is fair: I closed my old blog, after all. A good blogroll has to be alive and updated to be useful.

Anyway, the idea of this post is to say: hey, I have a blogroll! You should have one as well, like the other cool kids. I still have to decide how to integrate it with the blog, even if most people read these pages via a feed reader!