Nov. 4, 2024

Asterogue header graphic

tl;dr: you can play the new version in your browser here 👉️ https://asterogue.com

This is just a quick note to let you know I re-released my sci-fi roguelike Asterogue for the web, so you can now play it in your browser. It works on phones and desktop browsers. The first few levels are free to play.

Asterogue is a "juicy" graphical coffeebreak roguelike that is pretty much directly inspired by the original Rogue in terms of scope and features. You descend 17 levels into the heart of an asteroid to find The Orb and save the universe. There are a bunch of different monsters which get progressively harder as you descend. Instead of magic there is technology and you can pick up nanotech items and beakers of chemicals to buff your character (or hurt them if you get unlucky).

I've received a lot of feedback from players since the first release for Android and Windows, and this release includes some changes based on it. Here's a list of quality-of-life improvements and major features that were added:

  • 💾 Game progress is now auto-saved.
  • 🛠️ Fixed unwinnable level generation.
  • 🍫 Added hunger indicator.
  • 💯 Added a high scores table (tombstones).
  • 🔊 Volume control for music & SFX.
  • 📱 Mobile: fixed pixel UI issues.
  • 📱 Mobile: fixed layout on tiny screens.
  • 📱 Mobile: improved touch controls & UI scaling.
  • 🔙 Support for cross-platform back-button behaviour.
  • 🔃 Ability to exit to the menu and resume.
  • ❎ Dismiss messages by tapping.
  • ⚒️ Many many bug fixes.

Thank you to Andry Bethpalko who helped implement some of the new features. 🙏

The game was always built with web tech, but I only released it on Android and Windows at first because that seemed to be the right way to release a game. Well, I've realized maybe the right way is the wrong way. Rogule does well on the web, so why wouldn't my other roguelike game? Now I'm trying out a web release to see if I can make it easier for more people to play Asterogue. So far it's working well, and the game is getting more daily players than it ever did as a native app. I'm super grateful for that!

Asterogue initial release analytics

(It's a post for another time, but I am increasingly of the view that native apps are past their prime and web-based apps are the future. Yes, I know PWA enthusiasts have been saying this for a long time, but after seeing stats on how much people dislike installing native apps versus visiting web pages, I think we may actually already be in this world and nobody noticed.)

Another big change is the payment model. The original Asterogue was like most other games in that you simply buy it in the app store or on Itch and download it. This time I'm trying something new: instead of buying a downloadable binary, you can play the first few levels free in your browser, then pay once to unlock the full game online if you want to continue. I think this strikes a nice balance for players: you get to try it out and only pay if you're actually into the game once you've picked up the vibe. I haven't really seen this done before with web-based games, so it's all a bit of an experiment.

Thankfully it seems this model is working for people, as the game is already making sales. People seem to be OK with paying once to unlock the full game in the browser. Most of all though, I am just happy to have people playing and enjoying the game instead of it sitting forgotten and lost in the app store piles. As I said, I'm feeling very grateful my little game has new life. Thanks to everybody who has tried it! 🙏

Thanks for reading and I hope you enjoy playing it!

Sept. 5, 2024

makesprite.com is a simple open-source online app I made for generating sprites for games.

animation.gif

The first time you open the app it downloads a set of default prompts and sprite sheets. These are a useful starting point for generating your own. You can click the "Re-run prompt" button and the prompt that was used to generate that spritesheet will be loaded. You can also use the "Copy prompt" button if you want to use it elsewhere. Before you can run a prompt you'll need to go to the settings page and enter an OpenAI key since the app uses the OpenAI API to generate the images. Click "Send" to send the prompt to the API and after a while you will receive a spritesheet back.

This post is available as a video on YouTube:

d457a991b132a5c01f3fdcd299e9c219.png

Once you get the spritesheet it will be stored locally in your browser. You can then tweak the image by using the fill tool to remove any unwanted background colors. Once you find sprites you like you can use the "extract sprite" mode, which copies the sprite to your clipboard and also provides an interface to download it. You can also favourite, download, or revert spritesheets to their original form if you make a mistake when removing the background.
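
For the curious, here is a rough idea of how the "extract sprite" step can work. This is an illustrative sketch only, not the actual makesprite source: it copies one cell of a loaded spritesheet image onto an offscreen canvas and returns a PNG data URL that could then be downloaded or put on the clipboard.

(defn extract-sprite
  "Illustrative sketch (not the makesprite implementation): copy one cell
  of a spritesheet <img> element onto an offscreen canvas and return it
  as a PNG data URL."
  [img cell-x cell-y cell-w cell-h]
  (let [canvas (.createElement js/document "canvas")]
    (set! (.-width canvas) cell-w)
    (set! (.-height canvas) cell-h)
    ;; copy just the source rect on the sheet into the canvas
    (-> (.getContext canvas "2d")
        (.drawImage img
                    (* cell-x cell-w) (* cell-y cell-h) cell-w cell-h
                    0 0 cell-w cell-h))
    (.toDataURL canvas "image/png")))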

Makesprite uses OpenAI's DALL-E to generate the images and comes with a bunch of user-interface enhancements to make it easy to organize and extract game sprites. It is a 100% client-side browser app and nothing is stored on the server side. Because it relies on DALL-E for the image generation you'll need an OpenAI key to use it. Future versions may integrate other image generators.
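
To give a sense of what "100% client-side" means in practice, here is a minimal sketch of the kind of browser-side request involved. This is an assumption about the shape of the call rather than the actual makesprite code; the model and size parameters are illustrative.

(defn generate-image
  "Minimal sketch (not the makesprite source) of a client-side call to
  OpenAI's image generation endpoint. Model and size are illustrative."
  [api-key prompt callback]
  (-> (js/fetch "https://api.openai.com/v1/images/generations"
                (clj->js {:method "POST"
                          :headers {"Content-Type" "application/json"
                                    "Authorization" (str "Bearer " api-key)}
                          :body (.stringify js/JSON
                                            (clj->js {:model "dall-e-3"
                                                      :prompt prompt
                                                      :size "1024x1024"}))}))
      (.then (fn [res] (.json res)))
      ;; the response JSON contains a data array with the generated image URL
      (.then (fn [res]
               (callback (-> (js->clj res :keywordize-keys true)
                             :data first :url))))))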

Makesprite doesn't do anything you can't do directly with DALL-E using the same prompts. What it does is streamline the sprite-generation workflow specifically:

  • It provides a series of curated prompts that are the best I could come up with for generating sheets of sprites.
  • It provides an interface for removing the background and extracting individual sprite elements. In my experience it is time-consuming and fiddly to do this with e.g. Gimp or Photoshop.
  • It keeps your generated sprite sheets together and visible in one place rather than spread throughout your other sessions in the ChatGPT interface.
  • You can create template prompts with variables that can be modified between runs (there's a small sketch of the idea after this list).
  • You can re-use prompts easily and copy them to your clipboard.
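
As a hypothetical illustration of the template idea (this is not the actual makesprite implementation, just one simple way such templating can work):

(require '[clojure.string :as string])

(defn fill-template
  "Hypothetical illustration of template-variable substitution;
  not the actual makesprite code."
  [template vars]
  (reduce (fn [s [k v]]
            (string/replace s (str "{" (name k) "}") v))
          template
          vars))

(fill-template
  "A sprite sheet of {subject} in a {style} style on a plain background"
  {:subject "cute slime monsters"
   :style "hand-painted fantasy"})
;; => "A sprite sheet of cute slime monsters in a hand-painted fantasy style on a plain background"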

Limitations

The sprites generated by DALL-E are limited. They aren't animated. They are pixel-based, not vector-based, and aren't always cleanly separated from the background. They suffer the usual gen-AI issues with missing/extra limbs, elements that make no sense, bad reflections, weird shapes and shadows, etc. It is difficult to get any consistency between different generated sprite sheets. DALL-E is really bad at creating pixel art, so I've focused on non-pixel-art prompts. Sometimes DALL-E throws things in that have nothing to do with the prompt. Finally, just like Copilot and ChatGPT, DALL-E is probably trained on non-public-domain and sometimes copyrighted data, and that might pose ethical or legal problems.

All that said, I think these sprites will be good enough for some use-cases.

If you're making a simple game with limited animations, if consistency doesn't matter that much, if you only need a small number of specific sprites, or if you're just looking for simple flat token images, then makesprite might fit the bill. If you just need placeholder graphics to give your demo or gamejam game the right look and feel, it might be good enough. I think makesprite could be great for gamejams where you just need something fast that fits the theme.

If you generate something that is good enough for placeholder graphics you could also take it to a real game artist once your game is ready, and pay them to create real game art using the makesprite output as a reference or inspiration. I'd be really happy if more work for real artists was an outcome from tools like this.

About the tech

I had heaps of fun building this app with ClojureScript. I've built a lot of small web apps with Reagent and ClojureScript now and the workflow has only got better. At around 1000 lines of code and 21 days of part-time dev, this is probably one of the fastest apps I've built. I chose to deploy it to GitHub Pages instead of going my usual route of deploying with Piku. As a predominantly front-end app, hosting it on GitHub should make it faster thanks to their CDN.

This was fun to make. I hope you find it useful. Enjoy!

Aug. 13, 2024

Claude AI has a mode where it can generate something called "artifacts". One of the things you can do with this is generate simple single page web applications. It generates the web app and then mounts it in an iframe so you can quickly test it and give feedback. This gives you a fast iterative process using the AI to refine the web app incrementally.

This is pretty cool, but I would much prefer a web app written in ClojureScript with Reagent forms instead of JavaScript or React. ClojureScript is more concise, and I find it leads to fewer bugs and is faster to work with.

Note: this post is available as a YouTube video.

Claude AI app artifact generation

There is a version of ClojureScript that runs entirely in the browser called Scittle, created by Michiel Borkent, who also created the babashka suite of Clojure utilities. Unfortunately, Claude can only use libraries that are available on cdnjs when generating web apps, and Scittle was not available. So I raised a PR with cdnjs and it's now available to use in Claude-generated artifacts.

What all this means is that you can now prompt Claude to build small ClojureScript apps and it will produce clean ClojureScript code. I've set up a basic repository with a ClojureScript + Reagent prompt and an example HTML file you can give to Claude to get it started.

The best way to use this repository is to create a new project in Claude, and copy the prompt and the example in as the default project prompt. A project is Claude's way of letting you use the same prompt multiple times.
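
If you're wondering what such an example file looks like, it's basically a single HTML page that pulls in React, Reagent support, and Scittle from a CDN and then evaluates inline ClojureScript. The sketch below is illustrative only: the exact script URLs and version numbers are assumptions and may differ from the ones in the repo.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <!-- illustrative CDN paths & versions; check the repo for the real ones -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/18.2.0/umd/react.production.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/18.2.0/umd/react-dom.production.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/scittle/0.6.15/scittle.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/scittle/0.6.15/scittle.reagent.min.js"></script>
  </head>
  <body>
    <div id="app"></div>
    <!-- Scittle evaluates script tags of this type as ClojureScript -->
    <script type="application/x-scittle">
      (require '[reagent.core :as r]
               '[reagent.dom :as rdom])
      (rdom/render [:h1 "Hello from Scittle"]
                   (.getElementById js/document "app"))
    </script>
  </body>
</html>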

As a simple example, you can generate a basic compound interest calculator using the prompt from the repo and adding the following text to the end:

Please generate a simple compound interest calculator.

You can test the resulting app and see the code here. The code it produces is a single page HTML app with inline ClojureScript and I've shared the cljs part here:

(require
  '[reagent.core :as r]
  '[reagent.dom :as rdom])

(def state (r/atom {:principal 1000
                    :rate 5
                    :years 10}))

(defn calculate-compound-interest [principal rate years]
  (for [year (range 1 (inc years))]
    {:year year
     :balance (* principal (Math/pow (+ 1 (/ rate 100)) year))}))

(defn input-field [label key type]
  [:div
   [:label label]
   [:input {:type type
            :value (get @state key)
            :on-change #(swap! state assoc key (js/parseFloat (.. % -target -value)))}]])

(defn result-table []
  (let [{:keys [principal rate years]} @state
        results (calculate-compound-interest principal rate years)]
    [:table
     [:thead
      [:tr
       [:th "Year"]
       [:th "Balance"]]]
     [:tbody
      (for [{:keys [year balance]} results]
        ^{:key year}
        [:tr
         [:td year]
         [:td (str "$" (.toFixed balance 2))]])]]))

(defn compound-interest-calculator []
  [:div
   [:h1 "Compound Interest Calculator"]
   [input-field "Initial Principal ($): " :principal "number"]
   [input-field "Annual Interest Rate (%): " :rate "number"]
   [input-field "Investment Duration (years): " :years "number"]
   [result-table]])

(rdom/render [compound-interest-calculator] (.getElementById js/document "app"))

I hope this will be useful to people who want to build with LLMs and ClojureScript. Enjoy!

June 6, 2024

Last week I installed Xubuntu 22.04 on a Dell XPS 13 (9305). It was flawless. Everything just works out of the box. 🤯

This is completely amazing for me because I have been installing GNU/Linux on computers since some time in the 1990s when Pentium was a thing. I remember staying up all night tweaking XFree86 modeline configs just to try and get a terminal window to appear without looking squashed or destroyed by scanlines, worried about the warnings that I could permanently damage my parents' monitor. I have spent a ridiculous amount of time, often at 2am, tearing my hair out in a tumultuous relationship with Linux over the years. Let's not even get into WiFi drivers or printers. 😅 So when I get to run my favourite operating system without the tradeoff of having to put up with the difficult parts, it feels like magic.

Last week was the first time I can remember that I have installed Linux and not had to edit a config or run a script to get something working. Everything just works, first time, no issues. It's wonderful, and I feel incredibly grateful for the millions of volunteer person-hours put into making this operating system and software stack. ❤️ You people are amazing!

I'm going to celebrate by showing you a screencast of how fast various applications start up under Xfce on Xubuntu on this three-year-old, second-hand Dell XPS 13. I see posts these days about the dire situation on Mac and Windows, where startup times have got really bad and people are lamenting the good old days when apps were snappy. Well, guess what? You can have those good old days. You just have to run lean software (yes, with all the tradeoffs and caveats that entails, but you might be surprised by what you can run under Linux these days).

In this screencast I'm launching a bunch of different applications. The keys I am pressing show up at the bottom of the screen. First I use a key combination (alt-tilde) to launch the Xfce terminal, but it's so fast that you can't actually see the gap between when I press the keys and when the terminal shows up, so they appear to happen simultaneously.

Then I use Application Finder to launch various applications by name. The time between hitting Enter and the window showing up is what to look out for. For the record, the slowest application on my system is the Thunderbird email client, which takes 3 seconds to launch. Enjoy the show!

May 5, 2024

tl;dr: check out LuaVST on ChatGPT if you want to generate some VST plugins.

I've been doing weekly beats this year and it has been a lot of eustress fun (my best song so far is "smectite canyon gambit"). I found a nice positive feedback loop between composing electronic music and writing software for dopeloop.ai. Composing helps me figure out which features are important on the software side.

screenshot.png

In recent weeks I've been tinkering with Protoplug. It's a piece of open-source software that allows you to write VST plugins in Lua. It turns out Lua is efficient enough to do DSP processing on modern CPUs. You can write the code interactively in the embedded editor, which makes for a smooth iterative workflow. I am using Protoplug with OpenMPT as a host running on Wine and really enjoying it.

After tinkering for a bit I had the idea to take the Protoplug API and some examples and feed them to ChatGPT to see if it could generate plugins from a written description. If you want to try it yourself you can go here: LuaVST GPT. Note: I am using GPT-4 and haven't tested this with GPT-3.5. You will need to install the Protoplug VST into your host and then copy the code from the chat session into the VST's built-in editor.

Results

So how good is it? I don't like AI hype. I'm going to try to be objective and honest.

  1. Good: it can generate plugin boilerplate really well. If you just want to get something up and running that is a bit more tailored than copy-pasting one of the examples, then it works well. You can say something like "create me a plugin that pitches all incoming MIDI notes down by one octave" or "create me a plugin that generates a pure sine tone at 440Hz" and it will do a reasonable job that is usually bug-free.
  2. Okay: it can modify your existing code. If you can't be bothered looking through the API for how to implement something, you might get a pretty good first pass out of it by pasting your code in and asking for a change. For more complex changes to the code it is probably going to create a lot of bugs. One thing that would significantly improve this would be automatically feeding any errors back to the GPT. At the moment you have to copy-paste errors manually, and often you will figure out what is wrong faster than the AI will.
  3. Bad: ask it to do something complex like "simulate a full TB-303 with incoming MIDI and take into account the non-linearities as documented by Devilfish creator Robin Whittle in 1999" and it is going to do a very poor job. Even the first part of that ("simulate a TB-303") is too much to ask of it. I tried a few different prompt variants and it couldn't get there. I think this is where the AI hype falls down. At this point in time, only a human practitioner with years of experience, a nuanced understanding, and the ability to iteratively listen to the output as they code is able to work their way towards a really good, bug-free implementation of a complex plugin.

An example of a session that went well was when I used an online graphing calculator to come up with a distortion algorithm, then gave the equation to the GPT and asked it to write a plugin. I tweaked the code a little bit, but on the whole it was a good implementation and did what I wanted. A distortion algorithm is one of the simpler types of plugins to code from scratch, of course.

In the end, building this has saved me some time and typing. I am able to work with the output from the GPT and get fairly useful advice from it without having to keep the whole API in my own head. This feels like a microcosm of the larger usefulness of modern LLMs: productivity-boosting but not job-destroying.