Weeknotes: 1st September 2025

Last week

Claudius

It was the end of the summer Outreachy program, and Shreya Pawaskar finished her internship in style by adding image loading to Claudius. If we combine that with getting Claudius into opam, and adding the ability to record animated GIFs from Claudius, that's been quite a productive few months. To celebrate the image loading being added, I made this example based on the old Flying Windows screensaver:

For me, the main improvement to Claudius from this summer's involvement with the Outreachy process is not any one new feature, but the sum of all the effort that has gone into it. Whilst I've always wanted Claudius to be an accessible tool to help people get into programming, the docs I'd written were all aimed at people working on Claudius itself - an unintentional bias that said a lot about where Claudius was as a project. So one of my final contributions in this phase has been to redo the README and the examples documentation to focus on people who might use Claudius rather than those working to develop it.

Topogmesh

My other summer intern, UROP student Finley Stirk, has been continuing to develop Topogmesh, a Python script for turning geospatial data into models for 3D printing.

Finley's come up with quite a cool way of letting you colourise the models: using OpenStreetMap (OSM) data. In addition to the overall boundary shape, you can provide Topogmesh with a series of OSM tags, and it'll break the model up into chunks based on any matching OSM polygons in that area. Furthermore, you can provide a second raster layer for the elevation data, and have it use the primary elevation raster for the open areas and the secondary elevation raster for those within the OSM polygons. Why might you want to do this? Well, for the UK the Department for Environment, Food and Rural Affairs provides several elevation models: one gives the elevation of the ground with all features (buildings, trees, etc.) removed, and another the elevation including those features. Provide these two layers to Topogmesh, and only the features you're interested in show up on the model.
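The two-raster trick is easy to picture as a masked merge: outside any OSM polygon you sample the feature-free terrain model, and inside you sample the one that includes buildings and trees. A minimal numpy sketch of that idea (the names here are mine for illustration, not Topogmesh's actual code):

```python
import numpy as np

def combine_elevations(terrain, surface, feature_mask):
    """Merge two elevation rasters of the same shape.

    terrain      - elevation with features (buildings, trees) removed
    surface      - elevation including those features
    feature_mask - boolean array, True where a pixel falls inside
                   one of the rasterised OSM polygons
    """
    return np.where(feature_mask, surface, terrain)

# Toy 2x2 example: only the masked pixels pick up the surface height.
terrain = np.array([[10.0, 10.0], [10.0, 10.0]])
surface = np.array([[25.0, 25.0], [25.0, 25.0]])
mask = np.array([[True, False], [False, True]])
print(combine_elevations(terrain, surface, mask))
```

The upshot is that buildings and trees only rise out of the ground where an OSM polygon says something interesting is there; everywhere else the model stays flat terrain.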

For example, here is a chunk of central Liverpool, where I've asked it for buildings and parks:

A screen shot of a 3D printer tool, showing a rendering of a model of a section of a city with the ground level, showing faint impressions of roads, in one colour, and then all the buildings in another. The area is St George's Hall and Liverpool Central Library.

And this was the result of a 17-hour print job I did to make it real:

A photo of a large multicolour 3D print of an area of central Liverpool. The ground is white, buildings are brown and parks/grass areas are green.

Garish colours aside (I was just using what was to hand in the maker space), I'm delighted both with the result and with the direction Finley has taken this. Originally I'd asked to be able to provide arbitrary GeoJSON files to define shapes, and perhaps in future I'll add that feature, but for now this is a much more useful method for most people: the idea of the project was never to build a tool just for me, but one that would be more generally useful.

Yirgacheffe

I said in last week's notes that I'd do no work on Yirgacheffe, and for once I actually stuck to that, so I feel I get to list not doing any work on it as an achievement :) We did, however, finish the PROPL camera-ready and submit it (by "we" I mean mostly Anil).

AOH pipelines and Validation

Not a huge amount to write up here, but I started implementing the second part of the Dahal et al. AOH validation method. I implemented the model-checking part, with much help from Chess, late last year, but I've been dragging my heels on the point validation part. For point validation we need to query GBIF for species occurrence data and compare that with our AOHs, and I confess I put this off in part because learning yet another Python library or API felt a bit tedious on top of what is already not the most exciting part of the workflow (to be clear, I think validation is super important; it's just not the part of the work I find very motivating).

Anyway, to break the deadlock I decided to speed-run learning the GBIF APIs by expanding my experiments with Claude and asking it for some examples. I steadfastly refuse to use AI for direct coding, but I've been trying to understand why some of my friends and colleagues are so excited about these tools. I find Claude a mixed bag results-wise, and realise that, as much as I detest the term "prompt engineer", there is an art to interacting with these tools that I suspect I have not yet developed. But then that also tells me the Star Trek computer I wanted in my younger years is still not here.

Even in this instance: I asked it to make me an example of getting species data from GBIF with certain filters, and it did, but it ignored that GBIF has an official Python library. After a few iterations, where I was able to push it because I already knew much of what I wanted, it gave me code that was enough for me to understand how the APIs could be used. But the code it generated certainly wasn't anything I'd personally want to use: even if I trusted it to be perfectly correct (I don't), it was written so differently from how I structure my Python that it'd be a maintenance pain.
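For what it's worth, the underlying REST API turned out to be simple enough to drive directly. A minimal sketch of building an occurrence search query against GBIF's public v1 API (the parameter names are real ones from the occurrence search endpoint; the helper itself is just my illustration, not the code I ended up with):

```python
from urllib.parse import urlencode

GBIF_OCCURRENCE_SEARCH = "https://api.gbif.org/v1/occurrence/search"

def occurrence_query_url(taxon_key, limit=300, offset=0):
    # Restrict results to records that have coordinates and no known
    # geospatial issues - which is what you want if you're comparing
    # occurrence points against an AOH raster.
    params = {
        "taxonKey": taxon_key,
        "hasCoordinate": "true",
        "hasGeospatialIssue": "false",
        "limit": limit,
        "offset": offset,
    }
    return f"{GBIF_OCCURRENCE_SEARCH}?{urlencode(params)}"

print(occurrence_query_url(5219404))  # 5219404 is a stand-in taxon key
```

The API pages its results, so in practice you fetch repeatedly, bumping `offset` until the response reports `endOfRecords` as true.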

But, all this playing around with Claude was enough to distract me from the idea I was writing boring code and get the job started, so perhaps that was its real value :)

This week

  • Push more on the point validation implementation
  • Do some more fancy 3D prints
  • Help Ali with her LIFE case studies paper as needed

Tags: propl, claudius