I went back and did Advent of Code
2015. Spoilers.
Language choices:
- 2018: very beginner Rust, floundered a lot, gave up after five days.
- 2019: Perl, completed
- 2020: Rust, but switched to Perl after two days; completed
- 2021: Rust, completed
- 2022: PostScript, not quite completed
- 2023: Rust, completed
- 2024: Rust, completed
- 2015 after the fact: Rust, completed
This was actually more of a Rust learning experience than 2024 was, I
think largely because I wasn't feeling the pressure to get the puzzles
done each day, so I was more willing to try out a new approach.
I'll talk here about each day and my solutions. Should you want to
follow along, my code is at
Codeberg.
The very first AoC ever. And part 1 is basically an iterative
add/subtract with the final result output. (In fact I cheat by just
counting the number of occurrences of each symbol.)
For part 2 I can't do that of course, so it's a matter of counting one
by one until I hit the terminating condition.
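As a minimal sketch of both parts (assuming the whole input is in `input`; names here are illustrative, not the repo's):

```rust
// Part 1: the final floor is just (count of '(') minus (count of ')').
fn day1(input: &str) {
    let up = input.matches('(').count() as i64;
    let down = input.matches(')').count() as i64;
    println!("part 1: {}", up - down);

    // Part 2: walk the string until the floor first goes negative.
    let mut floor = 0i64;
    for (i, c) in input.trim().chars().enumerate() {
        floor += if c == '(' { 1 } else { -1 };
        if floor < 0 {
            println!("part 2: {}", i + 1); // positions are 1-based
            break;
        }
    }
}
```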
Pretty clear here: build a list of areas, sort to find the smallest.
(I know, I know, I should just use `min`, but when there are only three…) For part 2, I don't even need the sort.
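For reference, the per-present arithmetic goes something like this (a sketch, with the `LxWxH` parsing left out):

```rust
// Paper for one present: total surface area plus the smallest side as slack.
fn paper(l: u32, w: u32, h: u32) -> u32 {
    let sides = [l * w, w * h, h * l];
    let smallest = *sides.iter().min().unwrap(); // fine, fine, min after all
    2 * sides.iter().sum::<u32>() + smallest
}
```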
The obvious way to do part 1 is so obvious to me that I couldn't think
of another approach: track position, amend, insert entry into set for
new position.
And for part 2 I ended up "treating 2 as a special case of 1", rather
than generalising a round-robin.
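Part 1 in sketch form (a hypothetical standalone version, not the actual code):

```rust
use std::collections::HashSet;

// Track a position, amend it per move, insert into a set, count the set.
fn houses_visited(moves: &str) -> usize {
    let mut pos = (0i32, 0i32);
    let mut seen = HashSet::from([pos]);
    for c in moves.chars() {
        match c {
            '^' => pos.1 += 1,
            'v' => pos.1 -= 1,
            '<' => pos.0 -= 1,
            '>' => pos.0 += 1,
            _ => {}
        }
        seen.insert(pos);
    }
    seen.len()
}
```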
So far I haven't had what I think of as the AoC standard, a full set
of test data—just individual small samples.
Again, the solution is inherent in the problem: generate candidates,
MD5 them, check the answer.
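Something like this (a sketch; I'm assuming the hashing goes through the md5 crate's `compute`, and `first_with_prefix` is a made-up name):

```rust
// Generate candidates, hash them, stop at the first digest with the wanted prefix.
fn first_with_prefix(secret: &str, prefix: &str) -> u64 {
    (1u64..)
        .find(|n| {
            let digest = md5::compute(format!("{secret}{n}"));
            format!("{digest:x}").starts_with(prefix)
        })
        .unwrap()
}
// e.g. the puzzle's sample: first_with_prefix("abcdef", "00000") == 609043
```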
In my idiom, this is clearly a job for regular expressions. But it
turns out that the standard Rust regex crate doesn't support
backreferences. Fortunately I can use `fancy_regex`, which is a bit slower but has the shiny features needed here.
No detailed analysis needed, just is it a match or not.
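For example, part 2's two rules come out as (a sketch using `fancy_regex`; note that its `is_match` returns a `Result`):

```rust
use fancy_regex::Regex;

// A pair of letters appearing twice, and a letter repeating with one in between.
fn is_nice(s: &str) -> bool {
    let pair = Regex::new(r"(..).*\1").unwrap();
    let repeat = Regex::new(r"(.).\1").unwrap();
    pair.is_match(s).unwrap() && repeat.is_match(s).unwrap()
}
```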
This looks as though there ought to be a short cut, but in the end I
didn't get it to work: a set for part 1, a hash for part 2, and
manipulating the values as needed.
Rust things learned a bit better: (ab)use of the Entry structure, and `saturating_sub` to subtract without overflow.
I could have kept a rectangle list with overlaps, but with a mere
million-cell grid it seemed easier just to take advantage of having
the speed of a compiled language available and work it all out the
hard way.
(For part 2 I might have tried counting the affected areas, as the
puzzle hints, but the zero lower bound on brightness makes that
hard work.)
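The "hard way" for part 2 looks roughly like this (illustrative only; the instruction parsing lives elsewhere):

```rust
// A flat million-cell brightness grid; saturating_sub keeps the zero floor.
fn apply(grid: &mut [u32], op: &str, x0: usize, y0: usize, x1: usize, y1: usize) {
    for y in y0..=y1 {
        for x in x0..=x1 {
            let cell = &mut grid[y * 1000 + x];
            match op {
                "turn on" => *cell += 1,
                "turn off" => *cell = cell.saturating_sub(1),
                _ => *cell += 2, // "toggle"
            }
        }
    }
}
```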
This might well have been easier with regexps, but I didn't use
`winnow` at all in Advent of Code 2024, and I wanted to renew my acquaintance with it.
I'm sure there's a more efficient way to do this, but this seems to
work: build up individual parsers (using the enum trick so that
there's an operand type which can be a constant or an address). Each
type of line (assignment, single operand ("not"), dual operand (all
the other ops)) gets its own parser, and the final combinator tries
each of them in succession. (I've blogged this in more detail
elsewhere; "See also" at the bottom of this post.)
That gets me a list of Gate objects (including assignment as a type of
gate), and I run through them repeatedly, looking for ones with all
necessary inputs either constant or with known values, setting their
outputs and deleting them from the "pending" list for further
consideration. (Yes, one could draw up a dependency chart. In fact a
topological sort would be the ideal for this, but I didn't bother.)
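In outline the resolution loop looks like this (a sketch with made-up types; my real Gate is richer than this):

```rust
use std::collections::HashMap;

enum Input { Const(u16), Wire(String) }
struct Gate { inputs: Vec<Input>, output: String, op: fn(&[u16]) -> u16 }

// Repeatedly sweep the pending gates, resolving any whose inputs are all known.
fn resolve(mut pending: Vec<Gate>, known: &mut HashMap<String, u16>) {
    while !pending.is_empty() {
        pending.retain(|gate| {
            let vals: Option<Vec<u16>> = gate.inputs.iter()
                .map(|i| match i {
                    Input::Const(c) => Some(*c),
                    Input::Wire(w) => known.get(w).copied(),
                })
                .collect();
            match vals {
                Some(v) => {
                    known.insert(gate.output.clone(), (gate.op)(&v));
                    false // resolved: drop it from the pending list
                }
                None => true, // still waiting on an input
            }
        });
    }
}
```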
Left shift overflows, so I thunk up to a larger data type and back
down again. I feel there should be a non-overflowing shift, that just
lets the more significant bits drift off into the void. Maybe there
is.
For part 2 I suppose I could put the thing in a function and run it
twice, but it seemed less trouble just to code in a trap for the
assignment gate and hardcode the input value.
Bonus program, "todot", produces a directed graph of the network
suitable for feeding to GraphViz. I take advantage of the fact that
LSHIFT and RSHIFT in my input always have a constant second argument.
I note quite a lot of cases where two inputs are fed through both an OR and an AND-NOT sequence, and the results of those two are then ANDed together, which is clearly an XOR.
No excuses for new crates today, but I did get to play with
iterators, in part 1 at least. When I see the start of an escape sequence, I want to consume the rest of that sequence, then continue through the list of characters from wherever I got to; so instead of just doing a simple for-loop, I pull out the `chars()` iterator into its own variable and call `.next()` explicitly.
And I don't actually need to get the decoded values. All I care about
is the length difference.
So if I find a bare `"` that'll be removed, and score one point of difference. A `\` will be followed by another `\`, or a `"`, or an `x` plus two more characters (I don't care what they are, I can just assume the encoding is correct). In each case I advance the iterator over those characters and carry on from the next—which takes care of sequences like `\\"` that might otherwise cause confusion.
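A sketch of that loop (illustrative; the real code is in the repo):

```rust
// Overhead of one line: how many characters disappear on decoding.
fn overhead(line: &str) -> usize {
    let mut diff = 2; // the surrounding quotes vanish
    let mut chars = line[1..line.len() - 1].chars();
    while let Some(c) = chars.next() {
        if c == '\\' {
            match chars.next() {
                Some('x') => {
                    chars.next(); // two hex digits: four characters become one
                    chars.next();
                    diff += 3;
                }
                _ => diff += 1, // \\ or \" : two characters become one
            }
        }
    }
    diff
}
```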
Part 2 is easier: if I see a `\` or a `"` then add one to the difference to account for the escape, and add 2 overall to account for the wrapping quotes.
More `winnow` parsing today. Could easily have done it with regexps of
course, and working in Perl I would have, but this feels like a more
appropriate tool for the job.
Well, I've written Travelling Salesman solvers before, and I wanted to
use a library. But the one that looked best, `elkai-rs`, gave me a wrong answer (but the right answer for the test case). And most of
the other libraries wanted Cartesian coordinates for each node rather
than inter-node distances. But since I only had eight nodes in my
input, I just wrote a basic brute-force solver instead.
Which was handy for part 2, because the libraries tend not to be able
to solve for maximum distance. Probably I should find and learn a
linear programming library and how to make it do TS problems.
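The brute force is short enough to show in full (a sketch assuming a symmetric distance matrix `dist`, with indices standing in for the parsed names):

```rust
use itertools::Itertools;

// Try every ordering of the nodes and take the shortest total.
fn shortest_route(dist: &[Vec<u32>]) -> u32 {
    let n = dist.len();
    (0..n)
        .permutations(n)
        .map(|route| route.windows(2).map(|w| dist[w[0]][w[1]]).sum::<u32>())
        .min()
        .unwrap()
}
// Part 2 is the same thing with .max() in place of .min().
```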
A much lighter problem today, with nothing terribly sophisticated
needed. (Yes, all right, I'm using a language that can handle
multi-million-member lists without complication.)
I made an assumption (that there would never be ten or more of the
same digit in a row) but it turned out to be a valid one.
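One look-and-say step, as a sketch, with that assumption baked in:

```rust
// Expand one step; run lengths are pushed as a single digit (hence the
// "never ten or more in a row" assumption).
fn step(s: &str) -> String {
    let mut out = String::new();
    let mut chars = s.chars().peekable();
    while let Some(c) = chars.next() {
        let mut run = 1u32;
        while chars.peek() == Some(&c) {
            chars.next();
            run += 1;
        }
        out.push(char::from_digit(run, 10).unwrap()); // assumes run <= 9
        out.push(c);
    }
    out
}
```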
Another fairly straightforward one here. No parsing needed, just a
base-26 numbering system and a series of tests (each of them
short-circuiting once they get a definitive result).
The incrementer skips over forbidden characters, but they can appear
in the input (as with test 2 starting "ghi") so they get encoded. In
theory one could increment the first forbidden character, so that
"ghixxxx" would turn directly to "ghjaaaa", but I didn't need it to
get quick answers.
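The base-26 increment itself, as a sketch on raw ASCII bytes (leaving out the forbidden-letter skip):

```rust
// Increment the rightmost column, carrying leftwards past 'z'.
fn increment(password: &mut Vec<u8>) {
    for i in (0..password.len()).rev() {
        if password[i] == b'z' {
            password[i] = b'a'; // carry into the next column
        } else {
            password[i] += 1;
            return;
        }
    }
    password.insert(0, b'a'); // all 'z's: grow the password (never needed here)
}
```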
Interesting to see this and day 4 which seem pretty clearly to be "use
your chosen language's library to do the thing"; that's not something
I'm used to in the more modern AoCs.
Anyway, in the spirit of learning stuff, I had a poke at `serde_json`, and it turned out to be thoroughly easy to use. Part 1 is nearly
trivial, and part 2 not much harder, once one has worked out how the
JSON values will be presented.
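Both parts fit in one recursive function over `serde_json::Value` (a sketch; the flag and function name are mine):

```rust
use serde_json::Value;

// Sum every number; with skip_red set, ignore any object with "red" among its values.
fn total(v: &Value, skip_red: bool) -> i64 {
    match v {
        Value::Number(n) => n.as_i64().unwrap_or(0),
        Value::Array(items) => items.iter().map(|i| total(i, skip_red)).sum(),
        Value::Object(map) => {
            if skip_red && map.values().any(|x| x.as_str() == Some("red")) {
                0
            } else {
                map.values().map(|i| total(i, skip_red)).sum()
            }
        }
        _ => 0,
    }
}
```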
This is basically Travelling Salesman all over again, with a more
complicated parser, so I more or less re-used my Day 9 solution. Where
that had a distance from city A to city B, this is a pair of happiness
changes for A and B when seated next to each other—but the "distance"
is in effect just the sum of those two.
A little more tweaking reduces the permutations and sets up a round
trip (so nodes 0-3 lead to a permutation of `[1, 2, 3]`, with a 0 stuck on the start and end of each permutation).
Part 2 is a one-line change, since I'm already initialising all
distances to 0 anyway.
The now-standard `winnow` parsing, but this time I could get a struct
out of it (unused name, speed, flight time, rest time). And once I had
a struct I could stick some methods on it to get the duration of a
flight-rest cycle, and the distance covered in that cycle.
From there it seemed obvious to build another function for the
distance covered after a specified time; rather than go cycle by
cycle, I broke it down into full cycles and a final partial cycle. (I
was expecting one of the usual AoC tricks, of part 2 being a much
larger total.)
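That distance function comes out as something like this (the field names are my own labels):

```rust
struct Reindeer { speed: u32, fly: u32, rest: u32 }

impl Reindeer {
    // Distance after t seconds: whole fly/rest cycles plus a clipped final burst.
    fn distance_at(&self, t: u32) -> u32 {
        let cycle = self.fly + self.rest;
        let full = t / cycle;
        let partial = (t % cycle).min(self.fly);
        self.speed * (full * self.fly + partial)
    }
}
```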
But in fact part 2 requires me to accumulate a score, and while I
could probably do something to find cycle changeovers, it seemed
easier just to go second by second.
Unbounded knapsack problem. But again it's small enough for a naïve
brute force.
The two fiddly bits really are (a) working out the total value of each
recipe, which needs iteration over each ingredient parameter and each
ingredient, and (b) iterating over the possible values for ingredient
amounts such that the sum doesn't exceed 100. (Which I'm sure I'm
doing inefficiently, but at least I can take the last column as 100 -
the sum of the previous columns, which saves one level of looping
depth.)
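The amount iteration, assuming the four ingredients of the full input, looks roughly like this (`score` is a stand-in for the property arithmetic in (a)):

```rust
// Three nested loops; the fourth amount is forced to make the total 100.
fn best(score: impl Fn(&[i64; 4]) -> i64) -> i64 {
    let mut best = 0;
    for a in 0..=100i64 {
        for b in 0..=100 - a {
            for c in 0..=100 - a - b {
                let d = 100 - a - b - c;
                best = best.max(score(&[a, b, c, d]));
            }
        }
    }
    best
}
```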
Then part 2 is essentially another ingredient-property, but for
clarity I parse it separately.
Most of today's effort went into more fiddly `winnow` parsing,
scooping up a whole lot of attributes into a hashmap. (Still copying
strings, but not straight away, so I have some cargo-culted lifetimes
in here too. I've blogged this in more detail elsewhere; "See also" at
the bottom of this post.)
All right, I couldn't resist having a struct called Sue.
The actual algorithm is more or less "check each parameter, and if
it's present ensure it matches".
Part 2 just meant extending that slightly. I like the elegance of
Rust's `match` operator, though in part that's because there's nothing
similar in Perl.
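For instance, the part 2 comparison rules come out as (a sketch; the argument names are mine):

```rust
// key is the property name, have the value in the Sue record, want the ticker value.
fn property_matches(key: &str, have: u32, want: u32) -> bool {
    match key {
        "cats" | "trees" => have > want,
        "pomeranians" | "goldfish" => have < want,
        _ => have == want,
    }
}
```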
It's another knapsack problem today, which for me means a depth-first
search with a custom data structure. (No excuse to use `winnow` today since it's just one number per line.)
I set up an initial stack entry (next index is zero, remaining eggnog
is the 150 units specified by the problem). Then when I pop each stack
entry I scan from next index to the end (i.e. containers we haven't
already used), then see whether that container is an exact match for
the amount remaining (count up one combination) or can accept less
than the amount (set up a new stack entry based on that container).
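As a sketch of that (self-contained, so the 150 is hard-coded here):

```rust
// Explicit-stack DFS: each entry is (next container index, eggnog still to place).
fn count_combinations(containers: &[u32]) -> u32 {
    let mut combinations = 0;
    let mut stack = vec![(0usize, 150u32)];
    while let Some((next, remaining)) = stack.pop() {
        for i in next..containers.len() {
            if containers[i] == remaining {
                combinations += 1; // exact fit: one complete combination
            } else if containers[i] < remaining {
                stack.push((i + 1, remaining - containers[i])); // keep filling
            }
        }
    }
    combinations
}
```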
For part 2, the algorithm doesn't change much; the stack structure
gets a "containers used" field, and when we get a full set we check
that against the running minimum and score appropriately.
This is a pattern I've used a few times, when I want to list e.g. all
the minima of a function I'm evaluating repeatedly:
```rust
if new_value < minimum {
    minimum = new_value;
    list.clear();
}
if new_value == minimum {
    list.push(new_value);
}
```
OK, classic Game of Life here (even acknowledged in the part 2 text,
which mostly doesn't happen in more recent AoC puzzles). Probably
hashsets are not the most efficient way of doing it, nor is looking up
each neighbour multiple times. But at this scale I don't need to
optimise hard.
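One generation, in the unoptimised hashset style (part 1 only; the stuck corners of part 2 are left out):

```rust
use std::collections::HashSet;

// Count each cell's neighbours the slow way and apply the usual rules.
fn step(live: &HashSet<(i32, i32)>, size: i32) -> HashSet<(i32, i32)> {
    let mut next = HashSet::new();
    for y in 0..size {
        for x in 0..size {
            let mut n = 0;
            for dy in -1..=1 {
                for dx in -1..=1 {
                    if (dx, dy) != (0, 0) && live.contains(&(x + dx, y + dy)) {
                        n += 1;
                    }
                }
            }
            if n == 3 || (n == 2 && live.contains(&(x, y))) {
                next.insert((x, y));
            }
        }
    }
    next
}
```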
Fun with parsing, and then with autogenerating all possible products
from a given input.
Part 2 was completely different, and the first time I've got a
"classic AoC" feel from this first year. Dijkstra ran out of memory,
as did reverse Dijkstra (starting with the product and searching for a
route to "e"). So there was a fair bit of head-scratching here.
But, inspecting the input rules…
- e always becomes 1-2. (Hyphens for clarity.)
- X usually becomes 1-2.
- Sometimes X becomes 1-Rn-2-Ar.
- Or 1-Rn-2-Y-3-Ar.
- Or very occasionally 1-Rn-2-Y-3-Y-4-Ar.
And Rn, Y and Ar never get expanded themselves, or produced in any
other pattern.
Let's conceptualise `Rn` as `(`, `Y` as `,`, and `Ar` as `)`, and call everything that isn't one of those a "true element".
If I have two adjacent true elements, they must have been produced by
the "X → 1,2" rule. So getting a list of true elements down to a
single element will take (list length - 1) steps.
If I have 1-Rn-2-Ar, where 1 and 2 are true elements, there must be a
rule that lets me squash that down to a single true element. One step
removes three elements from the list rather than just one, so seeing
this complex scores two steps fewer than four individual true elements
would.
Similarly 4 (for the single-Y) and 6 (for the double-Y)—so each Y cuts two more steps off the total.
Or to put it another way, the number of steps to reduce an element
list to the initial "e" is
- the number of elements, minus 1,
- minus the number of `Rn`s and `Ar`s (or 2 × the number of `Rn`s, if they're evenly matched),
- minus 2 × the number of `Y`s.
And that's what my part 2 does: split the line with regexes and count
the types I need to count.
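In code that's barely anything (a sketch using plain `str::matches` in place of the regexp split described above):

```rust
// Count atoms (each starts with an upper-case letter) and the bracket-like
// atoms, then apply the formula above.
fn steps(molecule: &str) -> usize {
    let atoms = molecule.chars().filter(|c| c.is_ascii_uppercase()).count();
    let rn = molecule.matches("Rn").count();
    let ar = molecule.matches("Ar").count();
    let y = molecule.matches("Y").count();
    atoms - rn - ar - 2 * y - 1
}
```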
The first part of this is clearly a sum of divisors problem (sigma
function), which I can optimise a bit by using a variant on the sieve
of Eratosthenes. Then I search for a possible value, return it if
found, and otherwise expand the sieve and try again.
(The alternative would be to factor each number but I'm not convinced
that would be faster.)
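A sketch of one sieve pass (part 1 scoring: elf `e` delivers `10 * e` presents to every multiple of `e`):

```rust
// Accumulate presents per house up to `limit`, then look for the first house
// reaching the goal; None means the sieve needs expanding.
fn search(goal: u64, limit: usize) -> Option<usize> {
    let mut houses = vec![0u64; limit + 1];
    for e in 1..=limit {
        for h in (e..=limit).step_by(e) {
            houses[h] += 10 * e as u64;
        }
    }
    houses.iter().position(|&p| p >= goal)
}
```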
Part 2 is another matter. I have to accumulate each number's count,
but the first number to exceed the goal isn't necessarily the lowest
number that will. So this conceptually happens in two stages:
(1) keep ascending until I find a count high enough for the goal
value. Store that index as a working minimum.
(2) keep doing the same thing, taking any lower indices that satisfy
the goal, until the iterating value exceeds that first index. No value
of lesser index will increase further, so that lowest index is the
lowest value.
Back to something a bit easier today. I could have done a full-on
cartesian product, but it seemed easier just to lay out the loops
explicitly. Armour gets a single dummy entry to represent the
no-armour option, and rings get two to represent no ring on one or
both hands. Then it's just a matter of simulating the battle, testing
for a win, and logging the minimum cost.
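The battle test itself is tiny (a sketch; each hit does at least 1 damage and the player strikes first):

```rust
// Play the fight out turn by turn and report whether the player survives.
fn player_wins(mut p_hp: i32, p_dmg: i32, p_arm: i32,
               mut b_hp: i32, b_dmg: i32, b_arm: i32) -> bool {
    loop {
        b_hp -= (p_dmg - b_arm).max(1);
        if b_hp <= 0 { return true; }
        p_hp -= (b_dmg - p_arm).max(1);
        if p_hp <= 0 { return false; }
    }
}
```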
Given this structure, it's a trivial change to the code to solve part 2.
This was the occasion of my traditional AoC total loss of enthusiasm.
I wasn't especially excited by day 21, and here's more of the same.
So basically I run a depth-first search, with a list of spells and a
list of effects. (I initially missed the constraint that you can't
have more than one of the same effect running at once.) If I were
starting from scratch I'd just have one effect slot per
effect-generating spell, but it works as it is.
Ah, that's more like it—and I think this may be the first ever AoC
virtual machine (which would to my mind reach its acme in 2019 with
Intcode), unless you count the logic gates on day 7.
I probably made heavier weather than I needed to of the parsing, and
I'm not enforcing syntax, but I was having fun with it.
The actual dispatch table is very straightforward.
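Roughly like this (a sketch; the enum and register layout are made up, but the six instructions are the puzzle's):

```rust
enum Insn { Hlf(usize), Tpl(usize), Inc(usize), Jmp(i64), Jie(usize, i64), Jio(usize, i64) }

// Run until the program counter leaves the program.
fn run(program: &[Insn], regs: &mut [u64; 2]) {
    let mut pc = 0i64;
    while let Some(insn) = usize::try_from(pc).ok().and_then(|i| program.get(i)) {
        let mut jump = 1;
        match insn {
            Insn::Hlf(r) => regs[*r] /= 2,
            Insn::Tpl(r) => regs[*r] *= 3,
            Insn::Inc(r) => regs[*r] += 1,
            Insn::Jmp(o) => jump = *o,
            Insn::Jie(r, o) => if regs[*r] % 2 == 0 { jump = *o },
            Insn::Jio(r, o) => if regs[*r] == 1 { jump = *o }, // "jump if one", not "if odd"
        }
        pc += jump;
    }
}
```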
In my input, each package weight is unique, and I take advantage of
this.
This is basically a two stage decomposition:
(a) find a combination that generates the target weight
(b) see if the remaining objects can be evenly split (i.e. there is a
combination among them that generates the target weight).
If (b) passes, check the number of elements and the product of the
sequence to see if it's a new best combination.
I should really have done these with the same function, since it's
basically the same depth-first search. But I didn't.
So when part 2 made it a four-way rather than a three-way split, I
broke (b) out into a recursive function but left (a) the same. That
would be the obvious target for refactoring if I were to revisit this,
but the code as I have it works, so.
The fiddly bit is working out the serial position of the code that's
needed. My approach is to look at the first column, 1 2 4 7 11 …, which is clearly `n * (n - 1) / 2 + 1`.
If I want a result from column 2, I'll look down and left one row to
the column 1 result I already have, and then add one.
If I want a result from column `N`, I look down and left `(N - 1)` rows to the column 1 result I already have, then add `(N - 1)`.
Then it's just a matter of doing the div-and-mod operation several
million times, which, I have a computer.
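Putting the position formula and the code generator together (the constants come from the puzzle text):

```rust
// Serial position of (row, col), then repeated multiply-and-mod up to it.
fn code_at(row: u64, col: u64) -> u64 {
    let r = row + col - 1; // slide down-left (col - 1) rows to column 1
    let index = r * (r - 1) / 2 + 1 + (col - 1);
    let mut code: u64 = 20151125;
    for _ in 1..index {
        code = code * 252533 % 33554393;
    }
    code
}
```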
Conclusions
I didn't end up using flashy new techniques this year, just built with
the Rust I already had. Mostly the hard bits were writing a good
algorithm; days 15 and 21 involved lots of worrying away at special
cases.
Crates used:
- winnow (11 times)
- itertools (3 times)
- regex (twice)
- fancy-regex (once)
- md (once)
- serde_json (once)
No pathfinding! But there was the obligatory Game of Life, and a
surprising amount of Travelling Salesman and Knapsack.
Puzzles sorted by Part 2 difficulty: this percentage is the proportion
of people who have successfully solved part 1, but not part 2, which
is the only public information about completion rates.
- 23: 0%
- 11: 1%
- 13: 1%
- 15: 1%
- 18: 1%
- 24: 1%
- 9: 2%
- 17: 2%
- 21: 2%
- 4: 3%
- 7: 3%
- 10: 3%
- 16: 3%
- 22: 3%
- 6: 4%
- 8: 5%
- 14: 6%
- 20: 6%
- 3: 7%
- 2: 9%
- 12: 13%
- 5: 14%
- 1: 16%
- 19: 24%
19 and 20 were definitely the tough ones for me.
Thanks as always to Eric Wastl and the AoC team.