I've been doing Everybody Codes,
a set of programming challenges in the style of Advent of
Code, now in its second year.
All of my code is on
Codeberg.
Part A: fairly straightforward, mostly "have you got the data
structures right".
Part B: this is the meat of it. Run through the procedures and get the
answer, in reasonably short order.
Part C: solved partly by inspection. My notes (and I assume
everyone's) are already sorted into ascending order. Therefore all I
need to do is the second phase procedure.
Then I consider that I know the ending values: by definition, they all
equal the mean of the initial values.
Each step brings the total deviation from this final value two steps
closer to zero (overall, one column goes up by one, one goes down by
one).
So I take the difference of each value higher than the mean from the
mean, add those all up, and that's my answer. (Each value lower than
the mean would also work.)
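That reasoning can be sketched in a few lines of Rust. The values here are illustrative, not the real puzzle input; by construction the final values all equal the mean exactly, so integer division is safe:

```rust
// Each step moves one column up by one and one down by one, so the total
// deviation from the final (mean) value shrinks by two per step; the step
// count is the sum of the deviations above the mean.
fn steps_to_equalise(values: &[i64]) -> i64 {
    let mean = values.iter().sum::<i64>() / values.len() as i64;
    values.iter().filter(|&&v| v > mean).map(|v| v - mean).sum()
}

fn main() {
    // illustrative: [1, 4, 1, 2] has mean 2, and equalises in 2 steps
    println!("{}", steps_to_equalise(&[1, 4, 1, 2]));
}
```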
Much more to my taste after a frustrating day 10.
This was a day for making guesses and turning out to be right.
Part A: it looks a little bit like a DFS and I used a stack structure,
but there are no blind alleys which makes it easier. A "global"
unburned list (i.e. outside the context of the stack) checks for
neighbour validity and, by being pruned when a node is burned,
prevents infinite loops.
Part B: first guess. I assume that changing the stack to a queue will
not let things get too far out of synchronisation, and this does
indeed give me the right answers.
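A minimal sketch of that burn traversal, assuming an adjacency-list graph (the graph and names here are illustrative, not the real input): a single frontier serves as either the stack or the queue, and the global unburned set both validates neighbours and prevents revisits.

```rust
use std::collections::{HashSet, VecDeque};

// Burn nodes from a starting point, returning the order in which they burn.
// use_queue = false gives the part A stack (DFS-ish); true gives the part B
// queue (BFS-ish). The unburned set is "global": outside the frontier.
fn burn(adj: &[Vec<usize>], start: usize, use_queue: bool) -> Vec<usize> {
    let mut unburned: HashSet<usize> = (0..adj.len()).collect();
    unburned.remove(&start);
    let mut frontier = VecDeque::from([start]);
    let mut order = Vec::new();
    while let Some(node) =
        if use_queue { frontier.pop_front() } else { frontier.pop_back() }
    {
        order.push(node);
        for &n in &adj[node] {
            // remove returns true only the first time, so no node re-enters
            if unburned.remove(&n) {
                frontier.push_back(n);
            }
        }
    }
    order
}

fn main() {
    // illustrative path graph 0 - 1 - 2
    let adj = vec![vec![1], vec![0, 2], vec![1]];
    println!("{:?}", burn(&adj, 0, true));
}
```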
Part C: second guess. I wrote a brute-force approach: turn the single
burn into a function that accepts an unburned list and starting points
and returns a shorter list, then wrap that in a loop for the three
search passes and one final test.
I ran this on my test machine, and when it didn't immediately return I
started thinking about optimisations (e.g. is it worth considering only
nodes with the highest remaining value?), but twenty seconds later I
had the right answer.
Premature optimisation is the root of all evil. But sometimes it's
still useful.
Part A: OK, clearly I lay out the numbers round a notionally circular
buffer and take the modulus.
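A minimal sketch of that circular lookup, with made-up values; using rem_euclid rather than % keeps even negative rotations in range:

```rust
// Treat the slice as a ring: the value at some rotation is just the
// rotation reduced modulo the length. rem_euclid always yields a
// non-negative index, unlike % on negative numbers.
fn at_rotation(values: &[u32], rotation: i64) -> u32 {
    let idx = rotation.rem_euclid(values.len() as i64) as usize;
    values[idx]
}

fn main() {
    // illustrative ring of three values
    let vals = [10, 20, 30];
    println!("{}", at_rotation(&vals, 4));
}
```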
Part B: hmm, yeah, I could decompose the ranges to actual lists of
numbers, but if I keep them in a range type I can easily test for "is
number within range". So I end up building a hash of index ranges
against output ranges. This is made slightly trickier by Rust's
core::ops::Range type being strictly low to high; I end up storing a
reversal flag with it, because the "left side" output ranges will be
in descending order relative to their index values.
Also, I index the "left side" ranges with negative numbers, then
update those indices once I know the size of the total.
So then finding the answer is a matter of finding the index position
range in which the rotation value lies, and constructing a value
within that range from one end or the other (depending on the value of
the reversal flag).
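A sketch of that lookup under my reading of the scheme: index ranges mapped to output ranges, each with a reversal flag. The table contents here are invented for illustration, not the real puzzle data.

```rust
use std::ops::Range;

// Find the index range containing the rotation, then construct the answer
// from one end of the matching output range or the other, depending on
// the reversal flag.
fn lookup(table: &[(Range<i64>, Range<i64>, bool)], rotation: i64) -> Option<i64> {
    for (idx, out, reversed) in table {
        if idx.contains(&rotation) {
            let offset = rotation - idx.start;
            return Some(if *reversed {
                out.end - 1 - offset // count down from the high end
            } else {
                out.start + offset // count up from the low end
            });
        }
    }
    None
}

fn main() {
    // illustrative table: indices 0..3 map forwards, 3..6 map reversed
    let table = vec![(0..3, 10..13, false), (3..6, 20..23, true)];
    println!("{:?}", lookup(&table, 4));
}
```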
Part C: I can't strictly use my part B solution unaltered because the
new rotation value won't fit in an i32, but all I had to change was
to use i64 instead.
This is the grade of problem I'm happiest with: not super fiddly or
needing abstruse knowledge (defined as "things I don't already know"),
but reasonably meaty even so.
Part A as usual gets the basic data structures set up. I've done this
sort of thing with hashes before, but in this case I went with a vec
of vecs in case speed mattered. Two functions: a newboard generates
the next iteration, and a scanboard counts the set values. (There's
also a showboard which isn't active in any of the production code,
but is useful in debugging; it just dumps the current board state to
stdout.)
newboard doesn't do anything terribly clever, just bounds-checking the
coordinates to make sure the neighbour values won't over- or
underflow.
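That bounds checking can be sketched like this (the function name and board contents are illustrative): do the neighbour arithmetic in signed integers, then range-check before converting back to usize indices, so nothing can under- or overflow.

```rust
// Count the set neighbours of (y, x) in a vec-of-vecs board. Candidate
// coordinates are computed as i64 and range-checked before indexing.
fn count_neighbours(board: &[Vec<bool>], y: usize, x: usize) -> usize {
    let (h, w) = (board.len() as i64, board[0].len() as i64);
    let mut count = 0;
    for dy in -1i64..=1 {
        for dx in -1i64..=1 {
            if (dy, dx) == (0, 0) {
                continue; // skip the cell itself
            }
            let (ny, nx) = (y as i64 + dy, x as i64 + dx);
            if (0..h).contains(&ny) && (0..w).contains(&nx)
                && board[ny as usize][nx as usize]
            {
                count += 1;
            }
        }
    }
    count
}

fn main() {
    // illustrative 2x2 board with two set cells on the diagonal
    let board = vec![vec![true, false], vec![false, true]];
    println!("{}", count_neighbours(&board, 0, 0));
}
```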
Part B is essentially part A again, but making sure the routine is
fast enough. Mine was.
Part C gets more interesting. My approach was to generate a longer
sequence of "interesting" frame numbers (using a custom matcher that
only looks at the central cells), and then run differences against
them; sure enough, the differences form a repeating pattern. (In the
test case, that's [892, 3203].) On the assumption (valid in this
case, confirmed by inspection) that the first repeated value in the
sequence of differences indicates the start of the next cycle, I end
up with a series of offsets from the previous frame number which will
generate the full set of interesting frames.
Do I need to generate the full pattern for each interesting frame to
get the cell counts? It turns out not: the sequence of cell counts in
the repeating frames also repeats. So then it's just a matter of
running repeatedly through the differences, finding the frame number
to see if we've finished, and adding the cell count for that class of
frame if we haven't.
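The difference-and-repeat idea can be sketched as follows, under the same assumption as above: the first repeat of the initial difference marks the start of the next cycle. The frame numbers here are invented so the differences come out as [892, 3203], matching the test-case pattern.

```rust
// Take successive differences of the "interesting" frame numbers, then
// truncate at the first reappearance of the initial difference, giving
// the repeating cycle of offsets.
fn cycle_of_differences(frames: &[u64]) -> Vec<u64> {
    let diffs: Vec<u64> = frames.windows(2).map(|w| w[1] - w[0]).collect();
    for len in 1..diffs.len() {
        if diffs[len] == diffs[0] {
            return diffs[..len].to_vec();
        }
    }
    diffs // no repeat found: return everything we have
}

fn main() {
    // illustrative frame numbers whose differences repeat as [892, 3203]
    let frames = [100, 992, 4195, 5087, 8290];
    println!("{:?}", cycle_of_differences(&frames));
}
```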
This was quite a fiddly one, but I got it eventually (with some breaks
for relaxation).
For part A I use some of my standard patterns: a Grid consisting of
a HashSet of Coord tuples, the grid defining a neighbours
function for the eventual call to dijkstra from the pathfinding
crate. (In this case the cost of a move is always 1, so I could do it
with a breadth-first search, but when I'm pathfinding I tend to throw
the problem at Dijkstra anyway.) The Dir enum is there to handle
rotation in a convenient (and easily debugged!) way.
Then the input gets parsed into a series of direction-distance pairs,
each of which generates one or more wall locations that go into the
Grid. (As sometimes before, the grid is a negative one, consisting of
the places to which one can't go.)
That's enough for parts A and B, but C needs something a bit chunkier
even in Rust. The trick I came up with is to compress the coordinates
by making an assumption: "interesting" locations (i.e. where we might
change direction) are always going to be adjacent to the end of a
wall. So the X and Y positions of those locations are the only ones we
care about. In effect I'm eliding all the columns which just contain
the middles of horizontal walls, and all the rows which just contain
the middles of vertical walls; example A-2 would effectively change from
#######
# #
# #
# #
# #
# #
#### #######
# #
# #
#### ######S #
# # #
# # #
#### ####### ####
# # #
# # #
# E #######
# #
# #
#######
to
#######
# #
# #
#### #######
# #
# #
#### ######S #
# # #
# # #
#### ####### ####
# # #
# # #
# E #######
# #
# #
#######
dropping out some of the top rows. Which isn't much of a saving on the
small grids, but on the bigger ones… crucially, the connectivity stays
the same.
Therefore when parsing the instructions I don't directly build the
grid; I build a sequence of (x, y) pairs for the ends of the walls,
and for each end generate the nine points including and immediately
surrounding it, then store both x and y coordinates in separate
BTreeSets (since what I want out of them is a sorted list which gives
each value just once).
Then I build the mapping tables: cx and cy are those sorted lists
(going from compressed to original value), while mx and my map
from original to compressed value. The grid now gets a list of the
compressed values, and the structure includes copies of cx and cy
so that we can recover the originals.
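A sketch of those mapping tables, with illustrative coordinates: the BTreeSet yields the sorted, deduplicated list (cx, going compressed to original), and a map built from it goes the other way (mx, original to compressed). The uncompressed cost of a move then falls out of cx.

```rust
use std::collections::{BTreeMap, BTreeSet};

// From the set of interesting coordinates, build both directions of the
// compression mapping for one axis (the same works for y with cy/my).
fn build_maps(interesting: &BTreeSet<isize>) -> (Vec<isize>, BTreeMap<isize, usize>) {
    // cx: compressed index -> original coordinate (sorted, unique)
    let cx: Vec<isize> = interesting.iter().copied().collect();
    // mx: original coordinate -> compressed index
    let mx: BTreeMap<isize, usize> =
        cx.iter().enumerate().map(|(i, &v)| (v, i)).collect();
    (cx, mx)
}

fn main() {
    // illustrative interesting x-coordinates
    let interesting: BTreeSet<isize> = [0, 1, 6, 7, 12].into_iter().collect();
    let (cx, mx) = build_maps(&interesting);
    // the genuine, uncompressed cost of a horizontal move between
    // compressed columns a and b is cx[b] - cx[a]
    let (a, b) = (mx[&1], mx[&6]);
    println!("cost = {}", cx[b] - cx[a]);
}
```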
All that's really left is to rewrite the neighbours function of my
Grid class so that it returns the genuine, uncompressed cost of a move.
And lots of debugging as I put x where there should be y, and vice
versa. So it goes. This is one of the rare cases in my life in which
Rust compiles but gives the wrong answer — both compressed and
uncompressed coordinates are isize values, so I can't get a type
error when I've written that wrongly.