I’ve been doing the Weekly Challenges. The latest involved counting divisors and another joke sort. (Note that this is open until 12 December 2021.)
Task 1: Divisor Last Digit
You are given positive integers, $m and $n.
Write a script to find total count of divisors of $m having last digit $n.
As with last week's challenge there are two parts to this: (1) find
the divisors and (2) filter them. Let's find them first. My code from
last time can provide a useful base, but instead of counting the
factors I'll put them into a list (and do the division that I could
skip over before). Perl:
sub factor {
  my $n=shift;
  if ($n==1) {
    return [1];
  }
  my @ff;
  my $s=int(sqrt($n));
  # A perfect square's root pairs with itself, so add it just once.
  if ($s*$s == $n) {
    push @ff,$s;
    $s--;
  }
  # Each small divisor found also gives us its cofactor.
  foreach my $pf (2..$s) {
    if ($n % $pf == 0) {
      unshift @ff,$pf;
      push @ff,$n/$pf;
    }
  }
  unshift @ff,1;
  push @ff,$n;
  return \@ff;
}
Since the order doesn't matter, all those unshift calls could be push
calls instead, and would probably be a bit faster if they were, but I
may need this again some day. (In Python and Rust there are explicit
double-ended list classes, collections.deque and VecDeque respectively,
which do this more efficiently than plain lists with a fixed starting
point. Or of course I could sort the list before returning it, but that
would be boring.)
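Purely as an illustration, here's a rough Python sketch of the same divisor-finder using collections.deque; this is my own translation rather than part of the challenge solution.
from collections import deque
from math import isqrt

def factor(n):
    # Collect the divisors of n: each small divisor is prepended
    # (cheap on a deque) and its cofactor appended.
    if n == 1:
        return [1]
    ff = deque()
    s = isqrt(n)
    if s * s == n:
        # A perfect square's root pairs with itself, so add it once.
        ff.append(s)
        s -= 1
    for pf in range(2, s + 1):
        if n % pf == 0:
            ff.appendleft(pf)      # O(1), unlike list.insert(0, ...)
            ff.append(n // pf)
    ff.appendleft(1)
    ff.append(n)
    return list(ff)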
The second part is simply a matter of filtering the factor list and
counting what's left:
sub dld {
  my ($m,$n)=@_;
  return scalar grep {$_ % 10 == $n} @{factor($m)};
}
Other languages come out much the same way, with whatever grep-like
functions they offer.
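For instance, a Python version (assuming a factor() like the sketch above) can count with a generator expression in place of grep:
def dld(m, n):
    # Count the divisors of m whose last decimal digit is n.
    return sum(1 for d in factor(m) if d % 10 == n)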
Task 2: Sleep Sort
Another joke sort similar to JortSort suggested by champion Adam Russell.
You are given a list of numbers.
Write a script to implement Sleep Sort. For more information, please checkout this post.
The basic idea is, for each integer in the array, create a worker
thread/process/etc. that sleeps for a time proportional to the number
and then outputs its value. The easy way to do this is to throw the
results at standard output, but I also wanted to collect them into an
array for testing.
I immediately found that this is thoroughly unreliable; the slower the
language or the more heavily loaded the machine, the longer the delays
between worker instantiations, and the more likely numbers are to
arrive out of order (especially if they are close in value and near
each other in the input list).
Also, of course, each language does it differently. Starting with
plain old Perl, and a module I use whenever I have
readily-parallelisable tasks to hand:
use Parallel::ForkManager;
use Time::HiRes qw(usleep);
sub sleepsort {
  my $n=shift;
  my @r;
We'll have a number of worker processes equal to the size of the list,
and exit quickly when they finish.
  my $pm=Parallel::ForkManager->new(scalar @{$n});
  $pm->set_waitpid_blocking_sleep(0);
When a worker exits, it'll give us a reference to a scalar (among
other data we don't care about here), which we'll stow into the output
list.
  $pm->run_on_finish(sub{
    if (defined $_[5]) {
      push @r,${$_[5]};
    }
  });
For each entry in the list, start its process. Sleep for a suitable
number of milliseconds, then return the value.
  foreach my $e (@{$n}) {
    $pm->start and next;
    usleep 1000*$e;
    $pm->finish(0,\$e);
  }
Wait for all the workers to do their thing, and return.
  $pm->wait_all_children;
  return \@r;
}
Raku of course has stuff built in for this. await and map seemed the
best way to kick things off, while Channel was clearly the way to get
the values out again.
sub sleepsort(@a) {
  my @r;
  my $channel=Channel.new;
  await (@a).map: -> $e {
    start {
      sleep $e/100;
      $channel.send($e);
    }
  }
  $channel.close;
  for $channel.list -> $r {
    push @r,$r;
  }
  return @r;
}
Python seems to have grown a lot of these capabilities relatively
recently. I tried using ProcessPoolExecutor and ThreadPoolExecutor, but
while they kicked off worker entities they wouldn't run in parallel,
and I couldn't see what I was doing wrong. So I ended up with
straightforward multiprocessing and a queue.
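(As an aside, a ThreadPoolExecutor version might look something like the sketch below if the pool is explicitly sized to the whole list, so that every sleeper can run at once; the default worker count can be smaller than the number of tasks. This is speculation on my part, not the approach used here.)
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def sleepsort_pool(n):
    r = []
    def sleeper(x):
        sleep(x / 500.0)
        r.append(x)        # appends happen in wake-up order
    # One worker per element, so no sleeper has to wait for a free thread.
    with ThreadPoolExecutor(max_workers=max(len(n), 1)) as pool:
        for x in n:
            pool.submit(sleeper, x)
    # Leaving the with-block waits for all the workers to finish.
    return r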
import multiprocessing as mp
from time import sleep
The worker function.
def sleeper(x,q):
    sleep(float(x)/500.0)
    q.put(x)

def sleepsort(n):
    q=mp.SimpleQueue()
    pq=[]
Kick off each task, and put its process-object in the list.
    for i in n:
        p=mp.Process(target=sleeper,args=(i,q))
        p.start()
        pq.append(p)
Wait for each process to finish.
    for p in pq:
        p.join()
And extract the queue contents.
    r=[]
    while not q.empty():
        r.append(q.get())
    return r
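A quick way to exercise this (the __main__ guard matters on platforms where multiprocessing spawns fresh interpreters rather than forking; the sample values are arbitrary):
if __name__ == "__main__":
    data = [6, 3, 9, 1, 4]
    result = sleepsort(data)
    print(result)
    # The ordering isn't guaranteed, so only check the contents.
    assert sorted(result) == sorted(data)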
Unusually, Ruby is highly traditional here, with its C-like mutexes.
def sleepsort(n)
  mutex=Mutex.new
  r=[]
  threads=n.collect do |x|
Each thread does its sleep, then gets a lock on the output list.
    Thread.new do
      sleep(0.001 * x)
      mutex.synchronize {r.push(x)}
    end
  end
Clean up, and the list is ready.
  threads.each { |t| t.join }
  return r
end
In Rust, everyone says you should use Rayon for anything serious, but
let's try this the conventional way.
use std::time;
use std::thread::{spawn,sleep};
use std::sync::mpsc::channel;
fn sleepsort(n: Vec<u32>) -> Vec<u32> {
    let (sender,receiver) = channel();
    let mut handles=vec![];
To avoid conflict over the list, we take a reference.
    for &i in &n {
Still have to clone the queue-sender object, though.
        let ss=sender.clone();
As in the Python, we add a thread-object to the list at the same time
as invoking it. It sleeps, then pushes its value onto the queue.
        handles.push(spawn(move || {
            sleep(time::Duration::from_millis(i as u64));
            ss.send(i).unwrap();
        }));
    }
Wait for each worker to finish.
    for handle in handles {
        handle.join().unwrap();
    }
Then pull the results off the receiver.
    let mut ri=receiver.iter();
    let mut r=vec![];
    for _i in n {
        r.push(ri.next().unwrap());
    }
    r
}
I believe this can't be done in PostScript, because there is no
concept of multiprocessing in the language. Shame.
Full code on
github.