I’d be showing off some updates to {openpoweRlifting}, but:
- The epic csv downloads break the free tier of RStudio Cloud I’ve been playing with, by overloading the RAM.
- The home RStudio Server isn’t much better on RAM.
- Batch processing it into a SQLite database almost works (a sketch of that approach is below).
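The batch idea was roughly the following - a sketch assuming {readr}’s chunked reader and {RSQLite}, not the exact code I ran, and the paths are stand-ins:

library(DBI)
library(RSQLite)
library(readr)

con = dbConnect(SQLite(), "openpowerlifting.sqlite")
# Stream the giant csv in 100k-row chunks so it never has to sit in RAM whole.
read_csv_chunked(
  "openpowerlifting.csv", # stand-in path for the bulk csv download
  callback = SideEffectChunkCallback$new(
    function(chunk, pos) dbWriteTable(con, "entries", chunk, append = TRUE)
  ),
  chunk_size = 100000
)
dbDisconnect(con)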
It turns out that in the actual server it’s 3 tables: meets, lifters and entries. The server internally produces 3 csvs for these, and they might get exposed down the line. For today I built the server through Docker, pulled out the csvs, and {pin}ned them.
There are faster ways to get just the csvs, but I spent the time doing other stuff rather than working out how to make Rust happen.
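The write side of the pinning would have looked something like this (a sketch - the csv paths are hypothetical stand-ins for wherever docker cp dropped them):

library(pins)
library(readr)

board = board_local()
# Stand-in paths: wherever the csvs were copied out of the container.
pin_write(board, read_csv("data/entries.csv"), "opl-entries")
pin_write(board, read_csv("data/lifters.csv"), "opl-lifters")
pin_write(board, read_csv("data/meets.csv"), "opl-meets")

Then the analysis session just reads the pins back: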
library(pins)
library(tidyverse) # dplyr, tidyr, ggplot2 and stringr all get used below

board = board_local()
entries = pin_read(board, "opl-entries")
lifters = pin_read(board, "opl-lifters")
meets = pin_read(board, "opl-meets")
I’m James Riley #7. My numbers aren’t brilliant, but a couple of years ago I was practically unable to walk from sciatica, so there’s a lot of progress from that low point.
Stronger by Science have written about allometric scaling of strength by body weight, which at least theoretically extrapolates past human limits.
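Concretely, allometric scaling divides a lift by bodyweight to the 2/3 power (the hand-wavy justification being that strength tracks muscle cross-sectional area, which scales like mass to the 2/3). Re-centred on my comp-day bodyweight of 98.7 kg, the score I’m computing is:

$$ \text{ASS} = \text{lift} \times \left( \frac{98.7}{\text{bodyweight}} \right)^{2/3} $$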
first_entry = entries %>%
  filter(!is.na(BodyweightKg)) %>%
  filter(Event == "SBD") %>%
  left_join(
    select(meets, MeetID, Date), # There are less ugly ways to do this, but meh,
    by = "MeetID"
  ) %>%
  group_by(LifterID) %>%
  slice_min(n = 1, order_by = Date, with_ties = FALSE) %>%
  ungroup()
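A quick sanity check, not load-bearing, that the slice really did leave one row per lifter:

# Expect zero rows: each LifterID should appear exactly once in first_entry.
first_entry %>%
  count(LifterID) %>%
  filter(n > 1)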
first_entry %>%
  select(LifterID, BodyweightKg,
         Best3BenchKg, Best3DeadliftKg, Best3SquatKg) %>%
  pivot_longer(Best3BenchKg:Best3SquatKg) %>%
  filter(value > 0) %>%
  mutate(ass_factor = (98.7 / BodyweightKg)^(2/3)) %>% # parentheses matter: x^2/3 is (x^2)/3 in R
  mutate(ass_score = value * ass_factor) %>%
  group_by(name) %>%
  mutate(mean = mean(ass_score), sd = sd(ass_score)) %>% # Cheap way to remove the long, long tail
  filter(abs(ass_score - mean) <= 3 * sd) %>%
  ungroup() %>%
  ggplot(aes(x = ass_score)) + geom_density() +
  facet_wrap("name", scales = "free")
You may be asking if I’ve been waiting to use the Allometric Strength Score, abbreviated ‘ASS’, because I’m that immature. On my days off, yes, I am. I’m also making it an ASS factor relative to my weight on comp day, which makes it easier to relate these numbers to my unmodified numbers.
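For a sense of what the factor does, with made-up numbers:

# A hypothetical 600 kg total at 120 kg bodyweight rescales to ~527 kg at 98.7 kg.
600 * (98.7 / 120)^(2/3)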
And frankly, I’m somewhere around the 25% body fat level and would generally be in better health at 15% or lower. That puts me at least 1 weight category too high, but it was good to get the experience of a comp before being in competitive shape.
We’ve been looking at the 7-degrees-of-separation graph of powerlifting meets on the OpenPowerlifting chat server.
With the entries/meets/lifters disambiguation I realised that it’s pretty easy to make a graph where a lifter is adjacent to every meet they’ve taken part in. Ordinary graph distance then just needs halving - the path from me to anyone else goes me -> YNE spring open -> someone else in the same comp. The distance we’re interested in is 1; the distance the graph reports is 2.
Anyway, for the connected component I’m in, practically everyone is about 4 meets away. This makes sense - at a regional meet someone will qualify for nationals. At a national meet someone will qualify for continental, … worlds.
rm(first_entry)

lifters = select(lifters, LifterID, Name)

meets = meets %>%
  mutate(MeetName = str_c(MeetName, Date)) %>% # append the date so meet names are unique
  select(MeetID, MeetName)
library(tidygraph)

graph = entries %>%
  select(MeetID, LifterID) %>%
  left_join(lifters, by = "LifterID") %>%
  right_join(meets, by = "MeetID") %>%
  rename(from = Name, to = MeetName) %>% # lifter -> meet edges; the uniquified MeetName is the meet's node id
  as_tbl_graph(directed = FALSE)
graph %>%
  activate(nodes) %>%
  mutate(flag = (name == "James Riley #7")) %>%
  arrange(desc(flag)) %>%
  mutate(dist = node_distance_from(1)) %>% # I wish I had a better way to do "distance from named node"
  filter(is.finite(dist)) %>%
  as_tibble() %>%
  ggplot(aes(x = dist)) + geom_bar()
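There might be a better way after all: node_distance_from() accepts node indices, so which() on the name column skips the arrange trick, and keeping only even distances keeps just the lifter nodes (the graph is bipartite), which halve neatly into “meets away”. A sketch of the same plot in those terms:

graph %>%
  activate(nodes) %>%
  mutate(dist = node_distance_from(which(name == "James Riley #7"))) %>%
  filter(is.finite(dist), dist %% 2 == 0) %>% # even distance = lifter nodes
  as_tibble() %>%
  mutate(meets_away = dist / 2) %>% # me -> meet -> them counts as 1 meet away
  ggplot(aes(x = meets_away)) + geom_bar()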