CyberPatriot 8 Round 1 Score Analysis

For the past several years[1], I’ve analyzed CyberPatriot[2] competition rounds using a Pryaxis product called The Magi.

This report covers the data The Magi collected over the entire CyberPatriot 8 Round 1 competition window.

After the competition window closes, CyberPatriot modifies scores to account for penalties, alternative score dates, and extenuating circumstances that warrant score modifications. Because this round had no Cisco/Networking scores, the scores compared here freely reveal how the CyberPatriot Operations Center alters image scores after the competition closes.

Removed Scores

The official score PDFs released by CPOC[3] tell an interesting story about five teams.

One team, 08-2587, was listed as “Score Under Review,” and no score was provided. The team was flagged for having multiple copies of the same image open, but had a fairly “average” score of 155 points[4]. It was the only team listed as “under review” rather than withheld.

Two teams from the open division were listed as “Score Withheld,” and again, no score was provided. These teams were 08-2868 and 08-2869. Both teams had a score of 154 points at shutdown, started within six minutes of each other, and differ by only a single digit in their team identifiers.

Two middle school teams, 08-1043 and 08-1046, started within three minutes of each other and were withheld under the same conditions as the open division teams – no warnings issued. Both teams had 94 points and were subsequently removed.

The first team’s removal is notable because other teams with concurrent-instance flags were penalized in points, not removed entirely. That pattern is examined on its own later.

All other excluded teams had no warnings. Given the startup times and the adjacent team IDs, the only conclusion that makes sense is that they were sharing information between teams. Each pair was likely from the same school, which would make this type of cheating trivial. Unfortunately, without the graph data to confirm this hypothesis, the answers will likely remain with those coaches and CPOC.
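
For the curious, here’s a minimal sketch of how pairs like these could be flagged automatically from scraped data. This is not how The Magi works internally; the field names, timestamps, and scores below are hypothetical placeholders.

```python
from datetime import datetime, timedelta
from itertools import combinations

# Hypothetical scraped records; field names and values are placeholders,
# not The Magi's actual schema.
teams = [
    {"id": "08-2868", "started": datetime(2015, 11, 13, 9, 0), "score": 154},
    {"id": "08-2869", "started": datetime(2015, 11, 13, 9, 6), "score": 154},
    {"id": "08-2587", "started": datetime(2015, 11, 13, 10, 30), "score": 155},
]

def suspicious(a, b, window=timedelta(minutes=10)):
    """Flag pairs with adjacent team numbers, near-simultaneous starts,
    and identical shutdown scores."""
    adjacent = abs(int(a["id"].split("-")[1]) - int(b["id"].split("-")[1])) == 1
    close = abs(a["started"] - b["started"]) <= window
    return adjacent and close and a["score"] == b["score"]

flagged = [(a["id"], b["id"]) for a, b in combinations(teams, 2) if suspicious(a, b)]
print(flagged)  # [('08-2868', '08-2869')]
```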

Adjustments

Score adjustments made by CPOC during most competition rounds are normally obscured by the addition of Cisco & networking challenges that lie outside the scope of the online competition system. Because this round lacked these additional components, these changes are unmasked.
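
The comparison itself is simple: diff what the scoreboard showed at shutdown against the official results. A minimal sketch, assuming two hypothetical CSVs with “team” and “score” columns – one from The Magi’s end-of-window scrape, one transcribed from CPOC’s official PDFs:

```python
import csv

def load_scores(path):
    """Map team ID -> score from a CSV with 'team' and 'score' columns."""
    with open(path, newline="") as f:
        return {row["team"]: int(row["score"]) for row in csv.DictReader(f)}

scraped = load_scores("scraped_scores.csv")    # hypothetical filename
official = load_scores("official_scores.csv")  # hypothetical filename

# Positive delta: CPOC added points after the window; negative: a penalty.
adjustments = {
    team: official[team] - scraped[team]
    for team in scraped.keys() & official.keys()
    if official[team] != scraped[team]
}

# Teams on the shutdown scoreboard but absent from the official PDFs.
removed = scraped.keys() - official.keys()
```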

Lower Adjustments

In the middle school division, four teams lost an average of 25 points each.

In the high school competition, score reductions were more numerous.

In the All Service division, 14 teams lost an average of 25 points. Of those, three teams carried the multiple-instances flag and five carried time flags.

In the open division, 40 teams lost an average of 20 points. Eleven teams were flagged for multiple instances, and 12 for time overages.
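
For reference, here’s a sketch of how tallies like these fall out of the adjustment data. The record layout is an assumption for illustration, not The Magi’s actual schema:

```python
from collections import defaultdict

# Hypothetical records; the real numbers live in The Magi's repo as a CSV.
records = [
    {"division": "Open", "delta": -20, "flags": {"time"}},
    {"division": "Open", "delta": -25, "flags": {"multiple_instances"}},
    {"division": "All Service", "delta": -25, "flags": set()},
]

losses = defaultdict(list)
for rec in records:
    if rec["delta"] < 0:
        losses[rec["division"]].append(rec)

for division, recs in sorted(losses.items()):
    avg = sum(-r["delta"] for r in recs) / len(recs)
    multi = sum("multiple_instances" in r["flags"] for r in recs)
    timed = sum("time" in r["flags"] for r in recs)
    print(f"{division}: {len(recs)} teams, avg {avg:.0f} points lost, "
          f"{multi} multi-instance flags, {timed} time flags")
```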

Higher Adjustments

I originally thought this section wouldn’t exist, or that, if it did, it would cover only a small number of score corrections.

Instead, many high school teams gained points. I initially thought this was a fluke in the data, but after looking at teams with apparently unrelated traits, a majority of them received adjustments of exactly eight points, irrespective of time or multiple-instance penalties. The middle school division shows this wasn’t a fluke in the scraper, either: only five teams in that division received positive point adjustments.

Because the middle and high school competitions use different images, it would be no surprise if a broken vulnerability in an image had cost teams points, and the adjustment simply refunded them. That explanation doesn’t hold, though: 105 teams were recorded with a perfect Round 1 score, which a broken vulnerability would have made impossible.

The only option that remains would be a curve, but that raises the question of why it would apply only to the high school teams and not the middle school teams. Moreover, it wasn’t applied uniformly: teams with 200 points didn’t get bumped to 208.
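
One quick way to see why a curve doesn’t fit: tally teams by their exact adjustment. A curve would smear deltas across many values, while a scoring bug produces a spike at a single value. A sketch reusing the adjustments mapping from earlier, with illustrative values:

```python
from collections import Counter

# Illustrative deltas; in the real data a large cluster sits at exactly +8.
adjustments = {"08-0001": 8, "08-0002": 8, "08-0003": -20, "08-0004": 8}

for delta, count in Counter(adjustments.values()).most_common():
    print(f"{delta:+d} points: {count} teams")  # e.g. "+8 points: 3 teams"
```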

“For the ‘curve’ it was actually an error in the score engine. It didn’t log one of the vulnerabilities properly for those teams, but did for others. It was confirmed by CyberPatriot via email when they sent out the results.” – TibitXimer, via Reddit[7]

Data

The data for these calculations was pulled from The Magi’s database and is available in its source repo as a human-readable document and a CSV.

Closing Thoughts

In the past, it was difficult to see how time penalties affected teams in CyberPatriot. Teams very clearly lose points, even for minor overages, which makes stopping on time critically important in future rounds. The mysterious eight-point addition is certainly interesting, but it probably masks scores that would otherwise have lost points. As always, running multiple images seems to be the riskiest move of all – it caused the complete removal of a score as “under review,” the only score of its kind, along with significant point drops.

The Magi is again predicting platinum slots, as it did last year with 95% accuracy. This year, it also calculates slots for All Service and Middle School teams.

Coaches and captains who want more practice should consider Jump, a scoring engine for Linux & Windows that provides a lot of power for a fair price. Read more about it in my announcement post.


  1. “Several years” is vague. I don’t remember exactly which competition I started with, so I’ll leave it vague. The earliest scraper I have open-sourced is the one for CyberPatriot 6, available on GitHub.

  2. CyberPatriot is the National Youth Cyber Defense program, presented by Northrop Grumman.

  3. CPOC is the CyberPatriot Operations Center, commonly used in reference to the staff and organizers behind CyberPatriot.

  4. Not really an average score, but it was normal enough that I consider it “average.”

  5. Scraped refers to what The Magi saw at the end of the CyberPatriot 8 competition window. Quoting CPOC, “The scores and warnings shown on [the scoreboard] have not been officially verified and are provided for reference purposes only. Displayed scores may not include penalties or other lost points. Official scores are published by the CyberPatriot Program Office during the week following each round of competition.”

  6. CCS is the CyberPatriot Competition System, developed at the University of Texas at San Antonio.

  7. The same user provided the full email sent to affected teams.

 