Traditional volleyball statistics are boring, uninteresting (yes, I know that is redundant) and conceptually restricting. Moneyball and the data revolution mean we can search for different (better) ways of understanding volleyball by reimagining old methods and developing new ones. Some of those are described here.
One area that is woefully under-analysed is the role of different skill areas in the transition phase. We understand that more aces are better than fewer, that more service errors are (sometimes) better than fewer, that block points are good, and that is about it. Don't get me started on digs and defence. We don't understand well how block and defence together impact the game, beyond knowing that they do. Numbers of blocks and (god forbid) digs do not help in understanding.
We can easily measure the effectiveness of the serve using expBP% and opp.expSO% (coming soon to a blog near you). One of the advantages of using 'expected' measures for reception and serving is that not only do they relate directly to sideout and break point, they use the same scale. This means that by comparing 'expected' results with actual results we can start to analyse different parts of the game. For example…
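To make the 'same scale' idea concrete, here is a minimal sketch of how a per-serve expected break point value could be attached to each serve outcome and averaged into an expBP%. The grades and the probabilities attached to them are illustrative assumptions of mine, not the actual model (that's the post that's coming soon).

```python
# Minimal sketch of the 'expected' idea, with made-up numbers.
# Each serve outcome grade is mapped to an assumed probability of winning
# the rally (a break point) given that outcome; expBP% is just the average
# of those per-serve values. Grades and values here are illustrative only.

EXP_BP_BY_SERVE_GRADE = {
    "ace": 1.00,           # rally already won
    "positive": 0.45,      # opponent out of system
    "ok": 0.35,            # opponent in system but not perfect
    "perfect_pass": 0.25,  # opponent has all attacking options
    "error": 0.00,         # rally already lost
}

def exp_bp(serve_grades):
    """Expected break point rate for a list of serve outcome grades."""
    values = [EXP_BP_BY_SERVE_GRADE[g] for g in serve_grades]
    return sum(values) / len(values)

serves = ["positive", "perfect_pass", "ok", "ace", "error", "ok"]
print(f"expBP% = {exp_bp(serves):.3f}")
```

The reception-side measure (opp.expSO%) works the same way, just graded from the receiving team's point of view, which is what puts the two skills on a directly comparable scale.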
The expBP% measures the effectiveness of the serve: it is how many break points we should expect to win on the basis of serve quality alone. This should mean that the difference between expBP% and actual BP% shows us the effect of the other components of the break point phase, those being block, defence and transition attack. Obviously that number can be either positive or negative. Let's call this difference the defensive rating. A team can have a block-defence system that adds to the value of the serve (positive defensive rating) or detracts from it (negative defensive rating). What about an example from the recent Olympics?
| Team | N serves | BP% | expBP% | Def Rating | opp.expSO% |
|------|----------|-----|--------|------------|------------|
| Brazil | 365 | 0.359 | 0.329 | 0.030 | 0.611 |
| France | 543 | 0.354 | 0.328 | 0.026 | 0.606 |
| Germany | 394 | 0.332 | 0.308 | 0.024 | 0.612 |
| USA | 561 | 0.339 | 0.321 | 0.018 | 0.609 |
| Italy | 520 | 0.319 | 0.310 | 0.009 | 0.602 |
| Slovenia | 366 | 0.352 | 0.346 | 0.006 | 0.578 |
| Poland | 524 | 0.332 | 0.342 | -0.010 | 0.573 |
| Japan | 391 | 0.304 | 0.315 | -0.011 | 0.620 |
| Canada | 256 | 0.289 | 0.309 | -0.020 | 0.615 |
| Serbia | 258 | 0.271 | 0.320 | -0.049 | 0.612 |
| Argentina | 201 | 0.239 | 0.302 | -0.063 | 0.628 |
| Egypt | 148 | 0.209 | 0.288 | -0.079 | 0.651 |
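For anyone who wants to reproduce the Def Rating column, it is simply actual BP% minus expBP%. A minimal sketch using the numbers from the table above (pandas assumed):

```python
# Def Rating is just actual BP% minus expected BP%; the figures below are
# copied straight from the table above.
import pandas as pd

data = {
    "Team": ["Brazil", "France", "Germany", "USA", "Italy", "Slovenia",
             "Poland", "Japan", "Canada", "Serbia", "Argentina", "Egypt"],
    "N_serves": [365, 543, 394, 561, 520, 366, 524, 391, 256, 258, 201, 148],
    "BP_pct": [0.359, 0.354, 0.332, 0.339, 0.319, 0.352,
               0.332, 0.304, 0.289, 0.271, 0.239, 0.209],
    "expBP_pct": [0.329, 0.328, 0.308, 0.321, 0.310, 0.346,
                  0.342, 0.315, 0.309, 0.320, 0.302, 0.288],
}

df = pd.DataFrame(data)
df["Def_Rating"] = df["BP_pct"] - df["expBP_pct"]

# Positive values: block/defence/transition added to what the serve set up;
# negative values: the block-defence system gave some of it back.
print(df.sort_values("Def_Rating", ascending=False).to_string(index=False))
```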
The first thing to notice is that Brazil was the best team in the break point phase at 35.9%. They did it by being good at serving (32.9%, third in expBP%), but their defensive system also added three percentage points to the break point rate. That made them the best defensive team in the tournament, quite a feat considering they lost three of their four matches. That France is the second best defensive team is not a surprise given a) that they won and b) what we know about the individual players.
As we go down the list we notice a couple of interesting things. The best serving teams were Slovenia and Poland, and both relied on their serving to create their point scoring opportunities (both have a defensive rating around zero or below). The team we might think of as the best defensive team, Japan, actually relies very heavily on its serving. As it turns out (and you'll have to take my word for it for now), Brazil, Slovenia and Japan were the best at defence by ATT/D. The key to understanding the difference is that defence (ATT/D) shows how many attacks come from defence, while defensive rating shows how many break points were won relative to what we would expect on the basis of the quality of the serve.
I sense that I have reached the point where writing more will detract from understanding, so I’ll leave it there and ask for comments.