Timeout Studies – Loose Ends

The four posts on the timeout studies conducted by Ben Raymond and me (here, here, here and here) unsurprisingly created some discussion points.  Many readers remain convinced that the timeouts they call have a positive impact on the game for their team.  The logic used was sometimes convoluted and difficult to follow.  Some readers had the honesty to admit that no evidence could be provided that would change their minds.  I did wonder why those people would choose to participate in coaching forums, but that is just an aside.

The most common point was that since the study showed that sideout percentage returned to normal after a timeout, it proved the effectiveness of timeouts.  This factually correct statement neatly sidestepped the equally factually correct statement that in the absence of a timeout the sideout percentage also returned to normal.  So whatever the study ‘proved’, it was that taking a timeout and not taking a timeout were EQUALLY effective in their short term impact on sideout percentage.  If your personal bias is toward action, then you will use that information to take timeouts.  If your personal bias is somewhat neutral between action and inaction, then you probably won’t.

A second common point was that it is not the coach’s sole aim to win a single sideout, so it would be more accurate to analyse a series of points before and after the timeout.  I think this is a reasonable point.  I would propose that coaches take timeouts both to win a single sideout and for some longer term objective.  I am certain that this analysis would show that the sideout percentage for the three or four points before the timeout is lower than for the three or four points after the timeout.  That would be entirely consistent with the data we have, and also entirely consistent with the findings we have already presented.  That is, there would be a statistically insignificant difference between taking a timeout and not taking a timeout.  (*See my calculations below.)

Strangely, nobody brought up other situations that occur in the game that might influence the impact of timeouts.  Substitutions take up almost as much time as timeouts, and in the Polish and Italian leagues at least, video challenges take up even more.  In each set there can be up to twenty official breaks in play, of which we looked at only five or six.  We have no clue what happens with the rest of them, nor how the different breaks in play interact with each other.  Ultimately, maybe the sheer volume of all of the breaks in play negates the effect of any single break.  Maybe volleyball is so intermittent that none of the breaks make any difference anyway.

My personal takeaway is simple.  We coaches overestimate our individual impact.  Perhaps in more areas than just this one.


*Scenario – Team loses 2 break points in a row then takes a timeout.  What is the change in sideout percentage for three points before and after the timeout? (Figures from Polish League)

SO rate before the timeout – 1 sideout from 3 attempts = 0.333

SO rate after the timeout (expected) – (0.666 + 0.666 + 0.666) / 3 = 0.666

Therefore, the timeout was effective in the medium term.

BUT…

if no timeout were taken, the expected SO rate after the initial series is (0.666 + 0.666 + 0.666) / 3 = 0.666

Therefore, the medium term sideout rate is also exactly the same with and without a timeout.
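The arithmetic above can be sketched in a few lines of Python. The 0.666 baseline sideout rate is the Polish League figure used in the scenario; everything else is illustrative:

```python
# Sketch of the scenario above, assuming each point reverts to a
# constant baseline sideout rate of 0.666 (Polish League figure).
BASELINE_SO = 0.666

# Three points before the timeout: 1 sideout from 3 attempts.
so_before = 1 / 3

# Expected rate over the next three points.  Under the study's finding,
# every point reverts to the baseline whether or not a timeout is taken,
# so both averages are simply the baseline.
expected_with_timeout = sum([BASELINE_SO] * 3) / 3
expected_without_timeout = sum([BASELINE_SO] * 3) / 3

print(f"before the timeout:      {so_before:.3f}")
print(f"after, with timeout:     {expected_with_timeout:.3f}")
print(f"after, without timeout:  {expected_without_timeout:.3f}")
```

Either way the rate recovers from 0.333 to 0.666, which is exactly the point: the recovery happens with or without the timeout.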

_________________________________________________________________________________________________

Read about the great new Vyacheslav Platonov coaching book here.


6 comments

  1. A couple things.

    First a side question. why are points scored by the serving team called “break points”? I don’t know of any sport other than tennis where that term is used, so I’m guessing it was ported over to volleyball from there. In that case, we’re using it completely backwards.

    Now for the real point. There’s a contradiction in the data. Or perhaps it’s better to think of it as simply an indication that a deeper drill down might be required. I’m referring to the fact that as serving runs progress the sideout rate drops.

    It would be good to see if the sideout rate for each given point is significantly different from the overall sideout rate. More data might be required for that. If the indications in the report hold up, though, there does look to be a point at which it does make sense to take a timeout to increase your chances of sideout on the next ball.

    Bigger picture, I will wait for a more comprehensive study to draw a full set of conclusions. Looking at players in two of the top professional leagues in the world is simply not representative of the spectrum of the sport.

    As for the coaches who say there is no evidence that could be provided that would change their minds, of course they believe they’re better than everyone else. Maybe they should have a look at this: http://coachingvb.com/think-youre-great-coach-youre-probably-poor/ 🙂


    1. ‘Break point’ comes from tennis. We would need a different designation.
      I’m not sure ‘contradiction’ is the correct description. The data shows some indication that a timeout could ‘work’ during a run, but the result is not significant. See Section 3.2.
      Actually you don’t know that these results are not representative. You are assuming they are not. They might be.


  2. As several people suggested would be the case, the data show that the sideout rate (of the team calling the timeout) prior to timeout is indeed lower than afterwards. In the four points prior to timeout the average sideout rate is 0.50. For the first point following the timeout it is 0.67, and over the four points following the timeout it is also 0.67.

    One of the real difficulties of this type of study is inferring cause and effect. Looking at the above numbers, one would naturally tend to conclude that the timeout has caused the increase in sideout rate. However, before we can make this conclusion, we need to be sure that the same rise in sideout rate would *not* have happened if the timeout was not called. To conclusively show this would need something like a controlled experiment: in every situation where a coach wants to call a timeout, they toss a coin and randomly either take the timeout or not. Then it would be possible to truly isolate the effect of taking the timeout.
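A rough sketch of what that coin-toss experiment would look like, simulated here under the null hypothesis that the timeout has no effect. The 0.666 baseline rate and the trial count are assumptions for illustration, not figures from the study:

```python
import random

random.seed(1)
BASELINE_SO = 0.666  # assumed league-wide sideout rate

def next_point_sideout(timeout_called):
    # Null hypothesis: the timeout has no effect, so the sideout
    # probability is the baseline either way.
    return random.random() < BASELINE_SO

timeout_results, no_timeout_results = [], []
for _ in range(100_000):
    call_timeout = random.random() < 0.5  # the coin toss
    result = next_point_sideout(call_timeout)
    (timeout_results if call_timeout else no_timeout_results).append(result)

rate_to = sum(timeout_results) / len(timeout_results)
rate_no = sum(no_timeout_results) / len(no_timeout_results)
print(f"sideout rate with timeout:    {rate_to:.3f}")
print(f"sideout rate without timeout: {rate_no:.3f}")
```

With a real randomised trial you would compare the two observed rates just like this; any systematic gap between them would then be attributable to the timeout itself.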

    Clearly, we don’t have that experimental luxury. One way of gaining some insight here is to find situations in the data record where a timeout might reasonably have been called, and compare the outcomes between the timeout/not-timeout cases. This was the motivation for Tables 16 and 17 in the analyses (http://untan.gl/articles/2016/07/16_timeouts-in-the-polish-volleyball-league.html). We know that timeouts tend to be called after multiple consecutive serves. Table 16 suggests that sideout rate drops slightly with serve run length, but that timeouts don’t have a significant effect on this. In other words, in these situations where we might reasonably expect a timeout to be called, there does not appear to be any consistent difference in outcome when calling a timeout or not.

    Table 16 does show a slight tendency for sideout rates to increase after timeout, but not significantly so. The lack of significance here suggests that any timeout effect either does not exist, or if it does it is patchy or weak and so we cannot detect it given the data available. Perhaps there might be some situations in which a timeout effect is more predictable, and this seems like a reasonable line of enquiry to follow up on.
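To illustrate why a difference of this size can fail to reach significance, here is a standard pooled two-proportion z-test applied to the rates quoted above. The sample sizes are hypothetical, not taken from the study:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    # Pooled two-proportion z-test: returns the z statistic and the
    # two-sided p-value under a normal approximation.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 0.50 before vs 0.67 after (rates from the comment above);
# 50 observations per group is a hypothetical sample size.
z, p_value = two_proportion_z(0.50, 50, 0.67, 50)
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```

At these hypothetical sample sizes a 0.50-vs-0.67 gap is not statistically significant, which is consistent with the "weak or patchy effect" reading: the effect, if any, is too small to detect reliably in data of this volume.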

