As I have already mentioned in my last post and in many others, I am interested in testing the assumptions with which we carry out our daily work. One of the assumptions we work under is that timeouts are useful. To test this assumption, I participated (with reader Ben Raymond, who did the actual statistics parts, i.e. the ‘work’) in a study of timeouts in the Polish and Italian men’s leagues from last season. For the 2015/2016 Polish Plus Liga, we were able to obtain data for over 70% of the matches (143/181). We found out some interesting things.
As an explanatory note, in this post I am trying to summarise and simplify the findings and not add too much in the way of tables and figures that interrupt the flow of the post. Therefore there are a couple of footnotes with extra information. All use of the word ‘significant’ is in its statistical sense. The final paper with all notes, discussion, tables and figures will be linked later.
One assumption, or conventional wisdom, that I have written about is that there are more service errors after timeouts. We did not find that to be true. In fact, the opposite is true. The first serve of the set and serves after both tactical and technical timeouts produce lower error rates than serves in general play. This result could be explained in two ways. Either the conventional wisdom, reinforced by years and years of confirmation bias, is not actually true. Or professional players, instructed by their coaches or through years of training, approach serving at these moments differently, which leads to different outcomes.
|Serve category|Serves|Errors|Error Rate|
|---|---|---|---|
|First of set|543|62|0.114|
|Followed technical timeout|1016|126|0.124|
To test whether players did in fact approach serving differently at different match junctures, we also looked at ace percentage and perfect pass percentage. We found that the ace percentage at the start of the set was a little higher, and after a timeout a little lower, but neither of these results is significant. As far as reception quality (as measured by perfect pass percentage) goes, reception in general play is worse than reception after timeouts, although this result falls short of significance.
Overall, I think we can reasonably infer, due to the lower error rate and higher reception quality, that players approach serving differently after timeouts. So maybe the conventional wisdom was once true even if it no longer is.*
The biggest theory we wanted to test is that timeouts affect sideout percentage. Here we found a very interesting thing. The sideout percentage is a remarkably robust figure. Two thirds of all serve attempts are won by the receiving team. This figure is almost exactly the same in general play, on the first ball of the set, after a tactical timeout and after a technical timeout.
|Serve category|Opp Serves|Sideouts|Sideout Rate|
|---|---|---|---|
|First of set|543|369|0.68|
|Followed technical timeout|1016|680|0.669|
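The published rates follow directly from the raw counts in the two tables above; a quick Python sketch, purely as an arithmetic check, reproduces them:

```python
# Recompute the published rates from the raw counts in the tables above.
serve_errors = {
    "first of set": (62, 543),              # (errors, serves)
    "after technical timeout": (126, 1016),
}
sideouts = {
    "first of set": (369, 543),             # (sideouts, opponent serves)
    "after technical timeout": (680, 1016),
}

for label, (errors, serves) in serve_errors.items():
    print(f"{label}: error rate = {errors / serves:.3f}")
for label, (so, serves) in sideouts.items():
    print(f"{label}: sideout rate = {so / serves:.3f}")
```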
We looked deeper to see if timeouts may actually be more or less effective in different parts of the set, and saw that there might be some small positive effect on sideout percentage from a timeout taken before the score is 10. However, overall there is no significant effect of a timeout taken at any point in the set. The same goes for score differential.
Service Series / Runs
The final area that we considered in the project was how these parameters changed during a service series. We found that the likelihood of a service error does not change significantly during a series**. On the other hand, sideout percentage does change, for the worse. After the third serve in a series, sideout percentage decreases significantly. Even though this is the one situation in which we found variation in sideout percentage, there was still no significant effect of taking a timeout.
There is no evidence, in this league, in this season, that timeouts affect the game in any useful way.
Further posts in the series can be found here and here.
The full study can be found here.
* We also looked at Earned Sideout (Sideouts won not including serve errors) and First Ball Sideout (sideouts won on the first attack after reception). Both of these were also higher after timeouts, which is also what you would expect if serving was weaker at those times.
**It has been suggested that the third serve of a series is most likely to be an error. This conventional wisdom did not hold up.
I really love that you challenge common ideas about timeouts. I am afraid that the presentation of the statistics is slightly wrong. The decimal separator is placed wrongly in the column Sideout Percent. Example: Serving – Followed Timeouts – 1460 / 100 = 14.6 equals 1 %. Hence 216 / 14.6 = 14.795 % (rounded) and not 0.148 (which would be an outlier player 😉 ).
Yes, in the original paper the column header is ‘sideout rate’, hence 0.148. In writing this post I changed the header to ‘sideout percentage’, which as you correctly point out is technically incorrect. Thanks 🙂 Fixed now
So the error rate of serving after timeouts is smaller than in general play, which might suggest that players serve with less risk, which, as you also showed, makes reception easier. That is actually a good point in favour of taking timeouts, isn’t it? Although the sideout percentage is still the same.
Thanks for interesting statistics post!
Can you explain why the timeout is not useful after a series of serves? You said that after the third serve in a series, the sideout rate decreases significantly. In this case a timeout is likely to be taken. And after that, the sideout rate returns to the normal 66% according to your stats. So isn’t it a useful way to increase the rate then?
The data shows that but the sample size is small. That is why the result is not significant.
That means it might be true but we can’t say for sure.
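The sample-size point can be made concrete with a rough normal-approximation confidence interval for a proportion. Note that the 0.72 rate and the sample sizes below are hypothetical, chosen only to illustrate why a small sample cannot establish a difference from the two-thirds baseline:

```python
import math

def approx_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

baseline = 0.667  # overall sideout rate reported in the post

# Hypothetical: an observed 0.72 sideout rate after timeouts in a run situation.
lo, hi = approx_ci(0.72, 50)     # small sample: wide interval
print(f"n=50:   ({lo:.3f}, {hi:.3f})")  # contains 0.667 -> not significant

lo, hi = approx_ci(0.72, 5000)   # large sample: narrow interval
print(f"n=5000: ({lo:.3f}, {hi:.3f})")  # excludes 0.667 -> would be significant
```

The same observed rate that is inconclusive at n=50 would be a clear effect at n=5000, which is exactly the “might be true but we can’t say for sure” situation above.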
Thanks for the article.
A few questions.
1. From a research design perspective, why are you using the first serve of a set?
2. Could you add the column: after a tactical time out?
3. Shouldn’t you use earned SO% for the side out study?
4. Wouldn’t it make more sense to compare the serving error rate / SO% in the points before a time out compared to the points after the time out?
I think the best way to do the study is to analyze when a coach normally takes a time out, let’s say after they lose 2–3 points in a row. Now you look at the serve / SO% when the coach normally would take a time out but doesn’t have one left and compare the two scenarios. I think/know you would have different findings.
PS: are you at WL finals in Poland ?
1. We found a difference between tactical and technical timeouts. We then thought that perhaps a technical is actually more analogous to the first serve of a set.
2. Don’t we have that already?
3. Why? The point of taking a timeout is to win the next rally. It doesn’t matter how.
We have that also, if you read the footnote. ESO% increases by the same amount as service errors decrease. SO% is unchanged.
4. Isn’t that covered by the results during series? Please don’t say you ‘know’ it will be different. Unless you have results you don’t know anything. And if you do have results, you should share them.
And you actually describe two different scenarios there. Which one do you mean?
I’m not in Kraków. Still in Australia.
2.) you have followed timeout and technical time out. How I understood it: followed timeout = technical + tactical?
3.) But you also look only at serving errors. 🙂 Just to have the full picture, because you separate serving errors and sideouts, I thought it might be interesting to see the effect here. “ESO% increases by the same amount as service errors decrease”: that can’t be right.
Your comment in the footnote is saying that timeouts have an effect (ESO higher and serving weaker after a timeout), but your conclusion is saying the opposite.
4.) The first scenario (ESO and serving error before a time out compared to after) is significantly different.
2. Ok. Timeout = tactical TO
3. In the full paper we have everything. ESO increases, serve errors decrease, SO stays the same. What I wrote is that the variance in some parameters seems to imply that serving is weaker after a timeout. But in all cases SO stays the same. How the point is won is irrelevant.
4. Do you have data? Assuming you are correct (which I think is likely) it is axiomatically correct. Simple regression to the mean. It doesn’t say anything.
I am still confused. ESO% increased, serve errors decreased, SO% stays the same. Or as you wrote before: “ESO% increases by the same amount as service errors decrease”. This is only possible if ALL of the additional serves that land in the court after the timeout have a point-scoring rate of 0% (opponent SO = 100%). I can’t imagine that with a sample size of 21000 serves.
I disagree with the statement that how the point is won is irrelevant in this study. I think it has a huge impact on a coach’s decision to take a timeout: what rotation you are in, who is serving, why you lost 2 points in a row, etc. Maybe separate here by jump server vs float server, setter in 1 vs setter not in 1, etc. Just a thought 🙂
Yes, I have data, and it makes sense to me. You take a timeout to go back to normal. Before a timeout you were worse than you normally are; that’s why you take the timeout. So I would even argue that your findings support the effectiveness of timeouts, because you show that after a timeout you go back to normal (AVG).
On a side note, I think you can’t argue with regression to the mean in sports in general. Players are not a coin flip. Or: you don’t keep a player who plays really badly in the game with the argument that he will be great in a few points because of regression to the mean. Or the other way around. And in this particular case, even from a statistical standpoint, the sample size is way too small for regression to the mean.
Sorry, do I have formula for ESO wrong? We are using
(number of sideouts – opp serve errors) / (number of receptions)
The sideout rate after a tactical timeout is 0.668 and in general play (not after a tactical or technical TO or the first serve of a set) is 0.666. So using the above formula, the sideout rate is essentially the same (±0.002). The number of service errors after a timeout is lower. Therefore, ESO must be, and is, higher. You are right, the difference is not exactly the same because the denominator is different. I wasn’t looking at the tables when I made that comment.
To answer your question directly, yes, (effectively) ALL of the additional serves that land in the court after the timeout have a point-scoring rate of 0%. That is a really, really interesting result.
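Plugging the technical-timeout row of the tables in the post into the quoted formula gives a worked example. One assumption here: all 1016 serves are treated as the reception denominator, which may differ from the paper’s exact definition of ‘receptions’:

```python
# Earned Sideout: sideouts won excluding opponent serve errors,
# using the formula quoted above.
def earned_sideout_rate(sideouts, opp_serve_errors, receptions):
    return (sideouts - opp_serve_errors) / receptions

# Technical-timeout row: 680 sideouts and 126 serve errors from 1016 serves.
so = 680 / 1016
eso = earned_sideout_rate(sideouts=680, opp_serve_errors=126, receptions=1016)

print(f"SO  = {so:.3f}")   # 0.669
print(f"ESO = {eso:.3f}")  # 0.545
```

With serve errors down after timeouts but SO unchanged, ESO necessarily moves up, which is the arithmetic behind the footnote.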
All of your specific examples are valid and of course are part of the decision making process. And you can come up with a million other specific examples that are equally valid. And you are almost certainly right that the coach might be able to have some effect in some spot situations. It doesn’t change the overall results that we have. The expectation of winning a rally after taking a timeout is exactly the same as the expectation of winning a rally if you don’t take a timeout.
You are 100% correct that a coach takes a timeout to get back to normal. You are 100% correct that this data shows that taking a timeout gets you back to normal. This data also shows that not taking a timeout also gets you back to normal. These findings support the view that timeouts are effective. BUT they do not support the view that taking a timeout is different from not taking a timeout. The findings show that doing nothing is equally as effective as taking a timeout.
Whatever the appropriate statistical jargon, the situation you describe (re pre and post TO) is exactly what you would expect. It might be because of the timeout, it might be because the earth shifted on its axis. The data can’t show the cause.
And on the scoreboard, every point is equal.
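The ‘back to normal either way’ point can be illustrated with a toy simulation, assuming (purely for illustration) that every sideout attempt is an independent trial at the overall two-thirds rate:

```python
import random

random.seed(1)
P_SIDEOUT = 2 / 3   # overall sideout rate from the post
N = 300_000

# Simulate a long sequence of independent sideout attempts.
results = [random.random() < P_SIDEOUT for _ in range(N)]

# Condition on the receiving team having just failed to side out three
# times in a row -- the kind of run after which a timeout gets called.
after_bad_run = [
    results[i]
    for i in range(3, N)
    if not any(results[i - 3:i])
]

rate = sum(after_bad_run) / len(after_bad_run)
print(f"samples after a 3-failure run: {len(after_bad_run)}")
print(f"sideout rate after a bad run:  {rate:.3f}")
```

Under independence, the rate after a bad run is still roughly 0.667 whether or not anything is done about it, which is exactly the pattern of returning to normal without intervention that the data show.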
Is there any statistic about the following two or three rallies? Timeouts often occur after a series of rallies won by the opponent…
This is fascinating. Thank you.
Just a question about the sideout percentage after a timeout. Given you have shown that timeouts are generally taken by teams that are behind, does your data indicate that these teams’ side out percentage before the timeout (that is, in the points that have occurred in that set already) tends to be lower than the average side out percentage? If so, do you know by how much?
If the losing teams’ side out percentage is generally lower than the side out average, could the fact that the side out percentage after a timeout is back to essentially the average in fact provide evidence that the timeouts have worked to improve the teams’ side out percentage back to the overall mean (given the fact that timeouts are primarily taken by the receiving team)? I don’t have a clue what the figures for any of this would be, and so do not know whether this difference or change is significant or negligible, so I could be way off the mark. Indeed the side out percentage after technical timeouts you show seems to argue against what I have suggested above.
Thanks for doing this. Really worthwhile.
It would be more interesting to look at the effect of the serve after a timeout. For example: perfect reception %, positive reception %, minus reception %, freeballs/overpasses %, and so on, rather than just the ratio of missed serves. If the server serves more safely after a timeout, how much is that reflected in the passing outcomes?
The sum total of the effect of the serve after a timeout is that the sideout percentage is the same.
We note that there is some suggestion that serving is weaker, but the ace rate is higher. However, the weaker serving is not reflected in better sideout percentage, which is what you would expect.
We looked at error rate specifically to test the widespread assumption that there are more service errors after timeouts. This assumption turns out to be, shall we say, misguided.