A couple of stories come to mind.
In a full-time centralised National Team program there are typically more players training full time than are required for any given tour or competition. When the team is away, those players remain behind and practise in a small group. In my experience, the coach who worked with those players would, without fail, report to the returning team that the players had worked well and improved a lot. And without fail, when those players returned to training with the full group, there was no evidence of improvement.
I was once involved in the review of a National Junior Team tour. When the topic of poor reception during competition came up, one of the coaches wondered aloud why these good receivers had not been able to perform in competition.
The players in the first group had improved, at the drills they were doing. But small group drills have little transfer to the complexity of team play.
The glaring answer to the question in the second situation is ‘they are not actually good receivers’. Which raises the question of why the coach thought they were. The answer: they could do the small drills better than the other players.
The two stories and their postscripts highlight a common coaching situation. Coaches work with players to prepare them for competition. When the players don’t perform at the expected level, the coaches almost always ask why the player didn’t absorb the training properly, and rarely, if ever, reflect on the training itself.
The evidence supplied by competition is (mostly) unequivocal. And if a coach wants to take credit for good preparation work, they must equally accept responsibility when the evidence shows their work was lacking.