Intermittent reinforcement is an interesting procedure. In many ways, it is hard to distinguish between "no-food trials in an intermittent reinforcement schedule" and "extinction": in both cases, no food is delivered following the target response. More importantly, the removal or prevention of a reinforcer contingent on a particular response (response cost, or negative punishment) adds another twist to the question. Here is how I would address it:
I used to think of myself as standing perpetually on a bridge, with a foot in each camp. I used to spend a lot of time trying to persuade psychologists to understand, or at least come watch, what we were learning about animals through their own science. No luck. No luck in the other direction, either: the behavioral biologists were not much interested in training or reinforcement.
It seems straightforward: we click to mark a desired behavior, and then we reinforce. The act of reinforcing necessitates a change in action: the horse eats the treat, the dog plays with its favorite toy, and the animal being reinforced no longer performs the behavior for which it was clicked. "The click ends the behavior." That phrase has become a widely repeated tenet of clicker training. Yet, is it true?
Sometimes this question is asked in a different way: Will I have to keep clicking and treating forever? In asking either question, what we really want to know is: When are we done? When can we call a behavior trained once and for all? The answer, as with most not-so-simple questions, is: It depends.