A parametric test assumes that the data follow a certain parametric distribution, with the most common assumption being that of a normal distribution. Strictly speaking, if this assumption is violated (e.g., if the data do not follow a normal distribution), the test is no longer (exactly) valid. However, most parametric tests, including all of the parametric tests we employ, are robust in the sense that if the sample is large, the test remains valid "in practice": although it is no longer exact, it delivers a very good approximation. There is no hard-and-fast rule for when a sample is large (enough), but as a rule of thumb, 30 to 50 observations generally suffice. What constitutes the sample depends on the application. For example, when testing AAR or CAAR, the sample refers to the number of firms included. So if one has only five or ten firms in the sample, a parametric test is actually not a good idea.
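As a minimal sketch of this setup, the following tests whether the mean CAR across firms differs from zero with a parametric one-sample t-test; the CAR values here are simulated placeholders, and the sample size of 40 firms is chosen to sit within the 30-to-50 rule of thumb:

```python
import numpy as np
from scipy import stats

# Hypothetical cross-section of cumulative abnormal returns (CARs),
# one value per firm. The sample size is the number of firms (40),
# large enough for the t-test's normal approximation to work well.
rng = np.random.default_rng(42)
cars = rng.normal(loc=0.01, scale=0.05, size=40)

# Parametric one-sample t-test of H0: mean CAR = 0.
t_stat, p_value = stats.ttest_1samp(cars, popmean=0.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

With only five or ten firms, the same call would run, but the p-value would rest on a normality assumption the small sample cannot justify.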

A nonparametric test, on the other hand, does not make any assumption about the data following a certain parametric distribution. Nonparametric tests can therefore also be safely employed when the sample size is small. Of course, they also work well when the sample size is large.
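A small hedged sketch of the nonparametric alternative, using a hypothetical sample of only eight firms (a size at which the parametric t-test above would be questionable) and the Wilcoxon signed-rank test, which assumes no parametric distribution:

```python
import numpy as np
from scipy import stats

# Hypothetical CARs for a small sample of 8 firms -- too few for a
# parametric test to be trustworthy, but fine for a nonparametric one.
cars = np.array([0.021, -0.004, 0.013, 0.032, -0.008, 0.017, 0.009, 0.025])

# Wilcoxon signed-rank test of H0: the CAR distribution is symmetric
# about zero. No normality assumption is required, and with a sample
# this small scipy uses the exact null distribution of the statistic.
w_stat, p_value = stats.wilcoxon(cars)
print(f"W = {w_stat:.1f}, p = {p_value:.4f}")
```

The trade-off is power: when the data really are normal and the sample is large, the parametric test will typically detect a given effect with slightly fewer observations.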

In practice, parametric tests are still more popular than nonparametric tests, partly for historical reasons (they were often developed first) and partly because people tend to do "as others have done in the past". We feel that, if anything, nonparametric tests should be(come) more popular, and that is why we have several of them on our menu!