> This package uses a modified version of the aov() function, which uses

> Permutation Tests

>

> I obtain different p-values for each run!

Could that be because you are defaulting to perm="Prob"?

I am not familiar with the package, but the manual is informative.

You may have missed something when reading it.

"...The Exact method will be used by default when the number of observations is less than or equal to maxExact, otherwise Prob will be used.

Prob: Iterations terminate when the estimated standard error of the estimated proportion p is less than p*Ca"

I would assume that probabilistic permutation is random and will change from run to run.

You could use set.seed() to stop that, but it's actually quite useful to see how much the results change.
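A minimal sketch of pinning the RNG, assuming the package in question is lmPerm (whose aovp() matches the manual text quoted above); `y`, `grp`, and `mydata` are placeholder names, not anything from your post:

```r
## Assumption: the package is lmPerm and the model is a one-way layout.
## 'mydata' with columns 'y' (response) and 'grp' (factor) is hypothetical.
library(lmPerm)

set.seed(42)  # fix the RNG state so repeated "Prob" runs give the same p-value
fit <- aovp(y ~ grp, data = mydata, perm = "Prob")
summary(fit)
```

Re-running the two lines after set.seed() should then reproduce the same p-value each time.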

If you want complete permutation, you'd need to force Exact (unless that does not mean what it sounds like for this package).

It looks like that requires you to set maxExact to at least your number of observations. But since the number of permutations grows combinatorially with sample size, that could take a _long_ time for a run; the example in the help page does not complete in a useful time when maxExact is set to exceed the number of data points.
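Forcing the exact method would look something like the following. This is a hedged sketch: it again assumes lmPerm, and maxExact is simply the argument name from the quoted manual passed through to aovp(); `y`, `grp`, and `mydata` are placeholders.

```r
## Assumption: lmPerm's aovp() accepts maxExact (named in the quoted manual).
## Raising maxExact to the number of observations makes "Exact" the default;
## expect a very long run time beyond a handful of data points.
library(lmPerm)

fit <- aovp(y ~ grp, data = mydata, perm = "Exact",
            maxExact = nrow(mydata))
summary(fit)
```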

So I'd probably run it using Prob and simply note the range of results for a handful of runs to give you an indication of how far to trust the answers.
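For example (hypothetical lmPerm code with placeholder data; the "Pr(Prob)" column name is my assumption about aovp's summary table, so check against your own output):

```r
## Run the Prob method several times and look at the spread of the p-value.
## Assumptions: lmPerm is the package; summary(fit)[[1]] is the ANOVA table
## and "Pr(Prob)" its p-value column; 'y', 'grp', 'mydata' are placeholders.
library(lmPerm)

pvals <- replicate(10, {
  fit <- aovp(y ~ grp, data = mydata, perm = "Prob")
  summary(fit)[[1]][["Pr(Prob)"]][1]
})
range(pvals)  # the spread across runs indicates how far to trust the answer
```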

> Would it still be possible to use the regular aov() by generating
> permutations in advance (obtaining therefore a normal distribution thanks
> to the Central Limit Theorem), and applying the aov() function afterwards?
> Does that make sense?

As a chemist, I'd guess No: a permutation test gets its null distribution from the permutations themselves, so appealing to the Central Limit Theorem defeats the point, and a plain aov() on pre-permuted data would still report F-distribution p-values. You'd also be even more limited in the number of permutations.

> Or maybe this issue could be due to unbalanced classes? I also tried to

> weight observations based on proportions, but the function failed.

No, it has nothing to do with balance if the results change from run to run with no change in the model. I'd guess imbalance may exacerbate the permutation variability somewhat, but it won't _cause_ it.

> Any alternative solution for performing a One-Way ANOVA Test over

> Non-Normal Data?

Yes; the traditional nonparametric test for one-way data (balanced or not) is the Kruskal-Wallis test - see ?kruskal.test.
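For example, in base R with no extra packages (`y`, `grp`, and `mydata` are placeholder names):

```r
## Kruskal-Wallis rank sum test for a one-way layout.
## 'grp' should be a factor (or something coercible to one).
kruskal.test(y ~ grp, data = mydata)
```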

Classical ANOVA on ranks can also be defended as a general 'nonparametric' approach, though I gather it can also be criticised.
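A rank-transform ANOVA is a one-liner sketch (same placeholder names as above):

```r
## Replace the response by its ranks, then run the ordinary aov() on them.
summary(aov(rank(y) ~ grp, data = mydata))
```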


______________________________________________

[hidden email] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.