(package e1071) SVM tune for best parameters: why are they different every time I run?


(package e1071) SVM tune for best parameters: why are they different every time I run?

Maggie Wang-2
Hi,

I run the following tuning function for svm. It's very strange that every
time I run this function, best.parameters gives different values.

[A]

svm.tune <- tune(svm, train.x, train.y,
                 validation.x = train.x, validation.y = train.y,
                 ranges = list(gamma = 2^(-1:2),
                               cost  = 2^(-3:2)))

# where train.x and train.y are pre-specified matrices

# output commands:
svm.tune$best.parameters$cost
svm.tune$best.parameters$gamma



result:

 cost gamma
 0.25  4.00

run A again:

 cost gamma
    1     4

again:

 cost gamma
 0.25  4.00



The result is very unstable. If it varies this much, why do we need to tune
at all? Do you know if this behavior is normal? Can we trust best.parameters
for prediction?
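
A minimal sketch of what I mean, using the same train.x and train.y as
above (each call draws new random folds, so the rows can differ):

library(e1071)

# run the tuning five times and collect the chosen parameters
t(replicate(5, unlist(tune(svm, train.x, train.y,
                           ranges = list(gamma = 2^(-1:2),
                                         cost  = 2^(-3:2)))$best.parameters)))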



Thank you so much for helping out!!



Best Regards,

Maggie


______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: (package e1071) SVM tune for best parameters: why are they different every time I run?

Uwe Ligges


Maggie Wang wrote:

> I run the following tuning function for svm. It's very strange that
> every time I run this function, best.parameters gives different values.
> [...]
> The result is very unstable. If it varies this much, why do we need to
> tune at all? Do you know if this behavior is normal? Can we trust
> best.parameters for prediction?

I guess you do not have very many observations in your dataset. Which
parameter set is best then depends heavily on the random cross-validation
splits, and therefore you get quite different results.
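
Note also that (if I read ?tune correctly) validation.x and validation.y
are only used with the "fix" sampling scheme of tune.control(); with the
defaults, your call is tuned by 10-fold cross-validation on randomly drawn
folds. A minimal sketch to make the runs repeatable, assuming your train.x
and train.y (the seed value is arbitrary):

library(e1071)

set.seed(42)   # arbitrary; fixes the random fold assignment
svm.tune <- tune(svm, train.x, train.y,
                 ranges = list(gamma = 2^(-1:2),
                               cost  = 2^(-3:2)))
svm.tune$best.parameters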

Uwe Ligges




Re: (package e1071) SVM tune for best parameters: why are they different every time I run?

Maggie Wang-2
Hi Uwe,

Thanks for the reply!!  I have 87 observations in total. If this amount
causes the different best.parameters, is there a better way than
cross-validation to tune them?

Thank you so much for the help!

Best Regards,
Maggie

On Dec 27, 2007 6:17 PM, Uwe Ligges <[hidden email]> wrote:

> I guess you do not have very many observations in your dataset. Which
> parameter set is best then depends heavily on the random
> cross-validation splits, and therefore you get quite different results.
>
> Uwe Ligges


Re: (package e1071) SVM tune for best parameters: why are they different every time I run?

Uwe Ligges


Maggie Wang wrote:
> Hi Uwe,
>
> Thanks for the reply!!  I have 87 observations in total. If this amount
> causes the different best.parameters, is there a better way than
> cross-validation to tune them?


In order to get stable (I do not say "best") results, you could try
bootstrapping with many replications or leave-one-out cross-validation.
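
Both map directly onto tune.control(); a sketch, with train.x and train.y
as in the original post:

library(e1071)

# bootstrap with many replications (100 resamples instead of the default 10):
svm.boot <- tune(svm, train.x, train.y,
                 ranges = list(gamma = 2^(-1:2), cost = 2^(-3:2)),
                 tunecontrol = tune.control(sampling = "bootstrap",
                                            nboot = 100))

# leave-one-out cross-validation: one fold per observation, so there is
# no random split left at all:
svm.loo <- tune(svm, train.x, train.y,
                ranges = list(gamma = 2^(-1:2), cost = 2^(-3:2)),
                tunecontrol = tune.control(sampling = "cross",
                                           cross = nrow(train.x)))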

Uwe


Re: (package e1071) SVM tune for best parameters: why are they different every time I run?

Maggie Wang-2
Thank you so much! I will give it a try!! ~ maggie
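
In case it helps anyone searching the archives later, this is roughly the
combination I will try (a sketch; the seed and the number of bootstrap
replications are arbitrary choices, train.x/train.y as before):

library(e1071)

set.seed(1)                     # arbitrary, just for repeatability
svm.tune <- tune(svm, train.x, train.y,
                 ranges = list(gamma = 2^(-1:2), cost = 2^(-3:2)),
                 tunecontrol = tune.control(sampling = "bootstrap",
                                            nboot = 200))
svm.tune$best.parameters
pred <- predict(svm.tune$best.model, train.x)  # best.model is refit on the full data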

On Dec 27, 2007 6:43 PM, Uwe Ligges <[hidden email]> wrote:

> In order to get stable (I do not say "best") results, you could try
> bootstrapping with many replications or leave-one-out cross-validation.
>
> Uwe
