

Hi,
I run the following tuning function for svm. It's very strange that every
time I run this function, best.parameters gives different values.
[A]
> svm.tune <- tune(svm, train.x, train.y,
                   validation.x = train.x, validation.y = train.y,
                   ranges = list(gamma = 2^(-1:2),
                                 cost  = 2^(-3:2)))
# where train.x and train.y are pre-specified matrices.
# output command:
>svm.tune$best.parameters$cost
>svm.tune$best.parameters$gamma
result:
cost gamma
0.25 4.00
run A again:
cost gamma
1 4
again:
cost gamma
0.25 4.00
The results are this unstable; if they vary so much, why do we need to tune
at all? Do you know if this behavior is normal? Can we trust best.parameters
for prediction?
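[Editor's note: a self-contained version of the call above, with synthetic
stand-ins for train.x and train.y (the e1071 package is assumed). Note that
tune() performs 10-fold cross-validation by default, and validation.x /
validation.y are only consulted when sampling = "fix" is requested in
tune.control() -- so each call repartitions the data at random, which is
what produces the varying best.parameters:]

```r
library(e1071)

set.seed(1)                                  # only to make the toy data fixed
train.x <- matrix(rnorm(87 * 5), ncol = 5)   # 87 observations, as in the thread
train.y <- factor(rbinom(87, 1, 0.5))        # binary class labels

svm.tune <- tune(svm, train.x, train.y,
                 ranges = list(gamma = 2^(-1:2),
                               cost  = 2^(-3:2)))

svm.tune$best.parameters   # may differ between calls: the CV folds are random
svm.tune$performances      # error estimate for every gamma/cost combination
```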
Thank you so much for helping out!!
Best Regards,
Maggie
______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Maggie Wang wrote:
> Hi,
>
> I run the following tuning function for svm. It's very strange that every
> time I run this function, best.parameters gives different values.
> [...]
> The result is so unstable, if it varies so much, why do we need to tune? Do
> you know if this behavior is normal? Can we trust best.parameters for
> prediction?
I guess you do not have very many observations in your dataset. Which
parameters come out best then depends strongly on the cross-validation
splits, and therefore you get quite different results.
Uwe Ligges
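[Editor's note: tune() draws its cross-validation folds with R's random
number generator, so fixing the seed before each call makes repeated runs
agree -- a minimal sketch with placeholder data (e1071 assumed). This makes
the result reproducible; it does not remove the underlying sensitivity of a
small sample to how it is split:]

```r
library(e1071)

train.x <- matrix(rnorm(87 * 5), ncol = 5)   # placeholder data
train.y <- factor(rbinom(87, 1, 0.5))

set.seed(42)                                 # same seed -> same CV partition
run1 <- tune(svm, train.x, train.y,
             ranges = list(gamma = 2^(-1:2), cost = 2^(-3:2)))

set.seed(42)
run2 <- tune(svm, train.x, train.y,
             ranges = list(gamma = 2^(-1:2), cost = 2^(-3:2)))

identical(run1$best.parameters, run2$best.parameters)   # TRUE
```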


Hi, Uwe,
Thanks for the reply!! I have 87 observations in total. If this amount
causes the different best.parameters, is there a better way than
cross-validation to tune them?
Thank you so much for the help!
Best Regards,
Maggie
On Dec 27, 2007 6:17 PM, Uwe Ligges <[hidden email]>
wrote:
>
> Maggie Wang wrote:
> > I run the following tuning function for svm. It's very strange that every
> > time I run this function, best.parameters gives different values.
> > [...]
>
> I guess you do not have very many observations in your dataset. Which
> parameters come out best then depends strongly on the cross-validation
> splits, and therefore you get quite different results.
>
> Uwe Ligges


Maggie Wang wrote:
> Hi, Uwe,
>
> Thanks for the reply!! I have 87 observations in total. If this amount
> causes the different best.parameters, is there a better way than
> cross-validation to tune them?
In order to get stable (I do not say "best") results, you could try the
bootstrap with many replications or leave-one-out cross-validation.
Uwe
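[Editor's note: both suggestions can be requested through tune.control() --
a sketch with placeholder data, e1071 assumed. Leave-one-out means setting
cross to the number of observations; the bootstrap uses sampling =
"bootstrap" with nboot replications:]

```r
library(e1071)

train.x <- matrix(rnorm(87 * 5), ncol = 5)   # placeholder data
train.y <- factor(rbinom(87, 1, 0.5))

## leave-one-out cross-validation: one fold per observation, so the
## partition itself is deterministic
loo <- tune(svm, train.x, train.y,
            ranges = list(gamma = 2^(-1:2), cost = 2^(-3:2)),
            tunecontrol = tune.control(sampling = "cross",
                                       cross = nrow(train.x)))

## bootstrap with many replications: averages out the resampling noise
boot <- tune(svm, train.x, train.y,
             ranges = list(gamma = 2^(-1:2), cost = 2^(-3:2)),
             tunecontrol = tune.control(sampling = "bootstrap",
                                        nboot = 100))

loo$best.parameters
boot$best.parameters
```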


Thank you so much! I will have a try!! ~ maggie
On Dec 27, 2007 6:43 PM, Uwe Ligges <[hidden email]>
wrote:
>
> Maggie Wang wrote:
> > Thanks for the reply!! I have 87 observations in total. If this amount
> > causes the different best.parameters, is there a better way than
> > cross-validation to tune them?
>
> In order to get stable (I do not say "best") results, you could try the
> bootstrap with many replications or leave-one-out cross-validation.
>
> Uwe

