Optim function returning always initial value for parameter to be optimized


BARLAS Marios 247554
Hello,

I'm trying to minimize the following problem:

I have a data frame with 2 columns:

data.input = data.frame(state1 = (1:500), state2 = (201:700))

The values of the two columns partially overlap.

I want to minimize the assessment error of each state by using this function:

err.th.scalar <- function(threshold, data){

  state1 <- data$state1
  state2 <- data$state2

  op1l <- length(state1)
  op2l <- length(state2)

  # fraction of state1 values at or below the threshold
  op1.err <- sum(state1 <= threshold)/op1l
  # fraction of state2 values at or above the threshold
  op2.err <- sum(state2 >= threshold)/op2l

  total.err <- op1.err + op2.err

  return(total.err)
}


So I'm trying to minimize the total error, which should essentially be U-shaped as a function of the threshold.
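One quick way to check that shape (a sketch, assuming the data frame and function defined above) is to evaluate the objective over a grid of thresholds and plot it:

```r
# Sketch: evaluate the objective on a grid of thresholds to inspect its shape
data.input <- data.frame(state1 = 1:500, state2 = 201:700)

err.th.scalar <- function(threshold, data) {
  op1.err <- sum(data$state1 <= threshold) / length(data$state1)
  op2.err <- sum(data$state2 >= threshold) / length(data$state2)
  op1.err + op2.err
}

grid <- seq(0, 700, by = 0.5)
errs <- sapply(grid, err.th.scalar, data = data.input)
plot(grid, errs, type = "s", xlab = "threshold", ylab = "total error")
grid[which.min(errs)]  # threshold with the smallest total error on the grid
```

Plotting errs against grid shows the true shape of the objective before handing it to an optimizer.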


I'm using optim as follows:

optim(par = 300, fn=err.th.scalar, data = data.input, method = "BFGS")

For some reason that's driving me crazy: it worked on the first trial, but now the parameter value that optim returns is EXACTLY the same as the initial estimate, whatever initial value I choose.

Any ideas why? I can't see the error at the moment.


Thanks in advance,
Marios Barlas
______________________________________________
[hidden email] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: Optim function returning always initial value for parameter to be optimized

J C Nash
Did you check the gradient? I don't think so. It's zero, so of course
you end up where you start.

Try

data.input= data.frame(state1 = (1:500), state2 = (201:700) )
err.th.scalar <- function(threshold, data){

    state1 <- data$state1
    state2 <- data$state2

    op1l <- length(state1)
    op2l <- length(state2)

    op1.err <- sum(state1 <= threshold)/op1l
    op2.err <- sum(state2 >= threshold)/op2l

    total.err <- (op1.err + op2.err)

    return(total.err)
}

soln <- optim(par = 300, fn = err.th.scalar, data = data.input, method = "BFGS")
soln
require("numDeriv")
gtest <- grad(err.th.scalar, x=300, data = data.input)
gtest


On 2018-02-09 09:05 AM, BARLAS Marios 247554 wrote:

> [...]
>
> optim(par = 300, fn=err.th.scalar, data = data.input, method = "BFGS")


If the computed gradient is very small, it may help to develop an analytic gradient, since the numerical approximation can be zero even when the true gradient is not.

JN


Re: Optim function returning always initial value for parameter to be optimized

Paul Gilbert-2
In reply to this post by BARLAS Marios 247554


On 02/10/2018 06:00 AM, [hidden email] wrote:

> Did you check the gradient? I don't think so. It's zero, so of course
> you end up where you start.
>
> Try
>
> data.input= data.frame(state1 = (1:500), state2 = (201:700) )
> err.th.scalar <- function(threshold, data){
>
>      state1 <- data$state1
>      state2 <- data$state2
>
>      op1l <- length(state1)
>      op2l <- length(state2)
>
>      op1.err <- sum(state1 <= threshold)/op1l
>      op2.err <- sum(state2 >= threshold)/op2l

I think this function is not smooth, and not even continuous. Gradient
methods require differentiable (smooth) functions. A numerical
approximation will be zero unless you are right near a jump point, so
you are unlikely to move from your initial guess.
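A small numeric check illustrates this (a sketch, restating the objective from the thread): the data here are integers, so between consecutive data values the objective is flat, and a central difference taken with optim's default step size (control parameter ndeps = 1e-3) comes out exactly zero:

```r
data.input <- data.frame(state1 = 1:500, state2 = 201:700)

err.th.scalar <- function(threshold, data) {
  sum(data$state1 <= threshold) / length(data$state1) +
    sum(data$state2 >= threshold) / length(data$state2)
}

# central finite difference at a non-data point, using optim's default
# step size for method = "BFGS" (ndeps = 1e-3)
h <- 1e-3
g <- (err.th.scalar(300.5 + h, data.input) -
        err.th.scalar(300.5 - h, data.input)) / (2 * h)
g  # exactly 0: both evaluations land on the same flat step
```

With a zero numerical gradient at the starting point, BFGS declares convergence immediately, which is exactly the behaviour reported above.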

Paul

> [...]
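Since gradient-based methods cannot make progress on a piecewise-constant objective, one derivative-free workaround (a sketch, not taken from the thread itself) is to search candidate thresholds exhaustively: the objective can only change at data values, so midpoints between consecutive sorted values, plus the two extremes, cover every flat step:

```r
data.input <- data.frame(state1 = 1:500, state2 = 201:700)

err.th.scalar <- function(threshold, data) {
  sum(data$state1 <= threshold) / length(data$state1) +
    sum(data$state2 >= threshold) / length(data$state2)
}

# candidate thresholds: one per flat step of the objective
vals <- sort(unique(c(data.input$state1, data.input$state2)))
cand <- c(min(vals) - 1,                         # below all data
          (head(vals, -1) + tail(vals, -1)) / 2, # between consecutive values
          max(vals) + 1)                         # above all data
errs <- sapply(cand, err.th.scalar, data = data.input)
best <- cand[which.min(errs)]
best
```

On this example the search lands at an extreme threshold, which may indicate that the inequality directions in the error definition are worth double-checking against the intended classification rule.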
