argument "x" is missing in minpack.lm

argument "x" is missing in minpack.lm

Luigi
Hello,
I am trying to optimize a function with nls.lm from the minpack.lm
package, but I can't get it right:
```
A = 3261
B = 10
K = c(8,   24,   39,   63,   89,  115,  153,  196,  242,  287,  344,  408,  473,
      546,  619,  705,  794,  891,  999, 1096, 1242, 1363, 1506, 1648, 1753,
      1851, 1987, 2101, 2219, 2328, 2425, 2575, 2646, 2698, 2727, 2771, 2818,
      2853, 2895, 2926, 2964, 2995, 3025, 3053, 3080, 3102, 3119, 3141, 3152,
      3159, 3172, 3182, 3196, 3209, 3220, 3231, 3239, 3246, 3252, 3261)
O = nls.lm(list(a=A, b=B, x=X), holling,
           lower=NULL, upper=NULL, jac = NULL)
> Error in fn(par, ...) : argument "x" is missing, with no default
```
What am I missing?
Is nls.lm the right function for models that are not linear?
Thank you
--
Best regards,
Luigi


Re: argument "x" is missing in minpack.lm

Sarah Goslee
You create objects A, B, and K, but then use A, B, and X in the call to
nls.lm().

I have no idea if nls.lm is the right tool for your job, but I do know that
you need to use the same names.
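
For example, the minimal change would be something like this (a sketch of
the naming fix only; I can't say whether the rest of the call is right):

O <- nls.lm(list(a = A, b = B, x = K), holling,
            lower = NULL, upper = NULL, jac = NULL)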

Sarah

--
Sarah Goslee (she/her)
http://www.sarahgoslee.com



Re: argument "x" is missing in minpack.lm

Luigi
But I get the same error even if I run:
```
parms = list(a=3261, b=10, x=CH$Cum_Dead[1:60])
O = nls.lm(parms, holling,
           lower=NULL, upper=NULL, jac = NULL)
> Error in fn(par, ...) : argument "x" is missing, with no default
```
The function to be optimized is:
```
function(a, b, x) {
  y = (a * x^2) / (b^2 + x^2)
  return(y)
}
```
So the parameters a, b, and x are the same ones I named in the nls.lm call.

--
Best regards,
Luigi

______________________________________________
[hidden email] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Reply | Threaded
Open this post in threaded view
|

Re: argument "x" is missing in minpack.lm

Ivan Krylov
On Tue, 30 Jun 2020 13:53:10 +0200
Luigi Marongiu <[hidden email]> wrote:

> function(a, b, x) {
>   y = (a * x^2) / (b^2 + x^2)
>   return(y)
> }

Take a look at the examples in ?nls.lm. The first argument of the
function must be the parameter list/vector; the rest of the arguments
are passed from ... in the nls.lm call. This is _unlike_ do.call, which
can be used to "unpack" a list of arguments into function arguments.

To make it more concrete, change your function to:

function(par) (par$a * par$x^2) / (par$b^2 + par$x^2)

Then it should work.
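
For instance, a minimal sketch (assuming A, B and K are defined as in your
first message, and keeping x inside the parameter list exactly as in your
original call):

library(minpack.lm)
holling <- function(par) (par$a * par$x^2) / (par$b^2 + par$x^2)
O <- nls.lm(par = list(a = A, b = B, x = K), fn = holling)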

--
Best regards,
Ivan


Re: argument "x" is missing in minpack.lm

Ivan Krylov
(Adding R-help back to Cc:)

On Tue, 30 Jun 2020 14:44:29 +0200
Luigi Marongiu <[hidden email]> wrote:

> Ok, I tried with:
> ```
> holly <- function(p) {
>   y = (p$a * p$x^2) / (p$b^2 + p$x^2)
>   return(y)
> }
> X = 1:60
> A = 3261
> B = 10

> X = c(8,   24,   39,   63,   89,  115,  153,  196,  242,  287,  344,
> 408,  473, 546,  619,  705,  794,  891,  999, 1096, 1242, 1363, 1506,
> 1648, 1753, 1851, 1987, 2101, 2219, 2328, 2425, 2575, 2646, 2698,
> 2727, 2771, 2818, 2853, 2895, 2926, 2964, 2995, 3025, 3053, 3080,
> 3102, 3119, 3141, 3152, 3159, 3172, 3182, 3196, 3209, 3220, 3231,
> 3239, 3246, 3252, 3261)

You are correct to return a vector of residuals whose sum of squares is to
be minimised, but X seems to be an independent variable, not a parameter
to optimize, so it shouldn't be passed as such. Instead, you can either
close over X:

X <- c(...)
holly <- function(p) (p$a * X^2) / (p$b^2 + X^2)
# function holly now "contains" the vector X

or pass X as an argument that nls.lm will pass to your function:

holly <- function(p, X) (p$a * X^2) / (p$b^2 + X^2)
# nls.lm will pass the X argument to function holly
O <- nls.lm(par = list(a = 3261, b = 10), fn = holly, X = X)
summary(O)
# Parameters:
#    Estimate Std. Error   t value Pr(>|t|)    
# a 3.090e-16  4.102e-17 7.533e+00 3.72e-10 ***
# b 1.000e+01  1.525e-08 6.558e+08  < 2e-16 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 3.107e-16 on 58 degrees of freedom
# Number of iterations to termination: 2
# Reason for termination: Relative error between `par' and the solution
# is at most `ptol'.

--
Best regards,
Ivan


Re: argument "x" is missing in minpack.lm

Stephen Ellison
Ivan Krylov [[hidden email]] said:
>  Instead you can either close over X:
>
> X <- c(...)
> holly <- function(p) (p$a * X^2) / (p$b^2 + X^2)
> # function holly now "contains" the vector X

That would not be an accurate statement as written.
The function only contains an unevaluated call referencing X; not the vector X.
If X is not defined inside the function or its arguments, scoping rules take over and R goes looking in the function's environment, using the first thing called "X" that it finds.

So
X<-1:5
h <- function(p=2) p*X
h()
# works, but
X <- pi
h()
# Not the same answer. If the function 'contained' the vector, the result would be 2*(1:5) as above.
# This is why it's not wise to rely on scoping rules in functions, unless you _completely_ control the environment.

# and
rm(X)
h()
# returns an error because X is no longer defined, in the function or out of it, even though X was defined at the time h() was defined.

BUT

X <- pi/2
fh <- function(p=2) {
        X <- 7
        h(p)
}
fh()
# returns pi and not 14, because h() was created in the global environment and stays bound to it, so when h(p) is evaluated inside fh(), X is found in the global environment (where it is pi/2), not in fh()'s local environment.

Moral: if you want to be safe, pass it as an argument.
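
For instance (a tiny sketch; h2 is a hypothetical name):

h2 <- function(p = 2, X) p * X
h2(X = 1:5)
# always returns 2 * (1:5), no matter what any global X happens to be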




Re: argument "x" is missing in minpack.lm

Luigi
In reply to this post by Ivan Krylov
Thank you,
I got this:
```
holly = function(p, x) {
  y = (p$a * x^2) / (p$b^2 + x^2)
  return(y)
}
A = 3261
B = 10
K = CH$Cum_Dead[1:60]
X = c(8,   24,   39,   63,   89,  115,  153,  196,  242,  287,  344,  408,  473,
      546,  619,  705,  794,  891,  999, 1096, 1242, 1363, 1506, 1648, 1753,
      1851, 1987, 2101, 2219, 2328, 2425, 2575, 2646, 2698, 2727, 2771, 2818,
      2853, 2895, 2926, 2964, 2995, 3025, 3053, 3080, 3102, 3119, 3141, 3152,
      3159, 3172, 3182, 3196, 3209, 3220, 3231, 3239, 3246, 3252, 3261)
O <- nls.lm(par = list(a = A, b = B), fn = holly, x = X)
> summary(O)

Parameters:
   Estimate Std. Error   t value Pr(>|t|)
a 3.090e-16  4.102e-17 7.533e+00 3.72e-10 ***
b 1.000e+01  1.525e-08 6.558e+08  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.107e-16 on 58 degrees of freedom
Number of iterations to termination: 2
Reason for termination: Relative error between `par' and the solution
is at most `ptol'.
```
Is this correct?
If yes, then the case is closed, thank you.
However, the optimization actually got a worse outcome than the original
eyeball estimate:
```
actual <- c(8,   24,   39,   63,   89,  115,  153,  196,  242,  287,
            344,  408,  473,
            546,  619,  705,  794,  891,  999, 1096, 1242, 1363,
            1506, 1648, 1753,
            1851, 1987, 2101, 2219, 2328, 2425, 2575, 2646,
            2698, 2727, 2771, 2818,
            2853, 2895, 2926, 2964, 2995, 3025, 3053, 3080,
            3102, 3119, 3141, 3152,
            3159, 3172, 3182, 3196, 3209, 3220, 3231, 3239,
            3246, 3252, 3261)
# a = 3261, b = 10
raw_estim <- c(32.28713,  125.42308,  269.25688,  449.79310,  652.20000,
               863.20588, 1072.40940, 1272.58537,
               1459.34254, 1630.50000, 1785.43439, 1924.52459, 2048.73234,
               2159.31081, 2257.61538, 2344.98876,
               2422.69666, 2491.89623, 2553.62473, 2608.80000, 2658.22736,
               2702.60959, 2742.55803, 2778.60355,
               2811.20690, 2840.76804, 2867.63450, 2892.10860, 2914.45377,
               2934.90000, 2953.64844, 2970.87544,
               2986.73591, 3001.36624, 3014.88679, 3027.40401, 3039.01225,
               3049.79534, 3059.82788, 3069.17647,
               3077.90062, 3086.05365, 3093.68343, 3100.83301, 3107.54118,
               3113.84296, 3119.77003, 3125.35108,
               3130.61216, 3135.57692, 3140.26694, 3144.70185, 3148.89962,
               3152.87666, 3156.64800, 3160.22744,
               3163.62765, 3166.86028, 3169.93605, 3172.86486)
# a = 3.090e-16, b = 1.000e+01
opt_estim <- c(3.059406e-18, 1.188462e-17, 2.551376e-17, 4.262069e-17,
6.180000e-17,
               8.179412e-17, 1.016174e-16,
1.205854e-16, 1.382818e-16, 1.545000e-16, 1.691810e-16, 1.823607e-16,
1.941301e-16, 2.046081e-16,
2.139231e-16, 2.222022e-16, 2.295656e-16, 2.361226e-16, 2.419718e-16,
2.472000e-16, 2.518835e-16,
2.560890e-16, 2.598744e-16, 2.632899e-16, 2.663793e-16, 2.691804e-16,
2.717262e-16, 2.740452e-16,
2.761626e-16, 2.781000e-16, 2.798765e-16, 2.815089e-16, 2.830118e-16,
2.843981e-16, 2.856792e-16,
2.868653e-16, 2.879653e-16, 2.889870e-16, 2.899377e-16, 2.908235e-16,
2.916502e-16, 2.924227e-16,
2.931457e-16, 2.938232e-16, 2.944588e-16, 2.950560e-16, 2.956176e-16,
2.961464e-16, 2.966449e-16,
2.971154e-16, 2.975598e-16, 2.979800e-16, 2.983778e-16, 2.987546e-16,
2.991120e-16, 2.994512e-16,
2.997734e-16, 3.000797e-16, 3.003711e-16, 3.006486e-16)
plot(1:60, actual, lty = 1 , type = "l", lwd = 2,
     xlab = "Index", ylab = "Values")
points(1:60, raw_estim, lty = 2, type = "l")
points(1:60, opt_estim, lty = 3, type = "l")
legend("right",
       legend = c("Actual values", "Raw estimate", "Optimized values"),
       lty = c(1, 2, 3), lwd = c(2, 1,1))
```
Is that normal?


--
Best regards,
Luigi


Re: argument "x" is missing in minpack.lm

Ivan Krylov
On Wed, 1 Jul 2020 14:31:19 +0200
Luigi Marongiu <[hidden email]> wrote:

>  the optimization actually got a worse outcome than the original
>eyeball estimation

Could you elaborate on the function you are trying to fit to your data?
nls.lm takes a function returning a vector of residuals, that is,

fn <- function(parameters, input, actual.output) {
 calculate.output(parameters, input) - actual.output
}

...which means that you need a vector of input values and a vector of
output values of the same length. In your example,

> y = (p$a * x^2) / (p$b^2 + x^2)

a and b are parameters. x seems to be the dependent variable (i.e.
output of the process) and not the independent variable (i.e. input of
the model function) like I had initially assumed. What is the input of
your model function?
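
For instance -- purely an assumption on my part -- if the input were simply
the observation index 1:60 and obs were your vector of cumulative counts,
the residual function and the call would look like:

holling.resid <- function(p, x, y) (p$a * x^2) / (p$b^2 + x^2) - y
nls.lm(par = list(a = 3261, b = 10), fn = holling.resid, x = 1:60, y = obs)

But only you can tell what the real independent variable is.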

--
Best regards,
Ivan


Re: argument "x" is missing in minpack.lm

Luigi
In reply to this post by Stephen Ellison
Addendum.
I have found the Gompertz function even better than the Holling type III
because it gives a more pronounced S-shaped profile. However, the
optimization is bad in this case too:
```
gomp = function(p, x) {
  y = p$a * exp(-p$b * exp(-p$c * x))
  return(y)
}
A = 3261
B = 10
C = 1
X = c(8,   24,   39,   63,   89,  115,  153,  196,  242,  287,  344,  408,  473,
      546,  619,  705,  794,  891,  999, 1096, 1242, 1363, 1506, 1648, 1753,
      1851, 1987, 2101, 2219, 2328, 2425, 2575, 2646, 2698, 2727, 2771, 2818,
      2853, 2895, 2926, 2964, 2995, 3025, 3053, 3080, 3102, 3119, 3141, 3152,
      3159, 3172, 3182, 3196, 3209, 3220, 3231, 3239, 3246, 3252, 3261)
W = nls.lm(par = list(a = A, b = B, c = C), fn = gomp, x = X)
W
> Nonlinear regression via the Levenberg-Marquardt algorithm
parameter estimates: 2.81739569520133e-17, 20.0000002056654, 0.100000000717244
residual sum-of-squares: 4.556e-32
reason terminated: Relative error between `par' and the solution is at
most `ptol'.
summary(W)

> Parameters:
   Estimate Std. Error   t value Pr(>|t|)
a 2.817e-17  3.764e-18 7.485e+00 4.94e-10 ***
b 2.000e+01  1.872e-08 1.068e+09  < 2e-16 ***
c 1.000e-01  2.924e-11 3.420e+09  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.827e-17 on 57 degrees of freedom
Number of iterations to termination: 2
Reason for termination: Relative error between `par' and the solution
is at most `ptol'.
```
but eyeballing gives:
```
actual <- c(8,   24,   39,   63,   89,  115,  153,  196,  242,  287,
            344,  408,  473,
            546,  619,  705,  794,  891,  999, 1096, 1242, 1363,
            1506, 1648, 1753,
            1851, 1987, 2101, 2219, 2328, 2425, 2575, 2646,
            2698, 2727, 2771, 2818,
            2853, 2895, 2926, 2964, 2995, 3025, 3053, 3080,
            3102, 3119, 3141, 3152,
            3159, 3172, 3182, 3196, 3209, 3220, 3231, 3239,
            3246, 3252, 3261)
# a = 3261, b = 20, c = 0.1
GOMP = c(4.508508e-05, 2.523166e-04, 1.198631e-03, 4.909360e-03, 1.758298e-02,
         5.577422e-02, 1.585133e-01,
         4.078768e-01, 9.592495e-01, 2.079652e+00, 4.188604e+00,
7.892437e+00, 1.400134e+01, 2.351997e+01,
         3.760684e+01, 5.750425e+01, 8.444654e+01, 1.195592e+02,
1.637624e+02, 2.176925e+02, 2.816487e+02,
         3.555714e+02, 4.390496e+02, 5.313549e+02, 6.314945e+02,
7.382759e+02, 8.503763e+02, 9.664098e+02,
         1.084989e+03, 1.204775e+03, 1.324520e+03, 1.443095e+03,
1.559508e+03, 1.672916e+03, 1.782624e+03,
         1.888078e+03, 1.988863e+03, 2.084686e+03, 2.175363e+03,
2.260805e+03, 2.341005e+03, 2.416022e+03,
         2.485969e+03, 2.551004e+03, 2.611315e+03, 2.667115e+03,
2.718631e+03, 2.766102e+03, 2.809769e+03,
         2.849874e+03, 2.886657e+03, 2.920347e+03, 2.951171e+03,
2.979341e+03, 3.005063e+03, 3.028528e+03,
         3.049918e+03, 3.069402e+03, 3.087140e+03, 3.103278e+03)
# a = 2.817e-17, b = 2.000e+01, c = 1.000e-01
GP = c(3.894654e-25, 2.179626e-24, 1.035432e-23, 4.240928e-23, 1.518898e-22,
         4.818030e-22, 1.369310e-21,
         3.523425e-21, 8.286434e-21, 1.796498e-20, 3.618306e-20,
6.817846e-20, 1.209500e-19, 2.031762e-19,
         3.248650e-19, 4.967478e-19, 7.294876e-19, 1.032806e-18,
1.414654e-18, 1.880527e-18, 2.433010e-18,
         3.071587e-18, 3.792710e-18, 4.590085e-18, 5.455136e-18,
6.377563e-18, 7.345937e-18, 8.348287e-18,
         9.372625e-18, 1.040739e-17, 1.144180e-17, 1.246611e-17,
1.347174e-17, 1.445141e-17, 1.539911e-17,
         1.631008e-17, 1.718071e-17, 1.800847e-17, 1.879178e-17,
1.952986e-17, 2.022267e-17, 2.087070e-17,
         2.147493e-17, 2.203673e-17, 2.255773e-17, 2.303975e-17,
2.348477e-17, 2.389484e-17, 2.427206e-17,
         2.461851e-17, 2.493625e-17, 2.522729e-17, 2.549356e-17,
2.573691e-17, 2.595910e-17, 2.616180e-17,
         2.634657e-17, 2.651489e-17, 2.666812e-17, 2.680752e-17)
plot(1:60, actual, lty = 1 , type = "l", lwd = 2,
     xlab = "Index", ylab = "Values")
points(1:60, GOMP, lty = 2, type = "l")
points(1:60, GP, lty = 3, type = "l")
legend("right",
       legend = c("Actual values", "Raw estimate", "Optimized values"),
       lty = c(1, 2, 3), lwd = c(2, 1,1))
```


Re: argument "x" is missing in minpack.lm

Ivan Krylov
In reply to this post by Ivan Krylov
On Wed, 1 Jul 2020 15:24:35 +0200
Luigi Marongiu <[hidden email]> wrote:

> You are right: the vector X is actually Y -- the response I would like
> to fit the curve to. I understood I should feed nls.lm a function that
> describes the data (Holling or Gompertz), the initial parameters, and
> the actual values (Y). In return, I get the optimized values for the
> parameters. But when I re-run the function that describes the data with
> the optimized parameters, I get values close to zero.

The function to be minimized should have access to all three: the
parameters to optimize, the independent variable values, and the dependent
variable values. Only then is there enough information to compute the
residuals and minimise their sum of squares.

holly <- function(p, x, y) (p$a * x^2) / (p$b^2 + x^2) - y
#               residuals =      y.hat(x, params)      - y.actual
O <- nls.lm(par = list(a = 3261, b = 10), fn = holly, x = X, y = Y)
summary(O)
# Parameters:
#    Estimate Std. Error t value Pr(>|t|)
# a 4380.4979   106.8144   41.01   <2e-16 ***
# b   30.3779     0.9995   30.39   <2e-16 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 157.5 on 58 degrees of freedom
# Number of iterations to termination: 7
# Reason for termination: Relative error in the sum of squares is at
# most `ftol'.

In our previous examples we ended up asking R to minimise y.hat(x)
calculated at the wrong x values instead of minimising the residuals.

--
Best regards,
Ivan


Re: argument "x" is missing in minpack.lm

Luigi
Got it, thanks!
Now I get:
```
actual <- c(8,   24,   39,   63,   89,  115,  153,  196,  242,  287,
            344,  408,  473,
            546,  619,  705,  794,  891,  999, 1096, 1242, 1363,
            1506, 1648, 1753,
            1851, 1987, 2101, 2219, 2328, 2425, 2575, 2646,
            2698, 2727, 2771, 2818,
            2853, 2895, 2926, 2964, 2995, 3025, 3053, 3080,
            3102, 3119, 3141, 3152,
            3159, 3172, 3182, 3196, 3209, 3220, 3231, 3239,
            3246, 3252, 3261)
# a = 3261, b = 10
raw_estim <- c(32.28713,  125.42308,  269.25688,  449.79310,  652.20000,
               863.20588, 1072.40940, 1272.58537,
               1459.34254, 1630.50000, 1785.43439, 1924.52459, 2048.73234,
               2159.31081, 2257.61538, 2344.98876,
               2422.69666, 2491.89623, 2553.62473, 2608.80000, 2658.22736,
               2702.60959, 2742.55803, 2778.60355,
               2811.20690, 2840.76804, 2867.63450, 2892.10860, 2914.45377,
               2934.90000, 2953.64844, 2970.87544,
               2986.73591, 3001.36624, 3014.88679, 3027.40401, 3039.01225,
               3049.79534, 3059.82788, 3069.17647,
               3077.90062, 3086.05365, 3093.68343, 3100.83301, 3107.54118,
               3113.84296, 3119.77003, 3125.35108,
               3130.61216, 3135.57692, 3140.26694, 3144.70185, 3148.89962,
               3152.87666, 3156.64800, 3160.22744,
               3163.62765, 3166.86028, 3169.93605, 3172.86486)
# a = 3.090e-16, b = 1.000e+01
opt_estim_old <- c(3.059406e-18, 1.188462e-17, 2.551376e-17,
4.262069e-17, 6.180000e-17,
               8.179412e-17, 1.016174e-16,
1.205854e-16, 1.382818e-16, 1.545000e-16, 1.691810e-16, 1.823607e-16,
1.941301e-16, 2.046081e-16,
2.139231e-16, 2.222022e-16, 2.295656e-16, 2.361226e-16, 2.419718e-16,
2.472000e-16, 2.518835e-16,
2.560890e-16, 2.598744e-16, 2.632899e-16, 2.663793e-16, 2.691804e-16,
2.717262e-16, 2.740452e-16,
2.761626e-16, 2.781000e-16, 2.798765e-16, 2.815089e-16, 2.830118e-16,
2.843981e-16, 2.856792e-16,
2.868653e-16, 2.879653e-16, 2.889870e-16, 2.899377e-16, 2.908235e-16,
2.916502e-16, 2.924227e-16,
2.931457e-16, 2.938232e-16, 2.944588e-16, 2.950560e-16, 2.956176e-16,
2.961464e-16, 2.966449e-16,
2.971154e-16, 2.975598e-16, 2.979800e-16, 2.983778e-16, 2.987546e-16,
2.991120e-16, 2.994512e-16,
2.997734e-16, 3.000797e-16, 3.003711e-16, 3.006486e-16)
# a =  4380.4979, b = 30.3779
opt_estim <- c(4.741739,   18.905561,   42.309262,   74.655637,  115.541787,
               164.471381,  220.869196,
284.097173,  353.471198,  428.277856,  507.790487,  591.283989,
678.047947,  767.397828,
858.684086,  951.299179, 1044.682566, 1138.323858, 1231.764323,
1324.596988 , 1416.465585,
1507.062590, 1596.126574, 1683.439081, 1768.821202, 1852.130003,
1933.254918, 2012.114210,
2088.651563, 2162.832870, 2234.643232, 2304.084201, 2371.171268,
2435.931609, 2498.402055,
2558.627308, 2616.658366, 2672.551143, 2726.365284, 2778.163130,
2828.008847, 2875.967677,
2922.105311, 2966.487363, 3009.178936, 3050.244270, 3089.746448,
3127.747177, 3164.306598,
3199.483163, 3233.333529, 3265.912492, 3297.272946, 3327.465861,
3356.540283, 3384.543344,
3411.520287, 3437.514499, 3462.567553, 3486.719252)

plot(1:60, actual, lty = 1 , type = "l", lwd = 2,
     xlab = "Index", ylab = "Values")
points(1:60, raw_estim, lty = 2, type = "l")
points(1:60, opt_estim, lty = 3, type = "l")
legend("bottomright",
       legend = c("Actual values", "Raw estimate", "Optimized values"),
       lty = c(1, 2, 3), lwd = c(2, 1,1))
```

And it works even better with the Gompertz function:
```
gomp = function(p, x, y) (p$a * exp(-p$b * exp(-p$c * x))) - y
actual <- c(8,   24,   39,   63,   89,  115,  153,  196,  242,  287,
            344,  408,  473,
            546,  619,  705,  794,  891,  999, 1096, 1242, 1363,
            1506, 1648, 1753,
            1851, 1987, 2101, 2219, 2328, 2425, 2575, 2646,
            2698, 2727, 2771, 2818,
            2853, 2895, 2926, 2964, 2995, 3025, 3053, 3080,
            3102, 3119, 3141, 3152,
            3159, 3172, 3182, 3196, 3209, 3220, 3231, 3239,
            3246, 3252, 3261)
# a = 3261, b = 20, c = 0.1
GOMP = c(4.508508e-05, 2.523166e-04, 1.198631e-03, 4.909360e-03, 1.758298e-02,
         5.577422e-02, 1.585133e-01,
         4.078768e-01, 9.592495e-01, 2.079652e+00, 4.188604e+00,
7.892437e+00, 1.400134e+01, 2.351997e+01,
         3.760684e+01, 5.750425e+01, 8.444654e+01, 1.195592e+02,
1.637624e+02, 2.176925e+02, 2.816487e+02,
         3.555714e+02, 4.390496e+02, 5.313549e+02, 6.314945e+02,
7.382759e+02, 8.503763e+02, 9.664098e+02,
         1.084989e+03, 1.204775e+03, 1.324520e+03, 1.443095e+03,
1.559508e+03, 1.672916e+03, 1.782624e+03,
         1.888078e+03, 1.988863e+03, 2.084686e+03, 2.175363e+03,
2.260805e+03, 2.341005e+03, 2.416022e+03,
         2.485969e+03, 2.551004e+03, 2.611315e+03, 2.667115e+03,
2.718631e+03, 2.766102e+03, 2.809769e+03,
         2.849874e+03, 2.886657e+03, 2.920347e+03, 2.951171e+03,
2.979341e+03, 3.005063e+03, 3.028528e+03,
         3.049918e+03, 3.069402e+03, 3.087140e+03, 3.103278e+03)
# a = 3.344e+03, b = 7.715e+00, c = 1.007e-01
GP = c(3.123603,    6.093680,   11.150677,   19.256792,   31.559926,
49.332695,
       73.883838,
106.453532,  148.107305,  199.643054,  261.522677,  333.835070,
416.291985,  508.253731,
608.778470,  716.687199,  830.636345,  949.190754, 1070.891363,
1194.313689, 1318.114912,
1441.068876, 1562.089431, 1680.243298, 1794.754104, 1904.999383,
2010.502340, 2110.919990,
2206.029092, 2295.711021, 2379.936466, 2458.750617, 2532.259301,
2600.616339, 2664.012281,
2722.664577, 2776.809146, 2826.693281, 2872.569775, 2914.692156,
2953.310891, 2988.670438,
3021.007015, 3050.546986, 3077.505743, 3102.087011, 3124.482483,
3144.871725, 3163.422286,
3180.289965, 3195.619209, 3209.543577, 3222.186275, 3233.660724,
3244.071144, 3253.513139,
3262.074288, 3269.834707, 3276.867604, 3283.239800)
plot(1:60, actual, lty = 1 , type = "l", lwd = 2,
     xlab = "Index", ylab = "Values")
points(1:60, GOMP, lty = 2, type = "l")
points(1:60, GP, lty = 3, type = "l")
legend("right",
       legend = c("Actual values", "Raw estimate", "Optimized values"),
       lty = c(1, 2, 3), lwd = c(2, 1,1))
```
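
For reference, the optimized parameters used above should come from a call
along these lines (a sketch on my part, assuming the day index 1:60 as x,
the actual counts as y, and the eyeball values as starting parameters):
```
W <- nls.lm(par = list(a = 3261, b = 20, c = 0.1), fn = gomp,
            x = 1:60, y = actual)
summary(W)
```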
Thank you

--
Best regards,
Luigi
