

Hi,
I have some data that I'm trying to fit with a double exponential model:
data.frame(Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91, 989.05,
1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04, 23.46,
9.72, 97.92, 71.44, 44.52, 24.44, 15.26)), and the formula of the double
exponential is: exp(b0*exp(b1*x^th)).
I failed to guess the initial parameter values, so I tried a method for
finding starting values from Nonlinear Regression with R (pp. 25-27):
> cl <- data.frame(Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91,
989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
+ Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04, 23.46,
9.72, 97.92, 71.44, 44.52, 24.44, 15.26))
> expFct <- function(Area, b0, b1, th) {exp(b0*exp(b1*Area^th))}
> grid.Disperse <- expand.grid(list(b0 = seq(0.01, 4, by = 0.01), th =
c(0.02), b1 = seq(0.01, 4, by = 0.01)))
> Disperse.m2a <- nls2(Retention ~ expFct(Area, b0, b1, th), data = cl, start
= grid.Disperse, algorithm = "brute-force")
> Disperse.m2a
Nonlinear regression model
model: Retention ~ expFct(Area, b0, th, b1)
data: cl
b0 th b1
3.82 0.02 0.01
residual sum-of-squares: 13596
Number of iterations to convergence: 160000
Achieved convergence tolerance: NA
I got no error, so I then used the output as starting values for nls2():
> nls.m2 <- nls2(Retention ~ expFct(Area, b0, b1, th), data = cl, start =
list(b0 = 3.82, b1 = 0.02, th = 0.01))
Error in (function (formula, data = parent.frame(), start, control =
nls.control(), :
singular gradient
Why? Did I do something wrong or misunderstand something?
Later, I found another method, from Modern Applied Statistics with S (pp.
216-217):
> negexp <- selfStart(model = ~ exp(b0*exp(b1*x^th)), initial =
negexp.SSival, parameters = c("b0", "b1", "th"),
+ template = function(x, b0, b1, th) {})
> Disperse.ss <- nls(Retention ~ negexp(Area, b0, b1, th), data = cl, trace =
T)
b0 b1 th
4.208763 144.205455 1035.324595
Error in qr.default(.swts * attr(rhs, "gradient")) :
NA/NaN/Inf in foreign function call (arg 1)
The error happened again. How can I fix it? I am desperate.
Best regards,
Pinglei Gao
______________________________________________
[hidden email] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Here are some things to try. Maybe divide Area by 1000 and retention
by 100. Try plotting the data and superimposing the line that
corresponds to the 'fit' from nls2. See if you can correct it with
some careful guesses.
Getting suitable starting parameters for nonlinear modeling is one of
the black arts of statistical fitting. Good luck! And don't forget to
check for sensitivity.
Andrew
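A minimal sketch of these suggestions in R (my code, not Andrew's; the parameter values are the ones the brute-force nls2 run reported):

```r
# Data from the original post
cl <- data.frame(
  Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91, 989.05,
           1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
  Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04, 23.46,
                9.72, 97.92, 71.44, 44.52, 24.44, 15.26))

expFct <- function(Area, b0, b1, th) exp(b0 * exp(b1 * Area^th))

# Rescaled copy, as suggested: Area in thousands, Retention as a fraction
cl2 <- transform(cl, Area = Area / 1000, Retention = Retention / 100)

# Superimpose the brute-force 'fit' (b0 = 3.82, b1 = 0.01, th = 0.02)
plot(Retention ~ Area, data = cl)
curve(expFct(x, b0 = 3.82, b1 = 0.01, th = 0.02), add = TRUE, col = "red")
```

With th = 0.02 the fitted curve is almost constant over this Area range, which is a quick visual confirmation that the grid-search result is not a usable fit.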
On 9 October 2016 at 22:21, Pinglei Gao <[hidden email]> wrote:
> [...]

Andrew Robinson
Deputy Director, CEBRA, School of Biosciences
Reader & Associate Professor in Applied Statistics  Tel: (+61) 0403 138 955
School of Mathematics and Statistics                Fax: +61 3 8344 4599
University of Melbourne, VIC 3010 Australia
Email: [hidden email]
Website: http://www.ms.unimelb.edu.au/~andrewpr
MSME: http://www.crcpress.com/product/isbn/9781439858028
FAwR: http://www.ms.unimelb.edu.au/~andrewpr/FAwR/
SPuR: http://www.ms.unimelb.edu.au/spuRs/


Well... (inline -- and I hope this isn't homework!)
On Sun, Oct 9, 2016 at 3:05 PM, Andrew Robinson
<[hidden email]> wrote:
> Here are some things to try. Maybe divide Area by 1000 and retention
> by 100. Try plotting the data and superimposing the line that
> corresponds to the 'fit' from nls2. See if you can correct it with
> some careful guesses.
>
> Getting suitable starting parameters for nonlinear modeling is one of
> the black arts of statistical fitting. ...
>
> Andrew
True. But it's usually worthwhile thinking about the math a bit before guessing.
Note that the model can be linearized to:

log(log(Retention)) = log(b0) + b1*Area^th

so a plot of log(log(Retention)) vs. Area may be informative and useful
for finding starting values; e.g., for a grid of th's, do linear
regression fits.

However, when I look at that plot, it seems pretty linear with a
negative slope. This suggests that you may have an over-parametrization
problem, i.e. fix th = 1 and use the b0 and b1 from the above
regression as starting values.

Do note that this strategy isn't foolproof, as it ignores that the
error term is additive in the transformed metric rather than in the
original one. This can sometimes mislead. But it is just a heuristic.
Cheers,
Bert
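Bert's grid-of-th idea can be sketched as follows (my own code, with an arbitrary grid; note that the intercept of the log-log regression estimates log(b0), so it must be exponentiated to get a start for b0):

```r
cl <- data.frame(
  Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91, 989.05,
           1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
  Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04, 23.46,
                9.72, 97.92, 71.44, 44.52, 24.44, 15.26))

# For each trial th, regress log(log(Retention)) on Area^th and keep the RSS
ths <- seq(0.2, 2, by = 0.1)
rss <- sapply(ths, function(th)
  deviance(lm(log(log(Retention)) ~ I(Area^th), data = cl)))
th0 <- ths[which.min(rss)]

# Starting values from the best fit: intercept estimates log(b0), slope b1
fit <- lm(log(log(Retention)) ~ I(Area^th0), data = cl)
c(b0 = exp(coef(fit)[[1]]), b1 = coef(fit)[[2]], th = th0)
```

Because Retention decays with Area, the slope (and hence the b1 start) comes out negative, matching the sign Peter discusses below.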


> On 10 Oct 2016, at 00:40, Bert Gunter <[hidden email]> wrote:
>
> Well... (inline  and I hope this isn't homework!)
>
Pretty much the same as I thought.

Fixing th = 0.02 in the grid search looks wrong. Bert's plot is pretty linear, so th = 1 is a good guesstimate. There's a slight curvature, but to reduce it you would increase th, not decrease it. Running the regression, as Bert suggests, indicates that b0 = 5.16 and b1 = -0.00024 could work as reasonable starting values. Notice that the grid search had "b1 = seq(0.01, 4, by = 0.01)", which is wrong in both sign and scale.

Andrew's suggestion of dividing Retention by 100 is tempting, since it looks like a percentage, but that would make all Y values less than 1, and the double exponential function as written always takes values greater than 1. (It is conceivable that the model itself is wrong, though. E.g. it could be that Retention on a scale from 0 to 1 could be modeled as exp(-something), but we really have no idea of the context here.)

(If this was in fact homework, you should now go and write a proper selfStart initializer routine for this model. Even if it isn't homework, you do need to study the text again, because you have clearly not understood how self-starting models work.)
pd
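A concrete version of this advice (my sketch, not pd's code): fix th = 1 and hand nls() the linearization-based starting values.

```r
cl <- data.frame(
  Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91, 989.05,
           1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
  Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04, 23.46,
                9.72, 97.92, 71.44, 44.52, 24.44, 15.26))

# th fixed at 1; starting values from the log-log regression described above
fit1 <- nls(Retention ~ exp(b0 * exp(b1 * Area)), data = cl,
            start = list(b0 = 5.16, b1 = -0.00024))
summary(fit1)
```

From these starts the two-parameter fit should converge; whether th is worth freeing afterwards can then be judged by comparing residual sums of squares.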

Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: [hidden email] Priv: [hidden email]


I didn't try very hard, but got a solution from .1, 1, .1 with nlxb() from nlmrt. It took a lot
of iterations and looks to be pretty ill-conditioned. Note that nlmrt uses analytic derivatives if it
can, and a Marquardt method. It is designed to be a pit bull: tenacious, not fast.
I'm working on a replacement for this and nls(), but it will be a while. However, I welcome
short scripts like this as tests. My script is below.
Best, JN
cl <- data.frame(Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91,
989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04, 23.46,
9.72, 97.92, 71.44, 44.52, 24.44, 15.26))
expFct <- function(Area, b0, b1, th) {exp(b0*exp(b1*Area^th))}
expf2 <- "Retention ~ exp(b0*exp(b1*Area^th))"
# grid.Disperse <- expand.grid(list(b0 = seq(0.01, 4, by = 0.01), th =
#   c(0.02), b1 = seq(0.01, 4, by = 0.01)))
# Disperse.m2a <- nls2(Retention ~ expFct(Area, b0, b1, th), data = cl, start
#   = grid.Disperse, algorithm = "brute-force")
# Disperse.m2a
library(nlmrt)
test <- nlxb(expf2, start = c(b0 = .1, b1 = 1, th = .1), trace = TRUE, data = cl)
On 16-10-09 07:40 PM, peter dalgaard wrote:
> [...]


If you are not tied to that model the SSasymp() model in R could be
considered and is easy to fit:
# to plot points in order
o <- order(cl$Area)
cl.o <- cl[o, ]
fm <- nls(Retention ~ SSasymp(Area, Asym, R0, lrc), cl.o)
summary(fm)
plot(Retention ~ Area, cl.o)
lines(fitted(fm) ~ Area, cl.o, col = "red")


Despite nlmrt "solving" the OP's problem, I think Gabor's suggestion likely gives a more sensible
approach to the underlying modelling problem.
It is, of course, sometimes important to fit a particular model, in which case nls2 and nlmrt are set up to grind away.
And hopefully the follow-up to nlmrt I'm working on will have enough capability in getting analytic
derivatives to work for a wider class of models. Note that the functional approaches in nlmrt and
minpack.lm allow users to provide derivatives. Too many users think numerical approximations are a
panacea, but my experience is that most problems benefit from very accurate derivatives, of which
analytic expressions are generally the best.
JN
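As an aside, base R can already produce analytic derivatives for a model like this one; the sketch below (mine, not JN's) uses deriv() to attach a symbolic gradient that nls() will use instead of finite differences.

```r
# Symbolic gradient of the double exponential w.r.t. b0, b1 and th
g <- deriv(~ exp(b0 * exp(b1 * Area^th)), c("b0", "b1", "th"),
           function(Area, b0, b1, th) {})

# g() returns model values carrying a "gradient" attribute
v <- g(Area = 1000, b0 = 5, b1 = -0.001, th = 1)
attr(v, "gradient")

# nls() picks the gradient up automatically, e.g.
# nls(Retention ~ g(Area, b0, b1, th), data = cl,
#     start = list(b0 = 5.16, b1 = -0.00024, th = 1))
```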
On 16-10-09 09:24 PM, Gabor Grothendieck wrote:
> [...]
>


Thanks very much for taking the time on this. Your assistance is very much
appreciated, but I'm afraid I still have a question to bother you with.
I am working on a paper about weed seed dispersal by harvest machines. After
a review of the relevant literature, I found three general models for seed
dispersal and retention. All models were fitted by nonlinear least squares
via the nls function in R. The model that best describes the data will be
chosen by comparing Akaike Information Criterion (AIC) values: the model with
the lowest AIC score will be selected.
The first general model incorporates simple exponential and power exponential
functions, and its starting values were easy to find. But I am stuck with
model 2 (discussed above) and with model 3, which has the form:
Retention = (b0*Area^th+1)^b1. Model 3 is totally different from the others.
I tried the measures you mentioned, but I still can't find suitable starting
values, because of my limited knowledge. I hope you can do me the favor
again. I can send you the draft when I finish the paper, if necessary. Maybe
you can give me some constructive suggestions about the statistics and model
construction, and I could name you as a coauthor for your contributions.
Best,
Pinglei Gao
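For model 3, one generic way to hunt for starting values (a sketch under my own assumptions; the grid below is arbitrary): taking logs gives log(Retention) = b1*log(b0*Area^th + 1), which is linear in b1 once b0 and th are fixed, so b1 has a closed-form least-squares value at every point of a (b0, th) grid.

```r
cl <- data.frame(
  Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91, 989.05,
           1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
  Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04, 23.46,
                9.72, 97.92, 71.44, 44.52, 24.44, 15.26))

# For fixed (b0, th), the least-squares slope through the origin of
# log(Retention) on z = log(b0*Area^th + 1) gives b1 in closed form
score <- function(b0, th) {
  z  <- log(b0 * cl$Area^th + 1)
  b1 <- sum(z * log(cl$Retention)) / sum(z^2)
  c(b1 = b1, rss = sum((cl$Retention - (b0 * cl$Area^th + 1)^b1)^2))
}

grid <- expand.grid(b0 = 10^(-6:1), th = seq(0.5, 3, by = 0.5))
res  <- cbind(grid, t(mapply(score, grid$b0, grid$th)))
res[which.min(res$rss), ]  # candidate starting values for nls()/nlxb()
```

One caveat: with b0 > 0 the base exceeds 1 and log(Retention) is positive here, so this construction yields b1 > 0 and an increasing curve, while the data decay; that suggests checking the intended signs (or the Retention scale) in model 3 before trusting any starts.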
-----Original Message-----
From: peter dalgaard [mailto:[hidden email]]
Sent: 10 October 2016 07:41
To: Pinglei Gao
Cc: Andrew Robinson; R help ([hidden email]); Bert Gunter
Subject: Re: [R] Finding starting values for the parameters using nls() or
nls2()
> On 10 Oct 2016, at 00:40 , Bert Gunter < [hidden email]> wrote:
>
> Well... (inline  and I hope this isn't homework!)
>
Pretty much same as I thought.
Fixing th=0.02 in the grid search looks wrong. Bert's plot is pretty linear,
so th=1 is a good guesstimate. There's a slight curvature but to reduce it,
you would increase th, not decrease it. Running the regression, as Bert
suggests, indicates that b0=5.16 and b1= 0.00024 could work as reasonable
starting values. Notice that the grid search had "b1 = seq(0.01, 4, by =
0.01)" which is wrong in both sign and scale.
Andrew's suggestion of dividing Retention by 100 is tempting, since it looks
like a percentage, but that would make all Y values less than 1 and the
double exponential function as written has values that are always bigger
than 1. (It is conceivable that the model itself is wrong, though. E.g. it
could be that Retention on a scale from 0 to 1 could be modeled as
exp(something), but we really have no idea of the context here.)
(If this was in fact homework, you should now go and write a proper
selfStart initializer routine for this model. Even if it isn't homework, you
do need to study the text again, because you have clearly not understood how
self-starting models work.)
pd
>
>
>
> On Sun, Oct 9, 2016 at 3:05 PM, Andrew Robinson
> < [hidden email]> wrote:
>> Here are some things to try. Maybe divide Area by 1000 and retention
>> by 100. Try plotting the data and superimposing the line that
>> corresponds to the 'fit' from nls2. See if you can correct it with
>> some careful guesses.
>>
>> Getting suitable starting parameters for nonlinear modeling is one
>> of the black arts of statistical fitting. ...
>>
>> Andrew
>
> True. But it's usually worthwhile thinking about the math a bit before
guessing.
>
> Note that the model can be linearized to:
>
> log(log(Retention)) = log(b0) + b1*Area^th
>
> So a plot of log(log(Retention)) vs Area may be informative and useful
> for finding starting values. e.g., for a grid of th's, do linear
> regression fits .
>
> However, when I look at that plot, it seems pretty linear with a
> negative slope. This suggests that you may have an overparametrization
> problem, i.e. fix th = 1 and use the b0 and b1 from the above
> regression as starting values.
>
> Do note that this strategy isn't foolproof, as it ignores that the
> error term is additive in the above transformed metric, rather than
> the original. This can sometimes mislead. But this is just a
> heuristic.
>
> Cheers,
> Bert
>
>
>
>
>
>
>
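Bert's grid-of-th idea above can be sketched in a few lines of base R (hypothetical helper code, not from the thread): for each th in a grid, fit the linearized model with lm() and keep the th with the smallest residual sum of squares.

```r
# For each candidate th, regress log(log(Retention)) on Area^th and
# record the residual sum of squares; the best th can seed an nls() fit.
cl <- data.frame(
  Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91, 989.05,
           1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
  Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04,
                23.46, 9.72, 97.92, 71.44, 44.52, 24.44, 15.26))
ths <- seq(0.5, 1.5, by = 0.05)
rss <- sapply(ths, function(th) {
  fit <- lm(log(log(Retention)) ~ I(Area^th), data = cl)
  sum(resid(fit)^2)
})
th_best <- ths[which.min(rss)]
```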
>>
>> On 9 October 2016 at 22:21, Pinglei Gao < [hidden email]> wrote:
>>> Hi,
>>>
>>> I have some data that I'm trying to fit with a double exponential model:
>>> data.frame(Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91,
>>> 989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74,
>>> 2629.2),
>>> Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04,
>>> 23.46, 9.72, 97.92, 71.44, 44.52, 24.44, 15.26))
>>> and the formula of the double exponential is: exp(b0*exp(b1*x^th)).
>>>
>>>
>>>
>>> I failed to guess the initial parameter values, and then I learned a
>>> method to find starting values from Nonlinear Regression with R (pp.
>>> 25-27):
>>>
>>>
>>>
>>>> cl <- data.frame(Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46,
>>>> 524.91,
>>> 989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74,
>>> 2629.2),
>>>
>>> + Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24,
>>> + 33.04, 23.46,
>>> 9.72, 97.92, 71.44, 44.52, 24.44, 15.26))
>>>
>>>> expFct <- function(Area, b0, b1, th) {exp(b0*exp(b1*Area^th))}
>>>
>>>> grid.Disperse <- expand.grid(list(b0 = seq(0.01, 4, by = 0.01), th =
>>> c(0.02), b1 = seq(0.01, 4, by = 0.01)))
>>>
>>>> Disperse.m2a <- nls2(Retention ~ expFct(Area, b0, b1, th), data = cl,
>>>> start
>>> = grid.Disperse, algorithm = "brute-force")
>>>
>>>> Disperse.m2a
>>>
>>> Nonlinear regression model
>>>
>>> model: Retention ~ expFct(Area, b0, th, b1)
>>>
>>> data: cl
>>>
>>> b0 th b1
>>>
>>> 3.82 0.02 0.01
>>>
>>> residual sum-of-squares: 13596
>>>
>>> Number of iterations to convergence: 160000
>>>
>>> Achieved convergence tolerance: NA
>>>
>>>
>>>
>>> I got no error, so I then used the output as starting values for nls2():
>>>
>>>> nls.m2 <- nls2(Retention ~ expFct(Area, b0, b1, th), data = cl,
>>>> start =
>>> list(b0 = 3.82, b1 = 0.02, th = 0.01))
>>>
>>> Error in (function (formula, data = parent.frame(), start, control =
>>> nls.control(), :
>>>
>>> Singular gradient
>>>
>>>
>>>
>>> Why? Did I do something wrong or misunderstand something?
>>>
>>>
>>>
>>> Later, I found another method in Modern Applied Statistics with S
>>> (pp. 216-217):
>>>
>>>
>>>
>>>> negexp <- selfStart(model = ~ exp(b0*exp(b1*x^th)), initial =
>>> negexp.SSival, parameters = c("b0", "b1", "th"),
>>> + template = function(x, b0, b1, th) {})
>>>
>>>> Disperse.ss <- nls(Retention ~ negexp(Area, b0, b1, th), data = cl,
>>>> trace =
>>> T)
>>>
>>> b0 b1 th
>>>
>>> 4.208763 144.205455 1035.324595
>>>
>>> Error in qr.default(.swts * attr(rhs, "gradient")) :
>>> NA/NaN/Inf in foreign function call (arg 1)
>>>
>>>
>>>
>>> Error happened again. How can I fix it? I am desperate.
>>>
>>>
>>>
>>> Best regards,
>>>
>>>
>>>
>>> Pinglei Gao
>>>
>>>
>>>
>>>
>>>
>>> ______________________________________________
>>> [hidden email] mailing list -- To UNSUBSCRIBE and more, see
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>>
>> 
>> Andrew Robinson
>> Deputy Director, CEBRA, School of Biosciences Reader & Associate
>> Professor in Applied Statistics Tel: (+61) 0403 138 955
>> School of Mathematics and Statistics Fax: +61 3 8344 4599

Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School Solbjerg Plads 3, 2000
Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: [hidden email] Priv: [hidden email]


Thanks very much for your kind help. I ran your script and it produced
lots of output, and I also studied the solution you posted. Forgive my
ignorance, but I still can't find suitable starting values. Did I
misunderstand something?
Best,
Pinglei Gao
-----Original Message-----
From: ProfJCNash [mailto: [hidden email]]
Sent: 10 October 2016 10:41
To: Gabor Grothendieck; Pinglei Gao
Subject: Re: [R] Finding starting values for the parameters using nls() or
nls2()
I forgot to post the "solution" found by nlmrt:
nlmrt class object: x
residual sumsquares = 1086.8 on 15 observations
after 5001 Jacobian and 6991 function evaluations
name    coeff          SE    tstat   pval    gradient     JSingval
b0      5.3274e-14     NA    NA      NA      6.614e+13    1.735e+16
b1      33.5574        NA    NA      NA      3.466        11518
th      -0.00721203    NA    NA      NA      740.8        0.004635
Note the singular values: this is the worst SV(max)/SV(min) ratio I've
observed!
JN


The key lines are
library(nlmrt)
test <- nlxb(expf2, start = c(b0 = 0.1, b1 = 1, th = 0.1), trace = TRUE, data = cl)
Thus I started with 0.1, 1 and 0.1. The "solution" from nlxb, which uses analytic
derivatives and a very aggressive Marquardt code to keep trying even in bad
situations, was as given above. Note that the singular values of the Jacobian are
reported (they are recorded in the same table as the parameters, but do NOT
correspond to the parameters; the placement was simply a tidy place to put these
numbers).
The ratio of these sv's is 1.735e+16/0.004635, or approx 4e+18, so the condition
number of the traditional Gauss-Newton approach is about 1e+37. Not a nice problem!
You probably should reformulate.
JN


Actually this converges very nicely if you use these starting values that
I obtained with AD Model Builder:
th 9.1180e-01
b0 5.2104e+00
b1 -4.6725e-04
The R result looks like
nls.m2
Nonlinear regression model
model: Retention ~ expFct(Area, b0, b1, th)
data: structure(list(Area = c(521.5, 689.78, 1284.71, 2018.8,
2560.46, 524.91, 989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52,
1432.5, 2144.74, 2629.2), Retention = c(95.3, 87.18, 44.94, 26.36,
18.12, 84.68, 37.24, 33.04, 23.46, 9.72, 97.92, 71.44, 44.52, 24.44,
15.26)), .Names = c("Area", "Retention"), row.names = c(NA, 15L), class
= "data.frame")
b0 b1 th
5.2104466 -0.0004672 0.9118029
residual sum-of-squares: 686.8
Number of iterations to convergence: 1
Achieved convergence tolerance: 1.75e-06


I've spent quite a bit of time trying to convince people on various
lists that the solution to these kinds of problems lies in a stable
parameterization of the model. I write the solutions in AD Model Builder
because it is easy. But R people are generally stuck in R (or mired), so
the message somehow always gets lost.
I thought I would bite the bullet and figure out how to do this in R.
It turns out that one can fit this model with only 6 function evaluations
using nls2. I apologize in advance for my execrable R code, but it does
do the job.
The data were fit using 3 calls to nls2. For the first call, only the
parameter th was estimated; this converged in 4 function evaluations. For
the second call, all three of the parameters were estimated (but two of
the parameters are different from b0 and b1); this converged to the
solution in four function evaluations. The final call to nls2 uses the
converged values to calculate b0, b1 and th and starts from there. As you
can see, the model is already converged; this final call to nls2 is just
used to get the standard error estimates for the parameters.
> nls.m1
Nonlinear regression model
model: Retention ~ expFct1(Area, y1, yn, th)
data: structure(list(Area = c(521.5, 689.78, 1284.71, 2018.8,
2560.46, 524.91, 989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52,
1432.5, 2144.74, 2629.2), Retention = c(95.3, 87.18, 44.94, 26.36,
18.12, 84.68, 37.24, 33.04, 23.46, 9.72, 97.92, 71.44, 44.52, 24.44,
15.26)), .Names = c("Area", "Retention"), row.names = c(NA, 15L), class
= "data.frame")
th
0.9862
residual sum-of-squares: 730
Number of iterations to convergence: 2
Achieved convergence tolerance: 1.616e-06
> nls.m2
Nonlinear regression model
model: Retention ~ expFct2(Area, y1, yn, c, d, th)
data: structure(list(Area = c(521.5, 689.78, 1284.71, 2018.8,
2560.46, 524.91, 989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52,
1432.5, 2144.74, 2629.2), Retention = c(95.3, 87.18, 44.94, 26.36,
18.12, 84.68, 37.24, 33.04, 23.46, 9.72, 97.92, 71.44, 44.52, 24.44,
15.26)), .Names = c("Area", "Retention"), row.names = c(NA, 15L), class
= "data.frame")
c d th
0.9716 1.1010 0.9118
residual sum-of-squares: 686.8
Number of iterations to convergence: 4
Achieved convergence tolerance: 2.398e-06
> nls.m
Nonlinear regression model
model: Retention ~ expFct(Area, b0, b1, th)
data: structure(list(Area = c(521.5, 689.78, 1284.71, 2018.8,
2560.46, 524.91, 989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52,
1432.5, 2144.74, 2629.2), Retention = c(95.3, 87.18, 44.94, 26.36,
18.12, 84.68, 37.24, 33.04, 23.46, 9.72, 97.92, 71.44, 44.52, 24.44,
15.26)), .Names = c("Area", "Retention"), row.names = c(NA, 15L), class
= "data.frame")
b0 b1 th
5.2104452 -0.0004672 0.9118040
residual sum-of-squares: 686.8
Number of iterations to convergence: 0
Achieved convergence tolerance: 1.384e-06
Parameters:
    Estimate    Std. Error  t value  Pr(>|t|)
b0   5.2104452   0.4999594   10.422  2.29e-07 ***
b1  -0.0004672   0.0013527   -0.345  0.7358
th   0.9118040   0.3583575    2.544  0.0257 *
So what is the stable parameterization for this model? To simplify, let
x be the independent variable and y the dependent variable, and write t
instead of th. So the model is
y = exp(b0*exp(b1*x^t))     (1)
Now it would be nice to reorder the x's so that they are monotone
increasing or decreasing, but in this case the first x is almost the
largest and the last x is almost the smallest (slight reach), so they
will do. Call them x1 and xn.
The new parameters of the model are the predicted y's for x1 and xn.
Call them y1 and yn. Note that y1 and yn are parameters, not observations.
y1 = exp(b0*exp(b1*x1^t))     (2)
yn = exp(b0*exp(b1*xn^t))
One can solve for b1 and b0 in terms of y1 and yn:
b1 = log(log(y1)/log(yn))/(x1^t - xn^t)     (3)
b0 = log(y1)*exp(-b1*x1^t)
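A quick numeric round-trip check of this algebra (parameter values here are illustrative, chosen close to the fit reported in the thread): compute y1 and yn from (2), then recover b0 and b1 via (3).

```r
# Round-trip check of the y1/yn reparameterization:
# (b0, b1, t) -> (y1, yn) via equation (2), then back via equation (3).
b0 <- 5.21; b1 <- -4.67e-4; t <- 0.912     # illustrative values
x1 <- 521.5; xn <- 2629.2                  # first and last Area
y1 <- exp(b0 * exp(b1 * x1^t))             # predicted y at x1
yn <- exp(b0 * exp(b1 * xn^t))             # predicted y at xn
b1_rec <- log(log(y1) / log(yn)) / (x1^t - xn^t)
b0_rec <- log(y1) * exp(-b1_rec * x1^t)
```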
To use this, we do the fitting of the model in two stages. In the first
stage we use the obvious estimates of y[1] for y1 and y[n] for yn, fix
these values, and estimate t using the obvious starting value of 1 for t.
In the second stage we set
y1 = c*y[1]
yn = d*y[n]
and estimate c, d, and t using the obvious starting values of 1 for c
and d.
That's it. This converges, as stated, in 6 function evaluations. This
technique works for many curves, such as the 3-, 4- and 5-parameter
logistic, the double exponential, and the von Bertalanffy (where Schnute
and I first discovered it in the early 1980's, before anyone was born).
One caveat is that usually not all values of y1 and yn are permissible.
For example, in this case if b0 > 0 then y1 and yn are both > 1. To make
this routine bulletproof one needs to impose these kinds of constraints,
but that was not needed here.
Here is my R code for this model.
library("nls2")
cl <- data.frame(Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91,
989.05, 1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04, 23.46,
9.72, 97.92, 71.44, 44.52, 24.44, 15.26))
y1 <<- cl$Retention[1]
yn <<- cl$Retention[length(cl$Retention)]
expFct1 <- function(x, y1, yn, th)
{
  x1 = x[1]
  xn = x[length(x)]
  b1 = log(log(y1)/log(yn))/(x1^th - xn^th)
  b0 = log(y1)*exp(-b1*x1^th)
  exp(b0*exp(b1*x^th))
}
expFct2 <- function(x, y_1, y_n, c, d, th)
{
  x1 = x[1]
  xn = x[length(x)]
  y1 = c*y_1
  yn = d*y_n
  b1 = log(log(y1)/log(yn))/(x1^th - xn^th)
  b0 = log(y1)*exp(-b1*x1^th)
  exp(b0*exp(b1*x^th))
}
expFct <- function(x, b0, b1, th)
{
  exp(b0*exp(b1*x^th))
}
nls.m1 <- nls2(Retention ~ expFct1(Area, y1, yn, th), data = cl,
  start = list(th = 1))
th_start = coef(nls.m1)
nls.m2 <- nls2(Retention ~ expFct2(Area, y1, yn, c, d, th), data = cl,
  start = list(c = 1, d = 1, th = th_start))
x = cl$Area
x1 = x[1]
xn = x[length(x)]
th_start = coef(nls.m2)[3]
cy1 = coef(nls.m2)[1]*y1
dyn = coef(nls.m2)[2]*yn
b1_start = log(log(cy1)/log(dyn))/(x1^th_start - xn^th_start)
b0_start = log(cy1)*exp(-b1_start*x1^th_start)
th_start
b1_start
b0_start
nls.m <- nls2(Retention ~ expFct(Area, b0, b1, th), data = cl,
  start = list(b0 = b0_start, b1 = b1_start, th = th_start))
nls.m1
nls.m2
nls.m


Dear Dave,
Thanks for your kind help. I am sorry; I have been on a field survey these days and did not check my email for a long time. Please forgive the misunderstanding this caused. Your answer works well for the model: Retention = exp(b0*exp(b1*Area^th)). Could you find starting values for the other model, Retention = (b0*Area^(th+1))^b, with the same data for me? I also asked the same question on the R-help list before.
Bests,
PG
-----Original Message-----
From: dave fournier [mailto: [hidden email]]
Sent: 20 October 2016 5:47
To: [hidden email]; [hidden email]
Subject: Re: [R] Finding starting values for the parameters using nls() or nls2()


Unfortunately this problem does not appear to be well posed.
Retention = (b0*Area^(th+1))^b
If b0, th, and b are the parameters, only the product (th+1)*b is determined.
This comes from noting that powers satisfy
(a^b)^c = a^(b*c)
so your model can be written as
(b0^b) * Area^((th+1)*b)
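A quick numeric illustration of this confounding (made-up parameter values): any two triples with equal b0^b and equal (th+1)*b trace exactly the same curve, so the data cannot separate them.

```r
# Two different (b0, th, b) triples with matching products give
# identical predictions at every Area value.
f <- function(b0, th, b, x) (b0 * x^(th + 1))^b
area <- c(500, 1000, 2000, 3000)
p1 <- f(2, 1, 0.5, area)       # b0^b = sqrt(2), (th+1)*b = 1
p2 <- f(sqrt(2), 0, 1, area)   # same two products, different parameters
```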


> On Oct 25, 2016, at 9:29 AM, dave fournier < [hidden email]> wrote:
>
> Unfortunately this problem does not appear to be well posed.
>
> Retention = (b0*Area^(th+1))^b
>
> If b0, th, and b are the parameters, only the product (th+1)*b is determined.
> This comes from noting that powers satisfy
>
> (a^b)^c = a^(b*c)
>
> So your model can be written as
>
> (b0^b) * Area^((th+1)*b)
... which nicely completes the thread, since one model had:
th1 = 9.1180e-01
b01 = 5.2104e+00
b11 = -4.6725e-04
(th1+1)*b11
[1] -0.0008932885
and the other:
b0 = 5.2104466 ; b1 = -0.0004672 ; th = 0.9118029
((th+1)*b1)
[1] -0.0008931943
So both R's nls2 and AD Model Builder yield the same predictions for any given data point, at least up to four decimal places.
So perhaps there was some other physical interpretation of those parameters, and there exists an underlying theory that would support adding some extra constraints?

David Winsemius
Alameda, CA, USA


I'm not really sure what your point is. The OP has two models. One is well
posed and the other is not. I was discussing solution of the former model
which is well posed. The form of the model, using a, b, and t and x,y to
simplify the expression is
y = exp( a * exp(b * x^t) )
My point is that there are many models like this where the obvious
parameterization (something like the parameters the user is interested
in) leads to parameter estimates with highly correlated errors.
This does not necessarily mean that the model is badly determined. The
model exists independent of the particular parameterization. To fix ideas,
assume there are n observations (x_i, y_i) and simplify by assuming that
x_1 <= x_2 <= ... <= x_n (but not all equal, let's hope).
A stable parameterization of the model can often be obtained by picking
as new parameters y1 and yn, where y1 and yn are the predicted values for
y_1 and y_n, that is
y1 = exp( a * exp(b * x_1^t) )
yn = exp( a * exp(b * x_n^t) )
Then one solves for some of the original model parameters in terms of y1
and yn. The particular way this is done depends on the model. Often the
model has some linear parameters and the procedure is easy. For this
model, as I showed, one can solve for a and b in terms of y1 and yn.
Then one can fit the model easily with AD Model Builder or
nls2 using this parameterization. nls2 provides the standard errors for
the parameter estimates. The AD Model Builder solution provides the
estimated variance covariance matrix of the parameter estimates via the
standard maximum likelihood Hessian calculations. It also provides the
covariance for any desired dependent variable. So one can fit the model
using y1,yn, and t and get the covariance matrix for a,b, and t
in one step. In nls2 one needs to fit the model using y1,yn and then
solve for a and b and run the model again from that point. That is no big
deal, and I showed how to do it, but it is one more step for the user.
It is interesting to see the correlation matrix for the parameter estimates
and the dependent variables.
           value         std dev      correlation
1 logt  -9.233e-02    3.4026e-01   1.0000
2 c      9.7164e-01    3.7006e-02   0.2690  1.0000
3 d      1.1010e+00    1.7852e-01   0.7495  0.0325  1.000
4 t      9.1180e-01    3.1025e-01   1.0000  0.2690  0.749  1.0000
5 a      5.2104e+00    4.3404e-01   0.9781  0.4191  0.621  0.9781  1.000
6 b     -4.6725e-04    1.1714e-03   0.9994  0.2836  0.726  0.9994  0.984  1.00
Here the independent variables are c, d, and logt,
where y1 = c*y_1 and yn = d*y_n.
That is just an additional device so that one can start with c = 1 and d = 1.
Also, logt is used, where t = exp(logt), which makes sure t > 0.
Notice that the correlation between c and d is 0.0325, which is small.
If a and b were the parameters of the model:
4 t     9.1180e-01    3.1025e-01   1.0000
5 a     5.2104e+00    4.3404e-01   0.9781  1.000
6 b    -4.6725e-04    1.1714e-03   0.9994  0.984  1.00
One can see how close to 1 and -1 the correlations are.
Notice that the parameter b is very badly determined. So rather than
saying the model is no good, one sees that the model is no good if one
wants information about the parameter b. In contrast, the parameter a is
fairly well determined and the parameter t is "kind of" determined.
Because of this parameter confounding, nls2 is extremely sensitive
to the starting parameter values with this parameterization.
If I change the parameters by about 15% or less, as below,
#b0_start = 5.210445
#b1_start = -0.0004672373
#th_start = 0.911804
b0_start = 5.7
b1_start = -0.00055
th_start = 1.05
I get the dreaded
Error in (function (formula, data = parent.frame(), start, control =
nls.control(), :
  singular gradient
So there is nothing wrong with either the data or the model. The model
just needs to be given a stable parameterization.
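The sensitivity described here can be reproduced with base R's nls() in place of nls2() (a sketch; the perturbed start is wrapped in try() since whether it errors may depend on the R version):

```r
# nls() converges from the stable-parameterization answer but can throw
# "singular gradient" from a start perturbed by roughly 15%.
cl <- data.frame(
  Area = c(521.5, 689.78, 1284.71, 2018.8, 2560.46, 524.91, 989.05,
           1646.32, 2239.65, 2972.96, 478.54, 875.52, 1432.5, 2144.74, 2629.2),
  Retention = c(95.3, 87.18, 44.94, 26.36, 18.12, 84.68, 37.24, 33.04,
                23.46, 9.72, 97.92, 71.44, 44.52, 24.44, 15.26))
expFct <- function(x, b0, b1, th) exp(b0 * exp(b1 * x^th))
good <- nls(Retention ~ expFct(Area, b0, b1, th), data = cl,
            start = list(b0 = 5.210445, b1 = -0.0004672373, th = 0.911804))
bad <- try(nls(Retention ~ expFct(Area, b0, b1, th), data = cl,
               start = list(b0 = 5.7, b1 = -0.00055, th = 1.05)),
           silent = TRUE)  # typically fails with "singular gradient"
```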

