use nnet


use nnet

aiminy
I want to tune the weight decay and the number of hidden units for nnet
with nested loops, like:

for (decay in ...)
{
  for (size in ...)
  {
    for (run in ...)
    {
      model <- nnet(...)
      test.error <- ....
    }
  }
}

For example: I set decay = 0.1, size = 3, maxit = 200, run this setting
10 times, and calculate the test error for each run.

After that I want to get a matrix like this:

decay  size  maxit  #run  test_error
0.1    3     200    1     1.2
0.1    3     200    2     1.1
0.1    3     200    3     1.0
0.1    3     200    4     3.4
0.1    3     200    5     ..
0.1    3     200    6     ..
0.1    3     200    7     ..
0.1    3     200    8     ..
0.1    3     200    9     ..
0.1    3     200    10    ..
0.2    3     200    1     1.2
0.2    3     200    2     1.1
0.2    3     200    3     1.0
0.2    3     200    4     3.4
0.2    3     200    5     ..
0.2    3     200    6     ..
0.2    3     200    7     ..
0.2    3     200    8     ..
0.2    3     200    9     ..
0.2    3     200    10    ..

I am not sure whether this is the correct way to do this.
Has anyone tuned these parameters like this before?
Thanks,

Aimin
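A sketch of the nested loop above that records one row per run, producing exactly the matrix requested. Here `test.error()` is a hypothetical stand-in for the real `nnet()` fit and test-error computation; the decay values and run count mirror the example settings.

```r
# HYPOTHETICAL PLACEHOLDER: replace the body with nnet() training on the
# training set and the test-error calculation on the held-out set.
test.error <- function(decay, size, maxit, run) {
  set.seed(run)        # make each run reproducible
  runif(1, 1, 4)       # dummy error in place of the real calculation
}

maxit <- 200
results <- NULL
for (decay in c(0.1, 0.2)) {
  for (size in 3) {
    for (run in 1:10) {
      err <- test.error(decay, size, maxit, run)
      # ONE ROW PER RUN: decay, size, maxit, run, test_error
      results <- rbind(results,
                       data.frame(decay = decay, size = size,
                                  maxit = maxit, run = run,
                                  test_error = err))
    }
  }
}
head(results)
```

Collecting rows into a data frame (rather than a bare matrix) keeps the column names, so the result prints in the layout sketched above and can be summarised with `aggregate()` afterwards.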

______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Re: use nnet

Wensui Liu
AM,
I have a piece of junk on my blog. Here it is.
#################################################
# USE CROSS-VALIDATION TO DO A GRID-SEARCH FOR  #
# THE OPTIMAL SETTINGS (WEIGHT DECAY AND NUMBER #
# OF HIDDEN UNITS) OF NEURAL NETS               #
#################################################

library(nnet);
library(MASS);
data(Boston);
X <- as.matrix(Boston[-14]);
# STANDARDIZE PREDICTORS
st.X <- scale(X);
Y <- as.matrix(Boston[14]);
# WRAP THE MATRICES IN I() SO EACH STAYS A SINGLE COLUMN IN THE DATA
# FRAME (scale() DROPS THE AsIs CLASS, SO IT MUST BE RE-APPLIED HERE,
# OTHERWISE THE FORMULA Y ~ X BELOW CANNOT FIND A COLUMN NAMED X)
boston <- data.frame(X = I(st.X), Y = I(Y));

# DIVIDE DATA INTO TESTING AND TRAINING SETS
set.seed(2005);
test.rows <- sample(1:nrow(boston), 100);
test.set <- boston[test.rows, ];
train.set <- boston[-test.rows, ];

# INITIATE A NULL TABLE
sse.table <- NULL;

# SEARCH FOR OPTIMAL WEIGHT DECAY
# RANGE OF WEIGHT DECAYS SUGGESTED BY B. RIPLEY
for (w in c(0.0001, 0.001, 0.01))
{
  # SEARCH FOR OPTIMAL NUMBER OF HIDDEN UNITS
  for (n in 1:10)
  {
    # INITIATE A NULL VECTOR
    sse <- NULL;
    # FOR EACH SETTING, RUN THE NEURAL NET MULTIPLE TIMES
    for (i in 1:10)
    {
      # INITIATE THE RANDOM STATE FOR EACH NET
      set.seed(i);
      # TRAIN NEURAL NETS
      net <- nnet(Y ~ X, size = n, data = train.set, rang = 0.00001,
                  linout = TRUE, maxit = 10000, decay = w,
                  skip = FALSE, trace = FALSE);
      # CALCULATE SSE FOR THE TESTING SET
      test.sse <- sum((test.set$Y - predict(net, test.set))^2);
      # APPEND EACH SSE TO THE VECTOR
      sse <- c(sse, test.sse);
    }
    # APPEND AVERAGED SSE WITH RELATED PARAMETERS TO THE TABLE
    sse.table <- rbind(sse.table, c(WT = w, UNIT = n, SSE = mean(sse)));
  }
}
# PRINT OUT THE RESULT
print(sse.table);

http://statcompute.spaces.live.com/Blog/cns!39C8032DBD1321B7!290.entry



--
WenSui Liu
A lousy statistician who happens to know a little programming
(http://spaces.msn.com/statcompute/blog)


Re: use nnet

Wensui Liu
AM,
Sorry, please ignore the comment box at the top of the code. It is not
actually cross-validation, just a simple split-sample validation.
Sorry for the confusion.
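For reference, genuine k-fold cross-validation of one setting on the same data would look something like the sketch below. The fold count and the size/decay values are illustrative assumptions, not recommendations; nnet and MASS are the recommended packages used in the code above.

```r
library(nnet)
library(MASS)
data(Boston)
# SAME DATA PREPARATION AS ABOVE: STANDARDIZED PREDICTORS, I()-WRAPPED
boston <- data.frame(X = I(scale(as.matrix(Boston[-14]))),
                     Y = I(as.matrix(Boston[14])))

k <- 5
set.seed(2005)
# ASSIGN EACH ROW TO ONE OF k FOLDS AT RANDOM
fold <- sample(rep(1:k, length.out = nrow(boston)))

cv.sse <- numeric(k)
for (f in 1:k) {
  train <- boston[fold != f, ]
  test  <- boston[fold == f, ]
  # ILLUSTRATIVE SETTING: size = 3, decay = 0.01
  net <- nnet(Y ~ X, data = train, size = 3, decay = 0.01,
              rang = 0.00001, linout = TRUE, maxit = 1000, trace = FALSE)
  # SSE ON THE HELD-OUT FOLD
  cv.sse[f] <- sum((test$Y - predict(net, test))^2)
}
# AVERAGE TEST SSE ACROSS THE k HELD-OUT FOLDS
mean(cv.sse)
```

Every observation is held out exactly once, so the averaged error uses all the data rather than a single 100-row test set; the grid search over decay and size would simply wrap this loop.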





Re: use nnet

aiminy
Thank you very much.
I have another question about nnet:
if I set size = 0 and skip = TRUE,
then the network has just an input layer and an output layer.
Is this what is called a perceptron network?

thanks,

Aimin Yan




Re: use nnet

Wensui Liu
No, it is called regression. ^_^
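The point can be checked directly: with size = 0, skip = TRUE, linout = TRUE and no weight decay, nnet fits the same least-squares problem as lm(), so the two sets of fitted values should agree closely. The simulated data below is purely illustrative.

```r
library(nnet)
set.seed(1)
# A SMALL SIMULATED LINEAR-REGRESSION PROBLEM (ILLUSTRATIVE ONLY)
x <- matrix(rnorm(200), ncol = 2)
y <- 1 + 2 * x[, 1] - 3 * x[, 2] + rnorm(100, sd = 0.1)
d <- data.frame(x1 = x[, 1], x2 = x[, 2], y = y)

# NO HIDDEN UNITS, SKIP-LAYER CONNECTIONS ONLY: A LINEAR MODEL
net <- nnet(y ~ x1 + x2, data = d, size = 0, skip = TRUE,
            linout = TRUE, decay = 0, maxit = 1000, trace = FALSE)
# ORDINARY LEAST SQUARES ON THE SAME DATA
fit <- lm(y ~ x1 + x2, data = d)

# MAXIMUM DISAGREEMENT BETWEEN THE TWO SETS OF FITTED VALUES
max(abs(predict(net, d) - fitted(fit)))
```

The nnet solution is found by BFGS optimization rather than in closed form, so the agreement is numerical rather than exact; with decay > 0 it would instead resemble ridge regression.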


