I have the following code:
#include <stdio.h>

int main(int argc, char **argv)
{
    double lambda = 4;  /* map parameter */
    double x = .2;      /* initial value x_0 */
    for (int i = 1; i <= 30; i++)
    {
        printf("%.5f \n", x);
        x = lambda * x * (1 - x);  /* update x for the next iteration */
    }
    return 0;
}
That outputs the following:
0.20000
0.64000
0.92160
0.28901
0.82194
0.58542
0.97081
0.11334
0.40197
0.96156
0.14784
0.50392
0.99994
0.00025
0.00098
0.00394
0.01568
0.06174
0.23173
0.71212
0.82001
0.59036
0.96734
0.12638
0.44165
0.98638
0.05374
0.20342
0.64815
0.91221
My question is: What is the most apt algorithmic/mathematical description for the manner in which 'x' is being manipulated every iteration?
The statement x = lambda * x * (1-x); applies the recurrence x_{n+1} = lambda * x_n * (1 - x_n) to x on every iteration. That recurrence is known as the logistic map. With lambda = 4 the map is in its chaotic regime, which is why your printed values bounce around the interval (0, 1) without ever settling into an obvious pattern.
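If it helps to see the chaotic behavior directly, here is a minimal, self-contained sketch (the helper name logistic_step is mine, not from your code) that iterates the same map from two starting values differing by only 1e-9. The gap between the two trajectories grows at each step until they are completely uncorrelated, which is the sensitive dependence on initial conditions characteristic of lambda = 4:

#include <math.h>
#include <stdio.h>

/* One step of the logistic map: x -> lambda * x * (1 - x).
   The helper name logistic_step is illustrative only. */
static double logistic_step(double lambda, double x)
{
    return lambda * x * (1 - x);
}

int main(void)
{
    double lambda = 4;     /* lambda = 4 puts the map in its chaotic regime */
    double a = 0.2;        /* the original starting value */
    double b = 0.2 + 1e-9; /* a nearby starting value */
    for (int i = 1; i <= 30; i++)
    {
        /* print both trajectories and the gap between them */
        printf("%2d  %.5f  %.5f  %.1e\n", i, a, b, fabs(a - b));
        a = logistic_step(lambda, a);
        b = logistic_step(lambda, b);
    }
    return 0;
}

The gap roughly doubles per iteration, so by around iteration 30 the two runs have nothing to do with each other. Link with -lm for fabs if your toolchain requires it.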