Ayon Chakraborty 1, Kaushik Chakraborty 1, Swarup Kumar Mitra 2, M. K. Naskar 3
1 Department of Computer Science and Engineering, Jadavpur University
2 Department of ECE, MCKV Institute of Engineering
3 Department of Electronics and Telecommunications Engineering, Jadavpur University
An Optimized Lifetime Enhancement Scheme for Data Gathering in Wireless Sensor Networks
Contents
• Wireless Sensor Networks
• Design Challenges in Wireless Sensor Networks
• Data Gathering Algorithms
• Proposed Algorithm
• Simulation Results
• Conclusion
OBJECTIVE OF DEPLOYING SENSOR NODES
• Collect data / information from the sensor field.

DESIGN CHALLENGES
• Ad-hoc nature of WSNs
• Typically severely energy constrained: limited energy sources (e.g., batteries); trade-off between performance and lifetime
• Self-organizing and self-healing: remote deployments
• Scalable: arbitrarily large number of nodes

GOAL
• Lifetime enhancement of sensor nodes

POINTS
• Sensor nodes lose power while transmitting or receiving data at the time of data gathering

SOLUTION
• Develop an efficient algorithm for data gathering
Node Deployment Scenario (figure): base station BS with node groups C0, C1, C2, C3.
DIFFERENT DATA GATHERING SCHEMES
• LEACH
• PEGASIS
PHILOSOPHY: Distribute the energy dissipation by the sensor nodes at the time of data gathering equally around the network. The LEACH protocol randomizes the selection of cluster heads for equal energy dissipation; the PEGASIS protocol uses a greedy chain to the sink.

Optimized Lifetime Enhancement (OLE) Scheme
PHILOSOPHY: Increase the network performance by ensuring a sub-optimal energy dissipation of the individual nodes despite their random deployment, using modern heuristic techniques.
Particle Swarm Optimization (PSO): Kennedy and Eberhart, 1995
Figures: particles randomly scattered over the fitness landscape with randomly oriented velocities; situation after a few iterations, with all particles in a close vicinity of the global optimum and the best particle conquering the peak.
v(t+1) = v(t) + φ1 · (p(t) − x(t)) + φ2 · (g(t) − x(t))
x(t+1) = x(t) + v(t+1)
PSO (2) – Visually in 2D (figure): the position x(t) moves to x(t+1) along the updated velocity v(t+1), which combines the old velocity v(t) with attraction toward the personal best p(t) and the global best g(t).
A Close Look at Velocity Update
v_id = v_id (inertia) + c1 · r1 · (pbest_id − x_id) (cognitive learning) + c2 · r2 · (gbest_d − x_id) (social learning)
Update Position: x_id = x_id + v_id
pbest = the personal best solution (fitness) a particle has achieved so far.
gbest = the global best solution of all particles.
Flow-Chart: PSO algorithm
1. Start
2. Initialize particles with random position and zero velocity
3. Evaluate fitness value
4. Compare & update fitness value with pbest and gbest
5. Meet stopping criterion? If NO, update velocity and position and return to step 3; if YES, end.
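The same loop can be written out compactly. Below is a minimal, illustrative Python sketch of generic PSO on a continuous test function; the sphere fitness, swarm size, inertia weight w, and coefficients c1, c2 are assumptions chosen only for illustration, not parameters of the OLE scheme.

import random

def sphere(x):
    # Illustrative fitness to minimize (assumed, not from the slides).
    return sum(xi * xi for xi in x)

def pso(fitness=sphere, dim=2, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Initialize particles with random position and zero velocity (flow-chart step 2)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):                      # stopping criterion: fixed iteration budget
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + cognitive learning + social learning
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]          # x(t+1) = x(t) + v(t+1)
            f = fitness(pos[i])                 # evaluate fitness value
            if f < pbest_val[i]:                # compare & update pbest
                pbest[i], pbest_val[i] = pos[i][:], f
                if f < gbest_val:               # compare & update gbest
                    gbest, gbest_val = pos[i][:], f
    return gbest, gbest_val

print(pso())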
Evolutionary Algorithms

Simulated Annealing
• Uses a more complex evaluation function: sometimes accepts candidates with higher cost to escape from a local optimum.
• Adapts the parameters of this evaluation function during execution.
• Based upon the analogy with the simulation of the annealing of solids.
Analogy
• Slowly cool down a heated solid, so that all particles arrange in the ground energy state.
• At each temperature, wait until the solid reaches its thermal equilibrium.
• Probability of being in a state with energy E:
  Pr{ E = E } = (1/Z(T)) · exp(−E / (k_B · T))
  where E = energy, T = temperature, k_B = Boltzmann constant, Z(T) = normalization factor (temperature dependent).
Metropolis Acceptance
• At a fixed temperature T:
• Perturb (randomly) the current state to a new state.
• ΔE is the difference in energy between the current and the new state.
• If ΔE < 0 (the new state is lower), accept the new state as the current state.
• If ΔE ≥ 0, accept the new state with probability
  Pr(accepted) = exp(−ΔE / (k_B · T))
• Eventually the system evolves into thermal equilibrium at temperature T; then the formula mentioned before holds.
• When equilibrium is reached, temperature T can be lowered and the process can be repeated.
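As a quick illustration, here is a minimal Python sketch of the Metropolis rule just described; folding the Boltzmann constant k_B into the temperature is a simplifying assumption made here.

import math, random

def metropolis_accept(delta_e, temperature):
    # Decide whether a perturbed state replaces the current state.
    if delta_e < 0:                     # new state has lower energy: always accept
        return True
    # higher-energy state: accept with probability exp(-dE / T)  (k_B folded into T)
    return random.random() < math.exp(-delta_e / temperature)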
Simulated Annealing in Combinatorial Optimization (S. Kirkpatrick et al.)
• Same algorithm can be used for combinatorial optimization problems:
• Energy E corresponds to the cost function C.
• Temperature T corresponds to the control parameter c.
  Pr{ configuration = i } = (1/Q(c)) · exp(−C(i) / c)
  where C = cost, c = control parameter, Q(c) = normalization factor (not important).
PROBLEM MODEL
• Total number of nodes is N.
• Solution space U is the collection of arrangements of {1, 2, 3, …, N}.
• Every arrangement Ci represents a chain, where U = {Ci | Ci is a permutation of (1, 2, …, N)}.

DATA GATHERING SCHEME USING PSO
• Consider an N-dimensional system; Ci denotes the ith particle in this system.
• Energy function for Ci: Δf = f(Cnew) − f(Cold) = ΔE.
• Probability function P:
  P = 1 if ΔE ≤ 0
  P = exp(−Δf / Ө) if ΔE > 0
  If P > rand(0, 1), accept the solution; else reject it.

COOLING SCHEDULE
• The control parameter Ө is called the annealing temperature.
• Property of Ө: it is decremented every time the system of particles approaches a better solution (a lower energy state).
Өi = initial temperature, Өf = final temperature, t = cooling time.
Ө(t) = Өf + (Өi − Өf) · α^t, where α = rate of cooling, usually 0.7 ≤ α < 1.0, and t is the number of iterations.
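A minimal sketch of this cooling schedule in Python; the default values of Өi, Өf and α below are placeholders, not values taken from the paper.

def annealing_temperature(t, theta_i=100.0, theta_f=1.0, alpha=0.8):
    # Theta(t) = Theta_f + (Theta_i - Theta_f) * alpha**t, with 0.7 <= alpha < 1.0
    return theta_f + (theta_i - theta_f) * alpha ** t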
OUR APPROACH TO SOLUTION
Step 1: Initialization
• Initialize the m particles C1, C2, C3, …, Cm, where Ci = {node[1], node[2], …, node[N]}.
• Initialize the parameters α, Өi, Өf.
Step 2: Finding a local best chain (see the sketch after this step)
• At a temperature Ө and for L iterations, the search for the local best chain is done by random binary swapping of Cold = {n1, n2, …, nN}.
• Randomly select two nodes, say ni and nj: Cold = {n1, n2, …, ni, …, nj, …, nN}.
• Swap them: Cnew = {n1, n2, …, nj, …, ni, …, nN}.
• Calculate P for the new chain, i.e., for Cnew.
• If P > rand(0, 1), a random number between 0 and 1, then Cold = Cnew.
• Cilbest = local best solution of particle Ci.
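A minimal Python sketch of Step 2, assuming the chain cost function f is supplied as chain_cost (a hypothetical name); the swap count L and the acceptance rule follow the description above.

import math, random

def local_best_chain(chain, chain_cost, theta, L=50):
    # Search for a local best chain by random binary swapping (Step 2 sketch).
    current = list(chain)
    best = list(chain)
    for _ in range(L):
        i, j = random.sample(range(len(current)), 2)              # select two nodes at random
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]   # swap them
        delta_f = chain_cost(candidate) - chain_cost(current)
        p = 1.0 if delta_f <= 0 else math.exp(-delta_f / theta)   # probability P
        if p > random.random():                                   # accept if P > rand(0, 1)
            current = candidate
            if chain_cost(current) < chain_cost(best):
                best = list(current)
    return best   # C_i^lbest: local best solution of this particle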
Step 3: Updating the pbest and gbest values
• Cipbest = personal best solution of particle Ci.
• Cipbest = Cilbest if { f(Cilbest) − f(Cipbest) } < 0
•         = Cipbest if { f(Cilbest) − f(Cipbest) } ≥ 0
• Comparing all the Cipbest values:
• Cgbest = global best solution
• Cgbest = the chain with minimum f(Cipbest) over all particles.
PROPOSED ALGORITHM
Step 4: Formation of a new chain
A new chain is formed from Cipbest and Cgbest by a crossover technique. Suppose Cipbest = {4,5,2,3,6,1} and Cgbest = {5,2,1,4,3,6}. The slot {2,1,4} is randomly chosen from Cgbest and inserted at the same position in Cipbest, and the node ids that are repeated are deleted, giving Cinew = {5,2,1,4,3,6}.
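A small Python sketch of this crossover, reproducing the slide's example; in the algorithm the slot position is chosen at random, while here it is fixed only to match the example.

def slot_crossover(cipbest, cgbest, start, end):
    # Insert cgbest[start:end] into cipbest at the same position, dropping duplicate node ids.
    slot = cgbest[start:end]
    remaining = [n for n in cipbest if n not in slot]   # delete repeated node ids
    return remaining[:start] + slot + remaining[start:]

# Example from the slide: slot {2,1,4} taken from positions 2..4 of Cgbest
print(slot_crossover([4, 5, 2, 3, 6, 1], [5, 2, 1, 4, 3, 6], 1, 4))
# -> [5, 2, 1, 4, 3, 6]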
Step 5: The temperature Ө(t) is calculated. If its value is less than or equal to Өf, or the total number of iterations so far exceeds the value of t, the algorithm halts. The best chain formed is Cgbest.
Leader selection phase
Formation of the sub-optimal chain and leader selection: the leader is the node with the maximum value of Eresi / D^4, where Eresi denotes the residual energy of an individual node before starting a data gathering round and D is the distance of the base station from that node. Here we consider the multipath fading (distance^4 power loss) channel model, as the leader is concerned with communicating to the distant base station.
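A minimal sketch of the leader selection rule, assuming (purely for illustration) that each node is represented as a tuple (id, residual energy, distance to the base station).

def select_leader(nodes):
    # Pick the node maximizing Eres / D**4 (multipath fading, distance^4 power loss).
    return max(nodes, key=lambda n: n[1] / n[2] ** 4)

# node = (id, residual energy in J, distance to base station in m) -- assumed units
print(select_leader([(1, 0.9, 120.0), (2, 0.5, 60.0), (3, 0.8, 80.0)]))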
Simulation Results
NUMBER OF DATA GATHERING ROUNDS FOR VARIOUS SCHEMES WITH PERCENTAGE OF DEAD NODES
Simulation Results: performance analysis of different protocols with energy/node = 1 J and base station at (25, 150).
TOSSIM radio loss model based on empirical data: the mean packet loss rate versus distance is shown, with error bars indicating one standard deviation from the mean. The model is highly variable at intermediate distances.
Figures: greedy chain vs. chain formed by the OLE scheme.
Simulation Results
CONCLUSION
• Optimal energy utilization occurs, thereby increasing network lifetime, as validated by the simulation results.
• PSO along with Simulated Annealing helps to enhance the performance of our scheme.
Two major advantages:
(i) Development time is much shorter than with more traditional approaches.
(ii) The systems are very robust, being relatively insensitive to noisy and/or missing data.
Moreover, the OLE scheme has been coded in nesC, which shows it is feasible on real motes. We have also considered the TOSSIM interference model while simulating packet loss rates for the various schemes.
Our future goal is to study the problem using Genetic Algorithms and compare the results with the OLE scheme.
REFERENCES
[1] Clare, Pottie, and Agre, "Self-Organizing Distributed Sensor Networks", In SPIE Conference on Unattended Ground Sensor Technologies and Applications, pp. 229–237, Apr. 1999.
[2] Yunxia Chen and Qing Zhao, "On the Lifetime of Wireless Sensor Networks", IEEE Communications Letters, Vol. 9, Issue 11, pp. 976–978, Nov. 2005.
[3] S. Lindsey, C. S. Raghavendra and K. Sivalingam, "Data Gathering in Sensor Networks using energy*delay metric", In Proceedings of the 15th International Parallel and Distributed Processing Symposium, pp. 188–200, 2001.
[4] W. Heinzelman, A. Chandrakasan, H. Balakrishnan, "Energy-Efficient Communication Protocol for Wireless Microsensor Networks", IEEE Proc. of the Hawaii International Conf. on System Sciences, pp. 1–10, Jan. 2000.
[5] S. Lindsey, C. S. Raghavendra, "PEGASIS: Power Efficient Gathering in Sensor Information Systems", In Proceedings of IEEE ICC 2001, pp. 1125–1130, June 2001.
[6] Ayan Acharya, Anand Seetharam, Abhishek Bhattacharyya, Mrinal Kanti Naskar, "Balancing Energy Dissipation in Data Gathering Wireless Sensor Networks Using Ant Colony Optimization", 10th International Conference on Distributed Computing and Networking (ICDCN 2009), pp. 437–443, January 3–6, 2009.
[7] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory", 1995.
[8] S. Kirkpatrick, "Simulated Annealing", Science, Vol. 220, 1983.
[9] David Gay, Philip Levis, David Culler, Eric Brewer, nesC 1.1 Language Reference Manual, May 2003.
[10] Philip Levis, TinyOS Programming, June 28, 2006.
[11] P. Levis, N. Lee, M. Welsh, and D. Culler, "TOSSIM: Accurate and Scalable Simulation of Entire TinyOS Applications".
[12] N. Metropolis et al., J. Chem. Phys. 21, 1087 (1953).
[13] Zhi-Feng Hao, Zhi-Gang Wang, Han Huang, "A Particle Swarm Optimization Algorithm with Crossover Operator", International Conference on Machine Learning and Cybernetics 2007, pp. 19–22, Aug. 2007.
Particle Swarm Optimization
The Particle Swarm Optimization (PSO) algorithm was developed in 1995 by James Kennedy and Russ Eberhart.
It was inspired by the social behavior of bird flocking and fish schooling.
PSO applies the concept of social interaction to problem solving.
The Particle Swarm Optimization Algorithm
Homogeneous Algorithm

initialize;
REPEAT
    REPEAT
        perturb(config. i → config. j, ΔCij);
        IF ΔCij < 0 THEN accept
        ELSE IF exp(−ΔCij / c) > random[0,1) THEN accept;
        IF accept THEN update(config. j);
    UNTIL equilibrium is approached sufficiently closely;
    c := next_lower(c);
UNTIL system is frozen or stop criterion is reached
Inhomogeneous Algorithm
• Previous algorithm is the homogeneous variant:
c is kept constant in the inner loop and is only decreased in the outer loop
• Alternative is the inhomogeneous variant:
There is only one loop; c is decreased each time in the loop, but only very slightly
Parameters
• Choose the start value of c so that in the beginning nearly all perturbations are accepted (exploration), but not too big, to avoid long run times.
• The function next_lower in the homogeneous variant is generally a simple function to decrease c, e.g. a fixed part (80%) of current c
• At the end c is so small that only a very small number of the perturbations is accepted (exploitation)
• If possible, always try to remember explicitly the best solution found so far; the algorithm itself can leave its best solution and not find it again
Pitfalls of PSO
• Particles tend to cluster, i.e., converge too fast and get stuck at a local optimum, especially in gbest PSO: premature convergence.
• The movement of a particle may carry it into an infeasible region, causing unnecessary loss of computational power.
• Inappropriate mapping of particle space into solution space
Other Names
• Monte Carlo Annealing
• Statistical Cooling
• Probabilistic Hill Climbing
• Stochastic Relaxation
• Probabilistic Exchange Algorithm