Vecchio, Massimo (2009) Novel approaches to in-network processing for the reduction of energy consumption in wireless sensor networks. Advisor: Lazzerini, Prof. Beatrice. Coadvisor: Marcelloni, Prof. Francesco. pp. 146. [IMT PhD Thesis]
Vecchio_phdthesis.pdf - Published Version
Available under License Creative Commons Attribution No Derivatives.
Wireless sensor networks (WSNs) are currently an active research area, mainly due to the potential of their applications. However, the deployment of a large-scale WSN still requires solutions to a number of technical challenges that stem primarily from the features of the sensor nodes, such as limited computational power, reduced communication bandwidth and small storage capacity. Further, sensor nodes are typically powered by batteries of limited capacity, which cannot guarantee an acceptable node lifetime unless adequate power-saving policies are adopted. Since the radio is the main cause of power consumption in a sensor node, most of the energy conservation schemes proposed in the literature have focused on minimizing the energy consumption of the communication unit. To achieve this objective, two main approaches have been followed: power saving through duty cycling and in-network processing. Duty cycling schemes define coordinated sleep/wakeup schedules among nodes in the network. In-network processing, on the other hand, reduces the amount of information to be transmitted by means of aggregation and/or compression techniques.

This thesis focuses on the latter approach and proposes a novel distributed method for data aggregation and two algorithms to compress data locally on the sensor node. The distributed data aggregation technique is based on fuzzy numbers and weighted average operators, and reduces data communication in WSNs when the goal is to estimate an aggregate value such as the maximum or minimum temperature measured in the network. The first compression algorithm is a simple lossless entropy compression algorithm which can be implemented in a few lines of code, requires very low computational power, compresses data on the fly and uses a very small dictionary whose size is determined by the resolution of the analog-to-digital converter.
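The lossless scheme described above can be illustrated with a minimal sketch of a differential entropy coder of this kind. The prefix table below and the bit layout are assumptions for illustration (here sized for a 4-bit ADC); the thesis's actual codeword table and dictionary may differ. Each reading is encoded as the difference from the previous one: the difference d falls into group n (the bit length of |d|), and the codeword is a prefix code for n followed by n bits indexing d within the group.

```python
# Hypothetical prefix codes for difference groups 0..4 (4-bit ADC);
# the dictionary size grows with the ADC resolution, as in the thesis.
PREFIX = {0: '00', 1: '010', 2: '011', 3: '100', 4: '101'}

def encode_sample(d):
    """Encode one difference d between consecutive readings as a bit string."""
    if d == 0:
        return PREFIX[0]
    n = abs(d).bit_length()                  # group index
    # index of d within group n: negatives map to the low half of the range
    index = d if d > 0 else d + (1 << n) - 1
    return PREFIX[n] + format(index, '0{}b'.format(n))

def compress(samples):
    """Compress a sequence of ADC readings on the fly, one difference at a time."""
    bits, prev = '', 0                       # previous sample assumed initialized to 0
    for r in samples:
        bits += encode_sample(r - prev)
        prev = r
    return bits
```

Because consecutive sensor readings are usually close, small differences dominate and receive the shortest codewords, which is where the compression gain comes from.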
The second compression algorithm tackles the problem of noisy sampling by performing lossy compression on a single node, based on a differential pulse code modulation (DPCM) scheme with quantization of the differences between consecutive samples. Since different combinations of the quantization parameters determine different trade-offs between compression performance and information loss, a multi-objective evolutionary algorithm is exploited to generate a set of parameter combinations corresponding to different optimal trade-offs. The user can therefore choose the combination with the most suitable trade-off for the specific application.
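The core DPCM idea can be sketched as follows. This is a simplified stand-in, not the thesis's implementation: a single uniform quantizer with step size `step` replaces whatever parameterized quantizer the multi-objective algorithm tunes, and the encoder predicts from the decoder-side reconstruction to avoid drift.

```python
def dpcm_encode(samples, step):
    """Quantize differences between consecutive samples with a uniform
    quantizer of the given step; return the quantized indices (what would
    be transmitted) and the signal the decoder reconstructs from them."""
    codes, recon, prev = [], [], 0
    for x in samples:
        q = round((x - prev) / step)   # quantized difference index
        codes.append(q)
        prev += q * step               # decoder-side reconstruction
        recon.append(prev)
    return codes, recon
```

A larger step yields fewer distinct indices (better compressibility) but a larger reconstruction error; sweeping such parameters is exactly the kind of trade-off the evolutionary search explores.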
Item Type: IMT PhD Thesis
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
PhD Course: Computer Science and Engineering
Date Deposited: 06 Jul 2012 14:50