  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
751

Modeling the economics of prevention

Lindgren, Peter, January 2005
Diss. (comprehensive summary), Stockholm: Karolinska Institutet, 2005. / With 4 accompanying papers.
752

Effects of ecological scaling on biodiversity patterns

Antão, Laura H. January 2018
Biodiversity is determined by a myriad of complex processes acting at different scales. Given the current rates of biodiversity loss and change, it is of paramount importance that we improve our understanding of the underlying structure of ecological communities. In this thesis, I focused on Species Abundance Distributions (SAD), as a synthetic measure of biodiversity and community structure, and on Beta (β) diversity patterns, as a description of the spatial variation of species composition. I systematically assessed the effect of scale on both these patterns, analysing a broad range of community data, including different taxa and habitats, from the terrestrial, marine and freshwater realms. Knowledge of the scaling properties of abundance and compositional patterns must be fully integrated in biodiversity research if we are to understand biodiversity and the processes underpinning it, from local to global scales. SADs depict the relative abundance of the species present in a community. Although typically described by unimodal logseries or lognormal distributions, empirical SADs can also exhibit multiple modes. However, the existence of multiple modes in SADs has largely been overlooked, assumed to be due to sampling errors or a rare pattern. Thus, we do not know how prevalent multimodality is, nor do we have an understanding of the factors leading to this pattern. Here, I provided the first global empirical assessment of the prevalence of multimodality across a wide range of taxa, habitats and spatial extents. I employed an improved method combining two model selection tools, and (conservatively) estimated that ~15% of the communities were multimodal with strong support. Furthermore, I showed that the pattern is more common for communities at broader spatial scales and with greater taxonomic diversity (i.e. more phylogenetically diverse communities, since taxonomic diversity was measured as number of families). This suggests a link between multimodality and ecological heterogeneity, broadly defined to incorporate the spatial, environmental, taxonomic and functional variability of ecological systems. Empirical understanding of how spatial scale affects SAD shape is still lacking. Here, I established a gradient in spatial scale spanning several orders of magnitude by decomposing the total extent of several datasets into smaller subsets. I performed an exploratory analysis of how SAD shape is affected by area sampled, species richness, total abundance and taxonomic diversity. Clear shifts in SAD shape can provide information about relevant ecological and spatial mechanisms affecting community structure. There was a clear effect of area, species richness and taxonomic diversity in determining SAD shape, while total abundance did not exhibit any directional effect. The results supported the findings of the previous analysis, with a higher prevalence of multimodal SADs for larger areas and for more taxonomically diverse communities, while also suggesting that species spatial aggregation patterns can be linked to SAD shape. On the other hand, there was a systematic departure from the predictions of two important macroecological theories for SAD across scales, specifically regarding logseries distributions being selected only for smaller scales and when species richness and number of families were proportionally much smaller than the total extent. β diversity quantifies the variation in species composition between sites. 
Although a fundamental component of biodiversity, its spatial scaling properties are still poorly understood. Here, I tested whether two conceptual types of β diversity show systematic variation with scale, while also explicitly accounting for the two β diversity components, turnover and nestedness (species replacement vs species richness differences). I provided the first empirical analysis of β diversity scaling patterns for different taxa, revealing remarkably consistent scaling curves. Total β diversity and turnover exhibited a power-law decay with log area, while nestedness was largely insensitive to changes in scale. For the distance decay of similarity analysis, although the area sampled affected the overall dissimilarity values, rates of decay in similarity were consistent across large variations in sampled area. Finally, in both analyses, turnover was the main contributor to compositional change. These results suggest that species are spatially aggregated from local to regional scales, and illustrate that substantial change in community structure can occur even while species richness remains relatively stable. This systematic and comprehensive analysis of SAD and community similarity patterns highlighted spatial scale, ecological heterogeneity and species spatial aggregation as critical components underlying the results. The work expanded the range of scales, from local plots to continents, over which both SAD theories and community similarity studies have been developed and tested. The results showed strong departures from two important macroecological theories for SADs at different scales. In addition, the overall findings of this thesis indicate that unified theories of biodiversity, which rest on a minimal set of synthetic assumptions, can accommodate neither the variability in SAD shape across spatial scales reported here nor the community similarity patterns observed across scales. Incorporating more realistic, or explicitly scale-dependent, assumptions may prove a fruitful avenue for research on the scaling properties of SADs and community similarity, allowing new predictions to be derived and improving the ability of theoretical models to capture the variability in abundance and similarity patterns across scales.
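As an illustration of the turnover/nestedness partition mentioned in this abstract, the sketch below decomposes pairwise Sørensen dissimilarity into its turnover (Simpson) and nestedness-resultant components in the widely used Baselga-style formulation. The thesis's exact procedure and data are not given here, so the function and the example species sets are illustrative assumptions.

    # Hedged sketch: Baselga-style partition of pairwise Sorensen dissimilarity
    # into turnover (Simpson dissimilarity) and nestedness-resultant components.
    # The example species sets are invented for illustration.

    def beta_partition(site1, site2):
        a = len(site1 & site2)                      # shared species
        b = len(site1 - site2)                      # unique to site 1
        c = len(site2 - site1)                      # unique to site 2
        beta_sor = (b + c) / (2 * a + b + c)        # total dissimilarity
        beta_sim = min(b, c) / (a + min(b, c))      # turnover component
        beta_nes = beta_sor - beta_sim              # nestedness-resultant component
        return beta_sor, beta_sim, beta_nes

    if __name__ == "__main__":
        site1 = {"sp1", "sp2", "sp3", "sp4"}
        site2 = {"sp3", "sp4", "sp5"}
        print(beta_partition(site1, site2))         # (0.43, 0.33, 0.10) approx.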
753

Data aggregation in wireless sensor networks / Agrégation de données dans les réseaux de capteurs sans fil

Cui, Jin 27 June 2016
For several years, data aggregation has been regarded as an emerging and promising field in both academia and industry. Energy and network capacity are saved because fewer data transmissions are needed. This thesis focuses mainly on aggregation functions. We make four main contributions. First, we propose two new metrics to evaluate the performance of aggregation functions from the network point of view: the aggregation ratio and the packet size coefficient. The aggregation ratio measures the gain in packets not transmitted thanks to aggregation, while the packet size coefficient evaluates how packet size varies with the aggregation policy. These metrics quantify the contribution of aggregation to savings in energy and in used capacity, as a function of the routing protocol and the MAC layer considered. Second, to reduce the impact of the raw data collected by the sensors, we propose a data aggregation method that is independent of the physical quantity measured and based on the evolution trends of the data. We show that this method enables efficient spatial aggregation while improving the fidelity of the aggregated data. Third, because most work in the literature implicitly assumes something about the behaviour of the application and/or the network topology, we propose a new aggregation function that is agnostic to the application and to the data to be collected; this function is able to adapt to the measured data and to their dynamic evolution. Finally, we look at tools for classifying aggregation functions: given an application and a target accuracy, how should the best aggregation functions be chosen in terms of performance? The metrics we propose are used to measure the performance of a function, and a Markov decision process is used to compute them. How to characterise a dataset is also discussed, and a classification is proposed within a precise framework. / Wireless Sensor Networks (WSNs) have been regarded as an emerging and promising field in both academia and industry. Currently, such networks are deployed due to their unique properties, such as self-organization and ease of deployment. However, there are still technical challenges that need to be addressed, such as energy and network capacity constraints. Data aggregation, as a fundamental solution, processes information at the sensor level into a useful digest and transmits only the digest to the sink. Energy and capacity consumption are reduced because fewer data packets are transmitted. This thesis investigates aggregation functions, a key category of data aggregation that determines how information is aggregated at the sensor level. We make four main contributions. Firstly, we propose two new networking-oriented metrics to evaluate the performance of aggregation functions: the aggregation ratio and the packet size coefficient. The aggregation ratio measures the energy saved by data aggregation, and the packet size coefficient evaluates the change in network capacity due to data aggregation. Using these metrics, we confirm that data aggregation saves energy and capacity whatever routing or MAC protocol is used. Secondly, to reduce the impact of sensitive raw data, we propose a data-independent aggregation method which benefits from similar data evolution and achieves better recovered fidelity. Thirdly, a property-independent aggregation function is proposed to adapt to dynamic data variations. Compared to other functions, our proposal fits the latest raw data better and achieves real adaptability without assumptions about the application or the network topology. Finally, for a given application and target accuracy, we classify the forecasting aggregation functions by their performance. The networking-oriented metrics are used to measure the function performance, and a Markov Decision Process is used to compute them. A dataset characterization and a classification framework are also presented to guide researchers and engineers in selecting appropriate functions under specific requirements.
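The abstract names the two networking-oriented metrics but does not give their formulas. The sketch below shows one plausible formulation of an aggregation ratio (fraction of transmissions saved) and a packet size coefficient (relative packet size growth); the definitions used in the thesis may differ, so treat these as illustrative assumptions.

    # Hedged sketch: plausible formulations of the two metrics described above.
    # The exact definitions used in the thesis may differ; these are assumptions.

    def aggregation_ratio(packets_without_agg, packets_with_agg):
        # Fraction of transmissions saved thanks to aggregation.
        return 1.0 - packets_with_agg / packets_without_agg

    def packet_size_coefficient(avg_size_with_agg, avg_size_without_agg):
        # Relative change in packet size caused by the aggregation policy.
        return avg_size_with_agg / avg_size_without_agg

    if __name__ == "__main__":
        print(aggregation_ratio(1000, 250))      # 0.75 -> 75% fewer packets sent
        print(packet_size_coefficient(96, 32))   # 3.0  -> packets three times larger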
754

Essays on aggregation and cointegration of econometric models

Silvestrini, Andrea 02 June 2009
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter contains an application of Bayesian techniques to investigate whether the post-transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint.

Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models. A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming the underlying process at the disaggregate frequency is known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples presented. Systematic sampling schemes are also reviewed. Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002, Marcellino, 1999, Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. On the other hand, as pointed out by Marcellino (1999), other important time series features such as cointegration and the presence of unit roots are invariant to temporal aggregation and are not induced by it. Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.

Chapter 2 is an attempt to monitor fiscal variables in the Euro area, building an early warning indicator for assessing the development of public finances in the short run and exploiting the existence of monthly budgetary statistics from France, taken as the example country. The application focuses on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as policy makers are interested in yearly predictions. The short-run forecasting exercises carried out for the years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year are made available. The proposed method could be extremely useful, providing policy makers with a valuable indicator when assessing the development of public finances in the short run (a one-year horizon or even less).

Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared. An aggregate predictor is built by forecasting the aggregate process directly, as it results from contemporaneous aggregation of the data generating vector process. A disaggregate predictor is obtained by aggregating univariate forecasts for the individual components of the data generating vector process. The econometric framework is broadly based on Lütkepohl (1987). The necessary and sufficient condition for the equality of mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition of equality of predictors as stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors. Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved under specific assumptions on the parameters of the VMA(1) structure. Finally, an empirical application involving the problem of forecasting the Italian monetary aggregate M1 on the basis of annual time series ranging from 1948 to 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations.

Chapter 4 deals with a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, Poland was one of the first countries to start the transition process to a market economy (in 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001). The emphasis is on the feasibility of a permanent deficit in the long run, that is, whether a government can continue to operate under its current fiscal policy indefinitely. The empirical analysis of debt stabilization consists of two steps. First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and (interest-inclusive) expenditures and to select the cointegrating rank. This task is complicated by the conceptual difficulty linked to the choice of the prior distributions for the parameters relevant to the economic problem under study (Villani, 2005). Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999). The priors used in the paper lead to straightforward posterior calculations which can be easily performed. Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods. / Doctorat en Sciences économiques et de gestion
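To make the aggregate-versus-disaggregate comparison of Chapter 3 concrete, the toy sketch below forecasts a contemporaneous aggregate both directly and by summing component forecasts, using a simple AR(1) fitted by least squares. The data, lag order and estimator are illustrative assumptions, not the VARMA framework of the chapter.

    import numpy as np

    # Hedged sketch: compare an "aggregate" predictor (forecast the sum directly)
    # with a "disaggregate" predictor (sum the forecasts of the components),
    # using toy AR(1) series fitted by ordinary least squares.

    rng = np.random.default_rng(0)

    def simulate_ar1(phi, n):
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = phi * y[t - 1] + rng.normal()
        return y

    def ar1_forecast(y):
        # OLS estimate of phi, then one-step-ahead forecast.
        phi_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
        return phi_hat * y[-1]

    x1 = simulate_ar1(0.7, 200)
    x2 = simulate_ar1(0.3, 200)

    aggregate_pred = ar1_forecast(x1 + x2)                    # forecast the aggregate
    disaggregate_pred = ar1_forecast(x1) + ar1_forecast(x2)   # sum component forecasts
    print(aggregate_pred, disaggregate_pred)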
755

Agregace hlášení o bezpečnostních událostech / Aggregation of Security Incident Reports

Kapičák, Daniel January 2016
In this thesis, I present an analysis of security incident reports in the IDEA format from Mentat, together with the design and implementation of methods for their aggregation and correlation. The data analysis shows the great diversity of security reports. I then present the design of a simple framework and a system of templates that simplify the design and implementation of aggregation and correlation methods. Finally, I evaluate the designed methods using Mentat database dumps. The results show that they can reduce the number of security reports by up to 90% without losing any significant information.
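The abstract does not spell out the aggregation rules themselves. The sketch below shows one generic, template-style rule: reports sharing a source and category within a time window are collapsed into a single aggregate record. The field names are hypothetical, not the actual IDEA schema or the thesis's templates.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hedged sketch: collapse reports sharing (source, category) within a time
    # window into one aggregate record. Field names are illustrative only.

    def aggregate_reports(reports, window=timedelta(minutes=10)):
        buckets = defaultdict(list)
        for r in reports:
            buckets[(r["source"], r["category"])].append(r)

        aggregates = []
        for (source, category), group in buckets.items():
            group.sort(key=lambda r: r["time"])
            current = None
            for r in group:
                if current and r["time"] - current["last_time"] <= window:
                    current["count"] += 1          # extend the running aggregate
                    current["last_time"] = r["time"]
                else:                              # start a new aggregate record
                    current = {"source": source, "category": category,
                               "first_time": r["time"], "last_time": r["time"],
                               "count": 1}
                    aggregates.append(current)
        return aggregates

    if __name__ == "__main__":
        t0 = datetime(2016, 1, 1, 12, 0)
        reports = [{"source": "10.0.0.1", "category": "Recon.Scanning",
                    "time": t0 + timedelta(minutes=i)} for i in range(5)]
        print(aggregate_reports(reports))   # five reports collapse into one record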
756

Modifying naphthalene diimide copolymers for applications in thermoelectric devices

Shin, Younghun 16 October 2020
The aim of this thesis is to modify and improve the n-type semiconducting polymer PNDIT2 for applications in thermoelectric generators (TEGs). Although PNDIT2 is considered a prime n-type material due to its high electron mobility, the low air stability of its radical anions after doping and the low doping efficiency with molecular dopants are severe drawbacks that limit its application in TEGs. To this end, the backbone structure of PNDIT2 is modified by polymer-analogous thionation, and the branched aliphatic side chains are replaced by branched, fully oligoethylene glycol-based side chains. PNDIT2 was prepared by DAP and subjected to thionation under various conditions. The polymer-analogous thionation of PNDIT2 was carried out using Lawesson's reagent (LR), and the O/S conversion was controlled via the solvent, the temperature and the amount of LR. For an excess of LR, only two of the four carbonyls present in the NDI repeating unit are converted to thiocarbonyls, with regioselective trans-conformation (2S-trans-PNDIT2). Chlorobenzene (CB) proved an excellent solvent, giving the highest O/S conversion and the best reproducibility. Tri- or tetra-substitution within one NDI repeat unit did not take place owing to the steric hindrance of the T2 comonomer. Thionation affected all properties: thermal stability was lower, the UV-vis spectra were bathochromically shifted, and a new band of the thionated NDI unit appeared. Chain aggregation was stronger, as probed by UV-vis and NMR spectroscopy. The LUMO energy level of 2S-trans-PNDIT2 was lowered by 0.2 eV to -4.0 eV, which is at the border of what is needed for air stability of radical anions. Scattering on thin films indicated lower order and less crystalline textures for 2S-trans-PNDIT2 compared to PNDIT2; likewise, electron mobility decreased with increasing conversion. While chapter 2 focused on the synthesis and the opto-electronic and thermal properties of 2S-trans-PNDIT2, chapter 3 was concerned with details of its morphology and electrical properties. To this end, 2S-trans-PNDIT2 was doped with N-DPBI in toluene at various concentrations and the conductivities were determined. Undoped 2S-trans-PNDIT2 exhibited a conductivity one order of magnitude higher than pristine PNDIT2. After doping with 5 wt.-% N-DPBI, the conductivity of 2S-trans-PNDIT2 increased by two orders of magnitude and reached a maximum of 6*10^-3 S/cm at 15 wt.-% doping, approximately 5 times higher than the conductivity of PNDIT2 at the same doping level. Furthermore, the stability of the conductivity of doped 2S-trans-PNDIT2 under ambient conditions was investigated and compared to PNDIT2. Upon exposure to air (50 % humidity), the conductivity of PNDIT2 rapidly decreased to the level of the pristine film, while the conductivity of 2S-trans-PNDIT2 was reduced by a factor of less than two after 16 h. While the initially higher conductivity of 2S-trans-PNDIT2 is ascribed to its less crystalline structure and thus higher doping efficacy, its better stability can be ascribed to the lower LUMO energy level. Chapter 4 deals with the synthesis of fully ether-based, polar and branched side chains (EO) and their introduction into PNDIT2. The advantages of polar over aliphatic side chains have been reported; however, previously reported PNDIT2 with linear polar side chains is limited in molecular weight (MW) due to solubility. The EO side chain with amine functionality was synthesized in three steps and used for monomer synthesis (EO-NDIBr2). Initial efforts to use DAP to prepare P(EO-NDIT2) from EO-NDIBr2 and pristine bithiophene gave only oligomeric products. Stille polycondensation was therefore used, giving high MW. As extreme aggregation occurred in the solvents used for GPC, absolute MW were determined by 1H NMR spectroscopy. To enable reliable end group analysis, model compounds with methyl end groups were prepared. In P(EO-NDIT2), methyl end groups dominate as a result of incorrect transmetalation from the stannylated monomer. The end groups seen by 1H NMR spectroscopy were further confirmed by MALDI-ToF. Absolute MW were between Mn,NMR = 11 kg/mol and 116 kg/mol, depending on reaction conditions. Aggregation was further probed by UV-vis and NMR spectroscopy as a function of solvent and temperature, shedding light on the degree of aggregation, which is important for thin film preparation. Solvent quality decreased in the following order: CHCl3, 1-chloronaphthalene (CN), 1,2-dichlorobenzene (o-DCB), DMF, 1,4-dioxane, CB and anisole (AN). Based on these results, three doping protocols using CB and o-DCB, as well as temperature variations, were used to prepare films for conductivity measurements. The best results were obtained for processing from chlorobenzene at 80 °C, at which the aggregates are dissolved. Strikingly, maximum conductivity values were already achieved at a dopant concentration of 5 wt.-%, and the power factor (PF) reached its maximum even at a 1 wt.-% doping level. This unusually low doping level is promising and suggests a high doping efficacy.
757

Aggregation, Filterung und Visualisierung von Nachrichten aus heterogenen Quellen - Ein System für den unternehmensinternen Einsatz / Aggregation, filtering and visualization of messages from heterogeneous sources - a system for enterprise-internal use

Lunze, Torsten, Feldmann, Marius, Eixner, Thomas, Canbolat, Serkan, Schill, Alexander January 2009
No description available.
758

Numerical Methods for the Chemical Master Equation

Zhang, Jingwei 20 January 2010
The chemical master equation, formulated on the Markov assumption for the underlying chemical kinetics, offers an accurate stochastic description of general chemical reaction systems on the mesoscopic scale. It is especially useful when formulating mathematical models of gene regulatory networks and protein-protein interaction networks, where the numbers of molecules of most species are in the tens or hundreds. However, solving the master equation directly suffers from the so-called "curse of dimensionality". This thesis first studies the numerical properties of the master equation using existing numerical methods and parallel machines. Next, approximation algorithms, namely the adaptive aggregation method and the radial basis function collocation method, are proposed as new paths to resolve the curse of dimensionality. Several numerical results are presented to illustrate the promise and potential problems of these new algorithms. Comparisons with other numerical methods, such as Monte Carlo methods, are also included. The development and analysis of the linear Shepard algorithm and its variants, all of which can be used for high-dimensional scattered data interpolation problems, are also included as a candidate for helping to solve the master equation by building surrogate models in high dimensions. / Ph. D.
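As a concrete illustration of the equation discussed in this abstract, the sketch below assembles the generator matrix of the chemical master equation for a simple birth-death process (production at rate k, degradation at rate gamma*n) on a truncated state space and propagates the probability vector with a matrix exponential. The reaction system, parameters and truncation are illustrative choices, not taken from the thesis.

    import numpy as np
    from scipy.linalg import expm

    # Hedged sketch: chemical master equation dP/dt = A P for a birth-death
    # process (0 -> X at rate k, X -> 0 at rate gamma * n), truncated at n_max.

    k, gamma, n_max = 5.0, 0.5, 60
    A = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        if n < n_max:
            A[n + 1, n] += k             # birth: n -> n + 1
            A[n, n] -= k
        if n > 0:
            A[n - 1, n] += gamma * n     # death: n -> n - 1
            A[n, n] -= gamma * n

    P0 = np.zeros(n_max + 1)
    P0[0] = 1.0                          # start with zero molecules
    P_t = expm(A * 20.0) @ P0            # distribution at t = 20 (near steady state)
    print(P_t.sum(), P_t.argmax())       # mass ~1, mode near k/gamma = 10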
759

Parallel Execution of Order Dependent Grouping Functions

Peters, Mathias 29 October 2024
The exponential growth of electronically stored data requires powerful systems for processing and analysing large data volumes. Parallel relational database systems (PRDBMS) were long the standard for analytical queries. Newer systems, such as Apache Flink, Tez and Spark, use extended approaches to analysis and separate logical specifications from physical execution. A widely used optimization technique in analytical processing is partial aggregation, in which aggregation takes place in two stages: first, partial aggregate groups are created, which are then merged to compute the final result. This technique enables parallel processing and reduces the size of intermediate results. Existing approaches concentrate on order-agnostic grouping functions, in which elements can be grouped without regard to their order. In practice, however, there are also order-dependent grouping functions, which depend on the order of the inputs and are more complex to execute in parallel. At present, only limited approaches exist for efficiently parallelizing such functions. This dissertation presents a new approach to parallelizing aggregation queries for three order-dependent grouping functions: sessionization, Regular Expression Matching (REM) and Complex Event Recognition (CER). Our method uses decomposable aggregation functions to enable efficient parallel execution in modern shared-nothing compute environments. The staged execution of these functions opens up new optimization opportunities: our approach allows optimization algorithms to choose between sequential and staged execution strategies. In addition, the thesis proposes a scheme for decomposing further grouping functions and integrating them into partial aggregation. / Advances in information technologies and decreasing costs for storage and compute capacities lead to exponential growth of the data available electronically worldwide. Systems capable of processing these large amounts of data with the goal of analyzing and extracting information are essential for both research and business. Analytical data processing systems employ various optimizations to execute queries efficiently. Partial Aggregation (PA) using GroupBy and decomposable aggregation functions is a common optimization approach in analytical query processing. Analytical systems execute PA in two stages: during the first stage, they create partial groups to compute partial aggregates; during the second stage, the partial aggregates are grouped and aggregated again to produce the final result. The main benefits of PA are an increased potential for parallel execution during the first stage and a reduction of intermediate result sizes by aggregating over the partial groups. So far, existing approaches to PA only use an order-agnostic grouping function on sets to create groups. There are, however, grouping functions that depend on ordered input and on information about previously processed input items to associate a given input item with its group. Staged execution of order-dependent grouping functions is more difficult than for order-agnostic grouping functions: systems must compute correct partial states during the first stage and combine them during the final stage. Approaches for efficient parallel execution exist only in a limited way, despite the high practical relevance.
In this thesis, we present a novel approach for parallelizing aggregation for three order-dependent grouping functions: Sessionization, Regular Expression Matching (REM), and Complex Event Recognition (CER). Our approach of computing the three grouping functions in stages combined with decomposable aggregation functions allows for efficient parallel execution in state-of-the-art shared-nothing compute environments.
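To illustrate the staged execution idea for an order-dependent grouping function, the sketch below runs sessionization per ordered partition and then merges sessions that touch across partition boundaries, combined with a decomposable COUNT aggregate. The timeout, the merge rule and the data are illustrative assumptions rather than the thesis's actual operators.

    # Hedged sketch: two-stage (partial) aggregation for sessionization with a
    # decomposable COUNT aggregate. Stage 1 builds sessions inside each ordered
    # partition; stage 2 merges sessions whose gap across partitions is within
    # the timeout. Timestamps are in seconds and invented for illustration.

    TIMEOUT = 30

    def local_sessions(events):
        # events: sorted timestamps of one partition -> list of (start, end, count)
        sessions = []
        for t in events:
            if sessions and t - sessions[-1][1] <= TIMEOUT:
                start, _, count = sessions[-1]
                sessions[-1] = (start, t, count + 1)
            else:
                sessions.append((t, t, 1))
        return sessions

    def merge_sessions(partials):
        # partials: per-partition session lists, with partitions ordered by time
        merged = []
        for sessions in partials:
            for s in sessions:
                if merged and s[0] - merged[-1][1] <= TIMEOUT:
                    start, _, count = merged[-1]
                    merged[-1] = (start, s[1], count + s[2])   # stitch boundary session
                else:
                    merged.append(s)
        return merged

    part1 = local_sessions([0, 10, 20, 100])    # boundary session ends at t = 100
    part2 = local_sessions([110, 125, 300])     # starts 10 s after partition 1 ends
    print(merge_sessions([part1, part2]))       # sessions at 0-20, 100-125, 300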
760

Data aggregation in sensor networks

Kallumadi, Surya Teja January 1900
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / Severe energy constraints and the limited computing abilities of the nodes present a major challenge in the design and deployment of a wireless sensor network. This thesis presents energy-efficient algorithms for data fusion and information aggregation in a sensor network. The data fusion methodologies presented here aim to reduce the data traffic within a network by mapping the sensor network application task graph onto the sensor network topology. Partitioning an application into sub-tasks that can be mapped onto the nodes of a sensor network offers opportunities to reduce the network's overall energy consumption. The first approach proposes grid-based, coordinated incremental data fusion and routing with heterogeneous nodes of varied computational abilities. In this approach, high-performance nodes arranged in a mesh-like structure spanning the network topology are present amongst the resource-constrained nodes. The sensor network protocol performance, measured in terms of hop count, is analysed for various grid sizes of the high-performance nodes. To reduce network traffic and increase energy efficiency in a randomly deployed sensor network, distributed clustering strategies that consider network density and structural similarity are applied to the network topology. The clustering methods aim to improve the energy efficiency of the sensor network by dividing the network into logical clusters and mapping the fusion points onto the clusters. Routing of network information is performed by inter-cluster and intra-cluster routing.
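The traffic reduction from in-network fusion can be pictured with a small tree-aggregation sketch: each node combines its children's partial results with its own reading and forwards a single value to its parent, so one message crosses each link instead of every raw reading being relayed to the sink. The topology, the readings and the SUM fusion function below are illustrative assumptions.

    # Hedged sketch: in-network aggregation on a routing tree. Each node fuses its
    # children's partial aggregates with its own reading and forwards one value,
    # instead of relaying every raw reading to the sink. Topology/readings invented.

    tree = {                 # parent -> children
        "sink": ["a", "b"],
        "a": ["c", "d"],
        "b": [],
        "c": [],
        "d": [],
    }
    readings = {"a": 4, "b": 7, "c": 1, "d": 3}   # the sink has no sensor reading

    def aggregate(node):
        partial = readings.get(node, 0)
        for child in tree.get(node, []):
            partial += aggregate(child)           # one fused message per tree edge
        return partial

    print(aggregate("sink"))   # 15; fusion sends 4 messages (one per edge) vs 6 raw relays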
