  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Utilização de wavelets no processamento de sinais EMG / Use of wavelets in EMG signal processing

Ricciotti, Antonio Carlos Duarte 27 November 2006
This study proposes an approach to the analysis of EMG signals using wavelet transforms as a feature extraction method. The methodology is based on the aggregated power envelope and the aggregated power spectrum envelope, which are obtained from the energy distribution of a signal, computed from the power of its wavelet coefficients and displayed as a wavelet spectrogram or a wavelet scalogram. The EMG signals were captured on the surface of the skin and came from the rectus femoris muscle of the right thigh in static (isometric) contraction, from the flexor muscle of the right wrist in dynamic (isotonic) contraction, and from a train of motor unit action potentials (MUAPs) of the first dorsal interosseous muscle during dynamic contractions. With these signals, the investigation proceeded in two phases: feature extraction based on the analytic wavelet transform (AWT) in contracting muscles (isometric and isotonic), and detection of MUAPs.

In the AWT phase, using the envelopes computed on the time-frequency plane (spectrogram), the results showed that the wavelet transform can be applied to extract the spectral content of the signal, and that the power spectrum and the energy of the signal over time matched the characteristics expected for EMG signals as reported in the literature. In the MUAP detection phase, the envelopes were computed from the scalogram (time-scale diagram), using the Daubechies wavelet of order 4 (db4), the Coiflet of order 4 (coif4) and the Symlet of order 5 (sym5) as mother wavelets. The results showed that the method can locate MUAPs in time and is sensitive enough to detect signals from motor units far from the sensor that contribute to the formation of the EMG signal. The db4 wavelet proved best at detecting the onset of muscle activity (set-on), because its shape resembles a MUAP. The study suggests that future work could investigate further wavelet families for EMG analysis, apply the aggregated power envelope method to the control of upper-limb prostheses, use MUAP detection as a tool for muscle assessment in the diagnosis of myopathies and neuromuscular disorders, and extend envelope-based feature extraction to other biomedical signals such as the EEG and the ECG. / Mestre em Ciências
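The onset-detection idea in this abstract (an aggregated power envelope of wavelet detail coefficients, thresholded against a resting baseline) can be sketched as follows. This is an illustrative simplification, not the thesis method: a one-level Haar detail transform stands in for the db4 analysis, and the signal, window size and threshold factor are invented for the example.

```python
import math
import random

def haar_detail(signal):
    # One-level Haar DWT detail coefficients: scaled differences of sample pairs.
    return [(signal[i] - signal[i + 1]) / math.sqrt(2.0)
            for i in range(0, len(signal) - 1, 2)]

def energy_envelope(coeffs, win=16):
    # Aggregated power envelope: windowed sum of squared wavelet coefficients.
    return [sum(c * c for c in coeffs[i:i + win])
            for i in range(0, len(coeffs), win)]

def detect_onset(envelope, factor=5.0, baseline_windows=4):
    # Flag the first window whose energy exceeds `factor` times the mean
    # energy of the initial baseline (resting) windows.
    base = sum(envelope[:baseline_windows]) / baseline_windows
    for i in range(baseline_windows, len(envelope)):
        if envelope[i] > factor * base:
            return i
    return None

# Synthetic "EMG": low-amplitude noise, then a high-amplitude burst at sample 1024.
rng = random.Random(0)
signal = [rng.gauss(0.0, 0.05) for _ in range(1024)] + \
         [rng.gauss(0.0, 1.0) for _ in range(1024)]
onset_window = detect_onset(energy_envelope(haar_detail(signal)))
```

With two samples per detail coefficient and 16 coefficients per window, the burst beginning at sample 1024 falls in envelope window 32, which is where the detector fires.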
42

Bayesian inference in aggregated hidden Markov models

Marklund, Emil January 2015
Single-molecule experiments study the kinetics of molecular biological systems. Many such studies generate data that can be described by aggregated hidden Markov models, so there is a need to perform inference on such data and models. In this study, model selection in aggregated hidden Markov models was performed with a criterion of maximum Bayesian evidence. Variational Bayes inference was seen to underestimate the evidence for aggregated model fits. Estimating the evidence integral by brute-force Monte Carlo integration converges to the correct value in theory, but not in tractable time. Nested sampling is a promising method for solving this problem through faster Monte Carlo integration, but it was seen here to have difficulty generating uncorrelated samples.
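The brute-force Monte Carlo evidence estimate mentioned above averages the likelihood over draws from the prior. A minimal sketch on a toy conjugate model (not the aggregated HMM of the thesis), where the evidence is known in closed form so the estimate can be checked:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def brute_force_evidence(x_obs, n_samples, seed=1):
    # Z = integral of L(theta) * prior(theta) d(theta) = E_prior[L(theta)],
    # estimated by averaging the likelihood over prior draws.
    rng = random.Random(seed)
    return sum(normal_pdf(x_obs, rng.gauss(0.0, 1.0), 1.0)
               for _ in range(n_samples)) / n_samples

# Prior: theta ~ N(0, 1); likelihood: x_obs ~ N(theta, 1).
x_obs = 0.5
z_exact = normal_pdf(x_obs, 0.0, math.sqrt(2.0))  # analytic marginal: N(0, 1 + 1)
z_mc = brute_force_evidence(x_obs, 50_000)
```

In this one-dimensional toy case the estimator converges quickly; the thesis's point is that for realistic aggregated models the same scheme converges far too slowly, motivating nested sampling.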
43

Entwicklung eines aggregierten Modells zur Simulation der Gewässergüte in Talsperren als Baustein eines Flussgebietsmodells / Development of an aggregated model for simulating reservoir water quality as a component of a river basin model

Siemens, Katja 27 March 2009
The large-scale extraction of lignite in Lusatia severely altered the water balance of the Spree river catchment. The restoration and flooding of the opencast pits will place heavy demand on existing surface waters for a long time, and the resulting artificial lakes have to be integrated into the river network. Coupling water management models with water quality models makes it possible to consider both the availability and distribution of the limited water resources in the catchment and the water quality that results from their management. This corresponds to the principles of the EU Water Framework Directive (2000) for integrated river basin management, which calls for a basin-wide consideration of the available resources, taking into account all influencing and influenced factors. Coupling models that describe systems of different sensitivity and complexity requires adapting their data structures and time scales. The main focus of this work was the development of simple, robust simulation tools for predicting water quality in the Bautzen and Quitzdorf reservoirs. The complex lake water quality model SALMO served as the basis. The model was first extended with simple algorithms so that it produced plausible results despite a strongly reduced input data set. Stochastically generated management scenarios, together with the corresponding water quality results simulated by the complex model, were used as training data for an artificial neural network (ANN). The ANNs trained for both reservoirs are efficient black-box modules able to reproduce the complex system behaviour of the deterministic model SALMO. By coupling the developed ANNs with the management model WBalMo, management alternatives can be evaluated with respect to their consequences for water quality.

ANNs are system-specific models that cannot be transferred to other water bodies. However, the methodology developed here represents a well-founded approach that can be applied to the development of further aggregated water quality modules within integrated management models.
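The surrogate-modelling idea above (train a small neural network on scenario/output pairs produced by a complex deterministic model, then use the network as a cheap stand-in) can be sketched as follows. Everything here is an assumption for illustration: `simulate_quality` is a made-up stand-in for a SALMO-like model, and the tiny hand-rolled network is not the thesis's ANN architecture.

```python
import math
import random

def simulate_quality(inflow, nutrient_load):
    # Stand-in for the complex deterministic water quality model:
    # purely illustrative, not the real model equations.
    return math.tanh(0.8 * inflow - 1.2 * nutrient_load)

class TinySurrogate:
    """A 2-input, one-hidden-layer tanh network trained by plain SGD."""
    def __init__(self, hidden=6, seed=2):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        self.b2 = 0.0

    def predict(self, x):
        h = [math.tanh(w[0] * x[0] + w[1] * x[1] + b)
             for w, b in zip(self.w1, self.b1)]
        return sum(w * hj for w, hj in zip(self.w2, h)) + self.b2, h

    def train_step(self, x, y, lr=0.03):
        out, h = self.predict(x)
        err = out - y
        for j, hj in enumerate(h):
            grad_hidden = err * self.w2[j] * (1.0 - hj * hj)  # backprop through tanh
            self.w2[j] -= lr * err * hj
            self.b1[j] -= lr * grad_hidden
            self.w1[j][0] -= lr * grad_hidden * x[0]
            self.w1[j][1] -= lr * grad_hidden * x[1]
        self.b2 -= lr * err
        return err * err

# Stochastically generated management scenarios, labelled by the complex model.
rng = random.Random(3)
scenarios = [(rng.uniform(0.0, 2.0), rng.uniform(0.0, 2.0)) for _ in range(200)]
training = [(x, simulate_quality(*x)) for x in scenarios]

net = TinySurrogate()
initial_mse = sum(net.train_step(x, y) for x, y in training) / len(training)
for _ in range(199):
    final_mse = sum(net.train_step(x, y) for x, y in training) / len(training)
```

Once trained, the network answers "what quality results from this management scenario?" at a fraction of the cost of the full model, which is what makes coupling with a management model such as WBalMo tractable.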
44

Att kombinera Värdeflödesanalys med Diskret händelsesimulering : För att ge support till beslutsfattandet och förbättringsprocessen inom tillverkningsindustrin / Combining Value Stream Mapping with Discrete Event Simulation : To support the decision-making and improvement process in the manufacturing industry

Josefsson, Isagel, Stenholm, Klara January 2023
Purpose: The purpose of the research is to explore how to use and implement a combination of the tools value stream mapping and discrete event simulation, to support the decision-making in the improvement process in an organization. The research further takes the sustainability aspect into consideration. Method: A deductive approach is applied in order to explore and elaborate on existing theories to fill the identified gap in the research field. To identify already existing theories, a literature review along with a systematic literature review were conducted. Further, to enable the exploration of the purpose, a case study was performed to elaborate on the industrial implementation of the combinational tool use. Findings: The studied research field is well researched but has an identified gap regarding the combinational tool use and its industrial application. Even though the advantages and challenges of integrating the tools are well elaborated on in existing research, the methodology on how to use and implement them in an organization was identified as inadequate, and this research therefore proposes such a contribution. One of the main conclusions of the study is that integrating DES with VSM early in the process is greatly advantageous in order to utilize their complementary capabilities and thus obtain more decisive facts right from the start, from the so-called current state. Implications: The conclusions of the conducted research showed that there are several advantages with the combinational tool use but also some challenges that need to be taken into account. In order to bridge these challenges and to take them into consideration when implementing the tool combination, an approach was developed to support the decision-making within the improvement process. In this study, to enable tool integration on complex production systems, aggregation modeling was applied to generate a reliable replica of the system.

Limitations: This study contains some limitations, where the most influential limitation is related to the data collection, both in regard to time constraints and validity. Due to the limited time that the researchers had at the factory site, the data collection period was limited to less than 3 months. This in turn resulted in the use of smaller sample sizes. The data collected from the case company were generalized when creating the models; this was because the data were too complex to fully compile and understand, so a "good enough" perspective was applied. This perspective is not considered to affect the validity or reliability of the study in the wider sense.
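Discrete event simulation, the dynamic half of the VSM/DES combination described above, can be illustrated with a minimal event-driven model of a two-station production line. This is a generic sketch with invented cycle times, not the case company's system: events are kept in a priority queue ordered by time, and the simulation advances by processing the earliest event.

```python
import heapq

def simulate_line(num_jobs, t1, t2):
    """DES of two stations in series: one job in process per station,
    an unbounded buffer between them, deterministic cycle times t1 and t2."""
    events = [(t1, 0, "s1_done", 0)]  # station 1 starts job 0 at time 0
    seq = 1                           # tiebreaker so heap never compares strings
    next_job = 1
    buffer2 = []                      # jobs waiting for station 2
    s2_busy = False
    now = 0.0
    while events:
        now, _, kind, job = heapq.heappop(events)
        if kind == "s1_done":
            buffer2.append(job)
            if next_job < num_jobs:   # station 1 immediately pulls the next job
                heapq.heappush(events, (now + t1, seq, "s1_done", next_job))
                seq += 1
                next_job += 1
            if not s2_busy:           # station 2 idle: start the waiting job
                s2_busy = True
                heapq.heappush(events, (now + t2, seq, "s2_done", buffer2.pop(0)))
                seq += 1
        else:  # s2_done
            if buffer2:
                heapq.heappush(events, (now + t2, seq, "s2_done", buffer2.pop(0)))
                seq += 1
            else:
                s2_busy = False
    return now  # time the last job leaves station 2 (the makespan)

# Ten jobs through a 2.0-minute station feeding a 3.0-minute bottleneck.
makespan = simulate_line(10, 2.0, 3.0)
```

For this deterministic line the makespan is t1 + 10 * t2 = 32.0, which matches what a static VSM calculation would predict; the value of DES appears once variability, breakdowns, or finite buffers are added to the same model.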
45

Query optimization by using derivability in a data warehouse environment

Albrecht, Jens, Hümmer, Wolfgang, Lehner, Wolfgang, Schlesinger, Lutz 10 January 2023
Materialized summary tables and cached query results are frequently used for the optimization of aggregate queries in a data warehouse. Query rewriting techniques are incorporated into database systems to use these materialized views and thus avoid accessing the possibly huge raw data. A rewriting is only possible if the query is derivable from the views. Several approaches to checking derivability and finding query rewritings can be found in the literature. The specific application scenario of a data warehouse, with its multidimensional perspective, allows much more semantic information to be considered, e.g. structural dependencies within the dimension hierarchies and the different characteristics of measures. The motivation of this article is to use this information to establish conditions for derivability in a large number of relevant cases that go beyond previous approaches.
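A toy version of such a derivability check can make the idea concrete. The sketch below encodes two of the conditions the abstract alludes to, with an invented hierarchy and measure set (the article's actual conditions are more general): every query grouping attribute must be reachable by rolling up a view attribute along the dimension hierarchy, and every query aggregate must be re-computable from the view's aggregates.

```python
# Roll-up hierarchy on dimension attributes: child -> direct parent.
ROLLUP = {"day": "month", "month": "year", "city": "region", "region": "country"}

def rolls_up_to(attr, target):
    # True if `target` is reachable from `attr` by following the hierarchy.
    while attr is not None:
        if attr == target:
            return True
        attr = ROLLUP.get(attr)
    return False

def agg_derivable(query_agg, view_aggs):
    func, measure = query_agg
    # SUM/MIN/MAX/COUNT can be re-aggregated from partial results of the
    # same function (COUNT by summing partial counts); AVG needs SUM and COUNT.
    if func in ("SUM", "MIN", "MAX", "COUNT") and (func, measure) in view_aggs:
        return True
    if func == "AVG":
        return ("SUM", measure) in view_aggs and ("COUNT", "*") in view_aggs
    return False

def derivable(query_groupby, query_aggs, view_groupby, view_aggs):
    dims_ok = all(any(rolls_up_to(v, q) for v in view_groupby)
                  for q in query_groupby)
    return dims_ok and all(agg_derivable(a, view_aggs) for a in query_aggs)

# View: sales summarized per (day, city) with SUM(sales) and COUNT(*).
view_gb = ("day", "city")
view_aggs = {("SUM", "sales"), ("COUNT", "*")}
ok = derivable(("month", "region"), [("AVG", "sales")], view_gb, view_aggs)
not_ok = derivable(("month", "region"), [("MIN", "sales")], view_gb, view_aggs)
```

`AVG(sales)` per month and region is derivable (roll up day to month and city to region, then divide the summed sums by the summed counts), whereas `MIN(sales)` is not, since the view kept no minima.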
46

The role of P2Y₂ nucleotide receptor in lipoprotein receptor-related protein 1 expression and aggregated low density lipoprotein uptake in vascular smooth muscle cells

Dissmore, Tixieanna January 1900
Doctor of Philosophy / Department of Human Nutrition / Denis M. Medeiros / Laman Mamedova / The internalization of aggregated low-density lipoprotein (agLDL) may involve the actin cytoskeleton in ways that differ from the endocytosis of soluble LDL. Previous findings indicate that the P2Y₂ receptor (P2Y₂R) mediates these effects through interaction with filamin-A (FLN-A), an actin-binding protein. Our findings also showed that uridine 5'-triphosphate (UTP), a preferential agonist of the P2Y₂R, stimulates the uptake of agLDL and increases expression of low-density lipoprotein receptor-related protein 1 (LRP1) in cultured mouse vascular smooth muscle cells (SMCs). The strategy of this research was to define novel mechanisms of LDL uptake through modulation of the actin cytoskeleton, in order to identify molecular targets involved in foam cell formation in vascular SMCs. For this project, we isolated aortic SMCs from wild-type (WT) and P2Y₂R-/- mice to investigate whether UTP and the P2Y₂R modulate expression of LRP1 and the low-density lipoprotein receptor (LDLR). We also investigated the effects of UTP on uptake of DiI-labeled agLDL in WT and P2Y₂R-/- vascular SMCs. For LRP1 expression, cells were stimulated in the presence or absence of 10 μM UTP. To determine LDLR mRNA expression, and for agLDL uptake, cells were transiently transfected for 24 h with cDNA encoding hemagglutinin-tagged (HA-tagged) WT P2Y₂R or a mutant P2Y₂R that does not bind FLN-A, and afterwards treated with 10 μM UTP. Total RNA was isolated and reverse transcribed to cDNA, and relative mRNA abundance was determined by RT-PCR using the delta-delta Ct method with GAPDH as the control gene. The results show that SMCs expressing the mutant P2Y₂R lacking the FLN-A binding domain exhibit 3-fold lower LDLR expression than SMCs expressing the WT P2Y₂R.

There was also a decrease in LRP1 mRNA expression in response to UTP in P2Y₂R-/- SMCs compared to WT. Actinomycin-D (20 μg/ml) significantly reduced UTP-induced LRP1 mRNA expression in P2Y₂R-/- SMCs (P < 0.05). Compared to cells transfected with the mutant P2Y₂R, cells transfected with WT P2Y₂R showed greater agLDL uptake in both WT and P2Y₂R-/- SMCs. Together these results show that both LRP1 and LDLR expression depend on an intact P2Y₂R, and that the P2Y₂R/FLN-A interaction is necessary for agLDL uptake.
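The delta-delta Ct calculation used above to quantify relative mRNA abundance is a short, standard computation: normalise the target gene's Ct to the reference gene (here GAPDH) within each sample, difference the treated and control conditions, and convert to a fold change assuming perfect doubling per cycle. The Ct values below are hypothetical, chosen only to show the arithmetic.

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    # delta Ct: target normalised to the reference gene within each sample.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    # delta-delta Ct: treated relative to control; fold change = 2^(-ddCt).
    return 2.0 ** (-(d_ct_treated - d_ct_control))

# Hypothetical Ct values: the target amplifies two cycles later (relative to
# the reference) after treatment, i.e. ddCt = +2, a 4-fold drop in expression.
fc = ddct_fold_change(26.0, 18.0, 24.0, 18.0)
```

A fold change of 0.25 corresponds to the kind of "3-fold lower expression" statement in the abstract (which would arise from a ddCt of about +1.6).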
47

台灣寬頻影音匯聚網站之核心資源與競爭態勢分析 / An Analysis of Core Resources and Competition for the Video Aggregated Webcasters in Taiwan

蔡坤哲, Tsai, Kun Che Unknown Date
Websites that deliver audio and video services over broadband have become popular with the convergence of the internet and audio/video services. In this research, a video aggregated webcaster is defined as a broadband audio/video streaming website presented as a portal site: it collects audio/video content of all kinds and, by aggregating a large volume of content, offers consumers a complete and diverse range of audio/video entertainment services. The webcasting industry has become very competitive, as operators from different industries, including telecommunication and internet companies and entertainment-leading television stations, have entered the market. Since 2004, a series of mergers and acquisitions has taken place in the Taiwanese webcasting market, and to build competitive advantages some operators have introduced video blogs and other value-added services such as online movie downloads. Under the trend of digital convergence, the uncertainty of this competitive environment has increased accordingly.

From the perspective of resource-based theory, enterprise operation is a durable and persistent effort: only by continuously acquiring and accumulating resources can a firm sustain strong competitive advantages. This research therefore examines the core resources of the video aggregated webcasters in Taiwan: what core resources they currently hold, what competitive niches those resources provide, and how the webcasters should develop or extend their core resources to build competitive advantage. To address these questions, the research applies document analysis and in-depth interviews to explore the ecology and environment of the webcasting industry in Taiwan and to identify the core resources of today's video aggregated webcasters and their characteristics. It further discusses how the webcasters can strengthen or extend those resources across five competitive dimensions: market, channel, network, chain and consumption.

Combining the document analysis with the interview results, and in addition to practical recommendations for each studied case, the research concludes that the core resources of a video aggregated webcaster work in combination. The common trait of the market leaders is command of key resources such as bandwidth, content, technology, capital and financial management capability. Operators also place great emphasis on cultivating brand reputation, audio/video compression and streaming capability, and team and business execution capability. In the future, operators can further strengthen their interpersonal network integration and their marketing experience and competence in order to improve their competitiveness in the market.
48

Analyse bayésienne et classification pour modèles continus modifiés à zéro / Bayesian analysis and clustering for zero-modified continuous models

Labrecque-Synnott, Félix 08 1900
Zero-inflated models, both discrete and continuous, have a large variety of applications and fairly well-known properties. Some work has been done on zero-deflated and zero-modified discrete models, but the usual formulation of continuous zero-inflated models, a mixture between a continuous density and a Dirac mass at zero, precludes their extension to the zero-deflated case. We introduce an alternative formulation of zero-inflated continuous models, along with a natural extension to the zero-deflated case. Parameter estimation is first studied within the classical frequentist framework, and several methods for obtaining the maximum likelihood estimators are proposed. Point estimation is also considered from a Bayesian point of view. Hypothesis testing, aiming to determine whether data are zero-inflated, zero-deflated or not zero-modified, is considered under both the classical and Bayesian paradigms. The proposed estimation and testing methods are assessed through simulation studies and applied to aggregated rainfall data, which are shown to be zero-deflated, demonstrating the relevance of the proposed model. We next consider the clustering of samples of zero-deflated data. Such data are strongly non-normal, so the usual methods for determining the number of clusters can be expected to perform poorly. We argue that Bayesian clustering based on the marginal distribution of the observations takes the particularities of the model into account and exhibits better performance. Several clustering methods are compared in a simulation study, and the proposed method is applied to aggregated rainfall data from 28 measuring stations in British Columbia.
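The classical mixture formulation the abstract starts from (a point mass at zero plus a continuous density) is easy to sketch, together with the fact that its likelihood factorises so the maximum likelihood estimators split cleanly. The exponential base density and the parameter values below are illustrative assumptions, not the thesis's rainfall model, and this sketch covers only the standard zero-inflated case, not the alternative formulation the thesis develops for deflation.

```python
import random

def simulate_zm_exponential(n, p_zero, rate, rng):
    # Mixture formulation: probability mass p_zero at exactly 0,
    # an Exponential(rate) draw otherwise.
    return [0.0 if rng.random() < p_zero else rng.expovariate(rate)
            for _ in range(n)]

def fit_zm_exponential(data):
    # The likelihood factorises, so the MLE splits into two independent parts:
    # p_hat = proportion of exact zeros, rate_hat = 1 / mean of the positives.
    zeros = sum(1 for x in data if x == 0.0)
    positives = [x for x in data if x > 0.0]
    return zeros / len(data), len(positives) / sum(positives)

rng = random.Random(4)
sample = simulate_zm_exponential(20_000, 0.3, 2.0, rng)
p_hat, rate_hat = fit_zm_exponential(sample)
```

With 20,000 draws the estimates land close to the true (0.3, 2.0). The limitation the thesis addresses is visible here: in this parameterisation p_hat can only add zeros relative to the continuous density, so a zero-deflated sample has no valid parameter value, which is what motivates the alternative formulation.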
49

In-network database query processing for wireless sensor networks

Al-Hoqani, Noura Y. S. January 2018
Smart sensor devices have matured to the point where large, distributed networks of such sensors are starting to be deployed. Such networks can include tens or hundreds of independent nodes that perform their functions without human intervention such as battery recharging or the configuration of network routes. Each sensor in a wireless sensor network is a microsystem consisting of memory, a processor, transducers, and a low-bandwidth, short-range radio transceiver. This study investigates an adaptive sampling strategy for wireless sensors aimed at reducing the number of data samples by sensing only when a significant change in the monitored process is detected. The detection strategy is based on an extension of Holt's method and a statistical model. To investigate this strategy, household water consumption is used as a case study. A query distribution approach is proposed, which is presented in detail in chapter 5. Our wireless sensor query engine is programmed on the Sensinode cc2430 testbed; the model implemented on the wireless sensor platform and its architecture are presented in chapters six, seven, and eight. This thesis contributes the design of the experimental simulation setup and the development of the required database-interface GUI sensing system, which enables the end user to send queries to the sensor network whenever needed. The On-Demand Query Sensing system (ODQS) is enhanced with a probabilistic model so that sensing occurs only when the system cannot otherwise answer the user's queries. Moreover, a dynamic aggregation methodology is integrated to make the system more adaptive to query message costs. A dynamic on-demand approach for aggregated queries is implemented in a wireless sensor network by applying dynamic programming to find the optimal query decision, where the optimality criterion in our experiments is the query cost.

In-network query processing in wireless sensor networks is discussed in detail in order to develop a more energy-efficient approach to query processing. Initially, a survey of research on existing WSN query processing approaches is presented. Building on this background, the primary novel achievements are an adaptive sampling mechanism and a dynamic query optimiser. These approaches are extremely helpful when existing statistics are insufficient to generate an optimal plan. There are two distinct aspects of query processing optimisation: dynamic adaptive query plans, which focus on improving the initial execution of a query, and dynamic adaptive statistics, which provide the best query execution plan to improve subsequent executions of aggregated on-demand queries requested by multiple end users. In-network query processing is attractive to researchers developing user-friendly sensing systems. Since sensors are resource-limited, battery-powered devices, robust mechanisms are recommended to limit communication access to the sensor nodes and so maximise sensor lifetime. For this reason, a new architecture is proposed that combines a probability modelling technique with dynamic programming (DP) query processing to optimise the communication cost of queries. In this thesis, a dynamic technique to enhance the query engine of the interactive sensing system interface is developed. The probability technique reduces communication costs for each query executed outside the wireless sensor network. As remote sensors have limited resources and rely on battery power, control strategies should limit communication access to sensor nodes to maximise battery life. We propose an energy-efficient data acquisition system to extend the battery life of nodes in wireless sensor networks.

The system considers a graph-based network structure, evaluates multiple query execution plans, and selects the plan with the lowest cost obtained from an energy consumption model. A genetic algorithm is also used to analyse the performance of the approach. Experimental tests demonstrate the capability of the proposed on-demand sensing system to predict the answer to a query injected by the end user, based on the sensor network architecture and the attributes of the input query statement, and the ability of the query engine to determine a near-optimal execution plan given specific constraints on these query attributes. The thesis thereby contributes to the state of the art in distributed wireless sensor network query design, implementation, analysis, evaluation, performance and optimisation.
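The adaptive sampling idea above (forecast the next reading with Holt's double exponential smoothing and report a sample only when the observation deviates significantly from the forecast) can be sketched as follows. The smoothing constants, threshold, and synthetic water-usage series are assumptions for illustration, not the thesis's tuned parameters or its extended detection model.

```python
def holt_change_points(series, alpha=0.5, beta=0.3, threshold=3.0):
    """Indices where an observation deviates from Holt's one-step-ahead
    forecast by more than `threshold`: the moments a node would report."""
    level, trend = series[0], 0.0
    flagged = []
    for i in range(1, len(series)):
        forecast = level + trend              # Holt's one-step-ahead forecast
        if abs(series[i] - forecast) > threshold:
            flagged.append(i)
        prev_level = level                    # standard Holt update equations
        level = alpha * series[i] + (1.0 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1.0 - beta) * trend
    return flagged

# Household water use: flat baseline, then a sustained jump at index 50.
usage = [10.0] * 50 + [25.0] * 50
events = holt_change_points(usage)
```

On this series nothing is reported during the flat baseline; the first report is at the jump itself, after which the filter re-tracks the new level and falls silent again, which is exactly the sample-count reduction the strategy is after.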
