  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
661

Data Augmentation and Dynamic Linear Models

Frühwirth-Schnatter, Sylvia January 1992 (has links) (PDF)
We define a subclass of dynamic linear models with unknown hyperparameters called d-inverse-gamma models. We then approximate the marginal p.d.f.s of the hyperparameter and the state vector by the data augmentation algorithm of Tanner/Wong. We prove that the regularity conditions for convergence hold. A sampling based scheme for practical implementation is discussed. Finally, we illustrate how to obtain an iterative importance sampling estimate of the model likelihood. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
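The Tanner/Wong data augmentation idea can be sketched on a toy conjugate problem (an illustration only, not the d-inverse-gamma models of this abstract): alternate between imputing latent data given the current parameter draw and drawing the parameter given the completed data. All names and settings below are assumptions chosen for the sketch.

```python
import random

random.seed(1)

# Toy setting: y_i ~ N(mu, 1) with mu unknown, prior mu ~ N(0, 10);
# half of the sample is treated as missing.
mu_true, n = 2.0, 200
y = [random.gauss(mu_true, 1.0) for _ in range(n)]
observed = y[: n // 2]
n_mis = n - len(observed)

prior_var = 10.0
mu, draws = 0.0, []
for it in range(2000):
    # Imputation step: draw the missing data from p(y_mis | mu).
    y_mis = [random.gauss(mu, 1.0) for _ in range(n_mis)]
    full = observed + y_mis
    # Posterior step: draw mu from p(mu | completed data) (conjugate normal).
    post_var = 1.0 / (1.0 / prior_var + len(full))
    post_mean = post_var * sum(full)
    mu = random.gauss(post_mean, post_var ** 0.5)
    if it >= 500:                     # discard burn-in
        draws.append(mu)

estimate = sum(draws) / len(draws)    # approximates E[mu | observed data]
```

The marginal draws of `mu` converge to the observed-data posterior, which is the convergence property the abstract refers to for the d-inverse-gamma case.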
662

A review on computation methods for Bayesian state-space model with case studies

Yang, Mengta, 1979- 24 November 2010 (has links)
Sequential Monte Carlo (SMC) and Forward Filtering Backward Sampling (FFBS) are the two most commonly used algorithms for Bayesian state-space model analysis. Various results regarding their applicability have been either claimed or shown. SMC is said to excel in nonlinear, non-Gaussian situations and to be less computationally expensive. On the other hand, it has been shown that with techniques such as grid approximation (Hore et al. 2010), FFBS-based methods do no worse and, though they can still be computationally expensive, provide more exact information. The purpose of this report is to compare the two methods on simulated data sets and to explore whether there exist clear criteria that may be used to determine a priori which method suits a given study better. / text
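As a minimal illustration of the SMC side of this comparison, the following is a bootstrap particle filter for a linear-Gaussian local-level model; all parameter values are assumptions chosen for the sketch, not taken from the report.

```python
import math
import random

random.seed(0)

# Local-level model: x_t = x_{t-1} + w_t (w ~ N(0, q)),
#                    y_t = x_t + v_t    (v ~ N(0, r)).
q, r, T, N = 0.1, 0.5, 100, 500

# Simulate a state trajectory and noisy observations.
x, xs, ys = 0.0, [], []
for _ in range(T):
    x += random.gauss(0, math.sqrt(q))
    xs.append(x)
    ys.append(x + random.gauss(0, math.sqrt(r)))

particles, means = [0.0] * N, []
for y in ys:
    # Propagate each particle through the state equation.
    particles = [p + random.gauss(0, math.sqrt(q)) for p in particles]
    # Weight by the Gaussian observation likelihood and normalize.
    w = [math.exp(-(y - p) ** 2 / (2 * r)) for p in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    means.append(sum(wi * p for wi, p in zip(w, particles)))
    # Multinomial resampling to fight weight degeneracy.
    particles = random.choices(particles, weights=w, k=N)

rmse = math.sqrt(sum((m - xt) ** 2 for m, xt in zip(means, xs)) / T)
obs_rmse = math.sqrt(sum((y - xt) ** 2 for y, xt in zip(ys, xs)) / T)
```

FFBS would instead draw whole state trajectories via an exact backward recursion, which is why it can provide more exact information at higher computational cost.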
663

Motion compensation-scalable video coding

Αθανασόπουλος, Διονύσιος 17 September 2007 (has links)
The subject of this diploma thesis is scalable video coding using the wavelet transform. Scalable video coding is a framework in which representations of a video with different quality, resolution, and frame rate can be extracted from a single compressed video sequence. Video scalability is an important property of a system today, when video streaming and video communication take place over unreliable transmission media and between terminals with different capabilities. In this thesis we first study the wavelet transform, which is the basic tool for scalable coding of both images and video sequences. We then analyze the idea of multiresolution analysis and the implementation of the wavelet transform using the lifting scheme, which sparked renewed interest in the field of scalable video coding. Scalable video coding systems fall into two categories: those that apply the wavelet transform first in the temporal domain and then in the spatial domain, and those that apply it first in the spatial domain and then in the temporal domain. We focus on the first category and analyze the scalable encoding/decoding process as well as its constituent parts. Finally, we examine how various parameters affect the performance of a scalable video coding system and present the results of our experimental measurements. Based on the experimental results, we propose an adaptive way of selecting the parameters in order to improve performance while simultaneously reducing complexity.
/ In this master thesis we examine scalable video coding based on the wavelet transform. Scalable video coding refers to a compression framework in which content representations with different quality, resolution, and frame rate can be extracted from parts of a single compressed bitstream. Scalable video coding based on motion-compensated spatiotemporal wavelet decompositions is becoming increasingly popular, as it provides coding performance competitive with state-of-the-art coders while accommodating varying network bandwidths and different receiver capabilities (frame rate, display size, CPU, etc.) and providing solutions for network congestion and video server design. In this master thesis we investigate the wavelet transform, multiresolution analysis, and the lifting scheme. We then focus on scalable video coding/decoding. There are two different architectures for scalable video coding: the first applies the wavelet transform first along the temporal direction and then performs the spatial wavelet decomposition, while the other applies the spatial wavelet transform first and then the temporal decomposition. We focus on the first architecture, also known as t+2D scalable coding. Several coding parameters affect the performance of a scalable video coding scheme, such as the number of temporal decomposition levels and the interpolation filter used for subpixel accuracy. We have conducted extensive experiments to test the influence of these parameters, which proves to depend on the video content. We therefore present an adaptive way of choosing the values of these parameters based on the video content. Experimental results show that the proposed method not only significantly improves performance but also reduces the complexity of the coding procedure.
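The lifting scheme discussed above can be illustrated with its simplest case, the Haar wavelet (a generic sketch; the thesis itself uses motion-compensated spatiotemporal decompositions):

```python
def haar_lift(signal):
    """One Haar lifting level: split into evens/odds, predict, update.
    Assumes an even-length sequence."""
    evens, odds = list(signal[0::2]), list(signal[1::2])
    detail = [o - e for e, o in zip(evens, odds)]        # predict step (highpass)
    approx = [e + d / 2 for e, d in zip(evens, detail)]  # update step (lowpass)
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting steps in reverse order: exact reconstruction."""
    evens = [a - d / 2 for a, d in zip(approx, detail)]
    odds = [e + d for e, d in zip(evens, detail)]
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out
```

Because each lifting step is trivially invertible, the transform guarantees perfect reconstruction regardless of the filters chosen, which is one reason the scheme renewed interest in scalable coding.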
664

All-Optical Clock Recovery, Photonic Balancing, and Saturated Asymmetric Filtering For Fiber Optic Communication Systems

Parsons, Earl Ryan January 2010 (has links)
In this dissertation I investigated a multi-channel, multi-bit-rate all-optical clock recovery device. This device, a birefringent Fabry-Perot resonator, had previously been demonstrated to simultaneously recover the clock signal from 10 wavelength channels operating at 10 Gb/s and one channel at 40 Gb/s. Like clock signals recovered from a conventional Fabry-Perot resonator, the clock signal from the birefringent resonator suffers from a bit pattern effect. I investigated this bit pattern effect for birefringent resonators numerically and experimentally and found that it is less prominent than for clock signals from a conventional Fabry-Perot resonator. I also demonstrated photonic balancing, an all-optical alternative to electrical balanced detection for phase-shift-keyed signals. An RZ-DPSK data signal was demodulated using a delay interferometer. The two logically opposite outputs from the delay interferometer then counter-propagated in a saturated SOA. This process created a differential signal which used all the signal power present in two consecutive symbols. I showed that this scheme could provide an optical alternative to electrical balanced detection by reducing the required OSNR by 3 dB. I also show how this method can provide amplitude regeneration to a signal after modulation-format conversion. In this case an RZ-DPSK signal was converted to an amplitude-modulated signal by the delay interferometer. The resulting amplitude-modulated signal is degraded by both the amplitude noise and the phase noise of the original signal. The two logically opposite outputs from the delay interferometer again counter-propagated in a saturated SOA. Through limiting amplification and noise modulation this scheme provided amplitude regeneration and improved the Q-factor of the demodulated signal by 3.5 dB. Finally, I investigated how SPM provided by the SOA can reduce the in-band noise of a communication signal.
The marks, which represented data, experienced a spectral shift due to SPM while the spaces, which consisted of noise, did not. A bandpass filter placed after the SOA then selected the signal and filtered out what was originally in-band noise. The receiver sensitivity was improved by 3 dB.
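The bit pattern effect described above can be mimicked with a deliberately crude model (an assumption for illustration, not the dissertation's numerical model): a resonator whose round-trip time equals the bit period behaves like a leaky integrator over past bits, so the recovered clock envelope fluctuates with the data pattern, and a higher-finesse cavity, averaging over more bits, shows less ripple:

```python
import random

def pattern_ripple(rho, nbits=2000, seed=3):
    """Peak-to-peak ripple of the recovered clock envelope for a cavity
    with effective per-round-trip energy retention rho (toy model)."""
    rng = random.Random(seed)
    acc, trace = 0.0, []
    for _ in range(nbits):
        bit = rng.randint(0, 1)            # random on/off data pattern
        acc = rho * acc + (1 - rho) * bit  # exponentially weighted bit history
        trace.append(acc)
    tail = trace[200:]                     # drop the start-up transient
    return max(tail) - min(tail)

low_finesse = pattern_ripple(0.5)
high_finesse = pattern_ripple(0.95)        # averages many more past bits
```

In this caricature the ripple shrinks as the cavity retains more energy per round trip, qualitatively matching why the bit pattern effect depends on resonator properties.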
665

INTEGRATED DECISION MAKING FOR PLANNING AND CONTROL OF DISTRIBUTED MANUFACTURING ENTERPRISES USING DYNAMIC-DATA-DRIVEN ADAPTIVE MULTI-SCALE SIMULATIONS (DDDAMS)

Celik, Nurcin January 2010 (has links)
Discrete-event simulation has become one of the most widely used analysis tools for large-scale, complex, and dynamic systems such as supply chains, as it can take randomness into account and accommodate very detailed models. However, major challenges arise in simulating such systems, especially when they are used to support short-term decisions (e.g., the operational, maintenance, and scheduling decisions considered in this research). First, a detailed simulation requires significant amounts of computation time. Second, given the enormous amount of dynamically changing data in the system, information needs to be updated wisely in the model to prevent unnecessary usage of computing and networking resources. Third, there is a lack of methods allowing dynamic data updates during simulation execution. Overall, in a simulation-based planning and control framework, timely monitoring, analysis, and control are important so as not to disrupt a dynamically changing system. To meet this temporal requirement and address the above-mentioned challenges, a Dynamic-Data-Driven Adaptive Multi-Scale Simulation (DDDAMS) paradigm is proposed to adaptively adjust the fidelity of a simulation model against available computational resources by incorporating dynamic data into the executing model, which then steers the measurement process for selective data update. To the best of our knowledge, the proposed DDDAMS methodology is one of the first efforts to present a coherent, integrated decision-making framework for timely planning and control of distributed manufacturing enterprises. To this end, a comprehensive system architecture and methodologies are first proposed, where the components include 1) real-time DDDAM-Simulation, 2) grid computing modules, 3) a Web Service communication server, 4) a database, 5) various sensors, and 6) the real system.
Four algorithms are then developed and embedded into a real-time simulator to enable its DDDAMS capabilities: abnormality detection, fidelity selection, fidelity assignment, and prediction and task generation. As part of the developed algorithms, improvements are made to the resampling techniques for sequential Bayesian inference, and their performance is benchmarked in terms of resampling quality and computational efficiency. Grid computing and Web Services are used for computational resource management and interoperable communication among distributed software components, respectively. A prototype of the proposed DDDAM-Simulation was successfully implemented for preventive maintenance scheduling and part routing scheduling in a semiconductor manufacturing supply chain, where the results look quite promising.
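Of the resampling techniques for sequential Bayesian inference benchmarked above, systematic resampling is a common low-variance choice; a generic implementation (not necessarily the variant used in the dissertation) looks like this:

```python
import random

def systematic_resample(weights):
    """Systematic resampling for normalized particle weights.

    One uniform draw positions n evenly spaced pointers on [0, 1),
    giving O(n) cost and lower variance than multinomial resampling.
    Returns the indices of the particles to keep (with repetition).
    """
    n = len(weights)
    u0 = random.random() / n
    cumulative, c = [], 0.0
    for w in weights:
        c += w
        cumulative.append(c)
    indices, j = [], 0
    for i in range(n):
        p = u0 + i / n
        while j < n - 1 and p > cumulative[j]:
            j += 1
        indices.append(j)
    return indices

# A weight of 0.7 on particle 2 yields (deterministically) two copies of it.
indices = systematic_resample([0.1, 0.2, 0.7])
```

Because the pointers are evenly spaced, each particle's copy count can differ from its expected count n*w_i by at most one, which is the source of the variance reduction.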
666

Ecological Processes in a Spatially and Temporally Heterogeneous Landscape: a Study on Invasive Alliaria Petiolata

Biswas, Shekhar R 20 March 2014 (has links)
The dynamics of ecological populations and communities are predominantly governed by three ecological processes, environmental filtering, species interactions, and dispersal, and these processes may vary with the heterogeneity of the environment. In my PhD research, I investigated how ecologists conceptualize landscape heterogeneity, and how these three ecological processes may vary with spatial and temporal environmental heterogeneity. I conducted my empirical work on Alliaria petiolata, a non-native invasive species in North America, at the Koffler Scientific Reserve at Joker’s Hill in Ontario, Canada. The thesis contains six chapters, where chapters 2–5 are structured as stand-alone manuscripts. In chapter 2, I conducted a quantitative review to link the metacommunity concept (which combines the above-mentioned three processes) with different conceptual models of landscape spatial heterogeneity. I found that 78% of metacommunity studies were not explicit about the underlying model of landscape heterogeneity, though there was a significant association between the implied model of landscape heterogeneity and the observed metacommunity model. In chapter 3, I quantified dispersal of Alliaria petiolata, assessed the spatial structure of rosette and adult density, and compared the effects of the different processes on rosette and adult density. Seed dispersal followed a lognormal distribution (μ = 0.01, σ = 0.65). Both adults and rosettes exhibited significant spatial structure up to 2 m. Propagule pressure and interactions among life stages were significant processes shaping rosette density, whereas propagule pressure was the only important process shaping adult density. In chapter 4, I investigated the patterns, determinants, and demographic consequences of herbivory in A. petiolata and found that these may vary between life stages and habitat types. One striking finding was that herbivory incidence in A. petiolata may strongly depend on plant life stage, possibly due to a defense–fitness trade-off. In chapter 5, I tested whether intra-specific interactions in A. petiolata shift with temporal environmental heterogeneity (seasonality). I found significant negative density-dependent survival in summer and positive density-dependent survival over winter. I suggest that predictions of the stress gradient hypothesis at the intra-specific level are applicable to seasonal variation in environmental stress.
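The fitted dispersal kernel from chapter 3 can be sampled directly; the snippet below (units assumed to be metres) checks the median and mean dispersal distances implied by the reported parameters:

```python
import random

random.seed(7)

# Lognormal seed-dispersal kernel with the fitted log-scale parameters.
mu, sigma = 0.01, 0.65
distances = [random.lognormvariate(mu, sigma) for _ in range(10_000)]

median_d = sorted(distances)[len(distances) // 2]  # theory: e^mu ≈ 1.01 m
mean_d = sum(distances) / len(distances)           # theory: e^(mu + sigma^2/2) ≈ 1.25 m
```

So most seeds land within a metre or two of the parent plant, consistent with the significant spatial structure observed up to 2 m.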
668

ECG Noise Filtering Using Online Model-Based Bayesian Filtering Techniques

Su, Aron Wei-Hsiang January 2013 (has links)
The electrocardiogram (ECG) is a time-varying electrical signal that reflects the electrical activity of the heart. It is obtained non-invasively with surface electrodes and is used widely in hospitals. There are many clinical contexts in which ECGs are used, such as medical diagnosis, physiological therapy, and arrhythmia monitoring. In medical diagnosis, medical conditions are interpreted by examining information and features in ECGs. Physiological therapy involves the control of some aspect of a patient's physiological effort, such as the use of a pacemaker to regulate the beating of the heart. Arrhythmia monitoring involves observing and detecting life-threatening conditions, such as myocardial infarction (heart attack), in a patient. ECG signals are usually corrupted with various types of unwanted interference, such as muscle (EMG) artifacts, electrode artifacts, power-line noise, and respiration interference, and are distorted in such a way that it can be difficult to perform medical diagnosis, physiological therapy, or arrhythmia monitoring. Consequently, signal processing of ECGs is required to remove noise and interference for successful clinical applications. Existing signal processing techniques can remove some of the noise in an ECG signal, but are typically inadequate for extracting weak ECG components contaminated with background noise and for retaining various subtle features of the ECG. For example, EMG noise usually overlaps the fundamental ECG cardiac components in the frequency domain, in the range of 0.01 Hz to 100 Hz; simple filters cannot remove noise that overlaps the ECG cardiac components. Sameni et al. have proposed a Bayesian filtering framework to resolve these problems, which gives results clearly superior to those obtained by applying conventional signal processing methods to the ECG.
However, a drawback of this Bayesian filtering framework is that it must run offline, which is undesirable for clinical applications such as arrhythmia monitoring and physiological therapy, both of which require online operation in near real time. To resolve this problem, in this thesis we propose a dynamical model which permits the Bayesian filtering framework to function online. The framework with the proposed dynamical model loses less than 4% in performance compared to the previous (offline) version of the framework. The proposed dynamical model is based on the theory of fixed-lag smoothing.
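Fixed-lag smoothing can be sketched on a generic linear-Gaussian model (a stand-in; the ECG dynamical model of Sameni et al. is nonlinear): a forward Kalman filter runs online, and at each time t a short backward Rauch-Tung-Striebel pass over the last L steps yields a smoothed estimate of the state at time t-L, trading a small bounded delay for accuracy. All parameter values below are illustrative assumptions.

```python
import math
import random

random.seed(2)

q, r, L, T = 0.05, 1.0, 10, 1000   # process noise, obs noise, lag, length

# Simulate a random-walk state observed in noise.
x, xs, ys = 0.0, [], []
for _ in range(T):
    x += random.gauss(0, math.sqrt(q))
    xs.append(x)
    ys.append(x + random.gauss(0, math.sqrt(r)))

# Forward Kalman filter, storing the filtered (mean, variance) pairs.
m, P, hist = 0.0, 1.0, []
for y in ys:
    Pp = P + q                      # predict
    K = Pp / (Pp + r)               # Kalman gain
    m, P = m + K * (y - m), (1 - K) * Pp
    hist.append((m, P))

def fixed_lag_estimate(t, L):
    """Smoothed mean of x_{t-L} from an RTS pass over the last L steps."""
    ms, Ps = hist[t]
    for k in range(t - 1, t - L - 1, -1):
        mf, Pf = hist[k]
        Pp = Pf + q
        G = Pf / Pp                 # RTS smoother gain
        ms = mf + G * (ms - mf)     # predicted mean at k+1 equals mf here
        Ps = Pf + G * G * (Ps - Pp)
    return ms

err_f = err_s = 0.0
for t in range(L, T):
    target = xs[t - L]
    err_f += (hist[t - L][0] - target) ** 2
    err_s += (fixed_lag_estimate(t, L) - target) ** 2
rmse_filter = math.sqrt(err_f / (T - L))
rmse_smooth = math.sqrt(err_s / (T - L))
```

The lagged estimate uses L extra observations and so beats the purely filtered estimate of the same state, which is the accuracy-versus-latency trade this thesis exploits for online operation.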
669

Nonlinear control of a voltage source converter

Xu, Ning Unknown Date
No description available.
670

Data Privacy Preservation in Collaborative Filtering Based Recommender Systems

Wang, Xiwei 01 January 2015 (has links)
This dissertation studies data privacy preservation in collaborative filtering based recommender systems and proposes several collaborative filtering models that aim at preserving user privacy from different perspectives. An empirical study of multiple classical recommendation algorithms presents the basic idea of the models and explores their performance on real-world datasets. The algorithms investigated in this study include a popularity-based model, an item-similarity-based model, a singular value decomposition based model, and a bipartite graph model. Top-N recommendations are evaluated to examine prediction accuracy. It is apparent that with more customers' preference data, recommender systems can better profile customers' shopping patterns, which in turn produces product recommendations with higher accuracy. Precautions should be taken, however, to address the privacy issues that arise when data is shared between two vendors. The study shows that matrix factorization techniques are, by their nature, ideal choices for data privacy preservation. In this dissertation, singular value decomposition (SVD) and nonnegative matrix factorization (NMF) are adopted as the fundamental collaborative filtering techniques for making privacy-preserving recommendations. The proposed SVD-based model utilizes missing-value imputation, a randomization technique, and the truncated SVD to perturb the raw rating data. The NMF-based models, namely iAux-NMF and iCluster-NMF, take into account auxiliary information about users and items to help with missing-value imputation and privacy preservation; additionally, these models support efficient incremental data updates. Many online vendors allow people to leave feedback on products, which constitutes users' public preferences.
However, due to the connections between users' public and private preferences, if a recommender system fails to distinguish real customers from attackers, the private preferences of real customers can be exposed. This dissertation addresses an attack model in which an attacker holds real customers' partial ratings and tries to obtain their private preferences by cheating recommender systems. To resolve this problem, trustworthiness information is incorporated into NMF based collaborative filtering techniques to detect the attackers and make reasonably different recommendations to the normal users and the attackers. By doing so, users' private preferences can be effectively protected.
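The perturbation pipeline of the SVD-based model (imputation, then randomization, then truncation) can be sketched as follows; the rating matrix, noise level, and rank are illustrative assumptions, not the dissertation's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy user-item rating matrix; 0 marks a missing rating.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

# 1) Impute missing entries with each item's observed mean rating.
mask = R > 0
item_means = R.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
filled = np.where(mask, R, item_means)

# 2) Randomization: add small Gaussian noise before any release.
noisy = filled + rng.normal(0.0, 0.1, R.shape)

# 3) Truncated SVD: keep only k latent factors, discarding the
#    fine-grained detail that could help re-identify individual users.
k = 2
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
released = U[:, :k] * s[:k] @ Vt[:k]
```

The released rank-k matrix preserves the dominant taste structure needed for recommendations while no longer containing the exact raw ratings, which is the intuition behind using matrix factorization for privacy preservation.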
