11

Suitability of FPGA-based computing for cyber-physical systems

Lauzon, Thomas Charles 18 August 2010
Cyber-Physical Systems theory is a new concept that is about to revolutionize the way computers interact with the physical world, by integrating physical knowledge into computing systems and tailoring those systems to be more compatible with the way processes unfold in the physical world. In this master’s thesis, Field Programmable Gate Arrays (FPGAs) are studied as a potential technological asset that may help enable the Cyber-Physical paradigm. As an example application that may benefit from cyber-physical system support, the Electro-Slag Remelting process - a process for remelting metals into better alloys - has been chosen due to the maturity of its physical models and controller designs. In particular, the Particle Filter that estimates the state of the process is studied as a candidate for FPGA-based computing enhancements. In comparison with CPUs, through the designs and experiments carried out for this study, the FPGA reveals itself as a serious contender in the arsenal of computing means for Cyber-Physical Systems, due to its capacity to mimic the ubiquitous parallelism of physical processes.
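The parallelism argument can be made concrete with a minimal sketch: each particle in a particle filter is propagated and weighted independently of the others, so the update below maps naturally onto replicated FPGA pipelines. The random-walk dynamics and Gaussian sensor model are illustrative placeholders, not the Electro-Slag Remelting model from the thesis.

```python
import numpy as np

def propagate_and_weight(particles, weights, observation, rng):
    """One particle-filter step. Every particle is updated independently,
    which is why the loop body maps onto parallel FPGA pipelines.
    Dynamics and noise levels are illustrative placeholders."""
    # State transition: random walk (stand-in for the physical process model).
    particles = particles + rng.normal(0.0, 0.1, size=particles.shape)
    # Likelihood of the observation under each particle (Gaussian sensor model).
    likelihood = np.exp(-0.5 * ((observation - particles) / 0.2) ** 2)
    weights = weights * likelihood
    return particles, weights / weights.sum()
```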
12

Bayesian Emulation for Sequential Modeling, Inference and Decision Analysis

Irie, Kaoru January 2016
The advances in three related areas of state-space modeling, sequential Bayesian learning, and decision analysis are addressed, with the statistical challenges of scalability and associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analysis/computational problems using creative model emulators. This idea defines theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis and statistical computation, across linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics and internet studies from computational advertising demonstrate the utility of the core methodological innovations.

Chapter 1 summarizes the three areas/problems and the key idea of emulation in those areas. Chapter 2 discusses the sequential analysis of latent threshold models, using emulating models that allow analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, or the synthetic model, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network, which relies on emulating the whole, dependent network model by independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews these advances and offers concluding remarks.
13

Programové prostředí pro asimilační metody v radiační ochraně / Software environment for data assimilation in radiation protection

Majer, Peter January 2015
In this work we apply data assimilation to the meteorological model WRF on a local domain. We use Bayesian statistics, namely the Sequential Monte Carlo method combined with particle filtering. Only surface wind data are considered. An application written in the Python programming language is also part of this work. This application forms an interface with WRF, performs the data assimilation, and provides a set of charts as the output of the assimilation. In the case of stable wind conditions, wind predictions of the assimilated WRF are significantly closer to measured data than predictions of the non-assimilated WRF. Under such conditions, the assimilated model can be used for more accurate short-term local weather predictions.
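A minimal sketch of the assimilation cycle described above, with the WRF interface replaced by a hypothetical `wrf_step` callable (the thesis's actual Python API is not shown here):

```python
import numpy as np

def assimilate_wind(particles, measured_wind, wrf_step, sigma=1.0, rng=None):
    """One Sequential Monte Carlo assimilation cycle (bootstrap filter).
    `wrf_step` stands in for a call that advances the WRF state; it is an
    assumed interface, not the thesis's actual API."""
    rng = rng or np.random.default_rng()
    # Advance every candidate state through the forecast model.
    particles = np.array([wrf_step(p) for p in particles])
    # Weight by agreement with the measured surface wind (Gaussian error model).
    w = np.exp(-0.5 * ((particles - measured_wind) / sigma) ** 2)
    w /= w.sum()
    # Resample to concentrate particles on well-fitting states.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```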
14

Contributions to statistical analysis methods for neural spiking activity

Tao, Long 27 November 2018
With the technical advances in neuroscience experiments over the past few decades, we have seen a massive expansion in our ability to record neural activity. These advances enable neuroscientists to analyze more complex neural coding and communication properties, and at the same time raise new challenges for analyzing neural spiking data, which keeps growing in scale, dimension, and complexity. This thesis proposes several new statistical methods that advance analysis approaches for neural spiking data, including sequential Monte Carlo (SMC) methods for efficient estimation of neural dynamics from membrane potential threshold crossings, state-space models using multimodal observation processes, and goodness-of-fit analysis methods for neural marked point process models.

In the first project, we derive a set of iterative formulas that enable us to simulate trajectories from stochastic, dynamic neural spiking models that are consistent with a set of spike time observations. We develop an SMC method to simultaneously estimate the parameters of the model and the unobserved dynamic variables from spike train data. We investigate the performance of this approach on a leaky integrate-and-fire model.

In the second project, we define a semi-latent state-space model to estimate information related to the phenomenon of hippocampal replay. Replay is a recently discovered phenomenon in which patterns of hippocampal spiking activity that typically occur during exploration of an environment are reactivated when an animal is at rest. This reactivation is accompanied by high-frequency oscillations in hippocampal local field potentials. However, methods to define replay mathematically remain undeveloped. We construct a novel state-space model that enables us to identify whether replay is occurring, and if so to estimate the movement trajectories consistent with the observed neural activity and to categorize the content of each event. The state-space model integrates information from the spiking activity of the hippocampal population, the rhythms in the local field potential, and the rat's movement behavior.

Finally, we develop a new, general time-rescaling theorem for marked point processes, and use this to develop a general goodness-of-fit framework for neural population spiking models. We investigate this approach through simulation and a real data application.
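The classical, unmarked time-rescaling theorem that underlies the last contribution can be sketched as follows: intervals between spikes, rescaled through the model's cumulative intensity, should be i.i.d. exponential under a correct model, which a Kolmogorov-Smirnov test can check. This is the standard version, not the marked-point-process generalization the thesis develops.

```python
import numpy as np
from scipy.stats import kstest

def time_rescaling_ks(spike_times, intensity, t_grid):
    """Goodness-of-fit via the (unmarked) time-rescaling theorem.
    `intensity` is the model's conditional intensity evaluated on `t_grid`.
    If the model is correct, rescaled intervals are Exp(1), so their
    CDF transform is Uniform(0, 1); a KS test checks that uniformity."""
    # Integrated intensity up to each grid point (cumulative hazard, trapezoid rule).
    Lambda = np.concatenate([[0.0], np.cumsum(
        np.diff(t_grid) * 0.5 * (intensity[1:] + intensity[:-1]))])
    # Rescale spike times through the cumulative hazard.
    rescaled = np.interp(spike_times, t_grid, Lambda)
    z = np.diff(np.concatenate([[0.0], rescaled]))  # should be Exp(1)
    u = 1.0 - np.exp(-z)                            # should be Uniform(0, 1)
    return kstest(u, "uniform")
```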
15

Resampling in particle filters

Hol, Jeroen D. January 2004
In this report a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms based on resampling quality and on computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in resampling quality and computational complexity.
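A sketch of the favoured algorithm, systematic resampling, which needs only a single uniform draw stratified across N evenly spaced pointers into the cumulative weight distribution (assuming only NumPy):

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: one uniform draw u0 ~ U(0, 1/N) generates the
    evenly spaced pointers u0 + k/N, each mapped through the cumulative
    weights. O(N), with lower resampling variance than multinomial draws."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

# Usage: indices = systematic_resample(w); particles = particles[indices]
```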
16

VLSI Implementation of Digital Signal Processing Algorithms for MIMO/SISO Systems

Shabany, Mahdi 30 July 2009
The efficient high-throughput VLSI implementation of near-optimal multiple-input multiple-output (MIMO) detectors for 4x4 MIMO systems with high-order quadrature amplitude modulation (QAM) schemes has been a major challenge in the literature. To address this challenge, this thesis introduces a novel scalable pipelined VLSI architecture for a 4x4 64-QAM MIMO receiver based on K-Best lattice decoders. The key contribution is a means of expanding/visiting the intermediate nodes of the search tree on demand, rather than exhaustively, along with three types of distributed sorters operating in a pipelined structure. The combined expansion and sorting cores are able to find the K best candidates in K clock cycles. The proposed architecture has a fixed critical path independent of the constellation order, an on-demand expansion scheme, and efficient distributed sorters, and is scalable to higher numbers of antennas and constellation orders. Fabricated in 0.13 um CMOS, it operates at a significantly higher throughput (5.8x better) than currently reported schemes and occupies 0.95 mm2 of core area. Operating at a 282 MHz clock frequency, it dissipates 135 mW at a 1.3 V supply with no performance loss. It achieves an SNR-independent decoding throughput of 675 Mbps, satisfying the requirements of IEEE 802.16m and Long Term Evolution (LTE) systems. The measurements confirm that this design consumes 3.0x less energy per bit compared to the previous best design.
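A breadth-first software sketch of the K-Best search after QR decomposition of the channel (the hardware replaces the exhaustive per-level expansion and full sort below with on-demand expansion and pipelined distributed sorters; a real-valued signal model is assumed for brevity):

```python
import numpy as np

def k_best_detect(R, y, constellation, K):
    """Breadth-first K-Best detection given H = QR and y = Q^T r.
    Searches antennas from last to first, keeping the K partial candidates
    with the smallest accumulated Euclidean distance at each tree level."""
    n = R.shape[0]
    candidates = [((), 0.0)]  # (partial symbol vector, partial distance)
    for level in range(n - 1, -1, -1):
        expanded = []
        for symbols, dist in candidates:
            # Interference from already-decided symbols (levels below).
            interference = sum(R[level, level + 1 + j] * s
                               for j, s in enumerate(symbols))
            for s in constellation:
                inc = (y[level] - interference - R[level, level] * s) ** 2
                expanded.append(((s,) + symbols, dist + inc))
        # Keep the K best partial candidates (the sorters' job in hardware).
        candidates = sorted(expanded, key=lambda c: c[1])[:K]
    return np.array(candidates[0][0])  # best full-length candidate

# e.g. k_best_detect(R, Q.T @ r, [-7, -5, -3, -1, 1, 3, 5, 7], K=10)
# (8-PAM per real dimension corresponds to real-decomposed 64-QAM)
```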
17

Dynamic Data Driven Application System for Wildfire Spread Simulation

Gu, Feng 14 December 2010
Wildfires have significant impact on both ecosystems and human society. To effectively manage wildfires, simulation models are used to study and predict wildfire spread. The accuracy of wildfire spread simulations depends on many factors, including GIS data, fuel data, weather data, and high-fidelity wildfire behavior models. Unfortunately, due to the dynamic and complex nature of wildfire, it is impractical to obtain all these data without error. Predictions from the simulation model will therefore differ from what occurs in a real wildfire, and without assimilating data from the real wildfire and dynamically adjusting the simulation, the difference between the two is very likely to keep growing. With the development of sensor technologies and advances in computing infrastructure, dynamic data driven application systems (DDDAS) have become an active research area in recent years. In a DDDAS, data obtained from wireless sensors is fed into the simulation model to make predictions of the real system; this dynamic input is treated as the measurement used to evaluate the output and adjust the states of the model, thus improving simulation results. To improve the accuracy of wildfire spread simulations, we apply the concept of DDDAS to wildfire spread simulation by dynamically assimilating sensor data from real wildfires into the simulation model. The assimilation system relates the system model to the observation data of the true state, and uses analysis approaches to obtain state estimations. We employ Sequential Monte Carlo (SMC) methods (also called particle filters) to carry out the data assimilation in this work. Based on the structure of DDDAS, this dissertation presents the data assimilation system and data assimilation results in wildfire spread simulations. We carry out sensitivity analysis for different densities, frequencies, and qualities of sensor data, and quantify the effectiveness of the SMC methods based on different measurement metrics. Furthermore, to improve simulation results, the image-morphing technique is introduced into the DDDAS for wildfire spread simulation.
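The DDDAS assimilation cycle can be sketched generically; `simulate` and `observe` below are hypothetical stand-ins for the wildfire behavior model and the sensor model, not interfaces from the dissertation:

```python
import numpy as np

def dddas_cycle(states, weights, sensor_reading, simulate, observe, rng):
    """One DDDAS assimilation cycle: advance simulation states, compare the
    predicted sensor values with the real reading, reweight, and resample.
    `simulate` and `observe` are assumed interfaces, not the thesis's API."""
    states = [simulate(s) for s in states]              # model prediction
    predicted = np.array([observe(s) for s in states])  # predicted sensor value
    weights = weights * np.exp(-0.5 * (predicted - sensor_reading) ** 2)
    weights /= weights.sum()
    # Resample only when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(states):
        idx = rng.choice(len(states), size=len(states), p=weights)
        states = [states[i] for i in idx]
        weights = np.full(len(states), 1.0 / len(states))
    return states, weights
```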
18

Bulk electric system reliability simulation and application

Wangdee, Wijarn 19 December 2005
Bulk electric system reliability analysis is an important activity in both vertically integrated and unbundled electric power utilities. Competition and uncertainty in the new deregulated electric utility industry are serious concerns. New planning criteria with broader engineering consideration of transmission access and consistent risk assessment must be explicitly addressed. Modern developments in high-speed computation facilities now permit the realistic utilization of sequential Monte Carlo simulation in practical bulk electric system reliability assessment, resulting in a more complete understanding of bulk electric system risks and the associated uncertainties. Two significant advantages of sequential simulation are the ability to obtain accurate frequency and duration indices, and the opportunity to synthesize reliability index probability distributions that describe the annual index variability.

This research work introduces the concept of applying reliability index probability distributions to assess bulk electric system risk. Bulk electric system reliability performance index probability distributions are used as integral elements in a performance based regulation (PBR) mechanism. An appreciation of the annual variability of the reliability performance indices can assist power engineers and risk managers in managing and controlling future potential risks under a PBR reward/penalty structure. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the system well-being of bulk electric systems, and to evaluate the likelihood not only of entering a complete failure state, but also of being very close to trouble. The system well-being concept presented in this thesis is a probabilistic framework that incorporates the accepted deterministic N-1 security criterion and provides valuable information on the degree of system vulnerability under a particular system condition, using a quantitative interpretation of the degree of system security and insecurity. An overall reliability analysis framework considering both adequacy and security perspectives is proposed using system well-being analysis and traditional adequacy assessment. The system planning process using combined adequacy and security considerations offers an additional reliability-based dimension. Sequential Monte Carlo simulation is also ideally suited to the analysis of intermittent generating resources such as wind energy conversion systems (WECS), as its framework can incorporate the chronological characteristics of wind. The reliability impacts of wind power in a bulk electric system are examined in this thesis. Transmission reinforcement planning associated with large-scale WECS and the utilization of reliability cost/worth analysis in the examination of reinforcement alternatives are also illustrated.
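The sequential simulation idea can be illustrated with a single repairable component: sampling chronological up/down cycles year by year yields an annual reliability-index distribution rather than a single average, which is what the PBR analysis builds on. The failure and repair rates below are illustrative placeholders.

```python
import numpy as np

def annual_downtime_distribution(failure_rate, repair_rate, years, rng=None):
    """Sequential Monte Carlo for one repairable component: draw exponential
    up/down durations chronologically through each simulated year and record
    annual downtime hours, giving a reliability-index distribution rather
    than a single mean value. Rates are per hour."""
    rng = rng or np.random.default_rng()
    hours_per_year = 8760.0
    samples = []
    for _ in range(years):
        t, down = 0.0, 0.0
        while t < hours_per_year:
            t += rng.exponential(1.0 / failure_rate)     # time to failure
            repair = rng.exponential(1.0 / repair_rate)  # repair duration
            down += min(repair, max(hours_per_year - t, 0.0))
            t += repair
        samples.append(down)
    return np.array(samples)

# e.g. annual_downtime_distribution(failure_rate=1/4380, repair_rate=1/10, years=1000)
```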
