1

Development of Algorithms to Estimate Post-Disaster Population Dislocation--A Research-Based Approach

Lin, Yi-Sz August 2009 (has links)
This study uses an empirical approach to develop algorithms that estimate population dislocation following a natural disaster. It begins with an empirical reexamination of the South Dade Population Impact Survey data, integrated with Miami-Dade County tax appraisal data and 1990 block-group census data, to investigate the effects of household and neighborhood socioeconomic characteristics on household dislocation. The analyses found evidence that households with higher socioeconomic status have a greater tendency to leave their homes after a natural disaster. One of the statistical models from the empirical analysis is then integrated into an algorithm that estimates the probability of household dislocation from structural damage, housing type, and the percentages of Black and Hispanic population in block groups. The study also develops a population dislocation algorithm using a modified Hazards-US (HAZUS) approach that combines the damage-state probabilities proposed by Bai, Hueste, and Gardoni in 2007 with the dislocation factors described in HAZUS to produce structure-level estimates. These algorithms were integrated into MAEviz, the Mid-America Earthquake Center's Seismic Loss Assessment System, to produce post-disaster dislocation estimates at either the structure or the block-group level, whichever is appropriate for the user's planning purposes. A sensitivity analysis then examines the differences among the estimates produced by the two newly developed algorithms and the HAZUS population dislocation algorithm.
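
The dislocation algorithm described above maps structural damage, housing type, and block-group demographics to a household dislocation probability. A minimal sketch of such a logistic model is shown below; the functional form follows the abstract's description, but the variable names and coefficient values are illustrative assumptions, not the fitted parameters from the study.

```python
import numpy as np

def dislocation_probability(damage_fraction, is_single_family,
                            pct_black, pct_hispanic, beta=None):
    """Hypothetical logistic model of household dislocation.

    The predictors follow the abstract (structural damage, housing type,
    block-group percentages of Black and Hispanic population); the
    coefficients are placeholders, not the study's fitted values.
    """
    if beta is None:
        # [intercept, damage, single-family indicator, % Black, % Hispanic]
        beta = np.array([-2.0, 4.5, -0.5, 0.01, 0.01])
    x = np.array([1.0, damage_fraction, float(is_single_family),
                  pct_black, pct_hispanic])
    return 1.0 / (1.0 + np.exp(-beta @ x))

# Heavily damaged multi-family unit in a 40% Black, 30% Hispanic block group
print(dislocation_probability(0.8, False, 40.0, 30.0))
```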
2

Development, improvement and assessment of image classification and probability mapping algorithms

Wang, Qing 01 December 2018 (has links) (PDF)
Remotely sensed imagery is one of the most important data sources for large-scale and multi-temporal agricultural, forestry, soil, environmental, social, and economic applications. To accurately extract useful thematic information about the earth's surface from images, various techniques and methods have been developed. The methods can be divided into parametric and non-parametric based on the requirement of a data distribution; into global and local based on whether they model global trends or local variability; into unsupervised and supervised based on whether training data are required; and into design-based and model-based in terms of the theory on which the estimators are developed. Each of these methods has disadvantages that impede improvement of estimation accuracy, so developing novel methods and improving existing ones are needed. This dissertation focused on the development of a feature-space indicator simulation (FSIS); the improvement of geographically weighted sigmoidal simulation (GWSS) and k-nearest neighbors (kNN); and their assessment for land use and land cover (LULC) classification and probability (fraction) mapping of percentage vegetation cover (PVC) in Duolun County, Xilingol League, Inner Mongolia, China. FSIS employs an indicator simulation in a high-dimensional feature space and extends the derivation of indicator variograms from geographic space to feature space, leading to feature-space indicator variograms (FSIVs), to circumvent the issues of traditional indicator simulation in geostatistics. GWSS is a stochastic probability-mapping method that accounts for spatially nonstationary sample data and the local variation of the variable of interest. The improved kNN, called optimal k-nearest neighbors (OkNN), searches for an optimal number of nearest neighbors at each location based on local variability, and can be used for both classification and probability mapping (a rough sketch of this idea follows the abstract). The three methods were validated and compared with several widely used approaches for LULC classification and PVC mapping in the study area. The datasets used in the study included a Landsat 8 image and a total of 920 field plots. The results showed that 1) compared with maximum likelihood classification (ML), support vector machines (SVM), and random forests (RF), the proposed FSIS classifier led to statistically significantly higher classification accuracy for six LULC types (water, agricultural land, grassland, bare soil, built-up area, and forested area); 2) compared with linear regression (LR), polynomial regression (PR), sigmoidal regression (SR), geographically weighted regression (GWR), and geographically weighted polynomial regression (GWPR), GWSS not only resulted in more accurate estimates of PVC but also greatly reduced the underestimation and overestimation of PVC for small and large values, respectively; 3) most of the vegetation indices derived from the red and near-infrared bands contributed significantly to improving the accuracy of PVC mapping; 4) OkNN resulted in spatially variable and optimized k values and higher prediction accuracy of PVC than the global methods; and 5) the range parameter of the FSIVs was the major factor that spatially affected the classification accuracy of LULC types, while the FSIVs were relatively insensitive to the number of training samples. The results thus answered all six research questions posed.
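
As a rough illustration of the OkNN idea, the sketch below picks a per-location k by minimizing a leave-one-out error within the local neighborhood. This is a simplified reading of the approach, assuming Euclidean feature distances and a mean predictor; it is not the dissertation's exact procedure.

```python
import numpy as np

def oknn_predict(X_train, y_train, x_new, k_candidates=range(2, 21)):
    """Predict y at x_new with a locally chosen k.

    For each candidate k, the neighborhood is scored by leave-one-out
    error (each neighbor predicted from the remaining k-1); the k with
    the lowest local error is used for the final prediction.
    """
    d = np.linalg.norm(X_train - x_new, axis=1)
    order = np.argsort(d)
    best_k, best_err = k_candidates[0], np.inf
    for k in k_candidates:
        idx = order[:k]
        errs = [(y_train[idx[idx != j]].mean() - y_train[j]) ** 2
                for j in idx]
        err = float(np.mean(errs))
        if err < best_err:
            best_k, best_err = k, err
    return y_train[order[:best_k]].mean(), best_k

# Noisy 1-D trend sampled at random 2-D locations
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 2))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)
print(oknn_predict(X, y, np.array([5.0, 5.0])))
```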
3

Development of an Efficient Super-Resolution Image Reconstruction Algorithm for Implementation on a Hardware Platform

Pestak, Thomas C. 28 June 2010 (has links)
No description available.
4

Network approach to impedance computerized tomography

Dai, Hong January 1985 (has links)
No description available.
5

Developing Real Time Automatic Step Detection in the three dimensional Accelerometer Signal implemented on a Microcontroller System

Seyrafi, Aylar January 2009 (has links)
Parkinson’s disease is associated with reduced coordination between respiration and locomotion. Neurological rehabilitation research therefore requires a long-term monitoring system that enables online analysis of the patient’s vegetative-locomotor coordination. In this work, a real-time step detector using a three-dimensional accelerometer signal is developed for patients with Parkinson‘s disease. The step detector complements a recently developed system of intelligent, wirelessly communicating sensors. The system helps address the scientific question of whether this coordination may serve as a measure of the rehabilitation progress of PD patients.
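
For illustration, a threshold-based step detector over the magnitude of a three-axis accelerometer signal might look like the sketch below. The threshold and refractory period are illustrative assumptions, not the tuned parameters of the thesis's detector.

```python
import numpy as np

def count_steps(accel_xyz, fs=50.0, threshold=1.2, refractory_s=0.3):
    """Count steps as local maxima of the acceleration magnitude.

    A peak counts as a step if it exceeds `threshold` (in g) and falls
    outside a refractory window that suppresses double counting; both
    values here are placeholders, not the thesis's tuned parameters.
    """
    mag = np.linalg.norm(accel_xyz, axis=1)
    refractory = int(refractory_s * fs)
    steps, last = 0, -refractory
    for i in range(1, len(mag) - 1):
        if (mag[i] > threshold and mag[i] >= mag[i - 1]
                and mag[i] >= mag[i + 1] and i - last >= refractory):
            steps += 1
            last = i
    return steps

# Synthetic walk at 2 steps/s: 1 g gravity on z plus impulses on x
t = np.arange(0, 10, 1 / 50.0)
x = 0.8 * np.maximum(0.0, np.sin(2 * np.pi * 2 * t)) ** 8
sig = np.column_stack([x, np.zeros_like(t), np.ones_like(t)])
print(count_steps(sig))  # expect about 20
```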
6

Efficient Bayesian Tracking of Multiple Sources of Neural Activity: Algorithms and Real-Time FPGA Implementation

January 2013 (has links)
Electrical neural activity detection and tracking have many applications in medical research and brain-computer interface technologies. In this thesis, we focus on the development of advanced signal processing algorithms to track neural activity and on the mapping of these algorithms onto hardware to enable real-time tracking. At the heart of these algorithms is particle filtering (PF), a sequential Monte Carlo technique used to estimate the unknown parameters of dynamic systems. First, we analyze the bottlenecks in existing PF algorithms and propose a new parallel PF (PPF) algorithm based on the independent Metropolis-Hastings (IMH) algorithm. We show that the proposed PPF-IMH algorithm improves root mean-squared error (RMSE) estimation performance, and we demonstrate that a parallel implementation of the algorithm results in a significant reduction in inter-processor communication. We apply our implementation on a Xilinx Virtex-5 field-programmable gate array (FPGA) platform to demonstrate that, for a one-dimensional problem, the PPF-IMH architecture with four processing elements and 1,000 particles can process input samples at 170 kHz while using less than 5% of the FPGA's resources. We also apply the proposed PPF-IMH to waveform-agile sensing to achieve real-time tracking of dynamic targets with high RMSE tracking performance. We next integrate the PPF-IMH algorithm to track the dynamic parameters in neural sensing when the number of neural dipole sources is known. We analyze the computational complexity of a PF-based method and propose the use of multiple particle filtering (MPF) to reduce the complexity. We demonstrate the improved performance of MPF using numerical simulations with both synthetic and real data. We also propose an FPGA implementation of the MPF algorithm and show that the implementation supports real-time tracking. For the more realistic scenario of automatically estimating an unknown number of time-varying neural dipole sources, we propose a new approach based on the probability hypothesis density filtering (PHDF) algorithm. The PHDF is implemented using particle filtering (PF-PHDF) and applied in a closed loop to first estimate the number of dipole sources and then their corresponding amplitude, location, and orientation parameters. We demonstrate the improved tracking performance of the proposed PF-PHDF algorithm and map it onto a Xilinx Virtex-5 FPGA platform to show its real-time implementation potential. Finally, we propose the use of sensor scheduling and compressive sensing techniques to reduce the number of active sensors, and thus the overall power consumption, of electroencephalography (EEG) systems. We propose an efficient sensor scheduling algorithm that adaptively configures EEG sensors at each measurement time interval to reduce the number of sensors needed for accurate tracking. We combine the sensor scheduling method with PF-PHDF and implement the system on an FPGA platform to achieve real-time tracking. We also investigate the sparsity of EEG signals and integrate compressive sensing with PF to estimate neural activity. Simulation results show that both the sensor scheduling and compressive sensing based methods achieve comparable tracking performance with a significantly reduced number of sensors. / Dissertation/Thesis / Ph.D. Electrical Engineering 2013
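
For readers unfamiliar with the sequential Monte Carlo core that the thesis builds on, the sketch below implements a generic bootstrap particle filter for a one-dimensional random-walk state. The serial multinomial resampling step at the end of each iteration is the bottleneck that PPF-IMH replaces with an independent Metropolis-Hastings stage; the model parameters here are illustrative, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=1000,
                 process_std=0.1, obs_std=0.5):
    """Minimal bootstrap particle filter for a 1-D random-walk state."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # propagate through the (assumed) random-walk dynamics
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # weight by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # multinomial resampling: the serial step PPF-IMH parallelizes
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

# Track a slowly drifting signal from noisy samples
truth = np.cumsum(rng.normal(0, 0.1, 50))
obs = truth + rng.normal(0, 0.5, 50)
print(bootstrap_pf(obs)[-5:])
```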
7

Dynamics of the North American Plate: Numerical Development, Mantle Flow Modeling, and Receiver Function Analysis

Liu, Shangxin 15 June 2021 (has links)
With only about one quarter of its margins composed of subduction zones, the North American plate is a unique continental plate, featuring a western active continental margin atop widespread slow seismic-velocity anomalies in the asthenosphere, an eastern passive continental margin covering several localized regions of slow seismic velocity, and a strong central cratonic root (Laurentia). The coexistence of prominent thermal and compositional structures beneath the North American plate complicates the construction of the numerical models needed to investigate the dynamics of the whole plate. Recently, a new-generation mantle convection code, ASPECT (Advanced Solver for Problems in Earth ConvecTion), equipped with fully adaptive mesh refinement (AMR), has opened up the potential to build a multi-scale global mantle flow model with a local high-resolution focus beneath the North American plate. Given the immature state of this new code for mantle flow modeling in 3-D spherical shell geometry at the beginning of my doctoral study, I first developed a new geoid algorithm for 3-D spherical AMR numerical modeling based on ASPECT. I then systematically benchmarked the velocity, dynamic topography, and geoid solutions from ASPECT against an analytical kernel approach on a uniform mesh, and further verified the accuracy of AMR mantle flow computation in 3-D spherical shell geometry. Based on the improved ASPECT code, I constructed global mantle flow models to investigate the driving forces for North American plate motion. I focus on the comparison between the effects of near-field slabs (the Aleutian, Central American, and Caribbean slabs) and far-field slabs (primarily those around western Pacific subduction margins) and find that the far-field slabs provide the dominant driving forces for the North American plate. I further identified that interpreting the extremely slow seismic anomalies associated with partial melt in the uppermost mantle around the southwestern U.S. as purely thermal in origin introduces considerably excessive resistance to North American plate motion. My numerical experiments show that a significantly reduced velocity-to-density scaling (0.05 or smaller in our models), compared with the original thermal scaling coefficient (0.25 in our models), must be applied to these negative seismic shear-velocity anomalies when constructing the buoyancy field in order to predict North American plate motion. I also examine the role of lower-mantle buoyancy, including the ancient descending Kula and Farallon plates and the active upwelling below the Pacific margin of the North American plate. Lower-mantle buoyancy primarily affects the amplitudes, rather than the patterns, of both North American and global plate motions. Another part of this dissertation reports a receiver-function analysis along a recent dense seismic array across the eastern U.S., from the western border of Ohio to the Atlantic coast of Virginia. 3D stacking yields shallowing trends of the 410-km and 660-km discontinuities and a thinning transition zone from inland to the coast. These results are hard to reconcile with any of the three existing hypotheses about vertical mantle flow beneath the eastern U.S.: edge-driven convection excited by the craton edge, hydrous upwelling from dehydration of the deep Farallon slab, and sinking of the delaminated or dripped mantle lithospheric block below the central West Virginia/Virginia border.
A hydro-thermal upwelling beneath the eastern U.S. coastal plain, due to a hydrated transition zone, together with neighboring passive hot upwelling induced by the descending Farallon slab in the lower mantle, is consistent with the 3D stacking results. The hydro-thermal upwelling hypothesis can also reconcile shallower tectonic processes with deeper mantle dynamics below the eastern U.S. through dehydration melting atop the 410-km discontinuity. Overall, this dissertation documents the technical details of improvements to the ASPECT code for mantle flow modeling and provides new insights into the dynamics and evolution of the North American continent. / Doctor of Philosophy / Chapter 1 details the motivation for this study, which covers three topics in geodynamics and seismology. New computational tools for geodynamic modeling of the Earth's interior have recently been developed extensively. One cutting-edge advance is adaptive mesh refinement (AMR), which enables the construction of mantle flow models with highly variable resolution within the domain. However, the accuracy of results from these multi-scale models needs to be verified. In addition, algorithms for the geoid (the equipotential surface of gravity on the Earth) in the spherical harmonic domain need to be updated to work with AMR mantle flow computation. Chapter 2 documents a geoid algorithm in the spherical harmonic domain that works with AMR mantle flow simulation. The algorithm is developed on top of the new-generation mantle convection code ASPECT (Advanced Solver for Problems in Earth ConvecTion). The numerical results, including velocity, topography, and geoid, are systematically benchmarked in both uniform and adaptively refined meshes; the AMR simulations achieve nearly the same high accuracy as a highly resolved uniform mesh. Chapter 3 systematically investigates the driving forces for North American plate motion. Through mantle flow modeling with the improved ASPECT code, I find that the remote subducting slabs (primarily those around western Pacific subduction margins), rather than the nearby marginal slabs (the Aleutian, Central American, and Caribbean slabs), provide the dominant driving forces for North American plate motion. I further confirm that a reasonable estimate of the positive buoyancy from the extremely slow seismic-velocity anomalies associated with partial melt in the uppermost mantle around the southwestern U.S. is necessary to predict North American plate motion. Lower-mantle buoyancy primarily affects the amplitudes, rather than the patterns, of both North American and global plate motions. Chapter 4 reports the results of a seismic survey of transition-zone structure (the mantle region between ~410-km and ~660-km depths) below the eastern U.S. The results can be explained by a hydro-thermal upwelling beneath the eastern U.S. coastal plain, due to a hydrated transition zone, together with neighboring passive hot upwelling induced by the descending Farallon slab (an ancient oceanic plate subducting below the North American plate) in the lower mantle. Chapter 5 concisely summarizes the major findings on these three topics.
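
The velocity-to-density scaling argument in the abstract reduces, in code, to applying a smaller coefficient to strongly negative shear-velocity anomalies when building the buoyancy field. The sketch below uses the 0.25 and 0.05 coefficients quoted in the abstract; the -3% anomaly cutoff separating melt-related from thermal anomalies is an illustrative assumption, since the abstract does not state how those regions are delineated.

```python
import numpy as np

def density_anomaly(dlnVs, thermal_scaling=0.25, reduced_scaling=0.05,
                    melt_threshold=-0.03):
    """Map shear-velocity anomalies dlnVs to density anomalies dln(rho).

    Uses the thermal coefficient (0.25) everywhere except for strongly
    negative anomalies attributed to partial melt, which get the reduced
    coefficient (0.05). The -3% cutoff is an assumption for illustration.
    """
    scaling = np.where(dlnVs < melt_threshold, reduced_scaling,
                       thermal_scaling)
    return scaling * dlnVs

# A -5% anomaly contributes far less negative buoyancy than pure thermal scaling implies
print(density_anomaly(np.array([-0.05, -0.01, 0.02])))
```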
8

Simulator test and evaluation of a drowsy driver detection system and revisions to drowsiness detection algorithms

Lewin, Mark Gustav 22 August 2008 (has links)
This study was undertaken to test and evaluate a complete drowsy-driver detection system in a driving simulator. The goal of the study was to recommend optimal specifications for a system to be studied further in an actual vehicle. The system used a set of algorithms developed from previously collected data and a set of previously optimized advisory tones, advisory messages, alarm stimuli, and drowsiness countermeasures. Detection occurred if eye closure or lane excursion exceeded predetermined thresholds. Data were obtained from six sleep-deprived subjects who drove a motion-base automobile simulator late at night. Each subject was trained to observe lane boundaries carefully, using a device that sounded an alarm whenever lane boundaries were exceeded. The performance aspect of the system dominated the detection process. None of the algorithms tracked well with the measures they were designed to estimate; correlations were much lower than expected. The algorithms relied heavily on the positioning of the vehicle relative to the lane. / Master of Science
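
The detection rule described above (alarm when eye closure or lane excursion exceeds a preset threshold) is simple to state in code. The sketch below is a hedged illustration; the threshold values are placeholders, not the study's optimized specifications.

```python
def drowsiness_alarm(eye_closure_frac, lane_excursion_m,
                     eye_threshold=0.12, lane_threshold=0.5):
    """Fire an alarm if eye closure or lane excursion exceeds a threshold.

    eye_closure_frac: fraction of a time window with the eyes closed.
    lane_excursion_m: distance the vehicle crossed the lane boundary.
    Both thresholds are illustrative, not the study's optimized values.
    """
    return (eye_closure_frac > eye_threshold
            or lane_excursion_m > lane_threshold)

# Eyes closed 15% of the window, vehicle well inside its lane: alarm fires
print(drowsiness_alarm(0.15, 0.0))
```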
9

Algorithm development in computational electrochemistry

Cutress, Ian James January 2011 (has links)
This thesis presents algorithm development in computational electrochemistry and applies new computer science concepts to voltammetric simulation. To begin, the thesis discusses why algorithm development is necessary and the inherent problems found in commercial simulation solvers, and from this discussion describes the need for simulators to keep abreast of recent computational developments. Algorithm development in this thesis proceeds in stages. Chapter 3 applies known theory of stripping voltammetry at a macroelectrode to the diffusional model of a microdisk, using finite difference and alternating direction implicit simulation techniques. Chapter 4 introduces the concept of parallel computing and how computational hardware has recently developed to take advantage of out-of-order calculations by processing them in parallel to reduce simulation time. The novel area of graphics card simulation for highly parallel algorithms is also explained in detail. Chapter 5 discusses the adaptation of voltammetric finite difference algorithms to a purely parallel format for simulation by explicit solution. Through explicit solution, finite difference algorithms are applied to electrode geometries that necessitate a three-dimensional solution: elliptical electrodes; square, rectangular, and microband electrodes; and dual microdisk electrodes in collector-generator mode. Chapter 6 introduces 'Random Walk' simulations, in which individual particles are modelled and their trajectories over time are calculated. The random walk technique in this thesis is improved for pure three-dimensional diffusion and adapted to graphics cards, allowing up to a factor-of-4000 increase in speed over previous computational methods. The method is applied to systems of low-concentration confined voltammetry (chapter 6.4), single-molecule detection by ultra-low-concentration cyclic voltammetry (chapter 6.5), and underpotential deposition of thallium on mobile silver nanoparticles (chapter 6.6). Overall, this thesis presents, and applies, a series of algorithm development concepts in computational electrochemistry.
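
As a flavor of the random walk technique in chapter 6, the sketch below propagates particles with independent Gaussian steps of variance 2*D*dt per axis, the standard Brownian-dynamics update for pure three-dimensional diffusion. This is the plain serial form under assumed parameter values; the thesis's contribution includes mapping this per-particle update onto graphics cards, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk_diffusion(n_particles=10000, n_steps=1000,
                          D=1e-9, dt=1e-6):
    """Brownian-dynamics random walk: each particle takes independent
    Gaussian steps with variance 2*D*dt per axis (D in m^2/s, dt in s).
    """
    step_std = np.sqrt(2.0 * D * dt)   # metres per step, per axis
    pos = np.zeros((n_particles, 3))
    for _ in range(n_steps):
        pos += rng.normal(0.0, step_std, pos.shape)
    return pos

pos = random_walk_diffusion()
# Mean squared displacement should approach 6*D*t in three dimensions
print(pos.var(axis=0).sum(), 6 * 1e-9 * 1000 * 1e-6)
```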
10

Optimal Information-Weighted Kalman Consensus Filter

Shiraz Khan (8782250) 30 April 2020 (has links)
Distributed estimation algorithms have received considerable attention lately, owing to advances in computing, communication, and battery technologies. They offer increased scalability, robustness, and efficiency. In applications such as formation flight, where any discrepancy between sensor estimates has severe consequences, it becomes crucial to require consensus of estimates among all sensors. The Kalman Consensus Filter (KCF) is a seminal work in the field of distributed consensus-based estimation that accomplishes this.

However, the KCF algorithm is mathematically sub-optimal and does not account for the cross-correlation between the estimates of sensors. Other popular algorithms, such as the Information-weighted Consensus Filter (ICF), rely on ad hoc definitions and approximations, rendering them sub-optimal as well. Another major drawback of KCF is that it uses unweighted consensus, i.e., each sensor assigns equal weight to the estimates of its neighbors. This has been shown to severely degrade the performance of KCF when some sensors cannot observe the target, and can even make the algorithm unstable.

In this work, we develop a novel algorithm, which we call the Optimal Kalman Consensus Filter for Weighted Directed Graphs (OKCF-WDG), that addresses both of these limitations of existing algorithms. OKCF-WDG integrates the KCF formulation with that of matrix-weighted consensus. The algorithm achieves consensus on a weighted digraph, enabling a directed flow of information within the network. This aspect of the algorithm offers significant performance improvements over KCF, as information may be directed from well-performing sensors to sensors that have high estimation error due to environmental factors or sensor limitations. We validate the algorithm through simulations and compare it to existing algorithms. The proposed algorithm outperforms existing algorithms by a considerable margin, especially when some sensors are naive (i.e., cannot observe the target).
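
For context, the sketch below shows one measurement-update step of a conventional KCF-style node: a Kalman correction plus an unweighted consensus term over the neighbors' estimates. It is a generic textbook-style sketch, not OKCF-WDG, whose contribution is precisely to replace the unweighted consensus sum with optimal matrix weights on a directed graph; all parameter values are illustrative.

```python
import numpy as np

def kcf_update(x_i, P_i, z_i, neighbor_estimates, H, R, gamma=0.1):
    """One KCF-style measurement update at a single sensor node.

    Standard Kalman correction on the local estimate, followed by an
    unweighted consensus term pulling toward each neighbor's estimate.
    gamma is an illustrative consensus gain.
    """
    # Kalman gain and measurement correction
    S = H @ P_i @ H.T + R
    K = P_i @ H.T @ np.linalg.inv(S)
    x = x_i + K @ (z_i - H @ x_i)
    # Unweighted consensus over neighbors (the step OKCF-WDG replaces
    # with optimal matrix weights on a directed graph)
    for x_j in neighbor_estimates:
        x = x + gamma * (x_j - x_i)
    P = (np.eye(len(x_i)) - K @ H) @ P_i
    return x, P

# Example: 2-state target, scalar position measurement, two neighbors
x_i, P_i = np.array([0.0, 1.0]), np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[0.25]])
neighbors = [np.array([0.2, 1.1]), np.array([0.4, 0.9])]
print(kcf_update(x_i, P_i, np.array([0.3]), neighbors, H, R))
```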
