281

Dynamic Multispectral Imaging System with Spectral Zooming Capability and Its Applications

Chen, Bing 21 July 2010 (has links)
The main focus of this dissertation is the development of a multispectral imaging system with spectral zooming capability and the demonstration of its promising medical applications by combining this technique with a microscope system. The multispectral imaging method realized in this dissertation is based on the 4-f spatial filtering principle. When collimated light is dispersed by a grating, a clearly linear spectral distribution appears at the Fourier plane of the Fourier transform lens group, following Abbe imaging theory and the optical Fourier transform principle. When optical images, rather than collimated light, are applied to this setup, the spectral distribution still maintains a linear relationship with spatial position at the Fourier plane, even though additional spectral crosstalk or overlap is introduced. A spatial filter or dynamic electrical filter placed at the Fourier plane allows random access to the desired spectral waveband and agile adjustment of the passband width, providing multispectral imaging functionality with spectral zooming capability. The system is flexible and efficient. A dual-channel spectral imaging system based on this multispectral imaging method and an acousto-optic tunable filter (AOTF) is also proposed in the dissertation. The multispectral imaging method and the AOTF form separate imaging channels, and the two spectral channels work together to enhance system efficiency. An AOTF retro-reflection design is explored, and experimental results demonstrate that this design can effectively improve the spectral resolution of the passband. Moreover, a field lens is introduced into the multispectral imaging system to enlarge the field of view of the detection range. The field lens also improves the spectral resolution and image quality and reduces the system size. This spectral imaging system can be used in many applications.
A compact prototype multispectral imaging system has been built, and many outdoor remote spectral imaging tests have been performed. The spectral imaging design has also been successfully applied to microscope imaging. The prototype multispectral microscopy system shows excellent capability for conventional optical inspection of medical specimens and for fluorescence emission imaging and diagnosis. Experimental results demonstrate that this design can realize spectral zoom and optical zoom at the same time, enabling fast spectral waveband adjustment with increased speed and flexibility at reduced cost.
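The spectral-zoom principle, wavelength mapped linearly to position at the Fourier plane so that a slit acts as a tunable bandpass filter, can be sketched numerically (the dispersion constant and slit model below are invented for illustration, not taken from the dissertation):

```python
import numpy as np

def fourier_plane_position(wavelength_nm, disp_mm_per_nm=0.01, origin_nm=400.0):
    """Assumed linear wavelength-to-position mapping at the Fourier plane."""
    return disp_mm_per_nm * (np.asarray(wavelength_nm) - origin_nm)

def slit_passband(wavelengths_nm, center_nm, width_nm):
    """Boolean mask of wavelengths transmitted by a slit centered on center_nm.

    Moving the slit retunes the passband center; widening it zooms the
    spectral band, which is the 'spectral zooming' idea in the abstract.
    """
    pos = fourier_plane_position(wavelengths_nm)
    lo = fourier_plane_position(center_nm - width_nm / 2)
    hi = fourier_plane_position(center_nm + width_nm / 2)
    return (pos >= lo) & (pos <= hi)

wl = np.linspace(400.0, 700.0, 301)           # visible band, 1 nm sampling
mask = slit_passband(wl, center_nm=550.0, width_nm=20.0)
```

Because the mapping is linear, selecting a waveband is purely a matter of slit position and width; no moving spectral components are needed.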
282

Nonlinear estimation and modeling of noisy time-series by dual Kalman filtering methods

Nelson, Alex Tremain 09 1900 (has links) (PDF)
Ph.D. / Electrical and Computer Engineering / Numerous applications require either the estimation or prediction of a noisy time-series. Examples include speech enhancement, economic forecasting, and geophysical modeling. A noisy time-series can be described in terms of a probabilistic model, which accounts for both the deterministic and stochastic components of the dynamics. Such a model can be used with a Kalman filter (or extended Kalman filter) to estimate and predict the time-series from noisy measurements. When the model is unknown, it must be estimated as well; dual estimation refers to the problem of estimating both the time-series, and its underlying probabilistic model, from noisy data. The majority of dual estimation techniques in the literature are for signals described by linear models, and many are restricted to off-line application domains. Using a probabilistic approach to dual estimation, this work unifies many of the approaches in the literature within a common theoretical and algorithmic framework, and extends their capabilities to include sequential dual estimation of both linear and nonlinear signals. The dual Kalman filtering method is developed as a method for minimizing a variety of dual estimation cost functions, and is shown to be an effective general method for estimating the signal, model parameters, and noise variances in both on-line and off-line environments.
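The dual estimation idea summarized above can be illustrated with a minimal numerical sketch: one Kalman filter estimates the signal using the current parameter estimate, while a second filter simultaneously estimates the model parameter. This is a hypothetical scalar AR(1) example, not Nelson's full framework; the noise variances and random-walk parameter model are assumptions.

```python
import numpy as np

def dual_kf(y, q=0.01, r=0.09, qa=1e-4):
    """Run state and parameter Kalman filters side by side on a scalar AR(1)
    signal x_k = a*x_{k-1} + w_k observed as y_k = x_k + v_k."""
    a_hat, pa = 0.0, 1.0          # parameter estimate and its variance
    x_hat, px = y[0], 1.0         # state estimate and its variance
    xs = [x_hat]
    for yk in y[1:]:
        x_prev = x_hat
        # state filter: estimate x_k using the current parameter estimate
        x_pred = a_hat * x_prev
        p_pred = a_hat ** 2 * px + q
        k = p_pred / (p_pred + r)
        x_hat = x_pred + k * (yk - x_pred)
        px = (1.0 - k) * p_pred
        # parameter filter: estimate a from the innovation, holding x fixed
        pa += qa                           # random-walk model for a
        h = x_prev                         # sensitivity of the prediction to a
        ka = pa * h / (h ** 2 * pa + r + q)
        a_hat += ka * (yk - a_hat * x_prev)
        pa *= (1.0 - ka * h)
        xs.append(x_hat)
    return np.array(xs), a_hat

# synthetic AR(1) series with measurement noise
rng = np.random.default_rng(0)
a_true, x = 0.9, 0.0
xs_true, ys = [], []
for _ in range(2000):
    x = a_true * x + rng.normal(0.0, 0.1)
    xs_true.append(x)
    ys.append(x + rng.normal(0.0, 0.3))
x_est, a_est = dual_kf(np.array(ys))
```

Both filters run sequentially over the data, so the scheme is usable on-line, in the spirit of the sequential dual estimation described in the abstract.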
283

Hull/Mooring/Riser coupled motion simulations of thruster-assisted moored platforms

Ryu, Sangsoo 17 February 2005 (has links)
To reduce the large motion responses of moored platforms in harsh deepwater environments, a thruster-assisted position mooring system can be applied. With such a system, global dynamic responses can be improved in terms of mooring line/riser top tensions, operational radii, and the top and bottom angles of the production risers. Kalman filtering, an optimal observer and estimator for stochastic disturbances, is implemented in the developed control algorithm to filter out wave-frequency responses. The performance of thruster-assisted moored offshore platforms was investigated in terms of six-degree-of-freedom motions and mooring line/riser top tensions by means of a fully coupled hull/mooring/riser dynamic analysis program in the time domain together with a spectral analysis. Motion analyses of a platform with and without thrusters are extensively compared. The numerical examples illustrate that, for deepwater position-keeping, a thruster-assisted moored platform can be an effective solution compared to a conventionally moored platform.
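The wave-frequency filtering step can be illustrated with a simple stand-in: the thesis uses a Kalman filter as the observer, but a first-order low-pass filter makes the separation of slow drift from wave-frequency motion easy to see. All signal parameters below are invented for illustration.

```python
import numpy as np

def low_pass(signal, alpha=0.02):
    """First-order low-pass (exponential smoothing), a simplified stand-in
    for the Kalman-filter wave filtering described above."""
    out = np.empty_like(signal)
    acc = signal[0]
    for i, s in enumerate(signal):
        acc += alpha * (s - acc)
        out[i] = acc
    return out

t = np.linspace(0.0, 600.0, 6001)               # 600 s at 10 Hz
drift = 5.0 * np.sin(2 * np.pi * t / 300.0)     # slow drift (300 s period)
waves = 2.0 * np.sin(2 * np.pi * t / 10.0)      # wave-frequency motion (10 s)
measured = drift + waves
filtered = low_pass(measured)
```

Feeding `filtered` rather than `measured` to the thruster controller means the thrusters counteract only the low-frequency drift, instead of fighting wave-frequency oscillations they cannot usefully oppose.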
284

Efficient algorithms for highly automated evaluation of liquid chromatography - mass spectrometry data

Fredriksson, Mattias January 2010 (has links)
Liquid chromatography coupled to mass spectrometry (LC-MS) has, due to its superior resolving capabilities, become one of the most common analytical instruments for determining the constituents of an unknown sample. Each type of sample requires a specific set-up of the instrument parameters, a procedure referred to as method development. During the requisite experiments, a huge amount of data is acquired, which often needs to be scrutinised in several different ways. This thesis elucidates data processing methods for handling this type of data in an automated fashion.
The properties of different commonly used digital filters were compared for LC-MS data de-noising, one of which was later selected as an essential data processing step in a developed peak detection routine. Reconstructed data were further discriminated into clusters with equal retention times, forming components, by an adopted method. This enabled an unsupervised and accurate comparison and matching routine by which components from the same sample could be tracked across different chromatographic conditions.
The results show that the characteristics of the noise have an impact on the performance of the tested digital filters. Peak detection with the proposed method was robust to the tested noise and baseline variations but functioned optimally when the analytical peaks had a frequency band different from the uninformative parts of the signal. The algorithm can easily be tuned to handle adjacent peaks with lower resolution. It was possible to assign peaks to components without the rotational and intensity ambiguities typically associated with common curve resolution methods, which are an alternative approach. The underlying functions for matching components between different experiments yielded satisfactory results. The methods have been tested on various experimental data with a high success rate.
/ The analytical instruments used to determine what a sample contains (and in what quantities) must usually be configured for the specific case in order to perform optimally. There are often a number of variables to investigate, with more or less influence on the result, and when the sample is unknown the optimal settings can rarely be predicted in advance. A liquid chromatograph with a mass spectrometer as detector is one such instrument, developed to separate and identify organic compounds dissolved in liquid. With the right settings, this very powerful system can often separate the constituents of the sample individually while simultaneously providing measures that can be related to their mass and quantity. The system is widely used by analytical laboratories, for example in the pharmaceutical industry, to examine the stability and purity of drug candidates. Optimizing the instrument for an unknown sample, however, requires that a fair number of experiments be performed with varied settings. The aim is to build, from a small set of designed experiments, a model that can point in the direction of the optimal settings. The data generated by the instrument for this type of application are in matrix form, since the instrument scans and stores the intensity over a range of masses at every time point a measurement is made. If an analyte reaches the detector at a given time, it appears as one or more superimposed normally distributed peaks forming a specific pattern on an otherwise irregular background signal. Besides the requirement that all peaks in the final data set should preferably be well separated and correctly shaped, the analysis time should be as short as possible. Even so, it is not unusual for a finished data set to contain tens of millions of measured intensities, and around 10 experiments with different conditions may be required to achieve an acceptable result. The data sets can, moreover, contain a large amount of noise and other interfering signals, which makes them particularly cumbersome to interpret and evaluate.
Since the components also often change places in a data set when the conditions are changed, a manual evaluation can take a very long time. The purpose of this thesis has been to find methods of use to anyone who needs to quickly and automatically compare data sets analysed under different chromatographic conditions but with the same sample. The ultimate goal has primarily been to identify how the different components of the sample have moved between the data sets, but the constituent steps can also be used for other applications.
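The de-noising and peak-detection steps described in the abstract can be sketched as follows. This is a simplified stand-in (a moving-average filter and local-maxima detection on a synthetic chromatogram), not the thesis's algorithms:

```python
import numpy as np

def smooth(signal, width=5):
    """Moving-average de-noising, a simple stand-in for the digital filters
    compared in the thesis."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def detect_peaks(signal, threshold):
    """Local maxima of the smoothed trace that rise above a noise threshold."""
    s = smooth(signal)
    return [i for i in range(1, len(s) - 1)
            if s[i - 1] < s[i] >= s[i + 1] and s[i] > threshold]

# synthetic chromatogram: two Gaussian peaks on a noisy baseline
t = np.arange(100.0)
rng = np.random.default_rng(42)
trace = (np.exp(-0.5 * ((t - 30.0) / 4.0) ** 2)
         + 0.8 * np.exp(-0.5 * ((t - 70.0) / 4.0) ** 2)
         + rng.normal(0.0, 0.02, t.size))
peaks = detect_peaks(trace, threshold=0.3)
```

Once peaks are detected, grouping those with equal retention times into components, and matching components across runs, proceeds on the detected positions rather than the raw matrix, which is what makes the unsupervised comparison routine tractable.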
285

Context Dependent Thresholding and Filter Selection for Optical Character Recognition

Kieri, Andreas January 2012 (has links)
Thresholding algorithms and filters are of great importance when utilizing OCR to extract information from text documents such as invoices. Invoice documents vary greatly and since the performance of image processing methods when applied to those documents will vary accordingly, selecting appropriate methods is critical if a high recognition rate is to be obtained. This paper aims to determine if a document recognition system that automatically selects optimal processing methods, based on the characteristics of input images, will yield a higher recognition rate than what can be achieved by a manual choice. Such a recognition system, including a learning framework for selecting optimal thresholding algorithms and filters, was developed and evaluated. It was established that an automatic selection will ensure a high recognition rate when applied to a set of arbitrary invoice images by successfully adapting and avoiding the methods that yield poor recognition rates.
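As a concrete example of one candidate binarization method such a system might choose among, here is Otsu's global thresholding in plain NumPy. This is an illustrative sketch; the thesis's learning framework and its actual set of thresholding algorithms and filters are not shown.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance
    of the grayscale histogram (values assumed in 0..255)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# strongly bimodal "document" image: dark text on a light background
img = np.array([50] * 100 + [200] * 100)
thr = otsu_threshold(img)
```

A selection framework like the one in the paper would score several such binarizers (and pre-processing filters) against features of the input image and keep whichever maximizes the downstream OCR recognition rate.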
286

GPU Implementation of the Particle Filter / GPU implementation av partikelfiltret

Gebart, Joakim January 2013 (has links)
This thesis work analyses the obstacles faced when adapting the particle filtering algorithm to run on massively parallel compute architectures. Graphics processing units are one example of such architectures, allowing the developer to distribute the computational load over hundreds or thousands of processor cores. This thesis studies an implementation written for NVIDIA GeForce GPUs, yielding varying speed-ups, up to 3000% in some cases, compared to the equivalent algorithm running on a CPU. The particle filter, also known in the literature as sequential Monte Carlo methods, is an algorithm used for signal processing when the system generating the signals has highly nonlinear behaviour or non-Gaussian noise distributions, where a Kalman filter and its extended variants are not effective. The particle filter was chosen as a good candidate for parallelisation because of its inherently parallel nature. There are, however, several steps of the classic formulation in which computations depend on other computations in the same step, requiring them to be run in sequence instead of in parallel. To avoid these difficulties, alternative ways of computing the results must be used, such as parallel scan operations and scatter/gather methods. Another area where parallel programming is still not widespread is pseudo-random number generation. Pseudo-random numbers are required by the algorithm to simulate the process noise, as well as by the resampling step used to avoid the particle depletion problem. In this thesis a recently published counter-based pseudo-random number generator is used.
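The scan-based reformulation mentioned above can be illustrated for the resampling step: systematic resampling reduces to a prefix scan (cumulative sum) followed by a gather, both of which parallelise well on a GPU. This is a NumPy sketch of the idea, not the thesis's CUDA implementation.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Scan-based systematic resampling.

    The cumulative sum is an inclusive prefix scan, and the sorted lookup is
    a gather; both map directly onto parallel GPU primitives, avoiding the
    sequential loop of naive resampling formulations.
    """
    n = len(weights)
    cdf = np.cumsum(weights / weights.sum())   # inclusive prefix scan
    u = (rng.random() + np.arange(n)) / n      # one stratified draw per particle
    return np.searchsorted(cdf, u)             # gather: ancestor indices

rng = np.random.default_rng(1)
idx_uniform = systematic_resample(np.ones(8), rng)
idx_degenerate = systematic_resample(np.array([1.0, 0.0, 0.0, 0.0]), rng)
```

With uniform weights every particle survives exactly once; with all weight on one particle, every resampled index points at it, which is the resampling behaviour that combats particle depletion.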
287

Design and Application of Discrete Explicit Filters for Large Eddy Simulation of Compressible Turbulent Flows

Deconinck, Willem 24 February 2009 (has links)
In the context of Large Eddy Simulation (LES) of turbulent flows, there is a current need to compare and evaluate different proposed subfilter-scale models. In order to carefully compare subfilter-scale models, and to compare LES predictions to Direct Numerical Simulation (DNS) results (the latter would be helpful in the comparison and validation of models), there is a real need for a "grid-independent" LES capability, and explicit filtering methods offer one means by which this may be achieved. Explicit filtering provides a means of eliminating aliasing errors, allows direct control of commutation errors, and, most importantly, decouples the mesh spacing from the filter width; this coupling is the primary reason why it is difficult to compare LES solutions obtained on different grids. This thesis considers the design and assessment of discrete explicit filters and their application to isotropic turbulence prediction.
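A minimal sketch of a discrete explicit filter of the kind discussed above, using the common trapezoidal-rule weights (1/4, 1/2, 1/4) on a periodic 1-D field. This illustrates the concept, not the specific filters designed in the thesis:

```python
import numpy as np

def explicit_filter(u, weights=(0.25, 0.5, 0.25)):
    """Discrete explicit filter (trapezoidal-rule weights, filter width
    2*dx) applied to a 1-D periodic field via neighbour averaging."""
    a, b, c = weights
    return a * np.roll(u, 1) + b * u + c * np.roll(u, -1)

n = 64
x = 2.0 * np.pi * np.arange(n) / n
u = np.sin(x) + np.sin(16.0 * x)   # well-resolved mode + marginal mode
u_bar = explicit_filter(u)
```

This stencil has transfer function G(k) = cos^2(k*dx/2): the k = 1 mode passes nearly untouched while the k = 16 mode is attenuated by exactly one half, and the mean is preserved. Widening the stencil widens the filter independently of the mesh spacing, which is the decoupling the abstract emphasizes.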
288

Embedded network firewall on FPGA

Ajami, Raouf 22 November 2010
The Internet has profoundly changed everyday life. A variety of information and online services are offered by companies and organizations via the Internet. Although these services have substantially improved the quality of life, they have also brought new challenges and difficulties. Information security can easily be compromised by many threats from attackers with different motives. A catastrophic event can occur when a computer or a computer network is exposed to the Internet without any security protection, allowing an attacker to compromise the computer or the network resources with destructive intent.
These security issues can be mitigated by setting up a firewall between the inside network and the outside world. A firewall is a software or hardware network device that enforces a security policy on inbound and outbound network traffic, installed either on a single host or on a network gateway. A packet filtering firewall inspects the header fields of each network data packet according to its configuration and permits or denies the data passing through the network.
The objective of this thesis is to design a highly customizable hardware packet filtering firewall to be embedded on a network gateway. This firewall can process data packets based on: source and destination TCP/UDP port number, source and destination IP address range, source MAC address, and the combination of source IP address and destination port number. It is capable of accepting configuration changes in real time. An Altera FPGA platform has been used to implement and evaluate the network firewall.
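The first-match packet filtering behaviour described above can be modelled in software as follows. The rule set is hypothetical; the thesis implements this kind of logic in FPGA hardware, where all rules can be checked in parallel.

```python
from ipaddress import ip_address, ip_network

# Hypothetical rule table: each rule matches on header fields and yields
# permit/deny; the first matching rule wins, with a default-deny fallback.
RULES = [
    {"proto": "tcp", "dst_port": 22, "src_net": "192.168.1.0/24", "action": "permit"},
    {"proto": "tcp", "dst_port": 22, "src_net": "0.0.0.0/0", "action": "deny"},
    {"proto": "udp", "dst_port": 53, "src_net": "0.0.0.0/0", "action": "permit"},
]

def filter_packet(proto, src_ip, dst_port, default="deny"):
    """Return the action of the first rule matching the packet headers."""
    for rule in RULES:
        if (rule["proto"] == proto
                and rule["dst_port"] == dst_port
                and ip_address(src_ip) in ip_network(rule["src_net"])):
            return rule["action"]
    return default

verdict = filter_packet("tcp", "192.168.1.5", 22)
```

Here SSH (port 22) is permitted only from the local subnet and denied from everywhere else, while DNS over UDP is allowed globally; anything unmatched falls through to the default deny.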
290

Estimation Strategies for Constrained and Hybrid Dynamical Systems

Parish, Julie Marie Jones 2011 August 1900 (has links)
The estimation approaches examined in this dissertation focus on manipulating system dynamical models to allow the well-known form of the continuous-discrete extended Kalman filter (CDEKF) to accommodate constrained and hybrid systems. This estimation algorithm filters sequential discrete measurements for nonlinear continuous systems modeled with ordinary differential equations. The aim of the research is to broaden the class of systems for which this common tool can be easily applied. Equality constraints, holonomic or nonholonomic, or both, are commonly found in the system dynamics for vehicles, spacecraft, and robotics. These systems are frequently modeled with differential algebraic equations. In this dissertation, three tools for adapting the dynamics of constrained systems for implementation in the CDEKF are presented. These strategies address (1) constrained systems with quasi-velocities, (2) kinematically constrained redundant coordinate systems, and (3) systems for which an equality constraint can be broken. The direct linearization work for constrained systems modeled with quasi-velocities is demonstrated to be particularly useful for systems subject to nonholonomic constraints. Concerning redundant coordinate systems, the "constraint force" perspective is shown to be an effective approximation for facilitating implementation of the CDEKF while providing similar performance to that of the fully developed estimation scheme. For systems subject to constraint violation, constraint monitoring methods are presented that allow the CDEKF to autonomously switch between constrained and unconstrained models. The efficacy of each of these approaches is shown through illustrative examples. Hybrid dynamical systems are those modeled with both finite- and infinite-dimensional coordinates. The associated governing equations are integro-partial differential equations. As with constrained systems, these governing equations must be transformed in order to employ the CDEKF.
Here, this transformation is accomplished through two finite-dimensional representations of the infinite-dimensional coordinate. The application of these two assumed modes methods to hybrid dynamical systems is outlined, and the performance of the approaches within the CDEKF is compared. Initial simulation results indicate that a quadratic assumed modes approach is more advantageous than a linear assumed modes approach for implementation in the CDEKF. The dissertation concludes with a direct estimation methodology that constructs the Kalman filter directly from the system kinematics, potential energy, and measurement model. This derivation provides a straightforward method for building the CDEKF for discrete systems and relates these direct estimation ideas to the other work presented throughout the dissertation. Together, this collection of estimation strategies provides methods for expanding the class of systems for which a proven, well-known estimation algorithm, the extended Kalman filter, can be applied. The accompanying illustrative examples and simulation results demonstrate the utility of the methods proposed herein.
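The constraint-handling idea can be sketched as follows: after a standard measurement update, the estimate is projected back onto the constraint surface. This is a minimal hypothetical example with a unit-circle equality constraint and identity covariances, not the dissertation's CDEKF formulation.

```python
import numpy as np

def project_to_circle(x):
    """Project an estimate back onto the constraint g(x) = |x|^2 - 1 = 0."""
    return x / np.linalg.norm(x)

def constrained_update(x_pred, P, y, R):
    """One linear measurement update (y = x + v), then constraint projection."""
    K = P @ np.linalg.inv(P + R)        # Kalman gain for H = I
    x_upd = x_pred + K @ (y - x_pred)   # unconstrained update
    return project_to_circle(x_upd)     # enforce the equality constraint

x_pred = np.array([0.0, 1.0])           # prediction, already on the circle
y = np.array([0.9, 0.1])                # noisy measurement of the state
x_new = constrained_update(x_pred, np.eye(2), y, np.eye(2))
```

The projected estimate satisfies the constraint exactly while still moving toward the measurement, which is the practical effect the constrained-filtering strategies above aim for within the standard CDEKF machinery.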
