  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
431

3D tracking between satellites using monocular computer vision

Malan, Daniel Francois 03 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2005. / Visually estimating three-dimensional position, orientation and motion between an observer and a target is an important problem in computer vision. Solutions which compute three-dimensional movement from two-dimensional intensity images usually rely on stereoscopic vision, though some research has also been done on systems utilising a single (monocular) camera. This thesis investigates methods for estimating position and pose from monocular image sequences. The intended future application is visual tracking between satellites flying in close formation. The ideas explored in this thesis build on methods developed for camera calibration and structure from motion (SfM), all of which rely heavily on different variations of the Kalman filter. After describing the problem from a mathematical perspective, we develop different approaches to solving the estimation problem. The approaches are successfully tested on simulated as well as real-world image sequences, and their performance is analysed.
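The estimators above build on the Kalman filter. As background, a minimal linear Kalman filter predict/update cycle can be sketched as follows; this is a generic constant-velocity toy problem, not the thesis's monocular pose filter, and all model matrices and noise levels are invented for illustration:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate state estimate and covariance through the model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z.
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model: state = [position, velocity], measure position only.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)                        # small process noise
R = np.array([[0.25]])                      # measurement noise variance (0.5^2)

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
true_pos = 0.0
for _ in range(50):
    true_pos += dt * 1.0                    # target moves at 1 unit/step
    z = np.array([true_pos + rng.normal(scale=0.5)])
    x, P = kalman_step(x, P, z, F, H, Q, R)

print("position error:", abs(x[0] - true_pos))
print("velocity estimate:", x[1])
```

The same predict/update structure carries over to the nonlinear (extended/unscented) variants used for pose estimation, where `F` and `H` are replaced by linearizations of the motion and camera models.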
432

Estimation and Control of Resonant Systems with Stochastic Disturbances

Nauclér, Peter January 2008 (has links)
The presence of vibration is an important problem in many engineering applications. Various passive techniques have traditionally been used to reduce waves and vibrations and their harmful effects. Passive techniques are, however, difficult to apply in the low frequency region, and they often involve adding mass to the system, which is undesirable in many applications.

As an alternative, active techniques can be used to manipulate system dynamics and to control the propagation of waves and vibrations. This thesis deals with modeling, estimation and active control of systems that have resonant dynamics. The systems are exposed to stochastic disturbances: some excite the system and generate vibrational responses, while others corrupt the measured signals.

Feedback control of a beam with attached piezoelectric elements is studied. A detailed modeling approach is described and system identification techniques are employed for model order reduction. Disturbance attenuation of a non-measured variable proves to be difficult; this issue is analyzed further and the problems are shown to depend on fundamental design limitations.

Feedforward control of traveling waves is also considered. A device with properties analogous to those of an electrical diode is introduced. An 'ideal' feedforward controller based on the mechanical properties of the system is derived; it has, however, poor noise rejection properties and therefore needs to be modified. A number of feedforward controllers that treat the measurement noise in a statistically sound way are derived.

Separation of overlapping traveling waves is another topic under investigation. This operation is also sensitive to measurement noise. The problem is thoroughly analyzed and Kalman filtering techniques are employed to derive wave estimators with high statistical performance.

Finally, a nonlinear regression problem with close connections to unbalance estimation of rotating machinery is treated. Different estimation techniques are derived and analyzed with respect to their statistical accuracy. The estimators are evaluated using the example of separator balancing.
433

A Content Boosted Collaborative Filtering Approach For Recommender Systems Based On Multi Level And Bidirectional Trust Data

Sahinkaya, Ferhat 01 June 2010 (has links) (PDF)
As the Internet became widespread all over the world, people started to share great amounts of data on the web, and almost everyone joined different data networks in order to have quick access to shared data and to survive the information overload on the web. Recommender systems were created to provide users with more personalized information services and to make data available to people without extra effort. Most of these systems aim to learn user preferences, explicitly or implicitly depending on the system, and to guess "preferable data" that has not already been consumed by the user. Traditional approaches use user/item similarity or item content information to filter items for the active user; however, most recent approaches also consider the trustworthiness of users. By using trustworthiness, only users that are reliable in the target user's opinion are considered during information retrieval. Within this thesis work, a content boosted method of using trust data in recommender systems is proposed. It aims to show that people who trust the active user, as well as people whom the active user trusts, have opinions correlated with those of the active user, so the items rated by these people can also be used when offering new items. For this research, the www.epinions.com site was crawled in order to access user trust relationships, product content information and review ratings, which are ratings given by users to product reviews written by other users.
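The bidirectional-trust premise can be illustrated with a toy trust-weighted rating predictor. This is a sketch of the general idea only; the user names, weights and data layout are invented, and the thesis's actual content-boosted method is more involved:

```python
def predict_rating(active_user, item, ratings, trust):
    """Trust-weighted rating prediction (illustrative sketch).

    ratings: dict[(user, item)] -> rating
    trust:   dict[user] -> dict[user] -> weight in (0, 1]
    Neighbours are users the active user trusts *and* users who trust
    the active user (bidirectional trust).
    """
    outgoing = trust.get(active_user, {})                 # whom the user trusts
    incoming = {u: w[active_user] for u, w in trust.items()
                if active_user in w}                      # who trusts the user
    neighbours = {**incoming, **outgoing}
    num = den = 0.0
    for u, w in neighbours.items():
        if (u, item) in ratings:
            num += w * ratings[(u, item)]
            den += w
    return num / den if den else None

ratings = {("alice", "i1"): 4.0, ("bob", "i1"): 2.0}
trust = {"me": {"alice": 1.0}, "bob": {"me": 0.5}}
print(predict_rating("me", "i1", ratings, trust))  # (1.0*4 + 0.5*2) / 1.5
```

Here "alice" contributes through the outgoing trust link and "bob" through the incoming one, so both directions widen the pool of usable ratings, which is exactly the effect the thesis sets out to measure.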
434

A Singular Value Decomposition Approach For Recommendation Systems

Osmanli, Osman Nuri 01 July 2010 (has links) (PDF)
Data analysis has become a very important area for both companies and researchers as a consequence of the technological developments of recent years. Companies try to increase their profit by analyzing the existing data about their customers and making decisions for the future according to the results of these analyses. In parallel with the needs of companies, researchers are investigating different methodologies to analyze data more accurately and with high performance. Recommender systems are among the most popular and widespread data analysis tools. A recommender system applies knowledge discovery techniques to existing data and makes personalized product recommendations during live customer interaction. However, the huge growth in the numbers of customers and products, especially on the internet, poses challenges for recommender systems: producing high quality recommendations and performing millions of recommendations per second. In order to improve the performance of recommender systems, researchers have proposed many different methods. The Singular Value Decomposition (SVD) technique, based on dimension reduction, is one of these methods; it produces high quality recommendations but requires very expensive matrix calculations. In this thesis, we propose and experimentally validate some contributions to the SVD technique which are based on user and item categorization. In addition, we add tags to the classical 2D (User-Item) SVD technique and report the results of experiments. The results are promising for building more accurate and scalable recommender systems.
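The core operation the thesis builds on, rank-k truncated SVD over a user-item rating matrix, can be sketched as follows. The toy matrix and rank are invented for illustration; production systems work with sparse factorizations rather than a dense SVD of the full matrix:

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([[5.0, 4.0, 0.0, 0.0],
              [4.0, 5.0, 1.0, 0.0],
              [1.0, 0.0, 5.0, 4.0],
              [0.0, 1.0, 4.0, 5.0]])

# Rank-k truncated SVD: keep only the k strongest latent factors.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # low-rank reconstruction

# Recommend, for user 0, the unrated item with the highest predicted score.
user = 0
unrated = np.where(R[user] == 0)[0]
best = unrated[np.argmax(R_hat[user, unrated])]
print("recommend item", best, "for user", user)
```

The reconstruction `R_hat` fills in unobserved entries from the latent factors; by the Eckart-Young theorem it is the best rank-k approximation of `R`, which is why predictions improve as long as the true preference structure is genuinely low-rank.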
435

A Hybrid Video Recommendation System Based On A Graph Based Algorithm

Ozturk, Gizem 01 September 2010 (has links) (PDF)
This thesis proposes the design, development and evaluation of a hybrid video recommendation system based on a graph algorithm called Adsorption. Adsorption is a collaborative filtering algorithm in which relations between users are used to make recommendations; it is used to generate the base recommendation list. In order to overcome the problems that occur in a pure collaborative system, content based filtering is injected. Content based filtering uses the idea of suggesting similar items that match user preferences. To apply it, the base recommendation list is first updated by removing weak recommendations. Following this, item similarities of the remaining list are calculated and new items are inserted to form the final recommendations. Thus, collaborative recommendations are strengthened by item similarities, and the developed hybrid system combines both collaborative and content based approaches to produce more effective suggestions.
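The content-injection step described above (prune weak collaborative recommendations, then add content-similar items) can be sketched as follows. The feature vectors, threshold values and scoring are invented for illustration and are not the thesis's actual configuration:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two content feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical content vectors (e.g. genre/keyword features per video).
features = {
    "v1": np.array([1.0, 0.0, 1.0]),
    "v2": np.array([1.0, 0.1, 0.9]),
    "v3": np.array([0.0, 1.0, 0.0]),
}

# Base list from the collaborative (Adsorption-style) step: (item, score).
base = [("v1", 0.9), ("v3", 0.2)]

# 1) Drop weak collaborative recommendations.
strong = [(i, s) for i, s in base if s >= 0.5]

# 2) Inject content-similar items not already recommended.
final = dict(strong)
for i, _ in strong:
    for j, f in features.items():
        if j not in final and cosine(features[i], f) > 0.8:
            final[j] = cosine(features[i], f)

print(sorted(final))  # ['v1', 'v2']: v2 joins via similarity to v1, weak v3 is dropped
```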
436

Dynamics and correlations in sparse signal acquisition

Charles, Adam Shabti 08 June 2015 (has links)
One of the most important capabilities of engineered and biological systems is the ability to acquire and interpret information from the surrounding world accurately and on time-scales relevant to the tasks critical to system performance. This classical concept of efficient signal acquisition has been a cornerstone of signal processing research, spawning traditional sampling theorems (e.g. Shannon-Nyquist sampling), efficient filter designs (e.g. the Parks-McClellan algorithm), novel VLSI chipsets for embedded systems, and optimal tracking algorithms (e.g. Kalman filtering). Traditional techniques made minimal assumptions about the signals being measured and interpreted, essentially assuming only a limited bandwidth. While these assumptions produced the foundational works in signal processing, the recent ability to collect and analyze large datasets has allowed researchers to see that many important signal classes have much more regularity than finite bandwidth alone. One of the major advances of modern signal processing is to improve greatly on classical results by leveraging more specific signal statistics: by assuming even very broad classes of signals, acquisition and recovery can be much improved in regimes where classical techniques are extremely pessimistic. One of the most successful signal assumptions to gain popularity in recent years is the notion of sparsity. Under the sparsity assumption, the signal is assumed to be composed of a small number of atomic signals from a potentially large dictionary. This limit on the underlying degrees of freedom (the number of atoms used), as opposed to the ambient dimension of the signal, has allowed for improved signal acquisition, in particular when the number of measurements is severely limited.
While techniques for leveraging sparsity have been explored extensively in many contexts, work in this regime has typically concentrated on static measurement systems that produce static measurements of static signals. Many systems, however, have non-trivial dynamic components, either in the measurement system's operation or in the nature of the signal being observed. Given the promising prior work leveraging sparsity for signal acquisition and the large number of dynamical systems and signals in important applications, it is critical to understand whether sparsity assumptions are compatible with dynamical systems. This work therefore seeks to understand how dynamics and sparsity can be used jointly in various aspects of signal measurement and inference. Specifically, it looks at three ways that dynamical systems and sparsity assumptions can interact. In terms of measurement systems, we analyze a dynamical neural network that accumulates signal information over time, and prove a series of bounds on the length of the input signal driving the network that can be recovered from the values at the network nodes~[1--9]. We also analyze sparse signals that are generated via a dynamical system (i.e. a series of correlated, temporally ordered, sparse signals). For this class of signals, we present a series of inference algorithms that leverage both dynamics and sparsity information, improving the potential for signal recovery in a host of applications~[10--19]. As an extension of dynamical filtering, we show how these dynamic filtering ideas extend to the broader class of spatially correlated signals; specifically, we explore how sparsity and spatial correlations can improve inference of material distributions and spectral super-resolution in hyperspectral imagery~[20--25]. Finally, we analyze dynamical systems that perform optimization routines for sparsity-based inference.
We analyze a networked system driven by a continuous-time differential equation and show that such a system is capable of recovering a large variety of different sparse signal classes~[26--30].
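The optimization such sparsity-based inference performs is typically of the lasso form, min over x of 0.5*||y - Ax||^2 + lam*||x||_1. A standard discrete-time counterpart of a continuous-time sparse-recovery network is the iterative soft-thresholding algorithm (ISTA), sketched here as a generic illustration rather than the author's dynamical system; the problem sizes and parameters below are invented:

```python
import numpy as np

def ista(A, y, lam, step, iters):
    """Iterative soft-thresholding for min_x 0.5*||y - Ax||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                 # gradient of the smooth term
        z = x - step * grad                      # gradient descent step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60)) / np.sqrt(30)      # underdetermined measurements
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]           # 3-sparse signal
y = A @ x_true                                   # noiseless observations

x_hat = ista(A, y, lam=0.02, step=0.1, iters=2000)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With only 30 measurements of a 60-dimensional signal the linear system is underdetermined, yet the l1 penalty drives the iterate onto the correct 3-element support, which is the phenomenon the thesis's dynamical-systems analysis makes rigorous in continuous time.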
437

Performance and Implementation Aspects of Nonlinear Filtering

Hendeby, Gustaf January 2008 (has links)
[Translated from the Swedish popular summary:] In many situations it is important to extract as much, and as accurate, information as possible from the available measurements. Extracting information about, for example, the position and velocity of an aircraft is called filtering; here the position and velocity are examples of states of the aircraft, which in turn is a system. Typical examples of such problems are various surveillance systems, but the same needs are becoming increasingly common in ordinary consumer products such as mobile phones (which report where the phone is), car navigation aids, and the placement of experience-enhancing graphics in films and TV shows. A standard tool for extracting the needed information is nonlinear filtering, with methods especially common in positioning, navigation and target tracking applications. This thesis examines in depth several questions related to nonlinear filtering: How does one evaluate how well a filter or detector performs? What distinguishes different methods, and what does that mean for their properties? How should the computers used to extract the information be programmed? The measure most often used to describe filter performance is the RMSE (root mean square error), in essence a measure of how far from the true state the obtained estimate can be expected to lie on average. One advantage of the RMSE is that it is bounded by the Cramér-Rao lower bound (CRLB). The thesis presents methods for determining the effect different noise distributions have on the CRLB. Noise comprises the disturbances and errors that always arise when measuring or modeling a behavior, and a noise distribution is a statistical description of how the noise behaves. The study of the CRLB leads to an analysis of intrinsic accuracy (IA), the inherent accuracy of the noise.
For linear systems, straightforward results are obtained that can be used to determine whether stated goals can be met. The same method can also indicate whether nonlinear methods such as the particle filter can be expected to outperform linear methods such as the Kalman filter. Corresponding IA-based methods can also be used to evaluate detection algorithms, which are used to discover faults or changes in a system. Using the RMSE to evaluate filtering algorithms captures one aspect of the result, but many other properties can be of interest. Simulations in the thesis show that even if two filtering methods give the same RMSE performance, the state distributions they produce can differ greatly depending on the noise affecting the studied system, and these differences can matter. As an alternative, the Kullback divergence, a statistical measure of how much two distributions differ, is used here; it clearly exposes the shortcomings of relying on RMSE analyses alone. Two filtering algorithms are analyzed in more detail: the Rao-Blackwellized particle filter (RBPF) and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm that makes it easier to implement in a computer program and can give better insight into how the algorithm works. The UKF study focuses on the underlying unscented transform, which describes what happens to a noise distribution when it is transformed, for example by a measurement. The results comprise a number of simulation studies illustrating the behavior of the different methods, together with a comparison between the UT and first- and second-order Gauss approximation formulas.
The thesis also describes a parallel implementation of a particle filter and an object-oriented framework for filtering in the programming language C++. The particle filter has been implemented on a graphics card, an example of inexpensive hardware found in most modern computers that is mostly used for computer games and therefore rarely used to its full potential. A parallel particle filter, that is, a program that runs several parts of the particle filter simultaneously, opens up new applications where speed and good performance are important. The object-oriented filtering framework achieves the flexibility and performance needed for large-scale Monte Carlo simulations through modern software design, and can also ease the step from a prototype signal processing system to a finished product. / Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details. The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems, the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF).
A similar analysis is used for change detection performance analysis, which once again shows the importance of IA. A problem with RMSE evaluation is that it captures only one aspect of the resulting estimate, while the distributions of the estimates can differ substantially. To address this, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation. Two estimation algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), by some authors referred to as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, making it easier to implement. In addition, the presentation can give new intuition for the RBPF as a stochastic Kalman filter bank. In the analysis of the UKF the focus is on the unscented transform (UT). The results include several simulation studies and a comparison with the first- and second-order Gauss approximations in the limit case. This thesis also presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers, and it is rarely used to its full potential. Being able to implement the PF in parallel makes possible new applications where speed and good performance are important. The object-oriented filtering framework provides the flexibility and performance needed for large scale Monte Carlo simulations using modern software design methodology. It can also help to efficiently turn a prototype into a finished product.
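The particle filter discussed above can be sketched in a few lines. This is a generic textbook bootstrap version (scalar random-walk model, multinomial resampling), not the thesis's GPU implementation, and all model parameters are invented:

```python
import numpy as np

def particle_filter(zs, n=1000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_k = x_{k-1} + v_k, z_k = x_k + e_k."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n)              # samples from the prior
    estimates = []
    for z in zs:
        particles += rng.normal(0.0, q, n)           # propagate through model
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)  # measurement likelihood
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)             # multinomial resampling
        particles = particles[idx]
        estimates.append(particles.mean())           # posterior-mean estimate
    return np.array(estimates)

rng = np.random.default_rng(42)
truth = np.cumsum(rng.normal(0.0, 0.1, 40))          # random-walk ground truth
zs = truth + rng.normal(0.0, 0.5, 40)                # noisy measurements
est = particle_filter(zs)
print("final tracking error:", abs(est[-1] - truth[-1]))
```

The propagate/weight/resample steps over the `n` particles are independent per particle, which is exactly the structure that makes the algorithm amenable to the parallel GPU implementation described in the thesis.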
438

Non-Linear Adaptive Bayesian Filtering for Brain Machine Interfaces

Li, Zheng January 2010 (has links)
Brain-machine interfaces (BMI) are systems which connect brains directly to machines or computers for communication. BMI-controlled prosthetic devices use algorithms to decode neuronal recordings into movement commands. These algorithms operate using models of how recorded neuronal signals relate to desired movements, called models of tuning. Models of tuning have typically been linear in prior work, due to the simplicity and speed of the algorithms used with them. Neuronal tuning has been shown to change slowly over time, but most prior work does not adapt tuning models to these changes. Furthermore, extracellular electrical recordings of neurons' action potentials slowly change over time, impairing the preprocessing step of spike-sorting, during which the neurons responsible for recorded action potentials are identified.

This dissertation presents a non-linear adaptive Bayesian filter and an adaptive spike-sorting method for BMI decoding. The adaptive filter consists of the n-th order unscented Kalman filter and Bayesian regression self-training updates. The unscented Kalman filter estimates desired prosthetic movements using a non-linear model of tuning as its observation model. The model is quadratic, with terms for position, velocity, distance from center of workspace, and velocity magnitude. The tuning model relates neuronal activity to movements at multiple time offsets simultaneously, and the movement model of the filter is an order-n autoregressive model.

To adapt the tuning model parameters to changes in the brain, Bayesian regression self-training updates are performed periodically. Tuning model parameters are stored as probability distributions instead of point estimates. Bayesian regression uses the previous model parameters as priors and calculates the posteriors of the regression between filter outputs, which are assumed to be the desired movements, and neuronal recordings.
Before each update, filter outputs are smoothed using a Kalman smoother, and tuning model parameters are passed through a transition model describing how parameters change over time. Two variants of Bayesian regression are presented: one uses a joint distribution for the model parameters, which allows analytical inference, and the other uses a more flexible factorized distribution that requires approximate inference using variational Bayes.

To adapt spike-sorting parameters to changes in spike waveforms, variational Bayesian Gaussian mixture clustering updates are used to update the waveform clustering from which these parameters are calculated. This Bayesian extension of expectation-maximization clustering uses the previous clustering parameters as priors and computes the new parameters as posteriors. The use of priors allows tracking of clustering parameters over time and facilitates fast convergence.

To evaluate the proposed methods, experiments were performed with 3 Rhesus monkeys implanted with micro-wire electrode arrays in arm-related areas of the cortex. Off-line reconstructions and on-line, closed-loop experiments with brain control show that the n-th order unscented Kalman filter is more accurate than previous linear methods. Closed-loop experiments over 29 days show that Bayesian regression self-training helps maintain control accuracy. Experiments on synthetic data show that Bayesian regression self-training can be applied to other tracking problems with changing observation models. Bayesian clustering updates on synthetic and neuronal data demonstrate tracking of cluster and waveform changes. These results indicate the proposed methods improve the accuracy and robustness of BMIs for prosthetic devices, bringing BMI-controlled prosthetics closer to clinical use. / Dissertation
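The self-training idea of carrying each update's posterior forward as the next update's prior can be illustrated with conjugate Bayesian linear regression with known noise variance. This is a simplified stand-in for the dissertation's tuning-model updates: the model, dimensions and noise levels below are invented, and the real method uses smoothed filter outputs rather than ground-truth targets:

```python
import numpy as np

def bayes_regress(mu0, S0, X, y, noise_var):
    """Posterior of weights w ~ N(mu0, S0) given y = X @ w + N(0, noise_var)."""
    S0_inv = np.linalg.inv(S0)
    Sn = np.linalg.inv(S0_inv + X.T @ X / noise_var)   # posterior covariance
    mun = Sn @ (S0_inv @ mu0 + X.T @ y / noise_var)    # posterior mean
    return mun, Sn

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0])                         # "tuning" to recover
mu, S = np.zeros(2), 10.0 * np.eye(2)                  # broad initial prior

# Sequential self-training style updates: each batch's posterior becomes
# the next batch's prior, so the model can track slowly changing tuning.
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + rng.normal(scale=0.3, size=50)
    mu, S = bayes_regress(mu, S, X, y, noise_var=0.09)

print("weight estimation error:", np.linalg.norm(mu - w_true))
```

Storing `(mu, S)` rather than a point estimate is what lets the update act as a prior; inflating `S` between batches (the parameter transition model) would let the weights drift to follow non-stationary tuning.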
439

Filtering of thin objects : applications to vascular image analysis

Tankyevych, Olena 19 October 2010 (has links) (PDF)
The motivation of this work is the filtering of elongated curvilinear objects in digital images. Their narrowness makes them difficult to detect, and they are prone to disconnections due to noise, image acquisition artefacts and occlusions by other objects. This work focuses on thin object detection and linkage. For these purposes, a hybrid filtering method combining second-order derivative-based and morphological linear filters is proposed within the framework of scale-space theory. The theory of spatially-variant morphological filters is discussed and efficient algorithms are presented. From the application point of view, our work is motivated by the diagnosis, treatment planning and follow-up of vascular diseases. The first application aims at the assessment of arteriovenous malformations (AVM) of the cerebral vasculature. The small size and complexity of the vascular structures, coupled with noise, image acquisition artefacts and blood signal heterogeneity, make the analysis of such data a challenging task. This part of the work focuses on cerebral angiographic image enhancement, segmentation and vascular network analysis, with the final purpose of assisting the study of cerebral AVMs. The second medical application concerns the processing of low dose X-ray images used in interventional radiology therapies that observe the insertion of guide-wires into the vascular system of patients. Such procedures are used in aneurysm treatment, tumour embolization and other clinical procedures. Due to the low signal-to-noise ratio of such data, guide-wire detection is needed for their visualization and reconstruction. Here, we compare the performance of several line detection algorithms, with the purpose of selecting the most promising methods for this medical application.
440

Jádrové metody v částicových filtrech / Kernel Methods in Particle Filtering

Coufal, David January 2018 (has links)
The thesis deals with the use of kernel density estimates in particle filtering. In particular, it examines the convergence of the kernel density estimates to the filtering densities. The estimates are constructed on the basis of an output from particle filtering. It is proved theoretically that using the standard kernel density estimation methodology is effective in the context of particle filtering, although particle filtering does not produce random samples from the filtering densities. The main theoretical results are: 1) specification of the upper bounds on the MISE error of the estimates of the filtering densities and their partial derivatives; 2) specification of the related lower bounds; and 3) providing a suitable tool for checking persistence of the Sobolev character of the filtering densities over time. In addition, the thesis focuses on designing kernels suitable for practical use.
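The basic construction the thesis analyzes, a kernel density estimate built from particle filter output, can be sketched as follows. This is a generic Gaussian KDE with Silverman's rule-of-thumb bandwidth; the particle cloud here is simulated from a known Gaussian rather than produced by an actual filter:

```python
import numpy as np

def kde(samples, x, h=None):
    """Gaussian kernel density estimate of `samples`, evaluated at points x."""
    n = len(samples)
    if h is None:                        # Silverman's rule-of-thumb bandwidth
        h = 1.06 * samples.std() * n ** (-1 / 5)
    diffs = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
particles = rng.normal(2.0, 1.0, 5000)   # stand-in for resampled filter output
grid = np.linspace(-2.0, 6.0, 81)        # evaluation grid, spacing 0.1
density = kde(particles, grid)

# Sanity checks: unit mass and a mode near the true mean.
print("mass:", density.sum() * 0.1)
print("mode location:", grid[np.argmax(density)])
```

The thesis's point is that such estimates remain well-behaved even though resampled particles are not i.i.d. draws from the filtering density, which is what the MISE bounds make precise.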
