621

Achieving Scalable, Exhaustive Network Data Processing by Exploiting Parallelism

Mawji, Afzal January 2004 (has links)
Telecommunications companies (telcos) and Internet Service Providers (ISPs) monitor the traffic passing through their networks for the purposes of network evaluation and planning for future growth. Most monitoring techniques currently use a form of packet sampling. However, exhaustive monitoring is a preferable solution because it ensures accurate traffic characterization and also allows encoding operations, such as compression and encryption, to be performed. To overcome the very high computational cost of exhaustive monitoring and encoding of data, this thesis suggests exploiting parallelism. By utilizing a parallel cluster in conjunction with load balancing techniques, a simulation is created to distribute the load across the parallel processors. It is shown that a very scalable system, capable of supporting a fairly high data rate, can potentially be designed and implemented. A complete system is then implemented in the form of a transparent Ethernet bridge, ensuring that the system can be deployed into a network without any change to the network. The system focuses its encoding efforts on obtaining the maximum compression rate and, to that end, utilizes the concept of streams, which separates data packets into individual, correlated flows whose redundancy can be removed through compression. Experiments show that compression rates are favourable and confirm good throughput and high scalability.
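To make the stream concept concrete, the following is a minimal sketch of per-flow compression, assuming packets are grouped by their 5-tuple; the function names and packet fields are illustrative, not the thesis's implementation:

```python
import zlib
from collections import defaultdict

# One compressor per flow: redundancy within a correlated stream is
# exploited rather than diluted across unrelated traffic.
flows = defaultdict(lambda: zlib.compressobj(level=9))

def compress_packet(src, dst, sport, dport, proto, payload: bytes) -> bytes:
    # The 5-tuple identifies the flow ("stream") the packet belongs to.
    comp = flows[(src, dst, sport, dport, proto)]
    # Z_SYNC_FLUSH emits each packet's compressed bytes immediately while
    # the compressor retains its history for later packets in the flow.
    return comp.compress(payload) + comp.flush(zlib.Z_SYNC_FLUSH)
```

In a parallel deployment, the same 5-tuple hash could also serve as the load-balancing key, keeping each flow pinned to one processor so that its compression history stays local.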
622

Dynamic Factored Particle Filtering for Context-Specific Correlations

Mostinski, Dimitri 03 May 2007 (has links)
In order to control any system, one needs to know the system's current state. In many real-world scenarios the state of the system cannot be determined with certainty because sensors are noisy or simply missing. In such cases one must use probabilistic inference techniques to compute the likely states of the system, and because such cases are common, the field of Artificial Intelligence offers many techniques to choose from. Formally, we must compute a probability distribution function over all possible states. Doing this exactly is difficult because the number of states is exponential in the number of variables in the system and because the joint PDF may not have a closed form. Many approximation techniques have been developed over the years, but none ideally suited the problem we faced. Particle filtering is a popular scheme that approximates the joint PDF over the variables in the system by a set of weighted samples. It works even when the joint PDF has no closed form, and the size of the sample can be adjusted to trade off accuracy for computation time. However, with many variables the size of the sample required for a good approximation can still become prohibitively large. Factored particle filtering uses the structure of variable dependencies to split the problem into many smaller subproblems and scales better if such decomposition is possible. However, our problem was unusual because some normally independent variables would become strongly correlated for short periods of time. This dynamically-changing dependency structure was not handled effectively by existing techniques. Considering variables to be always correlated meant the problem did not scale; considering them to be always independent introduced errors too large to tolerate. It was necessary to develop an approach that would utilize variables' independence whenever possible, but not introduce large errors when variables become correlated. We have developed a new technique for monitoring the state of the system for a class of systems with context-specific correlations. It is based on the idea of caching the context in which correlations arise and otherwise keeping the variables independent. Our evaluation shows that our technique outperforms existing techniques and is the first viable solution for the class of problems we consider.
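For readers unfamiliar with the scheme, a minimal bootstrap particle filter for a toy one-dimensional random-walk model is sketched below; this is the plain (unfactored) algorithm, not the context-specific technique the thesis develops:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=1000):
    """Approximate the posterior over a 1-D random-walk state observed
    in unit-variance Gaussian noise (an illustrative toy model)."""
    x = rng.normal(0.0, 1.0, n_particles)                 # initial particle cloud
    estimates = []
    for obs in observations:
        x = x + rng.normal(0.0, 0.5, n_particles)         # propagate each particle
        w = np.exp(-0.5 * (obs - x) ** 2)                 # weight by likelihood p(y|x)
        w /= w.sum()
        estimates.append(np.dot(w, x))                    # weighted-mean state estimate
        x = x[rng.choice(n_particles, n_particles, p=w)]  # resample against degeneracy
    return np.array(estimates)
```

The sample size n_particles is the knob that trades accuracy for computation time; with many state variables it must grow quickly, which is what motivates the factored variants discussed above.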
623

Rendering Antialiased Shadows using Warped Variance Shadow Maps

Lauritzen, Andrew Timothy January 2008 (has links)
Shadows contribute significantly to the perceived realism of an image, and provide an important depth cue. Rendering high quality, antialiased shadows efficiently is a difficult problem. To antialias shadows, it is necessary to compute partial visibilities, but computing these visibilities using existing approaches is often too slow for interactive applications. Shadow maps are a widely used technique for real-time shadow rendering. One major drawback of shadow maps is aliasing, because the shadow map data cannot be filtered in the same way as colour textures. In this thesis, I present variance shadow maps (VSMs). Variance shadow maps use a linear representation of the depth distributions in the shadow map, which enables the use of standard linear texture filtering algorithms. Thus VSMs can address the problem of shadow aliasing using the same highly tuned mechanisms that are available for colour images. Given the mean and variance of the depth distribution, Chebyshev's inequality provides an upper bound on the fraction of a shaded fragment that is occluded, and I show that this bound often provides a good approximation to the true partial occlusion. For more difficult cases, I show that warping the depth distribution can produce multiple bounds, some tighter than others. Based on this insight, I present layered variance shadow maps, a scalable generalization of variance shadow maps that partitions the depth distribution into multiple segments. This reduces or eliminates "light bleeding", an artifact that can appear when using the simpler version of variance shadow maps. Additionally, I demonstrate exponential variance shadow maps, which combine moments computed from two exponentially-warped depth distributions. Using this approach, high quality results are produced at a fraction of the storage cost of layered variance shadow maps. These algorithms are easy to implement on current graphics hardware and provide efficient, scalable solutions to the problem of shadow map aliasing.
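The core VSM visibility computation is compact enough to sketch. Given the filtered first and second depth moments from the shadow map, the one-sided Chebyshev (Cantelli) inequality bounds the lit fraction of the filter region; this is a CPU-side illustration of arithmetic that would normally run in a pixel shader:

```python
def vsm_visibility(m1, m2, t, min_variance=1e-6):
    """Upper bound on the fraction of the filter region lit at depth t.

    m1, m2: filtered moments E[d] and E[d^2] fetched from the shadow map.
    t: depth of the fragment being shaded, in the light's space.
    """
    mu = m1
    var = max(m2 - m1 * m1, min_variance)  # clamp variance for numerical robustness
    if t <= mu:
        return 1.0                         # not beyond the mean occluder: fully lit
    # Cantelli's inequality: P(d >= t) <= var / (var + (t - mu)^2)
    return var / (var + (t - mu) ** 2)
```

Because m1 and m2 are linear in the depth distribution, they can be mipmapped and filtered exactly like colour textures, which is the property the thesis exploits.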
624

A Multi-scale Stochastic Filter Based Approach to Inverse Scattering for 3D Ultrasound Soft Tissue Characterization

Tsui, Patrick Pak Chuen January 2009 (has links)
The goal of this research is to achieve accurate characterization of multi-layered soft tissues in three dimensions using focused ultrasound. The characterization of the acoustic parameters of each tissue layer is formulated as recursive processes of forward- and inverse-scattering. Forward scattering deals with the modeling of focused ultrasound wave propagation in multi-layered tissues and the computation of the focused wave amplitudes in the tissues, based on the acoustic parameters of the tissue as generated by inverse scattering. The model for mapping the tissue acoustic parameters to focused waves is highly nonlinear and stochastic. In addition, solving (or inverting) the model to obtain tissue acoustic parameters is an ill-posed problem. Therefore, a nonlinear stochastic inverse scattering method is proposed such that neither linearization nor mathematical inversion of the model is required. Inverse scattering aims to estimate the tissue acoustic parameters based on the forward scattering model and ultrasound measurements of the tissues. A multi-scale stochastic filter (MSF) is proposed to perform inverse scattering. MSF generates a set of tissue acoustic parameters, which are then mapped into focused wave amplitudes in the multi-layered tissues by forward scattering. The tissue acoustic parameters are weighted by comparing their focused wave amplitudes to the actual ultrasound measurements. The weighted parameters are used to estimate a weighted Gaussian mixture as the posterior probability density function (PDF) of the parameters. This PDF is optimized to achieve minimum estimation error variance in the sense of the posterior Cramer-Rao bound. The optimized posterior PDF is used to produce minimum mean-square-error estimates of the tissue acoustic parameters. As a result, both the estimation error and the uncertainty of the parameters are minimized. PDF optimization is formulated based on a novel multi-scale PDF analysis framework. This framework is founded on the analogy between PDFs and analog (or digital) signals. PDFs and signals are similar in the sense that they represent characteristics of variables in their respective domains, except that PDFs are subject to additional constraints. It is therefore reasonable to consider a PDF as a signal subject to amplitude constraints, and to apply signal processing techniques to analyze it. The multi-scale PDF analysis framework is proposed to recursively decompose an arbitrary PDF from its fine to coarse scales. The recursive decompositions are designed to ensure that requirements such as the PDF constraints, zero phase shift, and non-creation of artifacts are satisfied. The relationship between the PDFs at consecutive scales is derived so that the PDF optimization process can recursively reconstruct the posterior PDF from its coarse to fine scales. At each scale, PDF reconstruction aims to reduce the variances of the posterior PDF's Gaussian components, thereby increasing confidence in the estimate. The overall posterior PDF variance reduction is guided by the posterior Cramer-Rao bound. A series of experiments is conducted to investigate the performance of the proposed method on ultrasound multi-layered soft tissue characterization. Multi-layered tissue phantoms that emulate ocular components of the eye are fabricated as test subjects. Experimental results confirm that the proposed MSF inverse scattering approach is well suited for three-dimensional ultrasound tissue characterization. In addition, performance comparisons between MSF and a state-of-the-art nonlinear stochastic filter are conducted. Results show that MSF is more accurate and less computationally intensive than the state-of-the-art filter.
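As a small illustration of the final estimation step, the MMSE estimate of a weighted Gaussian mixture posterior is the weighted sum of the component means, with a total covariance combining within- and between-component spread; this generic sketch uses illustrative variable names, not the thesis's notation:

```python
import numpy as np

def mixture_mmse(weights, means, covs):
    """MMSE estimate of p(x) = sum_i w_i N(x; mu_i, P_i)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                   # normalize mixture weights
    mus = np.asarray(means)                        # shape (k, d)
    x_hat = w @ mus                                # weighted mean = MMSE estimate
    # Law of total variance: within-component + between-component parts.
    diffs = mus - x_hat
    P = sum(wi * (Pi + np.outer(di, di)) for wi, Pi, di in zip(w, covs, diffs))
    return x_hat, P
```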
625

Intuitive Teleoperation of an Intelligent Robotic System Using Low-Cost 6-DOF Motion Capture

Gagne, Jonathan January 2011 (has links)
There is currently a wide variety of six degree-of-freedom (6-DOF) motion capture technologies available. However, these systems tend to be prohibitively expensive. A software system was developed to provide 6-DOF motion capture using the Nintendo Wii remote's (wiimote) sensors, an infrared beacon, and a novel hierarchical linear-quaternion Kalman filter. The software is made freely available, and the hardware costs less than one hundred dollars. Using this motion capture software, a robotic control system was developed to teleoperate a 6-DOF robotic manipulator via the operator's natural hand movements. The teleoperation system requires calibration of the wiimote's infrared cameras to obtain an estimate of the wiimote's 6-DOF pose. However, since the raw images from the wiimote's infrared camera are not available, a novel camera-calibration method was developed to obtain the camera's intrinsic parameters, which are used to obtain a low-accuracy estimate of the 6-DOF pose. By fusing this low-accuracy pose estimate with accelerometer and gyroscope measurements, an accurate estimate of the 6-DOF pose is obtained for teleoperation. Preliminary testing suggests that the motion capture system has an accuracy of less than a millimetre in position and less than one degree in attitude. Furthermore, whole-system tests demonstrate that the teleoperation system is capable of controlling the end effector of a robotic manipulator to match the pose of the wiimote. Since this system can provide 6-DOF motion capture at a fraction of the cost of traditional methods, it has wide applicability in the field of robotics and as a 6-DOF human input device to control 3D virtual computer environments.
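A rough sketch of the fusion idea follows: integrate gyroscope rates for a smooth short-term attitude, then blend in the drift-free but noisier camera-derived pose. The complementary blend below is a simplification of the thesis's hierarchical linear-quaternion Kalman filter, and the gain value is an illustrative assumption:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def integrate_gyro(q, omega, dt):
    """First-order propagation of attitude q by body rates omega (rad/s)."""
    dq = np.concatenate(([1.0], 0.5 * np.asarray(omega) * dt))  # small-angle step
    q = quat_mul(q, dq)
    return q / np.linalg.norm(q)

def fuse_attitude(q_gyro, q_camera, alpha=0.02):
    """Normalized linear blend (nlerp); assumes both quaternions lie in the
    same hemisphere. A small alpha trusts the gyro in the short term while
    letting the camera-derived pose slowly cancel gyro drift."""
    q = (1.0 - alpha) * q_gyro + alpha * q_camera
    return q / np.linalg.norm(q)
```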
626

Visible relations in online communities : modeling and using social networks

Webster, Andrew 21 September 2007 (has links)
The Internet represents a unique opportunity for people to interact with each other across time and space, and online communities existed long before the Internet became entrenched in everyday life. There are two inherent challenges that online communities continue to contend with: motivating participation and organizing information. An online community's success or failure rests on the content generated by its users. Specifically, users need to continually participate by contributing new content and organizing existing content for others to be attracted and retained. I propose that both participation and organization can be enhanced if users have an explicit awareness of the implicit social network which results from their online interactions. My approach makes this normally "hidden" social network visible and shows users that these intangible relations have an impact on satisfying their information needs, and vice versa. That is, users can more readily situate their information needs within social processes, understanding that the value of the information they receive and give both influences and is influenced by the mostly incidental relations they have formed with others. First, I describe how to model a social network within an online discussion forum and visualize the resulting relationships in a way that motivates participation. Second, I show that social networks can also be modeled to generate recommendations of information items and that, through an interactive visualization, users can make direct adjustments to the model in order to improve their personal recommendations. I conclude that these modeling and visualization techniques are beneficial to online communities, as their social capital is enhanced by "weaving" users more tightly together.
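As a sketch of the first step, a weighted, directed social network can be derived from reply interactions in a discussion forum. The data layout below is an illustrative assumption, not the thesis's schema:

```python
from collections import Counter

def build_reply_network(threads):
    """Each reply adds weight to a directed edge from the replier to the
    author being replied to; edge weights thus accumulate the incidental
    relations formed through ordinary forum participation."""
    edges = Counter()
    for thread in threads:
        for author, replied_to in thread:          # (author, replied_to) pairs
            if replied_to is not None and replied_to != author:
                edges[(author, replied_to)] += 1
    return edges                                    # {(u, v): interaction count}

threads = [[("ann", None), ("bob", "ann"), ("ann", "bob"), ("cat", "ann")]]
print(build_reply_network(threads))
# Counter({('bob', 'ann'): 1, ('ann', 'bob'): 1, ('cat', 'ann'): 1})
```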
627

Networked Control System Design and Parameter Estimation

Yu, Bo 29 September 2008 (has links)
Networked control systems (NCSs) are a kind of distributed control system in which the data between control components are exchanged via communication networks. Because of the attractive advantages of NCSs, such as reduced system wiring, low weight, and ease of system diagnosis and maintenance, research on NCSs has received much attention in recent years. The first part of the thesis (Chapters 2-4) is devoted to designing new controllers for NCSs by incorporating the network-induced delays. The second part (Chapters 5-6) conducts research on filtering of multirate systems and identification of Hammerstein systems.

Network-induced delays exist in both sensor-to-controller (S-C) and controller-to-actuator (C-A) links. A novel two-mode-dependent control scheme is proposed, in which the to-be-designed controller depends on both S-C and C-A delays. The resulting closed-loop system is a special jump linear system. Conditions for stochastic stability are then obtained in terms of a set of linear matrix inequalities (LMIs) with nonconvex constraints, which can be efficiently solved by a sequential LMI optimization algorithm. Further, the control synthesis problem for the NCSs is considered. Definitions of H2 and H∞ norms for the special system are first proposed, and plant uncertainties are considered in the design. Finally, the robust mixed H2/H∞ control problem is solved under the framework of LMIs.

To compensate for both S-C and C-A delays modeled by Markov chains, the generalized predictive control method is modified to choose a certain predicted future control signal as the current control effort on the actuator node whenever the control signal is delayed. Further, stability criteria in terms of LMIs are provided to check the system stability. The proposed method is also tested on an experimental hydraulic position control system.

Multirate systems exist in many practical applications where different sampling rates co-exist in the same system. The l2-l∞ filtering problem for multirate systems is considered in the thesis. By using the lifting technique, the system is first transformed into a linear time-invariant one, and the filter design is then formulated as an optimization problem which can be solved using LMI techniques.

The Hammerstein model consists of a static nonlinear block followed in series by a linear dynamic system, and finds many applications in different areas. New switching sequences to handle two-segment nonlinearities are proposed in this thesis. This leads to fewer parameters to be estimated and thus reduces the computational cost. Further, a stochastic gradient algorithm, based on the idea of replacing the unmeasurable terms with their estimates, is developed to identify the Hammerstein model with two-segment nonlinearities.

Finally, several open problems are listed as future research directions.
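To illustrate the Hammerstein identification problem from the last technical paragraph, the sketch below simulates a two-segment nonlinearity followed by a short FIR linear block and identifies it with a normalized stochastic gradient, using the standard over-parameterized linear-in-parameters form. The thesis's switching sequences reduce the parameter count further; this sketch does not reproduce that refinement:

```python
import numpy as np

rng = np.random.default_rng(1)

# True system: two-segment static nonlinearity, then a 2-tap FIR filter.
a_true, b_true = 2.0, 0.5                  # slopes for u >= 0 and u < 0
g_true = np.array([1.0, 0.4])              # linear dynamic block

u = rng.normal(0.0, 1.0, 20000)
x = np.where(u >= 0, a_true * u, b_true * u)
y = np.convolve(x, g_true)[:len(u)] + 0.05 * rng.normal(size=len(u))

# Linear-in-parameters form:
# y(t) = g0*a*u+(t) + g1*a*u+(t-1) + g0*b*u-(t) + g1*b*u-(t-1)
up, un = np.maximum(u, 0.0), np.minimum(u, 0.0)
theta, r = np.zeros(4), 1.0
for t in range(1, len(u)):
    phi = np.array([up[t], up[t - 1], un[t], un[t - 1]])
    r += phi @ phi                           # normalizing gain
    theta += phi * (y[t] - phi @ theta) / r  # stochastic gradient update
print(theta)  # drifts toward [g0*a, g1*a, g0*b, g1*b] = [2.0, 0.8, 0.5, 0.2]
```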
628

Structural Estimation Using Sequential Monte Carlo Methods

Chen, Hao January 2011 (has links)
This dissertation introduces a new sequential Monte Carlo (SMC) based estimation framework for structural models used in macroeconomics and industrial organization. Current Markov chain Monte Carlo (MCMC) estimation methods for structural models suffer from slow Markov chain convergence, which means the parameter and state spaces of interest might not be properly explored unless huge numbers of samples are simulated. This can lead to insurmountable computational burdens in estimating structural models that are expensive to solve. In contrast, SMC methods rely on the principle of sequential importance sampling to jointly evolve simulated particles, thus bypassing the dependence on Markov chain convergence altogether. This dissertation explores the feasibility and potential benefits of estimating structural models using SMC based methods.

Chapter 1 casts the structural estimation problem in the form of inference for hidden Markov models and demonstrates it with a simple growth model.

Chapter 2 presents the key ingredients, both conceptual and theoretical, of successful SMC parameter estimation strategies in the context of structural economic models.

Chapter 3, based on Chen, Petralia and Lopes (2010), develops SMC estimation methods for dynamic stochastic general equilibrium (DSGE) models. SMC algorithms allow simultaneous filtering of time-varying state vectors and estimation of fixed parameters. We first establish the empirical feasibility of the full SMC approach by comparing estimation results from MCMC batch estimation and SMC on-line estimation on a simple neoclassical growth model. We then estimate a large-scale DSGE model for the Euro area developed in Smets and Wouters (2003) with a full SMC approach, and revisit the ongoing debate between the merits of reduced-form and structural models in the macroeconomics context by performing sequential model assessment between the DSGE model and various VAR/BVAR models.

Chapter 4 proposes an SMC estimation procedure and shows that it readily applies to the estimation of dynamic discrete games with serially correlated endogenous state variables. I apply this estimation procedure to a dynamic oligopolistic game of entry using data from the generic pharmaceutical industry and demonstrate that the proposed SMC method can potentially better explore the parameter posterior space while being more computationally efficient than MCMC estimation. In addition, I show how the unobserved endogenous cost paths can be recovered using particle smoothing, both with and without parameter uncertainty. Parameter estimates obtained using this SMC based method largely concur with earlier findings that the spillover effect from market entry is significant and plays an important role in the generic drug industry, but suggest that it might not be as high as previously thought when full model uncertainty is taken into account during estimation.
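A sketch of the mechanics behind the sequential model assessment mentioned above: at each step, the mean of the unnormalized particle weights estimates the incremental evidence p(y_t | y_1:t-1), and accumulating its logarithm yields the log marginal likelihood used for Bayes-factor comparisons between models. The model-specific functions here are assumptions supplied by the user:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_evidence(propagate, likelihood, particles, observations):
    """propagate(x): draw x_t given the particle array x_{t-1};
    likelihood(y, x): p(y_t | x_t) evaluated per particle."""
    logml = 0.0
    n = len(particles)
    for y in observations:
        particles = propagate(particles)
        w = likelihood(y, particles)
        logml += np.log(w.mean())          # incremental evidence p(y_t | y_1:t-1)
        particles = particles[rng.choice(n, n, p=w / w.sum())]  # resample
    return logml
```

Running log_evidence under two competing models on the same observation stream gives a running log Bayes factor, a natural basis for the kind of sequential DSGE-versus-VAR assessment described in Chapter 3.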
629

Online Learning of Non-Stationary Networks, with Application to Financial Data

Hongo, Yasunori January 2012 (has links)
In this paper, we propose a new learning algorithm for non-stationary Dynamic Bayesian Networks (DBNs). Although a number of effective learning algorithms for non-stationary DBNs have previously been proposed and applied in Signal Processing and Computational Biology, those algorithms are batch learning algorithms that cannot be applied to online time-series data. We therefore propose a learning algorithm based on a Particle Filtering approach, so that it can be applied to online time-series data. To evaluate our algorithm, we apply it to a simulated data set and a real-world financial data set. The results on the simulated data set show that our algorithm estimates the networks accurately and detects changes in their structure. The results on the real-world financial data set reveal several features suggested in previous research, which also implies the effectiveness of our algorithm.
630

Estimation of the Longitudinal and Lateral Velocities of a Vehicle using Extended Kalman Filters

Alvarez, Juan Camilo 20 November 2006 (has links)
Vehicle motion and tire forces have been estimated using extended Kalman filters for many years. The use of extended Kalman filters is primarily motivated by the simultaneous presence of nonlinear dynamics and sensor noise. Two versions of extended Kalman filters are employed in this thesis: one using a deterministic tire-force model and the other using a stochastic tire-force model. Previous literature has focused on linear stochastic tire-force models and on linear deterministic tire-force models. However, it is well known that a nonlinear relationship exists between slip variables and tire-force variables. For this reason, it is suitable to use a nonlinear deterministic tire-force model in the extended Kalman filter, and this is the novel aspect of this work. The objective of this research is to show the improvement of the extended Kalman filter using a nonlinear deterministic tire-force model in comparison to one using a linear stochastic tire-force model. The simulation model is a seven degree-of-freedom bicycle model that includes vertical suspension dynamics but neglects the roll motion. A comparison between the linear stochastic tire-force model and the nonlinear deterministic tire-force model confirms the expected results, and simulation studies on illustrative examples demonstrate good tracking performance.
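For reference, one generic predict/update cycle of an extended Kalman filter is sketched below. In the thesis's setting, the seven degree-of-freedom bicycle model and the chosen tire-force model would supply the nonlinear functions f and h and their Jacobians; this is a textbook sketch, not the thesis's code:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One EKF iteration. f(x,u), h(x): nonlinear process / measurement
    maps; F(x,u), H(x): their Jacobians; Q, R: noise covariances."""
    # Predict: propagate the state through the nonlinear dynamics and
    # the covariance through the local linearization.
    Fk = F(x, u)                           # Jacobian at the prior estimate
    x = f(x, u)
    P = Fk @ P @ Fk.T + Q
    # Update: correct the prediction with the measurement innovation.
    Hk = H(x)
    S = Hk @ P @ Hk.T + R                  # innovation covariance
    K = P @ Hk.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - h(x))                 # state correction
    P = (np.eye(len(x)) - K @ Hk) @ P      # covariance correction
    return x, P
```

Swapping a nonlinear tire-force curve into the process model changes only f and its Jacobian F; the filter structure itself is unchanged, which is the sense in which the thesis's nonlinear deterministic model slots into this standard recursion.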
