221.
Contributions to filtering under randomly delayed observations and additive-multiplicative noise
Allahyani, Seham, January 2017
This thesis deals with the estimation of unobserved variables or states from a time series of noisy observations. Approximate minimum-variance filters are investigated for a class of discrete-time systems with both additive and multiplicative noise, where the measurement might be randomly delayed by one or more sample times. Delays of up to N sample times are modelled using N Bernoulli random variables with values 0 or 1. We seek to minimize variance over a class of filters that are linear in the current measurement (although potentially nonlinear in past measurements) and present a closed-form solution. An interpretation of the multiplicative noise in both the transition and measurement equations is also provided, in terms of filtering under additive noise and stochastic perturbations in the parameters of the state-space system. The filtering algorithm extends to the case where the system has continuous-time state dynamics and discrete-time measurements. The Euler scheme is used to transform the process into a discrete-time state-space system in which the state dynamics have a smaller sampling time than the measurements, and the number of sample times by which the observation is delayed is treated as uncertain and as a fraction of the measurement sampling time. The same problem is considered for nonlinear discrete-time state-space models, where the measurement might be randomly delayed by one sample time; the linearisation error is modelled as an additional, multiplicative source of noise. The algorithms developed are demonstrated throughout with simulated examples.
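The randomly delayed observation model described above can be illustrated with a minimal simulation. This is only a sketch of the measurement model, not the thesis's filter; all parameter names and values (a, q, r, p_delay) are invented for the demonstration.

```python
# Sketch of the randomly delayed measurement model (NOT the thesis's filter).
# Hypothetical parameters: a = state transition coefficient, q/r = process and
# measurement noise standard deviations, p_delay = probability that a
# measurement is delayed by one sample time (the Bernoulli variable).
import random

def simulate(a=0.9, q=0.1, r=0.2, p_delay=0.3, steps=50, seed=7):
    """Return true states and possibly one-step-delayed noisy measurements."""
    rng = random.Random(seed)
    x_prev = x = 1.0
    xs, ys = [], []
    for _ in range(steps):
        x_prev, x = x, a * x + rng.gauss(0.0, q)    # state transition
        gamma = 1 if rng.random() < p_delay else 0   # Bernoulli delay flag
        measured = x_prev if gamma else x            # delayed or current state
        ys.append(measured + rng.gauss(0.0, r))      # additive measurement noise
        xs.append(x)
    return xs, ys

xs, ys = simulate()
```

A filter for this model must account for the fact that each y_k may refer to either x_k or x_{k-1}, which is what the closed-form solution above addresses.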
222.
An Incidence Approach to the Distinct Distances Problem
McLaughlin, Bryce, 01 January 2018
In 1946, Erdős posed the distinct distances problem, which asks for the minimum number of distinct distances that any set of n points in the real plane must realize. Erdős showed that any point set must realize at least Ω(n^(1/2)) distances, but could only provide a construction offering O(n/√(log n)) distances. He conjectured that the true minimum was Ω(n^(1-ε)) for any ε > 0, but that sublinear constructions were possible. The lower bound has been improved over the years, and Erdős' conjecture remained open until 2010, when Larry Guth and Nets Hawk Katz used an incidence-theory approach to show that any point set must realize at least Ω(n/log(n)) distances. In this thesis we explore how incidence theory played a role in this development and expand upon recent work by Adam Sheffer and Cosmin Pohoata, using geometric incidences to achieve bounds on a bipartite variant of the problem. A consequence of our extensions of their work is that the upper bound of O(n/√(log n)) for the original distinct distances problem holds for any point set structured so that half of the n points lie on an algebraic curve of arbitrary degree.
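The quantity at stake can be made concrete with a brute-force count of distinct (squared) distances. The 3x3 integer grid below is a toy instance of the grid construction mentioned above; the comparison set on a parabola is arbitrary and only illustrates that generic point sets realize many more distances.

```python
# Brute-force count of distinct squared distances realized by a point set.
# (Squared distances avoid floating-point issues and give the same count.)
from itertools import combinations

def distinct_squared_distances(points):
    return len({(px - qx) ** 2 + (py - qy) ** 2
                for (px, qx_), in []} if False else
               {(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p, q in combinations(points, 2)})

grid = [(i, j) for i in range(3) for j in range(3)]  # 9 grid points
para = [(i, i * i) for i in range(9)]                # 9 points on a parabola
# The grid realizes only 5 distinct squared distances: {1, 2, 4, 5, 8}.
```

The grid's count grows like n/√(log n), far below the ~n²/2 pairs, which is exactly the gap the lower-bound arguments must close.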
223.
Asymptotic expansions for bounded solutions to semilinear Fuchsian equations
Liu, Xiaochun; Witt, Ingo, January 2001
It is shown that bounded solutions to semilinear elliptic Fuchsian equations obey complete asymptotic expansions in terms of powers and logarithms of the distance to the boundary. For that purpose, Schulze's notion of asymptotic type for conormal asymptotics close to a conical point is refined. This in turn allows one to perform explicit calculations on asymptotic types, modulo the resolution of the spectral problem that determines the singular exponents in the asymptotic expansions.
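For orientation, a conormal asymptotic expansion of the kind referred to above has the generic shape below (r denoting the distance to the conical point; the exponents p_j and logarithmic multiplicities m_j are what the spectral problem determines):

```latex
% Generic form of a conormal asymptotic expansion near a conical point;
% the exponents p_j and multiplicities m_j come from the spectral problem.
u(r,\phi) \sim \sum_{j} \sum_{k=0}^{m_j} c_{jk}(\phi)\, r^{p_j} \log^{k} r
\qquad \text{as } r \to 0^{+}.
```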
224.
Exploring Discrete Cosine Transform for Multi-resolution Analysis
Abedi, Safdar Ali Syed, 10 August 2005
Multi-resolution analysis has become a very popular technique in recent years. Wavelets have been used extensively to perform multi-resolution image expansion and analysis. The DCT, however, has been used to compress images but not for multi-resolution image analysis. This thesis is an attempt to explore the possibilities of using the DCT for multi-resolution image analysis. A naive implementation of the block DCT for multi-resolution expansion has many difficulties that lead to signal distortion; one of the main causes is the blocking artifacts that appear when reconstructing images transformed by the DCT. The new algorithm is based on a line DCT, which eliminates the need for block processing. The line DCT operates on a one-dimensional array formed by cascading the image rows and columns into one transform operation. Several images have been used to test the algorithm at various resolution levels, with the reconstruction mean squared error used as an indicator of the method's success. The proposed algorithm has also been tested against the traditional block DCT.
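The cascading idea can be sketched as follows, assuming an orthonormal DCT-II/DCT-III pair. This is a toy rendition of the line-DCT approach, not the thesis's implementation, and the 2x2 "image" is invented.

```python
# Pure-Python O(N^2) orthonormal DCT-II and its inverse (DCT-III).
# Cascading the image rows into one 1-D signal, transforming it once, and
# dropping high-frequency coefficients gives a coarse resolution.
import math

def dct(x):  # orthonormal DCT-II
    n = len(x)
    return [math.sqrt((1 if k == 0 else 2) / n) *
            sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, v in enumerate(x))
            for k in range(n)]

def idct(coeffs):  # orthonormal DCT-III, inverse of the above
    n = len(coeffs)
    return [sum(math.sqrt((1 if k == 0 else 2) / n) * c *
                math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for k, c in enumerate(coeffs))
            for i in range(n)]

image = [[10.0, 12.0], [14.0, 16.0]]
line = [v for row in image for v in row]        # cascade rows into one signal
coarse = dct(line)[:2] + [0.0] * (len(line) - 2)  # keep low frequencies only
mse = sum((a - b) ** 2 for a, b in zip(idct(coarse), line)) / len(line)
```

Keeping all coefficients reconstructs the signal exactly (MSE near zero); truncation trades reconstruction error for a coarser resolution level.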
225.
An Algorithm to Generate Two-Dimensional Drawings of Conway Algebraic Knots
Tung, Jen-Fu, 01 May 2010
The problem of finding an efficient algorithm to create a two-dimensional embedding of a knot diagram is not an easy one: knots with a large number of crossings typically do not generate nice two-dimensional drawings. This thesis presents an efficient algorithm to generate a knot and to create a nice two-dimensional embedding of it. For the purpose of this thesis, a drawing is “nice” if the number of tangles in the diagram consisting of half-twists is minimal. More specifically, the algorithm generates prime, alternating Conway algebraic knots in O(n) time, where n is the number of crossings in the knot, and derives a precise representation of the knot’s nice drawing in O(n) time (the rendering of the drawing is not O(n)). Central to the algorithm is a special type of rooted binary tree that represents a distinct prime, alternating Conway algebraic knot; each leaf in the tree represents a crossing in the knot. The algorithm first generates the tree and then repeatedly modifies it to reduce the number of its leaves while ensuring that the knot type associated with the tree is not changed. The result is a tree for the knot with a minimum number of leaves. This minimum tree is the basis for deriving a 4-regular plane map that represents the knot embedding and, finally, for drawing the knot’s diagram.
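The tree structure at the heart of the algorithm can be sketched minimally; the knot-type-preserving reduction rules are the thesis's contribution and are not reproduced here. The tree shape below is arbitrary.

```python
# Minimal sketch of the data structure described above: a rooted binary tree
# whose leaves stand for crossings, with an O(n) leaf count via one traversal.
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def count_leaves(node):
    if node.left is None and node.right is None:
        return 1  # a leaf corresponds to one crossing
    return count_leaves(node.left) + count_leaves(node.right)

# An arbitrary tree for a 3-crossing alternating knot.
tree = Node(Node(), Node(Node(), Node()))
```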
226.
Efficient Memory Arrangement Methods and VLSI Implementations for Discrete Fourier and Cosine Transforms
Hsu, Fang-Chii, 24 July 2001
This thesis proposes efficient memory arrangement methods for the implementation of the radix-r multi-dimensional Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT). By using memory instead of registers to buffer and reorder data, hardware complexity is significantly reduced. We use a recursive architecture that requires only one arithmetic-processing element to compute the entire DFT/DCT operation. The algorithm is based on efficient coefficient matrix factorization and data allocation. By exploiting the features of the Kronecker-product representation in the fast algorithm, the multi-dimensional DFT/DCT operation is converted into its corresponding 1-D problem, and the intermediate data are stored in several memory units. In addition to the smaller area, we also propose a method to reduce the power consumption of the DFT/DCT processors.
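The conversion of a multi-dimensional transform into 1-D passes, which underlies the Kronecker-product factorization mentioned above, can be illustrated numerically with a row-column DFT; this is only a mathematical sketch, not the VLSI architecture.

```python
# A 2-D DFT computed as 1-D DFTs along rows, then along columns, matching the
# direct double-sum definition (separability / Kronecker-product structure).
import cmath

def dft(x):  # pure-Python O(N^2) 1-D DFT
    n = len(x)
    return [sum(v * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, v in enumerate(x)) for k in range(n)]

def dft2_rowcol(a):
    rows = [dft(row) for row in a]                      # 1-D pass over rows
    cols = [dft(list(c)) for c in zip(*rows)]           # 1-D pass over columns
    return [list(r) for r in zip(*cols)]                # transpose back

def dft2_direct(a):
    n1, n2 = len(a), len(a[0])
    return [[sum(a[i][j] *
                 cmath.exp(-2j * cmath.pi * (k1 * i / n1 + k2 * j / n2))
                 for i in range(n1) for j in range(n2))
             for k2 in range(n2)]
            for k1 in range(n1)]
```

In hardware, each 1-D pass reuses the same arithmetic-processing element while the intermediate rows/columns live in memory units, which is the arrangement the thesis optimizes.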
227.
Hardware acceleration for conservative parallel discrete event simulation on multi-core systems
Lynch, Elizabeth Whitaker, 07 February 2011
Multi-core architectures are becoming more common and core counts continue to increase. Six- and eight-core chips, such as Intel Gulftown, are currently in production, and many-core chips with dozens of cores, such as the Intel Teraflops 80-core chip, are projected within the next five years. However, adding more cores often does not improve application performance. It would be desirable to take advantage of the multi-core environment to speed up parallel discrete event simulation. The current bottleneck for many parallel simulations is time synchronization; this is especially true for simulations of wireless networks and on-chip networks, which have low lookahead. Message passing is another common simulation bottleneck. To address time synchronization, we have designed hardware at a functional level that performs the time synchronization for parallel discrete event simulation asynchronously and in just a few clock cycles, eliminating the need for global communication with message passing or lock contention for shared memory. This hardware, the Global Synchronization Unit, consists of three register files, each with one entry per core, and is accessed using five new atomic instructions. To reduce the simulation overhead from message passing, we have also designed two independent pieces of hardware at a functional level, the Atomic Shared Heap and Atomic Message Passing, which can be used to perform lock-free, zero-copy message passing on a multi-core system. The impact of these specialized hardware units on the performance of parallel discrete event simulation is assessed and compared to traditional shared-memory techniques.
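The time-synchronization computation that such hardware accelerates can be sketched in software: under conservative synchronization, each core may safely process events up to the minimum, over the other cores, of their earliest event time plus lookahead. The function and numbers below are illustrative only; the register files and atomic instructions of the Global Synchronization Unit are not modelled.

```python
# Conservative-synchronization bound (LBTS-style): core i may process events
# with timestamps below min over j != i of (next_event_time[j] + lookahead[j]).
def safe_time(core, next_event_times, lookaheads):
    return min(t + la
               for i, (t, la) in enumerate(zip(next_event_times, lookaheads))
               if i != core)

times = [5.0, 7.0, 6.0]  # earliest unprocessed event time per core (toy data)
las = [1.0, 1.0, 1.0]    # lookahead per core (toy data)
```

With low lookahead this bound is recomputed very frequently, which is why doing it in a few clock cycles in hardware, rather than via message passing or shared-memory locks, pays off.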
228.
Framework for robust design: a forecast environment using intelligent discrete event simulation
Beisecker, Elise K., 29 March 2012
The US Navy is shifting to power projection from the sea, which stresses the capabilities of its current fleet and exposes the need for a new surface connector. The design of complex systems in the presence of changing requirements, rapidly evolving technologies, and operational uncertainty continues to be a challenge. Furthermore, the design of future naval platforms must take into account the interoperability of a variety of heterogeneous systems and their role in a larger system-of-systems context. To date, methodologies to address these complex interactions and optimize the system at the macro level have lacked clear direction and structure and have largely been applied in an ad hoc fashion. Traditional optimization has centered on individual vehicles with little regard for the impact on the overall system. A key enabler in designing a future connector is the ability to rapidly analyze technologies and perform trade studies using a system-of-systems level approach.
The objective of this work is a process that can quantitatively assess the impacts of new capabilities and vessels at the system-of-systems level. This new methodology must be able to investigate diverse, disruptive technologies acting on multiple elements within the system-of-systems architecture. Illustrated through a test case for a Medium Exploratory Connector (MEC), the method must be capable of capturing the complex interactions between elements and the architecture, and of assessing the impacts of new systems. Following a review of current methods, six gaps were identified, including the need to break the problem into subproblems in order to incorporate a heterogeneous, interacting fleet, dynamic loading, and dynamic routing. For the robust selection of design requirements, analysis must be performed across multiple scenarios, which requires the method to include parametric scenario definition.
The identified gaps are investigated and methods recommended to address them, enabling overall operational analysis across scenarios. Scenarios are fully defined by a scheduled set of demands, distances between locations, and physical characteristics that can be treated as input variables. Introducing matrix manipulation into discrete event simulations enables the abstraction of sub-processes at an object level and reduces the effort required to integrate new assets. Incorporating these linear-algebra principles enables resource management for individual elements and abstraction of decision processes. Although the run time is slightly greater than that of traditional if-then formulations, the gain in data-handling ability enables the abstraction of the loading and routing algorithms.
The loading and routing problems are abstracted, and solution options are developed and compared. Realistic loading of vessels and other assets is needed to capture the cargo delivery capability of the modeled mission. The dynamic loading algorithm is based on the traditional knapsack formulation: a linear program is formulated using the lift and area of the connector as constraints, while the schedule of demands from the scenarios supplies additional constraints and the reward equation. Because available cargo is distributed among cargo sources, an assignment-problem formulation is added to the linear program, requiring the cargo selected to load onto a single connector to be available from a single load point.
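A toy rendition of the loading formulation, with brute-force subset search standing in for the linear program and all numbers invented:

```python
# Choose cargo items for one connector subject to lift (weight) and area
# limits, maximizing delivered reward -- a 2-constraint knapsack.
from itertools import combinations

def load_connector(items, max_lift, max_area):
    """items: list of (reward, weight, area). Returns (best indices, reward)."""
    best, best_reward = (), 0.0
    for r in range(1, len(items) + 1):
        for subset in combinations(range(len(items)), r):
            w = sum(items[i][1] for i in subset)
            a = sum(items[i][2] for i in subset)
            reward = sum(items[i][0] for i in subset)
            if w <= max_lift and a <= max_area and reward > best_reward:
                best, best_reward = subset, reward
    return best, best_reward

cargo = [(10.0, 4.0, 2.0), (7.0, 3.0, 3.0), (5.0, 2.0, 1.0)]
best, reward = load_connector(cargo, max_lift=6.0, max_area=4.0)
```

The single-load-point assignment constraint described above would restrict which item combinations are admissible; it is omitted here for brevity.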
Dynamic routing allows a reconfigurable supply chain to maintain robust and flexible operation in response to changing customer demands and operating environment. Algorithms based on vehicle routing and computer packet routing are compared across five operational scenarios, testing each algorithm's ability to route connectors without introducing additional wait time. Predicting interface wait times from the connectors en route, and reconsidering which interface to use upon arrival, performed consistently, especially when stochastic load times were introduced, and is expandable to large-scale applications. This algorithm selects the quickest load-unload location pairing based on the connectors routed to those locations and the interfaces selected for those connectors. A future connector could have the ability to unload at multiple locations if a single load exceeds the demand at one unload location; this capability is treated as a special case in the calculation of the unload location during routing. To determine the unload locations to visit, a traveling-salesman formulation is added to the dynamic loading algorithm: balancing the cost to travel to and unload at each location against the additional cargo that could be delivered, the order and locations to visit are selected. Predicting the workload at load and unload locations, and routing vessels with reconsideration to handle disturbances, accommodates multiple unload locations and yields a robust and flexible routing algorithm.
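The wait-prediction routing rule can be sketched as follows. The structure (queue length times service time as the wait estimate) and all names and numbers are simplifications invented for illustration, not the dissertation's algorithm.

```python
# Pick the quickest load-unload pairing from predicted waits plus transit time.
def predicted_wait(queued_connectors, service_time):
    # Wait estimate: connectors already routed there, each taking service_time.
    return queued_connectors * service_time

def pick_pair(load_pts, unload_pts, travel):
    """load_pts/unload_pts: {name: (queued_connectors, service_time)};
    travel[(load, unload)] = transit time. Returns the quickest pairing."""
    return min(((l, u) for l in load_pts for u in unload_pts),
               key=lambda p: predicted_wait(*load_pts[p[0]]) +
                             travel[p] +
                             predicted_wait(*unload_pts[p[1]]))

loads = {"pier": (2, 1.0), "ramp": (0, 1.5)}
unloads = {"beach": (1, 2.0), "port": (3, 1.0)}
transit = {("pier", "beach"): 4.0, ("pier", "port"): 2.0,
           ("ramp", "beach"): 3.0, ("ramp", "port"): 5.0}
```

Re-running this selection upon arrival is the "reconsideration" step that keeps the routing robust when load times turn out stochastic.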
The incorporation of matrix manipulation, dynamic loading, and dynamic routing enables a robust investigation of the design requirements for a new connector. The robust process uses shortfall, capturing delays and undelivered cargo, and fuel usage as measures of performance. The design parameters for the MEC, including the number of vessels available and vessel characteristics such as speed and size, were analyzed across four ways of testing the noise space: a single scenario, a selected number of scenarios, full coverage of the noise space, and the feasible noise space, the last of which is defined using uncertainty around scenarios of interest. The number available, maximum lift, maximum area, and SES speed were consistently design drivers, with a trade-off between the number available and size, along with speed. When looking at the feasible space, the relationship between size and number available was strong enough to reverse the preference on the number available, toward fewer and larger ships. Secondary design impacts came from factors that directly affected the time per trip, such as the time between repairs and the time to repair. As the noise sampling moved from four scenarios to full coverage to the feasible space, the option to use interfaces was displaced in importance by the time to load at these locations, and the time to unload at the beach gained importance. This change can be attributed to the reduced number of trips needed under the feasible space: the four scenarios had higher average demand than the feasible-space sampling, making loading options more important there. The choice of noise sampling thus affected the design requirements selected for the MEC, underscoring the importance of a method that investigates future naval assets across multiple scenarios at a system-of-systems level.
229.
On generalizing the multiple discrete-continuous extreme value (MDCEV) model
Castro, Marisol Andrea, 22 February 2013
The overall goal of the dissertation is to contribute to the growing literature on multiple discrete-continuous (MDC) choice models. In MDC choice situations, consumers face two inter-related decisions at a choice instance: which alternative(s) to choose for consumption from a set of available alternatives, and how much of the chosen alternatives to consume. In the recent literature, there is increasing attention on modeling MDC situations based on a rigorous underlying micro-economic utility maximization framework. Among these models, the multiple discrete-continuous extreme value (MDCEV) model (Bhat, 2005, 2008) provides a number of advantages over other models. The primary objective of this dissertation is to extend the MDCEV framework to accommodate more realistic decision-making processes from a behavioral standpoint. The dissertation has two secondary objectives: first, to advance the current operationalization and econometric modeling of MDC choice situations; second, to contribute to the transportation literature by estimating MDC models that provide new insights into individuals' travel decision processes.
The proposed extensions of the MDCEV model include: (1) To formulate and estimate a latent choice set generation model within the MDCEV framework, (2) To develop a random utility-based model formulation that extends the MDCEV model to include multiple linear constraints, and (3) To extend the MDCEV model to relax the assumption of an additively separable utility function. The methodologies developed in this dissertation allow the specification and estimation of complex MDC choice models, and may be viewed as a major advance with the potential to lead to significant breakthroughs in the way MDC choices are structured and implemented. These methodologies provide a more realistic representation of the choice process. The proposed extensions are applied to different empirical contexts within the transportation field, including participation in and travel mileage allocated to non-work activities during various time periods of the day for workers, participation in recreational activities and time allocation for workers, and household expenditures in disaggregate transportation categories. The results from these exercises clearly underline the importance of relaxing some of the assumptions made, not only in the MDCEV model, but in MDC models in general.
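The utility-maximization structure underlying MDCEV-type models, in the additively separable case that the dissertation relaxes, can be sketched with a stylized KKT-based allocation: goods with baseline utilities psi_k and satiation parameters gamma_k share a budget E, and a good is consumed only if its baseline utility exceeds the shadow price of the budget constraint. This mirrors the general textbook logic of such models, not the dissertation's extensions; all parameter values are invented.

```python
# Stylized allocation for U = sum_k gamma_k * psi_k * ln(x_k / gamma_k + 1)
# subject to sum_k x_k = budget. KKT: consumed goods satisfy
# psi_k / (x_k / gamma_k + 1) = lambda, so x_k = gamma_k * (psi_k / lambda - 1).
def mdcev_allocate(psi, gamma, budget):
    order = sorted(range(len(psi)), key=lambda k: psi[k], reverse=True)
    chosen, lam = [], float("inf")
    for k in order:
        trial = chosen + [k]
        lam_trial = (sum(gamma[j] * psi[j] for j in trial) /
                     (budget + sum(gamma[j] for j in trial)))
        if psi[k] > lam_trial:     # good k is consumed at the optimum
            chosen, lam = trial, lam_trial
        else:
            break                  # remaining goods have even lower psi
    x = [0.0] * len(psi)
    for k in chosen:
        x[k] = gamma[k] * (psi[k] / lam - 1.0)
    return x
```

The "multiple discreteness" shows up in which entries of x are zero; relaxing additive separability, as in extension (3), makes the consumed amounts interact across goods.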
230.
Discrete gate sizing and threshold voltage assignment to optimize power under performance constraints
Singh, Jagmohan, August 2013
In today's world, it is becoming increasingly important to design high-performance integrated circuits (ICs) that run at as low a power as possible. Gate sizing and threshold voltage (Vt) assignment are among the major contributors to such power-performance trade-offs in ICs. In fact, ever-increasing design sizes and more aggressive timing requirements make gate sizing and Vt assignment one of the most important CAD problems in physical synthesis. A promising gate sizing optimization algorithm must be scalable to very large designs, able to optimally exploit the large (but finite) number of gate configurations available in a standard cell library through different gate sizes, threshold voltages (Vt), and gate lengths (Lg), and able to handle the non-convex cell delays of modern cell libraries.
The work in this thesis makes use of the research infrastructure made available as part of the ISPD (International Symposium on Physical Design) 2012 Gate Sizing Contest, which addresses the issues encountered in modern gate sizing problems. We present a two-phase optimization approach in which Lagrangian relaxation is used to formulate the optimization problem. In the first phase, the Lagrangian-relaxed subproblem is iteratively solved using a greedy algorithm; in the second phase, a cell downsizing and Vt upscaling heuristic further recovers power from the timing-feasible, power-optimized sizing solution obtained at the end of the first phase. We also propose a multi-core implementation of the first-phase optimizations, which constitute the majority of the total runtime, to take advantage of today's multi-core processors. A speedup of 4 to 9 times over the serial implementation is seen on different benchmarks when run on a two-socket, 6-core machine. Compared to the winner of the ISPD 2012 contest, we further reduce leakage power by 17.21% and runtime by 87.92%, on average, while obtaining feasible sizing solutions on all the benchmark designs.
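The first-phase idea can be sketched on a toy serial path of gates: Lagrangian relaxation turns "minimize power subject to path delay <= T" into per-gate choices minimizing power + lambda * delay, with lambda updated by a subgradient step on the timing violation. The cell options below are invented, not from any real library, and the single-path model is a drastic simplification of a full timing graph.

```python
# Toy Lagrangian-relaxation loop for discrete gate sizing on one serial path.
# Each gate has discrete (power, delay) configurations, e.g. high-Vt slow
# versus low-Vt fast; the greedy inner step solves the relaxed subproblem.
def size_gates(options_per_gate, t_target, steps=100, step_size=0.5):
    lam, best = 0.0, None
    for _ in range(steps):
        # Inner step: each gate picks its Lagrangian-cheapest configuration.
        choice = [min(opts, key=lambda o: o[0] + lam * o[1])
                  for opts in options_per_gate]
        delay = sum(d for _, d in choice)
        if delay <= t_target:               # record best feasible solution
            power = sum(p for p, _ in choice)
            if best is None or power < best[0]:
                best = (power, choice)
        lam = max(0.0, lam + step_size * (delay - t_target))  # subgradient
    return best

gates = [[(1.0, 3.0), (4.0, 1.0)],   # gate 0: (power, delay) options
         [(2.0, 4.0), (6.0, 2.0)]]   # gate 1
```

Tracking the best feasible solution across iterations sidesteps the duality gap of the discrete problem; the thesis's second phase then recovers further power from that solution.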