21

The Worst-case and Best-case Coverage Problems in Wireless Sensor Networks

Hou, Yung-tsung 10 June 2009 (has links)
Wireless sensor networks support a wide range of applications, such as environmental surveillance, hazard monitoring, traffic control, and other commercial or military uses. The quality of service provided by a sensor network relies on its coverage, i.e., how well an event can be tracked by sensors. This research studies two coverage issues: (1) how to optimally deploy new sensors in order to improve the coverage of an existing network, and (2) how to properly measure coverage when the path is a straight line. The best- and worst-case coverage problems, which relate to the observability of a path, are formulated as computational geometry problems. We prove that a duality exists between the two coverage problems, and then solve them together. The presented new-node placement algorithm is shown to deploy new nodes optimally in polynomial time. However, in some applications, such as highway monitoring and anti-missile interception systems, the trajectory of a target is a straight line, and previous research provides no suitable coverage measure for such paths. This research therefore presents novel algorithms for measuring the coverage of straight-line paths. Based on computational geometry and graph theory, we propose plane-sweep algorithms that find the optimal straight-line paths for both the best-case and worst-case coverage problems in polynomial time. Both mathematical analysis and simulations are used to prove the optimality of our algorithms.
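
To make the worst-case ("maximal breach") measure concrete, here is a minimal sketch that is not taken from the dissertation: it approximates the field with a grid graph and runs a max-min (widest-path) variant of Dijkstra to find a left-to-right crossing that maximizes the minimum distance to any sensor. The exact solution would use the Voronoi diagram of the sensors; the grid resolution, sensor positions, and function names below are illustrative.

```python
import heapq, math

def breach_weight(sensors, width, height, step=1.0):
    """Max over left-to-right grid paths of the min distance to any sensor."""
    nx, ny = int(width / step) + 1, int(height / step) + 1
    def clearance(i, j):
        x, y = i * step, j * step
        return min(math.hypot(x - sx, y - sy) for sx, sy in sensors)
    # best[i][j] = largest achievable bottleneck clearance reaching (i, j)
    best = [[-1.0] * ny for _ in range(nx)]
    heap = []  # max-heap via negated keys, seeded with the left edge
    for j in range(ny):
        c = clearance(0, j)
        best[0][j] = c
        heapq.heappush(heap, (-c, 0, j))
    while heap:
        negc, i, j = heapq.heappop(heap)
        c = -negc
        if i == nx - 1:          # first right-edge pop carries the max bottleneck
            return c
        if c < best[i][j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < nx and 0 <= b < ny:
                nc = min(c, clearance(a, b))
                if nc > best[a][b]:
                    best[a][b] = nc
                    heapq.heappush(heap, (-nc, a, b))
    return 0.0

print(breach_weight([(3, 3), (7, 6)], width=10, height=8))
```
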
22

Probabilistic Error Analysis Models for Nano-Domain VLSI Circuits

Lingasubramanian, Karthikeyan 03 March 2010 (has links)
Technology scaling to the nanometer level has paved the way to realizing multi-dimensional applications in a single product by increasing the density of electronic devices on integrated chips. This has naturally attracted a wide variety of industries, such as medicine, communications, automotive, defense, and even household appliances, to use high-speed multi-functional computing machines. Beyond the advantages of these nano-domain computing devices, their use in safety-centric applications, such as implantable biomedical chips and automobile safety, has immensely increased the need for comprehensive error analysis to enhance their reliability. Moreover, these nano-electronic devices have an increased propensity for transient errors due to extremely small device dimensions and low switching energy. The nature of these transient errors is more probabilistic than deterministic, and so requires probabilistic models for estimation and analysis. In this dissertation, we present comprehensive analytic studies of error behavior in nano-level digital logic circuits using probabilistic reliability models. The work comprises the design of exact probabilistic error models to compute the maximum error over the entire input space in a circuit-specific manner, to study the behavior of transient errors in sequential circuits, and to achieve error mitigation through redundancy techniques. The model for computing maximum error also provides the worst-case input vector, the one with the highest probability of generating an erroneous output, for any given logic circuit. The model for sequential logic, which measures the expected output error probability given a probabilistic input space, accounts for both spatial dependencies and temporal correlations across the logic using a time-evolving causal network. For comprehensive error reduction in logic circuits, temporal, spatial, and hybrid redundancy models are implemented. The temporal redundancy model uses the triple temporal redundancy technique, which applies redundancy in the input space; the spatial redundancy model uses the cascaded triple modular redundancy technique, which applies redundancy in the intermediate signal space; and the hybrid redundancy technique encapsulates both temporal and spatial schemes. All of the above studies are performed on standard benchmark circuits from the ISCAS and MCNC suites. The experimental results capture the various aspects of error behavior in nano-VLSI circuits and demonstrate the efficiency and versatility of the probabilistic error models.
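
A hedged illustration of the spatial-redundancy idea, not the dissertation's probabilistic model: with a perfect majority voter, triple modular redundancy (TMR) maps a per-module error probability p to 3p^2(1-p) + p^3, an improvement whenever p < 1/2. Iterating the formula mimics a multi-level (cascaded) TMR scheme in which each level's modules are themselves TMR blocks.

```python
# Standard TMR reliability arithmetic; the error probabilities are made up.
def tmr_error(p, levels=1):
    """Error probability after `levels` of majority-voted triplication,
    assuming independent module failures and a perfect voter."""
    for _ in range(levels):
        p = 3 * p**2 * (1 - p) + p**3   # at least 2 of the 3 copies wrong
    return p

for p in (0.3, 0.1, 0.01):
    print(f"p={p}: TMR={tmr_error(p):.6f}, 3-level TMR={tmr_error(p, levels=3):.2e}")
```
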
23

Robust optimization for portfolio risk : a revisit of worst-case risk management procedures after Basel III accord.

Özün, Alper January 2012 (has links)
The main purpose of this thesis is to develop methodological and practical improvements to robust portfolio optimization procedures. First, the thesis discusses the drawbacks of classical mean-variance optimization models and examines robust portfolio optimization procedures with CVaR and worst-case CVaR risk models, providing a clear derivation of the robust optimization models from a basic VaR model. For practical purposes, the thesis introduces an open-source software interface called “RobustRisk”, developed to produce empirical evidence for the robust portfolio optimization models. The software, which performs Monte-Carlo simulation and out-of-sample performance evaluation for the portfolio optimization, is demonstrated using hypothetical portfolio data from selected emerging markets. In addition, the performance of the robust portfolio optimization procedures is discussed using empirical evidence from advanced markets in the crisis period. Empirical results show that robust optimization with the worst-case CVaR model outperforms the nominal CVaR model in the crisis period. These results encourage us to construct a forward-looking stress-test procedure based on robust portfolio optimization under regime switches. For this purpose, a Markov chain process is embedded into the robust optimization procedure in order to stress the regime transition matrix. In addition, asset returns, volatilities, the correlation matrix, and the covariance matrix can be stressed under pre-defined scenario expectations. An application is provided for a hypothetical internationally diversified portfolio. The CVaR efficient frontier and the corresponding optimized portfolio weights are obtained under regime-switch scenarios. The research suggests that stressed-CVaR optimization provides a robust and forward-looking stress-test procedure that complies with the regulatory requirements stated in Basel II and CRD regulations.
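
The robust and worst-case CVaR models referred to above extend the scenario-based CVaR linear program of Rockafellar and Uryasev. A minimal sketch of that nominal baseline follows, with synthetic scenario returns; this is not the thesis's "RobustRisk" implementation.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(R, beta=0.95, target_return=None):
    """R: (S scenarios x n assets) return matrix. Minimize CVaR_beta of loss -R @ w."""
    S, n = R.shape
    # decision vector x = [w (n), zeta (1), u (S)]
    c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])
    # u_s >= -r_s.w - zeta  <=>  -r_s.w - zeta - u_s <= 0
    A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    if target_return is not None:   # optional: mean return >= target
        A_ub = np.vstack([A_ub, np.concatenate([-R.mean(axis=0), [0.0], np.zeros(S)])])
        b_ub = np.append(b_ub, -target_return)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
    bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * S   # long-only weights
    res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=bounds)
    return res.x[:n], res.fun   # optimal weights and minimized CVaR of loss

rng = np.random.default_rng(0)
w, cvar = min_cvar_weights(rng.normal(0.001, 0.02, size=(500, 4)), beta=0.95)
print(np.round(w, 3), round(cvar, 4))
```
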
24

Compiler optimization VS WCET : Battle of the ages / Kompilatoroptimering VS WCET

Harrius, Tova, Nordin, Max January 2022 (has links)
A compiler can perform optimization using many different methods. The defence company Saab gave us a mission: to see whether we could optimize their code with the help of the GCC compiler and its optimization flags. For this thesis we conducted a study of the optimization flags with the goal of decreasing the worst-case execution time. The first step in assembling an effective base of flags was reading the flag documentation. We then tested the different flags and analysed them, ending up with four chosen sets that we deemed fit for further discussion and analysis. The results did not live up to our expectations, as we thought the flags would reduce the execution time. In the majority of cases the flags instead gave a small increase in execution time. Only one set, which we called Expensive Optimization, gave us a decrease. With these results we conclude that Saab does not need to change their existing set of optimization flags to optimize their compiler further.
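
The thesis's measurement setup is not reproduced here, but a harness of the kind implied can be sketched as follows. The benchmark file name and the flag sets, including a guess at what the "Expensive Optimization" set might contain, are illustrative; a maximum observed runtime is only a proxy for true WCET, which requires static analysis.

```python
import subprocess, time

# Hypothetical flag sets; only -fexpensive-optimizations is a real GCC flag
# known to match the thesis's "Expensive Optimization" naming by assumption.
FLAG_SETS = {
    "baseline":  ["-O0"],
    "O2":        ["-O2"],
    "size":      ["-Os"],
    "expensive": ["-O2", "-fexpensive-optimizations"],
}

def worst_observed(flags, src="benchmark.c", runs=100):
    """Compile `src` with the given flags, run it repeatedly, and return
    the worst (high-water-mark) wall-clock runtime observed."""
    subprocess.run(["gcc", *flags, "-o", "bench", src], check=True)
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["./bench"], check=True)
        worst = max(worst, time.perf_counter() - start)
    return worst

for name, flags in FLAG_SETS.items():
    print(f"{name:10s} worst of 100 runs: {worst_observed(flags):.6f} s")
```
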
25

Energy Management in Grid-connected Microgrids with On-site Storage Devices

Khodabakhsh, Raheleh 11 1900 (has links)
A growing need for clean and sustainable energy is causing a significant shift in the electricity generation paradigm. In the electricity system of the future, integration of renewable energy sources with smart grid technologies can yield potentially huge economic and environmental benefits, ranging from reduced dependency on fossil fuels and improved efficiency to greater reliability and, eventually, a lower cost of electricity. In this context, microgrids serve as one of the main components of smart grids, with high penetration of renewable resources and modern control strategies. This dissertation is concerned with developing optimal control strategies to manage an energy storage unit in a grid-connected microgrid under uncertainty in electricity demand and prices. Two methods are proposed based on the concept of rolling horizon control, where charge/discharge activities of the storage unit are determined by repeatedly solving an optimization problem over a moving control window. The predicted values of the microgrid's net electricity demand and electricity prices over the control horizon are assumed uncertain. The first formulation is based on scenario-based stochastic conditional value at risk (CVaR) optimization, where the cost function includes the electricity usage cost, battery operation costs, and grid-signal smoothing objectives; Gaussian uncertainty is assumed in both net demand and electricity prices. The second formulation reduces the computations by taking a worst-case CVaR stochastic optimization approach: the uncertainty in demand remains stochastic, but the problem constraints are made robust with respect to price changes in a given range. The optimization problems are initially formulated as mixed integer linear programs (MILP), which are non-convex. Reformulations into convex linear programs, which are easier and faster to solve, are then presented. Simulation results under different operation scenarios demonstrate the effectiveness of the proposed methods. Finally, the energy management problem in a network of grid-connected microgrids is investigated, and a strategy is devised to allocate the resulting net savings/costs of operating the microgrids to the individual microgrids. In the proposed approach, the energy management problem is formulated in a deterministic co-operative game-theoretic framework, treating the group of connected microgrids as a single entity, and the individual savings are distributed based on Shapley value theory. Simulation results demonstrate that this co-operation leads to a higher economic return for individual microgrids compared to the case where each operates independently. Furthermore, it reduces the microgrids' dependency on the utility grid by exchanging power locally. / Thesis / Master of Applied Science (MASc)
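
The Shapley-value allocation mentioned at the end can be made concrete with a small sketch. The savings table below is made up and stands in for the thesis's co-operative energy-management model; each microgrid is credited its average marginal contribution over all join orders.

```python
import math
from itertools import permutations

def shapley_split(players, v):
    """v maps a frozenset coalition to its total savings; returns each
    player's average marginal contribution over all join orders."""
    phi = dict.fromkeys(players, 0.0)
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: x / n_fact for p, x in phi.items()}

# Illustrative (made-up) co-operation savings for three microgrids A, B, C.
savings = {
    frozenset(): 0.0,
    frozenset({"A"}): 10.0, frozenset({"B"}): 20.0, frozenset({"C"}): 5.0,
    frozenset({"A", "B"}): 40.0, frozenset({"A", "C"}): 20.0,
    frozenset({"B", "C"}): 30.0, frozenset({"A", "B", "C"}): 60.0,
}
print(shapley_split(["A", "B", "C"], lambda s: savings[s]))  # sums to 60.0
```
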
26

Influence of Customer Locations on Heuristics and Solutions for the Vehicle Routing Problem

Tilashalski, Melissa Christine 07 July 2023 (has links)
The vehicle routing problem (VRP) determines preferred vehicle routes for visiting multiple customer locations from a depot, based on a defined objective function. The VRP is an NP-hard network optimization problem that is challenging to solve to optimality. Over the past 60 years, multitudes of heuristics and metaheuristics have been developed to reduce the computational burden of solving the VRP. To compare the performance of VRP heuristics, researchers have developed benchmarking datasets. These datasets, however, lack properties found in industry datasets. In this dissertation, we explore how properties of industry datasets influence VRP heuristics and objective functions. In Chapter 2, we quantify and compare features of benchmarking and industry datasets. To determine whether these features influence heuristic performance, we conduct extensive computational runs of three heuristics, Tabu Search, a Genetic Algorithm, and the Clarke-Wright Savings Procedure, on standard and industry datasets. In Chapter 3, we derive worst-case bounds on how VRP objective functions and metrics relate to one another. These bounds depend on properties of the customer locations and illustrate how those locations can influence how routes behave under different routing metrics. Finally, in Chapter 4, we improve two VRP heuristics, the Clarke-Wright Savings Procedure and the Hybrid Genetic Search Algorithm, by developing new enhancements to the algorithms. These enhancements rely on certain properties of the datasets to perform well; thus, the heuristics perform better on specific VRP dataset types. / Doctor of Philosophy / The vehicle routing problem (VRP) creates vehicle routes with the shortest travel distance. The routes determine how vehicles should visit multiple customer locations, to deliver or pick up goods, and return to a depot. While the VRP is simple to state, it is very difficult to solve, even for the most sophisticated algorithms on the best computers. Over the past 60 years, many algorithms have been developed to solve the VRP more easily and quickly. To compare the performance of VRP algorithms, researchers have developed benchmarking datasets. However, these datasets lack properties of datasets found in industry. In this dissertation, we look to bridge the gap between industry and benchmarking datasets by 1) comparing feature differences between the two types of datasets, 2) determining whether dataset differences imply differences in algorithm performance, 3) proving how problem differences influence VRP routes, and 4) enhancing existing VRP algorithms to perform better on specific VRP dataset types.
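
For reference, the Clarke-Wright Savings Procedure named above admits a compact sketch. This is the textbook parallel version with Euclidean distances and a simple capacity check, not the enhanced variant developed in Chapter 4; the coordinates and demands are illustrative.

```python
import math

def clarke_wright(depot, customers, demands, capacity):
    """Parallel savings heuristic; returns routes as lists of customer indices."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    n = len(customers)
    routes = [[i] for i in range(n)]   # start with one route per customer
    where = list(range(n))             # customer -> index of its route
    load = list(demands)
    # savings s(i,j) = d(0,i) + d(0,j) - d(i,j), processed in decreasing order
    savings = sorted(
        ((dist(depot, customers[i]) + dist(depot, customers[j])
          - dist(customers[i], customers[j]), i, j)
         for i in range(n) for j in range(i + 1, n)),
        reverse=True)
    for s, i, j in savings:
        ri, rj = where[i], where[j]
        if ri == rj or s <= 0 or load[ri] + load[rj] > capacity:
            continue
        a, b = routes[ri], routes[rj]
        # merge only at route endpoints; interior customers cannot be joined
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif b[-1] == j and a[0] == i:
            merged = b + a
        elif a[0] == i and b[0] == j:
            merged = a[::-1] + b
        elif a[-1] == i and b[-1] == j:
            merged = a + b[::-1]
        else:
            continue
        load[ri] += load[rj]
        routes[ri], routes[rj] = merged, []
        for c in merged:
            where[c] = ri
    return [r for r in routes if r]

print(clarke_wright((0.0, 0.0), [(2, 3), (5, 1), (6, 4), (1, 7), (4, 6)],
                    demands=[1, 2, 2, 1, 3], capacity=5))
```
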
27

A Scalable, Load-Balancing Data Structure for Highly Dynamic Environments

Foster, Anthony 05 June 2008 (has links)
No description available.
28

Simulation-Based Tolerance Stackup Analysis in Machining

Musa, Rami Adnan 02 September 2003 (has links)
No description available.
29

Robust Control Design and Analysis for Small Fixed-Wing Unmanned Aircraft Systems Using Integral Quadratic Constraints

Palframan, Mark C. 29 July 2016 (has links)
The main contributions of this work are applications of robust control and analysis methods to complex engineering systems, namely, small fixed-wing unmanned aircraft systems (UAS). Multiple path-following controllers for a small fixed-wing Telemaster UAS are presented, including a linear parameter-varying (LPV) controller scheduled over path curvature. The controllers are synthesized based on a lumped path-following and UAS dynamic system, effectively combining the six degree-of-freedom aircraft dynamics with established parallel transport frame virtual vehicle dynamics. The robustness and performance of these controllers are tested in a rigorous MATLAB simulation environment that includes steady winds, turbulence, measurement noise, and delays. After being synthesized off-line, the controllers allow the aircraft to follow prescribed geometrically defined paths bounded by a maximum curvature. The controllers presented within are found to be robust to the disturbances and uncertainties in the simulation environment. A robust analysis framework for mathematical validation of flight control systems is also presented. The framework is specifically developed for the complete uncertainty characterization, quantification, and analysis of small fixed-wing UAS. The analytical approach presented within is based on integral quadratic constraint (IQC) analysis methods and uses linear fractional transformations (LFTs) on uncertainties to represent system models. The IQC approach can handle a wide range of uncertainties, including static and dynamic, linear time-invariant and linear time-varying perturbations. While IQC-based uncertainty analysis has a sound theoretical foundation, it has thus far mostly been applied to academic examples, and there are major challenges when it comes to applying this approach to complex engineering systems, such as UAS. The difficulty mainly lies in appropriately characterizing and quantifying the uncertainties such that the resulting uncertain model is representative of the physical system without being overly conservative, and the associated computational problem is tractable. These challenges are addressed by applying IQC-based analysis tools to analyze the robustness of the Telemaster UAS flight control system. Specifically, uncertainties are characterized and quantified based on mathematical models and flight test data obtained in house for the Telemaster platform and custom autopilot. IQC-based analysis is performed on several time-invariant H∞ controllers along with various sets of uncertainties aimed at providing valuable information for use in controller analysis, controller synthesis, and comparison of multiple controllers. The proposed framework is also transferable to other fixed-wing UAS platforms, effectively taking IQC-based analysis beyond academic examples to practical application in UAS control design and airworthiness certification. IQC-based analysis problems are traditionally solved using convex optimization techniques, which can be slow and memory intensive for large problems. An oracle for discrete-time IQC analysis problems is presented to facilitate the use of a cutting plane algorithm in lieu of convex optimization in order to solve large uncertainty analysis problems relatively quickly, and with reasonable computational effort. The oracle is reformulated to a skew-Hamiltonian/Hamiltonian eigenvalue problem in order to improve the robustness of eigenvalue calculations by eliminating unnecessary matrix multiplications and inverses. 
Furthermore, fast, structure-exploiting eigensolvers can be employed with the skew-Hamiltonian/Hamiltonian oracle to accurately determine critical frequencies when solving IQC problems. Applicable solution algorithms utilizing the IQC oracle are briefly presented, and an example shows that these algorithms can solve large problems significantly faster than convex optimization techniques. Finally, a large complex engineering system is analyzed using the oracle and a cutting-plane algorithm; analysis of the same system on the same computer hardware failed when convex optimization techniques were employed. / Ph. D.
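
IQC analysis generalizes the small-gain theorem, which is the simplest IQC. As a hedged toy illustration of the underlying robustness question, far simpler than the skew-Hamiltonian/Hamiltonian machinery above, a frequency-gridded check certifies an M-Delta feedback loop with norm-bounded uncertainty as robustly stable when the peak gain of M stays below the reciprocal of the uncertainty bound. The state-space data below is made up.

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, n=2000):
    """Crude H-infinity norm estimate of a stable LTI system M(s) =
    C (sI - A)^-1 B + D by gridding the imaginary axis."""
    worst = 0.0
    I = np.eye(A.shape[0])
    for w in np.logspace(-3, 3, n):
        M = C @ np.linalg.solve(1j * w * I - A, B) + D
        worst = max(worst, np.linalg.svd(M, compute_uv=False)[0])
    return worst

# Toy stable plant; eigenvalues of A are -1 and -3.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[0.5, 0.2]])
D = np.array([[0.0]])
gain = hinf_norm_grid(A, B, C, D)
print(f"||M||_inf ~ {gain:.3f}; robust against ||Delta|| <= {1 / gain:.3f}")
```
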
30

An Efficient Knapsack-Based Approach for Calculating the Worst-Case Demand of AVR Tasks

Bijinemula, Sandeep Kumar 01 February 2019 (has links)
Engine-triggered tasks are real-time tasks that are released when the crankshaft arrives at certain positions in its path of rotation. This makes the release rate of these jobs a function of the crankshaft's angular speed and acceleration. In addition, several properties of engine-triggered tasks, such as execution times and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate the worst-case demand of AVR tasks are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem. We then propose a framework to systematically narrow down the search space associated with finding the worst-case demand of AVR tasks. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times on randomly generated task sets, compared to the state-of-the-art technique. / Master of Science / Real-time systems require temporal correctness along with accuracy. This temporal correctness is achieved by assigning a deadline to each task. To ensure that all deadlines are met, it is important to know the processor requirement, also known as demand, of a task over a given interval. For some tasks the demand is not constant; instead, it depends on several external factors, and it then becomes necessary to calculate the worst-case demand. Engine-triggered tasks are activated when the crankshaft in an engine is at certain points in its path of rotation, which makes their activation rate dependent on the angular speed and acceleration of the crankshaft. In addition, several properties of engine-triggered tasks, such as execution times and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate their worst-case demand are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem, together with a framework to systematically narrow down the associated search space. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times on randomly generated task sets, compared to the state-of-the-art technique.
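
For readers unfamiliar with the target problem class, a minimal 0/1 knapsack dynamic program follows. The thesis transforms worst-case AVR demand into a knapsack variant, so the mapping of speed-dependent job releases to items should be read as an analogy rather than the paper's exact formulation.

```python
def knapsack(items, budget):
    """items: list of (weight, value) pairs; returns the maximum total
    value achievable without the total weight exceeding `budget`."""
    best = [0] * (budget + 1)
    for w, v in items:
        for b in range(budget, w - 1, -1):   # reverse scan: each item used once
            best[b] = max(best[b], best[b - w] + v)
    return best[budget]

print(knapsack([(3, 40), (4, 50), (5, 70), (8, 100)], budget=10))  # -> 120
```
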
