31

Robust optimization for portfolio risk : a revisit of worst-case risk management procedures after Basel III award.

Özün, Alper January 2012 (has links)
The main purpose of this thesis is to develop methodological and practical improvements to robust portfolio optimization procedures. Firstly, the thesis discusses the drawbacks of classical mean-variance optimization models and examines robust portfolio optimization procedures with CVaR and worst-case CVaR risk models, providing a clear presentation of the derivation of the robust optimization models from a basic VaR model. For practical purposes, the thesis introduces an open-source software interface called “RobustRisk”, developed to produce empirical evidence for the robust portfolio optimization models. The software, which performs Monte-Carlo simulation and out-of-sample performance analysis for the portfolio optimization, is introduced using hypothetical portfolio data from selected emerging markets. In addition, the performance of the robust portfolio optimization procedures is discussed with empirical evidence from advanced markets in the crisis period. Empirical results show that robust optimization with the worst-case CVaR model outperforms the nominal CVaR model in the crisis period. These results encourage us to construct a forward-looking stress-test procedure based on robust portfolio optimization under regime switches. For this purpose, a Markov chain process is embedded into the robust optimization procedure in order to stress the regime transition matrix. In addition, asset returns, volatilities, and the correlation and covariance matrices can be stressed under pre-defined scenario expectations. An application is provided for a hypothetical internationally diversified portfolio. The CVaR efficient frontier and the corresponding optimized portfolio weights are obtained under regime-switch scenarios. The research suggests that stressed-CVaR optimization provides a robust and forward-looking stress-test procedure that complies with the regulatory requirements stated in the Basel II and CRD regulations.
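[Editor's note] For orientation, a minimal sketch of the Rockafellar-Uryasev linear program that underlies CVaR portfolio optimization of this kind; the scenario matrix, the confidence level alpha, and the long-only constraint are illustrative assumptions, not details taken from the thesis or its "RobustRisk" interface:

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(scenario_returns, alpha=0.95):
    """Minimum-CVaR portfolio via the Rockafellar-Uryasev linear program.

    scenario_returns: (N, n) array of simulated asset returns (one row per
    Monte-Carlo scenario). Decision vector x = [w (n), t (1), u (N)], where
    t is the VaR proxy and u_i >= max(0, -r_i @ w - t) are scenario excesses.
    Minimizes t + 1/((1 - alpha) * N) * sum(u), the sample CVaR of the loss.
    """
    N, n = scenario_returns.shape
    c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1.0 - alpha) * N))])
    # u_i >= -r_i @ w - t  <=>  -r_i @ w - t - u_i <= 0
    A_ub = np.hstack([-scenario_returns, -np.ones((N, 1)), -np.eye(N)])
    b_ub = np.zeros(N)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)  # sum w = 1
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * n + [(None, None)] + [(0.0, None)] * N  # long-only
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[:n]  # optimal portfolio weights
```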
32

Compiler optimization VS WCET : Battle of the ages / Kompilatoroptimering VS WCET

Harrius, Tova, Nordin, Max January 2022 (has links)
Optimization by a compiler can be performed with many different methods. The defence company Saab gave us a mission: to see whether we could optimize their code with the help of the GCC compiler and its optimization flags. For this thesis we conducted a study of the optimization flags with the aim of decreasing the worst-case execution time (WCET). The first step in assembling an effective base of flags was reading their documentation. We then tested the different flags and analyzed them, ending up with four chosen sets that we considered worth discussing and analyzing further. The results did not live up to our expectations, as we had thought the flags would reduce the execution time. In the majority of cases the flags gave a small increase in execution time. Only one set gave us a decrease, which we called Expensive Optimization. With these results we conclude that Saab does not need to change its existing set of optimization flags to further optimize its code.
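[Editor's note] As an illustration of the kind of experiment described, a hedged sketch that compiles one source file under different GCC flag sets and times repeated runs; the file name, the flag compositions, and the use of wall-clock maxima (only a crude stand-in for true WCET analysis) are assumptions, not details from the thesis:

```python
import statistics
import subprocess
import time

# Illustrative flag sets; a real study would sweep many documented GCC flags.
FLAG_SETS = {
    "baseline": ["-O2"],
    "candidate": ["-O3", "-funroll-loops"],
}

def benchmark(source="task.c", runs=30):
    """Compile `source` under each flag set, then time repeated executions.

    The maximum observed wall-clock time is only a crude proxy for WCET,
    which properly requires dedicated analysis tools or controlled hardware.
    """
    for name, flags in FLAG_SETS.items():
        binary = f"./task_{name}"
        subprocess.run(["gcc", *flags, source, "-o", binary], check=True)
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run([binary], check=True)
            times.append(time.perf_counter() - start)
        print(f"{name}: max={max(times):.4f}s median={statistics.median(times):.4f}s")
```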
33

Energy Management in Grid-connected Microgrids with On-site Storage Devices

Khodabakhsh, Raheleh 11 1900 (has links)
A growing need for clean and sustainable energy is causing a significant shift in the electricity generation paradigm. In the electricity system of the future, integration of renewable energy sources with smart grid technologies can lead to potentially huge economic and environmental benefits, ranging from reduced dependence on fossil fuels and improved efficiency to greater reliability and, ultimately, a lower cost of electricity. In this context, microgrids serve as one of the main components of smart grids, with high penetration of renewable resources and modern control strategies. This dissertation is concerned with developing optimal control strategies to manage an energy storage unit in a grid-connected microgrid under uncertainty in electricity demand and prices. Two methods are proposed based on the concept of rolling horizon control, where charge/discharge activities of the storage unit are determined by repeatedly solving an optimization problem over a moving control window. The predicted values of the microgrid's net electricity demand and of electricity prices over the control horizon are assumed uncertain. The first formulation of the control is based on scenario-based stochastic conditional value at risk (CVaR) optimization, where the cost function includes the electricity usage cost, battery operation costs, and grid-signal smoothing objectives. Gaussian uncertainty is assumed in both net demand and electricity prices. The second formulation reduces the computations by taking a worst-case CVaR stochastic optimization approach. In this case, the uncertainty in demand is still stochastic, but the problem constraints are made robust with respect to price changes in a given range. The optimization problems are initially formulated as mixed integer linear programs (MILPs), which are non-convex. Reformulations of the optimization problems into convex linear programs are then presented, which are easier and faster to solve. Simulation results under different operation scenarios are presented to demonstrate the effectiveness of the proposed methods. Finally, the energy management problem in a network of grid-connected microgrids is investigated, and a strategy is devised to allocate the resulting net operating savings/costs to the individual microgrids. In the proposed approach, the energy management problem is formulated in a deterministic co-operative game-theoretic framework treating the group of connected microgrids as a single entity, and the individual savings are distributed based on Shapley value theory. Simulation results demonstrate that this co-operation leads to higher economic returns for the individual microgrids compared to the case where each operates independently. Furthermore, it reduces the dependency of the microgrids on the utility grid by exchanging power locally. / Thesis / Master of Applied Science (MASc)
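[Editor's note] As background for the allocation step, a minimal sketch of a Shapley-value computation; the `value` callable and the toy three-player game are hypothetical stand-ins for the microgrids' co-operative cost model, not data from the thesis:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Shapley allocation of a coalition's total savings.

    `value` maps a frozenset of players to that coalition's worth (e.g. the
    joint operating savings of a group of microgrids); value(frozenset())
    must be defined (typically 0).
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):  # size of coalition S not containing p
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(S | {p}) - value(S))
        phi[p] = total
    return phi

# Example with a hypothetical 3-microgrid savings function:
v = {frozenset(): 0, frozenset("A"): 1, frozenset("B"): 2, frozenset("C"): 2,
     frozenset("AB"): 4, frozenset("AC"): 4, frozenset("BC"): 5,
     frozenset("ABC"): 8}
print(shapley_values(["A", "B", "C"], lambda S: v[frozenset(S)]))
```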
34

Impediments to the elimination of child labor : A critical review of child labor policies and laws of Liberia

Okodi, Thomas January 2023 (has links)
Child labor is a pressing issue in Liberia, as it is in many other developing countries. Poverty is a significant driver of child labor in Liberia, as many families rely on the income generated by their children to survive. While the government has developed numerous policy interventions and laws to address the issue, recent reports show that the prevalence of child labor among children aged 5-17 remains very high. This study aims to critically evaluate the effectiveness of the government's efforts by examining key policies and laws against established international legal standards for combating the scourge. It relies on Bacchi's "What's the problem represented to be?" (WPR) policy analysis approach. The analysis revealed gaps in policy and law that have stalled the government's efforts to reduce the prevalence of child labor. The minimum age for employment is below international standards, hazardous work is allowed for children aged 16 and above, domestic work is not included in the list of hazardous work, light work is neither defined nor regulated, and penalties for violating child labor laws are weak. In addition, enforcement of child labor laws is weak, particularly in the informal sector, where most child labor takes place. These gaps are incompatible with international standards and hinder progress towards eliminating child labor in the country. This study argues that effective policies are crucial to the elimination of child labor in Liberia, without which children will continue to be engaged in exploitative work that puts them at risk and denies them their fundamental human rights.
35

Influence of Customer Locations on Heuristics and Solutions for the Vehicle Routing Problem

Tilashalski, Melissa Christine 07 July 2023 (has links)
The vehicle routing problem (VRP) determines preferred vehicle routes for visiting multiple customer locations from a depot location based on a defined objective function. The VRP is an NP-hard network optimization problem that is challenging to solve to optimality. Over the past 60 years, a multitude of heuristics and metaheuristics have been developed to reduce the computational burden of solving the VRP. To compare the performance of VRP heuristics, researchers have developed benchmarking datasets. These datasets, however, lack properties found in industry datasets. In this dissertation, we explore how properties of industry datasets influence VRP heuristics and objective functions. In Chapter 2, we quantify and compare features of benchmarking and industry datasets. To determine whether these features influence heuristic performance, we conduct extensive computational runs of three heuristics (Tabu Search, a Genetic Algorithm, and the Clarke-Wright Savings Procedure) on standard and industry datasets. In Chapter 3, we derive worst-case bounds on how VRP objective functions and metrics relate to one another. These bounds depend on properties of the customer locations and illustrate how customer locations can influence how routes behave under different routing metrics. Finally, in Chapter 4, we improve two VRP heuristics, the Clarke-Wright Savings Procedure and the Hybrid Genetic Search Algorithm, by developing new enhancements to the algorithms. These enhancements rely on certain properties of the datasets in order to perform well; thus, the heuristics perform better on specific VRP dataset types. / Doctor of Philosophy / The vehicle routing problem (VRP) creates vehicle routes that have the shortest travel distance. The routes determine how vehicles should visit multiple customer locations, to deliver or pick up goods, and return to a depot location. While explaining what the VRP entails is simple, the VRP is actually very difficult to solve, even for the most sophisticated algorithms on the best computers. Over the past 60 years, many algorithms have been developed to solve the VRP more easily and quickly. To compare the performance of VRP algorithms, researchers have developed benchmarking datasets. However, these datasets lack properties of datasets found in industry. In this dissertation, we address the disconnect between industry and benchmarking datasets by 1) comparing feature differences between these two types of datasets, 2) determining whether differences in datasets imply differences in algorithm performance, 3) proving how problem differences influence VRP routes, and 4) enhancing existing VRP algorithms to perform better on specific VRP dataset types.
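[Editor's note] As background, a minimal sketch of the textbook parallel Clarke-Wright savings procedure named in the abstract; the data structures and the depot-at-index-0 convention are illustrative assumptions, and this is the classic heuristic rather than the dissertation's enhanced variant:

```python
def clarke_wright(dist, demand, capacity):
    """Parallel Clarke-Wright savings heuristic; node 0 is the depot.

    dist: symmetric matrix of travel costs; demand[i] for customers 1..n.
    Returns a list of routes, each a list of customer indices.
    """
    n = len(dist) - 1
    routes = {i: [i] for i in range(1, n + 1)}   # one route per customer
    load = {i: demand[i] for i in range(1, n + 1)}
    route_of = {i: i for i in range(1, n + 1)}
    savings = sorted(
        ((dist[0][i] + dist[0][j] - dist[i][j], i, j)
         for i in range(1, n + 1) for j in range(i + 1, n + 1)),
        reverse=True)
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri == rj or s <= 0 or load[ri] + load[rj] > capacity:
            continue
        # merge only if i and j are endpoints of their respective routes
        a, b = routes[ri], routes[rj]
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif b[-1] == j and a[0] == i:
            merged = b + a
        elif a[0] == i and b[0] == j:
            merged = a[::-1] + b
        elif a[-1] == i and b[-1] == j:
            merged = a + b[::-1]
        else:
            continue
        routes[ri] = merged
        load[ri] += load[rj]
        del routes[rj], load[rj]
        for c in merged:
            route_of[c] = ri
    return list(routes.values())
```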
36

A Scalable, Load-Balancing Data Structure for Highly Dynamic Environments

Foster, Anthony 05 June 2008 (has links)
No description available.
37

SIMULATION-BASED TOLERANCE STACKUP ANALYSIS IN MACHINING

MUSA, RAMI ADNAN 02 September 2003 (has links)
No description available.
38

Robust Control Design and Analysis for Small Fixed-Wing Unmanned Aircraft Systems Using Integral Quadratic Constraints

Palframan, Mark C. 29 July 2016 (has links)
The main contributions of this work are applications of robust control and analysis methods to complex engineering systems, namely, small fixed-wing unmanned aircraft systems (UAS). Multiple path-following controllers for a small fixed-wing Telemaster UAS are presented, including a linear parameter-varying (LPV) controller scheduled over path curvature. The controllers are synthesized based on a lumped path-following and UAS dynamic system, effectively combining the six degree-of-freedom aircraft dynamics with established parallel transport frame virtual vehicle dynamics. The robustness and performance of these controllers are tested in a rigorous MATLAB simulation environment that includes steady winds, turbulence, measurement noise, and delays. After being synthesized off-line, the controllers allow the aircraft to follow prescribed geometrically defined paths bounded by a maximum curvature. The controllers presented within are found to be robust to the disturbances and uncertainties in the simulation environment. A robust analysis framework for mathematical validation of flight control systems is also presented. The framework is specifically developed for the complete uncertainty characterization, quantification, and analysis of small fixed-wing UAS. The analytical approach presented within is based on integral quadratic constraint (IQC) analysis methods and uses linear fractional transformations (LFTs) on uncertainties to represent system models. The IQC approach can handle a wide range of uncertainties, including static and dynamic, linear time-invariant and linear time-varying perturbations. While IQC-based uncertainty analysis has a sound theoretical foundation, it has thus far mostly been applied to academic examples, and there are major challenges when it comes to applying this approach to complex engineering systems, such as UAS. The difficulty mainly lies in appropriately characterizing and quantifying the uncertainties such that the resulting uncertain model is representative of the physical system without being overly conservative, and the associated computational problem is tractable. These challenges are addressed by applying IQC-based analysis tools to analyze the robustness of the Telemaster UAS flight control system. Specifically, uncertainties are characterized and quantified based on mathematical models and flight test data obtained in-house for the Telemaster platform and custom autopilot. IQC-based analysis is performed on several time-invariant H∞ controllers along with various sets of uncertainties aimed at providing valuable information for use in controller analysis, controller synthesis, and comparison of multiple controllers. The proposed framework is also transferable to other fixed-wing UAS platforms, effectively taking IQC-based analysis beyond academic examples to practical application in UAS control design and airworthiness certification. IQC-based analysis problems are traditionally solved using convex optimization techniques, which can be slow and memory-intensive for large problems. An oracle for discrete-time IQC analysis problems is presented to facilitate the use of a cutting-plane algorithm in lieu of convex optimization in order to solve large uncertainty analysis problems relatively quickly and with reasonable computational effort. The oracle is reformulated as a skew-Hamiltonian/Hamiltonian eigenvalue problem in order to improve the robustness of eigenvalue calculations by eliminating unnecessary matrix multiplications and inverses. Furthermore, fast, structure-exploiting eigensolvers can be employed with the skew-Hamiltonian/Hamiltonian oracle to accurately determine critical frequencies when solving IQC problems. Applicable solution algorithms utilizing the IQC oracle are briefly presented, and an example shows that these algorithms can solve large problems significantly faster than convex optimization techniques. Finally, a large complex engineering system is analyzed using the oracle and a cutting-plane algorithm. Analysis of the same system using the same computer hardware failed when employing convex optimization techniques. / Ph. D.
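[Editor's note] As background on the eigenvalue-based oracle idea, a minimal sketch of the classical Hamiltonian test for an H∞-norm bound, which locates critical frequencies as imaginary-axis eigenvalues; this is the textbook continuous-time construction, not the thesis's skew-Hamiltonian/Hamiltonian reformulation:

```python
import numpy as np

def critical_frequencies(A, B, C, D, gamma, tol=1e-8):
    """Classical Hamiltonian test for an H-infinity norm bound.

    For a stable state-space system G(s) = C (sI - A)^{-1} B + D and
    gamma > sigma_max(D), ||G||_inf >= gamma iff the Hamiltonian matrix
    below has eigenvalues on the imaginary axis; their imaginary parts
    are the critical frequencies.
    """
    m, p = B.shape[1], C.shape[0]
    R = gamma**2 * np.eye(m) - D.T @ D  # invertible when gamma > sigma_max(D)
    Rinv = np.linalg.inv(R)
    Abar = A + B @ Rinv @ D.T @ C
    H = np.block([
        [Abar, B @ Rinv @ B.T],
        [-C.T @ (np.eye(p) + D @ Rinv @ D.T) @ C, -Abar.T],
    ])
    eigs = np.linalg.eigvals(H)
    # eigenvalues numerically on the imaginary axis mark critical frequencies
    return np.sort(eigs[np.abs(eigs.real) < tol].imag)  # empty if norm < gamma
```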
39

An Efficient Knapsack-Based Approach for Calculating the Worst-Case Demand of AVR Tasks

Bijinemula, Sandeep Kumar 01 February 2019 (has links)
Engine-triggered tasks are real-time tasks that are released when the crankshaft arrives at certain positions in its path of rotation. This makes the release rate of these jobs a function of the crankshaft's angular speed and acceleration. In addition, several properties of engine-triggered tasks, such as the execution time and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate the worst-case demand of AVR tasks are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem. We then propose a framework to systematically narrow down the search space associated with finding the worst-case demand of AVR tasks. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times for randomly generated task sets, when compared to the state-of-the-art technique. / Master of Science / Real-time systems require temporal correctness along with accuracy. This notion of temporal correctness is achieved by assigning deadlines to each of the tasks. To ensure that all deadlines are met, it is important to know the processor requirement, also known as demand, of a task over a given interval. For some tasks, the demand is not constant; instead, it depends on several external factors. For such tasks, it becomes necessary to calculate the worst-case demand. Engine-triggered tasks are activated when the crankshaft in an engine is at certain points in its path of rotation, which makes their activation rate dependent on the angular speed and acceleration of the crankshaft. In addition, several properties of engine-triggered tasks, such as the execution time and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate the worst-case demand of AVR tasks are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem. We then propose a framework to systematically narrow down the search space associated with finding the worst-case demand. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times for randomly generated task sets, when compared to the state-of-the-art technique.
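[Editor's note] For orientation, a minimal sketch of the standard 0/1 knapsack dynamic program; the thesis transforms the AVR demand problem into a knapsack variant, so this is generic background rather than the authors' algorithm:

```python
def knapsack_max_value(weights, values, capacity):
    """Standard 0/1 knapsack dynamic program, O(len(weights) * capacity)."""
    dp = [0] * (capacity + 1)  # dp[c] = best value achievable with capacity c
    for w, v in zip(weights, values):
        # scan capacities downward so each item is packed at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Example: items (weight, value) = (2,3), (3,4), (4,6), capacity 5 -> 7
print(knapsack_max_value([2, 3, 4], [3, 4, 6], 5))
```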
40

Scalable Trajectory Approach for ensuring deterministic guarantees in large networks / Passage à l'échelle de l'approche par trajectoire dans de larges réseaux

Medlej, Sara 26 September 2013 (has links)
Any faulty behavior of a safety-critical real-time system, such as those used in avionics networks or the nuclear sector, can endanger lives. Consequently, the verification and validation of these systems is indispensable before their deployment. In fact, safety authorities require deterministic guarantees. In this thesis, we are interested in obtaining temporal guarantees; in particular, we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been studied for many years and several approaches have been developed. After a brief comparison of the existing approaches, one appears to be a good candidate: the Trajectory Approach, which uses results established by scheduling theory to compute an upper bound. In practice, overestimation of the computed bound can lead to the network being denied certification, so a first part of the work consists in detecting the sources of pessimism in the adopted approach. Under FIFO scheduling, the terms adding pessimism to the computed bound have been identified. However, like the other methods, the Trajectory Approach suffers from a scalability problem: it must be applicable to networks composed of a hundred switches and thousands of flows, and must therefore deliver results within an acceptable time. The first step is to identify, under FIFO scheduling, the terms that lead to long computation times. Analysis shows that the computational complexity is due to a recursive and iterative process. Building on the Trajectory Approach, we then propose to compute an upper bound within a reduced time frame and without significant loss of precision; we call this the Scalable Trajectory Approach. A tool was developed to compare the results obtained by the Trajectory Approach and by our proposal. After application to a small network (composed of 10 switches), simulation results show that the total time needed to compute the bounds of a thousand flows was reduced from several days to about ten seconds. / In critical real-time systems, any faulty behavior may endanger lives. Hence, system verification and validation is essential before deployment. In fact, safety authorities require deterministic guarantees. In this thesis, we are interested in offering temporal guarantees; in particular, we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach appeared to be a good candidate due to the tightness of the bound it offers. This method uses results established by scheduling theory to derive an upper bound. The reasons leading to a pessimistic upper bound are investigated. Moreover, since the method must be applied to large networks, it is important that it be able to give results in an acceptable time frame. Hence, a study of the method's scalability was carried out. Analysis shows that the complexity of the computation is due to a recursive and iterative process. As the number of flows and switches increases, the total runtime required to compute the upper bound of every flow present in the network under study grows rapidly. Building on the concept of the Trajectory Approach, we propose to compute an upper bound in a reduced time frame and without significant loss of precision. It is called the Scalable Trajectory Approach. After applying it to a network, simulation results show that the total runtime was reduced from several days to a dozen seconds.
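[Editor's note] As background on the scheduling-theory machinery the Trajectory Approach builds on, a minimal sketch of the classical fixed-point response-time iteration for preemptive fixed-priority tasks; it is illustrative background, not the trajectory computation itself:

```python
from math import ceil

def response_time(C, T, i, max_iter=10_000):
    """Classical fixed-point response-time analysis for preemptive
    fixed-priority tasks (index 0 = highest priority):

        R_i = C_i + sum_{j < i} ceil(R_i / T_j) * C_j
    """
    R = C[i]
    for _ in range(max_iter):
        R_next = C[i] + sum(ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R  # fixed point reached: worst-case response time
        R = R_next
    return None  # no convergence; task i likely misses its deadline

# Example: three tasks with (C, T) = (1, 4), (2, 6), (3, 12) -> R_2 = 10
print(response_time([1, 2, 3], [4, 6, 12], 2))
```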
