1

Modelling, Optimisation and Advanced Duty Detection in a Mining Machine

Charles Mcinnes Unknown Date (has links)
This thesis presents advanced algorithms for the realtime detection of dragline duty, the quantification of its causes, and the combined optimisation of dragline motion to minimise cycle time and duty. Draglines are large, powerful, rotating, multibody systems that operate in a similar manner to cranes and certain pick-and-place robots. Duty is an estimate of fatigue damage on the dragline boom caused by the cyclic stresses associated with the repetitive dig-and-dump operation. Neither realtime detection of duty nor the quantification of its causes was previously available. In addition, no previous research has optimised the dynamic motion of mining equipment to achieve the combined maximisation of productivity and minimisation of maintenance measures.

The advanced duty detection system was developed to improve feedback to dragline operators. The algorithms are based on the mechanics of dragline motion and fatigue. In particular, fatigue cycles in measured stress are identified at the earliest possible time, based on a novel proof of, and modification to, the rainflow cycle counting algorithm. The contributions of specific causes to each individual stress range are quantified based on the mechanics of operator-dependent control and dragline dynamics. In this manner, specific causes of duty are measured. The algorithms confirmed the significant contribution from operator-dependent factors and identified the major causes, attributing 28% of the total duty to out-of-plane bucket motion and 15% to dynamic vibration.

Further improvements to dragline performance required the development of a dragline dynamic model for offline testing and optimisation. A complete, condensed set of equations for a four-degree-of-freedom nonlinear coupled model of a dragline was derived using Lagrange’s method, giving direct insight into dragline behaviour not available from previous research. The model was used to investigate the relationship between motor power, operator behaviour, bucket trajectory, productivity and duty during the swing and return phases of operation. Significant potential for increasing productivity and reducing duty was demonstrated.

The advanced duty detection system and the dragline model were validated with field-measured data, video footage, alternative modelling and expert review. Realtime and end-of-cycle feedback was simulated over many cycles of measured data. Experts from industry and research were consulted to verify the causes of duty based on detailed analysis of the measured data. The forces, stresses and out-of-plane angle predicted by the dragline model were closely compared with measured data over various indicative cycles. The dragline model was also validated against an alternative model constructed in ADAMS.

The development of the dragline model enabled model-based numerical optimisation. Significant nonlinearities in the model and the constraints necessitated the use of the Lagrange multiplier method. The bucket trajectory during the swing and return phase was directly optimised. To minimise cycle time and duty together, a penalty for duty incurred was added to the cycle time, effectively maximising long-term productivity. For a slew-torque optimisation scenario using measured rope lengths, the numerical optimisation was shown to perform 10-30% better than manual optimisation and 50-60% better than the operator.

This thesis outlines several significant contributions to improving dragline performance. Underpinning the advanced duty detection system are three significant contributions to fatigue cycle counting algorithms: a proof of the equivalence of two pre-existing algorithms; a new algorithm that enables realtime detection of duty; and an algorithm that can attribute duty to specific causes. These novel feedback tools can provide realtime operator feedback and identify the causes of excess duty and when it was incurred. The complete and condensed set of equations for the four-degree-of-freedom model enabled, for the first time, the optimisation of dragline operation to concurrently reduce duty and increase productivity. The models and feedback algorithms were validated with field-measured data. Future work could include installation and extension of the advanced duty detection system. Further modelling and optimisation research could focus on improving the heuristics used for bucket trajectory control, realtime determination of the optimum bucket trajectory, and testing proposed dragline modifications.
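The rainflow cycle counting algorithm that the thesis modifies for realtime use can be sketched in its classic offline three-point form (a simplified, ASTM E1049-style illustration; the function name and data are ours, not the author's modified realtime variant):

```python
def rainflow(turning_points):
    """Classic three-point rainflow cycle counting (offline form).

    Returns (full_ranges, half_ranges): the stress ranges counted
    as full cycles and as half cycles (including the residue)."""
    stack, full, half = [], [], []
    for point in turning_points:
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])  # range of the newest pair
            y = abs(stack[-2] - stack[-3])  # range of the previous pair
            if x < y:
                break  # y may still grow into a larger cycle; wait for more data
            if len(stack) == 3:
                # y includes the starting point: count it as a half cycle
                half.append(y)
                stack.pop(0)
            else:
                # y is a fully enclosed cycle: count it and remove its two points
                full.append(y)
                del stack[-3:-1]
    # leftover ranges in the residue count as half cycles
    half += [abs(a - b) for a, b in zip(stack, stack[1:])]
    return full, half

# the worked turning-point sequence used in ASTM E1049
full, half = rainflow([-2, 1, -3, 5, -1, 3, -4, 4, -2])
print(full, half)  # [4] [3, 4, 8, 9, 8, 6]
```

The offline form above only illustrates what is being counted; the thesis's contribution is identifying these cycles at the earliest possible moment as stress samples stream in, which requires the cited proof and modification.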
2

Likelihood-Based Tests for Common and Idiosyncratic Unit Roots in the Exact Factor Model

Solberger, Martin January 2013 (has links)
Dynamic panel data models are widely used by econometricians to study the economics of, for example, people, firms, regions, or countries over time by pooling information across the cross-section. Though much panel research concerns inference in stationary models, macroeconomic data such as GDP, prices, and interest rates typically trend over time and require nonstationary analysis in one form or another. In time series analysis it is well established how autoregressive unit roots give rise to stochastic trends, implying that random shocks to a dynamic process are persistent rather than transitory. Because the implications of, say, government policy actions are fundamentally different if shocks to the economy are lasting rather than temporary, a vast number of univariate time series unit root tests are now available. Similarly, panel unit root tests have been designed to test for the presence of stochastic trends within a panel data set, and for the degree to which they are shared by the panel individuals.

Today, growing data sets certainly offer new possibilities for panel data analysis, but they also pose new problems concerning double-indexed limit theory, unobserved heterogeneity, and cross-sectional dependencies. For example, economic shocks such as technological innovations are often global, making national aggregates cross-country dependent and related through international business cycles. To impose structure on this strong cross-sectional dependence, panel unit root tests often assume that the unobserved panel errors follow a dynamic factor model. The errors then contain one part that is shared by the panel individuals, a common component, and one part that is individual-specific, an idiosyncratic component. This is appealing from the perspective of economic theory, because unobserved heterogeneity may be driven by global common shocks, which are well captured by dynamic factor models.

Yet only a handful of tests have been derived to test for unit roots in the common and in the idiosyncratic components separately. More importantly, likelihood-based methods, which are commonly used in classical factor analysis, have been ruled out for large dynamic factor models because of the considerable number of parameters. This thesis consists of four papers in which we consider the exact factor model, where the idiosyncratic components are mutually independent, so that any cross-sectional dependence runs through the common factors only. Within this framework we derive likelihood-based tests for common and idiosyncratic unit roots. In doing so we address an important issue for dynamic factor models, because likelihood-based tests, such as the Wald test, the likelihood ratio test, and the Lagrange multiplier test, are well known to be asymptotically most powerful against local alternatives. Our approach is specific-to-general: we start with restrictions on the parameter space that allow explicit maximum likelihood estimators, then relax some of the assumptions and consider a more general framework requiring numerical maximum likelihood estimation. By simulation we compare the size and power of our tests with some established panel unit root tests. The simulations suggest that the likelihood-based tests are locally powerful and, in some cases, more robust in terms of size. / Solving Macroeconomic Problems Using Non-Stationary Panel Data
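The exact factor decomposition described in this abstract can be illustrated with a small simulation (all names and numbers here are illustrative, not from the thesis): a random-walk common factor drives every panel individual, the idiosyncratic errors are mutually independent noise, and cross-sectional averaging suppresses the idiosyncratic part, exposing the shared stochastic trend.

```python
import random

def simulate_panel(n_units=20, n_periods=200, seed=0):
    """Simulate an exact factor model y_it = lam_i * F_t + e_it, where
    F_t is a random walk (a common stochastic trend) and the
    idiosyncratic errors e_it are independent white noise."""
    rng = random.Random(seed)
    F = [0.0]
    for _ in range(n_periods - 1):
        F.append(F[-1] + rng.gauss(0, 1))  # common unit-root factor
    loadings = [rng.uniform(0.5, 1.5) for _ in range(n_units)]
    panel = [[lam * f + rng.gauss(0, 1) for f in F] for lam in loadings]
    return F, panel

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

F, panel = simulate_panel()
# averaging over the cross-section washes out the independent
# idiosyncratic noise, leaving the common stochastic trend
cs_mean = [sum(col) / len(col) for col in zip(*panel)]
print(corr(cs_mean, F))  # close to 1
```

This is exactly the structure the likelihood-based tests exploit: under the exact factor model, all cross-sectional dependence runs through F, so common and idiosyncratic unit roots can be tested separately.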
3

An Adaptive Mixed Finite Element Method using the Lagrange Multiplier Technique

Gagnon, Michael Anthony 04 May 2009 (has links)
Adaptive methods in finite element analysis are essential tools for efficient computation and error control in problems that may exhibit singularities. In this paper, we consider solving a boundary value problem that exhibits a singularity at the origin due to both the structure of the domain and the regularity of the exact solution. We introduce a hybrid mixed finite element method using Lagrange multipliers to solve the partial differential equation for both the flux and the displacement. An a posteriori error estimate is then applied, both locally and globally, to approximate the error of the computed flux with respect to the exact flux. Local estimation is the key tool in identifying where the mesh should be refined so that the error in the computed flux is controlled while maintaining computational efficiency. Finally, we introduce a simple refinement process to improve the accuracy of the computed solutions. Numerical experiments are conducted to demonstrate the advantages of mesh refinement over a fixed uniform mesh.
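The Lagrange multiplier technique named in the title can be illustrated on the simplest constrained problem (a generic sketch, entirely separate from the paper's finite element spaces): setting the gradient of the Lagrangian to zero turns an equality-constrained minimisation into a single linear KKT system.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# minimise x^2 + y^2 subject to x + y = 2:
# stationarity of L(x, y, lam) = x^2 + y^2 - lam*(x + y - 2)
# gives the KKT system below, where lam is the Lagrange multiplier
KKT = [[2.0, 0.0, -1.0],
       [0.0, 2.0, -1.0],
       [1.0, 1.0,  0.0]]
rhs = [0.0, 0.0, 2.0]
x, y, lam = solve(KKT, rhs)
print(x, y, lam)  # 1.0 1.0 2.0
```

The saddle-point (indefinite) structure of the KKT matrix is the same structural feature that makes mixed and hybrid finite element systems harder to solve than standard positive-definite ones.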
4

Handling Multiple Constraints in Solution Methods for Shape Optimization Problems (形状最適化問題の解法における多制約の取り扱い)

小山, 悟史, KOYAMA, Satoshi, 畔上, 秀幸, AZEGAMI, Hideyuki 10 1900 (has links)
No description available.
5

Error Analysis for Hybrid Trefftz Methods Coupling Neumann Conditions

Hsu, Wei-chia 08 July 2009 (has links)
The Lagrange multiplier used for the Dirichlet condition is well known in the mathematics community, and the Lagrange multiplier used for the Neumann condition is popular for the Trefftz method in the engineering community, in particular for elasticity problems. The latter is called the Hybrid Trefftz method (HTM). However, no error analysis appears to have been published for the HTM. This paper is devoted to the error analysis of the HTM for −Δu + cu = 0 with c = 1 or c = 0. Error bounds are derived that provide the optimal convergence rates. Numerical experiments and comparisons between the two kinds of Lagrange multipliers are also reported. The analysis in this paper can also be extended to the HTM for elasticity problems.
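In generic terms, the multiplier construction used here takes a saddle-point form (a schematic sketch under our reading of the abstract, not the paper's exact discrete formulation): the Trefftz trial functions already satisfy the differential equation exactly, so only the boundary data g needs to be enforced, and the multiplier λ does this weakly:

```latex
\text{Find } (u,\lambda)\in V\times M \text{ such that}
\begin{aligned}
a(u,v) + b(v,\lambda) &= \ell(v) && \forall v\in V,\\
b(u,\mu) &= g(\mu) && \forall \mu\in M,
\end{aligned}
```

where V is the Trefftz space of solutions of −Δu + cu = 0, M is the multiplier space on the boundary, and λ plays the role of the unknown boundary flux (or trace, depending on which condition is enforced).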
6

Hybrid Trefftz Methods Coupling Traction Conditions in Linear Elastostatics

Tsai, Wu-chung 08 July 2009 (has links)
The Lagrange multiplier used for the displacement (i.e., Dirichlet) condition is well known in the mathematics community (see [1, 2, 10, 18]), and the Lagrange multiplier used for the traction (i.e., Neumann) condition is popular for the Trefftz method for elasticity problems in the engineering community, where it is called the Hybrid Trefftz method (HTM). However, no error analysis appears to have been published for the HTM. This paper is devoted to the error analysis of the HTM for elasticity problems. Numerical experiments are reported to support the analysis.
7

Unifying Low-Rank Models for Visual Learning

Cabral, Ricardo da Silveira 01 February 2015 (has links)
Many problems in signal processing, machine learning and computer vision can be solved by learning low-rank models from data. In computer vision, problems such as rigid structure from motion have been formulated as optimization over subspaces of fixed rank. These hard-rank constraints have traditionally been imposed by a factorization that parameterizes subspaces as a product of two matrices of fixed rank. While factorization approaches lead to efficient and kernelizable optimization algorithms, they have been shown to be NP-hard in the presence of missing data. Inspired by recent work in compressed sensing, hard-rank constraints have been replaced by soft-rank constraints, such as the nuclear norm regularizer. Vis-à-vis hard-rank approaches, soft-rank models are convex even in the presence of missing data: but how can convex optimization solve an NP-hard problem? This thesis addresses this question by analyzing the relationship between hard- and soft-rank constraints in the unsupervised factorization with missing data problem. Moreover, we extend soft-rank models to weakly supervised and fully supervised learning problems in computer vision.

There are four main contributions of our work: (1) The analysis of a new unified low-rank model for matrix factorization with missing data. Our model subsumes soft- and hard-rank approaches and merges advantages from previous formulations, such as efficient algorithms and kernelization. It also provides justifications for the choice of algorithms and identifies regions that guarantee convergence to global minima. (2) A deterministic "rank continuation" strategy for the NP-hard unsupervised factorization with missing data problem, which is highly competitive with the state of the art and often achieves globally optimal solutions. In preliminary work, we show that this optimization strategy is applicable to other NP-hard problems which are typically relaxed to convex semidefinite programs (e.g., MAX-CUT, the quadratic assignment problem). (3) A new soft-rank fully supervised robust regression model. This convex model can deal with noise, outliers and missing data in the input variables. (4) A new soft-rank model for weakly supervised image classification and localization. Unlike existing multiple-instance approaches to this problem, our model is convex.
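The hard-rank factorization route the abstract contrasts with convex relaxations can be sketched in its simplest form (rank one, alternating least squares on the observed entries only; the data and function are illustrative, not from the thesis):

```python
def rank1_complete(M, mask, iters=200):
    """Fit a rank-1 factorization M ~ u v^T from observed entries only
    (mask[i][j] == 1 where M[i][j] is observed), by alternating
    closed-form least-squares updates of u and v.

    This is the nonconvex hard-rank approach: each subproblem is
    convex, but the joint problem over (u, v) is not."""
    m, n = len(M), len(M[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):  # update u with v held fixed
            num = sum(mask[i][j] * M[i][j] * v[j] for j in range(n))
            den = sum(mask[i][j] * v[j] ** 2 for j in range(n)) or 1.0
            u[i] = num / den
        for j in range(n):  # update v with u held fixed
            num = sum(mask[i][j] * M[i][j] * u[i] for i in range(m))
            den = sum(mask[i][j] * u[i] ** 2 for i in range(m)) or 1.0
            v[j] = num / den
    return u, v

# ground-truth rank-1 matrix with two entries hidden by the mask
true_u, true_v = [1.0, 2.0, 3.0], [2.0, 1.0, 4.0]
M = [[a * b for b in true_v] for a in true_u]
mask = [[1, 1, 1], [1, 0, 1], [1, 1, 0]]
u, v = rank1_complete(M, mask)
# the fitted factorization reproduces the hidden entries
print(u[1] * v[1], u[2] * v[2])  # approximately 2.0 and 12.0
```

On this easy, consistent instance the alternating scheme recovers the hidden entries; the NP-hardness the thesis discusses shows up for general missing-data patterns and noisy data, where such schemes can stall in poor local minima — which is what motivates both the soft-rank relaxation and the rank-continuation strategy.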
8

Likelihood-Based Panel Unit Root Tests for Factor Models

Zhou, Xingwu January 2014 (has links)
The thesis consists of four papers that address likelihood-based unit root tests for panel data with cross-sectional dependence arising from common factors. In the first three papers, we derive Lagrange multiplier (LM)-type tests for common and idiosyncratic unit roots in the exact factor models based on the likelihood function of the differenced data. Also derived are the asymptotic distributions of these test statistics. The finite sample properties of these tests are compared by simulation with other commonly used unit root tests. The results show that our LM-type tests have better size and local power properties. In the fourth paper, we estimate the spaces spanned by the common factors and the spaces spanned by the idiosyncratic components of the static factor model by using the quasi-maximum likelihood (ML) method and compare it with the widely used method of principal components (PC). Next, by simulation, we compare the size and power properties of established tests for idiosyncratic unit roots, using both the ML and PC methods. Simulation results show that the idiosyncratic unit root tests based on the likelihood-based residuals generally have better size and higher size-adjusted power, especially when the cross-sectional dimension is small and the time series dimension is large.
9

A Market approach to balance services pricing

Naidoo, Robin January 2013 (has links)
The co-optimization of energy and reserves has become a standard requirement in integrated markets. This is due to the inverse relationship between energy and reserves: the provision of reserves generally reduces the amount of primary energy a generating unit can produce, and vice versa. This suggests that these products should be procured through a simultaneous auction to ensure optimal procurement and pricing. Furthermore, forward markets dictate that this co-optimization of energy and reserves be done over a multi-period planning horizon. This dissertation addresses the problem of optimal scheduling and pricing of energy and reserves over a multi-period planning horizon using an optimal power flow formulation. The extension of the problem from a static optimization problem to a dynamic optimization problem is presented. Price definitions for energy and reserves in terms of shadow prices emanating from the optimization algorithm are provided. It is shown that the proposed price formulation leads to the cascading of reserve prices and eliminates the problem of “price reversal”, where lower-quality reserves are priced higher than higher-quality reserves. Pricing conditions are also established for the downward substitution of higher-quality reserves for lower-quality reserves. The proposed pricing formulations are tested on the IEEE 24-Bus Reliability Test System and on the South African power network. The simulated results show that cascading of reserve prices does occur and that prices of different types of reserves are equal when downward substitution of reserves occurs. Zonal reserve requirements result in higher energy and reserve prices, which in turn result in higher procurement costs to the system operator and higher profits to market participants. Congestion on the network also results in higher procurement costs to the system operator and higher profits to market participants in the case of zonal pricing of reserves.
/ Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
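The coupling between energy and reserves that drives the co-optimization can be shown with a deliberately tiny two-unit example (all numbers and the enumeration approach are illustrative; the dissertation uses an optimal power flow formulation, and its shadow prices come from the optimizer's multipliers rather than finite differences): reserve must come from capacity left over after energy dispatch, and prices can be read off by perturbing the demand and reserve requirements.

```python
def co_optimize(demand, reserve_req):
    """Co-optimize energy and reserve for two units by enumerating the
    energy split, then filling the reserve requirement from the
    cheapest remaining headroom. Returns the minimum total cost."""
    # (energy cost, reserve cost, capacity) per unit -- illustrative numbers
    units = [(10.0, 1.0, 100.0), (30.0, 2.0, 100.0)]
    best = None
    for e1 in range(0, 101):          # unit 1 energy (its capacity is 100)
        e2 = demand - e1              # energy balance fixes unit 2
        if not 0 <= e2 <= units[1][2]:
            continue
        cost = e1 * units[0][0] + e2 * units[1][0]
        # reserve capability is headroom (cap - energy): energy and
        # reserve compete for the same capacity; fill cheapest first
        need = reserve_req
        for (_, rc, cap), e in sorted(zip(units, [e1, e2]),
                                      key=lambda t: t[0][1]):
            take = min(need, cap - e)
            cost += take * rc
            need -= take
        if need > 1e-9:
            continue  # infeasible: not enough headroom for reserves
        if best is None or cost < best:
            best = cost
    return best

base = co_optimize(150, 40)
energy_price = co_optimize(151, 40) - base   # shadow price of demand
reserve_price = co_optimize(150, 41) - base  # shadow price of reserve
print(energy_price, reserve_price)  # 30.0 2.0
```

Even in this toy setting the energy price (30.0, the marginal unit's energy cost) and the reserve price (2.0, the marginal reserve provider's cost) emerge as sensitivities of the optimal cost, which is the finite-difference analogue of the shadow prices the dissertation defines.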
10

Optimization of a Mean-Variance Portfolio (Optimierung eines Mean-Variance Portfolios)

Janke, Oliver 26 October 2017 (has links)
This diploma thesis examines the optimization of a mean-variance portfolio in a complete market, under the condition that insolvency of the investor is excluded. Here, the dual method (also called the martingale method)
