1. Designed for Better Control: Using Kinematic and Dynamic Metrics to Optimize Robot Manipulator Design
Morrell, John R. (17 August 2023)
In control theory, optimal performance is generally defined as the best possible controlled performance given a static, unchangeable plant. However, principled design of the underlying system can make effective controllers easier to design and can improve final control performance far beyond what any finely tuned controller could achieve alone. This work develops performance metrics for serial robot arms that guide the design and optimization of the arm's structure toward greater final performance. First, a kinematic (motion-based) metric called the Actuator Independence Metric (AIM) measures the uniqueness of the movement capabilities of the different joints in a robot arm. Arms optimized with respect to the AIM exhibit greater freedom of movement. In particular, it is shown that a robot's AIM score correlates strongly with its ability to find solutions to the inverse kinematics problem, and that redundant arms with a high AIM score have more useful null spaces, with significant ability to change configuration while maintaining a fixed end-effector pose. Second, a dynamic metric called the acceleration radius is explored. The acceleration radius measures the maximum acceleration a robot arm can generate in any direction. An efficient algorithm for calculating the acceleration radius is developed which exploits the geometry of the mapping from joint torques to accelerations. A design optimization demonstrates how the acceleration radius predicts the dynamic movement capabilities of robot arms; it is shown that arms which are optimal with respect to the acceleration radius can follow faster paths through a task space. The metrics developed in this thesis can be used to create customized robot arm designs for specific tasks that exhibit desirable control performance.
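As a concrete illustration of the dynamic metric, the sketch below computes an approximate acceleration radius for a hypothetical two-joint arm at a fixed configuration. The inertia matrix, bias torques, and torque limits are assumed values, and the direction sampling stands in for the exact geometric algorithm developed in the thesis.

```python
import numpy as np

# Hypothetical two-joint arm at a fixed configuration: qdd = M^{-1}(tau - c),
# with box torque limits |tau_i| <= tau_max_i. All numbers are assumptions.
M = np.array([[2.0, 0.3],
              [0.3, 0.8]])       # joint-space inertia matrix
c = np.array([0.5, 0.1])         # gravity/Coriolis bias torque
tau_max = np.array([10.0, 6.0])  # actuator torque limits

Minv = np.linalg.inv(M)

def max_accel_along(d):
    """Largest acceleration achievable in unit direction d. For a linear map
    with box-constrained torques, the maximum is attained at a torque vertex:
    tau_i = +/- tau_max_i, with signs chosen by (Minv^T d)_i."""
    tau = np.sign(Minv.T @ d) * tau_max
    return d @ (Minv @ (tau - c))

# Acceleration radius: the acceleration guaranteed in the worst direction,
# approximated here by sampling directions.
angles = np.linspace(0, 2 * np.pi, 720, endpoint=False)
radius = min(max_accel_along(np.array([np.cos(a), np.sin(a)])) for a in angles)
print(f"approximate acceleration radius: {radius:.3f} rad/s^2")
```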
2. Is economic value added (EVA) the best way to assemble a portfolio?
Pataky, Tamas (1 December 2012)
In search of a better investment metric, researchers began to study Economic Value Added, or EVA, which was introduced in 1991 by Stern Stewart & Co. in their book "The Quest for Value" (Turvey, 2000). Stern Stewart & Co. devised EVA as a better alternative for evaluating investment projects in corporate finance; it was later considered as a performance metric for investor use. A wide array of multinational corporations, such as Coca-Cola, Briggs & Stratton, and AT&T, adopted the EVA method, which led to EVA's worldwide acclaim. Several findings in this study reveal that EVA does not offer less risk, higher returns, or more adaptability for an investor. In fact, EVA underperformed the traditional portfolio performance metrics in key measurements, including mean returns and confidence intervals. EVA is also a difficult performance metric to calculate: several of its components, such as NOPAT, the cost of equity, and the cost of debt, are complex and can each be calculated in several different ways, so inaccurate or missing information can significantly distort the outcome. Traditional performance metrics such as ROA, ROE, and E/P, on the other hand, are simple to calculate, with few components and only one way to calculate them.
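For context, here is a minimal sketch of the textbook EVA computation the abstract refers to, with made-up inputs; the many alternative adjustments to NOPAT and capital that the study discusses are not reproduced.

```python
# EVA = NOPAT - WACC * invested capital, where WACC blends the cost of
# equity with the after-tax cost of debt. All figures below are illustrative.
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    total = equity + debt
    return (equity / total) * cost_of_equity \
         + (debt / total) * cost_of_debt * (1 - tax_rate)

nopat = 120.0                # net operating profit after taxes ($M)
equity, debt = 800.0, 400.0  # capital structure ($M)
rate = wacc(equity, debt, cost_of_equity=0.10, cost_of_debt=0.06, tax_rate=0.30)
eva = nopat - rate * (equity + debt)
print(f"WACC = {rate:.2%}, EVA = ${eva:.1f}M")
```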
3. The Impact of Minimum Investment Barriers on Hedge Funds: Are Retail Investors Getting the Short End of Performance?
Huang, Kelvin (5 January 2009)
Using paired tests of high- and low-minimum-investment fund groups on several performance measures for hedge funds and funds-of-funds from 1991 to 2005, we find that funds imposing a higher entry fee requirement on their investors produce significantly better performance, both on a raw basis and on a risk-adjusted basis. Differences in the performance of the high and low entry fee funds become less significant, economically and statistically, in later years, suggesting a diminishing performance gap. We also find considerably more cross-sectional dispersion among funds with lower minimum investment levels, indicating a much higher level of fund selection risk for undiversified investors who desire investment in funds with low entry fee barriers. / Thesis (Ph.D., Management), Queen's University, 2008.
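A minimal sketch of the paired-test setup described above, using simulated annual returns in place of the study's matched high- and low-minimum fund groups:

```python
import numpy as np
from scipy import stats

# Simulated annual returns for matched fund pairs (assumed means and
# volatilities); a paired t-test asks whether the high-minimum group wins.
rng = np.random.default_rng(0)
high_min = rng.normal(0.09, 0.04, size=15)
low_min = rng.normal(0.07, 0.06, size=15)

t, p = stats.ttest_rel(high_min, low_min)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```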
4. A Development of Performance Metrics for Forecasting Schedule Slippage
Arcuri, Frank John (16 May 2007)
A project schedule should mirror the project as it takes place. Accurate project schedules, when updated and revised, reflect the actual progress of construction as performed in the field. Various methods for monitoring construction progress succeed in representing actual construction as it happens. These progress monitoring techniques clearly identify when a project is behind schedule, yet it is far less obvious to recognize when a project is going to slip behind schedule.
This research explores how schedule performance measurement mechanisms can be used to recognize construction projects that may be about to slip behind schedule, and what type of early warning they provide so that corrective action can be taken. Such early warning systems help prevent situations where the contractor and/or owner spend months in denial, insisting that a troubled project is still going to finish on time.
This research develops an intellectual framework for schedule control systems, based on a review of control systems in the construction industry. The framework forms the foundation for the development of a schedule control technique for forecasting schedule slippage: the Required Performance Method (RPM). The RPM forecasts the performance required for timely project completion, and is based on the contractor's ability to expand future work. The RPM is a paradigm shift from control based on the scheduled completion date to control based on required performance. This shift enables forecasts to express concern in more tangible terms. Furthermore, it represents a focus on what needs to be done to achieve a target completion date, as opposed to the traditional focus on what has been done. The RPM is demonstrated through a case study, revealing its ability to forecast impending schedule slippage. / Master of Science
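As one plausible reading of the idea (not the RPM's exact formulation), the sketch below contrasts the production rate required to finish on time with the rate demonstrated to date; all quantities are invented for illustration.

```python
# Required-performance idea: compare the rate needed to finish on time
# against the rate achieved so far. All numbers are invented.
def required_rate(total_work, work_done, total_days, days_elapsed):
    return (total_work - work_done) / (total_days - days_elapsed)

demonstrated = 80 / 50  # units/day achieved over the first 50 days
required = required_rate(total_work=200, work_done=80,
                         total_days=120, days_elapsed=50)
if required > demonstrated:
    print(f"early warning: must average {required:.2f} units/day, "
          f"but only {demonstrated:.2f} achieved so far")
```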
5. Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning
Nounagnon, Jeannette Donan (12 July 2016)
Geolocation accuracy can be a life-or-death factor for rescue teams; natural and man-made disasters are convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is collaborative positioning, which consists of simultaneously solving for the positions of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation for its performance has been largely lacking.
This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed is: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is the acquisition of an additional set of information between the collaborating nodes. This new information reduces the uncertainty in the localization of both nodes, and under certain conditions the reduction occurs for both nodes by the same amount. Hence collaboration is beneficial in terms of uncertainty.
However, reduced uncertainty does not necessarily imply improved accuracy. We therefore define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine the factors that affect the improvement in accuracy due to collaboration. We derive range conditions under which collaborative positioning starts to degrade the performance of standalone positioning, and we derive and test criteria to determine on the fly (ahead of time) whether collaborating is worthwhile for improving accuracy.
The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications. / Ph.D.
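To make the information-theoretic quantity concrete, here is a small sketch of the KL divergence between two Gaussian position estimates, e.g. a node's standalone posterior versus its posterior after collaboration; the means and covariances are assumptions.

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for d-dimensional Gaussians."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

standalone = (np.zeros(2), np.diag([4.0, 4.0]))  # wide position uncertainty
collab = (np.zeros(2), np.diag([1.5, 2.0]))      # tighter after collaboration
print(f"information gained (nats): {kl_gaussian(*collab, *standalone):.3f}")
```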
6. Possible Difficulties in Evaluating University Performance Based on Publications Due to Power Law Distributions: Evidence from Sweden
Sadric, Haroon; Zia, Sarah (January 2023)
Measuring the research performance of a university matters to universities themselves, governments, and students alike. Among other metrics, the number of publications is easy to obtain, and the large number of publications each university produces in a year suggests it could be an accurate metric. However, the number of publications depends strongly on the size of the institution, implying, if left unaddressed, that larger universities are better. One might therefore intuitively normalize by size and use publications per researcher instead: a better institution would allow individual researchers to publish more each year. However, publication counts, like many other quantities, may follow a power-law distribution, in which most researchers have few publications and only a few researchers have very many. Such power-law distributions violate assumptions behind the central limit theorem, for example having a well-defined mean or variance. In particular, one cannot meaningfully normalize or average power-law-distributed data, making comparisons of university publications impossible if the counts indeed follow a power law. While some scientific domains and universities have been shown to exhibit power-law distributions, it is not known whether Swedish universities do. Here, we collect publication data for Swedish universities and determine whether or not the data are power-law distributed. Interestingly, if they are, one might use the slope of the power-law distribution as a proxy for research output: a steep slope suggests that the ratio of highly published authors to those with few publications is small, whereas a flatter slope suggests that a university has more highly published authors than a university with a steeper slope. A second objective here is therefore to assess whether, or to what extent, the slope of the distribution can be determined. This study shows that eight of the fifteen Swedish universities considered follow a power-law distribution (Kolmogorov-Smirnov statistic < 0.05), while the remaining seven do not. The key determinant is the total number of publications: it is often so small that one can neither reject a power-law distribution nor determine the slope of the distribution with any accuracy. While this study suggests that in principle the slopes of power-law distributions can be used as a comparative metric, it also shows that for half of Sweden's universities the data are insufficient for this type of analysis.
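A minimal sketch of the fitting step (a continuous maximum-likelihood estimate of the exponent with a Kolmogorov-Smirnov distance, on synthetic counts standing in for the Swedish data):

```python
import numpy as np

def fit_power_law(x, xmin=1.0):
    """Continuous MLE for the exponent, alpha = 1 + n / sum(ln(x / xmin)),
    plus the KS distance between the empirical and fitted CDFs."""
    x = np.sort(np.asarray([v for v in x if v >= xmin], dtype=float))
    alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
    empirical = np.arange(1, len(x) + 1) / len(x)
    model = 1.0 - (x / xmin) ** (1.0 - alpha)
    return alpha, np.max(np.abs(empirical - model))

rng = np.random.default_rng(1)
sample = rng.pareto(1.8, size=2000) + 1.0  # heavy-tailed synthetic counts
alpha, ks = fit_power_law(sample)
print(f"alpha = {alpha:.2f}, KS distance = {ks:.3f}")
```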
7. Hybrid Methods for Feature Selection
Cheng, Iunniang (1 May 2013)
Feature selection is one of the important data preprocessing steps in data mining. The feature selection problem involves finding a feature subset such that a classification model built with only this subset has better predictive accuracy than a model built with the complete set of features. In this study, we propose two hybrid methods for feature selection. The best features are selected through either the hybrid methods or existing feature selection methods, and the reduced dataset is then used to build classification models with five classifiers. Classification performance is evaluated in terms of the area under the Receiver Operating Characteristic (ROC) curve (AUC). The proposed methods are shown empirically to improve on the performance of existing feature selection methods.
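For illustration, a sketch of this evaluation pipeline with a stock filter method in place of the proposed hybrid methods; the dataset and classifier are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data; select features on the training split only, then score by AUC.
X, y = make_classification(n_samples=500, n_features=40,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr), y_tr)
scores = clf.predict_proba(selector.transform(X_te))[:, 1]
print(f"AUC with 10 selected features: {roc_auc_score(y_te, scores):.3f}")
```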
8. Generic design and investigation of solar cooling systems
Saulich, Sven (January 2013)
This thesis presents a holistic approach to improving the overall design of solar cooling systems driven by solar thermal collectors. Newly developed methods for the thermodynamic optimization of hydraulics and control were used to redesign an existing pilot plant; measurements taken from the redesigned system show an 81% increase in the Solar Cooling Efficiency (SCEth) factor compared to the original pilot system. In addition to the improvements in system design, new efficiency factors for benchmarking solar cooling systems are presented. The Solar Supply Efficiency (SSEth) factor quantifies the quality of the solar thermal charging system relative to the usable heat available to drive the sorption process. The product of the SSEth and the already established COPth of the chiller yields the SCEth factor, which for the first time provides a clear and concise benchmark for the overall design of solar cooling systems. Furthermore, the definition of a coefficient of performance that includes irreversibilities from energy conversion (COPcon) enables a direct comparison of compression and sorption chiller technologies; this new performance metric is applicable to all low-temperature heat-supply machines, allowing direct comparison of different types or technologies. The findings of this work led to an optimized generic design for solar cooling systems, which was successfully transferred to the market.
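A back-of-the-envelope sketch of how the benchmarking factors compose, under one plausible reading of their definitions (the energy totals are invented):

```python
# Plausible reading of the factors: SSEth = usable drive heat / solar input,
# COPth = cooling output / drive heat, so SCEth = SSEth * COPth. Invented data.
solar_input = 500.0     # kWh of solar irradiation on the collectors
drive_heat = 200.0      # kWh of usable heat delivered to the sorption chiller
cooling_output = 140.0  # kWh of cooling produced

sse_th = drive_heat / solar_input
cop_th = cooling_output / drive_heat
sce_th = sse_th * cop_th  # equals cooling_output / solar_input
print(f"SSEth = {sse_th:.2f}, COPth = {cop_th:.2f}, SCEth = {sce_th:.2f}")
```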