About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Computational Methods for Dynamics and Mobility Analysis of Multiloop Mechanisms and Robotic Systems

Wehage, Kristopher 07 June 2017 (has links)
In this work, a systematic method based on graph-theoretic concepts is presented that allows setting up a general mechanism's governing equations and analyzing transmission performance for a wide range of parametric and topological variations. The algorithms and methods described in this work are designed to be fully automatic, requiring minimal supervision from an analyst for successful execution; robust, capable of handling instantaneous bifurcations and end-of-stroke conditions; and numerically efficient, through the application of numerical reduction strategies, custom sparse matrix methods, and vectorization.

In the first primary section, the focus is on automatic, graph-theoretic methods for setting up a mechanism's constraint equations and solving the dynamic equations of motion. The Jacobian matrix of a multibody system's constraint equations plays a central role in the equations of motion and is almost never full-rank, which complicates the solution process even for relatively simple systems. Therefore, Generalized Coordinate Partitioning (GCP), a numerical method based on LU decomposition of the Jacobian matrix, is applied to find the optimal set of independent generalized coordinates to describe the system. To increase the efficiency of the GCP algorithm, a new general-purpose graph-partitioning algorithm, referred to as "Kinematic Substructuring," is introduced and numerical results are provided. Furthermore, a new numerical implementation for solving the equations of motion, referred to as the "Preconditioned Spatial Equations of Motion," is presented, and a new sparse matrix solver is described and demonstrated in several numerical examples.

In the second primary section, it is shown how a simple numerical procedure applied to a mechanism's constraint equations can be used as a measure of transmission performance. The metric, referred to as "mobility numbers," provides an indication of a joint's ability to effect a change in a mechanism's overall configuration and is directly related to a mechanism's instantaneous mobility. The relationship between mobility, transmission, and manipulability is discussed. Unlike many other measures of transmission performance, mobility numbers are normalized and bounded between 0 and 1, and can be computed simply and efficiently from the Jacobian matrix using LU and QR matrix decompositions. Examples of applications of mobility numbers are provided.

Finally, in the last section, aspects of software design, including external and internal storage formats and memoization programming methods, are discussed.
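As a rough sketch of the coordinate-partitioning idea described above (not the thesis's Kinematic Substructuring or custom sparse LU implementation), the following uses SciPy's rank-revealing QR with column pivoting on a small, hypothetical Jacobian whose third constraint row is redundant; the pivot columns stand in for the dependent coordinates and the remainder form the independent set.

```python
import numpy as np
from scipy.linalg import qr

def partition_coordinates(J, tol=1e-10):
    """Split generalized coordinates into dependent/independent sets.

    J is the m-by-n constraint Jacobian. Columns selected as pivots span
    the row space and are treated as dependent coordinates; the remaining
    columns become the independent set (a stand-in for GCP, which the
    thesis implements with LU-based routines).
    """
    # QR with column pivoting orders columns by decreasing importance.
    _, R, piv = qr(J, mode="economic", pivoting=True)
    # Numerical rank: diagonal entries of R above a relative tolerance.
    diag = np.abs(np.diag(R))
    rank = int(np.sum(diag > tol * diag[0])) if diag.size else 0
    dependent = np.sort(piv[:rank])
    independent = np.sort(piv[rank:])
    return dependent, independent

# Hypothetical 3-constraint, 4-coordinate Jacobian; the third row is the
# sum of the first two, so the matrix is rank-deficient on purpose.
J = np.array([[1.0, 0.0, -0.5,  0.2],
              [0.0, 1.0,  0.3, -0.7],
              [1.0, 1.0, -0.2, -0.5]])
dep, ind = partition_coordinates(J)
print("dependent:", dep, "independent:", ind)
```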
2

Methods to analyze large automotive fleet-tracking datasets with application to light- and medium-duty plug-in hybrid electric vehicle work trucks

Vore, Spencer 04 January 2017 (has links)
This work seeks to define methodologies and techniques to analyze automotive fleet-tracking big data and provide sample results that have implications for the real world. To perform this work, vehicle fleet-tracking data from Odyne and Via plug-in hybrid electric trucks collected by the Electric Power Research Institute (EPRI) was used. Both CAN-bus signals and GPS data were recorded from these vehicles at a one-second sampling rate. Colorado State University (CSU) was responsible for analyzing this data after it had been collected by EPRI and for producing results with application to the real world.

A list of potential research questions is presented and an initial feasibility assessment is performed to determine how these questions might be answered using vehicle fleet-tracking data. Later, a subset of these questions is analyzed and answered in detail using the EPRI dataset.

The methodologies, techniques, and software used for this data analysis are described in detail. An algorithm that summarizes second-by-second vehicle tracking data into a list of higher-level driving and charging events is presented, and utility factor (UF) curves and other statistics of interest are generated from this summarized event data.

In addition, another algorithm was built on top of the driving-event identification algorithm to discretize the driving event data into approximately 90-second drive intervals. This allows a regression model to be fit to the data. A correlation between ambient temperature and equivalent vehicle fuel economy (in miles per gallon) is presented for Odyne, and it is similar to the trend seen in conventional-vehicle fuel economy versus ambient temperature. It is also shown how ambient temperature variations can influence vehicle fuel economy, and there is a discussion of how changes in HVAC use could influence the fuel economy results.

It is also demonstrated how variations in the data analysis methodology can influence the final results. This provides evidence that vehicle fleet-tracking data analysis methodologies need to be well defined to ensure that the analysis results are of the highest quality. The questions and assumptions behind the presented analysis results are examined, and a list of future work to address potential concerns and unanswered questions about the data analysis process is presented. Hopefully, this future-work list will be beneficial to future vehicle data analysis projects.

The importance of using real-world driving data is demonstrated by comparing fuel economy results from the real-world data to the fuel economy calculated from EPA drive cycles. Utility factor curves calculated from the real-world data are also compared to the standard utility factor curves presented in the SAE J2841 specification. Both comparisons showed that the real-world driving data differ from the standard assumptions, demonstrating the potential utility of evaluating vehicle technologies using the real-world big-data techniques presented in this work.

Overall, this work documents some of the data analysis techniques that can be used for analyzing vehicle fleet-tracking big data and demonstrates the impact of the analysis results in the real world. It also provides evidence that the data analysis methodologies used to analyze vehicle fleet-tracking data need to be better defined and evaluated in future work.
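For readers unfamiliar with utility factor curves, the sketch below computes a fleet UF curve from a handful of made-up daily driving distances using the usual definition (the fraction of total distance that falls within a given charge-depleting range); it illustrates only the quantity itself, not the EPRI dataset or the event-summarization algorithm described above.

```python
import numpy as np

def utility_factor_curve(daily_miles, ranges):
    """Fleet utility factor: fraction of total distance that could be
    driven electrically if each day's first R miles were charge-depleting.

    Uses the common definition UF(R) = sum_i min(d_i, R) / sum_i d_i.
    The per-day distances below are hypothetical, not EPRI data.
    """
    d = np.asarray(daily_miles, dtype=float)
    return np.array([np.minimum(d, R).sum() / d.sum() for R in ranges])

daily_miles = [12.0, 35.0, 8.0, 60.0, 22.0, 95.0, 15.0]  # hypothetical days
ranges = np.arange(0, 101, 10)                           # charge-depleting ranges, miles
for R, u in zip(ranges, utility_factor_curve(daily_miles, ranges)):
    print(f"CD range {R:3d} mi -> UF = {u:.2f}")
```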
3

Finite element analysis using component decomposition and knowledge-based control

Holzhauer, Douglas J 01 January 1999 (has links)
A novel component-decomposition finite element method is proposed that employs knowledge-based analysis for solution control and object-oriented programming methods for implementation. Application of these methods has enabled the development of a Thermal Analysis Blackboard System (TABS) that exhibits many improvements over traditional finite element methods. This system has five distinct characteristics: (1) The analyzer is for a specific application domain (electronic modules) and phenomenological analysis (FEA) domain; embedded knowledge of these domains enables the analysis process to be intelligently controlled. (2) The control of the analysis process occurs at a higher level of abstraction than that of finite element nodes and elements; instead, control is based on the major components of the application domain, such as chips, substrates, and the package. (3) The finite element analysis is conducted separately for each system component using decomposition techniques, which allows the analysis of individual components to be optimized based on each component's unique characteristics. (4) The numerical analyzer that solves the matrix equations was written using object-oriented programming methods. (5) Data storage and analysis control are accomplished using a blackboard architecture. The use of component decomposition allows small incompatibilities to exist at the intercomponent boundaries, which introduces an additional approximation into the problem solution. However, test cases showed that the small incompatibilities at the component interfaces have a relatively small effect on the accuracy of the analysis results in the critical regions of interest, with a maximum error of 7% observed in one test case. This decomposition method may not be appropriate if the critical region of interest is near an intercomponent boundary or if high accuracy is required. However, this relative error may be acceptable if the analysis is part of the design process, where timely results with an acceptable amount of error are more important than high accuracy at the expense of timeliness. (Abstract shortened by UMI.)
4

Semantic methods for intelligent distributed design environments

Witherell, Paul W 01 January 2009 (has links)
Continuous advancements in technology have led to increasingly comprehensive and distributed product development processes in pursuit of improved products at reduced cost. Information associated with these products is ever changing, and structured frameworks have become integral to managing such fluid information. Ontologies and the Semantic Web have emerged as key alternatives for capturing product knowledge in a manner that is both human-readable and computable. The primary focus of this research is to characterize the relationships formed within methodically developed distributed design knowledge frameworks, ultimately to provide pervasive, real-time awareness in distributed design processes. Utilizing formal logics in the form of the Semantic Web’s OWL and SWRL, causal relationships are expressed to guide and facilitate knowledge acquisition as well as to identify contradictions within a knowledge base. To improve efficiency during both the development and operational phases of these “intelligent” frameworks, a semantic relatedness algorithm is designed specifically to identify and rank underlying relationships within product development processes. After reviewing several semantic relatedness measures, three techniques, including a novel meronomic technique, are combined to create AIERO, the Algorithm for Identifying Engineering Relationships in Ontologies. In determining its applicability and accuracy, AIERO was applied to three separate, independently developed ontologies. The results indicate that AIERO is capable of consistently returning relatedness values one would intuitively expect. To assess the effectiveness of AIERO in exposing underlying causal relationships across product development platforms, a case study involving the development of an industry-inspired printed circuit board (PCB) is presented. After instantiating the PCB knowledge base and developing an initial set of rules, FIDOE, the Framework for Intelligent Distributed Ontologies in Engineering, was employed to identify additional causal relationships through extensional relatedness measurements. In a concluding PCB redesign, the resulting “intelligent” framework demonstrates its ability to pass values between instances, identify inconsistencies among instantiated knowledge, and identify conflicting values within product development frameworks. The results highlight how the introduced semantic methods can enhance the knowledge acquisition, knowledge management, and knowledge validation capabilities of traditional knowledge bases.
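As a generic illustration of what a taxonomy-based semantic relatedness measure computes (this is the standard Wu-Palmer measure over a hypothetical product taxonomy, not AIERO or its meronomic technique), consider:

```python
def wu_palmer(concept_a, concept_b, parent):
    """Taxonomy-based relatedness between two ontology classes, in (0, 1].

    Wu-Palmer: 2 * depth(LCS) / (depth(a) + depth(b)), where LCS is the
    lowest common subsumer. A generic stand-in for an engineering
    relatedness measure, not the thesis's AIERO algorithm.
    """
    def ancestors(c):
        chain = [c]
        while c in parent:
            c = parent[c]
            chain.append(c)
        return chain

    path_a, path_b = ancestors(concept_a), ancestors(concept_b)
    lcs = next(c for c in path_a if c in path_b)   # lowest common subsumer
    depth = lambda c: len(ancestors(c))            # root has depth 1
    return 2.0 * depth(lcs) / (depth(concept_a) + depth(concept_b))

# Hypothetical product-development taxonomy (child -> parent).
parent = {
    "PCB": "ElectronicComponent", "Resistor": "ElectronicComponent",
    "ElectronicComponent": "Component", "Fastener": "Component",
    "Component": "Artifact",
}
print(wu_palmer("PCB", "Resistor", parent))   # 0.75: closely related
print(wu_palmer("PCB", "Fastener", parent))   # 0.50: more distant
```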
5

Acceleration of CFD and data analysis using graphics processors

Khajeh-Saeed, Ali 01 January 2011 (has links)
Graphics processing units function well as high-performance devices for scientific computing. Their non-standard processor architecture and high memory bandwidth allow graphics processing units (GPUs) to provide some of the best performance in terms of FLOPS per dollar. Recently, these capabilities became accessible for general-purpose computation through the CUDA programming environment on NVIDIA GPUs and the ATI Stream Computing environment on ATI GPUs. Many applications in computational science are constrained by memory access speeds and can be accelerated significantly by using GPUs as the compute engine. Using graphics processing units as a compute engine gives the personal desktop computer a processing capacity that competes with supercomputers. Graphics processing units represent an energy-efficient architecture for high-performance computing in flow simulations and many other fields. This document reviews the graphics processing unit, its features, and its limitations.
6

A semi-analytical model of turbulent jets injected into a cross-flow

Kosanovic, Dragoljub B 01 January 1996 (has links)
This thesis describes the mixing of a single jet discharged into a free stream, or a row of jets discharged normally into a confined cross flow. Flow configurations of this type are relevant to the primary and dilution zones of gas turbine combustors, and a better understanding of this process can support more effective design of the dilution air process in gas turbine combustion systems. A finite element numerical procedure is used to solve the governing partial differential equations for velocity, temperature, pressure, turbulent kinetic energy, and energy dissipation rate. The accuracy of the results, obtained using the k-ε turbulence model, is discussed and compared with experimental data. The results show good overall agreement with the existing experimental data; the remaining quantitative differences are due to both numerical diffusion and turbulence model error. By applying entrainment theory, together with simple assumptions about self-similarity on a section across the jet, a general asymptotic form of the jet temperature centerline trajectory is obtained. The solution has the same form as that predicted by the integral analysis of a jet in a cross flow. Using results from the numerical calculations, the entrainment coefficient is calculated. The entrainment coefficient is then used to determine the asymptotic form of the temperature centerline trajectory, which is compared with the calculations. Results are presented for jet-to-main-stream velocity ratios of 2.3 and 3. The three main-stream-to-jet temperature differences used for each geometry are 100 °C, 500 °C, and 1000 °C. In the case of confined flow, channel heights of four and eight jet diameters, and spacing between adjacent jets of four jet diameters, are used. Better agreement between the calculations and analytical predictions is obtained for lower temperature differences and for a free jet in a cross flow. The results for the confined jet show that the z ~ x^(1/3) law predicts the jet temperature centerline trajectory much more accurately over the whole flow field. The results also suggest that the geometric characteristics of the problem play a significant role and need to be included in the development of the trajectory expressions.
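For reference, the asymptotic power law cited above is commonly written in normalized form in the jet-in-crossflow literature; the abstract does not give the thesis's normalization or constant, so the expression below should be read as the generic form only:

$$ \frac{z_c}{r\,d} = A\left(\frac{x}{r\,d}\right)^{1/3}, \qquad r = \frac{U_{\mathrm{jet}}}{U_{\infty}}, $$

where z_c is the temperature-centerline penetration, d the jet diameter, r the jet-to-main-stream velocity ratio, and A a constant related to the entrainment coefficient.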
7

Designing mechanical components with features: Representing the form and intent of in-progress designs for automated modification and evaluation

Nielsen, Eric Hilton 01 January 1991 (has links)
A feature-based mechanical design system for thin-walled components has been developed and implemented in an experimental computer program: a Design-with-Features System (DWFS). Features in this system are defined as form plus intent, and are used for entry, modification, and evaluation of the design. To create this system, advances were required in the representations of form and intent, as well as in the ability to propagate modifications to the geometry of the design. This research has advanced these three areas and coupled them together to create the DWFS. The key results of this research are several new and promising ideas: the basis for a more intelligent, computer-based mechanical design process; an abstract representation of form based on the "virtual boundary" concept, which covers a broad domain, is manageable, and supports knowledge-based evaluations of the form; a representation of geometric intent which flexibly documents the level of commitment and the limits on the design as determined by the designer or another source, such as a manufacturing engineer or an automated design evaluation program; and a method which automates the incorporation of geometry modifications consistent with intent, whether over-, under-, or fully constrained. Each of these developments can stand alone with separable conclusions, and each opens new avenues of investigation. Together they form the core of this design system. The important characteristics of the DWFS are that it composes the form of the component at the configuration level; represents the geometric intent of the design, and thus its level of commitment; provides automatic, interactive (i.e., fast) propagation of geometry modifications based on intent; supports user-defined features while limiting designers in some other ways; supports interactive, knowledge-based (i.e., feature-based) evaluations of designs, including user-defined feature forms; and encourages a least-commitment design process. These characteristics are all implemented in a manageable, extensible, and computationally efficient way. The DWFS is shown to be a viable candidate as the basis of future CAD/CAE systems. While the DWFS may fit with current design processes, its use is expected to promote a different design process that produces a more manufacturable design more quickly.
8

Conceptual design of mechanical systems: A representation and computational model

Welch, Richard Vann 01 January 1992 (has links)
This dissertation presents a computational model for the conceptual stage of mechanical design. Mechanical design is the process whereby abstract specifications for a product, such as a transmission, are iteratively refined until a set of physical plans, often in the form of blueprints, has been developed. The conceptual stage of design, the first step in this process, consists of transforming the required functionality of an artifact into a preliminary arrangement of artifacts to meet that functionality. Conceptual design is a critical step in the product development cycle that is still poorly understood. By the end of conceptual design, most of the project costs have been committed. The success or failure of a product is often determined by its conceptual design: no amount of optimization or manufacturing finesse can overcome an inadequate or poorly developed concept. Existing engineering tools, such as CAD and CAE software, aid only the detailed geometric design tasks. The practical need to develop good conceptual design methods and tools is therefore clear. The aim of this research is to explore a solution method for a limited class of conceptual design problems and to provide a theoretical basis for future engineering tools. Function describes the external requirements of a system, what it must do, while behavior describes how those functions are met. Behavior provides an intermediate step between function and form, allowing a variety of solution principles to be explored before considering particular components. The hypothesis underlying this dissertation is that behavioral reasoning is an important process in solving conceptual design problems. This allows conceptual design to be decomposed into two steps: (1) developing behavior specifications to meet the functionality, based on a variety of physical principles and phenomena; and (2) developing configurations of components to meet the required behavior. The purpose of this research is to develop a computational model of conceptual design, and the required representations, based on this two-step process. The hypothesis is validated by the model's ability to generate solutions to a class of conceptual design problems.
9

Simulation, Control Design, and Experiments on Single and Double Inverted Pendulum Systems

Jacobs, Gregory 23 February 2016 (has links)
The discipline of control engineering has been applied for thousands of years. As long as humans have needed a system to vary automatically, devices, electronics, and algorithms have been designed to attain system control and stability. This study implements theory developed by mathematicians such as Henri Poincaré, Aleksandr Lyapunov, Rudolf E. Kálmán, and many others in an attempt to stabilize an unstable system: a cart and inverted pendulum. In order to stabilize the inverted pendulum system, control designs drawing on both classical and modern approaches will be explored to design effective PID and LQR controllers. Furthermore, an adaptive controller will also be designed for a one-degree-of-freedom unstable system. For accurate control design, linear and nonlinear system identification techniques will be used to obtain mathematical dynamic system models. Multiple tuning techniques will be utilized to achieve the most stable system possible. A microcontroller board (Arduino) will be used in conjunction with a computer for data communication and digital control algorithms. The use of an Arduino will require the design and implementation of digital control systems, digital tuning techniques, and digital filtering. If successful, the implemented theory will result in the stabilization of a multiple-degree-of-freedom system with chaotic potential.
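A minimal sketch of the LQR portion of such a design, using a frictionless, point-mass linearization of the cart and inverted pendulum with made-up parameters (not the thesis's identified model, tuning, or Arduino implementation), might look like:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized cart-pole about the upright equilibrium (frictionless,
# point-mass pendulum). M, m, l are hypothetical values, not an
# identified rig. State vector: [x, x_dot, theta, theta_dot].
M, m, l, g = 0.5, 0.2, 0.3, 9.81
A = np.array([[0, 1, 0,                     0],
              [0, 0, -m * g / M,            0],
              [0, 0, 0,                     1],
              [0, 0, (M + m) * g / (M * l), 0]])
B = np.array([[0.0], [1.0 / M], [0.0], [-1.0 / (M * l)]])

# LQR weights: penalize cart position and pendulum angle most heavily.
Q = np.diag([10.0, 1.0, 100.0, 1.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # control law: u = -K @ state
print("LQR gain K:", K)

# Sanity check: the closed-loop matrix A - B K should be Hurwitz
# (all eigenvalues in the left half-plane).
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```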
10

Computer vision for dual spacecraft proximity operations -- A feasibility study

Stich, Melanie Katherine 04 December 2015 (has links)
A computer-vision-based navigation feasibility study consisting of two navigation algorithms is presented to determine whether computer vision can be used to safely navigate a small, semi-autonomous inspection satellite in proximity to the International Space Station. Using stereoscopic image sensors and computer vision, the relative attitude determination and relative distance determination algorithms estimate the inspection satellite's position and attitude relative to its host spacecraft. An algorithm for calibrating the stereo camera system is presented, and this calibration method is discussed. The relative navigation algorithms are tested in NASA Johnson Space Center's simulation software, Engineering DOUG Graphics for Exploration (EDGE), where DOUG stands for Dynamic On-board Ubiquitous Graphics, using a rendered model of the International Space Station as the host spacecraft. Both vision-based algorithms produced successful results, and recommended future work is discussed.
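As background for the relative distance determination step, a rectified stereo pair relates disparity to range through Z = f·B/d; the sketch below applies that textbook relation with hypothetical calibration values and is not the thesis's algorithm.

```python
import numpy as np

def stereo_range(disparity_px, focal_px, baseline_m):
    """Range from a rectified stereo pair: Z = f * B / d.

    focal_px is the focal length in pixels, baseline_m the camera
    separation, and disparity_px the horizontal pixel shift of the same
    feature between the left and right images. This is the standard
    relation underlying stereo distance determination, not the thesis's
    specific implementation.
    """
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# Hypothetical calibrated parameters and matched-feature disparities.
focal_px, baseline_m = 800.0, 0.12
disparities = np.array([40.0, 16.0, 8.0])                # pixels
print(stereo_range(disparities, focal_px, baseline_m))   # ranges: 2.4, 6.0, 12.0 m
```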
