91

Design and implementation of a system for integrating material and process selection in automated manufacturing

Chen, Hsueh-Jen 21 April 1992 (has links)
Today's manufacturing environment is characterized by competition and continuous change in product and process requirements. The concept of "design for manufacturability" integrates product specifications with manufacturing capabilities by considering the design and manufacturing phases as an integrated system, evaluating the combined system during the design phase of a product, and adjusting the design for maximum efficiency and production economics. This research focuses on one aspect of design for manufacturability, that of process technology evaluation for a specified product design. The objective of the system developed in this study is to evaluate technology alternatives for manufacturing a specified part design and to identify the best combination of product-process characteristics that would minimize production costs within the constraints set by the product's functional requirements and available processing technology. The research objectives are accomplished by developing a simulation-based analysis system. The user inputs product specifications through structural screens. The system maintains data bases of work and tool materials and machining operations. Based on user input, the system extracts the appropriate information from these data bases and analyzes the production system in terms of production economics and other operational measures such as throughput times and work-in-process inventories. Sensitivity analysis may then be performed to explore tradeoffs in design and production parameters. The system is completely integrated, and a user with no prior experience of either simulation or data base technology can use the system effectively. / Graduation date: 1992
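The kind of evaluation the system performs can be illustrated with a small, self-contained sketch: candidate work-material/process combinations are screened against the part's functional requirements and the cheapest feasible combination is selected. All materials, processes, costs, and constraint values below are hypothetical placeholders, not data from the thesis.

```python
# Minimal sketch of process-technology evaluation: enumerate hypothetical
# material/process combinations and keep the cheapest one that satisfies
# the part's functional constraints. All data below are illustrative.

candidates = [
    # (work material, process, unit cost [$], achievable tolerance [mm], hardness [HRC])
    ("1045 steel",    "turning",  4.20, 0.05, 25),
    ("1045 steel",    "grinding", 6.10, 0.01, 25),
    ("6061 aluminum", "milling",  3.50, 0.03, 10),
    ("Ti-6Al-4V",     "milling",  9.80, 0.03, 35),
]

# Functional requirements for the part (hypothetical)
required_tolerance = 0.03   # mm, process must achieve this or better
required_hardness  = 20     # HRC, material must meet or exceed this

feasible = [c for c in candidates
            if c[3] <= required_tolerance and c[4] >= required_hardness]

best = min(feasible, key=lambda c: c[2])
print("Best material/process combination:", best[0], "+", best[1],
      "at unit cost $%.2f" % best[2])
```

In the actual system this selection is driven by the work/tool material and machining databases and coupled with simulation of throughput times and work-in-process inventories, rather than a fixed table.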
92

A profile of HOV lane vehicle characteristics on I-85 prior to HOV-to-HOT conversion

Smith, Katie S. 16 November 2011 (has links)
The conversion of high-occupancy vehicle (HOV) lanes to high-occupancy toll (HOT) lanes is currently being implemented in metro Atlanta on a demonstration basis and is under consideration for more widespread adoption throughout the metro region. Further conversion of HOV lanes to HOT lanes is a major policy decision that depends on knowledge of the likely impacts, including the equity of the new HOT lane. Rather than estimating these impacts using modeling or surveys, this study collects revealed preference data in the form of observed vehicle license plate data and vehicle occupancy data from users of the HOV corridor. Building on a methodology created in Spring 2011, researchers created a new methodology for matching license plate data to vehicle occupancy data that required extensive post-processing of the data. The new methodology also presented an opportunity to take an in-depth look at errors in both occupancy and license plate data (in terms of data collection efforts, processing, and the vehicle registration database). Characteristics of individual vehicles were determined from vehicle registration records associated with the license plate data collected during AM and PM peak periods immediately prior to the HOV lanes' conversion to HOT lanes. More than 70,000 individual vehicle license plates were collected for analysis, and over 3,500 records were matched to occupancy values. Analysis of these data has shown that government and commercial vehicles were more prevalent in the HOV lane, while hybrid and alternative fuel vehicles were much less common in either lane than expected. Vehicle occupancy data from the first four quarters of data collection were used to create the distribution of occupancy on the HOV and general purpose lanes, and then the matched occupancy and license plate data were examined. A sensitivity analysis of the occupancy data established that the current use of uncertain occupancy values is acceptable and that bus and vanpool occupancy should be considered when determining the average occupancy of all vehicles on the HOV lane. Using a bootstrap analysis, vehicle values were compared to vehicle occupancy values, and the results showed no correlation between vehicle value and vehicle occupancy. A conclusions section suggests possible impacts of the findings on policy decisions as Georgia considers expanding the HOT network. Further research using these data, and additional data that will be collected after the HOT lane opens, will include emissions modeling and a study of changes in vehicle characteristics associated with the HOT lane conversion.
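The bootstrap comparison of vehicle value and occupancy mentioned above can be sketched as follows, using synthetic stand-in data rather than the observed plate and occupancy records; the resampling logic is the generic bootstrap procedure, not necessarily the exact analysis performed in the thesis.

```python
# Minimal sketch of a bootstrap test for correlation between vehicle value
# and vehicle occupancy; the data here are synthetic stand-ins for the
# matched plate/occupancy records described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 500
vehicle_value = rng.lognormal(mean=9.8, sigma=0.5, size=n)          # dollars (synthetic)
occupancy = rng.choice([1, 2, 3], size=n, p=[0.55, 0.35, 0.10])     # persons (synthetic)

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

# Bootstrap the correlation coefficient by resampling vehicle records with replacement.
boot_r = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot_r.append(pearson_r(vehicle_value[idx], occupancy[idx]))
boot_r = np.array(boot_r)

lo, hi = np.percentile(boot_r, [2.5, 97.5])
print(f"observed r = {pearson_r(vehicle_value, occupancy):.3f}")
print(f"95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
print("no evidence of correlation" if lo <= 0.0 <= hi else "correlation detected")
```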
93

Material design using surrogate optimization algorithm

Khadke, Kunal R. 28 February 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Nanocomposite ceramics have been widely studied in order to tailor desired properties at high temperatures. Methodologies for material design are still under development. While finite element modeling (FEM) provides significant insight into material behavior, few design researchers have addressed the design paradox that accompanies this rapid design space expansion. A surrogate optimization model management framework has been proposed to make this design process tractable. In the surrogate optimization material design tool, the analysis cost is reduced by performing simulations on the surrogate model instead of the high-fidelity finite element model. The methodology is applied to find the optimal number of silicon carbide (SiC) particles in a silicon nitride (Si3N4) composite with maximum fracture energy [2]. Along with a deterministic optimization algorithm, model uncertainties have also been considered through a robust design optimization (RDO) method, ensuring a design with minimum sensitivity to changes in the parameters. Applied to nanocomposite design, these methodologies have a significant impact on reducing cost and design cycle time.
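A minimal sketch of surrogate-model-managed optimization is given below, assuming a one-dimensional design variable (an SiC volume fraction stands in for the particle count) and an inexpensive analytic function in place of the high-fidelity finite element model; the polynomial surrogate and update loop are illustrative, not the framework used in the thesis.

```python
# Minimal sketch of surrogate-model-managed optimization: an "expensive"
# objective (stand-in for the finite element model) is sampled sparsely,
# a cheap polynomial surrogate is fit to those samples, and new candidate
# designs are taken from the surrogate's optimum. Objective and bounds
# are hypothetical, not the thesis's actual Si3N4/SiC model.
import numpy as np

def expensive_fracture_energy(x):
    """Stand-in for a high-fidelity FEM evaluation (x = SiC volume fraction)."""
    return -(x - 0.35) ** 2 + 0.02 * np.sin(25 * x)   # quantity to be maximized

bounds = (0.05, 0.60)
X = list(np.linspace(*bounds, 5))          # initial designs
Y = [expensive_fracture_energy(x) for x in X]

for it in range(10):
    coeffs = np.polyfit(X, Y, deg=3)       # cheap cubic surrogate of the response
    grid = np.linspace(*bounds, 1001)      # optimize the surrogate, not the FEM
    x_new = grid[np.argmax(np.polyval(coeffs, grid))]
    X.append(x_new)
    Y.append(expensive_fracture_energy(x_new))   # one true evaluation per iteration

best = X[int(np.argmax(Y))]
print(f"best SiC volume fraction ~ {best:.3f}, fracture energy {max(Y):.4f}")
```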
94

A one-dimensional fuel burnup model of a PWR

Gilliatt, Douglas Lee January 1982 (has links)
A fuel burnup model of a Pressurized Water Reactor (PWR) was developed based on one-group diffusion theory and simple thermal cross sections. A computer program which simulates the depletion of the core of a PWR was written based on this model. The basic idea was to develop a fuel depletion program which could be readily understood by nuclear engineering students; thus, accuracy was sacrificed for the sake of simplicity. The model was based upon a typical PWR with three concentric radial regions of differing fuel enrichment. Each of the regions was homogenized, and the concentrations of the isotopes in each region were considered constant over a time interval. The isotopes considered were U-235, Pu-239, U-238, Xe-135, I-135, Sm-149, Pm-149 and the lumped burnable poison isotope. The flux was approximated by the sum of two trigonometric functions. The magnitude and shape of the flux were determined by holding power constant, constraining the system to be critical, and varying the soluble boron concentration to find the flattest possible positive flux. A flux magnitude computed in this manner was compared to a similar flux magnitude given in a Final Safety Analysis Report. The concentrations of the isotopes were determined from the differential equations describing the rate of change of the concentrations. The behavior of the isotopes over core life was graphed and wherever possible compared to graphs from other sources. The concentrations calculated for U-235, U-238 and Pu-239 after 450 days were compared to the concentrations of the same isotopes calculated by a zero-dimensional three-group model. The percentage difference between the concentrations determined by the two models varied from about 69% for Pu-239 to 1% for U-238. / Master of Science
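The depletion step described above reduces to a set of coupled rate equations solved at constant flux over each time interval. The sketch below integrates a small subset of the isotope chain (U-235 burnout and I-135/Xe-135 buildup) with a forward-Euler scheme; the flux, cross sections, and yields are round textbook-style values chosen for illustration, not the data used in the thesis.

```python
# Minimal sketch of a constant-flux depletion step: U-235 burnout and
# I-135/Xe-135 fission-product buildup integrated with forward Euler over
# one burnup interval. Cross sections, yields, and the flux level are
# round illustrative numbers, not the thesis's data.
import numpy as np

phi      = 3.0e13          # one-group flux [n/cm^2/s] (illustrative)
sig_f_u5 = 580e-24         # U-235 thermal fission cross section [cm^2]
sig_a_u5 = 680e-24         # U-235 thermal absorption cross section [cm^2]
sig_a_xe = 2.6e6 * 1e-24   # Xe-135 absorption cross section [cm^2]
lam_i, lam_xe = 2.93e-5, 2.11e-5    # decay constants [1/s]
gamma_i, gamma_xe = 0.064, 0.002    # fission yields

N_u5, N_i, N_xe = 1.0e21, 0.0, 0.0  # initial number densities [atoms/cm^3]
dt, days = 3600.0, 30               # 1-hour steps over 30 days

for _ in range(int(days * 24)):
    fission_rate = sig_f_u5 * phi * N_u5
    dN_u5 = -sig_a_u5 * phi * N_u5
    dN_i  = gamma_i  * fission_rate - lam_i * N_i
    dN_xe = (gamma_xe * fission_rate + lam_i * N_i
             - lam_xe * N_xe - sig_a_xe * phi * N_xe)
    N_u5 += dN_u5 * dt
    N_i  += dN_i  * dt
    N_xe += dN_xe * dt

print(f"U-235: {N_u5:.3e}  I-135: {N_i:.3e}  Xe-135: {N_xe:.3e}  [atoms/cm^3]")
```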
95

Finite element analysis of subregions using a specified boundary stiffness/force method

Jara-Almonte, C. C. January 1985 (has links)
The accurate finite element analysis of subregions of large structures is difficult to carry out because of uncertainties about how the rest of the structure influences the boundary conditions and loadings of the subregion model. This dissertation describes the theoretical development and computer implementation of a new approach to this problem of modeling subregions. This method, the specified boundary stiffness/force (SBSF) method, results in accurate displacement and stress solutions as the boundary loading and the interaction between the stiffness of the subregion and the rest of the structure are taken into account. This method is computationally efficient because each time that the subregion model is analyzed, only the equations involving the degrees of freedom within the subregion model are solved. Numerical examples are presented which compare this method to some of the existing methods for subregion analysis on the basis of both accuracy of results and computational efficiency. The SBSF method is shown to be more accurate than another approximate method, the specified boundary displacement (SBD) method, and to require approximately the same number of computations for the solution. For one case, the average error in the results of the SBD method was +2.75% while for the SBSF method the average error was -0.3%. The comparisons between the SBSF method and the efficient and exact zooming methods demonstrate that the SBSF method is less accurate than these methods but is computationally more efficient. In one example, the error for the exact zooming method was -0.9% while for the SBSF method it was -3.7%. Computationally, the exact zooming method requires almost 185% more operations than the SBSF method. Similar results were obtained for the comparison of the efficient zooming method and the SBSF method. Another use of the SBSF method is in the analysis of design changes which are incorporated into the subregion model but not into the parent model. In one subregion model a circular hole was changed to an elliptical hole. The boundary forces and stiffnesses from the parent model with the circular hole were used in the analysis of the modified subregion model. The results of the analysis of the most refined mesh in this example had an error of only -0.52% when compared to the theoretical result for the modified geometry. The results of the research presented in this dissertation indicate that the SBSF method is better suited to the analysis of subregions than the other methods documented in the literature. The method is both accurate and computationally efficient as well as easy to use and implement. The SBSF method can also be extended to the accurate analysis of subregion models with design changes which are not incorporated into the parent model. / Ph. D.
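The general idea of augmenting a subregion model with boundary stiffness and force terms can be illustrated with a toy one-dimensional example; this is a simplified analogue under assumed spring properties, not the dissertation's SBSF formulation or its finite element implementation.

```python
# Toy illustration (not the dissertation's exact formulation) of analysing a
# subregion with specified boundary stiffnesses and forces: a chain of four
# equal springs, fixed at node 0 and loaded at node 4, is cut down to the
# subregion containing nodes 1-3. The removed structure is represented by a
# boundary spring at node 1 and an equivalent boundary force at node 3.
import numpy as np

k, P = 1000.0, 50.0           # spring stiffness [N/mm], tip load [N] (assumed)

# Subregion stiffness: springs 1-2 and 2-3 assembled over DOFs [u1, u2, u3]
K_sub = k * np.array([[ 1, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  1]], dtype=float)

# Boundary terms supplied from the parent model:
K_sub[0, 0] += k              # spring 0-1 with node 0 fixed -> stiffness k at node 1
F = np.array([0.0, 0.0, P])   # spring 3-4 with free loaded end -> force P at node 3

u = np.linalg.solve(K_sub, F)
print("subregion displacements u1..u3 [mm]:", u)
print("full-model reference          [mm]:", [P / k, 2 * P / k, 3 * P / k])
```

Because the boundary spring and force in this toy case are exact condensations of the removed structure, the subregion solution reproduces the full-model displacements; in practice the boundary terms come from an approximate parent model, which is the source of the small errors reported above.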
96

Vital sign monitoring and data fusion in haemodialysis

Borhani, Yasmina January 2013 (has links)
Intra-dialytic hypotension (IDH) is the most common complication in haemodialysis (HD) treatment and has been linked with increased mortality in HD patients. Despite various approaches towards understanding the underlying physiological mechanisms giving rise to IDH, the causes of IDH are poorly understood. Heart Rate Variability (HRV) has previously been suggested as a predictive measure of IDH. In contrast to conventional spectral HRV measures in which the frequency bands are defined by fixed limits, a new spectral measure of HRV is introduced in which the breathing rate is used to identify and measure the physiologically-relevant peaks of the frequency spectrum. The ratio of peaks leading up to the IDH event was assessed as a possible measure for IDH prediction. Changes in the proposed measure correlate well with the magnitude of abrupt changes in blood pressure in patients with autonomic dysfunction, but there is no such correlation in patients without autonomic dysfunction. At present, routine clinical vital sign monitoring beyond simple weight and blood pressure measurements at the start and end of each session has not established itself in clinical practice. To investigate the benefits of continuous vital sign monitoring in HD patients with regard to detecting and predicting IDH, different population-based and patient-specific models of normality were devised and tested on data from an observational study at the Oxford Renal Unit in which vital signs were recorded during HD sessions. Patient-specific models of normality performed better in distinguishing between IDH and non-IDH data, primarily due to the wide range of vital sign data included as part of the training data in the population-based models. Further, a patient-specific data fusion model was constructed using Parzen windows to estimate a probability density function from the training data consisting of vital signs from IDH-free sessions. Although the model was constructed using four vital sign inputs, novelty detection was found to be primarily driven by blood pressure decreases.
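The Parzen-window data fusion step can be sketched as follows: a kernel density estimate is trained on vital-sign vectors from IDH-free sessions and test samples falling in low-density regions are flagged as novel. The two features, the bandwidth, and the threshold below are illustrative assumptions, not the four-input model or tuning used in the thesis.

```python
# Minimal sketch of Parzen-window novelty detection on vital-sign vectors:
# a kernel density estimate is built from "normal" (IDH-free) training data,
# and test samples falling in low-density regions are flagged as novel.
# The data, features, and threshold choice here are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: [systolic BP (mmHg), heart rate (bpm)] from IDH-free sessions
train = np.column_stack([rng.normal(130, 10, 400), rng.normal(75, 8, 400)])

def parzen_log_density(x, data, h):
    """Gaussian Parzen-window estimate of log p(x) with bandwidth h."""
    d = data.shape[1]
    diff = (data - x) / h
    log_kernels = -0.5 * np.sum(diff**2, axis=1) - d * np.log(h * np.sqrt(2 * np.pi))
    m = log_kernels.max()                       # log-mean-exp over training points
    return m + np.log(np.mean(np.exp(log_kernels - m)))

h = 5.0                                         # bandwidth (would be tuned in practice)
train_scores = np.array([parzen_log_density(x, train, h) for x in train])
threshold = np.percentile(train_scores, 1)      # flag the lowest 1% of training density

test = np.array([[128.0, 78.0],                 # typical reading
                 [95.0, 110.0]])                # abrupt BP drop with tachycardia
for x in test:
    novel = parzen_log_density(x, train, h) < threshold
    print(x, "-> novel (possible IDH precursor)" if novel else "-> normal")
```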
97

Bond strength of concrete plugs embedded in tubular steel piles

Nezamian, Abolghasem, 1968- January 2003 (has links)
Abstract not available
98

Forecasting water resources variables using artificial neural networks

Bowden, G. J. (Gavin James) January 2003 (has links) (PDF)
"February 2003." Corrigenda for, inserted at back Includes bibliographical references (leaves 475-524 ) A methodology is formulated for the successful design and implementation of artificial neural networks (ANN) models for water resources applications. Attention is paid to each of the steps that should be followed in order to develop an optimal ANN model; including when ANNs should be used in preference to more conventional statistical models; dividing the available data into subsets for modelling purposes; deciding on a suitable data transformation; determination of significant model inputs; choice of network type and architecture; selection of an appropriate performance measure; training (optimisation) of the networks weights; and, deployment of the optimised ANN model in an operational environment. The developed methodology is successfully applied to two water resorces case studies; the forecasting of salinity in the River Murray at Murray Bridge, South Australia; and the the forecasting of cyanobacteria (Anabaena spp.) in the River Murray at Morgan, South Australia.
99

Traffic engineering for multi-homed mobile networks.

Chung, Albert Yuen Tai, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
This research is motivated by the recent developments in the Internet Engineering Task Force (IETF) to support seamless integration of moving networks deployed in vehicles to the global Internet. The effort, known as Network Mobility (NEMO), paves the way to support high-speed Internet access in mass transit systems (e.g., trains, buses, ferries, and planes) through the use of on-board mobile routers embedded in the vehicle. One of the critical research challenges of this vision is to achieve high-speed and reliable back-haul connectivity between the mobile router and the rest of the Internet. The problem is particularly challenging due to the fact that a mobile router must rely on wireless links with limited bandwidth and unpredictable quality variations as the vehicle moves around. In this thesis, the multi-homing concept is applied to approach the problem. With multi-homing, the mobile router has more than one connection to the Internet. This is achieved by connecting the mobile router to a diverse array of wireless access technologies (e.g., GPRS, CDMA, 802.11, and 802.16) and/or a multiplicity of wireless service providers. While the aggregation helps address the bandwidth problem, the quality variation problem can be mitigated by employing advanced traffic engineering techniques that dynamically control inbound and outbound traffic over multiple connections. More specifically, the thesis investigates traffic engineering solutions for mobile networks that can effectively address the performance objectives, e.g., maximizing profit for the mobile network operator, guaranteeing quality of service for the users, and maintaining fair access to the back-haul bandwidth. Traffic engineering solutions with three different levels of control have been investigated. First, it is shown, using detailed computer simulation of popular applications and networking protocols (e.g., File Transfer Protocol and Transmission Control Protocol), that packet-level traffic engineering, which decides which Internet connection to use for each and every packet, leads to poor system throughput. The main problem with packet-based traffic engineering stems from the fact that in a mobile environment, where link bandwidths and delay can vary significantly, packets using different connections may experience different delays, causing unexpected arrivals at destinations. Second, a maximum-utility flow-level traffic engineering scheme has been proposed that aims to maximize a utility function that accounts for bandwidth utilization on the one hand and fairness on the other. The proposed solution is compared against previously proposed flow-level traffic engineering schemes and shown to have better performance in terms of throughput and fairness. The third traffic engineering proposal addresses the issue of maximizing the operator's profit when different Internet connections have different charging rates, and guaranteeing per-user bandwidth through admission control. Finally, a new signaling protocol is designed to allow the mobile router to control its inbound traffic.
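The flow-level, utility-based assignment idea can be illustrated with a small greedy sketch in which each new flow is placed on whichever connection yields the largest gain in total log-utility under equal sharing; the connection capacities are assumed values and the heuristic is not the optimisation formulation proposed in the thesis.

```python
# Minimal sketch of flow-level traffic engineering with a utility objective:
# each new flow is assigned to whichever Internet connection gives the largest
# increase in total log-utility, assuming flows on a connection share its
# capacity equally. Illustrative greedy heuristic with assumed capacities.
import math

capacities = {"HSDPA": 3.6, "802.11": 11.0, "802.16": 20.0}   # Mbit/s (illustrative)
flows_on = {c: 0 for c in capacities}

def total_utility(assignment):
    """Sum over flows of log(per-flow rate), with equal sharing per connection."""
    u = 0.0
    for c, n in assignment.items():
        if n > 0:
            u += n * math.log(capacities[c] / n)
    return u

def assign_flow():
    """Place one more flow on the connection that maximises the utility gain."""
    best_c, best_u = None, -math.inf
    for c in capacities:
        trial = dict(flows_on); trial[c] += 1
        u = total_utility(trial)
        if u > best_u:
            best_c, best_u = c, u
    flows_on[best_c] += 1
    return best_c

for i in range(10):
    c = assign_flow()
    print(f"flow {i + 1:2d} -> {c:7s}  loads: {flows_on}")
```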
100

Forecasting water resources variables using artificial neural networks / by Gavin James Bowden.

Bowden, G. J. (Gavin James) January 2003 (has links)
"February 2003." / Corrigenda for, inserted at back / Includes bibliographical references (leaves 475-524 ) / xxx, 524 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / A methodology is formulated for the successful design and implementation of artificial neural networks (ANN) models for water resources applications. Attention is paid to each of the steps that should be followed in order to develop an optimal ANN model; including when ANNs should be used in preference to more conventional statistical models; dividing the available data into subsets for modelling purposes; deciding on a suitable data transformation; determination of significant model inputs; choice of network type and architecture; selection of an appropriate performance measure; training (optimisation) of the networks weights; and, deployment of the optimised ANN model in an operational environment. The developed methodology is successfully applied to two water resorces case studies; the forecasting of salinity in the River Murray at Murray Bridge, South Australia; and the the forecasting of cyanobacteria (Anabaena spp.) in the River Murray at Morgan, South Australia. / Thesis (Ph.D.)--University of Adelaide, School of Civil and Environmental Engineering, 2003
