121

Optimization of multi-scale decision-oriented dynamic systems and distributed computing

Yu, Lihua January 2004 (has links)
In this dissertation, a stochastic programming model is presented for multi-scale decision-oriented dynamic systems (DODS): discrete-time systems in which decisions are made according to alternative discrete-time sequences that depend upon the organizational layer within a hierarchical system. A multi-scale DODS consists of multiple modules, each of which makes decisions on a time-scale that matches its specific task. For instance, in a large production planning system, the aggregate planning module may make decisions on a quarterly basis, whereas weekly and daily planning may use short-term scheduling models. In order to avoid mismatches between these schedules, it is important to integrate the short-term and long-term models. In studying models that accommodate multiple time-scales, one of the challenges that must be overcome is the incorporation of uncertainty. For instance, aggregate production planning is carried out several months prior to obtaining accurate demand estimates. In order to make decisions that are cognizant of uncertainty, we propose a stochastic programming model for the multi-scale DODS. Furthermore, we propose a modular algorithm motivated by the column generation decomposition strategy, and we demonstrate its convergence. Our experimental results demonstrate that the modular algorithm is robust in solving large-scale multi-scale DODS problems under uncertainty. Another main issue addressed in this dissertation is the application of the above modeling method and solution technique to decision aids for scheduling and hedging in a deregulated electricity market (DASH). The DASH model for power portfolio optimization provides a tool which helps decision-makers coordinate production decisions with opportunities in the wholesale power market. The methodology is based on a multi-scale DODS. This model selects portfolio positions for electricity and fuel forwards while remaining cognizant of spot market prices and generation costs. Our experiments demonstrate that the DASH model provides significant advantages over a commonly used fixed-mix policy. Finally, a multi-level distributed computing system is designed to implement the nested column generation decomposition approach for multi-scale decision-oriented dynamic systems. The implementation of a three-level distributed computing system is discussed in detail. The computational experiments are based on a large-scale real-world problem arising in power portfolio optimization. The deterministic equivalent LP for this instance with 200 scenarios has over one million constraints. Our computational results illustrate the effectiveness of this distributed computing approach.
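A toy illustration of the two-time-scale coupling described above (my own sketch using scipy, not the dissertation's model; the costs, demands, and probabilities are invented): a slow aggregate decision is fixed before demand is known, and fast recourse decisions correct it scenario by scenario, solved here through the deterministic equivalent LP.

```python
# Minimal two-stage stochastic LP sketch: aggregate decision x is made before demand
# is revealed; recourse y_s covers the shortfall in each demand scenario s.
import numpy as np
from scipy.optimize import linprog

p = np.array([0.3, 0.5, 0.2])        # scenario probabilities (illustrative)
d = np.array([80.0, 100.0, 130.0])   # scenario demands (illustrative)
c_agg, c_rec = 1.0, 2.5              # aggregate vs. recourse unit costs
n_s = len(p)

# Decision vector: [x, y_1, ..., y_S]; minimize c_agg*x + sum_s p_s*c_rec*y_s
c = np.concatenate(([c_agg], p * c_rec))

# Coupling constraints x + y_s >= d_s, written as -(x + y_s) <= -d_s
A_ub = np.zeros((n_s, 1 + n_s))
A_ub[:, 0] = -1.0
A_ub[np.arange(n_s), 1 + np.arange(n_s)] = -1.0
b_ub = -d

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n_s))
x_opt, y_opt = res.x[0], res.x[1:]
print(f"aggregate decision x = {x_opt:.1f}, recourse y_s = {np.round(y_opt, 1)}")
```

In the dissertation the deterministic equivalent is far too large to solve directly (over a million constraints), which is what motivates the column-generation-based modular algorithm and the distributed implementation.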
122

Variation monitoring, diagnosis and control for complex solar cell manufacturing processes

Guo, Huairui January 2004 (has links)
Interest in photovoltaic products has expanded dramatically, but wide-scale commercial use remains limited due to the high manufacturing cost and insufficient efficiency of solar products. Therefore, it is critical to develop effective process monitoring, diagnosis, and control methods for quality and productivity improvement. This dissertation is motivated by this timely need to develop effective process control methods for variation reduction in thin film solar cell manufacturing processes. Three fundamental research issues related to process monitoring, diagnosis, and control have been studied accordingly. The major research activities and the corresponding contributions are summarized as follows: (1) Online SPC is integrated with generalized predictive control (GPC) for the first time for effective process monitoring and control. This research emphasizes the importance of developing supervisory strategies, in which the controller parameters are adaptively changed based on the detection of different process change patterns using SPC techniques. It has been shown that the integration of SPC and GPC provides great potential for the development of effective controllers, especially for a complex manufacturing process with a large time-varying delay and different process change patterns. (2) A generic hierarchical ANOVA method is developed for systematic variation decomposition and diagnosis in batch manufacturing processes. Different from SPC, which focuses on variation reduction due to assignable causes, this research aims to reduce inherent normal process variation by assessing and diagnosing inherent variance components from production data. A systematic method of using a full factor decomposition model to determine an appropriate nested model structure is investigated for the first time in this dissertation. (3) A multiscale statistical process monitoring method is proposed for the first time to simultaneously detect mean shifts and variance changes in autocorrelated data. Three wavelet-based monitoring charts are developed, each targeting one of process variance change, measurement error variance change, and process mean shift, so that all three can be monitored simultaneously. Although the solar cell manufacturing process is used as an example in the dissertation, the developed methodologies are generic for process monitoring, diagnosis, and control in process variation reduction, and are expected to be applicable to various other semiconductor and chemical manufacturing processes.
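A minimal sketch of the wavelet-based monitoring idea (my own toy example, not the dissertation's charts; it assumes the PyWavelets package and invented AR(1) data): approximation coefficients of the autocorrelated series carry mean information, the finest detail coefficients carry variance information, and simple 3-sigma limits estimated from in-control data flag the shifted, noisier Phase II data.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

rng = np.random.default_rng(0)

def ar1(n, phi=0.7, shift=0.0, scale=1.0):
    """AR(1) series; optional mean shift and noise-scale change in the second half."""
    x = np.zeros(n)
    for t in range(1, n):
        sd = scale if t >= n // 2 else 1.0
        x[t] = phi * x[t - 1] + rng.normal(scale=sd)
    x[n // 2:] += shift
    return x

def charts(x, level=3):
    cA, *cD = pywt.wavedec(x, "haar", level=level)
    return cA, cD[-1]          # approximation (mean info) and finest detail (variance info)

cA0, cD0 = charts(ar1(512))                           # Phase I: in-control reference
mean_lim = cA0.mean() + 3 * cA0.std() * np.array([-1, 1])
var_lim = 3 * cD0.std()                               # |detail| limit as a crude variance chart

cA1, cD1 = charts(ar1(512, shift=3.0, scale=2.0))     # Phase II: shifted, noisier process
print("mean-chart alarms:    ", np.sum((cA1 < mean_lim[0]) | (cA1 > mean_lim[1])))
print("variance-chart alarms:", np.sum(np.abs(cD1) > var_lim))
```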
123

Fine-grain parallelism and run-time decision-making

Lowenthal, David K., 1968- January 1996 (has links)
While parallel programming is needed to solve large-scale scientific applications, it is more difficult than sequential programming. Programming parallel machines requires specifying what can execute concurrently, when and how processes communicate, and how data is placed in the memories of the processors. The challenge is to address these issues simply, portably, and efficiently. This dissertation presents the Filaments package, which provides fine-grain parallelism and a shared-memory programming model and makes extensive use of run-time decisions. Fine-grain parallelism and a shared-memory programming model simplify parallel programs by allowing the programmer or compiler to concentrate on the application rather than the architecture of the target machine. This dissertation shows that this simpler programming model can be implemented with low overhead. Determining the best data placement at run time frees the programmer or compiler from this task and allows the placement to adapt to the particular application. The dissertation discusses the implementation of Adapt, a run-time data placement system, and compares its performance to static placements on three classes of applications. Programs using Adapt outperform those using static data placements on adaptive applications such as particle simulation.
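A toy sketch of the run-time placement idea (my own illustration, not the Adapt implementation; the workload numbers are invented): per-row work is measured during one iteration, and contiguous row blocks of roughly equal measured cost are assigned to each processor for the next iteration.

```python
import numpy as np

def repartition(row_cost, n_procs):
    """Greedy contiguous partition: cut a block whenever its running cost reaches the target."""
    target = row_cost.sum() / n_procs
    blocks, start, acc = [], 0, 0.0
    for i, c in enumerate(row_cost):
        acc += c
        if acc >= target and len(blocks) < n_procs - 1:
            blocks.append((start, i + 1))
            start, acc = i + 1, 0.0
    blocks.append((start, len(row_cost)))
    return blocks

# Example: a particle-simulation-like workload where a few rows carry most of the work.
rng = np.random.default_rng(1)
cost = rng.exponential(scale=1.0, size=1000)      # measured per-row times (made up)
for p, (lo, hi) in enumerate(repartition(cost, n_procs=4)):
    print(f"proc {p}: rows [{lo}, {hi})  cost {cost[lo:hi].sum():.1f}")
```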
124

On System Engineering a Barter-Based Re-allocation of Space System Key Development Resources

Kosmann, William J. 21 May 2013 (has links)
NASA has had a decades-long problem with cost growth during the development of space science missions. Numerous agency-sponsored studies have produced average mission-level development cost growths ranging from 23 to 77%.

A new study of 26 historical NASA science instrument set developments that used expert judgment to re-allocate key development resources shows an average cost growth of 73.77%. Twice in history, during the Cassini and EOS-Terra science instrument developments, a barter-based mechanism has been used to re-allocate key development resources; the mean instrument set development cost growth was -1.55%. A bivariate inference on the means of these two distributions provides statistical evidence to support the claim that using a barter-based mechanism to re-allocate key instrument development resources will result in a lower expected cost growth than using the expert judgment approach.

Agent-based discrete event simulation is the natural way to model a trade environment. A NetLogo agent-based, barter-based simulation of science instrument development was created. The agent-based model was validated against the Cassini historical example, as the starting and ending instrument development conditions are available. The resulting validated simulation was used to perform 300 instrument development runs, using barter to re-allocate development resources. The mean cost growth was -3.365%. A bivariate inference on the means determined that additional significant statistical evidence exists to support the claim that barter-based resource re-allocation results in lower expected cost growth than the historical expert judgment approach.

Barter-based key development resource re-allocation should work on science spacecraft development as well as it has worked on science instrument development. A new study of 28 historical NASA science spacecraft developments shows an average cost growth of 46.04%. As barter-based key development resource re-allocation has never been tried in a spacecraft development, no historical results exist, and an inference-on-the-means test is not possible.

A simulation of barter-based resource re-allocation should therefore be developed. The NetLogo instrument development simulation should be modified to account for the differences among spacecraft development market participants. The resulting agent-based, barter-based spacecraft resource re-allocation simulation would then be used to determine whether significant statistical evidence exists to support the claim that barter-based resource re-allocation results in lower expected cost growth.
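A toy agent-based barter sketch in Python (purely illustrative: it is not the dissertation's NetLogo model, and all resource gaps below are invented): instrument-development agents repeatedly swap a resource they hold in surplus for one they lack until no mutually beneficial trade remains, shrinking the total unmet need.

```python
import random

random.seed(3)
RESOURCES = ("mass", "power", "budget")

def make_agent():
    # Gap = need minus holding per resource; positive = deficit, negative = surplus.
    return {r: random.gauss(0, 10) for r in RESOURCES}

agents = [make_agent() for _ in range(26)]

def total_deficit(agents):
    return sum(max(g, 0.0) for a in agents for g in a.values())

def try_trade(a, b):
    """Swap one resource a has in surplus and b needs for one b has in surplus and a needs."""
    for ra in RESOURCES:
        for rb in RESOURCES:
            if ra != rb and a[ra] < 0 < b[ra] and b[rb] < 0 < a[rb]:
                amt = min(-a[ra], b[ra], -b[rb], a[rb])
                a[ra] += amt; b[ra] -= amt      # a gives amt of ra to b
                b[rb] += amt; a[rb] -= amt      # b gives amt of rb to a
                return True
    return False

before = total_deficit(agents)
traded = True
while traded:                                   # each trade zeroes at least one gap, so this terminates
    traded = any(try_trade(a, b) for a in agents for b in agents if a is not b)
print(f"unmet need before barter: {before:.1f}, after: {total_deficit(agents):.1f}")
```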
125

Whole System Design and Evolutionary 21st Century American Buildings + Infrastructure

Franz, Anna Young 09 October 2013 (has links)
This study explores whole system design and evolutionary 21st century American buildings + infrastructure. The ideas and findings of this dissertation research, as presented at the Seventh International Conference on Design Principles and Practices in Chiba, Japan on March 6, 2013, are provided in a forthcoming publication by the authors (Franz, Sarkani, and Mazzuchi 2013).

Since the introduction of the theory of ecological design in the mid-1970s, whole system design, based on collaboration, research, new technologies and iterative value management, has been increasingly applied to drive sustainable and more innovative solutions (Franz 2011, 2012). While this systems engineering approach for achieving substantial environmental and economic benefits is more commonplace today, it is theorized that evolutionary buildings + infrastructure are achieved through an expanded model of whole system design, one combining art and science, and disciplined processes for the purpose of innovation and differentiation (Franz, Sarkani, and Mazzuchi 2013). This model, integrating whole system design (integrated design) with project management, systems engineering process models and radical innovation, drives design innovation, promotes change in the built environment and prompts new market opportunities for the Architect-Engineer and Construction industry (Franz, Sarkani, and Mazzuchi 2013).

Franz, Sarkani, and Mazzuchi (2013) note that understanding critical success factors for producing distinguished projects is key to sustaining architectural and engineering practice and the building industry. Through quantitative measurement and qualitative case study analyses, the study, using winning projects from Engineering News Record's (ENR) Best of the Best 2011 Project Awards (as announced on February 13, 2012 in ENR, The 2011 Best of the Best Projects), examines four questions: 1) what are critical success factors for producing evolutionary 21st century buildings + infrastructure? 2) does whole system design enable project success? 3) do systems engineering process models enhance whole system design? and 4) is radical innovation critical for producing evolutionary American buildings + infrastructure? (Franz, Sarkani, and Mazzuchi 2013)

The study indicates that significant evidence exists to support prior research for factors related to people, project activities, barriers and success (Germuenden and Lechler 1997), and that whole system design (Coley and Lemon 2008, 2009; Charnley, Lemon, and Evans 2011), as implemented through systems engineering process models (Bersson, Mazzuchi, and Sarkani 2012), and radical innovation (Norman and Verganti 2011) are additionally important factors. Case study information suggests that buildings + infrastructure evolve through design innovation, enhanced by an expanded model for whole system design aligning goals, vision, whole system design and outcomes (Franz, Sarkani, and Mazzuchi 2013). The study informs professionals and students about design innovation and effective project delivery strategies strengthened through systems engineering (Franz, Sarkani, and Mazzuchi 2013).

Keywords: Critical Success Factors, Whole System Design, Systems Engineering, Radical Innovation.
126

Optimal sensor placement for parameter identification

Green, Kary January 2007 (has links)
This thesis surveys methods for determining sensor locations which maximize the achievable accuracy in parameter estimation of differential equations. The approach considers the placement of sensors as an experimental design problem. The optimal sensor locations are found based on the so-called Fisher Information Matrix. After illustration of the general methodology, a particular algorithm is presented which finds optimal weights associated with a given set of potential sensor locations. Numerical results are provided to show that improvements in the accuracy of parameter estimates can be achieved using the ideas reviewed in this thesis.
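A minimal sketch of the weight-based approach (an illustrative D-optimality example with a hypothetical three-parameter response model, not the algorithm from the thesis): each candidate sensor location contributes a sensitivity vector f_i, and a multiplicative update drives the weights w_i toward maximizing the log determinant of the Fisher Information Matrix M(w) = Σ_i w_i f_i f_iᵀ.

```python
import numpy as np

locations = np.linspace(0.0, 1.0, 21)                        # candidate sensor positions
# Hypothetical sensitivities of the model output to 3 parameters at each location.
F = np.column_stack([np.ones_like(locations), locations, np.exp(-2 * locations)])
n, p = F.shape

w = np.full(n, 1.0 / n)                                      # start from uniform weights
for _ in range(500):
    M = F.T @ (w[:, None] * F)                               # Fisher Information Matrix
    d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)     # variance function f_i^T M^-1 f_i
    w *= d / p                                               # multiplicative update; weights stay normalized

support = locations[w > 1e-3]
print("near-optimal sensor locations:", np.round(support, 2),
      "weights:", np.round(w[w > 1e-3], 2))
```

The weights concentrate on a small number of support points, which is the typical outcome for D-optimal designs; in practice those points indicate where sensors should be placed.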
127

Robust model predictive control of stable and integrating linear systems

Ralhan, Sameer January 2000 (has links)
Model Predictive Control (MPC) has become one of the dominant methods of chemical process control in terms of successful industrial applications. A rich theory has been developed to study the closed-loop stability of MPC algorithms when the plant model is perfect, referred to as the nominal stability problem. In practical applications, however, the process model is never perfect and nominal stability results are not strictly applicable. The primary disadvantage of current MPC design techniques is their inability to deal explicitly with plant model uncertainty. In this thesis we develop a new framework for robust MPC synthesis that allows explicit incorporation of the plant uncertainty description in the problem formulation. The robust stability results are developed for general uncertainty descriptions. Hard input and soft output constraints can be easily added to the algorithms without affecting closed-loop stability. Robust stability is achieved through the addition of constraints that prevent the sequence of optimal controller costs from increasing for the true plant. These cost function constraints can be solved analytically for the special case of bounded input matrix uncertainty. The closed-loop system also remains stable in the face of asymptotically decaying disturbances. The framework developed for bounded input matrix uncertainty for stable plants can also be used for integrating plants. Two formulations are developed: a single-stage optimization method that minimizes the state error at the end of the horizon, and a two-stage optimization method that minimizes the state error at the end of the horizon in the first stage and uses the remaining degrees of freedom in a second stage to minimize state deviations over the full prediction horizon. As before, hard input and soft output constraints can be added to these algorithms without affecting closed-loop stability.
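A minimal nominal MPC sketch (my own illustration, assuming the cvxpy package; the plant matrices, weights, and limits are invented, and the robust cost-function constraints described above are not included): a finite-horizon quadratic cost is minimized subject to the model dynamics and a hard input constraint, and only the first input is applied at each step.

```python
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # illustrative double-integrator-like plant
B = np.array([[0.005], [0.1]])
N, Q, R = 20, np.diag([10.0, 1.0]), np.array([[0.1]])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = cp.Parameter(2)

cost = 0
constr = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[:, k]) <= 1.0]                 # hard input constraint
prob = cp.Problem(cp.Minimize(cost), constr)

# Receding-horizon simulation on the nominal plant; a robust variant would add
# constraints keeping the optimal cost sequence non-increasing for the true plant.
state = np.array([1.0, 0.0])
for t in range(30):
    x0.value = state
    prob.solve()
    state = A @ state + B.flatten() * u.value[0, 0]
print("final state:", np.round(state, 3))
```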
128

Dynamic optimization of job distribution on machine tools using time decomposition into constant job-mix stages

Natarajan, Subramanian January 1992 (has links)
This thesis deals with the development, analysis and application of a new method to optimize the allocation of jobs on machine tools. The benefits of this method are derived through time-decomposition of the scheduling horizon.

The decomposition scheme is based on the scheduled flow of jobs, i.e., the input of jobs to the shop floor and their departure after processing. The partitioning procedure divides the planning horizon into 'stages', or time periods, during which the job-mix remains constant. The optimization of job allocation is carried out within each partition, and successive stages are treated sequentially. The dynamic nature of the problem is such that the solution at a stage affects the boundary conditions of the subsequent stage. The Constant Job-Mix Stage (CMS) algorithm, developed to solve the job allocation problem, accounts for setup times and enables one to obtain integer solutions while reducing slack on machines and enforcing due dates on jobs.

The application of the algorithm is demonstrated for three different cases. The first two cases focus on single-operation jobs and represent two different approaches to scheduling. The third case deals with the assignment of multiple-operation jobs to machine tools that are grouped according to processes.
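A minimal sketch of the stage-partitioning step (my own illustration with invented jobs, not the CMS algorithm itself): every job arrival or due time is a potential change in the job mix, so consecutive event times bound the constant job-mix stages within which the allocation would then be optimized.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    arrival: float
    due: float

# Made-up jobs for illustration.
jobs = [Job("J1", 0, 8), Job("J2", 2, 6), Job("J3", 2, 10), Job("J4", 7, 12)]

# Stage boundaries = every time instant at which the job mix can change.
events = sorted({t for j in jobs for t in (j.arrival, j.due)})
stages = list(zip(events[:-1], events[1:]))

for start, end in stages:
    active = [j.name for j in jobs if j.arrival <= start and j.due >= end]
    print(f"stage [{start:>4}, {end:>4}): constant job mix {active}")
    # Within each stage, an integer program would allocate the active jobs to machine
    # tools, carrying unfinished work forward as the boundary condition of the next stage.
```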
129

A new architecture of multimedia distributed systems

Zhang, Mingsheng. January 1998 (has links)
This paper proposes a new architecture for distributed multimedia systems, called MSZ, which allows users to access multimedia documents in the system. MSZ assumes a system consisting of a set of client machines and a set of server machines, all connected through the Internet. A user can ask for a document to be played with the desired quality of service from any server; MSZ is responsible for selecting the best server, that is, the one able to deliver the document to the user most efficiently. The best server satisfies the following properties: (1) It stores the requested document and has the ability to deliver it to the user (the ability includes its free CPU time, free bandwidth and free memory space). (2) It is geographically closer to the user than the other servers in the system, and the related network path has the most available resources, which exceed what the service requires. (3) It has a lighter system load than the other servers.

MSZ is able to minimize the response time and optimize the service quality as much as possible (users are rejected only when the requested document is unavailable on all servers, all servers are loaded to their maximum, or the communication network resources are less than the service requires). MSZ also has a self-learning ability: the more it is used, the better it works.

The system has the ability to detect any degradation in service and to recover automatically during the presentation of the document. A server's failure does not affect the whole system, and it is very easy to add or remove a server. Overall, MSZ offers a better service to users, with less blocking time, lower cost and higher quality of service.
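A toy sketch of the best-server selection described above (the criteria weights and data structures are my own simplification, not the MSZ implementation): among the servers that store the requested document and have enough free resources, the one with the smallest combined distance-and-load score is chosen.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    documents: set
    free_cpu: float      # fraction of CPU available
    free_bw: float       # Mbit/s available on the path to the client
    free_mem: float      # MB available
    distance: float      # network distance/latency estimate to the client (ms)
    load: float          # current load, 0..1

def best_server(servers, doc, need_bw, need_mem):
    candidates = [s for s in servers
                  if doc in s.documents and s.free_bw >= need_bw and s.free_mem >= need_mem]
    if not candidates:
        return None                         # request rejected: no server can deliver it
    # Lower distance and load are better; higher free CPU is better.
    return min(candidates,
               key=lambda s: s.distance + 100 * s.load - 10 * s.free_cpu)

servers = [
    Server("s1", {"movie.mpg"}, 0.6, 20.0, 512, distance=30, load=0.8),
    Server("s2", {"movie.mpg"}, 0.4, 15.0, 256, distance=10, load=0.3),
    Server("s3", {"other.mpg"}, 0.9, 50.0, 1024, distance=5, load=0.1),
]
print(best_server(servers, "movie.mpg", need_bw=8.0, need_mem=128).name)   # -> s2
```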
130

Method for estimation of continuous-time models of linear time-invariant systems via the bilinear transform

Kukreja, Sunil L. January 1996 (has links)
In this thesis, we develop a technique which is capable of identifying and rejecting the sampling zeros for continuous-time linear time-invariant systems. The method makes use of the MOESP family of identification algorithms (59), (60), (61), (62) to obtain a discrete-time model of the system. Since the structure and parameters of discrete-time models are difficult to relate to the underlying continuous-time system of interest, it is necessary to compute the continuous-time equivalent from the discrete-time model. The bilinear transform (19), (54) does this effectively but also carries extraneous system zeros (the so-called process or sampling zeros) into the continuous-time model. Here, we present an approach to distinguish which of the system zeros are due to sampling effects and which are truly part of the model's dynamics. Dropping the sampling zeros yields an accurate description of the system under test. Digital simulations demonstrate that this method is robust in the presence of measurement noise. Moreover, a series of experiments performed on a known, physical, linear system validates the simulation results. Finally, an investigation of the passive dynamics of the ankle demonstrates the applicability of this method to a physiological system.
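A minimal sketch of the zero-screening idea (my own illustration using scipy, not the thesis procedure, and the rejection threshold is an invented heuristic): a zero-order-hold discretization of a second-order system with no zeros introduces a sampling zero near z = -1, which maps to a very large continuous-time zero under the inverse bilinear transform s = (2/T)(z - 1)/(z + 1) and can therefore be rejected.

```python
import numpy as np
from scipy import signal

T = 0.01                                   # sampling interval
sys_c = signal.TransferFunction([4.0], [1.0, 3.0, 4.0])   # true 2nd-order system, no zeros

# Discretize with a zero-order hold: this introduces a sampling zero near z = -1.
sys_d = sys_c.to_discrete(T, method="zoh")
z, p, _ = signal.tf2zpk(sys_d.num, sys_d.den)

def inverse_bilinear(roots, T):
    return (2.0 / T) * (roots - 1.0) / (roots + 1.0)

zeros_c, poles_c = inverse_bilinear(z, T), inverse_bilinear(p, T)
cutoff = 10 * np.max(np.abs(poles_c))      # heuristic band limit taken from the pole locations
kept = zeros_c[np.abs(zeros_c) < cutoff]

print("continuous-time zero candidates:", np.round(zeros_c, 1))
print("zeros kept after screening:     ", kept)   # empty here: the sampling zero is rejected
```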
