141.
Optimization of multi-scale decision-oriented dynamic systems and distributed computing. Yu, Lihua, January 2004.
In this dissertation, a stochastic programming model is presented for multi-scale decision-oriented dynamic systems (DODS): discrete-time systems in which decisions are made according to alternative discrete-time sequences that depend upon the organizational layer within a hierarchical system. A multi-scale DODS consists of multiple modules, each of which makes decisions on a time-scale that matches its specific task. For instance, in a large production planning system, the aggregate planning module may make decisions on a quarterly basis, whereas weekly and daily planning may use short-term scheduling models. In order to avoid mismatches between these schedules, it is important to integrate the short-term and long-term models. In studying models that accommodate multiple time-scales, one of the challenges that must be overcome is the incorporation of uncertainty. For instance, aggregate production planning is carried out several months prior to obtaining accurate demand estimates. In order to make decisions that are cognizant of uncertainty, we propose a stochastic programming model for the multi-scale DODS. Furthermore, we propose a modular algorithm motivated by the column generation decomposition strategy, and we demonstrate its convergence. Our experimental results demonstrate that the modular algorithm is robust in solving large multi-scale DODS problems under uncertainty. Another main issue addressed in this dissertation is the application of the above modeling method and solution technique to decision aids for scheduling and hedging in a deregulated electricity market (DASH). The DASH model for power portfolio optimization provides a tool that helps decision-makers coordinate production decisions with opportunities in the wholesale power market. The methodology is based on a multi-scale DODS.
This model selects portfolio positions for electricity and fuel forwards while remaining cognizant of spot market prices and generation costs. Our experiments demonstrate that the DASH model provides significant advantages over a commonly used fixed-mix policy. Finally, a multi-level distributed computing system is designed to implement the nested column generation decomposition approach for multi-scale decision-oriented dynamic systems. The implementation of a three-level distributed computing system is discussed in detail. The computational experiments are based on a large-scale real-world problem arising in power portfolio optimization; the deterministic equivalent LP for this instance with 200 scenarios has over one million constraints. Our computational results illustrate the effectiveness of this distributed computing approach.
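The flavor of making first-stage decisions that are "cognizant of uncertainty" can be shown with a deliberately tiny two-stage example. All demands, probabilities, and costs below are invented for illustration; the dissertation's DODS model and column generation algorithm are far more general than this brute-force sketch.

```python
# Toy two-stage stochastic program (all numbers invented). First stage:
# commit to an aggregate production quantity q before demand is known.
# Second stage (recourse): cover shortfall at a premium, carry surplus
# at a holding cost.
SCENARIOS = [(80.0, 0.3), (100.0, 0.5), (130.0, 0.2)]  # (demand, probability)
PROD_COST, SHORT_COST, HOLD_COST = 1.0, 3.0, 0.5       # hypothetical unit costs

def expected_cost(q):
    """Expected total cost of committing to q units across demand scenarios."""
    total = 0.0
    for demand, prob in SCENARIOS:
        shortage = max(demand - q, 0.0)
        surplus = max(q - demand, 0.0)
        total += prob * (PROD_COST * q + SHORT_COST * shortage + HOLD_COST * surplus)
    return total

# Brute-force search over candidate aggregate decisions; a stand-in for the
# decomposition machinery the dissertation actually develops.
best_q = min(range(50, 151), key=lambda q: expected_cost(float(q)))
print(best_q, expected_cost(float(best_q)))
```

The risk-aware commitment hedges between the shortage premium and the holding cost rather than simply producing the mean demand.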
142.
Variation monitoring, diagnosis and control for complex solar cell manufacturing processes. Guo, Huairui, January 2004.
Interest in photovoltaic products has expanded dramatically, but wide-scale commercial use remains limited due to the high manufacturing cost and insufficient efficiency of solar products. It is therefore critical to develop effective process monitoring, diagnosis, and control methods for quality and productivity improvement. This dissertation is motivated by this timely need to develop effective process control methods for variation reduction in thin film solar cell manufacturing processes. Three fundamental research issues related to process monitoring, diagnosis, and control have been studied accordingly. The major research activities and the corresponding contributions are summarized as follows: (1) Online SPC is integrated with generalized predictive control (GPC) for the first time for effective process monitoring and control. This research emphasizes the importance of developing supervisory strategies in which the controller parameters are adaptively changed based on the detection of different process change patterns using SPC techniques. It has been shown that the integration of SPC and GPC provides great potential for the development of effective controllers, especially for a complex manufacturing process with a large time-varying delay and different process change patterns. (2) A generic hierarchical ANOVA method is developed for systematic variation decomposition and diagnosis in batch manufacturing processes. Unlike SPC, which focuses on variation reduction due to assignable causes, this research aims to reduce inherent normal process variation by assessing and diagnosing inherent variance components from production data. A systematic method that uses a full factor decomposition model to determine an appropriate nested model structure is investigated for the first time in this dissertation.
(3) A multiscale statistical process monitoring method is proposed for the first time to simultaneously detect mean shifts and variance changes in autocorrelated data. Three wavelet-based monitoring charts are developed to separately detect process variance change, measurement error variance change, and process mean shift. Although the solar cell manufacturing process is used as an example in the dissertation, the developed methodologies for process monitoring, diagnosis, and control are generic and are expected to be applicable to various other semiconductor and chemical manufacturing processes.
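A minimal sketch of the wavelet idea behind such charts, using the Haar transform: approximation coefficients are scaled local means and respond to mean shifts, while detail coefficients are scaled local differences and respond to variance changes but not to a level shift. The data and control limit below are invented; these are not the dissertation's actual charts.

```python
import math

def haar_level1(x):
    """One-level Haar transform: approximation (scaled local mean) and
    detail (scaled local difference) coefficients over adjacent pairs."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return approx, detail

# Hypothetical data: in-control around 0, then a +2 mean shift halfway through.
data = [0.1, -0.2, 0.05, -0.1, 2.1, 1.9, 2.05, 1.95]
approx, detail = haar_level1(data)

# A chart on the approximations flags the mean shift; the details stay quiet,
# so a variance chart built on them would not false-alarm on the shift.
flagged = [a for a in approx if abs(a) > 1.0]  # 1.0 = assumed control limit
print(len(flagged), max(abs(d) for d in detail) < 0.5)
```

This separation is what lets one chart detect a mean shift while another monitors variance without the two confounding each other.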
143.
Assessing customer satisfaction of campus information technology departments in a community college setting using TQM principles. Niederriter, Sandy Peck, January 1999.
The purpose of this study was to assess customer satisfaction with campus information technology (IT) departments in a community college setting using TQM principles. The study used both quantitative and qualitative research methods. A survey with Likert-scale questions and open-ended questions was used to obtain data from 104 full-time faculty and full-time staff employed by a multi-campus community college. Fifty-eight surveys were returned, for a response rate of 56%; these surveys provided the data for the five research questions of the study. Findings of the study led to several conclusions regarding customers' satisfaction with their campus IT department. The findings revealed no statistically significant differences between faculty and staff in their satisfaction with various service dimensions (e.g., responsiveness, access, and reliability) or in their overall customer satisfaction. The IT services customers cited as most satisfying were the maintenance services. Customers also reported staffing as an issue needing improvement; in particular, they described their IT department as understaffed. Customers' comments also expressed satisfaction with the personal attributes and characteristics of the IT staff. Implications for campus IT decision makers and IT departments included: (1) a review of IT staffing to determine whether departments are adequately staffed, (2) the adoption of TQM strategies and policies to improve IT services, (3) increased software and hardware training for faculty and staff, and (4) ongoing evaluation of IT customers to determine their satisfaction.
Recommendations for future research included studies to determine: (1) customer satisfaction and the degree of TQM principles utilized by IT leaders, (2) customer satisfaction of various service dimensions using only qualitative research, (3) the success of IT departments which have implemented TQM principles, (4) appropriate staffing levels for IT departments, and (5) appropriate assessment techniques to measure customer satisfaction in the various services provided by an IT department.
144.
Determining user interface effects of superficial presentation of dialog and visual representation of system objects in user directed transaction processing systems. Cooney, Vance, January 2001.
At the point of sale in retail businesses, employees are a problem: high turnover, unmet consumer expectations, and lost sales, among other things. One of the traditional strategies used by human resource departments to cope with employee behavior, or "misbehavior," has been to strictly script employee/customer interactions. Another, more recent, approach has been the development of systems that replace the human worker; in other words, that effect transactions directly between customers and an information system. In these systems one determinant of public acceptance may be the system's affect, whether that affect is "human-like" or takes some other form. Human-like affect can be portrayed by the use of multimedia presentation and interaction techniques to depict "employees" in familiar settings, as well as by incorporating elements of human exchange (e.g., having the system use the customer's name in dialogs). The field of Human-Computer Interaction, which informs design decisions for such multimedia systems, is still evolving, and research on the application of multimedia to User Interfaces for automated transaction processing of this type is just beginning. This dissertation investigates two dimensions of User Interface design that bear on the issues of emulating "natural human" transactions, using a laboratory experiment employing a 2 x 2 factorial design. The first dimension investigated is personalization, a theoretical construct derived from social role theory and applied in marketing. It is, briefly, the inclusion of scripted dialog crafted to make the customer feel a transaction is personalized. In addition to using the customer's name, scripts might call for ending a transaction with the ubiquitous "Have a nice day!" The second dimension investigated is the "richness" of representation of the UI.
Richness is here defined as the degree of realism of visual presentation in the interface and bears on the concept of direct manipulation. An object's richness could vary from a text-based description of the object to a full-motion movie depicting the object. The design implications of the presence or absence of personalization at varying levels of richness are investigated in a prototype UI simulating a fast food ordering system, and the results are presented.
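For a 2 x 2 factorial design of this kind, the main effect of each factor and their interaction can be computed from cell means. The satisfaction scores below are invented for illustration; they are not data from the experiment.

```python
# Main effects and interaction for a 2 x 2 factorial design, computed from
# cell means. Keys: (personalization level, richness level); the scores are
# hypothetical.
cells = {
    ("plain", "low"): 3.0,        ("plain", "high"): 3.4,
    ("personalized", "low"): 3.6, ("personalized", "high"): 4.6,
}

def factorial_effects(cells):
    pers = ((cells[("personalized", "low")] + cells[("personalized", "high")]) / 2
            - (cells[("plain", "low")] + cells[("plain", "high")]) / 2)
    rich = ((cells[("plain", "high")] + cells[("personalized", "high")]) / 2
            - (cells[("plain", "low")] + cells[("personalized", "low")]) / 2)
    # Interaction: does the personalization effect depend on richness?
    inter = ((cells[("personalized", "high")] - cells[("plain", "high")])
             - (cells[("personalized", "low")] - cells[("plain", "low")]))
    return pers, rich, inter

main_pers, main_rich, interaction = factorial_effects(cells)
print(main_pers, main_rich, interaction)
```

A nonzero interaction term is exactly the case the factorial design is there to detect: personalization matters more (or less) at high richness than at low richness.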
145.
A multiobjective global optimization algorithm with application to calibration of hydrologic models. Yapo, Patrice Ogou, 1967-, January 1996.
This dissertation presents a new multiple objective optimization algorithm that is capable of solving for the entire Pareto set in a single optimization run. The multi-objective complex evolution (MOCOM-UA) procedure is based on three concepts: (1) population, (2) rank-based selection, and (3) competitive evolution. In the MOCOM-UA algorithm, a population of candidate solutions is evolved in the feasible space to search for the Pareto set. Ranking of the population is accomplished through Pareto ranking, in which all points are successively placed on different Pareto fronts. Competitive evolution consists of selecting subsets of points (including all worst points in the population) based on their ranks and moving the worst points toward the Pareto set using the newly developed multi-objective simplex (MOSIM) procedure. The MOCOM-UA algorithm is tested on mathematical problems of increasing complexity using a bi-criterion measure of performance. The two performance criteria are (1) efficiency, measured by the ability of the algorithm to converge quickly, and (2) effectiveness, measured by the ability of the algorithm to locate the Pareto set. Comparison of the MOCOM-UA algorithm against three multi-objective genetic algorithms (MOGAs) favors the former. In a realistic application, the MOCOM-UA algorithm is used to calibrate the Soil Moisture Accounting model of the National Weather Service River Forecast System (NWSRFS-SMA). Multi-objective calibration of this model is accomplished using two bi-criterion objective functions, namely the Daily Root Mean Square-Heteroscedastic Maximum Likelihood Estimator (DRMS, HMLE) and rising limb-falling limb (RISE, FALL) objective functions. These two multi-objective calibrations provide some interesting insights into the influence of different objectives on the location of final parameter values, as well as into limitations in the structure of the NWSRFS-SMA model.
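Pareto ranking as described above can be sketched in a few lines: points are repeatedly peeled off into successive non-dominated fronts. The toy bi-objective population below (both objectives minimized) is an illustration, not the MOCOM-UA implementation.

```python
# Pareto ranking: successively peel non-dominated fronts off a population
# (minimization in all objectives).
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly
    better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_rank(points):
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Toy bi-objective population:
population = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
fronts = pareto_rank(population)
print(fronts)  # [[(1, 5), (2, 3), (4, 1)], [(3, 4)], [(5, 5)]]
```

Rank-based selection then favors points on earlier fronts, and the worst-ranked points are the ones moved toward the Pareto set.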
146.
Disturbed state concept of materials and interfaces with applications in electronic packaging. Dishongh, Terrance John, 1964-, January 1996.
Although a number of idealized constitutive models have been proposed in the past to include factors such as elastic, plastic, and creep strains, as well as microcracking and damage, no unified model has yet been developed to understand and model the behavior of materials and joints in semiconductor chip-substrate systems and packaging. Such models are important for analyzing and predicting response in design and reliability assessments of packaging problems. This dissertation presents the formalization and use of the recently developed disturbed state concept (DSC) for the characterization of the thermo-mechanical behavior of materials and joints. It is a unified approach and allows hierarchical use of the model for factors such as elastic, plastic, and creep strains, microcracking (damage), and softening and stiffening. The DSC model is used here for a number of materials such as ceramics (e.g., aluminum nitride), silicon ribbon, and silicon doped with oxygen. The joining materials considered are different solders (e.g., 60%Sn-40%Pb, 90%Pb-10%Sn, and 95%Pb-5%Sn). Particular attention is given to solders used in the IBM-604 PowerPC package, which is a ceramic ball grid array (CBGA). A number of mechanical and ultrasonic tests are performed under uniaxial tension and compression loading for aluminum nitride; test data available from the literature are used for the solders, silicon ribbon, and silicon doped with oxygen. The DSC model is calibrated with respect to the test data, in which the material parameters are found to be affected by factors such as stress, cyclic loading, and temperature. The incremental constitutive equations are then integrated to back-predict the observed behavior, both for the tests used in the calibration and for independent tests not used in the calibration. Overall, the model provides satisfactory correlation with the observed behavior.
A nonlinear finite element procedure with the DSC is used to analyze the CBGA package, with emphasis on the effect of via spacing on the thermomechanical response. The DSC model predictions compare satisfactorily with a previous analysis by others and with observed laboratory behavior.
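The core DSC blending idea can be sketched as follows: the observed response interpolates between a relative-intact (RI) state and a fully-adjusted (FA) state, weighted by a disturbance function that grows with an accumulated strain measure. The exponential growth law and every parameter value below are assumptions for illustration, not the dissertation's calibrated model.

```python
import math

# Sketch of DSC blending: observed response = (1 - D) * RI + D * FA, with a
# disturbance D growing toward an ultimate value as the accumulated strain
# measure xi grows. All parameter values are hypothetical.
A, Z, D_ULT = 5.0, 1.2, 0.9  # assumed disturbance-growth parameters

def disturbance(xi):
    return D_ULT * (1.0 - math.exp(-A * xi ** Z))

def observed_stress(sigma_ri, sigma_fa, xi):
    d = disturbance(xi)
    return (1.0 - d) * sigma_ri + d * sigma_fa

# As xi grows, the response softens from the intact toward the adjusted state:
print(observed_stress(100.0, 40.0, 0.0))            # undisturbed
print(round(observed_stress(100.0, 40.0, 1.0), 1))  # heavily disturbed
```

Softening and stiffening both fall out of the same interpolation, depending on whether the fully-adjusted response lies below or above the intact one.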
147.
Human fatigue in prolonged mentally demanding work-tasks: An observational study in the field. Ahmed, Shaheen, 21 September 2013.
Worker fatigue has been the focus of research for many years. However, there is limited research available on the evaluation and measurement of fatigue for prolonged, mentally demanding activities.

The objectives of the study are (1) to evaluate fatigue for prolonged, mentally demanding work-tasks by considering task-dependent, task-independent, and personal factors; (2) to identify effective subjective and objective fatigue measures; (3) to establish a relationship between time and the factors that affect fatigue; and (4) to develop models to predict fatigue.

A total of 16 participants (eight with western cultural backgrounds and eight with eastern cultural backgrounds), currently employed in mentally demanding work-tasks (e.g., programmers, computer simulation experts), completed the study protocols. Each participant was evaluated during normal working hours in their workplace for a 4-hour test session, with a 15-minute break provided after two hours. Fatigue was evaluated using subjective questionnaires (the Borg Perceived Level of Fatigue Scale and the Swedish Occupational Fatigue Index (SOFI)) and objective measures (change in resting heart rate and salivary cortisol excretion). Workload was also assessed using the NASA-TLX. Fatigue and workload scales were collected every 30 minutes, cortisol at the start and finish of each 2-hour work block, and heart rate throughout the test session.

Fatigue significantly increased over time (p < 0.0001). All measures, except cortisol, returned to near baseline level following the 15-minute break (p < 0.0001). Ethnicity was found to have limited effects on fatigue development. Poor to moderate (rho = 0.35 to 0.75) significant correlations were observed between the subjective and objective measures. Time and fatigue load (a factor that impacts fatigue development) significantly interact to explain fatigue, represented by a hyperbolic relationship. Predictive models explained a maximum of 87% of the variation in the fatigue measures.

As expected, fatigue develops over time, especially when considering other factors that can impact fatigue (e.g., hours slept, hours of work), providing further evidence of the complex nature of fatigue. As the 15-minute break was found to reduce all measures of fatigue, the development of appropriate rest breaks may mitigate some of the negative consequences of fatigue.
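A saturating (hyperbolic) fatigue-time relationship of the kind the abstract describes can be sketched as follows; the functional form and parameter values are invented for illustration, not the study's fitted models.

```python
# Hyperbolic fatigue-time sketch: fatigue rises quickly at first, then
# levels off toward an asymptote. All parameters are hypothetical.
F_MAX = 10.0    # assumed asymptotic fatigue score
T_HALF = 90.0   # assumed minutes to reach half the asymptote

def fatigue(t_minutes):
    return F_MAX * t_minutes / (T_HALF + t_minutes)

# By construction, fatigue at T_HALF is half the asymptote, and a 4-hour
# session approaches (but never reaches) F_MAX:
print(fatigue(90.0), round(fatigue(240.0), 2))
```

A curve of this shape is one simple way to capture an interaction between time and fatigue load: raising the load raises F_MAX or shortens T_HALF, changing how steeply fatigue accumulates early in the session.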
148.
Model Predictive Control for Energy Efficient Buildings. Ma, Yudong, 11 October 2013.
The building sector consumes about 40% of the energy used in the United States and is responsible for nearly 40% of greenhouse gas emissions. Energy reduction in this sector by means of cost-effective and scalable approaches will have an enormous economic, social, and environmental impact. Achieving substantial energy reduction in buildings may require rethinking the entire processes of design, construction, and operation of buildings. This thesis focuses on advanced control system design for energy efficient commercial buildings.

Commercial buildings are plants that process air in order to provide comfort for their occupants. The components used are similar to those employed in the process industry: chillers, boilers, heat exchangers, pumps, and fans. The control design complexity resides in adapting to time-varying user loads and occupant requirements, and in quickly responding to weather changes. Today this is easily achievable by oversizing the building components and using simple control strategies. Building controls design becomes challenging when predictions of weather, occupancy, renewable energy availability, and energy price are used for feedback control. Green buildings are expected to maintain occupant comfort while minimizing energy consumption, remaining robust to intermittency in renewable energy generation, and responding to signals from the smart grid. Achieving all these features in a systematic and cost-effective way is challenging. The challenge is even greater when conventional systems are replaced by innovative heating and cooling systems that use active storage of thermal energy with critical operational constraints.

Model predictive control (MPC) is the only control methodology that can systematically take future predictions into account during the control design stage while satisfying the system operating constraints. This thesis focuses on the design and implementation of MPC for building cooling and heating systems. The objective is to develop a control methodology that can (1) reduce building energy consumption while maintaining indoor thermal comfort by using predictive knowledge of occupancy loads and weather information, (2) easily and systematically take into account the presence of storage devices, demand response signals from the grid, and occupant feedback, (3) be implemented on existing inexpensive and distributed building control platforms in real time, and (4) handle model uncertainties and prediction errors at both the design and implementation stages.

The thesis is organized into six chapters. Chapter 1 motivates our research and reviews existing control approaches for building cooling and heating systems.

Chapter 2 presents our approach to developing low-complexity control-oriented models learned from historical data. Details on models for building components and space thermal response are provided. The thesis focuses on the dynamics of both energy conversion and storage and energy distribution by means of heating, ventilation, and air conditioning (HVAC) systems.

In Chapter 3, deterministic model predictive control problems are formulated for the energy conversion and energy distribution systems to minimize energy consumption while maintaining comfort requirements and operational constraints. Experimental and simulation results demonstrate the effectiveness of the MPC scheme and reveal significant energy reduction without compromising indoor comfort requirements.

As the size and complexity of buildings grow, the MPC problem quickly becomes computationally intractable to solve in a centralized fashion. This limitation is addressed in Chapter 4. We propose a distributed algorithm that decomposes the MPC problem into a set of small problems using dual decomposition and fast gradient projection. Simulation results show good performance and computational tractability of the resulting scheme.

The MPC formulations in Chapters 3 and 4 assume perfect knowledge of the system model, load disturbances, and weather. In practice, however, predictions differ from actual realizations. In order to take prediction uncertainties into account at the control design stage, stochastic MPC (SMPC) is introduced in Chapter 5 to minimize expected costs and satisfy constraints with a given probability. In particular, the proposed novel SMPC method applies feedback linearization to handle system nonlinearity, propagates the state statistics of linear systems subject to finite-support (non-Gaussian) disturbances, and solves the resulting optimization problem using large-scale nonlinear optimization solvers.
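The receding-horizon idea behind MPC can be illustrated with a deliberately tiny one-zone example. The first-order dynamics, comfort band, and discrete heating levels are all assumptions for illustration; the thesis uses far richer HVAC models and real optimization solvers rather than exhaustive search.

```python
import itertools

# Toy receding-horizon controller for a one-zone thermal model:
# T+ = a*T + b*u + d, with discrete heating levels u. Minimize energy
# while keeping T inside a comfort band over a short horizon.
A_COEF, B_COEF = 0.9, 0.5           # assumed thermal inertia and input gain
T_MIN, T_MAX = 20.0, 24.0           # comfort band (deg C)
LEVELS = [0.0, 1.0, 2.0]            # available heating levels
HORIZON = 3

def mpc_step(temp, disturbance):
    """Return the first move of the cheapest comfort-feasible input sequence,
    found here by exhaustive search over the short horizon."""
    best = None
    for seq in itertools.product(LEVELS, repeat=HORIZON):
        t, energy, feasible = temp, 0.0, True
        for u in seq:
            t = A_COEF * t + B_COEF * u + disturbance
            energy += u
            if not (T_MIN <= t <= T_MAX):
                feasible = False
                break
        if feasible and (best is None or energy < best[0]):
            best = (energy, seq[0])
    return best[1] if best else max(LEVELS)  # no feasible plan: heat fully

# One receding-horizon step from 21 deg C with a constant +1 deg disturbance:
u0 = mpc_step(21.0, 1.0)
print(u0)
```

Only the first move is applied; at the next sample the horizon slides forward and the optimization repeats with the newly measured temperature, which is what makes the scheme a feedback controller.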
149.
An integrated CAD framework linking VLSI layout editors and process simulators. Sengupta, Chaitali, January 1995.
This thesis presents an Integrated CAD Framework that links VLSI layout editors to lithographic simulators and provides the circuit designer with information on the simulated resolution of a feature. This helps designers produce more compact circuits, as they can see the effect of their layout on manufactured silicon. The Framework identifies areas in a layout (in Magic or CIF format) that are more prone to problems arising from the photolithographic process. It then creates the corresponding inputs for closer analysis with a process simulator (Depict) and analyzes the simulator outputs to decide whether the printed layout will match the designed mask for a particular set of process parameters. The designer can modify the original layout based on this analysis. The Framework has been used to evaluate layouts for various process techniques; these evaluations illustrate its use in determining the limits of a lithographic process.
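The Framework's pass/fail analysis can be caricatured in a few lines: blur the drawn mask as a crude stand-in for lithographic diffraction, threshold the result as a stand-in for resist development, and compare against the original mask. The grid, kernel, and threshold below are invented; Depict performs a physically faithful simulation.

```python
# Crude printability check: 3x3 blur as a point-spread proxy, then threshold.
def blur_threshold(mask, threshold=0.5):
    rows, cols = len(mask), len(mask[0])
    printed = []
    for i in range(rows):
        row = []
        for j in range(cols):
            # 3x3 neighbourhood average (a crude point-spread function)
            vals = [mask[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if 0 <= i + di < rows and 0 <= j + dj < cols]
            row.append(1 if sum(vals) / len(vals) >= threshold else 0)
        printed.append(row)
    return printed

# A 1-pixel-wide line washes out entirely; a 3-pixel-wide line still prints.
thin = [[0, 0, 1, 0, 0] for _ in range(5)]
wide = [[0, 1, 1, 1, 0] for _ in range(5)]
printed_thin = blur_threshold(thin)
printed_wide = blur_threshold(wide)
print(sum(map(sum, printed_thin)), printed_wide[2][2])  # 0 1
```

Comparing the thresholded "printed" image against the drawn mask is the kind of mask-versus-silicon check the Framework automates with a real simulator in the loop.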
150.
Exact and inexact Newton linesearch interior-point algorithms for nonlinear programming problems. Argaez Ramos, Miguel, January 1997.
In the first part of this research we consider a linesearch globalization of the local primal-dual interior-point Newton's method for nonlinear programming recently introduced by El-Bakry, Tapia, Tsuchiya, and Zhang. Our linesearch uses a new merit function and a new notion of centrality. We establish a global convergence theory and present rather promising numerical experimentation.
In the second part, we consider an inexact Newton's method implementation of the linesearch globalization algorithm given in the first part. This inexact method is designed to solve large-scale nonlinear programming problems. The iterative solution technique uses an orthogonal projection-Krylov subspace scheme to solve the highly indefinite and nonsymmetric linear systems associated with nonlinear programming. Our iterative projection method maintains linearized feasibility with respect to both the equality constraints and the complementarity condition. This guarantees that in each iteration the linear solver generates a descent direction, so the iterative solver is not required to find a Newton step but instead cheaply provides a way to march toward an optimal solution of the problem. This makes the use of a preconditioner inconsequential except near the solution of the problem, where the Newton step is effective. Moreover, we limit the problem to finding a good preconditioner only for the Hessian of the Lagrangian function associated with the nonlinear programming problem plus a positive diagonal matrix. We report numerical experimentation on several large-scale problems to illustrate the viability of the method.
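The linesearch globalization step can be illustrated with a generic Armijo backtracking rule on a merit function. This is a textbook sketch; the thesis's merit function and centrality conditions are specific to its primal-dual interior-point setting.

```python
# Generic Armijo backtracking linesearch used to globalize Newton's method.
def backtrack(merit, grad_dot_dir, x, direction, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the merit function decreases sufficiently."""
    f0 = merit(x)
    while True:
        trial = [xi + alpha * di for xi, di in zip(x, direction)]
        if merit(trial) <= f0 + c * alpha * grad_dot_dir or alpha < 1e-12:
            return alpha
        alpha *= rho

# Example: minimize f(x) = x0^2 + 4*x1^2 from (2, 1) along the Newton
# direction, which for this quadratic points straight at the minimizer.
f = lambda x: x[0] ** 2 + 4.0 * x[1] ** 2
x0 = [2.0, 1.0]
newton_dir = [-2.0, -1.0]            # -H^{-1} * grad f at x0
gdd = 4.0 * (-2.0) + 8.0 * (-1.0)    # grad f . direction = -16
step = backtrack(f, gdd, x0, newton_dir)
print(step)  # 1.0 -> the full Newton step passes the sufficient-decrease test
```

Far from the solution the backtracking loop shortens bad steps; near the solution the full Newton step is accepted unchanged, preserving fast local convergence.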