21

Mixed Integer Linear Programming for Time-Optimal Cyclic Scheduling of High Throughput Screening Systems

Sahin, Deniz 08 June 2018 (has links)
High Throughput Screening (HTS) systems are highly automated plants used to analyze thousands of biochemical substances and provide a basis for the drug discovery process. Because operating these systems is remarkably expensive, scheduling their processes is critical to HTS companies. Since processing time determines the throughput and efficiency of the system, a time-optimal schedule that yields high throughput must be developed. In this thesis, a Mixed Integer Linear Programming model is presented that minimizes the overall processing time and therefore maximizes the throughput of the system. The mathematical model draws on the principles of job-shop scheduling and cyclic scheduling. The results are supported by an experiment conducted at the High Throughput Screening plant at Washington University in St. Louis. In conclusion, the model generated a time-optimal cyclic schedule that improves the total processing time of the system by 3 minutes for 25 batches; projections for experiments with hundreds of batches indicate correspondingly larger improvements in overall processing time.
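To make the modeling approach concrete, here is a minimal sketch of a job-shop-style MILP in PuLP (an assumed library choice, not necessarily the thesis's tooling): two batches on two stations, disjunctive big-M constraints for station exclusivity, and a makespan objective. The thesis's cyclic HTS formulation additionally handles cycle-time variables and overlapping batches, which this toy omits; all data are invented.

```python
# Minimal sketch, not the thesis model: a toy job-shop MILP in PuLP that
# minimizes makespan for two batches on two stations. The cyclic HTS
# formulation would add cycle-time and batch-overlap variables.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

proc = {          # (batch, station) -> processing time in minutes (made-up data)
    ("b1", "incubator"): 5, ("b1", "reader"): 3,
    ("b2", "incubator"): 4, ("b2", "reader"): 6,
}
stations = ["incubator", "reader"]
batches = ["b1", "b2"]
M = 100  # big-M larger than any feasible schedule length

prob = LpProblem("toy_hts_schedule", LpMinimize)
start = {(b, s): LpVariable(f"start_{b}_{s}", lowBound=0) for b in batches for s in stations}
makespan = LpVariable("makespan", lowBound=0)
prob += lpSum([makespan])  # objective: minimize total processing time

for b in batches:
    # each batch visits the incubator before the reader (fixed route)
    prob += start[b, "reader"] >= start[b, "incubator"] + proc[b, "incubator"]
    prob += makespan >= start[b, "reader"] + proc[b, "reader"]

for s in stations:
    # a station handles one batch at a time: disjunctive big-M constraints
    y = LpVariable(f"b1_before_b2_on_{s}", cat=LpBinary)
    prob += start["b2", s] >= start["b1", s] + proc["b1", s] - M * (1 - y)
    prob += start["b1", s] >= start["b2", s] + proc["b2", s] - M * y

prob.solve()
print("makespan:", value(makespan))
```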
22

Structured Expert Judgment Elicitation of Use Error Probabilities for Drug Delivery Device Risk Assessment

Zampa, Nicholas Joseph 17 August 2018 (has links)
In the pharmaceutical industry, estimating the probability of occurrence of use errors and use-error-causes (henceforth referred to as use error probabilities) when developing drug delivery devices is hindered by a lack of data, ultimately limiting the ability to conduct robust usability risk assessments. The lack of reliable data results from small sample sizes and the difficulty of reproducing actual use environments in simulated use studies, which compromises the applicability of observed use error rates. Further, post-market surveillance databases and internal complaint databases are limited in their ability to provide reliable data for product development. Inadequate usability risk assessment hinders drug delivery device manufacturers' understanding of safety and efficacy risks. The current industry and regulatory paradigm is to de-emphasize use error probabilities, focusing instead on assessing the severity of harms. However, this de-emphasis is not rooted in a belief that probability estimates inherently lack value; rather, the status quo reflects the absence of suitable methodologies for estimating use error probabilities.

In instances in which data is lacking, engineers and scientists may turn to structured expert judgment elicitation methodologies, in which subjective expert opinions are quantified and aggregated in a scientific manner. This research is a case study in adapting and applying one such methodology, Cooke's Classical model, to human factors experts for estimating use error probabilities for a drug delivery device. Results indicate that a performance-weighted linear pooling of expert judgments significantly outperforms any single expert as well as an equal-weighted linear pooling. Additionally, this research demonstrates that the performance-weighted pooling is statistically accurate, robust to the choice of experts, and robust to the choice of elicitation questions. Lastly, this research validates the good statistical accuracy of the performance-weighted pooling on a new set of use error probabilities, indicating that good expert performance translates to use error probability estimates for different devices. Through structured expert judgment elicitation according to Cooke's Classical model, this research demonstrates that it is possible to reinstate use error probability estimates, with quantified uncertainty, in usability risk assessments for drug delivery devices.
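As a rough illustration of the aggregation step in Cooke's Classical model, the sketch below forms a performance-weighted linear pool of three hypothetical experts in numpy. The calibration and information scores, the cutoff, and the expert distributions are all invented; the study's actual elicitation, scoring, and validation procedure is not reproduced here.

```python
# Minimal sketch (not the study's implementation) of the performance-weighted
# linear pool at the core of Cooke's Classical model. Calibration and
# information scores per expert are assumed to have been computed from seed
# questions already; weights multiply the two and drop experts below a cutoff.
import numpy as np

# hypothetical scores for three experts
calibration = np.array([0.42, 0.05, 0.61])   # p-value-style calibration score
information = np.array([1.8, 2.4, 1.1])      # mean relative information score
alpha = 0.10                                  # significance cutoff for calibration

raw = calibration * information * (calibration >= alpha)
weights = raw / raw.sum()                     # normalized performance weights

# each expert's density for one target quantity, tabulated on a common grid
grid = np.linspace(0.0, 0.02, 501)            # candidate use error probability
expert_pdfs = np.vstack([
    np.exp(-0.5 * ((grid - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    for m, s in [(0.004, 0.001), (0.010, 0.003), (0.006, 0.002)]
])

pooled_pdf = weights @ expert_pdfs            # linear opinion pool
dx = grid[1] - grid[0]
pooled_pdf /= pooled_pdf.sum() * dx           # renormalize on the grid

cdf = np.cumsum(pooled_pdf) * dx
median = grid[np.searchsorted(cdf, 0.5)]
print("pooled median use error probability:", median)
```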
23

Slow Pyrolysis Experiments for High Yields of Solid Carbon

LeBlanc, Jeffrey 22 March 2017 (has links)
Coal and biomass slow pyrolysis reactions were investigated using thermogravimetric analysis close-coupled to gas chromatography (TG-GC). The pyrolysis mass balance via this system was closed to >99 wt. %. Parallel in-situ Diffuse Reflectance Infrared Fourier-Transform Spectroscopy pyrolysis experiments were used to explain the mechanistic relationship between functional groups and volatile products. Gas and tar evolution profiles correspond to the loss of surface oxygenated functional groups and increases in char aromaticity during pyrolysis. Various pyrolysis conditions, including heating rate, particle size, and reaction confinement, were investigated via TG-GC to probe secondary pyrolysis reactions. The investigation demonstrated that increasing the residence time of tar in the solid-gas interface by 0.23-0.31 seconds results in a 2.1-2.5 wt. % decrease in tar production with a commensurate 0.6-5.7 wt. % increase in solid product, a 40 wt. % increase in CH₄, and a 10-30 wt. % increase in H₂ between 510 and 575 °C. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF) measured the molecular weight distribution (MWD) of the pyrolysis tar product to be between 200 and 550 amu. Gas chromatography-mass spectrometry (GC-MS) was used to identify 120 distinct species in pyrolysis tar. Tar products from the different reaction conditions show that extended residence time of pyrolysis tars in the solid-gas interface decreased the average MWD, decreased the H/C ratio, and resulted in a more expansive speciation of nitrogen and sulfur species in the tar. Further investigations of tar show that coal tar vaporizes by 1000 °C without producing secondary gas products or coke. Biomass was found to produce a 40 wt. % char product plus CO₂, CO, CH₄, C₂H₄, C₂H₆, and H₂. The experimentally measured mass closure indicates that the product distributions and profiles from slow pyrolysis are absolute and that the error can be directly calculated. These are used to estimate the rates, kinetic parameters, and number of reactions during pyrolysis.
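As a loose illustration of the kind of kinetic-parameter estimation the closed mass balance supports, the sketch below fits a single first-order Arrhenius reaction to a synthetic constant-heating-rate thermogram with scipy. The heating rate, pre-exponential factor, and activation energy are invented, and the thesis's multi-reaction treatment is not attempted.

```python
# Illustrative sketch only: fitting one first-order Arrhenius reaction to
# TGA mass-loss data at a constant heating rate. All data and parameters
# below are made up.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

beta = 10.0 / 60.0           # heating rate, K/s (10 K/min)
T0 = 300.0                   # starting temperature, K
R = 8.314                    # gas constant, J/(mol K)

def mass_fraction(t, logA, E):
    """Remaining volatile fraction for first-order decay, da/dt = A exp(-E/RT) (1-a)."""
    def rhs(tt, a):
        T = T0 + beta * tt
        return np.exp(logA) * np.exp(-E / (R * T)) * (1.0 - a)
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0], t_eval=t, rtol=1e-8)
    return 1.0 - sol.y[0]    # mass fraction still unreacted

# synthetic "measured" thermogram generated from assumed parameters plus noise
t_obs = np.linspace(0.0, 4000.0, 200)
true = mass_fraction(t_obs, np.log(1e8), 1.2e5)
m_obs = true + np.random.default_rng(0).normal(0.0, 0.005, t_obs.size)

popt, _ = curve_fit(mass_fraction, t_obs, m_obs, p0=[np.log(1e7), 1.0e5])
print("fitted ln(A):", popt[0], "  fitted E (J/mol):", popt[1])
```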
24

Product realization for mechanical assemblies: A model for decision support

Duffey, Michael Robert 01 January 1992 (has links)
Product realization is a very complex, interdisciplinary process. At early design stages, decisions must be made not only about physical attributes of the design, but also about scheduling and resource allocation for many product and manufacturing engineering activities, as well as purchasing, finance, marketing, etc. Typically, complex interdependencies exist among these disparate activities, and it is difficult to predict how decisions will affect overall organizational objectives of low cost, high quality and short time-to-market. Many decision support needs in this process seem to fall in a gap between emerging design-for-manufacture models (which evaluate design attributes for cost of a specific manufacturing activity) and management-level models (such as very abstract but comprehensive PERT-type networks). This research addresses this "gap." In the proposed model there are three distinct object representations that together define a product realization problem: product attributes, activities, and resources. In the first stage of the model, two relational matrices are used to (i) match product attributes to the required design and manufacturing activities, and (ii) then match the activities to the resources required for realization. In the second stage, an activity network is generated from the data in the relational matrices. The network is assembled from predefined "templates" of activities which have default precedence relationships (for example, sequences of prototyping and tooling activities). This activity network is then used to simulate aggregate cash flow. There are several applications envisioned for a computer tool based on this model: as a "prospectus" for new product designs to assess aggregate cost and development time within a specific organizational context; to assist managers in "concurrent" scheduling of design, tooling, and other preproduction activities; as a vehicle for budget negotiation between engineers and financial managers during the design process; and as an aid for value analysis. After reporting results of a field study and prototype computer implementation, I conclude that the model could potentially be used for decision support, but several important conceptual and implementation limitations remain to be addressed.
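A minimal sketch of the two-stage structure described above, with invented attribute, activity, and resource names: two relational matrices (dictionaries here) map product attributes to activities and activities to resources, a small precedence template gives earliest start times, and activity costs are spread over their durations to produce an aggregate cash-flow profile. None of these names or numbers come from the dissertation.

```python
# Sketch of the two-stage model structure with invented data.
from collections import defaultdict

attr_to_activity = {                     # stage 1a: product attributes -> activities
    "molded_housing": ["tooling_design", "injection_molding_trial"],
    "machined_shaft": ["cnc_programming"],
}
activity_to_resource = {                 # stage 1b: activities -> resources
    "tooling_design": {"tool_engineer_weeks": 3, "cost": 15000},
    "injection_molding_trial": {"press_weeks": 1, "cost": 8000},
    "cnc_programming": {"nc_programmer_weeks": 2, "cost": 6000},
}
precedence = {                           # stage 2: default activity template
    "tooling_design": [],
    "injection_molding_trial": ["tooling_design"],
    "cnc_programming": [],
}
duration_weeks = {"tooling_design": 3, "injection_molding_trial": 1, "cnc_programming": 2}

# earliest-start schedule by longest path over the precedence network
start = {}
def earliest(a):
    if a not in start:
        start[a] = max((earliest(p) + duration_weeks[p] for p in precedence[a]), default=0)
    return start[a]

activities = {a for attrs in attr_to_activity.values() for a in attrs}
cash_flow = defaultdict(float)
for a in activities:
    s = earliest(a)
    weekly = activity_to_resource[a]["cost"] / duration_weeks[a]
    for w in range(s, s + duration_weeks[a]):
        cash_flow[w] += weekly           # spread activity cost over its duration

print({w: round(c) for w, c in sorted(cash_flow.items())})
```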
25

Improved manufacturing productivity with recursive constraint bounding

Ivester, Robert Wayne 01 January 1996 (has links)
Complexity of manufacturing processes has hindered the methodical specification of machine settings for improving productivity. For processes where analytical models are available (e.g., turning and grinding), modeling uncertainty caused by the diversity of process conditions and time-variability has precluded the application of traditional optimization methods to minimize cost or production time. In these cases, machine settings are selected conservatively to ensure part quality at the expense of longer production times. In other cases, where it is prohibitively difficult to represent the process by an analytical model (e.g., injection molding), machine settings are assigned either by trial and error based on the heuristic knowledge of an experienced operator, or by statistical Design of Experiments methods, which require a comprehensive empirical model relating the inputs to part quality attributes. The purpose of this thesis is to present Recursive Constraint Bounding (RCB) as a general methodology for machine setting selection in manufacturing processes. In RCB, measurements of part quality attributes (e.g., size and surface integrity) are used as feedback to assess the optimality and integrity of the process, and the machine settings are adjusted by formulating and solving a customized optimization problem with the objective of improving part quality or reducing production time. RCB is applied to cylindrical plunge grinding, where an approximate model is available, and to injection molding, where adequate process models are unavailable. For cylindrical plunge grinding, cycle time is minimized while satisfying constraints; for injection molding, machine settings are selected so as to satisfy part quality constraints.
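The abstract does not spell out the RCB formulation, so the following is only a schematic of the feedback loop it describes: an uncertain process model, a constraint bound that is tightened recursively from measured part quality, and a machine setting re-optimized each cycle. The toy grinding-style model and all numbers are invented.

```python
# Schematic only: the feedback structure of Recursive Constraint Bounding as
# described above, not the dissertation's formulation. Each cycle, measured
# part quality tightens the constraint bound used with an (uncertain) process
# model before the machine setting is re-optimized.

def predicted_roughness(feed_rate):
    return 0.3 * feed_rate            # approximate process model (optimistic)

def measure_roughness(feed_rate):
    return 0.38 * feed_rate           # the real process, only seen via parts

spec = 1.2                            # surface-roughness limit from the part print
bound = spec                          # current bound imposed on the *model*
for cycle in range(5):
    # optimization step (closed form for this toy model): the fastest feed
    # whose predicted roughness still satisfies the current bound
    feed = bound / 0.3
    cycle_time = 10.0 / feed          # faster feed -> shorter cycle
    measured = measure_roughness(feed)
    # recursive bounding: shrink the bound by the observed violation ratio
    bound *= min(1.0, spec / measured)
    print(f"cycle {cycle}: feed={feed:.2f}  time={cycle_time:.2f}  Ra={measured:.2f}  bound->{bound:.2f}")
```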
26

Optimal routing and resource allocation within state-dependent queueing networks

Bakuli, David Luvisia 01 January 1993 (has links)
Recent advances in the study of fire spread and the behavior of building materials under fire have helped designers set minimum standards for both structural and finishing materials for different types of building occupancies. However, when a fire breaks out in a building, the immediate hazard is to the occupants, yet there are no precisely defined ways of designing adequate means of escape. It is hypothesized that this apparent lack of research is due in part to differences in building design arising from unique site conditions or the building configuration itself. Coupled with this uniqueness of design is the tendency of humans to panic when an emergency arises, leading to unpredictable actions. Given that deterministic models cannot handle such unpredictable behavior, designs based purely on the intuition of the designer can lead to disastrous results in an emergency. Two methodologies for the design and evaluation of building facilities and regional emergency evacuation plans are proposed. A building plan provided by the decision maker is translated into a mathematical format suitable for analysis; the analysis is performed, and the feasible alternatives are given to the designer or decision maker. The methodologies were tested on several examples, including the evacuation of a medical facility, which was used as a case study. Both methodologies for routing and resource allocation efficiently solved the problem, aiding the designer in identifying critical parameters. Finally, the dissertation proposes future work on the Emergency Evacuation Problem, including incorporating the models developed here into a decision support environment. This enhancement would improve the decision-making process by enabling the designer to interactively test various design strategies.
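The dissertation's models are not given in the abstract; as a rough illustration of one state-dependent queueing element commonly used in evacuation analysis (an assumption on my part, not the author's formulation), the sketch below simulates a corridor whose walking speed degrades with occupancy. All parameters are invented.

```python
# Rough illustration, not the dissertation's model: a single corridor treated
# as a state-dependent queue in which walking speed falls as occupancy rises.
import random

LENGTH_M, CAPACITY = 30.0, 60          # corridor length and max occupants
V_FREE, V_JAM = 1.5, 0.1               # free walking speed vs. near-jam speed, m/s

def speed(n):
    """Linear slowdown with occupancy n (state-dependent service rate)."""
    return max(V_JAM, V_FREE * (1.0 - n / CAPACITY))

random.seed(1)
arrivals = sorted(random.uniform(0, 120) for _ in range(80))   # evacuees reach corridor
in_corridor, exit_times, t, dt = [], [], 0.0, 0.5
while arrivals or in_corridor:
    t += dt
    while arrivals and arrivals[0] <= t and len(in_corridor) < CAPACITY:
        arrivals.pop(0)
        in_corridor.append(0.0)                                # progress in metres
    v = speed(len(in_corridor))
    in_corridor = [x + v * dt for x in in_corridor]
    done = [x for x in in_corridor if x >= LENGTH_M]
    in_corridor = [x for x in in_corridor if x < LENGTH_M]
    exit_times.extend([t] * len(done))

print(f"last evacuee clears the corridor at t = {max(exit_times):.0f} s")
```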
27

Models for a carbon constrained, reliable biofuel supply chain network design and management

Marufuzzaman, Mohammad 01 October 2014 (has links)
This dissertation studies two important problems in the field of biomass supply chain networks. In the first part, we study the impact of different carbon regulatory policies, such as a carbon cap, carbon tax, carbon cap-and-trade, and carbon offset mechanism, on the design and management of a biofuel supply chain network under both deterministic and stochastic settings. These mathematical models identify locations and production capacities for biocrude production plants by exploring the trade-offs among transportation costs, facility investment costs, and emissions. The model is solved using a modified L-shaped algorithm. We use the state of Mississippi as a testing ground for the model, and a number of observations are made about the impact of each policy on the biofuel supply chain network.

In the second part, we study the impact of intermodal hub disruption on a biofuel supply chain network. We present a mathematical model that designs a multimodal transportation network for a biofuel supply chain system in which intermodal hubs are subject to site-dependent probabilistic disruptions. The disruption probabilities of the intermodal hubs are estimated with a probabilistic model developed from real-world data. We further extend this model to a mixed integer nonlinear program that allocates intermodal hubs dynamically to cope with biomass supply fluctuations and to hedge against natural disasters. We develop a rolling-horizon Benders decomposition algorithm to solve this challenging NP-hard problem. Numerical experiments show that the proposed algorithm can solve large-scale problem instances to near-optimal solutions in reasonable time. We apply the models to a case study using data from the southeastern United States. Finally, a number of managerial insights are drawn about the impact of intermodal-related risk on supply chain performance.
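To illustrate how a carbon policy can enter such a design model, here is a tiny facility-location MILP in PuLP (an assumed library choice) in which a carbon tax prices transportation emissions alongside investment and shipping cost. Sites, costs, and emission factors are invented, and the stochastic extensions, L-shaped algorithm, and Benders decomposition of the dissertation are not attempted.

```python
# Illustrative sketch, not the dissertation's model: a carbon tax added to the
# objective of a tiny facility-location MILP. All data are invented.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

plants = ["siteA", "siteB"]
farms = ["f1", "f2", "f3"]
supply = {"f1": 200, "f2": 150, "f3": 180}        # tons of biomass available
open_cost = {"siteA": 50000, "siteB": 42000}      # annualized investment cost
ship_cost = {("f1","siteA"): 6, ("f1","siteB"): 9, ("f2","siteA"): 8,
             ("f2","siteB"): 5, ("f3","siteA"): 7, ("f3","siteB"): 6}   # $/ton
emis = {k: 0.02 * c for k, c in ship_cost.items()}  # tCO2/ton, proxy for distance
carbon_tax = 40.0                                   # $/tCO2
demand = 400                                        # tons required at plants

m = LpProblem("carbon_tax_biofuel", LpMinimize)
y = {p: LpVariable(f"open_{p}", cat=LpBinary) for p in plants}
x = {(f, p): LpVariable(f"ship_{f}_{p}", lowBound=0) for f in farms for p in plants}

m += (lpSum(open_cost[p] * y[p] for p in plants)
      + lpSum((ship_cost[f, p] + carbon_tax * emis[f, p]) * x[f, p]
              for f in farms for p in plants))

for f in farms:
    m += lpSum(x[f, p] for p in plants) <= supply[f]
for p in plants:
    m += lpSum(x[f, p] for f in farms) <= 1000 * y[p]   # only ship to open plants
m += lpSum(x[f, p] for f in farms for p in plants) >= demand

m.solve()
print({p: int(value(y[p])) for p in plants}, value(m.objective))
```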
28

Blood Glucose Management of Streptozotocin-Induced Diabetic Rats by Artificial Neural Network Based Model Predictive Control

Bahremand, Saeid 12 March 2017 (has links)
Diabetes is a group of metabolic diseases in which the body's pancreas does not produce enough insulin or does not properly respond to the insulin produced, resulting in high blood sugar levels over a prolonged period. There are several different types of diabetes, but the most common forms are type 1 and type 2 diabetes. Type 1 diabetes mellitus (T1DM) can occur at any age but is most commonly diagnosed from infancy to the late 30s. In type 1 diabetes the pancreas produces little to no insulin, because the body's immune system destroys the insulin-producing cells in the pancreas. Those diagnosed with type 1 diabetes must inject insulin several times every day or continually infuse insulin through a pump, as well as manage their diet and exercise habits. If not treated appropriately, the disease can cause serious complications such as cardiovascular disease, stroke, kidney failure, foot ulcers, and damage to the eyes.

During the past decade, researchers have developed the artificial pancreas (AP) to ease management of diabetes. The AP has three components: a continuous glucose monitor (CGM), an insulin pump, and a closed-loop control algorithm. Researchers have developed algorithms for blood glucose level (BGL) control based on techniques such as Proportional Integral Derivative (PID) control and Model Predictive Control (MPC); however, variability in metabolism between and within individuals hinders reliable control.

This study aims to develop an adaptive algorithm using Artificial Neural Network based Model Predictive Control (NN-MPC) to perform proper insulin injections according to BGL predictions in diabetic rats, as groundwork for implementing the NN-MPC algorithm on real subjects. BGL data collected from diabetic rats using a CGM are used, together with insulin injection and meal information, to develop a virtual plant model based on the mathematical model of glucose-insulin homeostasis proposed by Lombarte et al. Since that model describes healthy rats, a revised version with three additional equations representing diabetic rats is used to generate data for training the ANN, which is applicable to the identification of the dynamics and glycemic regulation of rats. The trained ANN is coupled with the MPC algorithm to control the BGL of the plant model within the normal range of 100 to 130 mg/dl by injecting appropriate amounts of insulin. The ANN performed well, with less than 5 mg/dl error (2%) for 5-minute predictions and about 15 mg/dl error (7%) for 30-minute predictions. In addition, the NN-MPC algorithm kept the BGL of diabetic rats within the normal range more than 90 percent of the time without hyperglycemia or hypoglycemia.
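A minimal sketch of the NN-MPC idea, not the thesis code: a hand-written stand-in for the trained ANN predicts the next BGL, candidate insulin doses are rolled forward over a short horizon, and the dose with the lowest predicted deviation from the 100-130 mg/dl band is applied. All coefficients are invented.

```python
# Minimal NN-MPC sketch with a stand-in predictor in place of the trained ANN.
import numpy as np

def nn_predict(bgl, insulin, carbs):
    """Stand-in for the trained ANN: next BGL given current state and inputs."""
    return bgl + 0.8 * carbs - 25.0 * insulin + 2.0   # slow upward drift

def mpc_dose(bgl, carbs, horizon=6, candidates=np.linspace(0.0, 1.0, 21)):
    target_lo, target_hi = 100.0, 130.0
    best_dose, best_cost = 0.0, float("inf")
    for u in candidates:                       # crude exhaustive search over doses
        b, cost = bgl, 0.0
        for k in range(horizon):
            b = nn_predict(b, u if k == 0 else 0.0, carbs if k == 0 else 0.0)
            cost += max(0.0, target_lo - b) ** 2 + max(0.0, b - target_hi) ** 2
        if cost < best_cost:
            best_dose, best_cost = u, cost
    return best_dose

bgl, rng = 180.0, np.random.default_rng(3)
for step in range(12):                         # simulated 5-minute control steps
    dose = mpc_dose(bgl, carbs=0.0)
    bgl = nn_predict(bgl, dose, 0.0) + rng.normal(0.0, 2.0)   # plant ~ predictor + noise
    print(f"t={5*(step+1):3d} min  dose={dose:.2f} U  BGL={bgl:5.1f} mg/dl")
```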
29

The Localization of Free-Form Surfaces by the Maximization of the Shared Extent

Geisler, Jeannette 31 March 2015 (has links)
Feedback, such as the inspection of a part, is a key step in the design and manufacture of complex products. It determines where a product or manufacturing process should be re-evaluated to conform to design specifications. Inspection of a part is typically accomplished by comparing the CAD model to measurements of the manufactured part. This is simple for parts that share a commonality: a central axis, a plane on a flat side, the center of a sphere, etc. When a part does not share such a commonality, as with free-form surfaces, the comparison becomes complex.

The complexity arises in establishing a correspondence between every point on the manufactured part and every point on the design model. Whenever one coordinate system is shifted, the correspondence can be lost and must be re-evaluated, creating an iteration. The demand for many iterations protracts the process and thwarts optimization. It is also challenging to determine mathematically which points should be compared with which: Is the selected point optimal for comparison? Is a higher resolution of points needed? This problem of aligning the coordinate systems of the CAD model and the measured part is termed localization and has been extensively researched [1]. Currently, most algorithms use a line- or surface-fitting technique that minimizes the sum of the squares of the errors, drawing upon Gunnarsson and Prinz's original idea [2]. Such nonlinear approaches may converge to local minima, producing false solutions, and a solution may not be optimal because outliers in the data are averaged in.

This thesis proposes a methodology that automatically aligns the coordinate systems of free-form CAD models to collected manufactured measurements by maximizing the shared extent appropriate to the dimension, with resilience to outliers of the fit and to false solutions from local minima. To perform this, data from the manufactured surface and the design surface are polygonized and compared until geometrically similar. The overlapping or intersecting extent is then calculated, depending on the dimension, and maximized using a heuristic approach, particle swarm optimization. At the maximum shared extent, the two coordinate systems should be aligned in the optimal position. In this work, only two-dimensional free-form curves are used to determine whether maximizing the shared extent yields an optimal solution, reducing the complexity relative to three dimensions. The results validated the approach: the manufactured curve was aligned to the design, as measured by the sum of the squared errors. The method was also found to resist outliers, demonstrated by tight alignment of consistently sloped regions even where peak and valley features were not aligned. The observed error is due mainly to inaccurate polygon geometry between the curves rather than to the shared-extent maximization itself.
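As an illustration of the alignment idea (not the thesis implementation), the sketch below runs a plain particle swarm over a 2D rigid transform to maximize the overlapping area of two polygonized profiles, using shapely as an assumed geometry dependency and invented toy shapes in place of measured and design curves.

```python
# Illustrative sketch: PSO over (tx, ty, theta) maximizing the shared area of
# two polygonized profiles. Toy shapes stand in for measured and design data.
import numpy as np
from shapely.geometry import Polygon
from shapely import affinity

design = Polygon([(0, 0), (4, 0), (5, 2), (2, 3), (0, 2)])
measured = affinity.rotate(affinity.translate(design, 1.3, -0.7), 12, origin="centroid")

def shared_area(params):
    tx, ty, theta = params
    moved = affinity.rotate(affinity.translate(measured, tx, ty), theta, origin="centroid")
    return design.intersection(moved).area

# plain-vanilla particle swarm optimization
rng = np.random.default_rng(0)
n, dims, iters = 30, 3, 150
lo, hi = np.array([-3.0, -3.0, -45.0]), np.array([3.0, 3.0, 45.0])
pos = rng.uniform(lo, hi, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_val = pos.copy(), np.array([shared_area(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([shared_area(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("best transform (tx, ty, deg):", gbest, " shared area:", shared_area(gbest))
```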
30

Agent based human behavior modeling: A knowledge engineering based systems methodology for integrating social science frameworks for modeling agents with cognition, personality and culture

Bharathy, Gnana K. Unknown Date (has links)
Thesis (Ph.D.)--University of Pennsylvania, 2006. UMI: AAI3246140. Adviser: Barry Silverman. Source: Dissertation Abstracts International, Volume 67-12, Section B, page 7352.
