511

OPTIMIZATION TECHNIQUES FOR PHARMACEUTICAL MANUFACTURING AND DESIGN SPACE ANALYSIS

Daniel Joseph Laky (13120485) 21 July 2022 (has links)
In this dissertation, numerical analysis frameworks and software tools for digital design of process systems are developed. More specifically, these tools are focused on digital design within the pharmaceutical manufacturing space. Batch processing represents the traditional and still predominant pathway to manufacture pharmaceuticals in both the drug substance and drug product spaces. Drug substance processes start with raw materials or precursors to produce an active pharmaceutical ingredient (API) through synthesis and purification. Drug product processes take this pure API in powder form, add excipients, and process the powder into consumer doses such as capsules or tablets.

Continuous manufacturing has allowed many other chemical industries to take advantage of real-time process management through process control, process optimization, and real-time detection of off-spec material. The possibility of reducing total unit cleaning time and encouraging green chemistry through solvent reduction or recycling also makes continuous manufacturing an attractive alternative to batch manufacturing. However, to fully understand and take advantage of real-time process management, digital tools are required, both as soft sensors during process control and during process design and optimization.

Since the shift from batch to continuous manufacturing will proceed in stages, processes will likely adopt both continuous and batch unit operations in the same process, which we will call hybrid pharmaceutical manufacturing routes. Even though these processes will soon become common in the industry, digital tools that address the comparison of batch, hybrid, and continuous manufacturing routes in the pharmaceutical space are lacking. This is especially true when considering hybrid routes. For this reason, PharmaPy, an open-source tool for pharmaceutical process development, was created to address rapid in-silico design of hybrid pharmaceutical processes.

Throughout this work, the focus is on analyzing alternative operating modes within the drug substance manufacturing context. First, the mathematical models for PharmaPy's synthesis, crystallization, and filtration units are discussed. Then, the simulation capabilities of PharmaPy are highlighted, showcasing dynamic simulation of both fully continuous and hybrid processes. However, the technical focus of the work as a whole is primarily on optimization techniques for pharmaceutical process design. Thus, several derivative-free optimization frameworks for simulation-optimization were constructed and used with PharmaPy performing simulations of pharmaceutical processes.

The timeline of work originally began with derivative-based methods to solve mixed-integer programs (MIPs) for water network sampling and security, as well as nonlinear programs (NLPs) and some mixed-integer nonlinear programs (MINLPs) for design space and feasibility analysis. Therefore, a method for process design was implemented that combines the ease of implementation of a process simulator (PharmaPy) with the computational performance of derivative-based optimization. Recent developments in Pyomo through the PyNumero package allow callbacks to an input-output or black-box model while using Ipopt as a derivative-based solver through the cyipopt interface. Using this approach, it was found that using a PharmaPy simulation as a black box within a derivative-based solver resulted in quicker solve times than traditional derivative-free optimization strategies, and offers a much quicker implementation strategy than a simultaneous, equation-oriented algebraic definition of the problem.

Uncertainty also exists in virtually all process systems. Traditionally, uncertainty is analyzed through sampling approaches such as Monte Carlo simulation. These sampling approaches quickly become computational obstacles as problem scale increases. In the 1980s, chemical plant design under uncertainty through flexibility analysis became an option for explicitly considering model uncertainty using mathematical programming. However, such formulations provide computational obstacles of their own, as most process models produce challenging MINLPs under the flexibility analysis framework.

Specifically for pharmaceutical processes, recent initiatives by the FDA have piqued interest in flexibility analysis because of the so-called design space. The design space is the region over which critical quality attributes (CQAs) can be guaranteed for a given set of interactions between the inputs and process parameters. Since uncertainty is intrinsic to such operations, industry is interested in guaranteeing that CQAs hold with a set confidence level over a given operating region. In this work, the probabilistic design space defined by these levels of confidence is presented to address the computational advantages of using a fully model-based flexibility analysis framework instead of a Monte Carlo sampling approach. The results show that the flexibility analysis framework decreased design space identification time by more than two orders of magnitude.

Given the implementation difficulty of new digital tools for both students and professionals, educational material was developed for PharmaPy and presented as part of a pharmaceutical API process development course at Purdue. The students were surveyed afterward; many found the framework approachable through the use of Jupyter notebooks and would consider using PharmaPy and Python for pharmaceutical modeling and data analysis in the future.

Through software development and the development of numerical analysis frameworks, digital design of pharmaceutical processes has expanded and become more approachable. The incorporation of rigorous simulations under process uncertainty promotes the use of digital tools in regulatory filings and reduces unnecessary process development costs through model-based design. Examples of these improvements are evident in the development of PharmaPy, a simulation-optimization framework using PharmaPy, and flexibility analysis tools. These tools provided a computational benefit of one to two orders of magnitude compared to methods used in practice and, in some cases, reduced the modeling time required to determine optimal operating conditions or the design space of a pharmaceutical manufacturing process.
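As a rough sketch of the black-box simulation-optimization pattern described above, the snippet below drives a stand-in flowsheet function with Ipopt through cyipopt's SciPy-style interface, supplying finite-difference gradients. The `run_flowsheet` function and its decision variables are hypothetical placeholders for a PharmaPy simulation, not the dissertation's actual model.

```python
import numpy as np
from scipy.optimize import approx_fprime
from cyipopt import minimize_ipopt  # SciPy-style wrapper around the Ipopt solver

def run_flowsheet(x):
    """Hypothetical black-box simulation: maps decision variables
    (e.g., cooling rate, antisolvent flow) to a cost to be minimized.
    A real study would run the PharmaPy flowsheet here."""
    cooling_rate, antisolvent_flow = x
    yield_loss = (cooling_rate - 0.8) ** 2 + 0.5 * (antisolvent_flow - 1.2) ** 2
    solvent_cost = 0.1 * antisolvent_flow
    return yield_loss + solvent_cost

def fd_gradient(x):
    """Finite-difference gradient, since the black box provides no derivatives."""
    return approx_fprime(x, run_flowsheet, 1e-6)

x0 = np.array([0.5, 1.0])              # initial operating point
bounds = [(0.1, 2.0), (0.1, 3.0)]      # operating limits on each decision variable

result = minimize_ipopt(run_flowsheet, x0, jac=fd_gradient, bounds=bounds,
                        options={"max_iter": 200, "tol": 1e-6})
print(result.x, result.fun)
```

Finite differences stand in here for whatever derivative information the callback layer would supply in the dissertation's PyNumero-based setup.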
512

Motion Control of Under-actuated Aerial Robotic Manipulators

Jafarinasab, Mohammad January 2018 (has links)
This thesis presents model-based adaptive motion control algorithms for under-actuated aerial robotic manipulators combining a conventional multi-rotor Unmanned Aerial Vehicle (UAV) and a multi-link serial robotic arm. The resulting control problem is quite challenging due to the complexity of the combined system dynamics, under-actuation, and possible kinematic redundancy. The under-actuation imposes second-order nonholonomic constraints on the system motion and prevents independent control of all system degrees of freedom (DOFs). Desired reference trajectories can only be provided for a selected group of independent DOFs, whereas the references for the remaining DOFs must be determined such that they are consistent with the motion constraints. This restriction prevents the application of common model-based control methods to the problem of this thesis. Using insights from the system's under-actuated dynamics, four motion control strategies are proposed which allow for semi-autonomous and fully-autonomous operation. The control algorithm is fully developed and presented for two of these strategies; its development for the other two configurations follows similar steps and hence is omitted from the thesis. The proposed controllers incorporate the combined dynamics of the UAV base and the serial arm, and properly account for the two degrees of under-actuation in the plane of the propellers. The algorithms develop and employ the second-order nonholonomic constraints to numerically determine motion references for the dependent DOFs that are consistent with the motion constraints. This is a unique feature of the motion control algorithms in this thesis that sets them apart from prior work in the literature on UAV manipulators. The control developments follow the so-called method of virtual decomposition, which, by employing a Newtonian formulation of the UAV-manipulator dynamics, sidesteps the complexities associated with the derivation and parametrization of a lumped Lagrangian dynamics model. The algorithms are guaranteed to produce feasible control commands, as the constraints associated with the under-actuation are explicitly considered in the control calculations. A method is proposed to handle possible kinematic redundancy in the presence of second-order motion constraints. The control design is also extended to include the propeller dynamics, for cases in which such dynamics may significantly impact the system response. A Lyapunov analysis demonstrates the stability of the overall system and the convergence of the motion tracking errors. Experimental results with an octo-copter integrated with a 3-DOF robotic manipulator show the effectiveness of the proposed control strategies. / Thesis / Doctor of Philosophy (PhD)
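As a generic illustration of the constraint-consistency idea above (not the thesis's virtual-decomposition controller), the sketch below solves a second-order constraint of the form A(q, q̇) q̈ = b(q, q̇) for the dependent accelerations given commanded independent accelerations, then integrates them to obtain consistent references; the matrices and DOF partition are hypothetical placeholders.

```python
import numpy as np

def dependent_accels(A, b, qdd_ind, ind_idx, dep_idx):
    """Solve A @ qdd = b for the dependent accelerations, given the
    commanded accelerations of the independent DOFs (second-order
    constraint arising from under-actuation in the propeller plane)."""
    A_ind = A[:, ind_idx]          # columns acting on independent DOFs
    A_dep = A[:, dep_idx]          # columns acting on dependent DOFs
    rhs = b - A_ind @ qdd_ind      # move the known contribution to the right-hand side
    qdd_dep, *_ = np.linalg.lstsq(A_dep, rhs, rcond=None)
    return qdd_dep

# Toy example: 2 constraint rows, 4 DOFs (DOFs 0-1 independent, 2-3 dependent).
A = np.array([[1.0, 0.2, 0.5, 0.0],
              [0.0, 1.0, 0.1, 0.7]])
b = np.array([0.3, -0.1])
qdd_ind = np.array([0.05, 0.02])   # desired accelerations for the independent DOFs
qdd_dep = dependent_accels(A, b, qdd_ind, ind_idx=[0, 1], dep_idx=[2, 3])

# Integrate twice (e.g., forward Euler at the control rate) to obtain velocity
# and position references for the dependent DOFs consistent with the constraint.
dt, qd_dep, q_dep = 0.002, np.zeros(2), np.zeros(2)
qd_dep = qd_dep + qdd_dep * dt
q_dep = q_dep + qd_dep * dt
print(qdd_dep, qd_dep, q_dep)
```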
513

Emotional working memory training, work demands, stress and anxiety in cognitive performance and decision-making under uncertainty

Heath, Amanda J. January 2018 (has links)
The study seeks to bring together literature on decision-making, the effects of work-related demands and stress, and individual differences in trait anxiety in examining near and far transfer effects of emotional working memory training (eWM). A sample of 31 students and working participants underwent either emotional working memory training through an adaptive dual n-back method or a placebo face-matching training task for 14 days. Pre- and post-training measures were taken of a near-transfer task (digit span), a medium-transfer measure of executive control (emotional Stroop), and a far-transfer task of decision-making under uncertainty (the Iowa Gambling Task, IGT). In line with previous studies, eWM training was expected to produce gains in transfer-task performance between pre- and post-training, especially for those scoring high on trait anxiety and workplace measures of stress and demands (taken from the COPSOQ), for whom there is more scope for improvement in emotional regulation. Gains on the emotional Stroop specifically were further expected to support the effects of eWM training on emotional well-being in addition to decision-making. Results fell short of replicating previous work on transfer gains, though interference effects on the Stroop did lessen in the eWM training group. Relationships between work demands, anxiety, stress, and performance in the training itself reinforce previous research showing that work stress and anxiety lead to cognitive failures, highlighting the importance of intervention studies in the organizational field, but they were not linked to benefits of the training. Resource and methodological limitations of the current study are considered, especially those involved in conducting pre-post designs and cognitive testing online.
514

Haptic-Enabled Robotic Arms to Achieve Handshakes in the Metaverse

Mohd Faisal, 26 September 2022 (has links)
Humans are social by nature, and the physical distancing due to COVID has converted many of our daily interactions into virtual ones. Among the negative consequences of this is the loss of an element essential to human well-being: physical touch. With more interactions shifting towards the digital world of the metaverse, we want to provide individuals with the means to include physical touch in their interactions. We explore the prospect of Digital Twin technology to help reduce this impact on humans. We provide a definition of the concept of a Robo Twin and explain its role in mediating human interactions. We also survey research related to the physical representation of Digital Twins, with a focus on under-actuated robotic arms. In this thesis, we first provide findings from the literature to support researchers' decisions in the adoption and use of designs and implementations of Digital Twin robotic arms, and to inform future research on current challenges and gaps in existing work. Subsequently, we design and implement two right-handed under-actuated Digital Twin robotic arms that mediate the physical interaction between two individuals by allowing them to perform a handshake while physically distanced. This experiment serves as a proof of concept for our proposed idea of the Robo Twin. The findings are promising, as our evaluation shows that participants are highly interested in using our system to shake hands with their loved ones when they are physically separated. With this Robo Twin Arm system, we also find a correlation between handshake characteristics and the gender and/or personality traits of the participants, based on the quantitative handshake data collected during the experiment. Moreover, this work is a step towards the design and development of under-actuated Digital Twin robotic arms and ways to enhance the overall user experience with such a system.
515

Multidisciplinary Design Under Uncertainty Framework of a Spacecraft and Trajectory for an Interplanetary Mission

Siddhesh Ajay Naidu (18437880) 28 April 2024 (has links)
<p dir="ltr">Design under uncertainty (DUU) for spacecraft is crucial in ensuring mission success, especially given the criticality of their failure. To obtain a more realistic understanding of space systems, it is beneficial to holistically couple the modeling of the spacecraft and its trajectory as a multidisciplinary analysis (MDA). In this work, a MDA model is developed for an Earth-Mars mission by employing the general mission analysis tool (GMAT) to model the mission trajectory and rocket propulsion analysis (RPA) to design the engines. By utilizing this direct MDA model, the deterministic optimization (DO) of the system is performed first and yields a design that completed the mission in 307 days while requiring 475 kg of fuel. The direct MDA model is also integrated into a Monte Carlo simulation (MCS) to investigate the uncertainty quantification (UQ) of the spacecraft and trajectory system. When considering the combined uncertainty in the launch date for a 20-day window and the specific impulses, the time of flight ranges from 275 to 330 days and the total fuel consumption ranges from 475 to 950 kg. The spacecraft velocity exhibits deviations ranging from 2 to 4 km/s at any given instance in the Earth inertial frame. The amount of fuel consumed during the TCM ranges from 1 to 250 kg, while during the MOI, the amount of fuel consumed ranges from 350 to 810 kg. The usage of the direct MDA model for optimization and uncertainty quantification of the system can be computationally prohibitive for DUU. To address this challenge, the effectiveness of utilizing surrogate-based approaches for performing UQ is demonstrated, resulting in significantly lower computational costs. Gaussian processes (GP) models trained on data from the MDA model were implemented into the UQ framework and their results were compared to those of the direct MDA method. When considering the combined uncertainty from both sources, the surrogate-based method had a mean error of 1.67% and required only 29% of the computational time. When compared to the direct MDA, the time of flight range matched well. While the TCM and MOI fuel consumption ranges were smaller by 5 kg. These GP models were integrated into the DUU framework to perform reliability-based design optimization (RBDO) feasibly for the spacecraft and trajectory system. For the combined uncertainty, the DO design yielded a poor reliability of 54%, underscoring the necessity for performing RBDO. The DUU framework obtained a design with a significantly improved reliability of 99%, which required an additional 39.19 kg of fuel and also resulted in a reduced time of flight by 0.55 days.</p>
516

Robust Post-donation Blood Screening under Limited Information

El-Amine, Hadi 10 June 2016 (has links)
Blood products are essential components of any healthcare system, and their safety, in terms of being free of transfusion-transmittable infections, is crucial. While the Food and Drug Administration (FDA) in the United States requires all blood donations to be tested for a set of infections, it does not dictate which particular tests should be used by blood collection centers. Multiple FDA-licensed blood screening tests are available for each infection, but all screening tests are imperfectly reliable and have different costs. In addition, infection prevalence rates and several donor characteristics are uncertain, while surveillance methods are highly resource- and time-intensive. Therefore, only limited information is available to budget-constrained blood collection centers that need to devise a post-donation blood screening scheme so as to minimize the risk of an infectious donation being released into the blood supply. Our focus is on "robust" screening schemes under limited information. Toward this goal, we consider various objectives, and characterize structural properties of the optimal solutions under each objective. This allows us to gain insight and to develop efficient algorithms. Our research shows that using the proposed optimization-based approaches provides robust solutions with significantly lower expected infection risk compared to other testing schemes that satisfy the FDA requirements. Our findings have important public policy implications. / Ph. D.
517

When Does Reading Aloud Become Teaching? : An Analysis of Teachers' Methods and Conceptions in Grades F-3.

Alves Pereira, Nadia January 2024 (has links)
The study aims to investigate how and why teachers in grades F-3 use reading aloud as a pedagogical tool to promote the role of reading in the early school years. Through interviews and observations, the teachers' conceptions of the purposes of reading aloud and their approaches are analyzed, along with possible challenges to its effectiveness as a pedagogical tool. The study is grounded in the theoretical concept of Pedagogical Content Knowledge (PCK) and uses a phenomenographic method. The investigation focuses on the work of two teachers and their conceptions of reading aloud. The results show that a large part of the teachers' working methods during read-aloud sessions focuses on creating an interactive reading environment in which students' engagement and participation are prioritized. How often read-aloud sessions occur and which didactic strategies the teachers apply vary depending on their individual approaches and their goals for reading instruction. One of the teachers used a more active and direct methodology, while the other tended to create a relaxed environment for the students. The results also showed that the teachers' working methods align well with recommendations from previous research. Finally, the results show that there does not appear to be a universal method for reading aloud; rather, the teachers adapt their approaches to the students' needs and the dynamics of the class. Both open and closed questions are used, and the choice of strategy appears to depend on the teachers' own conceptions of the purpose of reading aloud and their specific teaching goals.
518

Systematic Review and Meta-Analysis of the Diagnostic Performance of Stockholm3: A Methodological Evaluation

Heiter, Linus, Skagerlund, Hampus January 2024 (has links)
This thesis investigates two questions: the methodological strengths and weaknesses of meta-analysis and the diagnostic performance of the Stockholm3 test for clinically significant prostate cancer. Through a systematic review and meta-analysis, we explore the robustness and limitations of meta-analysis, focusing on aspects such as bias assessment, heterogeneity, and the impact of the file-drawer problem. Applying these methods, we evaluate the Stockholm3 test’s performance, comparing it to the conventional Prostate-Specific Antigen (PSA) test. Our analysis synthesizes data from four studies consisting of 6 497 men, indicating that the Stockholm3 test offers improved diagnostic accuracy, with a higher pooled Area Under the Curve (AUC), in turn suggesting better identification of clinically significant prostate cancer. Nonetheless, the study also reveals challenges within the practice of meta-analysis, including variation among study methodologies and the presence of bias. These findings highlight the dual purpose of the research: demonstrating the utility and drawbacks of meta-analysis and validating the Stockholm3 test’s potential as a diagnostic tool. The conclusions drawn emphasize the need for continued research to enhance both meta-analytic methods and the clinical applicability of the Stockholm3 test in broader populations.
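For readers unfamiliar with the pooling step, the sketch below applies a standard DerSimonian-Laird random-effects combination to study-level AUC estimates; the four AUC values and standard errors are hypothetical placeholders rather than the thesis's data.

```python
import numpy as np

# Hypothetical study-level AUC estimates and standard errors (illustrative only).
auc = np.array([0.74, 0.76, 0.72, 0.75])
se = np.array([0.02, 0.03, 0.025, 0.02])

w_fixed = 1.0 / se**2                          # inverse-variance (fixed-effect) weights
auc_fixed = np.sum(w_fixed * auc) / np.sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w_fixed * (auc - auc_fixed) ** 2)   # Cochran's Q
df = len(auc) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled AUC and its 95% confidence interval.
w_rand = 1.0 / (se**2 + tau2)
auc_pooled = np.sum(w_rand * auc) / np.sum(w_rand)
se_pooled = np.sqrt(1.0 / np.sum(w_rand))
print(f"pooled AUC = {auc_pooled:.3f} "
      f"(95% CI {auc_pooled - 1.96*se_pooled:.3f}-{auc_pooled + 1.96*se_pooled:.3f}), "
      f"tau^2 = {tau2:.4f}")
```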
519

Using random forest and decision tree models for a new vehicle prediction approach in computational toxicology

Mistry, Pritesh, Neagu, Daniel, Trundle, Paul R., Vessey, J.D. 22 October 2015 (has links)
Drug vehicles are chemical carriers that provide beneficial aid to the drugs they bear. Taking advantage of their favourable properties can potentially allow the safer use of drugs that are considered highly toxic. A means of vehicle selection without experimental trial would therefore be of benefit in saving time and money for the industry. Although machine learning is increasingly used in predictive toxicology, to our knowledge there is no reported work on using machine learning techniques to model drug-vehicle relationships for vehicle selection to minimise toxicity. In this paper we demonstrate the use of data mining and machine learning techniques to process, extract and build models based on classifiers (decision trees and random forests) that allow us to predict which vehicle would be most suited to reduce a drug's toxicity. Using data acquired from the National Institutes of Health's (NIH) Developmental Therapeutics Program (DTP), we propose a methodology using an area under the curve (AUC) approach that allows us to distinguish which vehicle provides the best toxicity profile for a drug and to build classification models based on this knowledge. Our results show that we can achieve prediction accuracies of 80% using random forest models, whilst the decision tree models produce accuracies in the 70% region. We consider our methodology widely applicable within the scientific domain and beyond for comprehensively building classification models for the comparison of functional relationships between two variables.
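A compact sketch of the modeling approach described in the abstract, using scikit-learn; the descriptor matrix below is random placeholder data rather than the NIH DTP dataset, and the binary label simply marks which of two hypothetical vehicles gives the better (larger-AUC) toxicity profile for each drug.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Placeholder data: 500 drugs x 20 descriptors; label 1 means "vehicle A gives
# the better toxicity profile", 0 means vehicle B does.
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
dt = DecisionTreeClassifier(max_depth=5, random_state=0)

# Cross-validated accuracy of each classifier on the vehicle-selection labels.
rf_acc = cross_val_score(rf, X, y, cv=5).mean()
dt_acc = cross_val_score(dt, X, y, cv=5).mean()
print(f"random forest accuracy: {rf_acc:.2f}, decision tree accuracy: {dt_acc:.2f}")
```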
520

Adequacy of penal sanctions: A comparison of the punishment of traffic criminal offences and related traffic administrative offences committed under the influence of addictive substances in the Czech and Slovak Republics.

Mikuš, Michal January 2017 (has links)
This thesis surveys the adequacy of punishment for traffic offences committed under the influence of alcohol and other addictive substances. The theoretical part presents the knowledge necessary for reviewing the adequacy of punishment, such as theories of punishment, the purpose of punishment, and the principle of adequacy. The special part compares the valid and effective law in the Czech and Slovak Republics. The practical part consists of an analysis of decisions delivered by the County Traffic Inspectorate Banská Bystrica, the County Traffic Inspectorate Bratislava I, the Banská Bystrica County Court, the Bratislava I County Court, the City Hall of Zlín, the City Hall of Prague, the Zlín County Court, and the Prague 2 Circuit Court. The analysis reviews not only the adequacy of punishment but also all substantive and procedural deficiencies that occurred in the decisions of the particular state bodies. Finally, a comparison is provided of the outcomes of the analysis, which stemmed from decisions of national...
