A Multi-Fidelity Approach to Testing and Evaluation of AI-Enabled Systems

Robert Joseph Seif (19206790) 27 July 2024
Approaches to system testing and evaluation (T&E) are becoming increasingly relevant as artificial intelligence (AI) and machine learning (ML) technology spreads across industry. As the AI/ML landscape continues to develop, greater amounts of data are required to build the next generation of technology. Multiple communities have worked to create frameworks for interacting with data at such scales, yet a gap persists in the ability to utilize data generated throughout the development process in a T&E program. The objective of this thesis is to address this gap through a multi-fidelity approach to the test and evaluation of AI-enabled systems. The approach is built on a space of models that visualizes similarities and differences between individual models. Once requirements are organized alongside the potential tests that models can be employed to fulfill, models are selected sequentially for testing, so as to maximize a utility that depends on model performance and cost to the T&E team. Experimentation was conducted on the case of an autonomous vehicle (AV) perception system, with models constructed from a driving simulation of the Purdue University campus. Results show that the proposed approach, when paired with Bayesian optimization for sequential test selection through an expected improvement acquisition function, can select models in a manner that minimizes uncertainty and cost for the test team. Computational experiments show that the approach can be used to develop test combinations that minimize cost and maximize utility while maximizing the information a T&E team has about how well a system can meet a set of testing requirements in operational conditions.
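
For readers unfamiliar with the acquisition function named above, the following is a minimal sketch of expected improvement (EI) over a set of candidate models, assuming a Gaussian-process-style surrogate that supplies a mean and standard deviation of utility per candidate; all numbers are invented stand-ins, not values from the thesis.

```python
# Minimal sketch of expected-improvement (EI) test selection. The surrogate
# predictions, incumbent utility, and exploration parameter xi are
# illustrative; the thesis's actual utility and cost terms are not reproduced.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_utility, xi=0.01):
    """EI of each candidate given surrogate mean/std and the incumbent utility."""
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (mu - best_utility - xi) / sigma
    return (mu - best_utility - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical surrogate predictions for five candidate models (utility units)
mu = np.array([0.62, 0.71, 0.55, 0.68, 0.74])
sigma = np.array([0.05, 0.15, 0.02, 0.20, 0.03])
best = 0.70                                   # best utility observed so far
ei = expected_improvement(mu, sigma, best)
next_model = int(np.argmax(ei))               # model to test next
print(f"EI per model: {ei.round(4)}; test model {next_model} next")
```

Note how the candidate with high predicted utility but low uncertainty can lose to a more uncertain candidate: EI trades off exploiting good models against exploring poorly characterized ones.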

Information Extraction from Pilot Weather Reports (PIREPs) using a Structured Two-Level Named Entity Recognition (NER) Approach

Shantanu Gupta (18881197) 03 July 2024
Weather conditions such as thunderstorms, wind shear, snowstorms, turbulence, icing, and fog can create potentially hazardous flying conditions in the National Airspace System (NAS) (FAA, 2021). In general aviation (GA), hazardous weather conditions are those most likely to cause accidents with fatalities (FAA, 2013). Therefore, it is critical to communicate weather conditions to pilots and controllers to increase awareness of such conditions, help pilots avoid weather hazards, and improve aviation safety (NTSB, 2017b). Pilot Reports (PIREPs) are one way to communicate pertinent weather conditions encountered by pilots (FAA, 2017a). However, in a hazardous weather situation, communication adds to pilot workload, and GA pilots may need to aviate and navigate to another area before feeling safe enough to communicate the weather conditions. The delay in communication may result in PIREPs that are both inaccurate and untimely, potentially misleading other pilots in the area with incorrect weather information (NTSB, 2017a). Therefore, it is crucial to enhance the PIREP submission process to improve the accuracy, timeliness, and usefulness of PIREPs, while simultaneously reducing the need for hands-on communication.

In this study, a potential method to incrementally improve the performance of an automated spoken-to-coded-PIREP system is explored. This research aims to improve the information extraction model within the spoken-to-coded-PIREP system by using underlying structures and patterns in pilots' spoken phrases. The first part of the research explores the structural elements, patterns, and sub-level variability in Location, Turbulence, and Icing pilot phrases. The second part develops and demonstrates a structured two-level Named Entity Recognition (NER) model that utilizes the underlying structures within pilot phrases. The structured two-level NER model is designed, developed, tested, and compared with the initial single-level NER model in the spoken-to-coded-PIREP system. The model follows a structured approach to extract information at two levels within three PIREP information categories: Location, Turbulence, and Icing. The two-level NER model is trained and tested using a total of 126 PIREPs containing Turbulence and Icing weather conditions, and its performance is compared with that of a comparable single-level initial NER model using three metrics: precision, recall, and F1-score. The overall F1-score of the initial single-level NER model was in the range of 68%–77%, while the two-level NER model achieved an overall F1-score in the range of 89%–92%. The two-level NER model successfully recognized and labelled specific phrases with broader entity labels such as Location, Turbulence, and Icing, and then processed those phrases to segregate their structural elements, such as Distance, Location Name, Turbulence Intensity, and Icing Type. With improvements to the information extraction model, the performance of the overall spoken-to-coded-PIREP system may be increased, and the system may be better equipped to handle variations in pilot phrases and weather situations. Automating the PIREP submission process may reduce the pilot's hands-on task requirement for submitting a PIREP during hazardous weather situations, potentially increase the quality and quantity of PIREPs, and share accurate weather-related information in a timely manner, ultimately making GA flying safer.
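
To make the two-level idea concrete, here is a toy sketch in Python: a first level tags coarse entity spans, and a second level segregates their structural elements. The phrase and regex patterns are invented for illustration only; the thesis trains NER models on pilot speech rather than using hand-written rules.

```python
# Toy illustration of the two-level structure described in the abstract:
# level 1 finds a coarse span (Turbulence, Location), level 2 extracts the
# structural elements inside it (intensity, distance, location name).
import re

PHRASE = "moderate turbulence two zero miles south of Lafayette"

# Level 1: coarse entity spans (stand-ins for the first NER level)
turb = re.search(r"(light|moderate|severe) turbulence", PHRASE)
loc = re.search(r"(\w+(?: \w+)?) miles (north|south|east|west) of (\w+)", PHRASE)

# Level 2: segregate structural elements inside each coarse span
entities = {}
if turb:
    entities["Turbulence"] = {"Turbulence Intensity": turb.group(1)}
if loc:
    entities["Location"] = {
        "Distance": loc.group(1),        # e.g., "two zero" (spoken 20 NM)
        "Direction": loc.group(2),
        "Location Name": loc.group(3),
    }
print(entities)
# {'Turbulence': {'Turbulence Intensity': 'moderate'},
#  'Location': {'Distance': 'two zero', 'Direction': 'south',
#               'Location Name': 'Lafayette'}}
```

The design point is that the second level only ever sees a short, already-labelled span, which is what lets it handle the sub-level variability the abstract describes.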

Prediction of Change in Quality of 'Cripps Pink' Apples during Storage

Pham, Van Tan January 2008
Doctor of Philosophy (PhD) / The goal of this research was to investigate changes in the physiological properties of 'Cripps Pink' apples, including firmness, stiffness, weight, background colour, ethylene production, and respiration, under different temperature and atmosphere storage conditions. The research also sought to establish mathematical models for predicting changes in the firmness and stiffness of the apples during normal atmosphere (NA) storage. Experiments were conducted to determine the quality changes in 'Cripps Pink' apples under three sets of storage conditions. The first consisted of NA storage at 0 °C, 2.5 °C, 5 °C, 10 °C, 20 °C, and 30 °C. In the second, the apples were placed in NA cold storage at 0 °C for 61 days, followed by NA storage at the aforementioned six temperatures. The third consisted of controlled atmosphere (CA) storage (2 kPa O2 : 1 kPa CO2) at 0 °C for 102 days, followed by NA storage at the six temperatures mentioned previously. The firmness, stiffness, weight loss, skin colour, ethylene production, and carbon dioxide production of the apples were monitored at specific time intervals during storage. Firmness was measured using a HortPlus Quick Measure Penetrometer (HortPlus Ltd, Hawke's Bay, New Zealand); stiffness was measured using a commercial acoustic firmness sensor (AFS; AWETA, Nootdorp, The Netherlands). Experimental data analysis was performed using GraphPad Prism 4.03 (2005). The least-squares method and iterative non-linear regression were used to model and simulate changes in firmness and stiffness in GraphPad Prism 4.03 (2005) and DataFit 8.1 (2005). The experimental results indicated that the firmness and stiffness of 'Cripps Pink' apples stored in NA decreased with increasing temperature and time. Under NA, the softening pattern was tri-phasic for apples stored at 0 °C, 2.5 °C, and 5 °C for firmness, and at 0 °C and 2.5 °C for stiffness; there were only two softening phases for apples stored at higher temperatures. NA storage at 0 °C, 2.5 °C, and 5 °C improved skin background colour and extended the storage life of the apples compared with higher temperatures. CA during the first stage of storage better maintained the firmness and stiffness of the apples; however, it reduced subsequent ethylene and carbon dioxide (CO2) production after removal from storage. Steep increases in ethylene and CO2 production coincided with rapid softening of the fruit flesh and yellowing of the skin background colour under NA conditions. The exponential decay model was the best model for predicting changes in the firmness, stiffness, and keeping quality of the apples: it satisfied the biochemical theory of softening in the apple and fitted the experimental data best over the wide range of temperatures. The softening rate increased exponentially with storage temperature, complying with the Arrhenius equation. Therefore, a combination of the exponential decay model with the Arrhenius equation was found to best characterise the softening process and to predict changes in the firmness and stiffness of apples stored at different temperatures under NA conditions.
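
As a rough illustration of the model family described above, the sketch below combines first-order exponential decay of firmness with an Arrhenius temperature dependence of the softening rate. All parameter values (activation energy, reference rate, initial firmness, and firmness floor) are assumed for illustration and are not the fitted values from this research.

```python
# Exponential-decay firmness model with an Arrhenius rate constant, in the
# spirit of the combined model the abstract describes. Parameters are assumed.
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
Ea = 80e3          # activation energy (J/mol), assumed
k_ref = 0.010      # decay rate (1/day) at the reference temperature, assumed
T_ref = 273.15     # reference temperature: 0 °C in kelvin

def decay_rate(T_celsius):
    """Arrhenius temperature dependence of the softening rate."""
    T = T_celsius + 273.15
    return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def firmness(t_days, T_celsius, F0=80.0, F_inf=40.0):
    """Exponential decay from initial firmness F0 toward a floor F_inf (N)."""
    k = decay_rate(T_celsius)
    return F_inf + (F0 - F_inf) * np.exp(-k * t_days)

for T in (0, 5, 20, 30):
    print(f"{T:>2} °C: firmness after 60 days = {firmness(60, T):.1f} N")
```

The Arrhenius factor makes the rate constant grow exponentially with temperature, which reproduces the qualitative finding that warmer NA storage softens the fruit much faster.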

Control-oriented scheduling of a dynamic hybrid simulator: application to batch processes

Fabre, Florian 20 October 2009
This thesis presents work aiming to integrate a scheduling module (ProSched) into the dynamic hybrid modeling and simulation environment PrODHyS, in order to automate the generation of simulation scenarios for batch processes from a recipe and a list of production orders. The methodology developed rests on a mixed optimization/simulation approach. In this context, three key points were developed. First, reusable components (recipe classes) were designed and developed for hierarchical, systematic modeling of the sequencing of unit operations; to this end, the notions of Task token and parameterizable macro-place were introduced into the RdPDO formalism, allowing recipes to be modeled by assembling these predefined components. Second, a generic mathematical scheduling model based on a well-established graphical formalism (RTN) was defined; it models the main characteristics of batch processes and provides all the input data required by the simulation model. For this, a MILP model based on the Unit Specific Event formulation was implemented. Finally, the interface between the optimization model and the simulation model was defined through the concepts of control places and decision centers at the simulator level, and various strategies for coupling optimization and simulation are proposed. The potential of the approach is illustrated by the simulation of a complete process.
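
As a rough sketch of the event-based MILP family the abstract refers to, the toy model below (using the PuLP library) assigns tasks to event points on a single unit and minimizes makespan. The task data are invented, and the thesis's full RTN-based Unit Specific Event model is far richer (multiple units, material balances, and coupling with the PrODHyS simulator).

```python
# Deliberately tiny MILP in the spirit of a unit-specific-event formulation:
# binary w[i][n] assigns task i to event point n on one unit, and event start
# times must respect task durations. Toy data, single unit, makespan objective.
import pulp

tasks = {"A": 3.0, "B": 2.0, "C": 4.0}   # processing times (h), assumed
events = range(len(tasks))               # one event point per task

prob = pulp.LpProblem("toy_use_scheduling", pulp.LpMinimize)
w = pulp.LpVariable.dicts("w", (tasks, events), cat="Binary")
t = pulp.LpVariable.dicts("t", events, lowBound=0)   # event start times
makespan = pulp.LpVariable("makespan", lowBound=0)

for i in tasks:       # every task starts at exactly one event
    prob += pulp.lpSum(w[i][n] for n in events) == 1
for n in events:      # at most one task per event on this unit
    prob += pulp.lpSum(w[i][n] for i in tasks) <= 1
for n in list(events)[:-1]:   # event n+1 starts after the task at event n ends
    prob += t[n + 1] >= t[n] + pulp.lpSum(tasks[i] * w[i][n] for i in tasks)
last = len(tasks) - 1
prob += makespan >= t[last] + pulp.lpSum(tasks[i] * w[i][last] for i in tasks)
prob += makespan      # objective: minimize makespan
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(makespan))  # 9.0 h for these toy durations
```

In the mixed optimization/simulation approach described above, a solution like this would be handed to the simulator through control places, which then validates the schedule against the detailed hybrid dynamics.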

Implementation and comparison of the Aircraft Intent Description Language and point-mass Non-Linear Dynamic Inversion approach to aircraft modelling in Modelica

Shreepal, Arcot Manjunath, Vijaya Kumar, Shree Harsha January 2021
The study determines practical modelling and simulation techniques for performing dynamic stability and performance analysis on a 3-degrees-of-freedom (3-DOF) aircraft model using Modelon Impact, a Modelica-based commercial tool. It is based on a conceptual aircraft model for which in-depth details of the aircraft configuration are unknown; the aim is to determine a suitable model that can capture the longitudinal dynamics and aerodynamic constraints of the aircraft during the conceptual design phase. Requirements include short execution time, easy model development, and minimal data needs. This thesis therefore develops plant and control architectures in Modelon Impact that can be used for rapid development of aircraft concepts with adequate fidelity in a longitudinal, mission-based tracking environment. To identify a methodology that mitigates the limitations of a traditional feedback controller in a conceptual aircraft design setting, two methodologies are compared: sequential DAE resolution (SDR) and dynamic inversion (DI) control, both discussed in terms of an object-oriented aircraft model. The advantages and shortcomings of each model are compared through several experiments of increasing longitudinal mission complexity, and the more appropriate of the two for the conceptual stage of aircraft design development is ascertained. The two methodologies are also compared for level of complexity, code structure, readability, and ease of use.
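
As background for the DI methodology compared here, the sketch below inverts a longitudinal point-mass model to recover the thrust and lift that realize commanded accelerations. The mass, drag, and commanded rates are illustrative values, and the thesis's Modelica implementation is not reproduced; this is only the textbook point-mass inversion written out in Python.

```python
# Longitudinal point-mass dynamics (flight-path axes) and the dynamic-
# inversion step: solve the rate equations for the required thrust and lift.
import numpy as np

g = 9.81  # m/s^2

def point_mass_rates(V, gamma, T, D, L, m):
    """Forward model: airspeed, flight-path angle, and altitude rates."""
    V_dot = (T - D) / m - g * np.sin(gamma)
    gamma_dot = (L - m * g * np.cos(gamma)) / (m * V)
    h_dot = V * np.sin(gamma)
    return V_dot, gamma_dot, h_dot

def invert(V, gamma, V_dot_cmd, gamma_dot_cmd, D, m):
    """DI step: thrust and lift that realize the commanded rates."""
    T = m * (V_dot_cmd + g * np.sin(gamma)) + D
    L = m * V * gamma_dot_cmd + m * g * np.cos(gamma)
    return T, L

# Hypothetical trim point: level flight at 120 m/s, 4 kN drag, 8 t aircraft
V, gamma, D, m = 120.0, 0.0, 4000.0, 8000.0
T, L = invert(V, gamma, V_dot_cmd=0.5, gamma_dot_cmd=0.01, D=D, m=m)
print(f"required thrust {T/1e3:.1f} kN, lift {L/1e3:.1f} kN")
# Feeding these back through the forward model recovers the commanded rates.
print(point_mass_rates(V, gamma, T, D, L, m))
```

Because the point-mass equations are affine in thrust and lift, the inversion is exact, which is what makes DI attractive when a traditional feedback controller would need retuning for every new concept.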

Problem-Solving Competence in Complex Technical Systems: Possibilities for Its Development and Promotion in Vocational School Teaching with the Aid of Computer-Based Modeling and Simulation. A Theoretical and Empirical Analysis in Industrial-Technical Vocational Education

Tauschek, Rüdiger 10 July 2006
It can be observed time and again how difficult it is for learners in the industrial-technical field to gain a genuine understanding of complex technical systems, and how hard they find it to develop suitable mental models of sufficient reach for dealing successfully with complex system behaviour in their later occupation. Against the background of this practical relevance for teaching, the overarching aim of this work is to establish computer-based modeling and simulation (CMS) more firmly in vocational school teaching as a means of grasping and mastering complex technical systems, thereby building the vocational competence that is an essential part of the vocational school's educational mandate. A primary task here is to help teachers, when designing suitable learning situations, to decide whether and in what form computer-based modeling and simulation is suited to developing this competence in the classroom and to appropriate diagnostics. Teachers in industrial-technical education are thus shown how the development and promotion of these cross-domain competences should be structured and how they can develop and use them in their own teaching. It becomes evident that complex problem-solving competence cannot be determined or represented by a single comprehensive indicator: the ability to solve complex problems can only be represented as a bundle of individual competences. In a concrete case, however, it can be described quite well which components belong to this bundle. A general, domain-independent complex problem-solving competence that could be applied flexibly in specific contexts and different occupational fields does not exist, partly because knowledge, skills, and competences are bound to context. Only through varied and time-intensive practice can they gradually be generalized across domains. The study presented in the empirical part attempts to contribute to the experimental foundation, so far only weakly developed in the industrial-technical field, of developing and assessing competences with computer-based modeling and simulation (CMS). The main aim of the quasi-experimental, exploratory study was to examine, for a selected control-engineering example in initial vocational training, whether CMS, used as a so-called "cognitive tool", can offer learners an (alternative) way of accessing and opening up the successful handling of complex technical systems. Using the selected control-engineering example, it was investigated whether vocational school trainees with no knowledge of higher mathematics can achieve such access to complex system behaviour. The empirical investigations showed that computer-based modeling and simulation, when developed under certain construction conditions and deployed under certain implementation conditions, yields learning outcomes that satisfy the theoretical requirements for developing and assessing complex problem-solving competence.
In this context it could also be shown that complex problem-solving competence can be operationalized in a vocational domain, and that it can be observed whether trainees are moving towards building the corresponding abilities. Just as important, however, is the finding that the use of computer-based modeling and simulation is not self-sustaining; rather, it presupposes didactic expertise on the part of teachers. The problem of the necessary balance between (learner) construction and (teacher) instruction is presented in detail in this work. Where these conditions are met, designing computer-based learning environments with modeling and simulation to foster problem-solving competence for dealing successfully with complex technical systems leads to a deep elaboration of concepts and relationships.

Dynamic Network Modeling from Temporal Motifs and Attributed Node Activity

Giselle Zeno (16675878) 26 July 2023
The most important networks across domains such as computing, organizations, economics, social interaction, academia, and biology are networks that change over time. For example, an organization has email and collaboration networks (e.g., different people or teams working on a document). Apart from their connectivity changing over time, these networks can carry attributes such as the topic of an email or message, the contents of a document, or the interests of a person in an academic citation or social network. Analyzing these dynamic networks can be critical in decision-making processes. For instance, in an organization, insight into how people from different teams collaborate provides important information that can be used to optimize workflows.

Network generative models provide a way to study and analyze networks. For example, model performance and generalization in tasks like node classification can be benchmarked by evaluating models on synthetic networks generated with varying structure and attribute correlation. In this work, we begin by presenting our systematic study of the impact of graph structure and attribute auto-correlation on the task of node classification using collective inference. This is the first time such an extensive study has been done. We take advantage of a recently developed method that samples attributed networks (although static ones) with varying network structure jointly with correlated attributes. We find that the graph connectivity that contributes to the network auto-correlation (i.e., the local relationships of nodes) and density have the highest impact on the performance of collective inference methods.

Most of the literature to date has focused on static representations of networks, partially due to the difficulty of finding readily available datasets of dynamic networks. Dynamic network generative models can bridge this gap by generating synthetic graphs similar to observed real-world networks. Given that motifs have been established as building blocks for the structure of real-world networks, modeling them can help to generate the graph structure seen and capture correlations in node connections and activity. Therefore, we continue with a study of motif evolution in dynamic temporal graphs. Our key insight is that motifs rarely change configurations in fast-changing dynamic networks (e.g., wedges into triangles, and vice versa), but rather keep reappearing at different times while keeping the same configuration. This finding motivates the generative process of our proposed models, which use temporal motifs as building blocks to generate dynamic graphs with links that appear and disappear over time.

Our first proposed model generates dynamic networks based on motif activity and the roles that nodes play in a motif. For example, a wedge is sampled based on the likelihood of one node having the role of hub, with the two other nodes being the spokes. Our model learns all parameters from observed data, with the goal of producing synthetic graphs with similar graph structure and node behavior. We find that using motifs and node roles helps our model generate the more complex structures and the temporal node behavior seen in real-world dynamic networks.

After observing that using motif node roles helps to capture the changing local structure and behavior of nodes, we extend our work to also consider the attributes generated by nodes' activities. We propose a second generative model for attributed dynamic networks that (i) captures network structure dynamics through temporal motifs, and (ii) extends the structural roles of nodes in motifs to roles that generate content embeddings. Our new model is the first to generate synthetic dynamic networks and sample content embeddings based on motif node roles. To the best of our knowledge, it is the only attributed dynamic network model that can generate new content embeddings, not observed in the input graph but still similar to those of the input graph. Our results show that modeling the network attributes with higher-order structures (e.g., motifs) improves the quality of the networks generated.

The proposed generative models address the difficulty of finding readily available datasets of dynamic networks, attributed or not. This work will also allow others to: (i) generate networks that they can share without divulging individuals' private data, (ii) benchmark model performance, and (iii) explore model generalization under a broader range of conditions, among other uses. Finally, the proposed evaluation measures will shed light on model behavior, allowing fellow researchers to push forward in these domains.
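
A minimal sketch of the reappearance insight, assuming exponential inter-arrival times for a single recurring motif: fit a reappearance rate from observed timestamps, then resample a synthetic timeline. The observed timestamps are invented, and the proposed models additionally learn motif configurations and node roles, which this sketch omits.

```python
# Toy regeneration of one motif's timeline: the motif keeps its configuration
# and only its reappearance times are resampled. Data and rate are invented.
import numpy as np

rng = np.random.default_rng(0)

# Observed timestamps (hours) of one recurring wedge motif in training data
observed = np.array([1.0, 4.5, 6.0, 10.2, 12.8])
rate = 1.0 / np.diff(observed).mean()     # fitted reappearance rate

# Generate a synthetic timeline for this motif over a 24-hour horizon
t, synthetic = observed[0], []
while t < 24.0:
    synthetic.append(t)
    t += rng.exponential(1.0 / rate)      # exponential inter-arrival times
print(np.round(synthetic, 1))
```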

Digital Twin Development and Advanced Process Control for Continuous Pharmaceutical Manufacturing

Yan-Shu Huang (9175667) 25 July 2023
To apply Industry 4.0 technologies and accelerate the modernization of continuous pharmaceutical manufacturing, digital twin (DT) and advanced process control (APC) strategies are indispensable. The DT serves as a virtual representation that mirrors the behavior of the physical process system, enabling real-time monitoring and predictive capabilities. This facilitates real-time release testing (RTRT) and enhances drug product development and manufacturing efficiency by reducing the need for extensive sampling and testing. Moreover, APC strategies are required to address variations in raw material properties and process uncertainties while ensuring that the desired critical quality attributes (CQAs) of in-process materials and final products are maintained. When deviations from quality targets are detected, APC must provide optimal real-time corrective actions, offering better control performance than traditional open-loop control. Progress in DT and APC supports the shift from the Quality-by-Test (QbT) paradigm to those of Quality-by-Design (QbD) and Quality-by-Control (QbC), which emphasize the importance of process knowledge and real-time information in ensuring product quality.

This study focuses on four key elements and their applications in a continuous dry granulation tableting process comprising feeding, blending, roll compaction, ribbon milling, and tableting unit operations. First, the necessity of a digital infrastructure for data collection and integration is emphasized: an ISA-95-based hierarchical automation framework is implemented for continuous pharmaceutical manufacturing, with each level serving specific purposes related to production, sensing, process control, manufacturing operations, and business planning. Second, the investigation of process analytical technology (PAT) tools for real-time measurement is highlighted as a prerequisite for effective real-time process management. For instance, measurement of mass flow rate, a critical process parameter (CPP) in continuous manufacturing, was previously limited to loss-in-weight (LIW) feeders; to overcome this limitation, a novel capacitance-based mass flow sensor (an ECVT sensor) has been integrated into the continuous direct compaction process to capture real-time powder flow rates downstream of the LIW feeders. Additionally, the use of a near-infrared (NIR) sensor for real-time measurement of ribbon solid fraction in dry granulation processes is explored, with proper spectra selection and pre-processing techniques employed to transform the spectra into useful real-time information. Third, the development of quantitative models that link CPPs to CQAs is addressed, enabling effective product design and process control; mechanistic and hybrid models are employed to describe the continuous direct compaction (DC) and dry granulation (DG) processes. Finally, real-time measurements and model predictions make APC strategies feasible. Real-time optimization techniques combine measurements and model predictions to infer unmeasured states and mitigate the impact of measurement noise. In this work, the moving horizon estimation-based nonlinear model predictive control (MHE-NMPC) framework is utilized: MHE provides parameter updates and state estimation, enabling adaptive models using data from a past time window, while NMPC ensures satisfactory setpoint tracking and disturbance rejection by minimizing the error between model predictions and the setpoint over a future time window. The MHE-NMPC framework has been implemented in the tableting process and demonstrated satisfactory control performance even when plant-model mismatch exists. In addition, MHE enables a sensor fusion framework in which at-line and online measurements can be integrated, provided the past time window is sufficiently long. The sensor fusion framework proves beneficial in extending at-line measurements from validation only to real-time decision-making.
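
To illustrate the MHE-NMPC pattern described above, here is a minimal sketch on a scalar toy plant: a least-squares fit over a past window stands in for MHE parameter updating (handling plant-model mismatch), and a receding-horizon optimization stands in for NMPC. The plant, horizons, and tuning are invented and far simpler than the tableting models in this work.

```python
# MHE-like parameter refitting over a past window + NMPC-like receding-horizon
# control of the toy plant y+ = a*y + b*u. Everything here is illustrative.
import numpy as np
from scipy.optimize import minimize

a_true, b_true = 0.9, 0.7          # "plant" (unknown to the controller)
a_model, b_hat = 0.9, 0.5          # model with an initially wrong gain b
setpoint, N_past, N_future = 1.0, 10, 5

y, u_prev = 0.0, 0.0
hist_y, hist_u = [y], []
for k in range(30):
    # --- MHE-like step: refit b on the past window by least squares
    if len(hist_u) >= N_past:
        Y = np.array(hist_y[-N_past - 1:])
        U = np.array(hist_u[-N_past:])
        resid = Y[1:] - a_model * Y[:-1]        # should equal b*U
        b_hat = float(U @ resid / (U @ U + 1e-9))
    # --- NMPC-like step: minimize future tracking error over N_future moves
    def cost(u_seq):
        yp, c = y, 0.0
        for u in u_seq:
            yp = a_model * yp + b_hat * u
            c += (yp - setpoint) ** 2 + 0.01 * u ** 2
        return c
    res = minimize(cost, np.full(N_future, u_prev), method="Nelder-Mead")
    u = float(res.x[0])                         # apply only the first move
    y = a_true * y + b_true * u                 # plant responds
    hist_y.append(y); hist_u.append(u); u_prev = u

print(f"final y = {y:.3f} (setpoint {setpoint}), estimated b = {b_hat:.2f}")
```

Once the past window fills, the refitted gain converges to the plant's true value, and the controller tracks the setpoint despite the initial mismatch, which is the qualitative behavior claimed for MHE-NMPC above.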

DISTRIBUTED MACHINE LEARNING OVER LARGE-SCALE NETWORKS

Frank Lin (16553082) 18 July 2023
The swift emergence and wide-ranging utilization of machine learning (ML) across various industries, including healthcare, transportation, and robotics, have underscored the escalating need for efficient, scalable, and privacy-preserving solutions. Recognizing this, we present an integrated examination of three novel frameworks, each addressing different aspects of distributed learning and privacy: Two Timescale Hybrid Federated Learning (TT-HF), Delay-Aware Federated Learning (DFL), and Differential Privacy Hierarchical Federated Learning (DP-HFL). TT-HF introduces a semi-decentralized architecture that combines device-to-server and device-to-device (D2D) communications. Devices execute multiple stochastic gradient descent iterations on their datasets and sporadically synchronize model parameters via D2D communications. A unique adaptive control algorithm optimizes the step size, D2D communication rounds, and global aggregation period to minimize network resource utilization and achieve a sublinear convergence rate. TT-HF outperforms conventional FL approaches in terms of model accuracy, energy consumption, and resilience against outages. DFL focuses on enhancing distributed ML training efficiency by accounting for communication delays between edge and cloud. It also uses multiple stochastic gradient descent iterations and periodically consolidates model parameters via edge servers. The adaptive control algorithm for DFL mitigates energy consumption and edge-to-cloud latency, resulting in faster global model convergence, reduced resource consumption, and robustness against delays. Lastly, DP-HFL is introduced to combat privacy vulnerabilities in FL. Merging the benefits of FL and Hierarchical Differential Privacy (HDP), DP-HFL significantly reduces the need for differential privacy noise while maintaining model performance, exhibiting an optimal privacy-performance trade-off. Theoretical analysis under both convex and nonconvex loss functions confirms DP-HFL's effectiveness regarding convergence speed, the privacy-performance trade-off, and potential performance enhancement with appropriate network configuration. In sum, the study thoroughly explores TT-HF, DFL, and DP-HFL and their unique solutions to distributed learning challenges such as efficiency, latency, and privacy. These advanced FL frameworks have considerable potential to further enable effective, efficient, and secure distributed learning.
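
A minimal numpy sketch of the semi-decentralized pattern TT-HF describes: local SGD steps on each device, periodic D2D averaging within clusters, and less frequent global aggregation at the server. The quadratic local losses, cluster layout, and synchronization schedule are invented stand-ins for the paper's general setting.

```python
# Two-timescale hybrid FL sketch: fast local SGD, medium-rate D2D consensus
# within clusters, slow global aggregation. Losses are toy quadratics.
import numpy as np

rng = np.random.default_rng(1)
n_dev, dim, lr = 6, 4, 0.1
targets = rng.normal(size=(n_dev, dim))     # each device's local optimum
w = np.zeros((n_dev, dim))                  # per-device model copies
clusters = [[0, 1, 2], [3, 4, 5]]           # D2D neighborhoods, assumed

for t in range(1, 61):
    grads = w - targets                     # gradient of 0.5*||w - target||^2
    w -= lr * grads                         # local SGD step on every device
    if t % 5 == 0:                          # D2D consensus within clusters
        for c in clusters:
            w[c] = w[c].mean(axis=0)
    if t % 20 == 0:                         # global aggregation at the server
        w[:] = w.mean(axis=0)

# Global optimum of the summed quadratics is the mean of the local targets
print("distance to global optimum:",
      np.linalg.norm(w[0] - targets.mean(axis=0)))
```

The design point the sketch mirrors is that cheap D2D averaging keeps cluster members from drifting apart between the expensive device-to-server synchronizations, which is what lets TT-HF reduce global aggregation frequency without sacrificing convergence.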
