471 |
Diabetic Retinopathy Classification Using Gray Level Textural Contrast and Blood Vessel Edge Profile Map
Gurudath, Nikita, January 2014 (has links)
No description available.
|
472 |
AI-WSN: Adaptive and Intelligent Wireless Sensor Networks
Li, Jiakai, 24 September 2012 (has links)
No description available.
|
473 |
Assessing Working Models' Impact on Land Cover Dynamics through Multi-Agent Based Modeling and Artificial Neural Networks: A Case Study of Roanoke, VA
Nusair, Heba Zaid, 30 May 2024 (has links)
The transition towards flexible work arrangements, notably work-from-home (WFH) practices, has prompted significant discourse on their potential to reshape urban landscapes. While existing urban growth models (UGM) offer insights into environmental and economic impacts, there is a need to study urban phenomena in a bottom-up fashion, considering the essential influence of individuals' behavior and decision-making processes at disaggregate and local levels (Brail, 2008, p. 89). Addressing this gap, this study aims to comprehensively understand how evolving work modalities influence urban form and land use patterns by focusing on socioeconomic and environmental factors. This research employs an Agent-Based Model (ABM) and an Artificial Neural Network (ANN), integrated with GIS technologies, to predict future Land Use and Land Cover (LULC) changes within Roanoke, Virginia. The study uniquely explores the dynamic interplay between macro-level policies and micro-level individual behaviors (categorized by employment types, social activities, and residential choices), shedding light on their collective impact on urban morphology.
Contrary to conventional expectations, findings reveal that the currently low rate of WFH practice has not significantly redirected urban development trends towards sprawl but rather has emphasized urban densification, largely influenced by on-site work modalities. This observation is corroborated by WFH ratios not exceeding 10% in any analyzed census tract. Regarding model performance, the integration of micro-agents into the model substantially improved its accuracy from 86% to 89.78%, enabling a systematic analysis of residential preferences between WFH and on-site working (WrOS) agents. Furthermore, logistic regression analysis and decision score maps delineate the distinct spatial preferences of these agent groups, highlighting a pronounced suburban and rural preference among WFH agents, in contrast to the urban-centric inclination of WrOS agents. Utilizing ABM and ANN integrated with GIS technologies, this research advances the precision and complexity of urban growth predictions. The findings contribute valuable insights for urban planners and policymakers and underline the intricate relationships between work modalities and urban structure, challenging existing paradigms and setting a precedent for future urban planning methodologies. / Doctor of Philosophy / As more people start working from home, cities might change in unexpected ways. This study in Roanoke, Virginia, explores how work-from-home (WFH) practices affect urban development. Traditional city growth models look at big-picture trends, but this study dives into the details of workers' individual behaviors and their residential choices.
Using advanced computer models such as machine learning and geographic information systems (GIS), predictions are made about how different work arrangements influence where workers live and how cities expand.
Surprisingly, fewer people work from home than expected. This has not caused cities to spread out more. Instead, Roanoke is expected to become denser in the next ten years because on-site workers tend to live in urban centers, while those who work from home prefer suburban and rural areas and, sometimes, urban ones. Different work arrangements lead to distinct residential preferences. Including workers' individual behaviors in the models increased the model's accuracy from 86% to 89.78%. Logistic regression analysis highlights the factors influencing land use changes, such as proximity to roads, slopes, home values, and wages.
This research helps city planners and policymakers understand working arrangement trends and create better policies to manage urban development. It shows the complex relationship between work practices and city structures, providing valuable insights for future city planning.
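The coupling described above, in which driver variables feed an ANN that scores how likely a cell is to change land cover before agents use that score in their decisions, can be sketched roughly as follows. This is a hypothetical illustration only: the feature names (slope, distance to road, home value, wage, echoing the factors listed in the abstract), the network size, and the synthetic training rule are assumptions, not the model calibrated in the study.

```python
# Hypothetical sketch: an ANN scoring land-cover transition suitability
# from driver variables, in the spirit of the ANN-ABM coupling described above.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training cells: [slope_deg, distance_to_road_m, home_value_usd, wage_usd]
X = rng.uniform([0, 0, 5e4, 2e4], [30, 5000, 5e5, 1e5], size=(2000, 4))
# Assumed rule for illustration: flat cells near roads are more likely to develop.
y = ((X[:, 0] < 10) & (X[:, 1] < 1500)).astype(int)

scaler = StandardScaler().fit(X)
ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(scaler.transform(X), y)

# A hypothetical WFH agent might weight suburban cells differently before the
# ANN suitability score is folded into its final decision score.
candidate = np.array([[5.0, 800.0, 2.5e5, 6e4]])
p_develop = ann.predict_proba(scaler.transform(candidate))[0, 1]
print(f"ANN transition suitability: {p_develop:.2f}")
```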
|
474 |
Smart Quality Assurance System for Additive Manufacturing using Data-driven based Parameter-Signature-Quality Framework
Law, Andrew Chung Chee, 02 August 2022 (has links)
Additive manufacturing (AM) technology is a key emerging field transforming how customized products with complex shapes are manufactured. AM is the process of layering materials to produce objects from three-dimensional (3D) models. AM technology can be used to print objects with complicated geometries and a broad range of material properties. However, the issue of ensuring the quality of printed products during the process remains an obstacle to industry-level adoption. Furthermore, the characteristics of AM processes typically involve complex process dynamics and interactions between machine parameters and desired qualities. The issues associated with quality assurance in AM processes underscore the need for research into smart quality assurance systems.
To study the complex physics behind process interaction challenges in AM processes, this dissertation proposes the development of a data-driven smart quality assurance framework that incorporates in-process sensing and machine learning-based modeling by correlating the relationships among parameters, signatures, and quality. High-fidelity AM simulation data and the increasing use of sensors in AM processes help simulate and monitor the occurrence of defects during a process and open doors for data-driven approaches such as machine learning to make inferences about quality and predict possible failure consequences.
To address the research gaps associated with quality assurance for AM processes, this dissertation proposes several data-driven approaches based on the design of experiments (DoE), forward prediction modeling, and an inverse design methodology. The proposed approaches were validated for AM processes such as fused filament fabrication (FFF) using polymer and hydrogel materials and laser powder bed fusion (LPBF) using common metal materials. The following three novel smart quality assurance systems based on a parameter–signature–quality (PSQ) framework are proposed:
1. A customized in-process sensing platform with a DoE-based process optimization approach was proposed to learn and optimize the relationships among process parameters, process signatures, and parts quality during bioprinting processes. This approach was applied to layer porosity quantification and quality assurance for polymer and hydrogel scaffold printing using an FFF process.
2. A data-driven surrogate model that can be informed using high-fidelity physical-based modeling was proposed to develop a parameter–signature–quality framework for the forward prediction problem of estimating the quality of metal additive-printed parts. The framework was applied to residual stress prediction for metal parts based on process parameters and thermal history with reheating effects simulated for the LPBF process.
3. Deep-ensemble-based neural networks with active learning were developed to predict and recommend optimal process parameter values for the inverse design of desired mechanical responses of final built parts in metal AM processes, using fewer training samples. The methodology was applied to a metal AM process simulation in which the optimal process parameter values for multiple desired mechanical responses are recommended based on a smaller number of simulation samples. / Doctor of Philosophy / Additive manufacturing (AM) is the process of layering materials to produce objects from three-dimensional (3D) models. AM technology can be used to print objects with complicated geometries and a broad range of material properties. However, the issue of ensuring the quality of printed products during the process remains a challenge to industry-level adoption. Furthermore, the characteristics of AM processes typically involve complex process dynamics and interactions between machine parameters and the desired quality. The issues associated with quality assurance in AM processes underscore the need for research into smart quality assurance systems.
To study the complex physics behind process interaction challenges in AM processes, this dissertation proposes a data-driven smart quality assurance framework that incorporates in-process sensing and machine-learning-based modeling by correlating the relationships among process parameters, sensor signatures, and parts quality. Several data-driven approaches based on the design of experiments (DoE), forward prediction modeling, and an inverse design methodology are proposed to address the research gaps associated with implementing a smart quality assurance system for AM processes. The proposed parameter–signature–quality (PSQ) framework was validated using bioprinting and metal AM processes for printing with polymer, hydrogel, and metal materials.
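A rough sketch of the third system's core loop, a deep ensemble whose disagreement drives active learning over a parameter-to-quality surrogate, is given below. The surrogate function, the parameter names and ranges, and the ensemble size are illustrative assumptions rather than the dissertation's actual LPBF models.

```python
# Hypothetical sketch: deep-ensemble surrogate with variance-based active learning
# for a parameter -> quality mapping, echoing the PSQ framework described above.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def run_simulation(params):
    """Stand-in for a high-fidelity AM simulation (assumed, not the real solver)."""
    power, speed = params.T
    return np.sin(power / 60.0) + 0.002 * speed + rng.normal(0, 0.02, len(params))

# Initial design of experiments over (laser power, scan speed) - ranges assumed.
X = rng.uniform([150, 400], [350, 1200], size=(20, 2))
y = run_simulation(X)

for round_ in range(5):                      # active-learning rounds
    ensemble = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                             random_state=k).fit(X, y) for k in range(5)]
    pool = rng.uniform([150, 400], [350, 1200], size=(500, 2))   # candidate pool
    preds = np.stack([m.predict(pool) for m in ensemble])        # shape (5, 500)
    pick = pool[np.argmax(preds.std(axis=0))]                    # most uncertain point
    X = np.vstack([X, pick])
    y = np.append(y, run_simulation(pick[None, :]))
print(f"Final training set size: {len(X)}")
```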
|
475 |
Real-Time Estimation of Traffic Stream Density using Connected Vehicle Data
Aljamal, Mohammad Abdulraheem, 02 October 2020 (links)
The macroscopic measure of traffic stream density is crucial in advanced traffic management systems. However, measuring the traffic stream density in the field is difficult since it is a spatial measurement. In this dissertation, several estimation approaches are developed to estimate the traffic stream density on signalized approaches using connected vehicle (CV) data. First, the dissertation introduces a novel variable estimation interval that allows for higher estimation precision, as the updating time interval always contains a fixed number of CVs. After that, the dissertation develops model-driven approaches, such as a linear Kalman filter (KF), a linear adaptive KF (AKF), and a nonlinear particle filter (PF), to estimate the traffic stream density using CV data only. The proposed model-driven approaches are evaluated using empirical and simulated data, the former of which were collected along a signalized approach in downtown Blacksburg, VA. Results indicate that density estimates produced by the linear KF approach are the most accurate. A sensitivity analysis of the estimation approaches with respect to various factors, including the level of market penetration (LMP) of CVs, the initial conditions, the number of particles in the PF approach, traffic demand levels, traffic signal control methods, and vehicle length, is presented. Results show that the accuracy of the density estimate increases as the LMP increases. The KF is the least sensitive to the initial traffic density estimate, while the PF is the most sensitive. The results also demonstrate that the proposed estimation approaches work better at higher demand levels, given that more CVs exist for the same LMP scenario. For traffic signal control methods, the results demonstrate higher estimation accuracy for fixed traffic signal timings at low traffic demand levels, while the estimation accuracy is better when the adaptive phase split optimizer is activated at high traffic demand levels. The dissertation also investigates the sensitivity of the KF estimation approach to vehicle length, demonstrating that the presence of longer vehicles (e.g., trucks) in the traffic link reduces the estimation accuracy. Data-driven approaches are also developed to estimate the traffic stream density: an artificial neural network (ANN), a k-nearest neighbor (k-NN) model, and a random forest (RF). The data-driven approaches likewise utilize solely CV data. Results demonstrate that the ANN approach outperforms the k-NN and RF approaches. Lastly, the dissertation compares the performance of the model-driven and data-driven approaches, showing that the ANN approach produces the most accurate estimates. However, taking into consideration the computational time needed to train the ANN approach, the large amount of data needed, and the uncertainty in performance when new traffic behaviors are observed (e.g., incidents), the use of the linear KF approach is highly recommended for traffic density estimation due to its simplicity and applicability in the field. / Doctor of Philosophy / Estimating the number of vehicles (vehicle counts) on a road segment is crucial in advanced traffic management systems. However, measuring the number of vehicles on a road segment in the field is difficult because of the need to install multiple detection sensors along that road segment.
In this dissertation, several estimation approaches are developed to estimate the number of vehicles on signalized roadways using connected vehicle (CV) data. A CV is defined as a vehicle that can share its instantaneous location at every time step t. The dissertation develops model-driven approaches, such as a linear Kalman filter (KF), a linear adaptive KF (AKF), and a nonlinear particle filter (PF), to estimate the number of vehicles using CV data only. The proposed model-driven approaches are evaluated using real and simulated data, the former of which were collected along a signalized roadway in downtown Blacksburg, VA. Results indicate that the vehicle counts produced by the linear KF approach are the most accurate. The results also show that the KF approach is the least sensitive to the initial conditions. Machine learning approaches are also developed to estimate the number of vehicles: an artificial neural network (ANN), a k-nearest neighbor (k-NN) model, and a random forest (RF). The machine learning approaches also use CV data only. Results demonstrate that the ANN approach outperforms the k-NN and RF approaches. Finally, the dissertation compares the performance of the model-driven and machine learning approaches, showing that the ANN approach produces the most accurate estimates. However, taking into consideration the computational time needed to train the ANN approach, the large amount of data needed, and the uncertainty in performance when new traffic behaviors are observed (e.g., incidents), the use of the KF approach is highly recommended for vehicle count estimation due to its simplicity and applicability in the field.
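A minimal sketch of the kind of linear Kalman filter recommended above is shown below, with the vehicle count on a signalized link as the state, a conservation-based prediction step, and CV counts scaled by an assumed market-penetration rate as the measurement. The noise settings and synthetic flows are assumptions for illustration, not the dissertation's calibrated values.

```python
# Hypothetical sketch: linear Kalman filter estimating the vehicle count on a link
# from connected-vehicle (CV) counts, in the spirit of the KF approach above.
import numpy as np

rng = np.random.default_rng(2)
lmp = 0.3                      # assumed CV level of market penetration
Q, R = 4.0, 9.0                # assumed process / measurement noise variances

x_est, P = 20.0, 25.0          # initial count estimate and its variance
true_count = 20.0

for t in range(60):            # one estimation step per interval
    inflow = rng.poisson(3)                 # vehicles entering (synthetic)
    outflow = rng.poisson(3)                # vehicles leaving (synthetic)
    true_count = max(true_count + inflow - outflow, 0)

    # Predict: conservation of vehicles using the observed in/out flows.
    x_pred = x_est + inflow - outflow
    P_pred = P + Q

    # Measure: observed CVs scaled up by the market-penetration rate.
    cv_count = rng.binomial(int(true_count), lmp)
    z = cv_count / lmp

    # Update with the standard scalar Kalman gain.
    K = P_pred / (P_pred + R)
    x_est = x_pred + K * (z - x_pred)
    P = (1 - K) * P_pred

print(f"true={true_count:.0f}  estimated={x_est:.1f}")
```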
|
476 |
Exploring Optical Devices for Neuromorphic Applications
Rhim, Seon-Young, 30 April 2024 (links)
In recent years, electronic-based artificial neural networks (ANNs) have been dominant in computer engineering. However, as tasks grow complex, conventional electronic architectures reach their limits. Optical approaches therefore offer solutions through analog calculations using materials controlling optical signals for synaptic plasticity. This study explores photo- and electrochromic materials for synaptic functions in ANNs.
The switching behavior of the molecule diarylethene (DAE) affecting Surface Plasmon Polaritons (SPPs) is studied in the Kretschmann configuration. Optical pulse sequences enable synaptic plasticity like long-term potentiation and depression. DAE modulation and information transfer at distinct wavelengths allow simultaneous read and write processes, demonstrating non-volatile information storage in plasmonic waveguides.
DAE integration into Y-branch waveguides thus forms an all-optical 2x1 neural network. Synaptic functions, reflected in DAE switching, can thus be applied in waveguide transmission. Network training for logic gates is achieved using the gradient descent method to adapt AND or OR gate functions based on the learning set.
Electrochromic materials in waveguides enable optoelectronic modulation. Combining gel-like polymer electrolyte PS-PMMA-PS:[EMIM][TFSI] with PEDOT:PSS allows electrical modulation, demonstrating binary complementary control of transmissions and optical multiplexing in Y-branch waveguides. The solid polymer electrolyte PEG:NaOtf enables optical signal modulation for neuromorphic computing, thereby facilitating the adaptation of linear classification in Y-branch waveguides without the need for additional storage or processing units.
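The gradient-descent training of the 2x1 network can be mimicked numerically as below, with the two branch transmissions playing the role of synaptic weights constrained to [0, 1], as a photochromic transmission would be. The loss, learning rate, detector offset, and output scaling are assumptions; the thesis realizes the corresponding updates optically rather than in software.

```python
# Hypothetical numerical analogue of the all-optical 2x1 network described above:
# branch transmissions act as weights in [0, 1] and are trained by gradient descent.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_or = np.array([0.0, 1.0, 1.0, 1.0])          # target: OR gate (AND works analogously)

w = np.array([0.5, 0.5])                       # initial branch transmissions
bias, lr = -0.25, 0.5                          # assumed detector offset, learning rate

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

for epoch in range(2000):
    out = sigmoid(4.0 * (X @ w + bias))        # combined output intensity (scaled)
    grad = X.T @ ((out - y_or) * out * (1 - out) * 4.0) / len(X)
    w = np.clip(w - lr * grad, 0.0, 1.0)       # transmissions stay physical: [0, 1]

print("learned transmissions:", np.round(w, 2))
print("gate response:", (sigmoid(4.0 * (X @ w + bias)) > 0.5).astype(int))
```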
|
477 |
Water-oriented management in forest plantations: combining hydrology, dendrochronology and ecophysiology
Gualberto Fernandes, Tarcisio Jose, 09 December 2015 (links)
Thesis by compendium / Assessment of forest water use (WU) is undoubtedly important and necessary, especially in water-scarce areas that are already suffering the main negative impacts of climate change. However, instead of just determining how much water is used by a forest, it is also important to evaluate how forest WU responds to forest management practices such as thinning, a widely recognized alternative to promote improvements in the hydrologic balance while maintaining or improving forest resilience. Thus, this thesis proposes three integrated studies performed in an area of Aleppo pine subject to experimental thinning in Eastern Spain. The first study modelled an artificial neural network (ANN) to estimate daily WU independently of the forest heterogeneity introduced by thinning. Stand WU was accurately estimated using climate data, soil water content and forest cover (correlation coefficient, R: 0.95; Nash-Sutcliffe coefficient, E: 0.90; root-mean-square error, RMSE: 0.078 mm/day). The ANN model was then used for gap-filling when needed, and those results were used in the following studies. The second study addressed the question of how tree growth, WU and water balance changed as a consequence of thinning. To this end, the influence of thinning intensity and its effect at short term (thinned in 2008) and at mid-term (thinned in 1998) on the water-balance components and tree growth were investigated. The high-intensity thinning treatment showed significant increases in mean annual tree growth from 4.1 to 17.3 cm² yr⁻¹, a rate which was maintained in the mid-term. Mean daily WU ranged from 5 (control) to 18 (high-intensity thinning) l tree⁻¹. However, when expressed on a stand basis, daily WU ranged from 0.18 (medium-intensity thinning) to 0.30 mm (control plot), meaning that in spite of the higher WU rates in the remaining trees, stand WU was reduced by thinning. Large differences were found in the water-balance components between the thinning plots and the control. These differences might have significant implications for maintaining forest resilience and improving forest management practices. The third study addresses two further questions concerning thinning, WU and intrinsic water-use efficiency (WUEi). First, the relationships between growth and climate were studied at mid-term in order to identify whether thinning can improve forest resilience. Second, the relationship between WU and WUEi was explored to identify how these factors were affected by thinning at short term. A substantial limitation of tree growth imposed by climatic conditions was observed, although thinning changed the tree-growth-precipitation relationships. Significant differences in WUEi were found after thinning at mid-term; however, no significant difference was observed at short term. Despite this, in general WUEi decreased when precipitation increased, with different slopes for each thinning intensity. Different patterns of the relationship between WU and WUEi were found, being positive for the thinned plots and negative for the control plot at short term. Finally, this thesis suggests that thinning in Aleppo pine plantations is effective in changing the relationships between WU and WUEi; furthermore, it introduces a novel contribution by examining the inter-related effects on growth, WU, WUEi and water balance in a Mediterranean forest subject to thinning. / Gualberto Fernandes, TJ. (2014). Water-oriented management in forest plantations: combining hydrology, dendrochronology and ecophysiology [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48476 / Compendium
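The ANN gap-filling step can be sketched as follows, using predictors of the kind named above (climate variables, soil water content, forest cover) and scoring the fit with the same statistics reported in the abstract (R, Nash-Sutcliffe E, RMSE). The network size, predictor list and synthetic data are assumptions; the thesis fits the model to measured stand water use.

```python
# Hypothetical sketch: ANN estimating daily stand water use (mm/day) from climate,
# soil water content and forest cover, evaluated with R, Nash-Sutcliffe E and RMSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 800
# Assumed predictors: [vapour pressure deficit (kPa), radiation (MJ m-2), SWC (-), cover (-)]
X = rng.uniform([0.1, 2, 0.05, 0.3], [3.0, 30, 0.35, 0.9], size=(n, 4))
wu = 0.05 + 0.06 * X[:, 0] * X[:, 3] + 0.3 * X[:, 2] + rng.normal(0, 0.01, n)  # synthetic WU

train, test = slice(0, 600), slice(600, n)
scaler = StandardScaler().fit(X[train])
ann = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X[train]), wu[train])
pred = ann.predict(scaler.transform(X[test]))

obs = wu[test]
r = np.corrcoef(obs, pred)[0, 1]                                  # correlation coefficient
nse = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe E
rmse = np.sqrt(np.mean((obs - pred) ** 2))
print(f"R={r:.2f}  E={nse:.2f}  RMSE={rmse:.3f} mm/day")
```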
|
478 |
Advances in Document Layout Analysis
Bosch Campos, Vicente, 05 March 2020 (links)
Handwritten Text Segmentation (HTS) is a task within the Document Layout Analysis field that aims to detect and extract the different page regions of interest found in handwritten documents. HTS remains an active topic that has gained importance over the years, owing to the increasing demand to provide textual access to the myriad handwritten document collections held by archives and libraries.
This thesis considers HTS as a task that must be tackled in two specialized phases: detection and extraction. We see the detection phase fundamentally as a recognition problem that yields the vertical positions of each region of interest as a by-product. The extraction phase consists in calculating the best contour coordinates of the region using the position information provided by the detection phase.
Our proposed detection approach allows us to attack both higher-level regions (paragraphs, diagrams, etc.) and lower-level regions such as text lines. In the case of text line detection, we model the problem to ensure that the system's yielded vertical position approximates the fictitious line that connects the lower part of the grapheme bodies in a text line, commonly known as the baseline.
One of the main contributions of this thesis is that the proposed modelling approach allows us to include prior information regarding the layout of the documents being processed. This is performed via a Vertical Layout Model (VLM).
We develop a Hidden Markov Model (HMM) based framework to tackle both region detection and classification as an integrated task and study the performance and ease of use of the proposed approach in many corpora. We review the modelling simplicity of our approach to process regions at different levels of information: text lines, paragraphs, titles, etc. We study the impact of adding deterministic and/or probabilistic prior information and restrictions via the VLM that our approach provides.
Having a separate phase that accurately yields the detection position (baselines in the case of text lines) of each region greatly simplifies the problem that must be tackled during the extraction phase. In this thesis we propose to use a distance map that takes into consideration the grey-scale information in the image. This allows us to yield extraction frontiers which are equidistant to the adjacent text regions. We study how the accuracy of our approach scales with the quality of the provided detection vertical position. Our extraction approach gives near-perfect results when human-reviewed baselines are provided.
/ Bosch Campos, V. (2020). Advances in Document Layout Analysis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138397
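The grey-scale-aware distance-map idea used in the extraction phase can be illustrated with the simplified per-column sketch below: ink adds extra "distance", so the frontier stays roughly equidistant from the two adjacent text regions while avoiding grapheme bodies. The synthetic strip, baseline positions and cost weighting are assumptions, and the thesis computes a full two-dimensional distance map rather than this column-wise shortcut.

```python
# Hypothetical sketch of a grey-weighted frontier between two detected baselines:
# dark pixels cost more to cross, so the cut keeps clear of the surrounding text.
import numpy as np

rng = np.random.default_rng(4)
img = np.full((60, 200), 255, dtype=float)          # synthetic page strip (white)
img[10:22, :] -= rng.integers(0, 200, (12, 200))    # upper text line (dark strokes)
img[40:52, :] -= rng.integers(0, 200, (12, 200))    # lower text line
upper_baseline, lower_baseline = 22, 52             # detection-phase output (assumed)

darkness = 1.0 + 5.0 * (255.0 - img) / 255.0        # pixel step cost, > 1 on ink

band = slice(upper_baseline, lower_baseline)        # rows between the two baselines
cost_from_top = np.cumsum(darkness[band], axis=0)                   # from upper line
cost_from_bottom = np.cumsum(darkness[band][::-1], axis=0)[::-1]    # from lower line

# Frontier row per column: where the two grey-weighted distances are closest to equal.
frontier = upper_baseline + np.argmin(np.abs(cost_from_top - cost_from_bottom), axis=0)
print("frontier row range:", frontier.min(), "-", frontier.max())
```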
|
479 |
Calibrating Constitutive Models Using Data-Driven Method : Material Parameter Identification for an Automotive Sheet Metal
Haller, Anton; Fridström, Nicke, January 2024 (links)
The automotive industry relies on accurate finite element simulations for developing new parts and machines, and accurate material models are essential to achieve this. Material cards contain the input for the material model and are significant, yet time-consuming to calibrate with traditional methods. Therefore, a newer method involving Machine Learning (ML) and Feed-Forward Neural Networks (FFNN) is studied in this thesis. Direct FFNN-based calibration has never before been applied to the Swift hardening law and the Barlat yield 2000 criterion, which is done here. All calibration steps are performed to build a high-fidelity database capable of training the FFNN. The thesis comprises four phases: experiments, simulations, building the high-fidelity database, and building and optimizing the FFNN. The experiment phase involves tensile testing of three different types of specimens in three material directions with Digital Image Correlation (DIC) to capture local strains. The simulation phase replicates all the experiments as finite element simulations in LS-DYNA. The finite element models are simulated 100 and 1000 times, respectively, with different material parameters within a specific range whose lower and upper bounds cover the experimental results. The database phase involves extracting the data from the large number of simulations and then extracting the key characteristics from the force-displacement curves. The last phase is building the FFNN and optimizing the network to find the best parameters. The network is first optimized based on Root Mean Square Error (RMSE), and then points from the Swift hardening curve and the Barlat yield 2000 criterion are compared with experimental points. The results show that the FFNN trained on the high-fidelity database can predict material parameters with an accuracy of over 99 % for the hardening law at the points chosen for optimization, and the anisotropy parameters are identified to 97 % accuracy for the yield points and Lankford coefficients. The thesis concludes that the FFNN can accurately predict material parameters from real experimental data. This method is also significantly faster than traditional ones, because only one type of test is needed.
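The inverse mapping trained in the last phase (key characteristics of the force-displacement curve mapped to material parameters) can be sketched for the Swift hardening law alone, sigma = K(eps0 + eps_p)^n, as below. The feature construction, parameter bounds, and the analytic stand-in for the LS-DYNA simulations are assumptions made for illustration, not the thesis workflow itself.

```python
# Hypothetical sketch: FFNN identifying Swift hardening parameters (K, eps0, n)
# from force-displacement curve features, mimicking the simulation-database workflow.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

def curve_features(K, eps0, n):
    """Stand-in for one FE simulation: sample the hardening curve at fixed strains."""
    eps = np.linspace(0.0, 0.2, 8)
    return K * (eps0 + eps) ** n            # 8 'key characteristics' per run

# Build the database: 1000 parameter sets drawn from assumed lower/upper bounds.
params = rng.uniform([400, 0.002, 0.1], [900, 0.02, 0.3], size=(1000, 3))
features = np.array([curve_features(*p) for p in params])

x_scaler = StandardScaler().fit(features)
y_scaler = StandardScaler().fit(params)
ffnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=4000, random_state=0)
ffnn.fit(x_scaler.transform(features), y_scaler.transform(params))

# 'Experiment': a curve generated from known parameters, recovered by the network.
truth = np.array([650.0, 0.01, 0.22])
pred = y_scaler.inverse_transform(
    ffnn.predict(x_scaler.transform(curve_features(*truth)[None, :])))[0]
print("true K, eps0, n:", truth, " predicted:", np.round(pred, 3))
```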
|
480 |
Providing Situational Awareness For Naval Operators : Implementation of Two Prioritization Algorithms
Nilsson, Jonna; Lidh, Jesper, January 2024 (links)
On the 29th of August, the vessel Stena Scandica experienced a blackout. Before the blackout, 294 alarms were issued in 4 minutes. With that number of alarms, the operators could not prevent the blackout: the amount of information, and the way it was presented, became a hindrance, and the operators could not correctly interpret the information about their surroundings. That interpretation is called situational awareness. This thesis addresses how information can be provided to operators without hindering their situational awareness, focusing on the Swedish Navy's operators and their needs. The aim is to create a system that provides situational awareness, using information on air- and seaborne targets from a radar together with a camera display. Three research questions were proposed: how should the radar data be structured, how should the targets be ranked, and how should they be presented? The structure was expected to describe the targets' location, size, and movement; the ranking was expected to indicate whether a target is a threat to the naval operators; and, lastly, the targets were expected to be presented with some of their information on a camera display. For the first question, the data structure for both kinds of targets was constructed to meet these expectations. Two models were used for the second question: an artificial neural network and fuzzy c-means. The artificial neural network was chosen as it is one of the best-performing classification algorithms; fuzzy c-means was chosen since it can cluster similar behaviors together, and therefore cluster high-threat targets together. Of these two models, the artificial neural network proved the better ranking method, with an accuracy of 92.9% for airborne targets and 80.6% for seaborne targets. A simulation was built to answer the third question, displaying only the highest-threat targets on the camera display; presenting only the high-threat targets gave the operators a better understanding of where the targets actually are. In the future, studies should be conducted on implementing the system on Swedish Navy vessels, for example on whether there is enough computational power onboard for an artificial neural network.
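A minimal version of the ANN ranking step might look like the sketch below, which classifies targets into threat levels from kinematic features and keeps only the top-ranked ones for display. The feature set, labelling rule, and training data are assumptions; the thesis derives them from the structured radar data described above.

```python
# Hypothetical sketch: ANN ranking radar targets by threat level from kinematic
# features (the thesis reports 92.9 % / 80.6 % accuracy for air / sea targets).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 3000
# Assumed features per target: [range_km, speed_m_s, closing_rate_m_s, size_m]
X = rng.uniform([1, 0, -300, 2], [100, 600, 300, 300], size=(n, 4))
# Assumed labelling rule for illustration: fast, closing, nearby targets are threats.
threat = ((X[:, 1] > 200) & (X[:, 2] > 50) & (X[:, 0] < 40)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, threat, test_size=0.3, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                                  random_state=0))
ann.fit(X_tr, y_tr)
print(f"hold-out accuracy: {ann.score(X_te, y_te):.3f}")

# Only the highest-ranked targets would then be drawn on the camera display.
scores = ann.predict_proba(X_te)[:, 1]
top = np.argsort(scores)[::-1][:5]          # five most threatening detections
print("top-5 threat scores:", np.round(scores[top], 2))
```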
|