  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The combination of AI modelling techniques for the simulation of manufacturing processes

Korn, Stefan January 1998 (has links)
No description available.
2

Hydrological data interpolation using entropy

Ilunga, Masengo 17 November 2006 (has links)
Faculty of Engineering and Built Environment, School of Civil and Environmental Engineering, 0105772w, imasengo@yahoo.com / The problem of missing data, insufficient length of hydrological data series and poor data quality is common in developing countries, where it is far more prevalent than in developed countries. This situation can severely affect the outcome of water system managers' decisions (e.g. the reliability of a design, or the establishment of operating policies for water supply). Numerous data interpolation (infilling) techniques have therefore evolved in hydrology to deal with missing data. The current study presents a methodology that combines different approaches to coping with missing (limited) hydrological data, using the theories of entropy, artificial neural networks (ANN) and expectation-maximization (EM) techniques; the methodology is formulated into a model named the ENANNEX model. The study does not use any physical characteristics of the catchment areas, but deals only with the limited information (e.g. streamflow or rainfall) at the target gauge and its similar nearby base gauge(s). The entropy concept was confirmed to be a versatile tool. It was first used for quantifying the information content of hydrological variables (e.g. rainfall or streamflow). The same concept, through the directional information transfer index (DIT), was used in the selection of the base/subject gauge. Finally, the DIT notion was also extended to evaluating the performance of the data infilling techniques (i.e. the ANN and EM techniques). The methodology was applied to annual total rainfall, annual mean flow, annual maximum flow and 6-month mean flow series of selected catchments in drainage region D ("Orange") of South Africa. These data regimes can be regarded as useful for design-oriented studies, flood studies, water balance studies, etc.
The results from the case studies showed that DIT is as good an index for selecting a data infilling technique as other criteria, e.g. statistical and graphical ones, with the added feature of being a non-dimensional informational index. The data interpolation techniques, viz. ANNs and EM (existing methods both applied and not yet applied in hydrology), and their new features are also presented. This study showed that the standard techniques (e.g. backpropagation, BP, and EM) as well as their respective variants can be selected for estimating missing hydrological data. The capability of the different data interpolation techniques to maintain the statistical characteristics (e.g. mean, variance) of the target gauge was also considered. The relationship between the accuracy of the estimated series (obtained by applying a data infilling technique) and the gap duration was then investigated through the DIT notion; a decay (power or exponential) function was shown to describe that relationship well. In other words, the amount of uncertainty removed from the target station in a station-pair, via a given technique, can be known for a given gap duration. The performance of the different techniques was found to depend on the gap duration at the target gauge, the station-pair involved in the missing data estimation, and the type of data regime. The study also showed that it is possible, through the entropy approach, to make a preliminary assessment of model performance for simulating runoff data at a site where absolutely no record exists: a case study was conducted at the Bedford site (in South Africa). Two simulation models, viz. the RAFLER and WRSM2000 models, were assessed in this respect, and both were found suitable for simulating flows at Bedford.
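The entropy quantities this abstract leans on can be sketched numerically. Below is a minimal, illustrative computation of Shannon entropy, transinformation (mutual information) and a DIT-style ratio for a station pair. It assumes the common histogram (plug-in) estimator and the ratio form T/H; the bin count and the exact ENANNEX formulation in the thesis may differ.

```python
import numpy as np

def shannon_entropy(x, bins=10):
    # Discretize the series and compute plug-in Shannon entropy (bits).
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def transinformation(x, y, bins=10):
    # Mutual information between two gauge series via a joint histogram.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def dit(x, y, bins=10):
    # Directional information transfer: fraction of the target gauge's
    # uncertainty removed by the base gauge, T(x; y) / H(y).
    return transinformation(x, y, bins) / shannon_entropy(y, bins)
```

A perfectly informative base gauge gives a DIT of 1 (a series paired with itself), while unrelated gauges give values near 0.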
3

Artificial Neural Network-Based Approaches for Modeling the Radiated Emissions from Printed Circuit Board Structures and Shields

Kvale, David Thomas January 2010 (has links)
No description available.
4

Utilizing state-of-art NeuroES and GPGPU to optimize Mario AI

Lövgren, Hans January 2014 (has links)
Context. Reinforcement Learning (RL) is a time-consuming effort that also requires a lot of computational power. There are two main approaches to improving RL efficiency: the theoretical, mathematical and algorithmic approach, and the practical implementation approach. In this study, the approaches are combined in an attempt to reduce time consumption.
Objectives. We investigate whether modern hardware and software (GPGPU), combined with state-of-the-art Evolution Strategies (CMA-Neuro-ES), can increase the efficiency of solving RL problems.
Methods. Both an implementational and an experimental research method are used. The implementational research mainly involves developing and setting up an experimental framework in which to measure efficiency through benchmarking; within this framework, the GPGPU/ES solution is then developed. Using the framework, experiments are conducted on a conventional sequential solution as well as our own parallel GPGPU solution.
Results. The results indicate that utilizing GPGPU and state-of-the-art ES to solve RL problems can be more efficient in terms of time consumption than a conventional, sequential CPU approach.
Conclusions. Our proposed solution requires additional work and research, but it already shows promise in this initial study. As the study focuses primarily on generating benchmark performance data from the experiments, it lacks data on RL efficiency and thus motivation for using our approach. However, we do conclude that the suggested GPGPU approach allows less time-consuming RL problem solving.
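To illustrate the family of methods involved: a simplified, isotropic evolution strategy can be written in a few lines. This is a sketch only; CMA-ES additionally adapts a full covariance matrix, and the thesis pairs it with neuroevolution on the GPU, neither of which is shown here. The objective below is a toy stand-in for an RL return.

```python
import numpy as np

def simple_es(fitness, dim, pop_size=50, sigma=0.1, lr=0.02, iters=200, seed=0):
    # Simplified isotropic evolution strategy (sketch only; CMA-ES
    # additionally adapts a full covariance matrix): sample Gaussian
    # perturbations, z-score their fitness, and step the mean.
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    for _ in range(iters):
        eps = rng.standard_normal((pop_size, dim))
        scores = np.array([fitness(theta + sigma * e) for e in eps])
        adv = (scores - scores.mean()) / (scores.std() + 1e-8)
        theta += lr / (pop_size * sigma) * eps.T @ adv
    return theta

# Toy stand-in for an RL return: maximize closeness to a target vector.
target = np.array([1.0, -2.0, 0.5])
theta = simple_es(lambda w: -np.sum((w - target) ** 2), dim=3)
```

Because each generation's fitness evaluations are independent, this inner loop is exactly the part that parallelizes well on a GPGPU.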
5

Drying shrinkage of self-compacting concrete incorporating fly ash

Abdalhmid, Jamila M.A. January 2019 (has links)
The present research investigates the long-term (more than two years) free and confined drying shrinkage magnitude and behaviour of self-compacting concrete (SCC) and compares it with normal concrete (NC). For all SCC mixes, Portland cement was replaced with 0-60% fly ash (FA); fine and coarse aggregates were kept constant at 890 kg/m3 and 780 kg/m3, respectively. Two water-binder ratios, 0.44 and 0.33, were examined for both SCCs and NCs. Fresh properties of the SCCs, such as filling ability, passing ability, viscosity and resistance to segregation, and hardened properties, such as compressive and flexural strengths, water absorption and density, of the SCCs and NCs were also determined. Experimental results of free drying shrinkage obtained from this study, together with a comprehensive database collected from different sources in the literature, were compared to five existing models, namely the ACI 209R-92 model, the BSEN-92 model, the ACI 209R-92 (Huo) model, the B3 model, and the GL2000 model. To assess the quality of the predictive models, the influence of various parameters (compressive strength, cement content, water content and relative humidity) on the drying shrinkage strain was studied. Artificial neural network models (ANNM) for predicting the drying shrinkage strains of SCC were developed using the same data used for the existing models. Two ANNM sets, namely ANNM1 and ANNM2, with different numbers of hidden-layer neurons were constructed, and the results given by the ANNM1 model were compared with those obtained from the five existing prediction models. The results showed that using up to 60% FA as cement replacement can produce SCC with a compressive strength as high as 30 MPa and low drying shrinkage strain. The long-term drying shrinkage of the SCCs from 356 to 1000 days was higher than that of the NCs.
Concrete-filled elliptical tubes (CFET) with self-compacting concrete containing up to 60% FA are recommended for use in construction in order to prevent confined drying strain. The ACI 209R-92 model provided a better prediction of drying shrinkage than the other four models. However, very high predictability and accuracy were achieved with the ANNM1 model, with a mean of 1.004. Moreover, with ANNM models it is easy to add any factor affecting drying shrinkage to the input parameters to predict the drying shrinkage strain of SCC. / Ministry of Higher Education, Libya
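The "mean of 1.004" quoted above is a predicted-to-measured ratio, a standard way to judge shrinkage models for bias and scatter. A hedged sketch of that assessment, using hypothetical strain values rather than the thesis's data:

```python
import numpy as np

def model_assessment(predicted, measured):
    # Judge a shrinkage-prediction model by the predicted/measured ratio:
    # a mean near 1.0 indicates an unbiased model; the standard deviation
    # of the ratio indicates scatter. (Values here are hypothetical.)
    ratio = np.asarray(predicted, dtype=float) / np.asarray(measured, dtype=float)
    return ratio.mean(), ratio.std(ddof=1)
```

A model reproducing the measurements exactly returns (1.0, 0.0); ANNM1's reported mean of 1.004 indicates a near-unbiased fit by this measure.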
6

Empirical Investigation of the Effect of Pruning Artificial Neural Networks With Respect to Increased Generalization Ability

Weman, Nicklas January 2010 (has links)
This final thesis covers the basics of artificial neural networks, with a focus on supervised learning, pruning and the problem of achieving good generalization ability. An empirical investigation is conducted on twelve different problems originating from the Proben1 benchmark collection. The results indicate that pruning is more likely to improve generalization if the data is sensitive to overfitting or if the networks are likely to be trapped in local minima.
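As a hedged illustration of pruning in general (the thesis's Proben1 experiments may use other criteria, e.g. second-order sensitivity methods rather than weight magnitude), the simplest unstructured rule zeroes the smallest-magnitude weights:

```python
import numpy as np

def magnitude_prune(weights, fraction):
    # Zero out roughly the smallest-magnitude `fraction` of the weights
    # (ties at the threshold may remove slightly more). This is the
    # simplest pruning rule; it is illustrative, not the thesis's method.
    w = weights.copy()
    k = int(fraction * w.size)
    if k > 0:
        threshold = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w
```

After pruning, the network is typically retrained briefly so the remaining weights compensate for the removed ones; generalization improves when the removed capacity was being spent on fitting noise.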
7

Automated Pulmonary Nodule Detection on Computed Tomography Images with 3D Deep Convolutional Neural Network

Broyelle, Antoine January 2018 (has links)
Object detection on natural images has become a single-stage, end-to-end process thanks to recent breakthroughs in deep neural networks. By contrast, automated pulmonary nodule detection is usually a three-step method: lung segmentation, generation of nodule candidates, and false positive reduction. This project tackles the nodule detection problem with a single-stage model using a deep neural network. Pulmonary nodules have unique shapes and characteristics which are not present outside of the lungs. We expect the model to capture these characteristics and to focus only on elements inside the lungs when working on raw CT scans (without segmentation). Nodules are small, distributed and infrequent. We show that a well-trained deep neural network can spot relevant features and keep a low number of region proposals without any extra preprocessing or post-processing. Due to the visual nature of the task, we designed a three-dimensional convolutional neural network with residual connections, inspired by the region proposal network of the Faster R-CNN detection framework. The evaluation is performed on the LUNA16 dataset. The final score is 0.826, the average sensitivity at 0.125, 0.25, 0.5, 1, 2, 4, and 8 false positives per scan. This can be considered an average score compared to other submissions to the challenge; however, the solution described here was trained end-to-end and has fewer trainable parameters.
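The reported score of 0.826 is the LUNA16 challenge metric: sensitivity averaged over seven fixed false-positive rates, read off the FROC curve. A minimal sketch of that computation, assuming linear interpolation between measured operating points:

```python
import numpy as np

def average_sensitivity(fp_per_scan, sensitivity,
                        thresholds=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    # LUNA16-style score: mean sensitivity at fixed false-positive rates,
    # interpolated from a FROC curve. `fp_per_scan` must be ascending.
    return float(np.mean(np.interp(thresholds, fp_per_scan, sensitivity)))
```

When the curve is sampled exactly at the seven thresholds, the score is simply the mean of those seven sensitivities.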
8

Assessment of lung damages from CT images using machine learning methods. / Bedömning av lungskador från CT-bilder med maskininlärningsmetoder.

Chometon, Quentin January 2018 (has links)
Lung cancer is the most commonly diagnosed cancer in the world, and it is mostly found incidentally. New technologies, and more specifically artificial intelligence, have lately attracted great interest in the medical field, as they can automate tasks or bring new information to the medical staff. Much research has been done on the detection or classification of lung cancer, but most of it works on local regions of interest, and only a few studies look at a full CT scan. The aim of this thesis was to assess lung damage from CT images using new machine learning methods. First, single predictors were learned by a 3D ResNet architecture: cancer, emphysema, and opacities. Emphysema was learned by the network, reaching an AUC of 0.79, whereas cancer and opacity predictions were not really better than chance (AUC = 0.61 for both). Secondly, a multi-task network was used to predict the factors together. Training with no prior knowledge was compared to a transfer learning approach using self-supervision. The transfer learning approach showed similar results in the multi-task setting for emphysema (AUC = 0.78 vs 0.60 without pre-training) and opacities (AUC = 0.61). Moreover, the pre-training approach enabled the network to reach the same performance as each single-factor predictor with only one multi-task network, which saves a lot of computational time. Finally, a risk score can be derived from the training for use in a clinical context.
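The AUC values quoted above can be computed with the rank-based (Mann-Whitney) formulation, equivalent to the probability that a random positive case scores higher than a random negative one. A minimal sketch, with ties broken arbitrarily:

```python
import numpy as np

def roc_auc(labels, scores):
    # Rank-based AUC (Wilcoxon/Mann-Whitney statistic): the probability
    # that a positive outranks a negative. Ties are broken arbitrarily.
    labels = np.asarray(labels)
    ranks = np.argsort(np.argsort(scores)) + 1  # ordinal ranks, 1-based
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the 0.61 results above are described as "not really better than chance".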
9

Object detection for autonomous trash and litter collection / Objektdetektering för autonom skräpupplockning

Edström, Simon January 2022 (has links)
Trash and litter discarded on the street is a large environmental issue in Sweden and across the globe. In Swedish cities alone it is estimated that 1.8 billion articles of trash are thrown to the street each year, constituting around 3 kilotons of waste. One avenue to combat this societal and environmental problem is to use robotics and AI: a robot could learn to detect trash in the wild and collect it in order to clean the environment. A key component of such a robot would be its computer vision system, which allows it to detect litter and trash. Such systems are not trivially designed or implemented and have only recently reached high enough performance to work in industrial contexts. This master thesis focuses on creating and analysing such an algorithm by gathering data for use in a machine learning model, developing an object detection pipeline, and evaluating the performance of that pipeline as its components are varied. Specifically, methods using hyperparameter optimisation and pseudolabeling, and the preprocessing methods tiling and illumination normalisation, were implemented and analysed. This thesis shows that it is possible to create an object detection algorithm with high performance using currently available state-of-the-art methods. Within the analysed context, hyperparameter optimisation did not significantly improve performance, and pseudolabeling could only briefly be analysed but showed promising results. Tiling greatly increased mean average precision (mAP) for the detection of small objects, such as cigarette butts, but decreased the mAP for large objects, while illumination normalisation improved mAP for images that were brightly lit. Both preprocessing methods reduced the frames per second at which a full detector could run, whilst pseudolabeling and hyperparameter optimisation greatly increased training times.
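Of the preprocessing methods analysed, tiling is straightforward to sketch: the image is cut into (optionally overlapping) patches so that small objects such as cigarette butts occupy a larger share of each network input. A minimal, illustrative version; the thesis's exact tiler may handle image borders differently:

```python
import numpy as np

def tile_image(img, tile, overlap=0):
    # Split an H x W (x C) image into square tiles of side `tile`,
    # stepping by (tile - overlap); border tiles may be smaller.
    step = tile - overlap
    tiles, origins = [], []
    h, w = img.shape[:2]
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            tiles.append(img[y:y + tile, x:x + tile])
            origins.append((y, x))
    return tiles, origins
```

Detections are run per tile and mapped back to image coordinates using the recorded origins, which is also why tiling lowers the achievable frames per second: one frame becomes many network inputs.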
10

Machine-Learning-Aided Development of Surrogate Models for Flexible Design Optimization of Enhanced Heat Transfer Surfaces

Saeel Shrivallabh Pai (20692082) 10 February 2025 (has links)
Due to the end of Dennard scaling, electronic devices must consume more electrical power for increased functionality. The increased power consumption, combined with diminishing form factors, results in increased power density within the device, leading to higher heat fluxes at the device's surfaces. Without proper thermal management, these heat fluxes can cause device temperatures to exceed operational limits, ultimately resulting in device failure. However, dissipating such high heat fluxes often requires pumping or refrigeration of a coolant, which in turn increases total energy usage. Data centers, which form the backbone of the cloud infrastructure and the modern economy, account for ~2% of total US electricity use, of which up to ~40% is spent on cooling alone. It is therefore necessary to optimize the designs of cooling systems to dissipate higher heat fluxes at lower operating powers.

The design optimization of thermal management components such as cold plates, heat sinks, and heat exchangers relies on accurate prediction of flow heat transfer and pressure drop. During the iterative design process, heat transfer and pressure drop are typically either computed numerically or obtained using geometry-specific correlations for Nusselt number (Nu) and friction factor (f). Numerical approaches are accurate for evaluating a single design but become computationally expensive if many design iterations are required (such as during formal optimization). Moreover, traditional empirical correlations are highly geometry dependent and assume functional forms that can introduce inaccuracies. To overcome these limitations, this thesis introduces accurate, continuous-valued machine-learning (ML)-based surrogate models for predicting the Nusselt number and friction factor of various heat exchange surfaces. These surrogate models, which are applicable to more geometries than traditional correlations, enable flexible and computationally inexpensive design optimization. Their utility is first demonstrated through the optimization of single-phase liquid cold plates under specific boundary conditions. Subsequently, their effectiveness is showcased in the more practical challenge of designing liquid-to-liquid heat exchangers by integrating the surrogate models with a homogenization-based topology optimization framework. Because topology optimization relies heavily on accurate predictions of pressure drop and heat transfer at every point in the domain during each iteration, using ML-based surrogate models greatly reduces the computational cost while enabling the development of high-performance, customized heat exchange surfaces. This work thus contributes to the advancement of thermal management by leveraging machine learning techniques for efficient and flexible design optimization.

First, artificial neural network (ANN)-based surrogate correlations are developed to predict f and Nu for fully developed internal flow in channels of arbitrary cross section, effectively collapsing all known correlations for channels of different cross-section shapes into one correlation for f and one for Nu. The predictive performance and generality of the ANN-based surrogate models are verified on various shapes outside the training dataset, and the models are then used in the design optimization of flow cross sections based on performance metrics that weigh both heat transfer and pressure drop. The optimization process leads to novel shapes outside the training data, whose performance is validated through numerical simulations. Although the ML model predictions lose accuracy outside the training set for these novel shapes, the predictions follow the correct trends under parametric variations of the shape and therefore successfully direct the search toward optimized shapes.

The success of ANN-aided shape optimization of constant-cross-section internal flow channels serves as a compelling proof of concept, highlighting the potential of ML-aided optimization in thermal-fluid applications. However, to address the complexities of widely used thermal management devices such as cold plates and heat exchangers, known for intricate surface geometries beyond constant-cross-section channels, a strategic shift is needed. With the goal of crafting ML models tailored for practical design optimization algorithms such as topology optimization, the thesis next examines the diverse micro pin-fin arrangements commonly employed in cold plates and heat exchangers. This study of pin fins explores hydrodynamic and thermal developing effects, as well as the impact of pin-fin cross-section shape and orientation. The ML-based predictive models are trained on numerically simulated synthetic data; the large amounts of accurate synthetic data required to train the models are generated using a custom-developed simulation automation framework with which numerical flow and heat transfer simulations can be run on thousands of geometries and boundary conditions with minimal user intervention. The proposed models provide accurate predictions of f and Nu, with a near-exact match to the training data as well as to unseen testing data. Furthermore, the outputs of the ANNs are inspected to propose new analytical correlations for estimating the hydrodynamic and thermal entrance lengths for flow through square pin-fin arrays. The ML models are also shown to be usable for fluids other than water by employing physics-based, Prandtl-number-dependent scaling relations.

The thesis further demonstrates the utility of the ML surrogate models for the design optimization of thermal management components through their integration into the topology optimization (TO) framework for heat exchanger design. Topology optimization is a computational design methodology for determining the optimal material distribution within a design space under given constraints. Its use in the design of heat exchangers and other thermal management devices has gained significant attention in recent years, particularly with the widespread availability of additive manufacturing techniques that offer geometric design flexibility. Particularly advantageous for heat exchanger design is the homogenization approach to topology optimization, which represents partial densities in the design domain using a physical unit-cell structure to achieve sub-grid-resolution features. This approach requires geometry-specific correlations for f and Nu to simulate the performance of designs and evaluate the objective function during the optimization process. Topology-optimized pin-fin-based component designs rely on additive manufacturing, posing production scalability challenges with current technologies. Furthermore, the demand for flow and thermal anisotropy in several applications adds complexity to the design requirements. To address these challenges, the focus shifts to traditional heat exchanger surface geometries that can be manufactured using conventional techniques and that also exhibit pronounced anisotropy in flow and heat transfer characteristics. Traditionally, these geometries are distributed uniformly across heat exchange surfaces; incorporating them into the topology optimization framework merges the strengths of both approaches, yielding mathematically optimized heat exchange surfaces with conventionally manufacturable designs. Offset strip fins, one such commonly used geometry, are chosen as the physical unit-cell structure to demonstrate the integration of ML-based surrogate models into the topology optimization framework. The large amount of data required to develop robust ML-based surrogate f and Nu models for axial and cross flow of water through offset strip fins is generated through numerical simulations of convective flow through these geometries; the data are compared against in-house-measured experimental data as well as data from the literature. To facilitate the integration of the ML models into topology optimization, a discrete adjoint method was developed to calculate sensitivities during optimization, circumventing the absence of analytical gradients.

Successful integration of the ML-based surrogate models into the topology optimization framework was demonstrated through the design optimization of a counterflow heat exchanger. The topology-optimized design outperformed benchmarks that used uniform, parametrically optimized offset strip fin arrays, exhibiting domain-specific enhancements such as peripheral flow paths for enhanced heat transfer and open channels to minimize pressure drop. This integration showcases the potential of combining ML models with topology optimization, providing a flexible framework that can be extended to a wide range of enhanced surface structure types and geometric configurations for which ML models can be trained. Thus, by enabling spatially localized optimization of enhanced surface structures using ML models, and consequently offering a pathway for expanding the design space to include many more surface structures in the topology optimization framework than previously possible, this thesis lays the foundation for advancing the design optimization of thermal-fluid components and systems using both additively and conventionally manufacturable geometries.
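The core pattern of this thesis, cheap surrogate evaluations of f and Nu inside a design loop, can be sketched as follows. The "surrogates" here are hypothetical closed-form stand-ins, not the trained ANNs, and the Nu / f^(1/3) goodness metric is one common area-goodness choice, not necessarily the thesis's objective:

```python
import numpy as np

def surrogate_Nu(aspect, Re):
    # Stand-in for a trained ANN surrogate (hypothetical closed form);
    # in the thesis this would be an MLP mapping geometry + Re to Nu.
    return 0.023 * Re**0.8 * (1 + 0.1 * np.tanh(aspect - 1))

def surrogate_f(aspect, Re):
    # Hypothetical friction-factor surrogate stand-in.
    return 0.079 * Re**-0.25 * (1 + 0.2 * aspect)

def best_design(Re=5000.0, aspects=np.linspace(0.25, 4.0, 64)):
    # Cheap surrogate evaluations make exhaustive design sweeps
    # affordable: maximize the area-goodness metric Nu / f**(1/3)
    # over a one-parameter family of channel shapes.
    score = surrogate_Nu(aspects, Re) / surrogate_f(aspects, Re) ** (1 / 3)
    return float(aspects[np.argmax(score)])
```

In the thesis, the same pattern scales up: the surrogate is queried at every point of the design domain on every topology optimization iteration, which is exactly where replacing a CFD solve with an ANN evaluation pays off.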
