1

The combination of AI modelling techniques for the simulation of manufacturing processes

Korn, Stefan January 1998 (has links)
No description available.
2

Hydrological data interpolation using entropy

Ilunga, Masengo 17 November 2006 (has links)
Faculty of Engineering and Built Environment, School of Civil and Environmental Engineering, 0105772w, imasengo@yahoo.com / Missing data, insufficient record lengths and poor data quality are common problems in hydrology, and are far more prevalent in developing countries than in developed ones. This situation can severely affect the decisions of water systems managers (e.g. the reliability of a design, or the establishment of operating policies for water supply). Numerous data interpolation (infilling) techniques have therefore evolved in hydrology to deal with missing data. The current study presents a methodology that combines different approaches to coping with missing (limited) hydrological data, using the theories of entropy, artificial neural networks (ANN) and expectation-maximization (EM) techniques. The methodology is formulated into a model named ENANNEX. The study does not use any physical characteristics of the catchment areas; it deals only with the limited information (e.g. streamflow or rainfall) at the target gauge and its similar nearby base gauge(s). The entropy concept proved to be a versatile tool. It was first used to quantify the information content of hydrological variables (e.g. rainfall or streamflow). The same concept, through the directional information transfer index (DIT), was then used in the selection of base/subject gauges. Finally, the DIT notion was extended to evaluating the performance of the data infilling techniques (i.e. the ANN and EM techniques). The methodology was applied to annual total rainfall, annual mean flow, annual maximum flow and 6-month mean flow series of selected catchments in drainage region D “Orange” of South Africa. These data regimes are useful for design-oriented studies, flood studies, water balance studies, etc. The case studies showed that the DIT is as good an index for selecting a data infilling technique as other criteria, e.g. statistical and graphical ones, with the added feature of being a non-dimensional informational index. The data interpolation techniques, viz. ANNs and EM (existing methods both applied and not yet applied in hydrology), and their new features are also presented. The study showed that the standard techniques (e.g. backpropagation, BP, and EM) as well as their respective variants could be selected for estimating missing hydrological data, while also considering the capability of the different techniques to maintain the statistical characteristics (e.g. mean, variance) of the target gauge. The relationship between the accuracy of the estimated series (obtained by applying a data infilling technique) and the gap duration was then investigated through the DIT notion; a decay (power or exponential) function was shown to describe that relationship well. In other words, the amount of uncertainty removed from the target station in a station pair, via a given technique, can be estimated for a given gap duration. The performance of the different techniques was found to depend on the gap duration at the target gauge, the station pair involved in the missing-data estimation and the type of data regime.
The study also showed that it is possible, through the entropy approach, to make a preliminary assessment of model performance for simulating runoff data at a site where no records exist at all: a case study was conducted at the Bedford site in South Africa. Two simulation models, viz. RAFLER and WRSM2000, were assessed in this respect, and both were found suitable for simulating flows at Bedford.
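The directional information transfer (DIT) index used above for gauge selection and for rating infilling performance is, in essence, the mutual information between a base and a target series scaled by the entropy of the base series. The sketch below shows one histogram-based way such an index might be computed; the binning, the synthetic gauge series and the exact normalisation are assumptions for illustration, not the formulation used in the thesis.

```python
import numpy as np

def entropy(x, bins=10):
    """Marginal (Shannon) entropy of a series from a histogram estimate."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def joint_entropy(x, y, bins=10):
    """Joint entropy of two series from a 2-D histogram estimate."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def dit(base, target, bins=10):
    """Directional information transfer from base to target:
    mutual information divided by the entropy of the base gauge."""
    h_base = entropy(base, bins)
    mutual_info = h_base + entropy(target, bins) - joint_entropy(base, target, bins)
    return mutual_info / h_base

# Hypothetical annual rainfall series at a base gauge and a correlated target gauge
rng = np.random.default_rng(0)
base = rng.gamma(shape=4.0, scale=150.0, size=40)
target = 0.8 * base + rng.normal(0.0, 60.0, size=40)
print(f"DIT(base -> target) = {dit(base, target):.3f}")  # closer to 1 => more transferable information
```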
3

Artificial Neural Network-Based Approaches for Modeling the Radiated Emissions from Printed Circuit Board Structures and Shields

Kvale, David Thomas January 2010 (has links)
No description available.
4

Utilizing state-of-art NeuroES and GPGPU to optimize Mario AI

Lövgren, Hans January 2014 (has links)
Context. Reinforcement learning (RL) is a time-consuming effort that also requires a lot of computational power. There are two main approaches to improving RL efficiency: the theoretical, mathematical and algorithmic approach, and the practical implementation approach. In this study the approaches are combined in an attempt to reduce time consumption.
Objectives. We investigate whether modern hardware and software (GPGPU) combined with a state-of-the-art evolution strategy, CMA-Neuro-ES, can increase the efficiency of solving RL problems.
Methods. Both an implementational and an experimental research method are used. The implementational research mainly involves developing and setting up an experimental framework in which to measure efficiency through benchmarking; the GPGPU/ES solution is later developed within this framework. Using this framework, experiments are conducted on a conventional sequential solution as well as on our own parallel GPGPU solution.
Results. The results indicate that utilizing GPGPU and a state-of-the-art ES to solve RL problems can be more efficient in terms of time consumption than a conventional sequential CPU approach.
Conclusions. We conclude that our proposed solution requires additional work and research but already shows promise in this initial study. As the study focuses primarily on generating benchmark performance data from the experiments, it lacks data on RL efficiency and thus motivation for using our approach. However, we do conclude that the suggested GPGPU approach allows less time-consuming RL problem solving.
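To give a rough picture of the Neuro-ES idea this work builds on, the sketch below evolves the weights of a tiny feed-forward controller with a simple (mu, lambda) evolution strategy. It is deliberately not the CMA-ES variant or the GPGPU implementation used in the study, and the fitness function is a toy stand-in for an episode return; all sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

N_IN, N_HIDDEN, N_OUT = 8, 16, 4            # tiny controller: 8 inputs, 4 actions
N_PARAMS = N_IN * N_HIDDEN + N_HIDDEN * N_OUT

def policy(params, obs):
    """Feed-forward controller whose flattened weights are the ES genome."""
    w1 = params[:N_IN * N_HIDDEN].reshape(N_IN, N_HIDDEN)
    w2 = params[N_IN * N_HIDDEN:].reshape(N_HIDDEN, N_OUT)
    return np.tanh(obs @ w1) @ w2

def fitness(params):
    """Toy stand-in for an episode return (e.g. distance travelled by the agent)."""
    obs = rng.normal(size=(32, N_IN))
    actions = policy(params, obs)
    return -np.mean((actions - 1.0) ** 2)    # higher is better

mu, lam, sigma = 8, 32, 0.1                  # parents, offspring, mutation step size
mean = np.zeros(N_PARAMS)

for gen in range(50):
    offspring = mean + sigma * rng.normal(size=(lam, N_PARAMS))
    scores = np.array([fitness(ind) for ind in offspring])
    elite = offspring[np.argsort(scores)[-mu:]]   # keep the best mu genomes
    mean = elite.mean(axis=0)                     # recombine into the new search mean
    if gen % 10 == 0:
        print(f"gen {gen:3d}  best fitness {scores.max():.4f}")
```

In the GPGPU setting the inner loop over offspring is the natural place to parallelise, since each genome's fitness evaluation is independent.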
5

Drying shrinkage of self-compacting concrete incorporating fly ash

Abdalhmid, Jamila M.A. January 2019 (has links)
The present research investigates the long-term (more than two years) free and confined drying shrinkage magnitude and behaviour of self-compacting concrete (SCC) and compares them with normal concrete (NC). For all SCC mixes, Portland cement was replaced with 0-60% fly ash (FA), while fine and coarse aggregates were kept constant at 890 kg/m3 and 780 kg/m3, respectively. Two water-binder ratios of 0.44 and 0.33 were examined for both SCCs and NCs. Fresh properties of SCCs such as filling ability, passing ability, viscosity and resistance to segregation, and hardened properties such as compressive and flexural strengths, water absorption and density of SCCs and NCs, were also determined. Experimental free drying shrinkage results from this study, together with a comprehensive database collected from different sources in the literature, were compared with five existing models, namely the ACI 209R-92 model, the BSEN-92 model, the ACI 209R-92 (Huo) model, the B3 model and the GL2000 model. To assess the quality of the predictive models, the influence of various parameters (compressive strength, cement content, water content and relative humidity) on the drying shrinkage strain was studied. Artificial neural network models (ANNM) for predicting the drying shrinkage strain of SCC were developed using the same data used for the existing models. Two ANNM sets, namely ANNM1 and ANNM2, with different numbers of hidden-layer neurones were constructed, and the results given by the ANNM1 model were compared with those obtained from the five existing prediction models. The results showed that using up to 60% FA as cement replacement can produce SCC with a compressive strength as high as 30 MPa and a low drying shrinkage strain. The long-term drying shrinkage of SCCs from 356 to 1000 days was higher than that of NCs. Concrete-filled elliptical tubes (CFET) with self-compacting concrete containing up to 60% FA are recommended for use in construction in order to prevent confined drying strain. The ACI 209R-92 model provided a better prediction of drying shrinkage than the other four models; however, very high predictability with high accuracy was achieved with the ANNM1 model, with a mean of 1.004. Moreover, with ANNM models it is easy to include any of the factors affecting drying shrinkage among the input parameters in order to predict the drying shrinkage strain of SCC. / Ministry of Higher Education, Libya
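To make the ANNM idea concrete, the sketch below fits a small multilayer perceptron to predict drying shrinkage strain from the kind of input parameters discussed above (compressive strength, cement content, water content, relative humidity, drying time). The synthetic data, feature set and network size are assumptions for illustration only; they do not reproduce the ANNM1/ANNM2 models or their experimental database.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500

# Hypothetical inputs: compressive strength (MPa), cement content (kg/m3),
# water content (kg/m3), relative humidity (%), drying time (days)
X = np.column_stack([
    rng.uniform(20, 80, n),
    rng.uniform(300, 550, n),
    rng.uniform(150, 220, n),
    rng.uniform(40, 90, n),
    rng.uniform(7, 1000, n),
])

# Synthetic shrinkage strain (microstrain) with a plausible trend, not real data
y = (600 - 3 * X[:, 0] + 0.4 * X[:, 2] - 4 * X[:, 3]
     + 200 * X[:, 4] / (X[:, 4] + 100) + rng.normal(0, 20, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer, mirroring the idea of varying the hidden-layer size (ANNM1 vs ANNM2)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```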
6

Empirical Investigation of the Effect of Pruning Artificial Neural Networks With Respect to Increased Generalization Ability

Weman, Nicklas January 2010 (has links)
This thesis covers the basics of artificial neural networks, with a focus on supervised learning, pruning and the problem of achieving good generalization ability. An empirical investigation is conducted on twelve different problems originating from the Proben1 benchmark collection. The results indicate that pruning is more likely to improve generalization if the data is sensitive to overfitting or if the networks are likely to be trapped in local minima.
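For readers unfamiliar with pruning, the snippet below shows one common variant, magnitude-based (L1) unstructured pruning of a small fully connected network in PyTorch. It is a generic illustration, not the pruning procedure or the Proben1 experimental setup used in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small fully connected network of the kind used for benchmark-style problems
model = nn.Sequential(
    nn.Linear(20, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

# Mask out the 30% smallest-magnitude weights in each linear layer.
# Retraining afterwards lets the remaining weights compensate, which is
# where any gain in generalization ability would come from.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

sparsity = float((model[0].weight == 0).sum()) / model[0].weight.numel()
print(f"sparsity of first layer after pruning: {sparsity:.2f}")
```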
7

Automated Pulmonary Nodule Detection on Computed Tomography Images with 3D Deep Convolutional Neural Network

Broyelle, Antoine January 2018 (has links)
Object detection on natural images has become a single-stage, end-to-end process thanks to recent breakthroughs in deep neural networks. By contrast, automated pulmonary nodule detection is usually a three-step method: lung segmentation, generation of nodule candidates and false-positive reduction. This project tackles the nodule detection problem with a single-stage model using a deep neural network. Pulmonary nodules have unique shapes and characteristics which are not present outside the lungs. We expect the model to capture these characteristics and to focus only on elements inside the lungs when working on raw CT scans (without segmentation). Nodules are small, scattered and infrequent. We show that a well-trained deep neural network can spot the relevant features and keep the number of region proposals low without any extra preprocessing or post-processing. Owing to the visual nature of the task, we designed a three-dimensional convolutional neural network with residual connections, inspired by the region proposal network of the Faster R-CNN detection framework. The evaluation is performed on the LUNA16 dataset. The final score is 0.826, the average sensitivity at 0.125, 0.25, 0.5, 1, 2, 4 and 8 false positives per scan. This can be considered an average score compared with other submissions to the challenge; however, the solution described here was trained end to end and has fewer trainable parameters.
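A minimal sketch of the kind of three-dimensional residual building block described above, written in PyTorch; the channel counts, patch size and layout are illustrative assumptions and do not reproduce the network trained on LUNA16.

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Two 3x3x3 convolutions with a skip connection, operating on CT volumes."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)   # residual (skip) connection

# A toy CT patch: batch of 1, a single channel, 32^3 voxels
patch = torch.randn(1, 1, 32, 32, 32)
stem = nn.Conv3d(1, 16, kernel_size=3, padding=1)
block = ResidualBlock3D(16)
features = block(stem(patch))
print(features.shape)   # torch.Size([1, 16, 32, 32, 32])
```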
8

Assessment of lung damages from CT images using machine learning methods. / Bedömning av lungskador från CT-bilder med maskininlärningsmetoder.

Chometon, Quentin January 2018 (has links)
Lung cancer is the most commonly diagnosed cancer in the world and it is mainly found incidentally. New technologies, and more specifically artificial intelligence, have lately attracted great interest in the medical field, as they can automate tasks or bring new information to the medical staff. Much research has been done on the detection or classification of lung cancer, but these works operate on local regions of interest; only a few consider the full CT scan. The aim of this thesis was to assess lung damage from CT images using new machine learning methods. First, single predictors were learned with a 3D ResNet architecture for cancer, emphysema and opacities. Emphysema was learned by the network with an AUC of 0.79, whereas the cancer and opacity predictions were not much better than chance (AUC = 0.61 for both). Second, a multi-task network was used to predict the factors jointly. Training with no prior knowledge and a transfer-learning approach using self-supervision were compared. In the multi-task setting, the transfer-learning approach reached an AUC of 0.78 for emphysema, versus 0.60 without pre-training, and an AUC of 0.61 for opacities. Moreover, pre-training enabled a single multi-task network to reach the same performance as each single-factor predictor, which saves a lot of computational time. Finally, a risk score can be derived from the trained model so that this information can be used in a clinical context.
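The multi-task idea described above amounts to a shared 3D feature extractor with one output head per factor (cancer, emphysema, opacities), each evaluated by its AUC. The sketch below illustrates that structure in PyTorch; the backbone, batch and labels are toy assumptions, not the network or data used in the thesis.

```python
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class MultiTaskCTNet(nn.Module):
    """Shared 3D feature extractor with one binary head per lung-damage factor."""
    def __init__(self, tasks=("cancer", "emphysema", "opacity")):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict({t: nn.Linear(8, 1) for t in tasks})

    def forward(self, x):
        features = self.backbone(x)
        return {t: head(features).squeeze(1) for t, head in self.heads.items()}

model = MultiTaskCTNet()
scans = torch.randn(16, 1, 32, 32, 32)                     # toy batch of CT volumes
labels = {t: torch.randint(0, 2, (16,)).float() for t in model.heads}

logits = model(scans)
loss = sum(nn.functional.binary_cross_entropy_with_logits(logits[t], labels[t])
           for t in logits)                                # summed per-task losses
print(f"combined multi-task loss: {loss.item():.3f}")

# AUC per task, as reported in the abstract
for t in logits:
    auc = roc_auc_score(labels[t].numpy(), torch.sigmoid(logits[t]).detach().numpy())
    print(t, round(auc, 2))
```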
9

Object detection for autonomous trash and litter collection / Objektdetektering för autonom skräpupplockning

Edström, Simon January 2022 (has links)
Trash and litter discarded on the street is a large environmental issue in Sweden and across the globe. In Swedish cities alone it is estimated that 1.8 billion articles of trash are thrown onto the street each year, constituting around 3 kilotons of waste. One avenue to combat this societal and environmental problem is to use robotics and AI: a robot could learn to detect trash in the wild and collect it in order to clean the environment. A key component of such a robot would be its computer vision system, which allows it to detect litter and trash. Such systems are not trivially designed or implemented, and have only recently reached high enough performance to work in industrial contexts. This master's thesis focuses on creating and analysing such an algorithm by gathering data for use in a machine learning model, developing an object detection pipeline, and evaluating the performance of that pipeline as its components are varied. Specifically, hyperparameter optimisation, pseudo-labelling and the preprocessing methods tiling and illumination normalisation were implemented and analysed. The thesis shows that it is possible to create an object detection algorithm with high performance using currently available state-of-the-art methods. Within the analysed context, hyperparameter optimisation did not significantly improve performance, and pseudo-labelling could only be analysed briefly but showed promising results. Tiling greatly increased the mean average precision (mAP) for the detection of small objects, such as cigarette butts, but decreased the mAP for large objects, and illumination normalisation improved the mAP for images that were brightly lit. Both preprocessing methods reduced the frame rate at which a full detector could run, whilst pseudo-labelling and hyperparameter optimisation greatly increased training times.
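A simple sketch of the tiling preprocessing discussed above: a high-resolution frame is split into overlapping tiles so that small objects such as cigarette butts occupy more of each input passed to the detector. The tile size, overlap and frame dimensions here are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np

def _starts(length, tile, stride):
    """Tile start positions covering the full axis, including a final edge-aligned tile."""
    starts = list(range(0, max(length - tile, 0) + 1, stride))
    if starts[-1] != max(length - tile, 0):
        starts.append(max(length - tile, 0))
    return starts

def tile_image(image, tile_size=640, overlap=0.2):
    """Split an H x W x C image into overlapping square tiles.

    Returns a list of (tile, (y_offset, x_offset)) pairs; the offsets are needed
    to map detections in tile coordinates back to the full image.
    """
    stride = int(tile_size * (1 - overlap))
    h, w = image.shape[:2]
    tiles = []
    for y in _starts(h, tile_size, stride):
        for x in _starts(w, tile_size, stride):
            tiles.append((image[y:y + tile_size, x:x + tile_size], (y, x)))
    return tiles

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # hypothetical camera frame
tiles = tile_image(frame)
print(f"{len(tiles)} tiles of shape {tiles[0][0].shape}")
```

The trade-off noted in the abstract follows directly from this: running the detector once per tile instead of once per frame improves small-object mAP but lowers the achievable frames per second.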
