1

Greenhouse Climate Optimization using Weather Forecasts and Machine Learning

Sedig, Victoria, Samuelsson, Evelina, Gumaelius, Nils, Lindgren, Andrea January 2019 (has links)
It is difficult for a small-scale local farmer to support themselves. In this investigation a program was developed to help Janne, a small-scale farmer from Sala, keep an energy-efficient greenhouse. The program applied machine learning to predict future temperatures in the greenhouse. When the temperature was predicted to be dangerously low for the plants and crops, Janne was warned via an HTML web page. To make the predictions as accurate as possible, different machine learning algorithms were evaluated. XGBoost was the most efficient and accurate method, with a cross-validation value of 2.33, and was used to make the predictions. The method was trained on historical data from inside and outside the greenhouse, provided by the consultancy Bitroot and by SMHI. To make predictions in real time, weather forecasts were collected from SMHI via their API. The program can be useful for a farmer and can be developed further in the future.
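The warning step described above can be sketched in a few lines: the trained model's temperature predictions are checked against a frost threshold, and any dangerously low forecast triggers the message shown on the web page. The threshold value and all function names here are assumptions for illustration, not taken from the thesis.

```python
# Hypothetical sketch of the frost-warning logic; the threshold is an assumption.
FROST_THRESHOLD_C = 5.0  # assumed minimum safe greenhouse temperature

def check_forecast(predicted_temps):
    """Return the forecast hours (indices) whose predicted temperature is dangerously low."""
    return [hour for hour, temp in enumerate(predicted_temps) if temp < FROST_THRESHOLD_C]

def format_warning(alerts):
    """Build the message that would be shown on the HTML warning page."""
    if not alerts:
        return "No frost risk in the forecast window."
    hours = ", ".join(f"+{h}h" for h in alerts)
    return f"Warning: greenhouse temperature predicted below {FROST_THRESHOLD_C} C at {hours}."

forecast = [8.2, 6.5, 4.9, 3.7, 5.1, 7.0]  # example model output, degrees C
alerts = check_forecast(forecast)
message = format_warning(alerts)
```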
2

Analys av prediktiv precision av maskininlärningsalgoritmer / Analysis of the predictive precision of machine learning algorithms

Remgård, Jonas January 2017 (has links)
Maskininlärning (eng: Machine Learning) har på senare tid blivit ett populärt ämne. En fråga som många användare ställer sig är hur mycket data det behövs för att få ett så korrekt svar som möjligt. Detta arbete undersöker relationen mellan inlärningsdata, mängd såväl som struktur, och hur väl algoritmen presterar. Fyra olika typer av datamängder (Iris, Digits, Symmetriskt och Dubbelsymmetriskt) studerades med hjälp av tre olika algoritmer (Support Vector Classifier, K-Nearest Neighbor och Decision Tree Classifier). Arbetet fastställer att alla tre algoritmers prestation förbättras vid större mängd inlärningsdata upp till en viss gräns, men att denna gräns är olika för varje algoritm. Datainstansernas struktur påverkar också algoritmernas prestation där dubbelsymmetri ger starkare prestation än enkelsymmetri. / In recent years Machine Learning has become a popular subject. A challenge that many users face is choosing the correct amount of training data. This study researches the relationship between the amount and structure of training data and the accuracy of the algorithm. Four different datasets (Iris, Digits, Symmetry and Double symmetry) were used with three different algorithms (Support Vector Classifier, K-Nearest Neighbor and Decision Tree Classifier). This study concludes that all algorithms perform better with more training data up to a certain limit, which is different for each algorithm. The structure of the dataset also affects the performance, where double symmetry gives greater performance than simple symmetry.
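The core of the experiment — accuracy as a function of training-set size — can be sketched with a 1-nearest-neighbour classifier standing in for the algorithms studied. The synthetic two-cluster data below is an assumption for the sketch, not one of the thesis datasets (Iris, Digits, Symmetry, Double symmetry).

```python
# Minimal sketch: measure classifier accuracy for a small vs. a large training set.
import random

def make_data(n, seed=0):
    """Two Gaussian clusters labelled 0 and 1 (invented data for illustration)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)
        centre = -2.0 if label == 0 else 2.0
        point = (centre + rng.gauss(0, 1), centre + rng.gauss(0, 1))
        data.append((point, label))
    return data

def predict_1nn(train, point):
    """Label of the closest training point (squared Euclidean distance)."""
    def dist(item):
        return (item[0][0] - point[0]) ** 2 + (item[0][1] - point[1]) ** 2
    return min(train, key=dist)[1]

def accuracy(train, test):
    correct = sum(predict_1nn(train, p) == y for p, y in test)
    return correct / len(test)

test_set = make_data(200, seed=1)
train_pool = make_data(200, seed=2)
acc_small = accuracy(train_pool[:5], test_set)    # few training examples
acc_large = accuracy(train_pool[:100], test_set)  # many training examples
```

With well-separated clusters, accuracy typically climbs as the training set grows, then plateaus — the "certain limit" the study identifies.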
3

Extracting Information from Encrypted Data using Deep Neural Networks

Lagerhjelm, Linus January 2018 (has links)
In this paper we explore various approaches to using deep neural networks to perform cryptanalysis, with the ultimate goal of having a deep neural network decipher encrypted data. We use long short-term memory networks to try to decipher encrypted text, and we use a convolutional neural network to perform classification tasks on encrypted MNIST images. We find that although the network is unable to decipher encrypted data, it is able to perform classification on encrypted data. We also find that the network's performance depends on which key was used to encrypt the data. These findings could be valuable for further research into the topic of cryptanalysis using deep neural networks.
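A toy illustration (not the thesis setup) of why classification can remain possible on encrypted inputs: under a fixed-key XOR stream cipher, every sample's byte at a given position passes through the same bijection, so class structure survives encryption even though the plaintext is hidden. The key, data, and nearest-centroid classifier below are invented for the sketch.

```python
# Toy demonstration: a simple classifier trained on XOR-encrypted samples
# can still separate the two classes, because the fixed key preserves structure.
KEY = bytes([0x5A, 0xC3, 0x17, 0x88])  # assumed fixed key for illustration

def xor_encrypt(plaintext):
    """Repeating-key XOR 'encryption' of a byte string."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(plaintext))

def nearest_centroid_fit(samples, labels):
    """Mean byte value per position for each class."""
    centroids = {}
    for label in set(labels):
        group = [s for s, l in zip(samples, labels) if l == label]
        centroids[label] = [sum(col) / len(col) for col in zip(*group)]
    return centroids

def nearest_centroid_predict(centroids, sample):
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, sample))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Class 0: low-valued bytes, class 1: high-valued bytes, all encrypted with the same key.
train = [bytes([10, 20, 30, 40]), bytes([12, 18, 33, 41]),
         bytes([200, 210, 220, 230]), bytes([198, 215, 219, 228])]
labels = [0, 0, 1, 1]
centroids = nearest_centroid_fit([xor_encrypt(s) for s in train], labels)
pred = nearest_centroid_predict(centroids, xor_encrypt(bytes([11, 19, 31, 42])))
```

A real cipher such as AES would destroy this structure far more thoroughly, which is consistent with the paper's finding that the choice of key and cipher matters.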
4

Predicting patient-specific outcome based on machine learning algorithms using genomic data of patients with locally advanced head and neck squamous cell carcinoma

Schmidt, Stefan 09 December 2019 (has links)
Aufgrund der heterogenen Tumorbiologie variiert der Therapieerfolg bei lokal fortgeschrittenen Plattenepithelkarzinomen stark, woraus ein mittleres 5-Jahres-Überleben dieser Patienten von etwa 50% resultiert. Um die Therapie besser an die Tumoreigenschaften anzupassen, muss die Therapieresistenz der Tumoren vor der Behandlung bestimmt werden. In dieser Dissertationsschrift werden Methoden aus dem Bereich des maschinellen Lernens angewandt, um Genexpressionsdaten zu analysieren und so Signaturen und Modelle zu erzeugen, die eine Klassifizierung der Tumoren in verschiedene Risikogruppen bezüglich der loko-regionären Tumorkontrolle erlauben. Für Patienten, die mit postoperativer Radiochemotherapie behandelt wurden, konnte eine 7-Gen-Signatur entwickelt und erfolgreich validiert werden. Außerdem konnte gezeigt werden, dass verschiedene Signaturen ähnlich gut zur Patientenklassifizierung geeignet sein können. Daher wurde eine Methode vorgeschlagen, die es erlaubt, verschiedene prognostische Modelle zu kombinieren. Weiterhin wurden verschiedene genbasierte Biomarker zwischen verschiedenen Genexpressionsmessmethoden verglichen.
In den resultierenden Patienteneinteilungen zeigten Biomarker, die auf Signaturen basieren, eine geringere Variabilität als Biomarker, die auf einzelnen Genen basieren. / Due to heterogeneous tumour biology, the treatment response of locally advanced head and neck squamous cell carcinoma differs largely between patients, resulting in a mean 5-year survival of about 50%. In order to adapt the treatment to the properties of the tumour, the therapy resistance of the tumours must be assessed before treatment. In this thesis, gene expression data were analysed to identify novel gene signatures and models that allow for stratifying patients into risk groups with low and high risk of loco-regional tumour recurrence. To identify those signatures, methods from the field of machine learning were applied. For patients treated with postoperative radiochemotherapy, a 7-gene signature was developed and successfully validated. Furthermore, it was shown that several models based on different gene signatures may be equally suitable for patient stratification. A method is presented that combines those distinct prognostic models.
In addition, gene-expression-based biomarkers were transferred between different gene expression measurement methods, with the result that signatures showed less variability in patient stratification than single-gene biomarkers.
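The consensus idea mentioned above — combining several prognostic models — can be sketched schematically: risk scores from multiple models are averaged and thresholded into low- and high-risk groups. The model outputs, patient identifiers, and threshold below are invented for illustration and are not from the thesis.

```python
# Schematic sketch of a consensus model: average per-model risk scores,
# then stratify each patient. All numbers here are invented examples.

def consensus_risk(scores, threshold=0.5):
    """Average the risk scores from several models and stratify the patient."""
    mean_score = sum(scores) / len(scores)
    return "high risk" if mean_score >= threshold else "low risk"

# Hypothetical outputs of two prognostic models per patient
# (each score is a predicted probability of loco-regional recurrence).
patients = {"P1": [0.8, 0.7], "P2": [0.3, 0.2], "P3": [0.55, 0.4]}
strata = {pid: consensus_risk(scores) for pid, scores in patients.items()}
```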
5

An approach to evaluate machine learning algorithms for appliance classification

Olsson, Charlie, Hurtig, David January 2019 (has links)
A cheap and powerful way to lower electricity usage in a home, and to make the residents more energy aware, is simply to show them which appliances are consuming electricity, so that they can decide to turn appliances off in order to save energy. Non-intrusive load monitoring (NILM) is a cost-effective solution that identifies different appliances by their unique load signatures while measuring the energy consumption at a single sensing point. In this thesis, a low-cost hardware platform is developed with the help of an Arduino and a single CT sensor to collect consumption signatures in real time. Three different algorithms and one recurrent neural network are implemented in Python to find out which of them is best suited for this kind of work. The tested algorithms are k-Nearest Neighbors, Random Forest and Decision Tree Classifier, and the recurrent neural network is a long short-term memory network.
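The classification step can be sketched as a k-nearest-neighbours vote over simple load-signature features. The two features (mean power in watts and a harmonic-distortion ratio), the appliance set, and the values below are illustrative assumptions, not measurements from the thesis hardware.

```python
# Hedged sketch of kNN appliance classification on invented load signatures.
from collections import Counter

# (mean_power_W, harmonic_ratio) -> appliance label; values are invented.
signatures = [
    ((60.0, 0.10), "lamp"),
    ((65.0, 0.12), "lamp"),
    ((1500.0, 0.05), "kettle"),
    ((1450.0, 0.06), "kettle"),
    ((120.0, 0.45), "fridge"),
    ((130.0, 0.40), "fridge"),
]

def classify(sample, k=3):
    """Majority label among the k training signatures nearest to the sample."""
    ranked = sorted(signatures,
                    key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], sample)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

appliance = classify((1480.0, 0.055))  # an unseen consumption signature
```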
6

Digitalizing the supply chain on the road to deal with global crises / Digitalisering av försörjningskedjan på väg för att hantera den globala krisen

Mo, Xitao January 2022 (has links)
Recent years have seen more and more companies digitize their logistics systems to varying degrees. Data sharing standards for the supply chain have appeared in different fields, such as ONE Record for air cargo transport and papiNet for the paper and forest industry. One of the existing challenges is that it is difficult to carry out horizontal integration between companies from different supply chains because they each use different data exchange standards, which makes wider-scale sustainability hard to achieve. DigiGoods (a Vinnova-funded project) focused on bringing improvements through digitization and data sharing among the participants in the logistics value chain. It proposes a data model to build its data exchange standard between supply chain partners for sharing data and synchronizing progress. This thesis explores how to integrate this standard with other existing standards and, on this basis, how to use machine learning to optimize forecasts in the supply chain. This thesis lays the foundation for the integration of standards in the supply chain, explores the application of machine learning in the supply chain, and applies machine learning algorithms in multiple stages to improve the accuracy of forecasts, thereby responding to the global crisis. The performance of eight machine learning models is tested and compared to find the optimal algorithm and parameters for each dataset. A prototype is implemented to combine the advantages of the eight models and demonstrate that multi-stage machine learning could improve the prediction results in the context of DigiGoods. / De senaste åren har allt fler företag digitaliserat sina logistiksystem i varierande grad. Datadelningsstandarder för leveranskedjan har dykt upp på olika områden, till exempel ONE Record för flygfrakt och papiNet för pappers- och skogsindustrin.
En av de befintliga utmaningarna är att det är svårt att genomföra horisontell integration mellan företag från olika leveranskedjor eftersom de använder olika datautbytesstandarder, vilket gör det svårt att uppnå hållbarhet i större skala. DigiGoods (ett Vinnova-finansierat projekt) fokuserade på att åstadkomma förbättringar genom digitalisering och datadelning bland deltagarna i logistikvärdekedjan. Projektet föreslår en datamodell för att bygga en standard för datautbyte mellan partner i leveranskedjan för att dela data och synkronisera framsteg. Denna avhandling utforskar hur denna standard kan integreras med andra befintliga standarder och, på denna grund, hur maskininlärning kan användas för att optimera prognoser i leveranskedjan. Avhandlingen lägger grunden för integration av standarder i försörjningskedjan, utforskar tillämpningen av maskininlärning i leveranskedjan och tillämpar maskininlärningsalgoritmer i flera steg för att förbättra prognosernas noggrannhet och därmed svara på den globala krisen. Prestandan hos åtta maskininlärningsmodeller testas och jämförs för att hitta den optimala algoritmen och parametrarna för varje datamängd. En prototyp implementeras för att kombinera fördelarna med de åtta modellerna och visa att maskininlärning i flera steg kan förbättra förutsägelseresultaten inom ramen för DigiGoods.
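The per-dataset model-selection loop described above can be sketched with a few toy forecasters: each candidate is scored one step ahead over a demand series, and the best one is kept for that dataset. The three forecasters and the demand numbers are assumptions standing in for the eight models and the supply-chain data used in the project.

```python
# Simplified sketch: score each candidate forecaster on a series, keep the best.

def forecast_mean(history):
    return sum(history) / len(history)

def forecast_naive(history):
    return history[-1]

def forecast_trend(history):
    return history[-1] + (history[-1] - history[-2])

MODELS = {"mean": forecast_mean, "naive": forecast_naive, "trend": forecast_trend}

def select_model(series):
    """One-step-ahead evaluation; return the best model name and its mean absolute error."""
    errors = {name: 0.0 for name in MODELS}
    for t in range(2, len(series)):
        for name, model in MODELS.items():
            errors[name] += abs(model(series[:t]) - series[t])
    steps = len(series) - 2
    best = min(errors, key=errors.get)
    return best, errors[best] / steps

demand = [100, 110, 120, 130, 140, 150]  # invented, steadily rising demand
best_name, best_mae = select_model(demand)
```

On a linear series the trend forecaster wins with zero error; on noisier data the winner differs, which is exactly why a per-dataset selection (and a multi-stage combination of winners) can help.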
7

Verktyg för tolkning av äldre planbestämmelser : Systematisk och automatiserad översättning av äldre planbestämmelser för framtidens samhällsbyggnadsprocess / Tools for transitioning older planning regulations : Systematic and uniform conversion of older planning regulations for the future urban planning process

Nordlund, Linn January 2024 (has links)
I samhället finns en stor efterfrågan på digitala detaljplaner eftersom digital och enhetlig data gör informationen analyserbar och sökbar. Även om nya detaljplaner ska vara digitala är den stora majoriteten av befintliga planer fysiska kartor med tillhörande bestämmelser på papper. För att underlätta transformationen och erhålla stringenta tolkningar av äldre planbestämmelser vid digitalisering av befintliga planer sammanställs här rådande vägledning om hur bestämmelser från 1949-1969 ska översättas. Efter att en enhetlig tolkning och formulering gjorts på ett dataset bestående av planbestämmelser från Värmdö kommun testas hur ett maskininlärningsprogram med hjälp av denna data kan generera en korrekt översättning från Boverkets planbestämmelsekatalog baserat på en äldre planbestämmelse. Trots begränsad data presterar programmet väl vid tolkningar av planbestämmelser från Värmdö kommun under perioden. Däremot genereras färre korrekta tolkningar när programmet testas på bestämmelser från Haninge kommun, något som tyder på att träningsdata behöver vara mer diversifierat för att programmet ska vara generellt användbart vid tolkning av planbestämmelser från perioden. I arbetet diskuteras även hur rådande vägledning om tolkning av planbestämmelser kan utvecklas för att skapa bättre förutsättningar för enhetlig och analyserbar data. / The actors within the urban planning process in Sweden, whether they are municipalities, real estate developers or individual homeowners, benefit from digital development plans because digital and uniform data is easier to access and use in analyses. New development plans are required to be digital, but the large majority of the existing plan stock consist of physical maps with regulations on paper. To facilitate the transformation and acquire uniform translations of older planning regulations when digitizing older plans, guidance from government authorities on how to understand regulations from 1949-1969 is here compiled.
After a systematic translation and formulation was conducted on a dataset consisting of planning regulations from Värmdö municipality, the data was used in a machine learning program to test to what extent the program can predict the correct modern translation of the old text. When tested on regulations from Värmdö municipality, the performance was satisfactory despite the limited dataset used for training the program. However, when the program was tested on regulations from another nearby municipality, the accuracy dropped significantly, implying that the training data needs to be more diversified for the program to be generally useful when translating regulations from the time period. The research also discusses how current guidelines on the topic can be developed to create better prerequisites for uniform and analysable data.
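The matching task at the heart of the program can be sketched as text similarity against a catalogue: an old regulation's wording is compared to each modern catalogue entry and the closest match is returned. The catalogue codes and descriptions below are invented stand-ins (not Boverket's actual catalogue text), and word overlap is a deliberately simplified stand-in for the trained model.

```python
# Illustrative sketch: match an old regulation to the most similar catalogue entry.
# The catalogue entries are invented examples, written without Swedish diacritics.

CATALOGUE = {
    "B": "bostader flerbostadshus boende",
    "H": "handel butiker detaljhandel",
    "P": "parkering garage uppstallning fordon",
}

def word_overlap(a, b):
    """Jaccard similarity between the word sets of two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def translate(old_regulation):
    """Catalogue code whose description best matches the old wording."""
    return max(CATALOGUE, key=lambda code: word_overlap(CATALOGUE[code], old_regulation))

code = translate("omradet far anvandas endast for handel och butiker")
```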
8

Dynamic Speed Adaptation for Curves using Machine Learning / Dynamisk hastighetsanpassning för kurvor med maskininlärning

Narmack, Kirilll January 2018 (has links)
The vehicles of tomorrow will be more sophisticated, intelligent and safe than the vehicles of today. The future is leaning towards fully autonomous vehicles. This degree project provides a data-driven solution for a speed adaptation system that can compute a vehicle speed for curves, suited to the underlying driving style of the driver, road properties and weather conditions. A speed adaptation system for curves aims to compute a vehicle speed suitable for curves that can be used in Advanced Driver Assistance Systems (ADAS) or in Autonomous Driving (AD) applications. This degree project was carried out at Volvo Car Corporation. Literature in the field of speed adaptation systems and factors affecting the vehicle speed in curves was reviewed. Naturalistic driving data was both collected by driving and extracted from Volvo's database, and further processed. A novel speed adaptation system for curves was invented, implemented and evaluated. This speed adaptation system is able to compute a vehicle speed suited to the underlying driving style of the driver, road properties and weather conditions. Two different artificial neural networks and two mathematical models were used to compute the desired vehicle speed in curves. These methods were compared and evaluated. / Morgondagens fordon kommer att vara mer sofistikerade, intelligenta och säkra än dagens fordon. Framtiden lutar mot fullständigt autonoma fordon. Detta examensarbete tillhandahåller en datadriven lösning för ett hastighetsanpassningssystem som kan beräkna ett fordons hastighet i kurvor som är lämpligt för förarens körstil, vägens egenskaper och rådande väder. Ett hastighetsanpassningssystem för kurvor har som mål att beräkna en fordonshastighet för kurvor som kan användas i Advanced Driver Assistance Systems (ADAS) eller Autonomous Driving (AD) applikationer. Detta examensarbete utfördes på Volvo Car Corporation.
Litteratur kring hastighetsanpassningssystem samt faktorer som påverkar ett fordons hastighet i kurvor studerades. Naturalistisk bilkörningsdata samlades genom att köra bil samt extraherades från Volvos databas och bearbetades. Ett nytt hastighetsanpassningssystem uppfanns, implementerades samt utvärderades. Hastighetsanpassningssystemet visade sig vara kapabelt till att beräkna en lämplig fordonshastighet för förarens körstil under rådande väderförhållanden och vägens egenskaper. Två olika artificiella neuronnätverk samt två matematiska modeller användes för att beräkna fordonets hastighet. Dessa metoder jämfördes och utvärderades.
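A physics-style baseline of the kind such a comparison plausibly includes can be sketched as a lateral-acceleration limit: curve speed bounded by v = sqrt(a_lat * r), scaled down in bad weather and capped by the speed limit. This is a hedged sketch, not one of the thesis's actual models, and all parameter values are assumptions rather than Volvo's.

```python
# Hedged sketch of a lateral-acceleration-limit curve-speed model; parameters assumed.
import math

def curve_speed(radius_m, a_lat=3.0, weather_factor=1.0, speed_limit_kmh=120.0):
    """Suitable curve speed in km/h for a curve of the given radius in metres.

    a_lat is the assumed comfortable lateral acceleration in m/s^2;
    weather_factor < 1 reduces the speed in adverse conditions.
    """
    v_ms = math.sqrt(a_lat * radius_m) * weather_factor
    return min(v_ms * 3.6, speed_limit_kmh)

dry = curve_speed(200.0)                      # moderate curve, dry road
wet = curve_speed(200.0, weather_factor=0.8)  # same curve, wet road
```

The learned models in the thesis go further by adapting this kind of speed to the individual driver's style, which a fixed a_lat cannot capture.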
