  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Může modelová kombinace řídit prognózu volatility? / Can Model Combination Improve Volatility Forecasting?

Tyuleubekov, Sabyrzhan January 2019 (has links)
Nowadays, there is a wide range of forecasting methods, and forecasters encounter several challenges when selecting an optimal method for volatility forecasting. In order to make use of this wide selection of forecasts, this thesis tests multiple forecast combination methods. Notwithstanding the plethora of forecast combination literature, the combination of traditional methods with machine learning methods is relatively rare. We implement the following combination techniques: (1) simple mean forecast combination, (2) OLS combination, (3) ARIMA on the OLS combined fit, (4) NNAR on the OLS combined fit and (5) KNN regression on the OLS combined fit. To the best of our knowledge, the latter two combination techniques have not yet been researched in the academic literature. Additionally, this thesis should help a forecaster with three complicating choices: (1) the choice of volatility proxy, (2) the choice of forecast accuracy measure and (3) the choice of training sample length. We found that squared and absolute return volatility proxies are much less efficient than the Parkinson and Garman-Klass volatility proxies. Likewise, we show that the forecast accuracy measure (RMSE, MAE or MAPE) influences the ranking of optimal forecasts. Finally, we found that though forecast quality does not depend on training sample length, we see that forecast...
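The simplest of the listed techniques, equal-weight (simple mean) forecast combination, can be sketched in a few lines. The volatility series below are made-up toy numbers, not data from the thesis:

```python
import math

def rmse(forecast, actual):
    """Root mean squared error between two equal-length series."""
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual))

def mean_combination(*forecasts):
    """Equal-weight (simple mean) combination of several forecast series."""
    return [sum(vals) / len(vals) for vals in zip(*forecasts)]

# Toy realized-volatility series and two imperfect forecasts
# (illustrative numbers only).
actual = [1.0, 1.2, 0.9, 1.1, 1.3]
f1 = [1.1, 1.0, 1.0, 1.2, 1.1]   # biased low on spikes
f2 = [0.8, 1.3, 0.7, 1.0, 1.5]   # noisier but roughly unbiased

combo = mean_combination(f1, f2)
print(rmse(f1, actual), rmse(f2, actual), rmse(combo, actual))
```

On this toy data the combination's errors partially cancel, so the combined RMSE beats both individual forecasts — the standard motivation for forecast combination.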
2

Deep Learning of Model Correction and Discontinuity Detection

Zhou, Zixu 26 August 2022 (has links)
No description available.
3

Neuronové modelování matematických struktur a jejich rozšíření / Neural modelling of mathematical structures and their extensions

Smolík, Martin January 2019 (has links)
In this thesis we aim to build algebraic models in a computer using machine learning methods, in particular neural networks. We start with a set of axioms that describe functions, constants and relations, and use them to train neural networks approximating them. Every element is represented as a real vector, so that neural networks can operate on them. We also explore and compare different representations. The main focus of this thesis is groups. We train neural representations for cyclic (the simplest) and symmetric (the most complex) groups. Another part of this thesis is experiments with extending such trained models by introducing new "algebraic" elements, not unlike the classic extension of the rational numbers Q[√2].
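A hand-crafted (not learned) vector representation illustrates the idea of embedding group elements as real vectors: elements of the cyclic group Z_n can be placed on the unit circle, where the group operation becomes angle addition. This is only a sketch of the representation concept, not the thesis's trained networks:

```python
import math

def embed(k, n):
    """Represent element k of the cyclic group Z_n as a 2-D real vector
    on the unit circle (angle 2*pi*k/n)."""
    theta = 2 * math.pi * k / n
    return (math.cos(theta), math.sin(theta))

def compose(u, v):
    """The group operation in the embedded space: complex multiplication,
    i.e. rotation by the combined angle."""
    return (u[0] * v[0] - u[1] * v[1], u[0] * v[1] + u[1] * v[0])

n = 5
for a in range(n):
    for b in range(n):
        got = compose(embed(a, n), embed(b, n))
        want = embed((a + b) % n, n)
        assert abs(got[0] - want[0]) < 1e-9 and abs(got[1] - want[1]) < 1e-9
print("circle embedding respects the group law of Z_5")
```

A learned representation would replace both `embed` and `compose` with trained networks; the check above is exactly the kind of axiom (closure under the operation) such training would enforce on the vectors.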
5

The role of high-resolution dataset on developing of coastal wind-driven waves model in low energy system

Baghbani, Ramin 10 May 2024 (has links) (PDF)
The spatial variation of wave climate plays a crucial role in erosion, sediment transport, and the design of management actions in coastal areas. Low energy wave systems occur frequently and over a wide range of geographical areas, yet there is a lack of studies assessing wave model performance in low-energy environments at a regional scale. Therefore, this research aims to model a low energy wave system using a high-resolution dataset. The specific objectives of this study involve 1) using cluster analysis and extensive field measurements to understand the spatial behavior of ocean waves, 2) developing a physics-based model of wind-driven waves using high-resolution measurements, and 3) comparing machine learning and physics-based models in simulating wave climates. The findings of this study indicate that clustering can effectively assess the spatial variation of the wave climate in a low energy system, with depth identified as the most important influencing factor. Additionally, the physics-based model showed varying performance across different locations within the study area, accurately simulating wave climates in some locations but not in others. Finally, the machine learning model demonstrated overall acceptable performance and accuracy in simulating wave climates and revealed better agreement with observed data in estimating central tendency than the physics-based model, while the physics-based model performed more favorably for dispersion metrics. These findings contribute to our understanding of coastal dynamics. By providing insights into the spatial behavior of wave climates in low energy systems and comparing the performance of physics-based and machine learning models, this research contributes to the development of effective coastal management strategies and enhances our understanding of coastal processes.
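The clustering step in objective 1 can be illustrated with a minimal k-means on made-up (depth, wave height) observations; the data, the feature choice and k=2 are assumptions for the sketch, not values from the study:

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: returns cluster centers and point assignments."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to the nearest center
        labels = [min(range(k), key=lambda j: dist2(p, centers[j])) for p in points]
        # recompute each center as the mean of its assigned points
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centers, labels

# Hypothetical (depth [m], significant wave height [m]) observations:
# a shallow low-energy group and a deeper, slightly more energetic group.
obs = [(1.0, 0.10), (1.2, 0.12), (0.9, 0.08),
       (4.8, 0.30), (5.2, 0.35), (5.0, 0.28)]
centers, labels = kmeans(obs, k=2)
print(sorted(labels))
```

With depth dominating the distance metric, the two depth regimes separate cleanly — consistent with the abstract's finding that depth is the most important factor distinguishing wave-climate clusters.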
6

Application of numerical weather prediction with machine learning techniques to improve middle latitude rapid cyclogenesis forecasting

Snyder, Colin Matthew 13 August 2024 (has links) (PDF)
The goal of this study was, first, to determine the baseline Global Forecast System (GFS) skill in forecasting borderline (non-bomb: 0.75-0.95, bomb: 1.0-1.25) bomb events, and second, to determine whether machine learning (ML) techniques used as a post-processor can improve the forecasts. This was accomplished by using the TempestExtremes cyclone tracking software and ERA5 analysis to develop a case list during the period of October to March for the years 2008-2021. Based on the case list, GFS 24-hour forecasts of atmospheric base state variables in 10-degree by 10-degree cyclone-centered subdomains were compressed using S-mode Principal Component Analysis. A genetic algorithm was then used to determine the best predictors. These predictors were then used to train a logistic regression, as a baseline of ML skill, and a Support Vector Machine (SVM) model. Both the logistic regression and the SVM improved the bias over the GFS baseline, but only the logistic regression improved skill.
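The logistic-regression post-processor can be sketched with a one-feature gradient-descent fit on invented data; the predictor (a forecast deepening rate in bergerons) and the labels are hypothetical, and the actual study used several PCA-compressed predictors selected by a genetic algorithm:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a one-feature logistic regression by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # gradient of log-loss
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical predictor: GFS 24-h deepening rate;
# label: 1 if the analysis verified the case as a bomb.
xs = [0.70, 0.80, 0.85, 0.95, 1.05, 1.15, 1.20, 1.30]
ys = [0,    0,    0,    0,    1,    1,    1,    1]
w, b = train_logistic(xs, ys)
print(sigmoid(w * 1.25 + b))
```

The fitted model places its decision boundary near the borderline deepening rate of 1.0, which is exactly the regime the study targets.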
7

Continuous Video Quality of Experience Modelling using Machine Learning Model Trees

Chapala, Usha Kiran, Peteti, Sridhar January 1996 (has links)
Adaptive video streaming is perpetually influenced by unpredictable network conditions, which cause playback interruptions like stalling, rebuffering and video bit rate fluctuations. This leads to potential degradation of end-user Quality of Experience (QoE) and may make users churn from the service. Video QoE modelling that precisely predicts the end user's QoE under these unstable conditions is therefore of immediate interest. The service provider requires a root cause analysis for these degradations, but such sudden changes in trend are not visible from monitoring data of the underlying network service. It is thus challenging to detect these changes and model the instantaneous QoE. For this modelling, continuous-time QoE ratings are taken into consideration rather than the overall end QoE rating per video. To reduce the risk of users churning, network providers should deliver the best quality to the users. In this thesis, we propose QoE modelling to analyze how user reactions change over time using machine learning models. The machine learning models are used to predict the QoE ratings and the patterns of change in the ratings. We test the model on a publicly available video quality dataset which contains users' subjective QoE ratings under network distortions. The M5P model tree algorithm is used for the prediction of user ratings over time. The M5P model yields mathematical equations and thereby leads to further insights. Results show that the model tree is a good approach for predicting continuous QoE and for detecting change points in the ratings, and they indicate to which extent these algorithms can be used to estimate changes. The analysis of the model provides valuable insights into the exponential transitions between different levels of predicted ratings. The outcome of the analysis explains the user behavior: when the quality decreases, the user ratings decrease faster than they increase when quality improves over time.
The earlier work on the exponential transitions of instantaneous QoE over time is supported by the model tree, relating user reactions to sudden changes such as video freezes.
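M5P builds a decision tree with linear regression models at its leaves. A one-split "model tree" in that spirit, with a hand-picked split point and invented rating data, gives the flavor of piecewise-linear QoE prediction (the thesis itself used the full M5P algorithm on a public dataset):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    return a, my - a * mx

def fit_stump_model_tree(xs, ys, threshold):
    """A one-split 'model tree': a separate linear model on each side of
    the split, mimicking M5P's linear models at the leaves."""
    left = [(x, y) for x, y in zip(xs, ys) if x <= threshold]
    right = [(x, y) for x, y in zip(xs, ys) if x > threshold]
    return fit_line(*zip(*left)), fit_line(*zip(*right))

def predict(tree, threshold, x):
    (al, bl), (ar, br) = tree
    return al * x + bl if x <= threshold else ar * x + br

# Invented data: seconds since a quality drop vs. instantaneous QoE rating;
# ratings fall quickly right after the drop (x <= 5), then recover slowly.
xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [4.5, 3.9, 3.3, 2.7, 2.1, 1.5, 1.7, 1.9, 2.1, 2.3, 2.5]
tree = fit_stump_model_tree(xs, ys, threshold=5)
print(predict(tree, 5, 2), predict(tree, 5, 8))
```

The two fitted slopes (steep negative before the split, shallow positive after) mirror the abstract's finding that ratings drop faster after a quality decrease than they rise after a quality increase.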
8

Spent Nuclear Fuel under Repository Conditions : Update and Expansion of Database and Development of Machine Learning Models / Utbränt kärnbränsle under djupförvarsbetingelser : Uppdatering och expansion av databas samt utveckling av maskininlärningsmodeller

Abada, Maria January 2022 (has links)
Förbrukat kärnbränsle är mycket radioaktivt och behöver därför lagras i djupa geologiska förvar i tusentals år innan det säkert kan återföras till naturen. På grund av de långa lagringsperioderna görs säkerhetsanalyser av de djupa geologiska förvaren. Under säkerhetsanalyserna görs upplösningsexperiment på förbrukat kärnbränsle för att utvärdera konsekvenserna av att grundvatten läcker in i bränslet vid barriärbrott. Dessa experiment är både dyra och tidskrävande, varför beräkningsmodeller som kan förutsäga förbrukat kärnbränsles upplösningsbeteende är önskvärda. Denna avhandling fokuserar på att samla in tillgängliga experimentella data från upplösningsexperiment för att uppdatera och utöka en databas. Med hjälp av databasen har upplösningsbeteendet för varje radionuklid utvärderats och jämförts med tidigare kunskap från befintlig litteratur. Även om det var svårt att vara avgörande om beteendet hos element där en begränsad mängd data fanns tillgänglig, motsvarar de upplösningsbeteenden som hittats för olika radionuklider i denna avhandling inte bara tidigare studier utan ger också ett verktyg för att hantera och jämföra förbrukat kärnbränsles upplösningsdata från olika utgångsmaterial, bestrålningshistorik och betingelser under upplösning. Dessutom gjorde sammanställningen av en så stor mängd experimentella data det möjligt att förstå var framtida experimentella ansträngningar bör fokuseras, exempelvis finns det en brist på data under reducerande förhållanden. Dessutom utvecklades och kördes maskininlärningsmodeller med hjälp av Artificial Neural Network (ANN), Random Forest (RF) och XGBoost-algoritmer med hjälp av databasen, varefter prestandan utvärderades. Prestanda för varje algoritm jämfördes för att få en förståelse för vilken modell som presterade bäst, men också för att förstå om dessa typer av modeller är lämpliga verktyg för att förutspå förbrukat kärnbränsles upplösningsbeteende.
Den bäst presterande modellen, med träning och test R2 resultat nära 1, var XGBoost-modellen. Även om XGBoost hade en hög prestanda, drogs slutsatsen att mer experimentell data behövs innan maskininlärningsmodeller kan användas i verkliga situationer. / Spent nuclear fuel (SNF) is highly radioactive and therefore needs to be stored in deep geological repositories for thousands of years before it can be safely returned to nature. Due to the long storage times, performance assessments (PA) of the deep geological repositories are made. During PA, dissolution experiments on SNF are made to evaluate the consequences of groundwater leaking into the fuel canister in case of barrier failure. These experiments are both expensive and time consuming, which is why computational models that can predict SNF dissolution behaviour are desirable. This thesis focuses on gathering available experimental data of dissolution experiments to update and expand a database. Using the database, the dissolution behaviour of each radionuclide (RN) has been evaluated and compared to previous knowledge from existing literature. While it was difficult to be conclusive on the behaviour of elements where a limited amount of data was available, the dissolution behaviours found for different radionuclides in this thesis not only correspond to previous studies but also provide a tool to manage and compare SNF leaching data from different starting materials, irradiation history and leaching conditions. Moreover, the compilation of such a large amount of experimental data made it possible to understand where future experimental efforts should be focused, e.g. there is a lack of data during reducing conditions. In addition, machine learning models using Artificial Neural Network (ANN), Random Forest (RF) and XGBoost algorithms were developed and run using the database, after which the performances were evaluated.
The performances of each algorithm were compared to get an understanding of which model performed best, but also to understand whether these kinds of models are suitable tools for SNF dissolution behaviour predictions. The best performing model, with training and test R2 scores close to 1, was the XGBoost model. Although XGBoost had a high performance, it was concluded that more experimental data is needed before machine learning models can be used in real situations.
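The core idea behind XGBoost, gradient boosting, can be sketched without the library itself: repeatedly fit a small tree (here a one-split stump) to the current residuals. The burnup/dissolution numbers below are invented for illustration, and the sketch omits XGBoost's regularisation and second-order refinements:

```python
def fit_stump(xs, residuals):
    """Best single-split regression stump (threshold + two leaf means)."""
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, rounds=100, lr=0.3):
    """Tiny gradient boosting for squared loss: each round fits a stump
    to the residuals of the current ensemble."""
    base = sum(ys) / len(ys)
    stumps = []
    preds = [base] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        s = fit_stump(xs, residuals)
        stumps.append(s)
        preds = [p + lr * s(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Invented example: burnup (MWd/kgU) vs. a normalised dissolution-rate target.
xs = [10, 20, 30, 40, 50, 60]
ys = [0.2, 0.3, 0.5, 0.9, 1.4, 2.1]
model = gradient_boost(xs, ys)
print(model(40))
```

After enough rounds the ensemble reproduces the training points closely, which is the mechanism behind the near-1 training R2 reported for the thesis's XGBoost model (and also why a held-out test set is needed to detect overfitting).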
9

Валидация модели машинного обучения для прогнозирования магнитных свойств нанокристаллических сплавов типа FINEMET : магистерская диссертация / Validation of machine learning model to predict magnetic properties of nanocrystalline FINEMET type alloys

Степанова, К. А., Stepanova, K. A. January 2022 (has links)
В работе была произведена разработка модели машинного обучения на языке программирования Python, а также проведена ее валидация на этапах жизненного цикла. Целью создания модели машинного обучения является прогнозирование магнитных свойств нанокристаллических сплавов на основе железа по химическому составу и условиям обработки. Процесс валидации модели машинного обучения позволяет не только произвести контроль за соблюдением требований, предъявляемых при разработке и эксплуатации модели, к результатам, полученных с помощью моделирования, но и способствует внедрению модели в процесс производства. Процесс валидации включал в себя валидацию данных, в ходе которой были оценены типы, пропуски данных, соответствие цели исследования, распределения признаков и целевых характеристик, изучены корреляции признаков и целевых характеристик; валидацию алгоритмов, применяемых в модели: были проанализированы параметры алгоритмов с целью соблюдения требования о корректной обобщающей способности модели (отсутствие недо- и переобучения); оценку работы модели, благодаря которой был произведен анализ полученных результатов с помощью тестовых данных; верификацию результатов с помощью актуальных данных, полученных из статей, опубликованных с 2010 по 2022 год. В результате валидации модели было показано высокое качество разработанной модели, позволяющее получить оценки качества R2 0,65 и выше. / In this work, a machine learning model was developed in the Python programming language and validated at the stages of the model's life cycle. The purpose of creating the machine learning model is to predict the magnetic properties of Fe-based nanocrystalline alloys from chemical composition and processing conditions. Validation of a machine learning model not only allows control over the requirements imposed, during the model's development and operation, on the results obtained by modeling, but also contributes to the introduction of the model into the production process.
The validation process included: data validation, during which data types and omissions, compliance with the purpose of the study, and the distributions of features and target characteristics were evaluated, and the correlations of features and target characteristics were studied; validation of the algorithms used in the model, whose parameters were analyzed to comply with the requirement for correct generalizing ability (no under- or overfitting); evaluation of the model's performance, in which the obtained results were analyzed using test data; and verification of the results using actual data obtained from articles published from 2010 to 2022. As a result of the model validation, the high quality of the developed model was shown, which makes it possible to obtain quality metrics of R2 0.65 and higher.
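The model-evaluation step, checking that R2 meets the quality threshold while the train/test gap stays small (no gross overfitting), can be sketched as follows; the 0.65 threshold comes from the abstract, while the 0.1 gap limit and the toy data are illustrative assumptions:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination R^2."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def check_generalisation(r2_train, r2_test, min_r2=0.65, max_gap=0.1):
    """Validation rule of thumb in the spirit of the thesis: require the
    R^2 >= 0.65 quality threshold on test data and a small train/test gap
    (i.e. no gross overfitting). The gap limit is an assumed value."""
    return r2_test >= min_r2 and (r2_train - r2_test) <= max_gap

# Toy predicted-vs-true magnetic property values (illustrative only).
y_true = [1.0, 1.5, 2.0, 2.5]
y_pred = [1.1, 1.4, 2.1, 2.4]
r2_test = r2_score(y_true, y_pred)
print(check_generalisation(0.9, r2_test))
```

A model with excellent training R2 but a test R2 below the threshold would fail this check — the pattern the validation process is designed to catch.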
10

Realization of Model-Driven Engineering for Big Data: A Baseball Analytics Use Case

Koseler, Kaan Tamer 27 April 2018 (has links)
No description available.
