  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Molecular Dynamics on a Grand Scale: Towards large-scale atomistic simulations of self-assembling biomolecular systems

Matthew Breeze Unknown Date
To explore progressively larger biomolecular systems, methods to model explicit solvent cheaply are required. In this work, the use of Graphics Processing Units (GPUs), found in commodity video cards, is investigated for solving the constraints, calculating the non-bonded forces, and generating the pair list for the fully constrained three-site SPC water model. The GPU implementation of SETTLE, the SPC constraint-solving algorithm, was shown to be overall 26% faster than a conventional implementation running on a Central Processing Unit (CPU) core. The non-bonded forces were calculated up to 17 times faster than on a CPU core. Using these two approaches together, an overall speed-up of around 4 times was found. The most successful implementation of pair-list generation ran at 38% of the speed of a conventional grid-based implementation on a CPU core. In each investigation the accuracy was shown to be sufficient using a variety of numerical and distributional tests. Thus, the use of GPUs as parallel processors for MD calculations is highly promising. Lastly, a method of calculating a constraint force analytically is presented.
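The grid-based pair-list generation that the abstract uses as its CPU baseline is commonly implemented as a cell list: particles are binned into cells at least one cutoff wide, so each particle is compared only against its own and neighbouring cells rather than all N² pairs. A minimal sketch in plain Python, not the thesis's actual implementation (function name and structure are illustrative):

```python
from collections import defaultdict

def build_pair_list(positions, cutoff):
    """Cell-list pair-list generation: bin particles into cubic cells of
    edge `cutoff`, then test distances only against particles in the same
    or a neighbouring cell."""
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        cells[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)
    pairs = []
    cut2 = cutoff * cutoff
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j:  # count each pair once
                                xi, yi, zi = positions[i]
                                xj, yj, zj = positions[j]
                                r2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
                                if r2 <= cut2:
                                    pairs.append((i, j))
    return sorted(pairs)
```

Mapping exactly this kind of loop onto GPU threads (one thread per cell or per particle) is what makes the approach attractive for commodity video cards.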
2

Numerical-topological solution (adjustment) of non-linear, overdetermined systems of equations

Σαλτογιάννη, Βασιλική 01 February 2013
The motivation for developing this technique was to estimate the characteristic parameters of the magma source of the Santorini volcano during a slow-inflation episode, using geodetic observations. This study presents a numerical-topological methodology for solving (overdetermined) systems of highly non-linear, redundant equations derived from observations of geophysical processes and geodetic data. The adjustment of such systems cannot be based on conventional algebraic least-squares techniques and has instead relied on various numerical inversion techniques. These techniques, however, lead to incomplete solutions: solutions trapped in local minima, with high correlation between certain parameters, and with poor control over the uncertainty of the parameter estimates. To overcome these problems, an alternative numerical-topological, grid-search-based technique in R^N is proposed, inspired by classical ship positioning using lighthouse bearings and by other low-accuracy techniques used in 2-D positioning based on Wi-Fi and the like. The procedure involves two stages: first, the loci of points satisfying each equation of the system are computed; the final solution is then obtained as their common intersection. The strength of the proposed method is that it can incorporate the measurement errors and optimize the solution based on them, it allows the sensitivity of the solution to be assessed, and it fully computes the variance-covariance matrix of the parameters.
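The two-stage procedure described above (compute each observation equation's locus on a grid, then intersect the loci) can be sketched in a few lines; the function name, the tolerance handling, and the 2-D demonstration are illustrative, not the thesis's actual implementation:

```python
def topological_grid_solve(equations, tolerances, grid):
    """For each observation equation f_k(p) = 0, collect the grid points
    where |f_k(p)| <= tol_k (the 'locus' of that equation); the estimated
    solution set is the intersection of all loci."""
    solution = set(grid)
    for f, tol in zip(equations, tolerances):
        locus = {p for p in grid if abs(f(p)) <= tol}
        solution &= locus
    return solution

# Toy 2-D example: two observation equations x + y = 2 and x = y,
# each trusted to within a tolerance of 0.05.
grid = [(x / 10, y / 10) for x in range(21) for y in range(21)]
equations = [lambda p: p[0] + p[1] - 2.0, lambda p: p[0] - p[1]]
solution = topological_grid_solve(equations, [0.05, 0.05], grid)
```

Because each locus is kept as a set of grid cells rather than a single point, widening a tolerance directly widens the solution set, which is how the method exposes the sensitivity of the estimate to observation errors.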
3

Data-Driven Emptying Detection for Smart Recycling Containers

Rutqvist, David January 2018
Waste management is one of the biggest challenges for modern cities, driven by urbanisation and population growth. Smart waste management tries to address this challenge with techniques such as the Internet of Things, machine learning, and cloud computing. By utilising smart algorithms, the time when a recycling container will be full can be predicted. By continuously measuring the filling level of containers and then partitioning the filling-level data between consecutive emptyings, a regression model can be used for prediction. Doing this requires accurate emptying detection. This thesis investigates different data-driven approaches to accurate emptying detection in a setting where the majority of the data are non-emptyings, i.e. suspected emptyings which manual examination has concluded are not actual emptyings. Starting with the currently deployed legacy solution, performance is increased step by step through optimisation and machine learning models. The final solution achieves a classification accuracy of 99.1% and a recall of 98.2% by using a random forest classifier on a set of features based on the filling level over different given time spans, compared with a recall of 50% for the legacy solution. It is concluded that the final solution, with a few minor practical modifications, is feasible for deployment in the next release of the system.
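A rule-based emptying detector in the spirit of the legacy solution described above can be sketched as follows; the thresholds `min_drop` and `near_full` are hypothetical placeholders, not the deployed values, and the thesis's final solution replaces this kind of rule with a random forest over filling-level features:

```python
def detect_emptyings(levels, min_drop=0.3, near_full=0.5):
    """Flag reading i as a suspected emptying when the filling level drops
    by at least `min_drop` between consecutive readings and the container
    held at least `near_full` before the drop (thresholds illustrative)."""
    events = []
    for i in range(1, len(levels)):
        drop = levels[i - 1] - levels[i]
        if drop >= min_drop and levels[i - 1] >= near_full:
            events.append(i)
    return events
```

The thesis's observation is that such fixed thresholds misclassify many borderline cases (the 50% recall), whereas a classifier trained on labelled suspected emptyings can learn where the boundary really lies.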
4

Hyperparameters relationship to the test accuracy of a convolutional neural network

Lundh, Felix, Barta, Oscar January 2021
Machine learning for image classification is a hot topic, and it is increasing in popularity. The aim of this study is therefore to provide a better understanding of convolutional neural network hyperparameters by comparing the test accuracy of convolutional neural network models with different hyperparameter value configurations. The focus of this study is whether the learning process is influenced by which hyperparameter values are used. For the experiments, convolutional neural network models were developed in the programming language Python using the Keras library. The dataset used for this study is CIFAR-10; it includes 60,000 colour images in 10 categories, ranging from man-made objects to different animal species. Grid search is used to instantiate models with varying learning rate and momentum, and width and depth values. Learning rate is only tested in combination with momentum, and width only in combination with depth. Activation functions, convolutional layers, and batch size are tested individually. Grid search is compared against Bayesian optimization to see which technique finds the better learning rate and momentum values. Results illustrate that the impact different hyperparameters have on the overall test accuracy varies. Learning rate and momentum affect the test accuracy greatly; suboptimal values for learning rate and momentum can decrease the test accuracy severely. Activation function, width and depth, convolutional layers, and batch size have a lesser impact on test accuracy. Regarding Bayesian optimization compared to grid search, results show that Bayesian optimization will not necessarily find more optimal hyperparameter values.
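The exhaustive grid search used to instantiate model configurations can be sketched independently of Keras; here `evaluate` stands in for training a model with a given configuration and returning its test accuracy, and the parameter names are illustrative:

```python
import itertools

def grid_search(evaluate, grid):
    """Exhaustive grid search: evaluate every combination of hyperparameter
    values and return the best (highest-scoring) configuration."""
    names = list(grid)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(grid[n] for n in names)):
        cfg = dict(zip(names, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Bayesian optimization replaces the exhaustive `itertools.product` loop with a model of the score surface that proposes promising configurations; as the abstract notes, that does not guarantee a better final configuration than simply trying them all.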
5

FEATURE ENGINEERING TO DEAL WITH NOISY DATA IN SPARSE IDENTIFICATION THROUGH CLASSIFICATION AND REGRESSION PERSPECTIVES

THAYNA DA SILVA FRANCA 15 July 2021
Dynamical systems play a fundamental role in understanding phenomena across many fields of science. Over the last decade, the technological advances achieved throughout years of research have given rise to data-driven strategies that enable the inference of models capable of representing dynamical systems. Moreover, regardless of the sensor types adopted for data acquisition, some degree of noise corruption in the data is to be expected. Generically, the identification task is directly affected by this noise, which can lead to the false discovery of a seemingly generalizable model; in other words, noise corruption may produce an unfaithful mathematical representation of a given system. In this thesis, it is demonstrated how robustness to noise in the identification task can be improved by hybridizing machine learning techniques such as data augmentation, sparse regression, feature selection, feature extraction, information criteria, grid search, and cross-validation. Specifically, from both classification and regression perspectives, the success of the proposed strategy is demonstrated on numerical examples: logistic growth, the Duffing oscillator, the FitzHugh-Nagumo model, the Lorenz attractor, and a Susceptible-Infectious-Recovered (SIR) model of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2).
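The sparse-regression component mentioned above is commonly implemented as sequential thresholded least squares (the core of sparse identification of nonlinear dynamics); a minimal sketch, where the threshold value and the library of candidate terms in the demonstration are illustrative:

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, iterations=10):
    """Sequential thresholded least squares: repeatedly fit by least
    squares, zero out coefficients smaller than `threshold`, and refit
    on the surviving library terms."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iterations):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi
```

On noisy data the thresholding step is what discards spurious library terms, which is exactly where the feature-engineering and cross-validation machinery of the thesis comes in: choosing the threshold and the library well determines whether the recovered model is faithful.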
6

Iterated Grid Search Algorithm on Unimodal Criteria

Kim, Jinhyo 02 June 1997
The unimodality of a function seems a simple concept, but in the Euclidean space R^m, m = 3, 4, ..., it is not easy to define. For a unimodal function there is an easy tool for finding the minimum point. The goal of this project is to formalize and support distinctive strategies that typically guarantee convergence. Support is given both by analytic arguments and by simulation study. Application is envisioned in low-dimensional but non-trivial problems. The convergence of the proposed iterated grid search algorithm is presented along with the results of particular application studies. It has been recognized that derivative-based methods, such as Newton-type methods, are not entirely satisfactory, so a variety of other tools are being considered as alternatives. Many of these have been rejected because of apparent manipulative difficulties. In our current research, we focus on a simple algorithm with guaranteed convergence for unimodal functions, avoiding possible chaotic behavior. Furthermore, in case the loss function to be optimized is not unimodal, we suggest a weaker condition, almost (noisy) unimodality, under which the iterated grid search still finds an estimated optimum point. / Ph. D.
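The core idea of an iterated grid search — evaluate the function on a coarse grid, then repeatedly zoom in around the current best point — can be sketched in one dimension as follows (the grid size and iteration count are illustrative, not the dissertation's choices):

```python
def iterated_grid_search(f, lo, hi, points=11, iterations=20):
    """Iterated grid search for a unimodal function on [lo, hi]:
    evaluate f on a uniform grid, then shrink the bracket to one grid
    step either side of the best point and repeat."""
    for _ in range(iterations):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best = min(grid, key=f)
        # For a unimodal f the minimizer lies within one step of the
        # best grid point, so this bracket always contains it.
        lo, hi = max(lo, best - step), min(hi, best + step)
    return best
```

Each pass shrinks the bracket by a constant factor, which is the source of the guaranteed convergence for unimodal functions; no derivatives are needed, avoiding the failure modes of Newton-type methods.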
7

PREDICTING NET GAME REVENUE USING STATISTICAL MODELING : A seasonal ARIMA model including exogenous variables

Engman, Amanda, Venell, Alva January 2024
Spelbolag AB has a long history in the Swedish market. Its products are all based on randomness, with a predetermined probability of winning. Some of Spelbolag AB's products are stable in sales throughout the year, while others fluctuate with holidays. The company also offers products whose sales are largely influenced by the prize value: higher prize amounts attract more gamblers, and lower prize amounts attract fewer. Through campaigns, the company wishes to enhance interest in its products. To estimate the total revenue from the products, a statistical tool has been used; its predictions are made for different key performance indicators (KPIs), which serve as the foundation for strategic decisions. Because of poor performance, a wish to improve this tool has arisen. This thesis aimed to create an updated statistical tool, based on a time series analysis of weekly net game revenue (NGR). The goal of the time series analysis was to find a statistical model with high forecast accuracy. To find the model with optimal forecast accuracy, a grid search algorithm was used, with the mean squared prediction error (MSPE) and the mean absolute percentage error (MAPE) as decision criteria; the Akaike information criterion (AIC) was also estimated as a goodness-of-fit measure. The thesis work resulted in two different SARIMAX models, both including the same exogenous variables, which were analyzed and tested. The recommended SARIMAX(1, 0, 2)(1, 1, 1)52 model obtained a MAPE of 4.49%.
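The two decision measures used in the model grid search, MSPE and MAPE, can be computed as follows (MAPE expressed in percent, as in the reported 4.49%):

```python
def mspe(actual, forecast):
    """Mean squared prediction error over a hold-out sample."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent; assumes no actual
    value is zero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

In a SARIMAX grid search, each candidate (p, d, q)(P, D, Q)s order is fitted and these measures are computed on held-out forecasts; MSPE penalises large misses quadratically, while MAPE gives the scale-free accuracy figure quoted in the abstract.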
8

The Impact of Insider Trading on Stock Returns: An Application of Threshold Models

蔡禮聰 Unknown Date
This study applies a threshold autoregression model to identify threshold values for insiders' (directors' and supervisors') reported share-transfer ratio, shareholding ratio, and share-pledge ratio, and then analyses, on either side of each threshold, the direction and magnitude of the effect of the proxy variables — margin-loan growth, revenue growth, and the price-earnings ratio — on the return of the weighted stock index. The empirical results are as follows. First, regarding the reported transfer ratio: when it is below the threshold, a clustering effect exists; when it is above the threshold, market momentum shows no significant relationship with the weighted-index return, so investors should be cautious when making investment decisions in this regime. Second, regarding the shareholding ratio: when it is below the threshold, the weighted-index return corrects more strongly for the previous period's revenue growth, suggesting that insiders exploit their informational advantage regarding future revenue and, through share transfers, trigger large corrections of the weighted index. Third, regarding the pledge ratio: whether above or below the threshold, the insiders' pledge ratio cannot be used as a threshold variable to analyse the effect of the price-earnings ratio on the weighted-index return. This test may have failed because the sample is small and the model is affected by extreme values.
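The threshold idea — split the sample at a candidate value of the threshold variable and fit a separate regime on each side, keeping the split that fits best — can be sketched in a deliberately simplified form; this fits only a mean per regime rather than the full threshold autoregression used in the study, and all names are illustrative:

```python
def fit_threshold_model(x, y, candidates):
    """Grid-search a threshold on x: split the sample at each candidate,
    fit a separate mean to y in each regime, and return the threshold
    minimising the total sum of squared errors."""
    def sse(ys):
        if not ys:
            return 0.0
        m = sum(ys) / len(ys)
        return sum((v - m) ** 2 for v in ys)

    best_t, best_sse = None, float("inf")
    for t in candidates:
        low = [yi for xi, yi in zip(x, y) if xi <= t]
        high = [yi for xi, yi in zip(x, y) if xi > t]
        total = sse(low) + sse(high)
        if total < best_sse:
            best_t, best_sse = t, total
    return best_t
```

In the study, x would be an insider ratio (transfer, shareholding, or pledge) and each regime would carry its own regression of index returns on the proxy variables; the grid search over candidate thresholds is the same.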
9

Single and Multiple Emitter Localization in Cognitive Radio Networks

Ureten, Suzan January 2017
Cognitive radio (CR) is often described as a context-intelligent radio, capable of dynamically changing its transmit parameters based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and to estimate the locations of multiple primary transmitters operating simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address the location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first estimates the locations from the generated interference map when no information about the propagation model or its parameters is available. The second approximates the maximum likelihood (ML) estimate of the transmitter locations with the grid search method when the model is known and its parameters are available. The third also requires knowledge of the model parameters, but instead generates samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the highly computationally complex grid search approach. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay-triangulation-based techniques, since reusing an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than the ML estimates, these rough estimates are used to initialize a more accurate algorithm, such as the MCMC technique, to reduce its complexity.
The complexity of ML estimators based on a full grid search is also addressed by various types of iterated grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculation at each grid location. This motivates our investigation of the sum-of-log-normals approximations studied in the literature, in order to select the approximation most appropriate to our model assumptions. As a final extension of this work, we propose our own approximation, based on fitting a distribution to a set of simulated data, and compare it with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient approach that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than that obtained with Fenton-Wilkinson's in many different scenarios.
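The Fenton-Wilkinson moment-matching step described above can be sketched directly: fit a single log-normal to a sum of independent log-normals by matching the mean and variance of the sum. Here `mus` and `sigmas` are the parameters of the summand log-normals (this is the textbook approximation, not the thesis's proposed alternative):

```python
import math

def fenton_wilkinson(mus, sigmas):
    """Return (mu, sigma) of the single log-normal whose mean and
    variance match those of a sum of independent log-normals with
    parameters (mus[i], sigmas[i])."""
    # Mean and variance of the sum (independence => variances add).
    m1 = sum(math.exp(mu + 0.5 * s * s) for mu, s in zip(mus, sigmas))
    var = sum((math.exp(s * s) - 1.0) * math.exp(2.0 * mu + s * s)
              for mu, s in zip(mus, sigmas))
    # Invert the log-normal moment formulas for the fitted parameters.
    sigma2 = math.log(1.0 + var / (m1 * m1))
    mu = math.log(m1) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)
```

The fitted (mu, sigma) then plugs straight into the likelihood at each grid location, which is what makes the approximation cheap enough for a full grid search.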
10

Joinpoint regression analysis of the COVID-19 epidemic curve in Sweden : A descriptive trend analysis of the different regions in Sweden

Bergwall, Sebastian, Tran, Duc January 2023
Since the beginning of the global outbreak, the Swedish Public Health Agency has been closely monitoring the COVID-19 situation. Understanding and mapping the behaviour of the COVID-19 epidemic curve in Sweden is of great interest, and a descriptive analysis may yield additional important insights. This thesis focused on determining when potential trend changes occurred by studying time series of the trend development of COVID-19 incidence rates provided by the Swedish Public Health Agency. Joinpoint regression with grid search was used to analyse each individual region as well as the nation as a whole. Results from the respective analyses were used to examine whether a region notably differed from the national trend with regard to the location and slope of the trend segments. The findings indicated that there are more potential trend changes in the data than were identified, and that several regions appeared to differ from the nation, implying that more research is required.
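A single-joinpoint version of the grid search described above can be sketched as follows: for each candidate joinpoint location, fit a continuous two-segment linear model by least squares and keep the location with the smallest residual sum of squares. The continuous piecewise-linear parameterisation and the candidate set are illustrative simplifications of the full joinpoint procedure:

```python
import numpy as np

def fit_single_joinpoint(t, y, candidates):
    """Grid search for one joinpoint: for each candidate k, fit the
    continuous model y ~ b0 + b1*t + b2*max(t - k, 0) by least squares
    and return the k minimising the residual sum of squares."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    best_k, best_rss = None, np.inf
    for k in candidates:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - k, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k
```

Fitting multiple joinpoints works the same way, with the grid search running over combinations of candidate locations and one `max(t - k_j, 0)` column per joinpoint.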
