
Migration of a State of Charge Algorithm: Migration and Optimization of the State of Charge Algorithm for Nickel-Metal Hydride Batteries

Jansson, Christoffer; Pettersson, Malte. January 2023
The following study was carried out on behalf of Nilar, which manufactures nickel-metal hydride (NiMH) batteries at its production site in Gävle. The current State of Charge (SoC) calculation runs on their Battery Management Unit (BMU) and is implemented in Structured Text for the CODESYS runtime. Nilar wants to move the SoC calculation from the BMU so that it executes on an Interface Control Unit (ICU). The reasoning is twofold: to distribute the SoC computation, since several ICUs are available per Battery Management System (BMS), and to remove the CODESYS dependency in the future. The purpose of this study is to migrate the SoC algorithm implementation to the C programming language so that it can later be executed on the ICU, and then to optimize the algorithm to lower its execution time. The study explores differences in code structure and functionality between the implementations, as well as methods for optimizing the SoC algorithm. The migration was completed without major impact on accuracy. The algorithm was optimized by creating a variant of LU factorization specifically adapted to the given problem, which reduced the algorithm's total execution time by 25%. The new implementations take markedly longer to execute while the battery is charging than while it is discharging, something not observed in the old implementation.
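The thesis' specialized LU variant is not reproduced in the abstract; as a point of reference, a plain Doolittle LU factorization with forward and back substitution, the textbook building block such a solver optimizes, can be sketched as follows (no pivoting, so it assumes a well-conditioned matrix):

```python
def lu_solve(A, b):
    """Solve A x = b via Doolittle LU factorization (no pivoting).

    Generic textbook version for illustration only; the thesis'
    variant exploits structure specific to the SoC problem.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):  # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # back substitution: U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

Since the factorization is done once, repeated solves against new right-hand sides reuse L and U, which is where problem-specific variants typically save time.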

Recommending digital books to children: A comparative study of different state-of-the-art recommendation system techniques

Lundqvist, Malvin. January 2023
Collaborative filtering is a popular technique that uses behavior data, in the form of users' interactions with or ratings of items in a system, to provide personalized item recommendations. This study compares three state-of-the-art recommendation system models that implement this technique, Matrix Factorization, Multi-Layer Perceptron, and Neural Matrix Factorization, using behavior data from a digital book platform for children. The field of recommendation systems is growing, and many platforms can benefit from personalizing the user experience and simplifying platform use. To enable a more detailed comparison and introduce a new take on the models, this study proposes a new way to represent the behavior data given as input to the models: using the Term Frequency-Inverse Document Frequency (TF-IDF) of interactions between users and books, as opposed to the traditional binary representation (positive if there has been any interaction and negative otherwise). Performance is measured by extracting the last book read by each user and evaluating how highly the models would rank that book in recommendations to that user. To assess the models' value for the children's reading platform, they are also compared to the platform's existing recommendation system. The results indicate that the Matrix Factorization model performs best of the three on children's reading behavior data. However, due to the long training process and the larger set of hyperparameters to tune for the other two models, these may not have reached an optimal hyperparameter configuration, which affects the comparison among the three models; this limitation is discussed further in the study. All three models perform significantly better than the current system on the digital book platform. The models using the proposed TF-IDF representation show notable promise, outperforming the binary representation in almost all numerical metrics for all models. These results suggest future research into further ways of representing behavior data as input to these types of models.
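As an illustration of the proposed input representation, raw user-book interaction counts can be reweighted with TF-IDF roughly as follows. The exact TF and IDF variants are not specified in the abstract, so the count-normalized TF and logarithmic IDF below are assumptions:

```python
import math

def tfidf_interactions(counts, n_users):
    """Weight user-book interaction counts with TF-IDF.

    counts: {user: {book: n_interactions}}.  TF is the interaction
    count normalized by the user's total; IDF down-weights books many
    users touched.  Both formula choices are illustrative assumptions.
    """
    # document frequency: how many users interacted with each book
    df = {}
    for books in counts.values():
        for b in books:
            df[b] = df.get(b, 0) + 1
    weighted = {}
    for u, books in counts.items():
        total = sum(books.values())
        weighted[u] = {
            b: (n / total) * math.log(n_users / df[b])
            for b, n in books.items()
        }
    return weighted
```

The resulting nonnegative weights replace the 0/1 entries of the usual interaction matrix before it is fed to the recommendation model.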

Probing effects of organic solvents on paracetamol crystallization using in silico and orthogonal in situ methods

Chewle, Surahit. 08 September 2023
This work investigates the effects of solvent choice on paracetamol crystallization. Polymorphism is the property exhibited by many inorganic and organic molecules of crystallizing in more than one crystal structure. Understanding the factors that influence polymorphism is important because it is responsible for differences in many physicochemical properties, such as stability and solubility; nearly 80% of marketed drugs exhibit polymorphism. We took paracetamol as a model system and developed and employed several methods to understand the influence of small organic solvents on its crystallization. Non-equilibrium molecular dynamics simulations with periodic simulated annealing were used to probe the nature of the precursors of the metastable intermediates occurring in the crystallization process. It was found that the structures of the building blocks of paracetamol crystals are governed by solvent-solute interactions, with the statistically most frequent building blocks in the self-assembly defining the final crystal structure. In situ Raman spectroscopy was combined with a custom-made acoustic levitator to follow crystallization; this setup holds the sample reliably, enabling the investigation of solvent influence while attenuating heterogeneous nucleation and stabilizing other environmental factors. It was established that ethanol is much stronger than methanol in driving paracetamol solutions to their crystal form. The time-resolved Raman crystallization data were processed with a newly developed non-negative matrix factorization (NMF) method based on a custom objective function. Orthogonal time-lapse photography was used in conjunction with NMF, in a novel method named OSANO, to obtain unique and accurate factors pertaining to the spectra and concentrations of the different moieties of paracetamol crystallization present as latent components in the raw data. With this approach, clear-cut, direct evidence of a two-step nucleation mechanism is demonstrated using a bench-top Raman spectrometer.
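The custom objective function and the OSANO coupling are specific to this work, but the underlying NMF step can be illustrated with the classic Lee-Seung multiplicative updates under a plain Frobenius objective, a minimal sketch only:

```python
import random

def nmf(V, r, iters=300, seed=0):
    """Multiplicative-update NMF (Lee-Seung): V ~ W H, all nonnegative.

    Minimal Frobenius-loss sketch; the thesis uses a custom objective
    and couples NMF with orthogonal photography, neither shown here.
    V: m x n list of lists (nonnegative), r: number of latent components.
    """
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    eps = 1e-9
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(r)]
        # W <- W * (V H^T) / (W H H^T)
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H
```

For spectroscopic data, the rows of H play the role of component spectra and the columns of W their time-resolved concentrations, which is what makes NMF attractive for unmixing crystallization intermediates.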

Tail Risk Protection via reproducible data-adaptive strategies

Spilak, Bruno. 15 February 2024
This dissertation shows the potential of machine learning methods for managing tail risk in a non-stationary, high-dimensional setting. To this end, we compare, in a robust manner, data-dependent approaches from parametric and non-parametric statistics with data-adaptive methods. Because these methods need to be reproducible to ensure trust and transparency, we start by proposing a new platform called Quantinar, which aims to set a new standard for academic publications. In the second chapter, we turn to the core subject of the thesis and compare various parametric, local parametric, and non-parametric methods to create a dynamic trading strategy that protects against tail risk in Bitcoin. In the third chapter, we propose a new portfolio allocation method, NMFRB, that handles high dimensions through a dimension reduction technique, convex Non-negative Matrix Factorization, which yields latent, interpretable portfolios that remain diversified out-of-sample. We show in two investment universes that the proposed method outperforms classical machine-learning-based methods such as Hierarchical Risk Parity (HRP) in terms of risk-adjusted returns, and we test the robustness of these results via Monte Carlo simulation. The final chapter combines the previous approaches into a tail-risk protection strategy for portfolios: we extend NMFRB to tail-risk measures, address the non-linear relationships between assets during tail events with a dedicated non-linear latent factor model, and develop a dynamic tail-risk protection strategy that handles the non-stationarity of asset returns using classical econometric models. The strategy successfully reduces large drawdowns and outperforms other modern tail-risk protection strategies such as the Value-at-Risk-spread strategy. We verify these findings with various data-snooping tests.
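NMFRB itself combines convex NMF with risk budgeting; as a much-simplified illustration of the risk-budgeting idea only, an inverse-variance allocation can be sketched as follows (the asset names and the variance-based budget are illustrative assumptions, and the convex-NMF factor construction is omitted entirely):

```python
def inverse_variance_weights(returns):
    """Inverse-variance allocation: w_i proportional to 1/Var(r_i),
    normalized to sum to one.  A toy stand-in for the risk-budgeting
    step; NMFRB budgets risk across convex-NMF latent portfolios and
    extends to tail-risk measures, none of which is shown here.
    returns: {asset: list of periodic returns}.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    inv = {a: 1.0 / var(r) for a, r in returns.items()}
    total = sum(inv.values())
    return {a: v / total for a, v in inv.items()}
```

The design intuition carried over to NMFRB is the same: assets (or latent factors) contributing more risk receive proportionally less capital.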

CP/T violation via triple products in decays of beauty-flavored hadrons

Bensalem, Wafia. 05 1900
Thesis digitized by the Direction des bibliothèques, Université de Montréal. / This thesis comprises four articles, all of which have been published. The main subject is the study, in and beyond the Standard Model of particle physics, of CP/T violation via triple products in decays of hadrons containing a b quark (B mesons or Λb hyperons).
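The triple-product observables at the heart of such analyses are scalar triple products of final-state momenta; a toy calculation of the raw asymmetry A_T over a sample of events might look like the following (note that A_T alone can be mimicked by strong phases, so isolating true CP violation requires comparing it with the conjugate process, which is not shown here):

```python
def triple_product(p1, p2, p3):
    """Scalar triple product p1 . (p2 x p3) of three 3-momenta."""
    cx = (p2[1] * p3[2] - p2[2] * p3[1],
          p2[2] * p3[0] - p2[0] * p3[2],
          p2[0] * p3[1] - p2[1] * p3[0])
    return sum(a * b for a, b in zip(p1, cx))

def at_asymmetry(events):
    """Raw triple-product asymmetry A_T = (N+ - N-) / (N+ + N-),
    where N+/N- count events with positive/negative triple product.
    events: iterable of (p1, p2, p3) momentum triples."""
    pos = sum(1 for e in events if triple_product(*e) > 0)
    neg = sum(1 for e in events if triple_product(*e) < 0)
    return (pos - neg) / (pos + neg)
```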

Competition improves robustness against loss of information

Kolankeh, Arash Kermani; Teichmann, Michael; Hamker, Fred H. 21 July 2015
A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model's response properties to responses recorded from biological organisms. However, since several algorithms have demonstrated some degree of similarity to biological data under the existing criteria, we focus on robustness against loss of information, in the form of occlusions, as an additional constraint for better understanding the algorithmic level of early vision in the brain. We investigate the influence of competition mechanisms on this robustness by comparing four methods employing different competition mechanisms: independent component analysis, non-negative matrix factorization with a sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of these methods is known to be capable of developing receptive fields comparable to those of V1 simple cells. Since directly measuring the robustness of methods with simple-cell-like receptive fields against occlusion is difficult, we measure robustness via classification accuracy on the MNIST handwritten digit dataset: all methods are trained on the MNIST training set and tested on a test set with different levels of occlusion. We observe that methods employing competitive mechanisms are more robust against loss of information, and that the kind of competition mechanism plays an important role: global feedback inhibition, as employed in predictive coding/biased competition, has an advantage over local lateral inhibition learned by an anti-Hebb rule.
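The paper's exact occlusion protocol is not given in the abstract; a simple square-patch occlusion of the kind commonly used to degrade MNIST digits could be sketched as follows (the patch shape and random placement are assumptions):

```python
import math
import random

def occlude(image, frac, seed=0):
    """Return a copy of a square grayscale image (list of rows) with
    a randomly placed square patch covering ~`frac` of the pixels set
    to zero, i.e. a controlled loss of information.  The square shape
    and uniform placement are illustrative assumptions.
    """
    n = len(image)
    side = max(1, round(n * math.sqrt(frac)))  # patch side for ~frac area
    rng = random.Random(seed)
    r0 = rng.randrange(n - side + 1)
    c0 = rng.randrange(n - side + 1)
    out = [row[:] for row in image]
    for r in range(r0, r0 + side):
        for c in range(c0, c0 + side):
            out[r][c] = 0.0
    return out
```

Sweeping `frac` from 0 upward and re-measuring classification accuracy gives the robustness curves that the comparison between competition mechanisms is based on.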

Contribution to the mathematical analysis and to the numerical solution of an inverse elasto-acoustic scattering problem

Estecahandy, Elodie. 19 September 2013
The determination of the shape of an elastic obstacle immersed in water from measurements of the scattered field is an important problem in many technologies, such as sonar, geophysical exploration, and medical imaging. This inverse obstacle problem (IOP) is very difficult to solve, especially numerically, because of its nonlinear and ill-posed character. Moreover, its investigation requires understanding the theory of the associated direct scattering problem (DP) and mastering the corresponding numerical solution methods. The work accomplished here pertains to the mathematical and numerical analysis of the elasto-acoustic DP and of the IOP. More specifically, we have developed an efficient numerical simulation code for wave propagation in this type of media, based on a discontinuous Galerkin (DG) method using higher-order finite elements and curved edges at the interface to better represent the fluid-structure interaction, and we apply it to the reconstruction of objects through a regularized Newton method.
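The regularized Newton method here operates on a PDE-constrained shape-to-field map; as a one-dimensional analogue only, a Tikhonov-damped Newton iteration on a scalar residual looks like this (the toy residual and the fixed damping parameter are illustrative assumptions):

```python
def regularized_newton(f, df, x0, alpha=1e-3, iters=50):
    """Tikhonov-regularized (Gauss-)Newton step for a scalar residual:
    x <- x - df(x) * f(x) / (df(x)**2 + alpha).

    A 1-D cartoon of regularized Newton for ill-posed problems; the
    actual IOP inverts a PDE-based shape-to-field map, where alpha
    tames the ill-conditioned linearized operator.
    """
    x = x0
    for _ in range(iters):
        g = df(x)
        x -= g * f(x) / (g * g + alpha)
    return x
```

The damping term `alpha` is what distinguishes this from plain Newton: it keeps the step bounded where the derivative is small, the scalar analogue of regularizing a nearly singular Jacobian.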

Statistical Inference

Chou, Pei-Hsin. 26 June 2008
In this paper, we investigate the important properties of the three major parts of statistical inference: point estimation, interval estimation, and hypothesis testing. For point estimation, we consider two methods of finding estimators, moment estimators and maximum likelihood estimators, and three ways of evaluating estimators: mean squared error, best unbiased estimators, and sufficiency and unbiasedness. For interval estimation, we consider the general confidence interval, one-sample and two-sample confidence intervals, sample sizes, and finite population correction factors. For hypothesis testing, we consider the theory of testing hypotheses, one-sample and two-sample tests, and three methods of finding tests: the uniformly most powerful test, the likelihood ratio test, and the goodness-of-fit test. Many examples illustrate their applications.
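As a small worked example of the interval-estimation material, the textbook two-sided confidence interval for a normal mean with known standard deviation can be computed as follows (the known-sigma z-interval is just the simplest case covered; t-intervals, two-sample intervals, and finite-population corrections are not shown):

```python
from statistics import NormalDist, mean

def z_confidence_interval(sample, sigma, level=0.95):
    """Two-sided CI for a normal mean with known sigma:
    xbar +/- z_{1-a/2} * sigma / sqrt(n).
    Textbook known-variance case; a sketch, not the paper's notation.
    """
    n = len(sample)
    xbar = mean(sample)
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. ~1.96 for 95%
    half = z * sigma / n ** 0.5
    return xbar - half, xbar + half
```

With sigma unknown, the same formula holds with the sample standard deviation and a Student-t quantile in place of z, which is the usual next step in such a treatment.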

Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms

Vestin, Albin; Strandberg, Gustav. January 2019
Today, a main research field for the automotive industry is active safety. To perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities; this difficulty of evaluating tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect target tracking performance using multiple sensors and dynamic models of different complexity, in order to evaluate real-time methods against estimates obtained from non-causal filtering. Two measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking with only a monocular camera performs poorly, since it cannot measure the distance to objects; a complementary LIDAR sensor improves tracking performance significantly. The dynamic models are shown to have a small impact on tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances: since the sequence can be reversed, the non-causal estimates are propagated from more certain states obtained when the target is closer to the ego vehicle. For multiple-object tracking, we find that correct associations between measurements and tracks are crucial for improving tracking performance with non-causal algorithms.
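The filtering-versus-smoothing idea can be illustrated with a toy one-dimensional random-walk model; the thesis' actual tracker uses multiple sensors and richer dynamic models, so the sketch below only shows the mechanism: a causal Kalman filter followed by a non-causal Rauch-Tung-Striebel pass that refines each estimate using future measurements.

```python
def kalman_filter_1d(zs, q, r, x0=0.0, p0=1.0):
    """Causal 1-D random-walk Kalman filter.
    zs: measurements, q: process noise, r: measurement noise.
    Returns filtered (xs, ps) and predicted (xps, pps) sequences,
    the latter being needed by the smoother."""
    xs, ps, xps, pps = [], [], [], []
    x, p = x0, p0
    for z in zs:
        xp, pp = x, p + q          # predict (random-walk model, F = 1)
        k = pp / (pp + r)          # Kalman gain
        x = xp + k * (z - xp)      # measurement update
        p = (1 - k) * pp
        xps.append(xp); pps.append(pp); xs.append(x); ps.append(p)
    return xs, ps, xps, pps

def rts_smoother_1d(xs, ps, xps, pps):
    """Non-causal Rauch-Tung-Striebel backward pass: refines the
    causal estimates using information from future measurements."""
    n = len(xs)
    sx, sp = xs[:], ps[:]
    for t in range(n - 2, -1, -1):
        c = ps[t] / pps[t + 1]     # smoother gain (F = 1)
        sx[t] = xs[t] + c * (sx[t + 1] - xps[t + 1])
        sp[t] = ps[t] + c * c * (sp[t + 1] - pps[t + 1])
    return sx, sp
```

Early filtered estimates are poor because the filter has seen little data; the smoother revisits them with the benefit of the whole sequence, which is exactly why reversed, non-causal processing yields better reference states for validating causal trackers.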
