About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

Inverse Parameter Estimation using Hamilton-Jacobi Equations / Inversa parameteruppskattningar genom tillämpning av Hamilton-Jacobi ekvationer

Helin, Mikael January 2013 (has links)
In this degree project, a solution on a coarse grid is recovered by fitting a partial differential equation to a few known data points. The PDEs considered are the heat equation and Dupire's equation with their synthetic data, including synthetic data from the Black-Scholes formula. The approach to fitting a PDE is to use optimal control to derive discrete approximations to regularized Hamilton characteristic equations, for which discrete stepping schemes and smoothness parameters are examined. The derived method is tested with a non-parametric numerical implementation, and a few suggestions on possible improvements are given. / I detta examensarbete återskapas en lösning på ett glest rutnät genom att anpassa en partiell differentialekvation till några givna datapunkter. De partiella differentialekvationer med deras motsvarande syntetiska data som betraktas är värmeledningsekvationen och Dupires ekvation inklusive syntetiska data från Black-Scholes formel. Tillvägagångssättet att anpassa en PDE är att med hjälp av optimal styrning härleda diskreta approximationer på ett system av regulariserade Hamilton karakteristiska ekvationer till vilka olika diskreta stegmetoder och parametrar för släthet undersöks. Med en icke-parametrisk numerisk implementation prövas den härledda metoden och slutligen föreslås möjliga förbättringar till metoden.
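As a hedged illustration of the core idea of recovering a PDE coefficient from a few known data points (a toy sketch, not the optimal-control/Hamilton-Jacobi method derived in the thesis), the following Python snippet fits a scalar diffusion coefficient in the heat equation to synthetic measurements by minimizing the data misfit over a coarse candidate grid; the grid sizes, sample locations and true coefficient are all made-up assumptions.

```python
import numpy as np

def heat_forward_euler(kappa, u0, dx, dt, steps):
    """Explicit finite-difference solve of u_t = kappa * u_xx with zero Dirichlet BCs."""
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Synthetic setup: a "true" coefficient generates a few measured points.
nx, dx, dt, steps = 51, 1.0 / 50, 1e-4, 500
x = np.linspace(0.0, 1.0, nx)
u0 = np.sin(np.pi * x)
kappa_true = 0.7
sample_idx = [10, 25, 40]                       # the few known data points
data = heat_forward_euler(kappa_true, u0, dx, dt, steps)[sample_idx]

# Recover kappa by minimizing the misfit over a coarse grid of candidates.
candidates = np.linspace(0.1, 1.5, 141)
misfit = [np.sum((heat_forward_euler(k, u0, dx, dt, steps)[sample_idx] - data) ** 2)
          for k in candidates]
print("recovered kappa:", candidates[int(np.argmin(misfit))])
```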
462

Imbalanced Learning and Feature Extraction in Fraud Detection with Applications / Obalanserade Metoder och Attribut Aggregering för Upptäcka Bedrägeri, med Appliceringar

Jacobson, Martin January 2021 (has links)
This thesis deals with fraud detection in a real-world environment with datasets coming from Svenska Handelsbanken. The goal was to investigate how well machine learning can classify fraudulent transactions and how new additional features affected classification. The models used were EFSVM, RUTSVM, CS-SVM, ELM, MLP, Decision Tree, Extra Trees, and Random Forests. To determine the best results, the Matthews Correlation Coefficient was used as the performance metric, which has been shown to have a medium bias for imbalanced datasets. Each model could deal with highly imbalanced datasets, which is common in fraud detection. The best results were achieved with Random Forest and Extra Trees. The best scores were around 0.4 for the real-world datasets, though the score itself says little in isolation; it is more a testimony to the dataset's separability. These scores were obtained when using aggregated features rather than the standard raw dataset. The recall scores were around 0.88-0.93, with an increase in precision by 34.4%-67%, resulting in a large decrease in false positives. Evaluation results showed a great difference compared to test runs, either a substantial increase or decrease. Two theories as to why are discussed: a great distribution change in the evaluation set, and the sample-size increase (100%) for evaluation, which could have led to the tests not being representative of the performance. Feature aggregation was a central topic of this thesis, with the main focus on behaviour features that can describe patterns and habits of customers. These fell into five categories: sender's fraud history, sender's transaction history, sender's time transaction history, sender's history to receiver, and receiver's history. Out of these, the best performance increase came from the first, which gave the top score; the other datasets did not show as much potential, with most not increasing the results. Further studies need to be done before discarding these features, to be certain they do not improve performance. Together with the data aggregation, a tool (t-SNE) to visualize high-dimensional data was used to great success. With it, an early understanding could be formed of what newly added features would bring to classification. For the best dataset it could be seen that a new sub-cluster of transactions had been created, leading to the belief that classification scores could improve, which they did. Feature selection and PCA-reduction techniques were also studied; PCA showed good results and increased performance, while feature selection showed no conclusive improvements. Over- and under-sampling were used and neither improved the scores, though undersampling could maintain the results, which is interesting when increasing the dataset. / Denna avhandling handlar om att upptäcka bedrägerier i en real-world miljö med data från Svenska Handelsbanken. Målet var att undersöka hur bra maskininlärning är på att klassificera bedrägliga transaktioner, och hur nya attributer hjälper klassificeringen. Metoderna som användes var EFSVM, RUTSVM, CS-SVM, ELM, MLP, Decision Tree, Extra Trees och Random Forests. För evaluering av resultat används Matthews Correlation Coefficient, vilket har visat sig ha smått beroende med hänsyn till obalanserade datamängder. Varje modell har inbyggda värden för att klara av att bearbeta obalanserade datamängder, vilket är viktigt för att upptäcka bedrägerier.
Resultatmässigt visade det sig att Random Forest och Extra Trees var bäst, utan att göra p-test, detta på grund av att dataseten var relativt sett små, vilket gör att små skillnader i resultat ej är säkra. De högsta resultaten var cirka 0.4; det absoluta värdet säger ingenting mer än som en indikation om graden av separation mellan klasserna. De bästa resultaten ficks när nya aggregerade attributer användes och inte standard-datasetet. Dessa resultat hade recall-värden av 0,88-0,93 och för dessa kunde det ses att precision ökade med 34,4% - 67%, vilket ger en stor minskning av False Positives. Evaluation-resultaten hade stor skillnad mot test-resultaten; denna skillnad var antingen en betydande ökning eller minskning. Två anledningar till varför diskuterades: förändring av evaluation-datan mot test-datan eller att storleksökningen (100%) för evaluation har lett till att testerna inte var representativa. Attribut-aggregering var ett centralt ämne, med fokus på beteendemönster för att beskriva kunders vanor. För dessa fanns det fem kategorier: Avsändarens bedrägerihistorik, Avsändarens transaktionshistorik, Avsändarens historik av tid för transaktion, Avsändarens historik till mottagaren och mottagarens historik. Av dessa var den största prestationsökningen från bedrägerihistorik; de andra attributerna hade inte lika positiva resultat, de flesta ökade inte resultaten. Ytterligare mer omfattande studier måste göras innan dessa attributer kan sägas vara givande eller ogivande. Tillsammans med data-aggregering användes t-SNE för att visualisera högdimensionell data med framgång. Med t-SNE kan en tidig förståelse fås för vad man kan förvänta sig av tillagda attributer inom klassificering. För det bästa datasetet kan man se att ett nytt kluster hade skapats, vilket kan tolkas som att datan var mer beskrivande. Där förväntades också resultaten förbättras, vilket de gjorde. Val av attributer och PCA-dimensionsreducering studerades och PCA visade förbättring av resultaten. Over- och under-sampling testades och kunde ej förbättra resultaten, även om undersampling kunde bibehålla resultaten, vilket är intressant om datamängden ökar.
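The Matthews Correlation Coefficient used as the performance metric above has a standard closed form in terms of the binary confusion matrix; a minimal sketch follows, with a made-up, heavily imbalanced toy example rather than the Handelsbanken data.

```python
import numpy as np

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient from the binary confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Heavily imbalanced toy data: 1 = fraud, 0 = legitimate (invented labels).
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 93 + [1, 1] + [1, 1, 1, 0, 0]
print(matthews_corrcoef(y_true, y_pred))
```

scikit-learn's sklearn.metrics.matthews_corrcoef computes the same quantity and can be used as a cross-check.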
463

A Genetic Algorithm for Personnel Scheduling in Vacation Seasons

Fakt, Martin January 2022 (has links)
For workplaces with a preference or need for staffing around the clock, employees commonly work in shifts, which are work sessions that span different parts of the day. The scheduling of these shifts is a multi-objective optimization problem with both hard and soft constraints. The reduction in the available workforce when employees go on vacation makes the problem especially constrained. We describe a method that uses a genetic algorithm to generate shift schedules for teams of employees and for time periods with vacations. The method supports a staffing demand that can be met with one of multiple combinations of shifts. The genetic algorithm features specialized crossovers, together with a repair step aimed at maintaining staffing that fulfils the staffing requirements. A software implementation of the method is evaluated on three real-life problem instances. For two of them, it can produce schedules that are feasible but inferior to those constructed manually by an experienced personnel scheduling professional. Several ideas to improve the program are presented.
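To make the genetic-algorithm ingredients concrete (population, crossover, mutation and a repair step that pushes schedules back toward the staffing demand), here is a minimal self-contained sketch; the shift types, demand figures and the single soft constraint are illustrative assumptions, not the constraints or specialized crossovers used in the thesis.

```python
import random

SHIFTS = ["day", "evening", "night", "off"]
N_EMP, N_DAYS = 6, 7
DEMAND = {"day": 2, "evening": 1, "night": 1}        # required staff per shift, per day

def random_schedule():
    return [[random.choice(SHIFTS) for _ in range(N_DAYS)] for _ in range(N_EMP)]

def repair(schedule):
    """Greedily move employees who are off onto under-staffed shifts."""
    for d in range(N_DAYS):
        for shift, need in DEMAND.items():
            have = [e for e in range(N_EMP) if schedule[e][d] == shift]
            spare = [e for e in range(N_EMP) if schedule[e][d] == "off"]
            while len(have) < need and spare:
                e = spare.pop()
                schedule[e][d] = shift
                have.append(e)
    return schedule

def fitness(schedule):
    """Soft constraint only: penalize a night shift followed by a day shift."""
    penalty = sum(1 for e in range(N_EMP) for d in range(N_DAYS - 1)
                  if schedule[e][d] == "night" and schedule[e][d + 1] == "day")
    return -penalty

def crossover(a, b):
    """One-point crossover over employees."""
    cut = random.randint(1, N_EMP - 1)
    return [row[:] for row in a[:cut]] + [row[:] for row in b[cut:]]

def mutate(schedule, rate=0.05):
    for e in range(N_EMP):
        for d in range(N_DAYS):
            if random.random() < rate:
                schedule[e][d] = random.choice(SHIFTS)
    return schedule

population = [repair(random_schedule()) for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [repair(mutate(crossover(random.choice(parents), random.choice(parents))))
                for _ in range(20)]
    population = parents + children
print("best penalty:", -fitness(max(population, key=fitness)))
```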
464

Mathematical Optimization for the Test Case Prioritization Problem

Felding, Eric January 2022 (has links)
Regression testing is the process of testing software to make sure changes to the software will not change the functionality. With growing test suites, the need to prioritize arises. This thesis explores how to weigh factors such as the number of fails detected, days since latest test case execution, and coverage. The prioritization is done over multiple test systems, software branches, and over many test sessions where the software can change in between. With data provided by an industrial partner, we evaluate different ways to prioritize. The developed mathematical model could not cope with the size of the problem, whereas a simulated annealing approach based on said model proved highly successful. We also found that prioritizing test cases related to recent code changes was effective. / Regressionstestning är processen att testa mjukvara för att säkerställa att ändringar av mjukvaran inte kommer att ändra funktionaliteten. Med växande testsviter uppstår behovet av att prioritera. Det här examensarbetet undersöker hur man väger faktorer som antalet upptäckta underkända testfall, dagar sedan testfallen senast kördes och täckning. Prioriteringen görs över flera testsystem, mjukvarugrenar och över många testsessioner där mjukvaran kan ändras däremellan. Med data från en industriell partner utvärderar vi olika sätt att prioritera. Den utvecklade matematiska modellen kunde inte hantera problemets storlek, medan en simulerad kylningsmetod baserad på denna modell visade sig vara mycket framgångsrik. Vi fann också att prioritering enligt ändringar som gjorts i mjukvaran var effektivt.
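A minimal sketch of the simulated-annealing idea applied to test case prioritization is given below; the per-test scores, weights and position-discounted objective are hypothetical stand-ins for the factors mentioned (fails detected, days since last execution, coverage), not the model developed in the thesis.

```python
import math
import random

# Hypothetical per-test-case scores (not the industrial data from the thesis):
# (recent fail count, days since last execution, coverage fraction)
test_cases = [(3, 10, 0.4), (0, 2, 0.9), (1, 30, 0.2), (5, 1, 0.6), (2, 15, 0.7)]
weights = (0.5, 0.3, 0.2)

def priority(tc):
    return sum(w * v for w, v in zip(weights, tc))

def value(order):
    """Reward putting high-priority tests early: position-discounted sum."""
    return sum(priority(test_cases[t]) / (pos + 1) for pos, t in enumerate(order))

def anneal(n_iter=10_000, t0=1.0, cooling=0.999):
    order = list(range(len(test_cases)))
    best, temp = list(order), t0
    for _ in range(n_iter):
        i, j = random.sample(range(len(order)), 2)
        candidate = list(order)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = value(candidate) - value(order)
        if delta > 0 or random.random() < math.exp(delta / temp):
            order = candidate          # accept uphill moves with decaying probability
        if value(order) > value(best):
            best = list(order)
        temp *= cooling
    return best

print("prioritized order:", anneal())
```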
465

Optimization-Based Methods for Revising Train Timetables with Focus on Robustness

Khoshniyat, Fahimeh January 2016 (has links)
With the increase in the use of railway transport, ensuring robustness in railway timetables has never been more important. In a dense railway timetable even a small disturbance can propagate easily and affect trains' arrival and departure times. In a robust timetable, small delays are absorbed and knock-on effects are prevented effectively. The aim of this thesis is to study how optimization tools can support the generation of robust railway traffic timetables. We address two Train Timetabling Problems (TTP) and for both problems we apply Mixed Integer Linear Programming (MILP) to solve them from a network management perspective. The first problem is how robustness in a given timetable can be assessed and ensured. To tackle this problem, a headway-based method is introduced. The proposed method is implemented in real timetables and evaluated from performance perspectives. Furthermore, the impact of the proposed method on capacity utilization, heterogeneity and the speed of trains is monitored. Results show that the proposed method can improve robustness without imposing major changes in timetables. The second problem addressed in the thesis is how robustness can be assessed and maintained in a given timetable when allocating additional traffic and maintenance slots. Different insertion strategies are studied and their consequences on capacity utilization and on the properties of the timetables are analyzed. Two different insertion strategies are considered: i) simultaneous and ii) stepwise insertion. The results show that inserting the additional trains simultaneously usually results in more optimal solutions. However, solving this type of problem is computationally challenging. We also observed that the existing robustness metrics cannot capture the essential properties of having more robust timetables. Therefore we proposed measuring Channel Width, Channel Width Forward, Channel Width Behind and Track Switching. Furthermore, the experimental analysis of the applied MILP model shows that some cases are computationally hard to solve and there is a need to decrease the computation time. Hence several valid inequalities are developed and their effects on the computation time are analyzed. This thesis contains three papers, which are appended. The results of this thesis are of special interest to railway traffic planners and would support their working process. However, railway traffic operators and passengers also benefit from this study.
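As a hedged illustration of how a MILP can encode a basic timetable constraint such as a minimum headway, here is a toy model written with the PuLP library (an assumed choice; the thesis does not state which modelling tool was used), with made-up departure times.

```python
import pulp

# Toy timetable revision: three trains on one track section, fixed order,
# minimum headway H between consecutive departures, minimize total shift
# from the originally planned departure times (all numbers are invented).
planned = [0, 4, 6]           # planned departure times (minutes)
H = 5                         # required headway (minutes)

prob = pulp.LpProblem("headway_adjustment", pulp.LpMinimize)
t = [pulp.LpVariable(f"t{i}", lowBound=0) for i in range(3)]
dev = [pulp.LpVariable(f"d{i}", lowBound=0) for i in range(3)]

for i in range(3):                            # |t_i - planned_i| <= dev_i, linearized
    prob += t[i] - planned[i] <= dev[i]
    prob += planned[i] - t[i] <= dev[i]
for i in range(2):                            # headway between consecutive trains
    prob += t[i + 1] - t[i] >= H

prob += pulp.lpSum(dev)                       # objective: total timetable perturbation
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([pulp.value(v) for v in t])
```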
466

Improved Statistical Methods for Elliptic Stochastic Homogenization Problems : Application of Multi Level- and Multi Index Monte Carlo on Elliptic Stochastic Homogenization Problems

Daloul, Khalil January 2023 (has links)
In numerical multiscale methods, one relies on a coupling between a macroscopic model and a microscopic model. The macroscopic model does not include the microscopic properties that the microscopic model offers and that are vital for the desired solution. Such microscopic properties include parameters like material coefficients and fluxes, which may vary microscopically in the material. The effective values of these data can be computed by running local microscale simulations while averaging the microscopic data. One desires the effect of the microscopic coefficients on a macroscopic scale, and this can be achieved using classical homogenization theory. One method in homogenization theory is to use local elliptic cell problems to compute the homogenized constants, which results in an error of order λ/R, where λ is the wavelength of the microscopic variations and R is the size of the simulation domain. However, one can greatly improve the accuracy by a slight modification of the elliptic homogenization PDE and the use of a filter in the averaging process, giving much better orders of error. The modification relates the elliptic PDE to a parabolic one, which can be solved and integrated in time to obtain the elliptic PDE's solution. In this thesis I apply the modified elliptic cell homogenization method with a q-th order filter to compute the homogenized diffusion constant in a 2D Poisson equation on a rectangular domain. Two cases were simulated. The diffusion coefficient used in the first case was a deterministic 2D matrix function; in the second case I used a stochastic 2D matrix function, which results in a 2D stochastic differential equation (SDE). In the second case, two methods were used to determine the expected value of the homogenized constants: firstly multi-level Monte Carlo (MLMC) and secondly its generalization, multi-index Monte Carlo (MIMC). The performance of MLMC and MIMC is then compared when used in the homogenization process. In the homogenization process, finite element notation in 2D was used to estimate a solution of the Poisson equation. The grid spatial steps were varied using first-order differences in MLMC (square mesh) and first-order mixed differences in MIMC (which allows for a rectangular mesh).
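A minimal sketch of the multi-level Monte Carlo telescoping estimator E[P_L] = E[P_0] + sum over l of E[P_l - P_(l-1)] is shown below for a toy SDE (geometric Brownian motion discretized with Euler-Maruyama) rather than the elliptic homogenization problem of the thesis; it only illustrates how the fine and coarse levels share the same random increments.

```python
import numpy as np

rng = np.random.default_rng(0)

def level_sample(l, n, T=1.0, mu=0.05, sigma=0.2, s0=1.0):
    """Euler-Maruyama estimates of S_T on a fine grid (2^l steps) and the
    coarsened grid (2^(l-1) steps) driven by the same Brownian increments."""
    nf = 2 ** l
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
    s_fine = s0 * np.ones(n)
    for k in range(nf):
        s_fine += mu * s_fine * dt + sigma * s_fine * dW[:, k]
    if l == 0:
        return s_fine, np.zeros(n)
    s_coarse = s0 * np.ones(n)
    dWc = dW[:, 0::2] + dW[:, 1::2]           # pair up increments for the coarse path
    for k in range(nf // 2):
        s_coarse += mu * s_coarse * (2 * dt) + sigma * s_coarse * dWc[:, k]
    return s_fine, s_coarse

def mlmc_estimate(L=6, n_per_level=20_000):
    """Telescoping sum: E[P_0] plus corrections E[P_l - P_(l-1)] up to level L."""
    total = 0.0
    for l in range(L + 1):
        fine, coarse = level_sample(l, n_per_level)
        total += np.mean(fine - coarse) if l > 0 else np.mean(fine)
    return total

print("MLMC estimate of E[S_T]:", mlmc_estimate())   # exact value: exp(mu*T), about 1.0513
```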
467

A Journey Through the World of Compression with IRS Contracts / En resa genom kompressionens värld med IRS kontrakt

Hjalmarsson, Karl January 2023 (has links)
By participating in the market, a party buys and sells different types of contracts, resulting in a growing collection of contracts. With a large collection of contracts come the hurdles of an increasing operational cost, a harder-to-manage order book, and an increase in counterparty risk. To combat these problems we set out to minimize the size and quantity of contracts by performing what is called a compression. We have looked into three different types of compression methods for interest rate swap contracts. One method is specialized for central clearing, Coupon Blending, and two methods for bilateral clearing, Closed Loops and the Network Simplex Method. By using Monte Carlo simulations, all three methods could be compared to one another and the significant findings established. The clear winner for centrally cleared contracts was Coupon Blending, which could terminate over 92% of the contracts and reduce the total absolute size of the contracts by over 75%. Network Simplex came in as a close second, which could also reduce the total absolute size of the contracts by over 75% but only terminate 86%. Coupon Blending and Network Simplex both had very similar accuracy in their compression. However, Network Simplex performed better at keeping the system's total risk intact. For bilateral clearing, Network Simplex performed the best, whereas the Closed Loops strategy was not an optimized approach. / Genom att delta i den finansiella marknaden köper och säljer en marknadsaktör olika sorters kontrakt, vilket resulterar i att samlingen av kontrakt växer. Med en ständigt växande samling av kontrakt skapas problem som att kostnaden för hantering ökar, att orderbokens hantering blir svårare och en ökad risk för konkurs. För att undvika dessa problem kan man utföra kompression, vilket innebär att försöka reducera kontrakten i antal och storlek. Vi har studerat tre olika typer av kompressionsstrategier för kompression av ränteswappar. Den första strategin är Coupon Blending som är specialiserad för central clearing, medan de två andra, Closed Loops och Network Simplex-metoden, är utvecklade för bilateral clearing. Genom att använda Monte Carlo-simuleringar på alla tre strategier kunde vi dra slutsatser kring deras egenskaper och effektivitet. Den bästa strategin var Coupon Blending som kunde terminera över 92% av alla kontrakt, och samtidigt reducera den totala absoluta storleken på kontrakten med 75%. Network Simplex presterade också bra och kunde reducera den totala absoluta storleken på kontrakten med 75% och terminera 86% av kontrakten. Coupon Blending och Network Simplex hade bägge en liknande noggrannhet, men Network Simplex var något bättre på att hålla systemets totala risk intakt. För bilateral clearing presterade Network Simplex bäst där Closed Loops-strategin inte var tillräckligt optimerad.
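As a very simplified illustration of what compression aims for, the sketch below nets made-up bilateral notionals per counterparty pair, reducing both the number of contracts and the gross notional while preserving each party's net position; real IRS compression such as coupon blending must additionally respect fixed rates, cash-flow dates and maturities, so this is not the thesis's algorithm.

```python
from collections import defaultdict

# Hypothetical bilateral positions (payer, receiver, notional); all numbers are made up.
trades = [("A", "B", 100), ("B", "A", 60), ("A", "C", 50),
          ("C", "A", 50), ("B", "C", 30), ("C", "B", 10)]

net = defaultdict(float)
for payer, receiver, notional in trades:
    key = tuple(sorted((payer, receiver)))
    sign = 1 if (payer, receiver) == key else -1     # orient each pair consistently
    net[key] += sign * notional

# Keep only the non-zero net position per counterparty pair.
compressed = [(a, b, n) if n > 0 else (b, a, -n) for (a, b), n in net.items() if n != 0]
gross_before = sum(n for *_, n in trades)
gross_after = sum(n for *_, n in compressed)
print(f"contracts: {len(trades)} -> {len(compressed)}")
print(f"gross notional: {gross_before} -> {gross_after}")
```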
468

Order Matching Optimization : Developing and Evaluating Algorithms for Efficient Order Matching and Transaction Minimization

Jonsson, Victor, Steen, Adam January 2023 (has links)
This report aimed to develop algorithms for solving the optimization problem of matching buy and sell orders in call auctions while minimizing the number of transactions. The developed algorithms were evaluated based on their execution time and solution accuracy. The study found that the problem was more difficult to solve than initially anticipated, and commercial solvers were inadequate for the task. The data's characteristics were critical to the algorithms' performance, and the lack of specifications for instruments and exchange posed a challenge. The algorithms were tested on a broad range of datasets with different characteristics, as well as real trades of stocks from the Stockholm Stock Exchange. Evaluating the best-performing algorithm became a trade-off between time and accuracy, where the quickest algorithm did not have the highest solution accuracy. Therefore, the importance of these factors should be considered before deciding which algorithm to implement. Eight algorithms were evaluated: four greedy algorithms and four cluster algorithms capable of identifying 2-1 and 3-1 matches. If execution time is the single most crucial factor, the Unsorted Greedy Algorithm should be considered. However, if accuracy is a priority, the Cluster 3-1 & 1-3 Algorithm should be considered, even though it takes longer to find a solution. Ultimately, the report concluded that while no single algorithm can be definitively labeled as the best, the Cluster 2-1 Algorithm strikes the most effective balance between execution time and solution accuracy, while also remaining relatively stable in performance for all test cases. The recommendation was based on the fact that the Cluster 2-1 Algorithm proved to be the quickest of the developed cluster algorithms, and that cluster algorithms were able to find the best solutions for all tested data sets. This study successfully addressed its purpose by developing eight algorithms that solved the given problem and suggested an appropriate algorithm that strikes a balance between execution time and solution quality.
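A hedged sketch of the greedy flavour of the matching problem follows: it makes exact 1-1 matches first and then searches for 2-1 matches (two sell orders filling one buy order); the order quantities are invented and the code is not one of the eight algorithms developed in the report.

```python
from collections import defaultdict

def match_orders(buys, sells):
    """Greedy matching that favours few transactions: exact 1-1 matches first,
    then 2-1 combinations (two sells filling one buy); the rest stays unmatched."""
    transactions = []
    sell_pool = defaultdict(list)
    for i, qty in enumerate(sells):
        sell_pool[qty].append(i)

    unmatched_buys = []
    for b_idx, b in enumerate(buys):             # pass 1: exact 1-1 matches
        if sell_pool[b]:
            transactions.append((b_idx, [sell_pool[b].pop()], b))
        else:
            unmatched_buys.append(b_idx)

    remaining = sorted(i for idxs in sell_pool.values() for i in idxs)
    for b_idx in list(unmatched_buys):           # pass 2: 2-1 matches
        target = buys[b_idx]
        for i in range(len(remaining)):
            for j in range(i + 1, len(remaining)):
                if sells[remaining[i]] + sells[remaining[j]] == target:
                    pair = [remaining[i], remaining[j]]
                    transactions.append((b_idx, pair, target))
                    remaining = [r for r in remaining if r not in pair]
                    unmatched_buys.remove(b_idx)
                    break
            else:
                continue
            break
    return transactions, unmatched_buys, remaining

buys, sells = [100, 250, 80], [100, 150, 100, 80]
print(match_orders(buys, sells))
```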
469

Finita differensapproximationer av tvådimensionella vågekvationen med variabla koefficienter / Finite Difference Approximations of the Two-Dimensional Wave Equation with Variable Coefficients

Bergkvist, Herman January 2023 (has links)
In [Mattson, Journal of Scientific Computing 51.3 (2012), pp. 650–682], summation-by-parts operators were constructed for finite difference approximations of second derivatives with variable coefficients. We successfully apply these operators to the wave equation in two dimensions with discontinuous coefficients, without any special treatment of the discontinuity. More precisely, we investigate (i) the operators' error and order of convergence relative to a "correct" treatment of discontinuities through block decomposition with coupling terms; and (ii) whether very complicated coefficients cause instability or non-physical errors. We show that the jump in wave speed in the simulation occurs a number of grid points away from the jump in the coefficients, where the number of points depends on the order of the operators and the size of the jump in the coefficients. In (i), these two factors, together with the shape of the block and the number of points, have a large impact on both the size of the error and on the method's order of convergence, which varies from about 1 to 2.5. Otherwise, in both (i) and (ii), no major non-physical error or instability occurs, which makes this relatively simple method applicable to complex real-world problems.
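For readers who want a concrete baseline, here is a minimal second-order finite-difference sketch of the two-dimensional wave equation in divergence form with a discontinuous coefficient; it uses a plain half-point-averaged stencil rather than the summation-by-parts operators from Mattson (2012) studied in the thesis, and the grid and coefficient choices are illustrative only.

```python
import numpy as np

# Toy leapfrog solve of u_tt = div(c grad u) on the unit square with a
# discontinuous coefficient (c jumps from 1 to 4 at x = 0.5); zero Dirichlet BCs.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
c = np.where(x[:, None] < 0.5, 1.0, 4.0) * np.ones((n, n))

dt = 0.2 * h / np.sqrt(c.max())                  # CFL-safe time step
u_old = np.exp(-200 * ((x[:, None] - 0.3) ** 2 + (x[None, :] - 0.5) ** 2))
u = u_old.copy()

def laplacian_var(u, c, h):
    """Second-order divergence-form operator with coefficients at half points."""
    out = np.zeros_like(u)
    cxp = 0.5 * (c[1:-1, 1:-1] + c[2:, 1:-1])    # c at (i+1/2, j)
    cxm = 0.5 * (c[1:-1, 1:-1] + c[:-2, 1:-1])
    cyp = 0.5 * (c[1:-1, 1:-1] + c[1:-1, 2:])
    cym = 0.5 * (c[1:-1, 1:-1] + c[1:-1, :-2])
    out[1:-1, 1:-1] = (cxp * (u[2:, 1:-1] - u[1:-1, 1:-1])
                       - cxm * (u[1:-1, 1:-1] - u[:-2, 1:-1])
                       + cyp * (u[1:-1, 2:] - u[1:-1, 1:-1])
                       - cym * (u[1:-1, 1:-1] - u[1:-1, :-2])) / h ** 2
    return out

for _ in range(500):                              # leapfrog time stepping
    u_new = 2 * u - u_old + dt ** 2 * laplacian_var(u, c, h)
    u_old, u = u, u_new

print("max |u| after 500 steps:", np.abs(u).max())
```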
470

Moving in the dark : Mathematics of complex pedestrian flows

Veluvali, Meghashyam January 2023 (has links)
The field of mathematical modelling for pedestrian dynamics has attracted significant scientific attention, with various models proposed from perspectives such as kinetic theory, statistical mechanics, game theory and partial differential equations. Often such investigations are seen as part of a new branch of study in the domain of applied physics, called sociophysics. Our study proposes three models that are tailored to specific scenarios of crowd dynamics. Our research focuses on two primary issues. The first issue is centred around pedestrians navigating through a partially dark corridor that impedes visibility, requiring the calculation of the time taken for evacuation using a Markov chain model. The second issue is posed to analyse how pedestrians move through a T-shaped junction. Such a scenario is motivated by the 2022 crowd-crush disaster that took place in the Itaewon district of Seoul, Korea. We propose a lattice-gas-type model that simulates pedestrians' movement through the grid by obeying a set of rules, as well as a parabolic equation with special boundary conditions. By means of numerical simulations, we investigate a couple of evacuation scenarios by evaluating the mean velocity of pedestrians through the dark corridor, varying both the length of the obscure region and the amount of uncertainty induced by the darkness. Additionally, we propose a model inspired by agent-based modelling and cellular automata that simulates the movement of pedestrians through a T-shaped grid, varying the initial number of pedestrians. We measure the final density and the time taken to reach a steady pedestrian traffic state. Finally, we propose a parabolic equation with special boundary conditions that mimics the dynamics of the pedestrian populations in a T-junction. We solve the parabolic equation using a random walk numerical scheme and compare it with a finite difference approximation. Furthermore, we rigorously prove the convergence of the random walk scheme to a corresponding finite difference scheme approximation of the solution.
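As a toy counterpart to the dark-corridor scenario, the following biased-random-walk sketch estimates a mean evacuation time when the probability of stepping forward drops inside a dark segment; the corridor length, segment position and probabilities are made-up parameters, and the model is far simpler than the Markov chain and lattice-gas models proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def evacuation_steps(length=50, dark=(20, 35), p_light=0.9, p_dark=0.6, n_walkers=2000):
    """Mean number of steps for walkers moving right along a corridor of given
    length; inside the dark segment the probability of stepping forward drops."""
    steps = np.zeros(n_walkers)
    for w in range(n_walkers):
        pos, t = 0, 0
        while pos < length:
            p = p_dark if dark[0] <= pos < dark[1] else p_light
            pos += 1 if rng.random() < p else -1
            pos = max(pos, 0)                    # reflecting wall at the entrance
            t += 1
        steps[w] = t
    return steps.mean()

for p_dark in (0.9, 0.75, 0.6):
    print(f"p_forward in dark = {p_dark}: mean evacuation time ≈ "
          f"{evacuation_steps(p_dark=p_dark):.1f} steps")
```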
