
CFD Simulation of Jet Cooling and Implementation of Flow Solvers in GPU

Hosain, Md. Lokman January 2013
In the rolling of steel into thin sheets, the final step is the cooling of the finished product on the runout table. In this thesis, the heat transfer into a water jet impinging on a hot flat steel plate was studied as the key cooling process on the runout table. The temperature of the plate was kept under the boiling point. Heat transfer due to a single axisymmetric jet with different water flow rates was compared to 3D cases of a single jet and of two jets. RANS modelling in ANSYS Fluent was used with the k-ε turbulence model, in a transient simulation for the axisymmetric model and steady-state simulations for the 3D cases. Two different boundary conditions, constant temperature and constant heat flux, were applied at the surface of the steel plate. The numerical results were consistent between 2D and 3D and compared well with literature data. The time-dependent simulation of the 3D model requires very large computational power, which motivated an investigation of simpler flow solvers running on a GPU platform. A simple 2D Navier-Stokes solver based on the finite volume method, able to simulate flow and heat convection, was written in OpenCL. The standard 2D lid-driven cavity problem was chosen as the validation case and for performance measurement and tuning of the solver.
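The lid-driven cavity case used for validation above is compact enough to sketch directly. Below is a minimal solver in the vorticity-streamfunction formulation, written in plain numpy; it is only an illustration of the test problem, not the thesis's OpenCL finite-volume solver, and the grid size, Reynolds number and time step are assumed example values chosen to respect the explicit scheme's stability limits.

```python
import numpy as np

def lid_driven_cavity(N=41, Re=100.0, steps=5000, dt=0.002, psi_iters=50):
    """2D lid-driven cavity via the vorticity-streamfunction equations:
    lap(psi) = -w,  w_t + u*w_x + v*w_y = nu*lap(w),  lid speed U = 1."""
    h, nu = 1.0 / (N - 1), 1.0 / Re
    w = np.zeros((N, N))      # vorticity, indexed [i, j] = [x, y]
    psi = np.zeros((N, N))    # streamfunction, zero on all walls
    for _ in range(steps):
        for _ in range(psi_iters):        # Jacobi sweeps for lap(psi) = -w
            psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                                      + psi[1:-1, 2:] + psi[1:-1, :-2]
                                      + h * h * w[1:-1, 1:-1])
        # wall vorticity via Thom's formula; the lid (j = N-1) moves with U = 1
        w[:, 0], w[:, -1] = -2 * psi[:, 1] / h**2, -2 * psi[:, -2] / h**2 - 2 / h
        w[0, :], w[-1, :] = -2 * psi[1, :] / h**2, -2 * psi[-2, :] / h**2
        # explicit Euler step for vorticity transport (central differences)
        u = (psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * h)
        v = -(psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * h)
        wx = (w[2:, 1:-1] - w[:-2, 1:-1]) / (2 * h)
        wy = (w[1:-1, 2:] - w[1:-1, :-2]) / (2 * h)
        lap = (w[2:, 1:-1] + w[:-2, 1:-1] + w[1:-1, 2:] + w[1:-1, :-2]
               - 4 * w[1:-1, 1:-1]) / h**2
        w[1:-1, 1:-1] += dt * (nu * lap - u * wx - v * wy)
    return psi, w
```

The benchmark quantity is typically the centerline velocity profile, e.g. `(psi[N//2, 2:] - psi[N//2, :-2]) / (2*h)`, compared against the classical reference data of Ghia et al.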

Analysis and implementation of an efficient solver for large-scale simulations of neuronal systems

The, Matthew January 2013
Numerical integration methods exploiting the characteristics of neuronal equation systems were investigated. The main observations were a high stiffness and a quasi-linearity of the systems. The latter allowed for decomposition into two smaller systems by using a block-diagonal Jacobian approximation. The popular backward differentiation formula (BDF) methods showed performance degradation for this approach in the first experiments. Linearly implicit peer methods (PeerLI), a new class of methods, did not show this degradation. Parameters for PeerLI were optimized experimentally, and the performance of the methods was then compared to BDF. Models were simulated both in Matlab and in NEURON, a neuron modelling package. For small models PeerLI was competitive with BDF, especially with a block-diagonal Jacobian. In NEURON, the performance with the block-diagonal Jacobian no longer degraded for BDF but instead degraded for PeerLI, especially for large models. With a full Jacobian PeerLI was competitive with BDF, but with a block-diagonal Jacobian an increase of about 50% in simulation time was seen. Overall, the PeerLI methods were competitive for certain problems but did not give the desired performance gain with a block-diagonal Jacobian for large problems. There is, however, still considerable room for improvement, since the parameters were only determined experimentally and tuned to small problems.
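The effect of the block-diagonal Jacobian approximation is already visible in a single linearly implicit Euler step, a much simpler relative of the PeerLI methods: the linear system (I - hJ)Δy = hf(y) decouples so that each diagonal block is solved independently. A numpy sketch under that simplification (illustrative only, not the thesis's methods; `f` and `jac_blocks` are assumed user-supplied callables):

```python
import numpy as np

def semi_implicit_euler_step(f, jac_blocks, y, h):
    """One linearly implicit (Rosenbrock-Euler) step with a block-diagonal
    Jacobian: solve (I - h*J_k) dy_k = h*f_k separately for each block."""
    fy = f(y)
    y_new = np.empty_like(y)
    start = 0
    for Jk in jac_blocks(y):          # one square Jacobian block per subsystem
        n = Jk.shape[0]
        sl = slice(start, start + n)
        A = np.eye(n) - h * Jk        # small block system
        y_new[sl] = y[sl] + np.linalg.solve(A, h * fy[sl])
        start += n
    return y_new
```

With two blocks of sizes n₁ and n₂, the cost of a dense solve per step drops from O((n₁+n₂)³) to O(n₁³ + n₂³), which is the gain the decomposition is after.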

Computational methods to estimate error rates for peptide identifications in mass spectrometry-based proteomics

Liang, Xiao January 2013
In the field of proteomics, tandem mass spectrometry is the core technology, promising to identify peptide components within complex mixtures on a large scale. Currently, the bottleneck is reducing error rates and assigning accurate statistical estimates to peptide identifications. In this work, we introduce techniques for identifying chimeric spectra, where two or more precursor ions with similar mass and retention time are co-fragmented and sequenced by the MS/MS instrument. Based on this, we analyze the factors that lead to high identification error rates. We show that chimeric spectra correlate strongly with ranking scores and can reduce the number of positive identifications. Additionally, we address the problem of assigning a posterior error probability (PEP) to each individual peptide-spectrum match (PSM) obtained via search engines. This problem is computationally more difficult than estimating the error rate associated with a large collection of PSMs, such as the false discovery rate (FDR). Existing methods rely on parametric or semiparametric models of the underlying score distribution as a prior assumption. We provide a kernel logistic regression procedure without any explicit assumptions about the score distribution. Based on an appropriate positive definite Gaussian kernel, the resulting PEP estimate proves robust, achieving a close correspondence between PEP-derived q-values and FDR-derived q-values. Furthermore, at least 200 additional PSMs are accepted as significant when the threshold is set on PEP-derived q-values rather than FDR-derived q-values. Finally, we show that this kernel logistic regression method is well established in the statistics literature and can produce accurate PEP estimates for different types of PSM score functions and data.
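The correspondence between PEPs and q-values used above has a simple computational form: the q-value of the k-th best PSM can be estimated as the mean PEP of the k best PSMs, i.e. the expected fraction of incorrect matches in that accepted set. A small numpy sketch of the conversion (an illustration, not the thesis code):

```python
import numpy as np

def q_values_from_peps(peps):
    """Estimate q-values from per-PSM posterior error probabilities:
    the q-value of the k-th best PSM is the mean PEP of the k best."""
    peps = np.asarray(peps, dtype=float)
    order = np.argsort(peps)                    # best (lowest PEP) first
    running_mean = np.cumsum(peps[order]) / np.arange(1, peps.size + 1)
    q = np.empty_like(running_mean)
    q[order] = running_mean                     # map back to input order
    return q
```

For instance, `q_values_from_peps([0.01, 0.50, 0.02])` returns q-values of 0.01, 0.1767 and 0.015, so thresholding at q ≤ 0.05 accepts the first and third PSM.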

A comparison between finite difference and binomial methods for solving American single-stock options

Eriksson, Alexander January 2013
In this thesis, we compare four different finite-difference solvers with a binomial solver for pricing American options, with special emphasis on the accuracy achievable under computational time constraints. The four finite-difference solvers are: an operator splitting method suggested by S. Ikonen and J. Toivanen; a boundary projection method suggested by M. Brennan and E. Schwartz; projected successive over-relaxation; and a second-order-accurate operator splitting method known as Peaceman-Rachford. The binomial method is a modified variant employing an analytical final step, as suggested by M. Broadie and J. Detemple. The model problem is an American put option, and we empirically examine the effects of the relevant numerical parameters on the quality of the solutions. For the finite-difference methods we use both a Crank-Nicolson discretization and a fully implicit, second-order-in-time discretization. We conclude that the operator splitting method suggested by S. Ikonen and J. Toivanen is the alternating direction implicit scheme known as the Douglas-Rachford algorithm. We also conclude that the accuracy of the Peaceman-Rachford algorithm degrades to first order for the American option problem. Of the finite-difference methods tried, the Douglas-Rachford algorithm gives the highest accuracy under computational time constraints. It does, however, not outperform the modified binomial method.
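For orientation, the binomial baseline is short enough to state in full. The sketch below is a plain Cox-Ross-Rubinstein American put pricer in numpy; the Broadie-Detemple modification evaluated in the thesis additionally replaces the final time step with an analytical Black-Scholes value, which is omitted here.

```python
import numpy as np

def american_put_binomial(S0, K, r, sigma, T, n):
    """Plain CRR binomial tree for an American put (no smoothing step)."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u                              # down factor
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)
    j = np.arange(n + 1)
    V = np.maximum(K - S0 * u**j * d**(n - j), 0.0)   # terminal payoffs
    for m in range(n - 1, -1, -1):           # backward induction
        j = np.arange(m + 1)
        S = S0 * u**j * d**(m - j)
        V = disc * (p * V[1:] + (1 - p) * V[:-1])     # continuation value
        V = np.maximum(V, K - S)                      # early-exercise check
    return V[0]
```

A call such as `american_put_binomial(100, 100, 0.05, 0.2, 1.0, n=1000)` prices a one-year at-the-money put; doubling `n` until the price stabilizes exposes the method's characteristic oscillating convergence, which is precisely what the analytical final step is designed to damp.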

Numerical and experimental investigation of the effect of geometry modification on the aerodynamic characteristics of a NACA 64(2)-415 wing

Ramesh, Pradeep January 2013
The objective of the thesis is to study the effect of geometry modifications on the aerodynamic characteristics of a standard airfoil (NACA series). The airfoil was chosen for a high-aspect-ratio wing and Reynolds numbers in the range 10⁶-10⁷ (realistic conditions for flight and naval applications). Experimental and numerical investigations were carried out in collaboration between KTH-CTL and Schlumberger. The experiments were conducted at NTNU and funded by Schlumberger. The numerical investigation used the massively parallel unified continuum adaptive finite element solver "Unicorn" and the computing resources at KTH-CTL. The numerical results are validated against the experiments and against experimental results in the literature, and possible discrepancies are analyzed and discussed in terms of the numerical method. A further aim of the thesis is to develop and implement new modules for the Unicorn solver suited to aerodynamic applications, which also builds familiarity with the numerical methods and the computational framework.

Development, Implementation, Optimization and Performance Analysis of Matrix-Vector Multiplication on Eight-Core Digital Signal Processor

Muradov, Feruz January 2013
This thesis aims at implementing sparse matrix-vector multiplication on an eight-core Digital Signal Processor (DSP) and giving insights into how to optimize the multiplication on the DSP to achieve high energy efficiency. We used two sparse matrix formats: Compressed Sparse Row (CSR) and Block Compressed Sparse Row (BCSR). We carried out loop-unrolling optimization of the naive algorithm. In addition, we implemented register-blocked and cache-blocked sparse matrix-vector multiplication to optimize the naive algorithm. The computational performance improvement with the loop-unrolling technique was promising (≈12%). With this optimization, we observed a decrease in power usage (0.3 W) for a matrix size of 600 and an increase in power usage (1.2 W) for larger matrices. The register-blocked algorithm proved to be the most efficient technique on the DSP. With this algorithm, we increased performance by a factor of six compared to the naive algorithm while retaining low power consumption (≈14 W). Cache-blocked sparse matrix-vector multiplication is known to be well suited to architectures with coherent caches. However, because the DSP does not support cache coherency, this method did not show a large improvement in computational performance. In fact, its power consumption was higher than that of the more effective approaches, i.e. register-blocked multiplication and the loop-unrolled naive algorithm. In conclusion, we found that the DSP delivers low power consumption, excellent computational performance and energy efficiency when the register-blocked sparse matrix-vector multiplication technique is used.
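For reference, the naive CSR kernel that the unrolled and blocked variants build on can be sketched in a few lines (plain Python to show the data layout; the thesis targets the DSP, not Python):

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Naive CSR sparse matrix-vector product y = A @ x.
    values/col_idx list the nonzeros row by row; row_ptr[i]:row_ptr[i+1]
    is the slice of nonzeros belonging to row i."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

BCSR replaces single entries with small dense blocks, so the inner loop becomes a tiny dense matrix-vector product whose operands can be held in registers; that is what register blocking exploits.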

Lattice Boltzmann method for two immiscible components

Dabbaghitehrani, Maryam January 2013
No description available.

Multi-scale methods for wave propagation in heterogeneous media

Holst, Henrik January 2009
Multi-scale wave propagation problems are computationally costly to solve by traditional techniques, because the smallest scales must be represented over a domain determined by the largest scales of the problem. We have developed new numerical methods for multi-scale wave propagation in the framework of heterogeneous multi-scale methods. The numerical methods couple simulations on macro and micro scales, with data exchange between models of different scales. With the new method we are able to consider a general class of problems, including some where a homogenized equation is unknown. We show that the complexity of the new method is significantly lower than that of traditional techniques. Numerical results are presented for problems in one, two and three dimensions, and for finite and long times. We also analyze the method, in one and several dimensions and for finite time, using Fourier analysis.
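To make the macro-micro coupling concrete, here is a deliberately simplified 1D sketch in the HMM spirit: a coarse solver for u_tt = (c(x)u_x)_x that, instead of requiring a known homogenized equation, estimates an effective coefficient at each coarse cell edge from a local micro-scale computation. In 1D that local step reduces to a harmonic average; this illustrates the framework only, not the thesis's method, and `c_micro` is assumed to be defined slightly outside [0, L].

```python
import numpy as np

def effective_c(c_micro, x, eta):
    """Micro problem, radically simplified: sample the fine-scale
    coefficient on a window of width 2*eta and return its harmonic
    mean (the 1D homogenized value)."""
    xs = np.linspace(x - eta, x + eta, 64)
    return 1.0 / np.mean(1.0 / c_micro(xs))

def hmm_wave_1d(c_micro, L=1.0, N=200, T=0.5, eta=0.01):
    """Coarse leapfrog for u_tt = (c u_x)_x with effective edge coefficients."""
    dx = L / N
    edges = np.arange(N + 1) * dx
    ce = np.array([effective_c(c_micro, xe, eta) for xe in edges])
    dt = 0.5 * dx / np.sqrt(ce.max())               # CFL-safe time step
    x = (np.arange(N) + 0.5) * dx                   # cell centers
    u = np.exp(-200.0 * (x - 0.5 * L) ** 2)         # initial pulse
    u_prev = u.copy()                               # zero initial velocity
    for _ in range(int(T / dt)):
        ux = np.diff(np.concatenate(([u[0]], u, [u[-1]]))) / dx  # Neumann walls
        u, u_prev = 2 * u - u_prev + dt**2 * np.diff(ce * ux) / dx, u
    return x, u
```

The point of the construction is that the macro solver never sees the oscillatory c(x) directly; it only queries the micro model for local effective data, which is what allows the macro grid to be much coarser than the fine scale.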

Novel Hessian approximations in optimization algorithms

Berglund, Erik January 2022
There are several benefits of taking the Hessian of the objective function into account when designing optimization algorithms. Compared to strictly gradient-based algorithms, Hessian-based algorithms usually require fewer iterations to converge; they are generally less sensitive to parameter tuning and can better handle ill-conditioned problems. Yet they are not universally used, owing to the difficulties of adapting them to various demanding settings. This thesis deals with Hessian-based optimization algorithms for large-scale, distributed and zeroth-order problems. For the large-scale setting, we contribute a new way of deriving limited-memory quasi-Newton methods, which we show can achieve better results than traditional limited-memory quasi-Newton methods with less memory for some logistic and linear regression problems. For the distributed setting, we analyze how the error of a Newton step is affected by the condition number and by the number of iterations of a consensus algorithm based on averaging. We show that the number of iterations needed to solve a quadratic problem with relative error less than ε grows logarithmically with 1/ε and with the condition number of the Hessian of the centralized problem. For the zeroth-order setting, we exploit the fact that a finite-difference estimate of the directional derivative acts as an approximate sketching technique, and use this to propose a zeroth-order extension of a sketched Newton method developed for large-scale problems. With this extension, we address the combined challenge of large-scale, zeroth-order problems.
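As background, the traditional limited-memory quasi-Newton baseline referred to above is L-BFGS, whose inverse-Hessian action is computed by the standard two-loop recursion over the m most recent curvature pairs (s, y). A numpy sketch of that classical recursion (the thesis's new derivations are not reproduced here):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: apply the inverse Hessian approximation
    built from stored pairs s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k
    (oldest first in the lists) to the current gradient."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    if s_list:                                # scaled identity as initial H_0
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
        q *= gamma
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return -q                                 # quasi-Newton descent direction
```

Storing only m vector pairs gives O(mn) memory and work per iteration, which is what makes the approach viable in the large-scale setting.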

Decomposition Methods for Combinatorial Optimization

Ngulo, Uledi January 2021
This thesis contributes research in the field of combinatorial optimization. Problems within this field often possess special structures that allow them to be decomposed into more easily solved subproblems, which can be exploited in solution methods; such structures appear frequently in applications. We contribute both research on the development of decomposition principles and research on applications. The thesis consists of an introduction and three papers. In Paper I, we develop a Lagrangian meta-heuristic principle, founded on a primal-dual global optimality condition for discrete and non-convex optimization problems. This condition characterizes (near-)optimal solutions in terms of near-optimality and near-complementarity measures for Lagrangian relaxed solutions. The meta-heuristic principle amounts to constructing a weighted combination of these measures, thus creating a parametric auxiliary objective function (a close relative of a Lagrangian function), and embedding a Lagrangian heuristic in a search procedure over the space of weight parameters. We illustrate and assess the principle by applying it to the generalized assignment problem and the set covering problem. Our computational experience shows that the meta-heuristic extension of a standard Lagrangian heuristic can significantly improve the solution quality. In Paper II, we study the duality gap for set covering problems. Such problems sometimes have large duality gaps, which make them computationally challenging. The duality gap is dissected with the purpose of understanding its relationship to problem characteristics, such as problem shape and density. The means for doing this is the above-mentioned optimality condition, which decomposes the duality gap into terms describing near-optimality in a Lagrangian relaxation and near-complementarity in the relaxed constraints. We analyse these terms for numerous problem instances, including some large real-life instances, and conclude that when the duality gap is large, the near-complementarity term is typically large and the near-optimality term small. The large violation of complementarity is due to extensive over-coverage. These observations have implications for the design of solution methods, especially for the design of core problems. In Paper III, we study a bi-objective covering problem stemming from a real-world application: the design of camera surveillance systems for large-scale outdoor areas. It is prohibitively costly to surveil the entire area, so it is relevant to present a decision-maker with trade-offs between total cost and the portion of the area that is surveilled. The problem is stated as a set covering problem with two objectives, describing cost and the portion of the covering constraints that are fulfilled. Finding the Pareto frontier for these objectives is very computationally demanding, and we therefore develop a method for finding a good approximate frontier in reasonable computing time. The method is based on the ε-constraint reformulation, an established heuristic for set covering problems, and subgradient optimization.
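The Lagrangian machinery recurring in Papers I and II starts from relaxing the covering constraints of min cᵀx subject to Ax ≥ 1, x binary, and maximizing the resulting dual by subgradient steps. A compact numpy sketch of that standard scheme (an illustration of the textbook method, not the papers' meta-heuristic; `ub` is any known upper bound, e.g. from a greedy cover):

```python
import numpy as np

def set_cover_lagrangian_dual(A, c, ub, iters=200, theta=1.0):
    """Subgradient maximization of the Lagrangian dual of set covering,
    with the covering constraints Ax >= 1 relaxed by multipliers u >= 0.
    Returns the best lower bound on the optimal cover cost found."""
    m, n = A.shape
    u = np.zeros(m)
    best_lb = -np.inf
    for _ in range(iters):
        red = c - A.T @ u                     # reduced costs of the columns
        x = (red < 0).astype(float)           # relaxed minimizer: take negatives
        lb = red[red < 0].sum() + u.sum()     # Lagrangian dual value L(u)
        best_lb = max(best_lb, lb)
        g = 1.0 - A @ x                       # subgradient: violation of Ax >= 1
        norm2 = g @ g
        if norm2 == 0.0:                      # relaxed solution already feasible
            break
        u = np.maximum(0.0, u + theta * (ub - lb) / norm2 * g)
    return best_lb
```

The residual gap between `ub` and `best_lb` upper-bounds the duality gap studied in Paper II; the near-complementarity term there measures how much the relaxed solution over-covers rows with positive multipliers.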
