211

On the MSE Performance and Optimization of Regularized Problems

Alrashdi, Ayed 11 1900 (has links)
The amount of data measured, transmitted/received, and stored has increased dramatically in recent years; today, we live in the world of big data. Fortunately, in many applications we can take advantage of structures and patterns in the data to overcome the curse of dimensionality. The best-known structures include sparsity, low-rankness, and block sparsity. They arise in a wide range of applications such as machine learning, medical imaging, signal processing, social networks, and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function, which gives rise to regularized inverse problems, where the process of reconstructing the structured signal is modeled as a regularized problem. This thesis focuses on finding the optimal regularization parameter for such problems, including ridge regression, LASSO, square-root LASSO, and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has recently been used to precisely predict performance errors.
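As a minimal illustration of the tuning problem this abstract describes, the sketch below sweeps the ridge-regression regularization parameter and picks the value that minimizes the empirical MSE of the recovered signal; the problem sizes and noise level are assumptions for illustration, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 400                                 # unknowns, measurements (assumed sizes)
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # i.i.d. Gaussian measurement matrix
y = A @ x_true + 0.5 * rng.standard_normal(m)   # noisy observations

def ridge(A, y, lam):
    """Closed-form ridge solution: argmin ||y - Ax||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Sweep the regularization parameter and track the MSE of the estimate
lams = np.logspace(-3, 2, 50)
mses = [np.mean((ridge(A, y, lam) - x_true) ** 2) for lam in lams]
print(f"MSE-optimal lambda ~ {lams[int(np.argmin(mses))]:.3g}")
```

In practice the MSE cannot be evaluated this way because x_true is unknown; the point of the thesis's CGMT-based analysis is to predict the MSE theoretically, which is what makes optimal tuning possible without oracle knowledge.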
212

Investigation of the applicability of the lattice Boltzmann method to free-surface hydrodynamic problems in marine engineering

Cao, Weijin 08 April 2019 (has links)
The numerical simulation of free-surface flows for marine engineering applications is a very challenging issue in the field of computational fluid dynamics (CFD). In this thesis, we propose a solution that uses the regularized lattice Boltzmann method (RLBM) with a volume-of-fluid (VOF) based single-phase free-surface lattice Boltzmann (LB) model, and we investigate its feasibility and reliability. Theoretical insights into the lattice Boltzmann method (LBM) are given first, through the Hermite expansion and the Chapman-Enskog analysis. From this perspective, the idea of the RLBM is summarized as the Hermite regularization of the distribution functions. On the test cases of the Taylor-Green vortex and the lid-driven cavity flow, the RLBM is verified to have second-order accuracy and improved stability. The adopted free-surface model is then implemented in the RLBM and validated by simulating a viscous standing wave and a dam-break flow. It is shown that the regularization not only strongly stabilizes the calculation by reducing spurious pressure oscillations, which is very beneficial for obtaining accurate free-surface motions, but also does not introduce any extra numerical dissipation. Furthermore, a new reconstruction method for the distribution functions at the free surface is proposed. The resulting model is more consistent with the RLBM, and it provides an effective way of simulating high-Reynolds-number free-surface flows in marine engineering.
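A minimal sketch of the Hermite regularization step on a D2Q9 lattice is given below. It follows the standard regularized-collision idea the abstract summarizes (keep only the second-order Hermite projection of the non-equilibrium populations); it is not the thesis's implementation, and the free-surface treatment, streaming, and collision steps are omitted.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, speed of sound squared
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1.0 / 3.0

def equilibrium(rho, u):
    """Second-order BGK equilibrium populations."""
    cu = c @ u
    return rho * w * (1 + cu / cs2 + cu**2 / (2 * cs2**2) - (u @ u) / (2 * cs2))

def regularize(f):
    """Replace the non-equilibrium part of f by its projection onto the
    second-order Hermite polynomial, discarding higher-order (ghost) modes."""
    rho = f.sum()
    u = (f @ c) / rho
    fneq = f - equilibrium(rho, u)
    Pi = np.einsum('i,ia,ib->ab', fneq, c, c)            # 2nd-order neq moment
    H2 = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)  # Hermite tensor
    return equilibrium(rho, u) + w / (2 * cs2**2) * np.einsum('iab,ab->i', H2, Pi)
```

Filtering the populations through this projection before collision is what suppresses the spurious pressure oscillations mentioned in the abstract, since the discarded higher-order modes carry no hydrodynamic information.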
213

Solving an inverse problem for an elliptic equation using a Fourier-sine series.

Linder, Olivia January 2019 (has links)
This work is about solving an inverse problem for an elliptic equation. An inverse problem is often ill-posed, which means that a small measurement error in the data can yield a severely perturbed solution. Regularization is a way to make an ill-posed problem well-posed and thus solvable. Two important tools for determining whether a problem is well-posed are norms and convergence. With the help of these concepts, the error of the regularized function can be calculated. The error between this function and the exact function depends on two error terms. By solving the problem with an elliptic equation, a linear operator is evaluated. This operator maps a given function to another function, both of which can be found in the solution of the problem with an elliptic equation. The operator can be seen as a mapping from the given function's Fourier-sine coefficients onto the other function's Fourier-sine coefficients, since these functions are completely determined by their Fourier-sine series. The regularization method in this thesis uses a chosen number of Fourier-sine coefficients of the function and sets the rest to zero. This regularization method is first illustrated for a simpler problem with Laplace's equation, which can be solved analytically, and thereby an explicit parameter choice rule can be given. The goal of this work is to show that the considered method is a regularization of the linear operator that is evaluated when the problem with an elliptic equation is solved. The tests in Chapters 3 and 4 illustrate the ill-posedness of the inverse problem and show that the method does behave like a regularization. The tests also show how many Fourier-sine coefficients should be considered in the regularization in different cases to obtain a good approximation.
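The method described here (keep the first N Fourier-sine coefficients, zero the rest) is a spectral cut-off; a minimal sketch follows. The uniform sampling grid, trapezoid quadrature, and the domain (0, pi) are assumptions for illustration.

```python
import numpy as np

def sine_coefficients(g, M):
    """Approximate the first M Fourier-sine coefficients of g,
    sampled uniformly on (0, pi), by the trapezoid rule."""
    x = np.linspace(0, np.pi, len(g))
    k = np.arange(1, M + 1)
    return (2 / np.pi) * np.trapz(np.sin(np.outer(k, x)) * g, x, axis=1)

def spectral_cutoff(g_noisy, N, n_eval=400):
    """Regularize by keeping the first N sine coefficients and zeroing the rest."""
    b = sine_coefficients(g_noisy, N)
    x = np.linspace(0, np.pi, n_eval)
    return x, np.sin(np.outer(x, np.arange(1, N + 1))) @ b
```

Choosing N trades the propagated data error (which grows with N) against the truncation error (which shrinks with N); balancing the two is exactly the parameter choice rule the thesis derives for the Laplace case.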
214

Ill-posedness of parameter estimation in jump diffusion processes

Düvelmeyer, Dana, Hofmann, Bernd 25 August 2004 (has links)
In this paper, we consider as an inverse problem the simultaneous estimation of the five parameters of a jump diffusion process from return observations of a price trajectory. We show that ill-posedness phenomena occur in this parameter estimation problem, because the forward operator fails to be injective and small perturbations in the data may lead to large changes in the solution. We illustrate the instability effect by a numerical case study. To overcome the difficulty caused by ill-posedness, we use a multi-parameter regularization approach that finds a trade-off between a least-squares approach based on empirical densities and a fitting of semi-invariants. In this context, a fixed point iteration is proposed that provides good results for the example considered in the case study.
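The non-injectivity of the forward operator can be made concrete with a small sketch. Assuming a Merton-type jump diffusion with Gaussian jumps (an assumption for illustration; the paper's exact model may differ), the code below exhibits two distinct five-parameter vectors whose log-returns share the same first four cumulants (semi-invariants), so a semi-invariant fit alone cannot distinguish them.

```python
import numpy as np

def jd_cumulants(mu, sigma, lam, mu_j, sigma_j, dt=1.0):
    """First four cumulants of a jump-diffusion log-return over dt:
    Brownian part plus compound-Poisson part with Gaussian jumps
    (cumulants of independent parts add)."""
    m2 = mu_j**2 + sigma_j**2                            # E[J^2]
    m3 = mu_j**3 + 3 * mu_j * sigma_j**2                 # E[J^3]
    m4 = mu_j**4 + 6 * mu_j**2 * sigma_j**2 + 3 * sigma_j**4  # E[J^4]
    return np.array([
        (mu - 0.5 * sigma**2) * dt + lam * dt * mu_j,    # kappa_1
        sigma**2 * dt + lam * dt * m2,                   # kappa_2
        lam * dt * m3,                                   # kappa_3
        lam * dt * m4,                                   # kappa_4
    ])

# Two distinct parameter vectors (mu, sigma, lam, mu_j, sigma_j) ...
p1 = (0.050, 0.2,  1.0, 0.0, 0.10)
p2 = (0.035, 0.1, 16.0, 0.0, 0.05)
print(jd_cumulants(*p1))   # identical output for both parameter sets,
print(jd_cumulants(*p2))   # ... so cumulant fitting alone cannot separate them
```

This is why the authors need a multi-parameter trade-off with the empirical-density least-squares term rather than semi-invariant fitting on its own.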
215

High-Dimensional Analysis of Regularized Convex Optimization Problems with Application to Massive MIMO Wireless Communication Systems

Alrashdi, Ayed 03 1900 (has links)
In the past couple of decades, the amount of available data has dramatically increased. Thus, in modern large-scale inference problems, the dimension of the signal to be estimated is comparable to, or even larger than, the number of available observations. Yet the desired properties of the signal typically lie in some low-dimensional structure, such as sparsity, low-rankness, a finite alphabet, etc. Recently, non-smooth regularized convex optimization has risen as a powerful tool for the recovery of such structured signals from noisy linear measurements in an assortment of applications in signal processing, wireless communications, machine learning, computer vision, etc. With the advent of Compressed Sensing (CS), a huge number of theoretical results have considered the estimation performance of non-smooth convex optimization in such a high-dimensional setting. In this thesis, we focus on precisely analyzing the high-dimensional error performance of such regularized convex optimization problems in the presence of impairments (such as uncertainties) in the measurement matrix, which has independent Gaussian entries. The precise nature of our analysis allows performance comparison between different types of these estimators and enables us to optimally tune the involved hyper-parameters. In particular, we study the performance of some of the most popular cases in linear inverse problems, such as the LASSO, Elastic Net, Least Squares (LS), Regularized Least Squares (RLS), and their box-constrained variants. In each context, we define appropriate performance measures and sharply analyze them in the high-dimensional statistical regime. We use our results in a concrete application: designing efficient decoders for modern massive multi-input multi-output (MIMO) wireless communication systems and optimally allocating their power. The framework used for the analysis is based on Gaussian process methods, in particular on a recently developed strong and tight version of the classical Gordon Comparison Inequality called the Convex Gaussian Min-max Theorem (CGMT). We also use some results from Random Matrix Theory (RMT) in our analysis.
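One of the box-constrained estimators the abstract mentions can be illustrated concretely: for BPSK symbols, a box-constrained regularized least-squares decoder solves a convex problem with the estimate confined to [-1, 1]^n. The sketch below uses a generic bounded least-squares solver and assumed dimensions; it illustrates the estimator class, not the thesis's tuned decoder.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
n, m = 64, 128                                  # transmit/receive antennas (assumed)
x_true = rng.choice([-1.0, 1.0], size=n)        # BPSK symbols
H = rng.standard_normal((m, n)) / np.sqrt(m)    # i.i.d. Gaussian channel
y = H @ x_true + 0.3 * rng.standard_normal(m)

# Box-constrained RLS: min ||y - Hx||^2 + lam * ||x||^2  s.t.  -1 <= x <= 1.
# The ridge term is folded in by stacking sqrt(lam) * I under H.
lam = 0.1
H_aug = np.vstack([H, np.sqrt(lam) * np.eye(n)])
y_aug = np.concatenate([y, np.zeros(n)])
res = lsq_linear(H_aug, y_aug, bounds=(-1.0, 1.0))
x_hat = np.sign(res.x)                          # hard decisions
print("bit error rate:", np.mean(x_hat != x_true))
```

The choice of lam is exactly the hyper-parameter tuning question the thesis answers via the CGMT: the bit error rate of this decoder can be predicted precisely in the high-dimensional limit, and lam chosen to minimize it.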
216

Residual Stress Analysis in 3C-SiC Thin Films by Substrate Curvature Method

Carballo, Jose M 25 March 2010 (has links)
Development of thin films has allowed for important improvements in optical, electronic, and electromechanical devices at micrometer length scales. A wide variety of deposition techniques exist for growing thin films, and each technique offers a unique set of advantages. The main challenge of thin film deposition is to reach the smallest possible dimensions while achieving mechanical stability under operating conditions (including extreme temperatures and external forces, complex film structures, and device configurations). Silicon carbide (SiC) is attractive for its resistance to harsh environments and the potential it offers to improve performance in several microelectronic, micro-electromechanical, and optoelectronic applications. The challenge is to overcome the presence of high defect densities within the structure of SiC when it is grown as a crystalline thin film. For this reason, it is important to monitor the levels of residual stress, inherited from such growth defects, which can compromise the mechanical stability of SiC thin film devices. Stoney's equation is the theoretical foundation of the curvature method for measuring thin film residual stress. It connects residual film stress with substrate curvature through the bending mechanics of thin plates. Important assumptions and simplifications are made about the material properties, dimensions, and loading conditions of the film-substrate system; however, accuracy is reduced when such simplifications are applied. In recent studies of cubic SiC growth, certain assumptions of Stoney's equation are violated in order to obtain approximate values of the average residual stress. Furthermore, several studies have proposed to expand the scope of Stoney's equation; however, such expansions demand more extensive substrate deflection measurements, before and after film deposition. The goal of this work is to improve the analysis of substrate deflection data obtained by mechanical profilometry, which is a simple and inexpensive technique. Scatter in the deflection data complicates the use of simple processes such as direct differentiation or polynomial fitting. One proposed method is total variation regularization of the differentiation process; the results are promising for the adaptation of mechanical profilometry for complete measurement of all components of a non-uniform substrate curvature.
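For reference, the Stoney relation this abstract builds on can be evaluated directly. The sketch below implements its common thin-film form; the substrate properties (silicon values) and the curvature are assumptions chosen for illustration.

```python
def stoney_stress(E_s, nu_s, h_s, h_f, kappa, kappa0=0.0):
    """Average residual film stress from substrate curvature change
    (Stoney's equation, thin-film limit h_f << h_s):
        sigma_f = E_s * h_s^2 * (kappa - kappa0) / (6 * (1 - nu_s) * h_f)
    kappa, kappa0: substrate curvature (1/m) after and before deposition."""
    return E_s * h_s**2 * (kappa - kappa0) / (6.0 * (1.0 - nu_s) * h_f)

# Assumed illustrative values: Si substrate, 2 um 3C-SiC film
sigma = stoney_stress(E_s=130e9, nu_s=0.28, h_s=500e-6,
                      h_f=2e-6, kappa=0.05)   # curvature change of 0.05 1/m
print(f"average film stress ~ {sigma / 1e6:.0f} MPa")
```

Note that the formula depends on the curvature only through the difference (kappa - kappa0), which is why the expanded analyses mentioned above require deflection measurements both before and after deposition.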
217

Resettling Displaced Residents from Regularized Settlements in Dar es Salaam City, Tanzania : The case of Community Infrastructure Upgrading Program (CIUP)

Magembe-Mushi, Dawah Lulu January 2011 (has links)
This research examines the process of displacement and resettlement of residents who were affected by the regularization process within the Manzese and Buguruni wards in Dar es Salaam City, Tanzania. It aimed at analyzing the issues and opportunities faced by the affected residents during regularization. Regularization, which involves two processes, tenure upgrading and physical upgrading, has been used extensively to solve problems associated with unplanned and informal settlements in developing countries in Africa, Asia, and Latin America. It is a process used to bring informal and unauthorized settlements into the legal, official, and administrative structures of land management, as well as to improve the living conditions of their dwellers. In Tanzania, where more than 80 per cent of urban residents live in informal settlements, the process has been practiced in order to provide basic services, such as access roads, storm water drainage, street lights, water supply, and public toilets, within informal and unplanned settlements. Compared to previous upgrading strategies, such as slum clearance, site-and-services schemes, and squatter upgrading, regularization has been considered to bring positive results.

The main concern of this research is physical regularization, which was implemented through the Community Infrastructure Upgrading Project (CIUP) within sixteen settlements in Dar es Salaam City. During its implementation, about twenty households of tenants and house owners were displaced. Being explorative, this research focused on understanding the process of displacement and resettlement using a qualitative method. This was done through the narrations of six tenants and four house owners who were traced and found within the affected settlements of Mnazi Mmoja, Mnyamani, and Madenge. It applied a case study strategy in which the settlements formed the main case study areas and the individual displaced residents became sub-cases. Experiences before, during, and after displacement and resettlement were narrated in in-depth interviews. The settlements were selected through criteria sampling, and the individual displaced residents were found using a snowball approach. The resettlement issues and opportunities faced by displaced tenants and house owners were analyzed, and the emerging patterns of issues and opportunities were identified. The issues include loss of access to common facilities, homelessness, marginalization and social disarticulation, family disintegration, and joblessness. The opportunities include improved facilities, expansion of human competence and social opportunities, enhanced capabilities, and improved social services. It was also found that the issues suffered and the opportunities accrued by house owners differed from those of tenants. The research examined the process of displacement and resettlement through the policy and legal frameworks that guided the regularization. It also used justice and collaborative theories in formulating concepts for data collection, analysis, and discussion of the results. The discussions revealed gaps in the process, as indicated in the experiences of the individual cases. These gaps include a lack of real participation and democracy, insufficient knowledge of compensation levels, and insufficient community participation, especially with the affected tenants.

The research provides indicative knowledge on the regularization process, which can further be used to improve the planning process.
218

Regularizing An Ill-Posed Problem with Tikhonov’s Regularization

Singh, Herman January 2022 (has links)
This thesis presents how Tikhonov's regularization can be used to solve an inverse problem for the Helmholtz equation inside a rectangle, with both Neumann and Dirichlet boundary conditions imposed on the rectangle. A linear operator containing a Fourier series is derived from the Helmholtz equation. Using this linear operator, an expression for the inverse operator can be formulated to solve the inverse problem. However, the inverse problem turns out to be ill-posed according to Hadamard's definition. The regularization used in this thesis to overcome the ill-posedness is Tikhonov's regularization. To compare the efficiency of this inverse operator with Tikhonov's regularization, another inverse operator is derived from the Helmholtz equation in the partial frequency domain. The inverse operator from the frequency domain is also regularized with Tikhonov's regularization. Plots and error measurements are given to show how accurate Tikhonov's regularization is for both inverse operators. The main focus of this thesis is the inverse operator containing the Fourier series. A series of examples is also given to support the definitions, theorems, and proofs made in this work.
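In spectral form, Tikhonov's regularization replaces the exact (unbounded) inverse 1/lambda_k with the damped filter lambda_k / (lambda_k^2 + alpha). A minimal sketch follows; the exponentially decaying singular values are an assumption chosen to mimic the severe ill-posedness typical of such Helmholtz-type inverse problems, not the operator derived in the thesis.

```python
import numpy as np

def tikhonov_coefficients(b, lam, alpha):
    """Tikhonov-regularized inverse of a diagonal (spectral) operator:
    minimizes ||lam * f - b||^2 + alpha * ||f||^2 coefficient-wise, giving
    f_k = lam_k * b_k / (lam_k^2 + alpha) instead of the unstable b_k / lam_k."""
    return lam * b / (lam**2 + alpha)

k = np.arange(1, 51)
lam = np.exp(-k)                       # assumed rapidly decaying singular values
f_true = 1.0 / k**2                    # a smooth "true" coefficient sequence
b = lam * f_true + 1e-6 * np.random.default_rng(2).standard_normal(k.size)

naive = b / lam                        # noise amplified by e^k: blows up
regularized = tikhonov_coefficients(b, lam, alpha=1e-10)
print(np.abs(naive - f_true).max())        # astronomically large
print(np.abs(regularized - f_true).max())  # small, controlled error
```

The parameter alpha plays the same balancing role as the truncation level in a spectral cut-off: too small and the noise is amplified, too large and the solution is over-smoothed.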
219

The visual perception of 3D shape from stereo: Metric structure or regularization constraints?

Yu, Ying 07 December 2017 (has links)
No description available.
220

Neural Network Regularization for Generalized Heart Arrhythmia Classification

Glandberger, Oliver, Fredriksson, Daniel January 2020 (has links)
Background: Arrhythmias are a collection of heart conditions that affect almost half of the world's population and accounted for roughly 32.1% of all deaths in 2015. More importantly, early detection of arrhythmia through electrocardiogram analysis can prevent up to 90% of these deaths. Neural networks are a modern and increasingly popular tool of choice for classifying arrhythmias hidden within ECG data. In the pursuit of increased classification accuracy, some of these neural networks can become quite complex, which can result in overfitting. To combat this phenomenon, a technique called regularization is typically used. Problem Statement: Practically all of today's research on utilizing neural networks for arrhythmia detection incorporates some form of regularization. However, most of this research has chosen not to focus on, and experiment with, regularization. In this thesis, we measured and compared different regularization techniques in order to improve arrhythmia classification accuracy. Objectives: The main objective of this thesis is to expand upon a baseline neural network model, one that uses transfer learning to classify arrhythmia from two-dimensional ECG data, by incorporating various regularization techniques, and to compare how the new models perform in relation to the baseline model. The regularization techniques used are L1, L2, L1 + L2, and Dropout. Methods: The study used quantitative experimentation in order to gather metrics from all of the models. Information regarding related work and relevant scientific articles was collected from Summon and Google Scholar. Results: The study shows that Dropout generally produces the best results, on average improving performance across all parameters and metrics. The Dropout model with a regularization parameter of 0.1 performed particularly well. Conclusions: The study concludes that there are multiple models which can be considered to have the greatest positive impact on the baseline model. Depending on how much one values the consequences of false negatives versus false positives, several candidates can be considered the best model. For example, is it worth choosing a model that misses 11 people suffering from arrhythmia but simultaneously catches 1651 mistakenly classified arrhythmia cases?
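The three regularization families the thesis compares can be attached to a network as follows; a minimal PyTorch sketch, assuming a toy fully-connected classifier (the thesis's actual model uses transfer learning on two-dimensional ECG data, and the layer sizes here are illustrative only).

```python
import torch
import torch.nn as nn

class ToyECGClassifier(nn.Module):
    """Illustrative classifier with Dropout; not the thesis's architecture."""
    def __init__(self, n_features=256, n_classes=5, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Dropout(p_drop),              # Dropout regularization (p = 0.1)
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = ToyECGClassifier()

# L2 regularization: most simply applied as weight decay in the optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

def loss_fn(logits, targets, l1_coeff=1e-5):
    """Cross-entropy plus an explicit L1 penalty (set l1_coeff=0 to disable);
    combining it with weight decay gives the L1 + L2 variant."""
    l1 = sum(p.abs().sum() for p in model.parameters())
    return nn.functional.cross_entropy(logits, targets) + l1_coeff * l1
```

Dropout acts only at training time (disabled by `model.eval()`), whereas the L1 and L2 penalties shape the weights themselves, which is why the thesis treats them as separate experimental arms.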
