About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Reinforcement Learning for Wind Turbine Load Control: How AI can drive tomorrow's wind turbines

Westerbeck, Nico, Gonsior, Julius, Marten, David, Perez-Becker, Sebastian 30 May 2023 (has links)
Load control strategies for wind turbines are used to reduce the structural wear of the turbine without reducing energy yield. Until now, these control strategies have been built almost exclusively on linear approaches like PID and model-based controllers due to their robustness. However, advances in turbine size and capabilities create a need for more complex control strategies that can effectively address design challenges in modern turbines. This work presents WINDL, a load control policy based on a neural network, which is trained through model-free Reinforcement Learning (RL) on a simulated wind turbine. While RL has achieved great success in the past on games and simple simulation benchmarks, applications to more complex control problems have only recently begun to emerge. We show that through the use of regularization techniques and signal transformations, such an application to the field of wind turbine load control is possible. Using a smoothness regularizer, we incentivize the highly non-linear neural network policy to output control actions that are safe to apply to a wind turbine. The Coleman transformation, a common tool for the design of traditional PID-based load control strategies, is used to project signals into a stationary coordinate space, increasing robustness and final policy performance. Trained to control a large offshore turbine in a model-free fashion, WINDL finds a control policy that outperforms a state-of-the-art controller based on the IPC strategy with respect to the primary optimization goal, blade loads. Using the DEL metric, we measure 54.1% lower blade loads in the steady wind scenario and 13.45% lower blade loads in the turbulent wind scenario. While such levels of blade load reduction come with slightly worse performance on secondary optimization goals like pitch wear and power production, we demonstrate the ability to control the trade-off between different optimization goals using the example of pitch wear versus blade loads. To complement our findings, we perform a qualitative analysis of the policy behavior and learning process. We believe our work to be the first application of RL to wind turbine load control that exceeds baseline performance in the primary optimization metric, opening up the possibility of including specialized load controllers targeting critical design-driving scenarios of modern large wind turbines.
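The two signal-handling ingredients named in the abstract have compact, well-known forms. Below is a minimal sketch, assuming a three-bladed turbine and the standard multi-blade coordinate convention; it illustrates the general technique, not WINDL's actual implementation, and the penalty weight `lam` is a hypothetical choice.

```python
import numpy as np

def coleman_forward(m_blades, psi):
    """Project three per-blade signals (e.g. root bending moments) from the
    rotating frame into fixed-frame collective/tilt/yaw components."""
    psi_k = psi + 2.0 * np.pi * np.arange(3) / 3.0   # azimuth of each blade
    m_col = np.mean(m_blades)
    m_tilt = (2.0 / 3.0) * np.sum(m_blades * np.cos(psi_k))
    m_yaw = (2.0 / 3.0) * np.sum(m_blades * np.sin(psi_k))
    return m_col, m_tilt, m_yaw

def coleman_inverse(t_col, t_tilt, t_yaw, psi):
    """Map fixed-frame pitch demands back to individual blade pitch angles."""
    psi_k = psi + 2.0 * np.pi * np.arange(3) / 3.0
    return t_col + t_tilt * np.cos(psi_k) + t_yaw * np.sin(psi_k)

def smoothness_penalty(actions, lam=0.1):
    """Generic smoothness regularizer: penalize squared differences between
    consecutive control actions so the learned policy avoids high-frequency
    commands that would wear the pitch actuators (lam is hypothetical)."""
    return lam * np.mean(np.sum(np.diff(actions, axis=0) ** 2, axis=-1))
```

Working in the non-rotating frame removes the dominant once-per-revolution periodicity from the signals the policy sees and acts on, which is consistent with the robustness gain the abstract reports.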
242

High Order Implementation in Integral Equations

Marshall, Joshua P 09 August 2019 (has links)
This work presents a number of contributions to the areas of numerical integration, singular integrals, and boundary element methods. The first contribution is an elemental distortion technique, based on the Duffy transformation, used to improve efficiency for the numerical integration of nearly hypersingular integrals. Results show that this method can reduce quadrature expense by up to 75 percent over the standard Duffy transformation. The second contribution improves the integration of weakly singular integrals by using regularization to smooth the integrand. Results show that the method may reduce errors by several orders of magnitude for the same quadrature order. The final contribution investigates the use of regularization applied to hypersingular integrals in the context of the boundary element method in three dimensions. Using the simple solutions technique, the BEM is reduced to a weakly singular form that directly supports numerical integration, and results indicate that the method is more efficient than the state of the art.
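For readers unfamiliar with the baseline these contributions modify, the sketch below applies the standard Duffy transformation to a weakly singular integrand over a reference triangle. The integrand, triangle, and quadrature order are illustrative choices, not taken from the thesis.

```python
import numpy as np

def f(x, y):
    """Weakly singular integrand with a 1/r singularity at the origin."""
    return 1.0 / np.sqrt(x * x + y * y)

def duffy_quadrature(n):
    """Integrate f over the triangle (0,0)-(1,0)-(1,1) using the Duffy map
    x = u, y = u*v on [0,1]^2; the Jacobian u cancels the 1/r singularity,
    so plain tensor-product Gauss-Legendre quadrature converges rapidly."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    u = 0.5 * (nodes + 1.0)   # map Gauss nodes from [-1, 1] to [0, 1]
    w = 0.5 * weights
    total = 0.0
    for ui, wi in zip(u, w):
        for vj, wj in zip(u, w):
            total += wi * wj * ui * f(ui, ui * vj)  # ui is the Jacobian
    return total

print(duffy_quadrature(8))   # ≈ 0.881374
print(np.arcsinh(1.0))       # exact value, ln(1 + sqrt(2))
```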
243

Regularization Methods for Ill-posed Problems

Neuman, Arthur James, III 15 June 2010 (has links)
No description available.
244

Space-Frequency Regularization for Qualitative Inverse Scattering

Alqadah, Hatim F. January 2011 (has links)
No description available.
245

A Geometric Singular Perturbation Theory Approach to Viscous Singular Shocks Profiles for Systems of Conservation Laws

Hsu, Ting-Hao 14 October 2015 (has links)
No description available.
246

Parameter Choices for the Split Bregman Method Applied to Signal Restoration

Hashemi, Seyyed Amirreza 20 October 2016 (has links)
No description available.
247

Regularized Fine-tuning Strategies for Neural Language Models: Application of entropy regularization on GPT-2

Hong, Jae Eun January 2022 (has links)
Deep neural language models like GPT-2 are undoubtedly strong at text generation, but often require special decoding strategies to prevent degenerate output, namely repetition. The use of a maximum likelihood training objective results in a peaked probability distribution, leading to over-confident neural networks. In this thesis, we explore entropy regularization for a neural language model, which can smooth the peaked output distribution during the fine-tuning process, employing GPT-2. We first define the models in three ways: (1) an out-of-the-box model without fine-tuning, (2) a fine-tuned model without entropy regularization, and (3) a fine-tuned model with entropy regularization. To investigate the effect of domains on the model, we also split the data in three ways: (1) fine-tuned on a heterogeneous dataset, tested on a heterogeneous dataset, (2) fine-tuned on a homogeneous dataset, tested on a homogeneous dataset, and (3) fine-tuned on a heterogeneous dataset, tested on a homogeneous dataset. For entropy regularization, we experiment with the entropy strength parameter (𝛽) over the values {0.5, 1.0, 2.0, 4.0, 6.0} and with annealing the parameter during fine-tuning. Our findings show that entropy-based regularization during fine-tuning improves text generation models by significantly reducing the repetition rate without tuning the decoding strategies. Comparing the probabilities of human-generated sentence tokens, we observe that entropy regularization compensates for the shortcomings of deterministic decoding methods (beam search) that mostly select a few high-probability words. Various studies have explored entropy regularization in the cold-start training process of neural networks, but few cover its effect at the fine-tuning stage of text generation tasks with large-scale pre-trained language models. Our findings present strong evidence that one can achieve significant improvement in text generation by utilizing entropy regularization, a highly cost-effective approach, during the fine-tuning process.
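The objective described above is compact enough to sketch. The following is a minimal, generic formulation, assuming the entropy term is averaged over all positions; the exact loss used in the thesis (masking, averaging, annealing schedule) may differ.

```python
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, targets, beta=1.0):
    """Cross-entropy minus beta times the mean entropy of the next-token
    distribution. Subtracting entropy rewards flatter distributions and so
    counteracts the over-confidence induced by maximum likelihood training.
    logits: (batch, seq, vocab); targets: (batch, seq) token ids."""
    vocab = logits.size(-1)
    ce = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1).mean()
    # beta plays the role of the thesis's strength parameter; an annealing
    # schedule would simply vary beta over fine-tuning steps
    return ce - beta * entropy
```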
248

MRI Velocity Quantification / Implementation and Evaluation of Elementary Functions for the Cell Broadband Engine

Li, Wei 27 June 2007 (has links)
Magnetic Resonance Imaging (MRI) velocity quantification is addressed in Part I of this thesis. In simple MR imaging, data is collected and tissue densities are displayed as images. Moving tissue creates signals which appear as artifacts in the images. In velocity imaging, more data is collected and phase differences are used to quantify the velocity of tissue components. The problem is described and a novel formulation of a regularized, nonlinear inverse problem is proposed. Both Tikhonov and Total Variation regularization are discussed. Results of numerical simulations show that significant noise reduction is possible.

The method is first verified in MATLAB. A number of experiments are carried out with different regularization parameters, different magnetic fields, and different noise levels. The experiments show that the stronger the complex noise, the stronger the magnetic field required to estimate the velocity. The regularization parameter also plays an important role: given the noise level and an appropriate value of the regularization parameter, the estimated velocity converges to the ideal velocity quickly. A proof-of-concept implementation on the Cell BE processor is described, quantifying the performance potential of this platform.

The second part of this thesis concerns the evaluation of an elementary function library. Since the CBE SPU is designed for compute-intensive applications, a well-developed library of math functions helps developers and saves them from handling low-level details. Dr. Anand's research group at McMaster developed 28 math functions for the CBE SPU. Test tools for accuracy and performance were developed on the CBE, and the functions were tuned during testing. The functions are either competitive with or additions to the existing SDK 1.1 SPU math functions. / Thesis / Master of Applied Science (MASc)
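Of the two regularizers mentioned, Tikhonov has a closed form in the linear case that makes the idea easy to show. The sketch below is a generic illustration with a synthetic ill-conditioned operator; it is not the thesis's nonlinear MRI formulation.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam ||x||^2 via the normal equations
    (A^T A + lam I) x = A^T b; lam > 0 damps directions in which A is
    nearly singular, which is what suppresses noise amplification."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# illustrative use on a synthetic ill-conditioned problem
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50)) @ np.diag(np.logspace(0, -4, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # noise gets amplified
x_reg = tikhonov_solve(A, b, lam=1e-4)           # damped, more stable
```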
249

Approximate Deconvolution Reduced Order Modeling

Xie, Xuping 01 February 2016 (has links)
This thesis proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (ν = 10⁻³). / Master of Science
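The POD step named above is standard: the basis comes from the SVD of a snapshot matrix. A minimal sketch follows, with the snapshot source and truncation rank r as placeholders.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Rank-r POD basis of a snapshot matrix whose columns are solution
    states at different time instants. The leading left singular vectors
    form the energy-optimal linear basis for the snapshot set."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    captured = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative "energy"
    return U[:, :r], captured[r - 1]

# a ROM then evolves r coefficients a(t) and reconstructs u(t) ≈ U_r @ a(t);
# the closure problem arises because the discarded modes still influence a(t)
```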
250

On the Effectiveness of Dimensionality Reduction for Unsupervised Structural Health Monitoring Anomaly Detection

Soleimani-Babakamali, Mohammad Hesam 19 April 2022 (has links)
Dimensionality reduction (DR) techniques enhance data interpretability and reduce space complexity, though at the cost of information loss. Such methods have been prevalent in the Structural Health Monitoring (SHM) anomaly detection literature. While DR is favorable in supervised anomaly detection, where possible novelties are known a priori, its efficacy is less clear in unsupervised detection. In this work, we perform a detailed assessment of the DR performance trade-offs to determine whether the information loss imposed by DR can impact SHM performance for previously unseen novelties. As a basis for our analysis, we rely on an SHM anomaly detection method operating on the input signals' fast Fourier transform (FFT). The FFT is regarded as a raw, frequency-domain feature that allows studying various DR techniques. We design extensive experiments comparing various DR techniques, including neural autoencoder models, on two SHM benchmark datasets. Results imply that the loss of information is the more detrimental factor, reducing the novelty detection accuracy by up to 60% with autoencoder-based DR. Regularization can alleviate some of these challenges, though its effect is unpredictable. Dimensions carrying substantial vibrational information mostly survive DR; the impact of regularization thus suggests that these dimensions are not reliable damage-sensitive features with respect to unseen faults. Consequently, we argue that designing new SHM anomaly detection methods that can work with high-dimensional raw features is a necessary research direction, and we present open challenges and future directions. / M.S. / Structural health monitoring (SHM) aids the timely maintenance of infrastructure, saving human lives and natural resources. Infrastructure will undergo unforeseen damage in the future, so data-driven SHM techniques that handle unlabeled data (i.e., unsupervised learning) are suitable for real-world use. Lacking labels and predefined data classes, data instances are categorized through similarities, i.e., distances. Still, distance metrics in high-dimensional spaces can become meaningless, so methods that reduce data dimensions are common practice, yet at the cost of information loss. Naturally, a trade-off exists between the loss of information and the increased interpretability of the low-dimensional spaces induced by dimensionality reduction procedures. This study proposes an unsupervised SHM technique that works with both low- and high-dimensional data to assess that trade-off. Results show the negative impacts of dimensionality reduction to be more severe than its benefits; developing unsupervised SHM methods on raw data is thus encouraged for real-world applications.
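The general shape of the pipeline under study is easy to sketch: frequency-domain features, a DR step, and a reconstruction-error anomaly score. The sketch below uses PCA as a stand-in for the DR step; the thesis's benchmark datasets, autoencoder architectures, and thresholds are not reproduced here.

```python
import numpy as np

def fft_features(signals):
    """Raw frequency-domain features: magnitudes of the one-sided FFT of
    each vibration signal (one signal per row)."""
    return np.abs(np.fft.rfft(signals, axis=1))

def pca_anomaly_scores(train_feats, test_feats, r):
    """Fit an r-dimensional PCA subspace to healthy-state features, then
    score test instances by reconstruction error. Whatever the projection
    discards is exactly the information loss the thesis studies."""
    mean = train_feats.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_feats - mean, full_matrices=False)
    P = Vt[:r]                              # top-r principal directions
    centered = test_feats - mean
    recon = centered @ P.T @ P              # project onto subspace and back
    return np.linalg.norm(centered - recon, axis=1)
```

An instance is then flagged as anomalous when its score exceeds a threshold calibrated on healthy data.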
