About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
371

Data-Driven Variational Multiscale Reduced Order Modeling of Turbulent Flows

Mou, Changhong 16 June 2021 (has links)
In this dissertation, we consider two different strategies for improving projection-based reduced order model (ROM) accuracy: (I) adding closure terms to the standard ROM; and (II) using Lagrangian data to improve the ROM basis. Following strategy (I), we propose a new data-driven ROM framework that centers around the hierarchical structure of the variational multiscale (VMS) methodology and utilizes data to increase the ROM accuracy at a modest computational cost. The VMS methodology is a natural fit for the hierarchical structure of the ROM basis: In the first step, we use the ROM projection to separate the scales into three categories: (i) resolved large scales, (ii) resolved small scales, and (iii) unresolved scales. In the second step, we explicitly identify the VMS-ROM closure terms, i.e., the terms representing the interactions among the three types of scales. In the third step, we use available data to model the VMS-ROM closure terms. Thus, instead of the phenomenological models used in VMS for standard numerical discretizations (e.g., eddy viscosity models), we utilize available data to construct new structural VMS-ROM closure models. Specifically, we build ROM operators (vectors, matrices, and tensors) that are closest to the true ROM closure terms evaluated with the available data. We test the new data-driven VMS-ROM in the numerical simulation of four test cases: (i) the 1D Burgers equation with viscosity coefficient $\nu = 10^{-3}$; (ii) a 2D flow past a circular cylinder at Reynolds numbers $Re=100$, $Re=500$, and $Re=1000$; (iii) the quasi-geostrophic equations at Reynolds number $Re=450$ and Rossby number $Ro=0.0036$; and (iv) a 2D flow over a backward-facing step at Reynolds number $Re=1000$. The numerical results show that the data-driven VMS-ROM is significantly more accurate than standard ROMs. Furthermore, we propose a new hybrid ROM framework for the numerical simulation of fluid flows. 
This hybrid framework incorporates two closure modeling strategies: (i) a structural closure modeling component that involves the recently proposed data-driven variational multiscale ROM approach, and (ii) a functional closure modeling component that introduces an artificial viscosity term. We also utilize physical constraints on the structural ROM operators in order to add robustness to the hybrid ROM. We perform a numerical investigation of the hybrid ROM for the three-dimensional turbulent channel flow at a Reynolds number $Re = 13,750$. In addition, we focus on the mathematical foundations of ROM closures. First, we extend the verifiability concept from large eddy simulation to the ROM setting. Specifically, we call a ROM closure model verifiable if a small ROM closure model error (i.e., a small difference between the true ROM closure and the modeled ROM closure) implies a small ROM error. Second, we prove that a data-driven ROM closure (i.e., the data-driven variational multiscale ROM) is verifiable. For strategy (II), we propose new Lagrangian inner products that we use, together with Eulerian and Lagrangian data, to construct new Lagrangian ROMs. We show that the new Lagrangian ROMs are orders of magnitude more accurate than the standard Eulerian ROMs, i.e., ROMs that use the standard Eulerian inner product and data to construct the ROM basis. Specifically, for the quasi-geostrophic equations, we show that the new Lagrangian ROMs are more accurate than the standard Eulerian ROMs in approximating not only Lagrangian fields (e.g., the finite time Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction). We emphasize that the new Lagrangian ROMs do not employ any closure modeling to model the effect of discarded modes (which is standard procedure for low-dimensional ROMs of complex nonlinear systems). 
Thus, the dramatic increase in the new Lagrangian ROMs' accuracy is entirely due to the novel Lagrangian inner products used to build the Lagrangian ROM basis. / Doctor of Philosophy / Reduced order models (ROMs) are popular in physical and engineering applications: for example, ROMs are widely used in aircraft design because they can greatly reduce the computational cost of the aircraft's aeroelastic predictions while retaining good accuracy. However, for high Reynolds number turbulent flows, such as blood flows in arteries, oil transport in pipelines, and ocean currents, the standard ROMs may yield inaccurate results. In this dissertation, to improve ROM accuracy for turbulent flows, we investigate three different types of ROMs. Both numerical and theoretical results show that the proposed new ROMs yield more accurate results than the standard ROM and thus can be more useful.
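The core step of a data-driven closure of this kind — fitting ROM operators to the true closure terms evaluated from data — can be sketched as a least-squares problem. The sketch below is illustrative only: the function name and the linear-plus-quadratic ansatz are assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def fit_rom_closure(a, tau):
    """Fit a closure model tau(t) ~ A a(t) + B : (a(t) a(t)^T) by least squares.

    a:   (M, r) array of ROM coefficients at M snapshots
    tau: (M, r) array of true closure terms evaluated from data
    Returns the matrix operator A (r, r) and tensor operator B (r, r, r).
    """
    M, r = a.shape
    # Feature matrix: linear terms a_j and quadratic terms a_j * a_k
    quad = np.einsum('mj,mk->mjk', a, a).reshape(M, r * r)
    X = np.hstack([a, quad])                      # (M, r + r^2)
    # Solve min ||X W - tau||_F (minimum-norm solution if rank-deficient)
    W, *_ = np.linalg.lstsq(X, tau, rcond=None)
    A = W[:r].T                                   # linear (matrix) operator
    B = W[r:].T.reshape(r, r, r)                  # quadratic (tensor) operator
    return A, B
```

Given enough snapshots, the fitted operators reproduce the closure terms used for training; in practice one would add regularization to avoid overfitting low-energy modes.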
372

DATA-DRIVEN APPROACHES FOR UNCERTAINTY QUANTIFICATION WITH PHYSICS MODELS

Huiru Li (18423333) 25 April 2024 (has links)
This research aims to address critical challenges in uncertainty quantification (UQ). The objective is to employ data-driven approaches for UQ with physics models.
373

Large Eddy Simulation Reduced Order Models

Xie, Xuping 12 May 2017 (has links)
This dissertation uses spatial filtering to develop a large eddy simulation reduced order model (LES-ROM) framework for fluid flows. Proper orthogonal decomposition is utilized to extract the dominant spatial structures of the system. Within the general LES-ROM framework, two approaches are proposed to address the celebrated ROM closure problem. No phenomenological arguments (e.g., of eddy viscosity type) are used to develop these new ROM closure models. The first novel model is the approximate deconvolution ROM (AD-ROM), which uses methods from image processing and inverse problems to solve the ROM closure problem. The AD-ROM is investigated in the numerical simulation of a 3D flow past a circular cylinder at a Reynolds number $Re=1000$. The AD-ROM generates accurate results without any numerical dissipation mechanism. It also decreases the CPU time of the standard ROM by orders of magnitude. The second new model is the calibrated-filtered ROM (CF-ROM), which is a data-driven ROM. The available full order model results are used offline in an optimization problem to calibrate the ROM subfilter-scale stress tensor. The resulting CF-ROM is tested numerically in the simulation of the 1D Burgers equation with a small diffusion parameter. The numerical results show that the CF-ROM is more efficient than and as accurate as state-of-the-art ROM closure models. / Ph. D.
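The approximate deconvolution idea behind the AD-ROM can be illustrated with the classical van Cittert iteration, which approximately inverts a spatial filter. This is a minimal sketch under assumed conditions (an explicit filter matrix `G`; the function name is invented for illustration, not taken from the dissertation):

```python
import numpy as np

def van_cittert_deconvolve(G, u_bar, n_iter=5):
    """Approximate deconvolution: given filtered data u_bar = G u, recover an
    approximation of u via the van Cittert iteration u_{k+1} = u_k + (u_bar - G u_k)."""
    u = u_bar.copy()
    for _ in range(n_iter):
        u = u + (u_bar - G @ u)
    return u
```

A few iterations sharpen the filtered field considerably; the iteration count acts as a regularization parameter, since full inversion of a smoothing filter is ill-posed.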
374

ENHANCING AUTOMOTIVE MANUFACTURING QUALITY AND REDUCING VARIABILITY THROUGH SIX SIGMA PRINCIPLES

Cholakkal, Mohamed Jasil, Chettiyam Thodi, Nisar Ahamed January 2024 (has links)
The dissertation "Enhancing Automotive Manufacturing Quality and Reducing Variability Through Six Sigma Principles" provides a thorough analysis of the ways in which Six Sigma techniques can be applied in the automotive manufacturing sector to improve quality control, reduce variability, and boost operational efficiency. Drawing on a diverse range of secondary data sources, such as industry reports, case studies, academic research articles, and one-on-one consultations, this study seeks to offer important insights into the implementation and efficacy of Six Sigma principles in the context of automotive manufacturing. By stressing the fundamental ideas of Six Sigma outlined by Deming and Juran and scrutinizing influential works in quality management, the literature review builds a solid theoretical basis. The study's goals and research questions centre on understanding how Six Sigma improves quality and lowers variability in automobile production processes. Through thorough secondary data analysis, the research finds important insights into how Six Sigma may improve quality control, lower process variability, and increase operational efficiency in the automobile manufacturing industry. It offers useful guidance on applying Six Sigma approaches, emphasizing the significance of staff involvement, data-driven decision-making, and leadership commitment in guaranteeing the success of Six Sigma projects. The thesis ends with suggestions for further research, such as investigating primary data gathering techniques, contrasting this methodology with other approaches to quality management, and using longitudinal analysis to monitor the long-term effects of Six Sigma projects. In summary, this dissertation advances our knowledge of how Six Sigma concepts may be used to promote operational excellence and continuous improvement in the automobile manufacturing sector, and provides practitioners and stakeholders in the industry with insightful information.
375

L’acquisition de la liaison chez des apprenants italophones : des atouts d’un corpus de natifs pour l’étude de la liaison en français langue étrangère (FLE) / Acquisition of liaison by Italian-speaking learners : advantages of a native-speaker corpus for the study of liaison in French as a foreign language

Barreca, Giulia 07 December 2015 (has links)
As part of the international project InterPhonologie du Français Contemporain (IPFC) (Detey and Kawaguchi 2008; Racine, Detey and Kawaguchi 2012), this study aims to examine acquisition strategies of liaison in French as a foreign language. While developmental models have been proposed for the acquisition of liaison in L1, to date, no hypothesis accounting for L2 liaison learning has received convincing empirical support (Wauquier 2009). It is in this context that the present longitudinal study of Italian-speaking students of French as a foreign language (FFL) (level A2-B1) can be situated. The results have been enriched by a comparison with other multilingual studies (several source languages) from the international project IPFC. This approach shows the presence of common trends and errors across different populations of learners, suggesting that, despite the use of lexical strategies, learners are able to develop phonological generalisations of liaison. Taking into account the difficulties that the heterogeneity of liaison presents for second language teaching and acquisition (Racine and Detey, in press), learners show weaknesses both in the production of liaison and in epilinguistic knowledge (Gombert 1990) of the variation of liaison. These results led us to use data from a frequency analysis of the native-speaker corpus Phonologie du Français Contemporain (PFC) (Durand, Laks and Lyche 2009) to provide learning resources which, within a data-driven learning approach, we hope will contribute to the renewal of the teaching of liaison in French as a foreign language.
376

[en] PORTFOLIO SELECTION VIA DATA-DRIVEN DISTRIBUTIONALLY ROBUST OPTIMIZATION / [pt] SELEÇÃO DE CARTEIRAS DE ATIVOS FINANCEIROS VIA DATA-DRIVEN DISTRIBUTIONALLY ROBUST OPTIMIZATION

JOAO GABRIEL FELIZARDO S SCHLITTLER 07 January 2019 (has links)
Portfolio optimization traditionally assumes knowledge of the probability distribution of returns, or at least some of its moments. However, it is well known that the probability distribution of returns changes over time, making difficult the practical use of purely statistical models, which rely entirely on an estimated distribution. Robust optimization, on the other hand, assumes a total lack of knowledge about the distribution of returns, and therefore seeks a solution that is optimal for all possible realizations within an uncertainty set of the returns. More recently, the literature has shown that distributionally robust optimization techniques allow us to deal with ambiguity regarding the distribution of returns. However, these methods depend on the construction of the ambiguity set, that is, the set of probability distributions to be considered. This work proposes the construction of polyhedral ambiguity sets based only on a sample of returns. In these sets, the relations between variables are determined by the data in a non-parametric way, and are thus free of possible specification errors of a stochastic model. We propose an algorithm for constructing the ambiguity set and, given the set, a computationally tractable reformulation of the portfolio optimization problem. Numerical experiments show a better performance of the model compared to selected benchmarks.
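The inner step of such a data-driven distributionally robust approach — evaluating the worst-case expected return over a polyhedral ambiguity set around the empirical distribution — can be sketched as a small linear program. The box-constrained set and the function below are illustrative assumptions, not the thesis's exact construction:

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_return(returns, w, eps):
    """Worst-case expected portfolio return over a polyhedral ambiguity set:
    probability vectors p with sum(p) = 1, p >= 0, |p_i - 1/N| <= eps.

    returns: (N, d) sample of asset returns; w: (d,) portfolio weights.
    Solved as an LP: minimize p^T (returns @ w) over the feasible p."""
    N = returns.shape[0]
    r_p = returns @ w                              # per-scenario portfolio return
    bounds = [(max(0.0, 1.0 / N - eps), 1.0 / N + eps) for _ in range(N)]
    res = linprog(c=r_p, A_eq=np.ones((1, N)), b_eq=[1.0], bounds=bounds)
    return res.fun
```

With `eps = 0` the set collapses to the empirical distribution and the LP returns the sample mean; widening `eps` lets the adversary shift mass toward the worst scenarios, so the value decreases monotonically.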
377

Privacy preserving software engineering for data driven development

Tongay, Karan Naresh 14 December 2020 (has links)
The exponential rise in the generation of data has introduced many new areas of research, including data science, data engineering, machine learning, and artificial intelligence, to name a few. It has become important for any industry or organization to precisely understand and analyze its data in order to extract value from it. The value of the data can only be realized when it is put into practice in the real world, and the most common approach to doing this in the technology industry is through software engineering. This brings into the picture the area of privacy-oriented software engineering, reflected in the rise of data protection regulations such as the GDPR (General Data Protection Regulation), the PDPA (Personal Data Protection Act), etc. Many organizations, governments, and companies that have accumulated huge amounts of data over time may conveniently use the data to increase business value, but at the same time the privacy aspects associated with the sensitivity of the data, especially the personal information of individuals, can easily be circumvented while designing a software engineering model for these types of applications. Even before the software engineering phase of any data processing application, there can often be one or more data sharing agreements or privacy policies in place. Every organization may have its own way of maintaining data privacy practices for data-driven development. There is a need to generalize or categorize these approaches into tactics that can be referred to by other practitioners who are trying to integrate data privacy practices into their development. This qualitative study provides an understanding of various approaches and tactics that are being practised within the industry for privacy-preserving data science in software engineering, and discusses a tool for data usage monitoring to identify unethical data access. 
Finally, we studied strategies for secure data publishing and conducted experiments using sample data to demonstrate how these techniques can be helpful for securing private data before publishing. / Graduate
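As a hedged illustration of one common secure-publishing check (not necessarily a technique studied in this thesis), k-anonymity requires that every combination of quasi-identifier values appear in at least k records before release; the function name and column layout below are assumptions:

```python
import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k):
    """Check k-anonymity: every combination of quasi-identifier values
    must occur in at least k records before the data is published."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return int(group_sizes.min()) >= k
```

If the check fails, publishers typically generalize the quasi-identifiers (e.g., coarsen ZIP codes or age ranges) and re-run the check until it passes.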
378

IT’S IN THE DATA: A multimethod study on how SaaS-businesses can utilize cohort analysis to improve marketing decision-making

Fridell, Gustav, Cedighi Chafjiri, Saam January 2020 (has links)
Incorporating data and analytics within marketing decision-making is today crucial for a company’s success. This holds true especially for SaaS-businesses, whose subscription-based pricing model depends on good retention for long-term viability and profitability. Efficiently incorporating data and analytics does have its prerequisites, but for SaaS-businesses it can be achieved using the analytical framework of cohort analysis, which utilizes subscription data to obtain actionable insights on customer behavior and retention patterns. Consequently, to expand the understanding of how SaaS-businesses can utilize data-driven methodologies to improve their operations, this study has examined how SaaS-businesses can utilize cohort analysis to improve marketing decision-making and what the prerequisites are for efficiently doing so. By utilizing a multimethodology approach consisting of action research and a single case study of the fast-growing SaaS-company GetAccept, the study has concluded that the incorporation and utilization of cohort analysis can improve marketing decision-making for SaaS-businesses. This conclusion is drawn from having identified that: the incorporation of cohort analysis can streamline the marketing decision-making process; and the incorporation of cohort analysis can enable decision-makers to obtain a better foundation of information to base marketing decisions upon, thus leading to an improved expected outcome of the decisions. 
Furthermore, to enable efficient data-driven marketing decision-making and effectively utilize methods such as cohort analysis, the study has concluded that SaaS-businesses need to fulfill three prerequisites: management that supports and advocates for data and analytics; a company culture built upon information sharing and evidence-based decision-making; and a customer base large enough to allow similarities within, and differences between, customer segments to be determined as significant. The last prerequisite, however, applies specifically to methods such as, or similar to, cohort analysis. By utilizing other methods, SaaS-businesses might still be able to efficiently utilize data-driven marketing decision-making, as long as the first two prerequisites are fulfilled.
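A minimal sketch of the cohort-analysis computation described above — grouping customers by signup month and tracking what fraction remains active n months later — might look as follows (the column names and function name are assumptions, not taken from the study):

```python
import pandas as pd

def cohort_retention(df):
    """Build a cohort retention matrix from subscription activity data.

    df needs columns: customer_id, signup_month (pd.Period),
    active_month (pd.Period), one row per customer-month of activity.
    Returns a DataFrame: rows = signup cohorts, columns = months since
    signup, values = fraction of the cohort still active."""
    df = df.copy()
    # Cohort age in months: difference between activity and signup periods
    df["age"] = (df["active_month"] - df["signup_month"]).apply(lambda d: d.n)
    counts = (df.groupby(["signup_month", "age"])["customer_id"]
                .nunique().unstack(fill_value=0))
    # Normalize each row by its month-0 cohort size to get retention rates
    return counts.div(counts[0], axis=0)
```

Decision-makers can then read, say, month-3 retention per acquisition cohort directly off the matrix and compare cohorts acquired under different marketing campaigns.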
379

A hybrid prognostic methodology and its application to well-controlled engineering systems

Eker, Ömer F. January 2015 (has links)
This thesis presents a novel hybrid prognostic methodology, integrating physics-based and data-driven prognostic models, to enhance prognostic accuracy, robustness, and applicability. The presented methodology integrates the short-term predictions of a physics-based model with the longer-term projection of a similarity-based data-driven model to obtain remaining useful life estimations. The hybrid prognostic methodology has been applied to specific components of two different engineering systems, one representing an accelerated and the other a nominal degradation process. Clogged filter and fatigue crack propagation failure cases are selected as case studies. An experimental rig has been developed to investigate the accelerated clogging phenomenon, whereas the publicly available Virkler fatigue crack propagation dataset was chosen after an extensive literature search and dataset analysis. The filter clogging experimental rig is designed to obtain reproducible filter clogging data under different operational profiles; this data is intended to serve as a good benchmark dataset for prognostic models. The performance of the presented methodology has been evaluated by comparing remaining useful life estimations obtained from both the hybrid and the individual prognostic models, based on the most recent prognostic evaluation metrics. The results show that the presented methodology improves accuracy, robustness, and applicability. The work contained herein is therefore expected to contribute to scientific knowledge as well as industrial technology development.
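The two ingredients of such a hybrid scheme — a similarity-based remaining useful life (RUL) estimate from a library of run-to-failure trajectories, and a blend with a physics-based short-term prediction — can be sketched as follows. This is an illustrative simplification, not the thesis's exact methodology; the function names and the exponential similarity weighting are assumptions:

```python
import numpy as np

def similarity_rul(history, library):
    """Similarity-based RUL: weight each run-to-failure trajectory in the
    library by its closeness to the observed degradation history, then
    average the trajectories' remaining lives with those weights."""
    k = len(history)
    rems, dists = [], []
    for traj in library:
        if len(traj) <= k:
            continue                              # trajectory too short to compare
        d = np.linalg.norm(np.asarray(traj[:k]) - np.asarray(history))
        rems.append(len(traj) - k)                # remaining life of that trajectory
        dists.append(d)
    w = np.exp(-np.asarray(dists))                # closer trajectories weigh more
    return float(np.dot(w, rems) / w.sum())

def fused_rul(physics_rul, data_rul, horizon, t):
    """Blend the two estimates: trust the physics model early in life,
    shift toward the data-driven model as evidence accumulates."""
    alpha = min(1.0, t / horizon)
    return (1 - alpha) * physics_rul + alpha * data_rul
```

More sophisticated fusions weight the models by their demonstrated prediction error rather than by elapsed time, but the time-based ramp already conveys the short-term/long-term division of labor described above.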
380

L’infrastructure de la science citoyenne : le cas eBird / The infrastructure of citizen science: the case of eBird

Paniagua, Alejandra 04 1900 (has links)
This research explores the evolution of the infrastructure and uses of eBird, one of the world’s largest citizen science projects. It concentrates on the work of eBird with two of its local partners in Latin America, which manage regional portals in Mexico and Peru. eBird allows users throughout the world to contribute their observations of birds online and so to advance the cause of science and bird conservation. These observations are stored and managed in a unified, global database that is freely accessible to all who are interested in birds and their conservation. Participants can use the platform’s various functionalities to organize and visualize their data as well as that of others. The research follows a qualitative methodology based on observation of the eBird platform and on semi-structured interviews with members of the Cornell Lab of Ornithology, the eBird team, and members of the local organizations responsible for eBird in Peru and Mexico. We analyze eBird as an infrastructure whose technical and social sides are interrelated and need to be examined simultaneously. We also explore the variety of uses of the eBird platform and its data by its diverse users. Three major themes emerge: the philosophy of collaboration underlying the development of eBird, the extension and diversification of eBird through its network of partnerships, and a corresponding increase in both participation and volume of data. Finally, we also observe an evolution in the type and variety of uses for eBird observations, and in what eBird represents as an infrastructure.
