161

Analyse de connectivité et techniques de partitionnement de données appliquées à la caractérisation et la modélisation d'écoulement au sein des réservoirs très hétérogènes / Connectivity analysis and clustering techniques applied for the characterisation and modelling of flow in highly heterogeneous reservoirs

Darishchev, Alexander 10 December 2015
Les techniques informatiques ont gagné un rôle primordial dans le développement et l'exploitation des ressources d'hydrocarbures naturelles ainsi que dans d'autres opérations liées à des réservoirs souterrains. L'un des problèmes cruciaux de la modélisation de réservoir et des prévisions de production réside dans la présélection des modèles de réservoir appropriés à la quantification d'incertitude et au calage robuste des résultats de simulation d'écoulement aux mesures et observations réelles acquises du gisement. La présente thèse aborde ces problématiques et certains autres sujets connexes. Nous avons élaboré une stratégie pour faciliter et accélérer l'ajustement de tels modèles numériques aux données de production de champ disponibles. En premier lieu, la recherche s'est concentrée sur la conceptualisation et l'implémentation de modèles proxy reposant sur l'analyse de la connectivité, comme propriété physique intégrante et significative du réservoir, et sur des techniques avancées de partitionnement de données et d'analyse de clusters. La méthodologie développée comprend aussi plusieurs approches originales de type probabiliste orientées vers les problèmes d'échantillonnage d'incertitude, de détermination du nombre de réalisations et de l'espérance de la valeur d'information d'échantillon. Afin de cibler et de donner la priorité aux modèles pertinents, nous avons agrégé les réalisations géostatistiques en classes distinctes à l'aide d'une mesure de distance généralisée. Ensuite, afin d'améliorer la classification, nous avons élargi la technique graphique des silhouettes, désormais appelée la "séquence entière des silhouettes multiples", dans le partitionnement de données et l'analyse de clusters. Cette approche a permis de recueillir une information claire et complète sur les dissimilarités intra- et inter-clusters, particulièrement utile dans le cas de structures faibles, voire artificielles. Finalement, la séparation spatiale et la différence de forme des clusters ont été visualisées graphiquement et quantifiées grâce à une mesure de distance probabiliste. Il apparaît que les relations obtenues justifient et valident l'applicabilité des approches proposées pour améliorer la caractérisation et la modélisation d'écoulement. Des corrélations fiables ont été obtenues entre les chemins de connectivité les plus courts "injecteur-producteur" et les temps de percée d'eau pour différentes configurations de placement de puits, divers niveaux d'hétérogénéité et rapports de mobilité des fluides. Les modèles de connectivité proposés ont produit des résultats suffisamment précis et une performance compétitive au méta-niveau. Leur usage comme précurseurs et prédicteurs ad hoc est bénéfique à l'étape du traitement préalable de la méthodologie. Avant le calage d'historique, un nombre approprié et gérable de modèles pertinents peut être identifié grâce à la comparaison des données de production disponibles avec les résultats de simulation des modèles-centrotypes sélectionnés, considérés comme représentants de leurs classes. / Computer-based workflows have gained a paramount role in the development and exploitation of natural hydrocarbon resources and in other subsurface operations. One of the crucial problems of reservoir modelling and production forecasting is pre-selecting appropriate models for quantifying uncertainty and robustly matching flow-simulation results to real field measurements and observations. This thesis addresses these and other related issues.
We have explored a strategy to facilitate and speed up the adjustment of such numerical models to the available field production data. The focus of this research was originally on conceptualising, developing and implementing fast proxy models based on the analysis of connectivity, as a physically meaningful property of the reservoir, combined with advanced cluster analysis techniques. The developed methodology also includes several original probability-oriented approaches to the problems of sampling uncertainty and of determining the sample size and the expected value of sample information. To target and prioritise relevant reservoir models, we aggregated geostatistical realisations into distinct classes with a generalised distance measure. Then, to improve the classification, we extended the silhouette-based graphical technique, called hereafter the "entire sequence of multiple silhouettes", in cluster analysis. This approach provided clear and comprehensive information about the intra- and inter-cluster dissimilarities, which is especially helpful in the case of weak, or even artificial, structures. Finally, the spatial separation and the difference in form of the clusters were visualised graphically and quantified with a scale-invariant probabilistic distance measure. The obtained relationships justify and validate the applicability of the proposed approaches to enhancing the characterisation and modelling of flow. Reliable correlations were found between the shortest injector-producer connectivity pathways and water breakthrough times for different well-placement configurations, heterogeneity levels and fluid mobility ratios. The proposed graph-based connectivity proxies provided sufficiently accurate results and competitive performance at the meta-level. Using them as precursors and ad hoc predictors is beneficial at the pre-processing stage of the workflow. Prior to history matching, a suitable and manageable number of appropriate reservoir models can be identified by comparing the available production data with simulation results for the selected centrotype models regarded as the class representatives, which are the only models requiring full fluid-flow simulation. The findings of this research can easily be generalised and considered in a wider scope; extensions, further improvements and implementations may also be expected in other fields of science and technology.
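For readers who want to experiment with the clustering side of this workflow, the sketch below is a loose, minimal illustration (not the thesis's code) of the pre-selection idea: group realisations with a precomputed distance matrix, inspect per-sample silhouettes, and pick one centrotype per class as the candidate for full flow simulation. The synthetic feature vectors, the Euclidean metric (standing in for the thesis's generalised, scale-invariant probabilistic distance) and the cluster count are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_samples

rng = np.random.default_rng(0)
# Stand-in "connectivity summaries": one feature vector per geostatistical realisation.
realisations = rng.normal(size=(60, 8))

# Pairwise distances; plain Euclidean here only to keep the sketch self-contained
# (the thesis uses a generalised / scale-invariant probabilistic measure).
dist = squareform(pdist(realisations, metric="euclidean"))

# Note: scikit-learn >= 1.2 takes metric="precomputed"; older versions use affinity=.
labels = AgglomerativeClustering(
    n_clusters=4, metric="precomputed", linkage="average"
).fit_predict(dist)

# Silhouette value per realisation contrasts intra- and inter-cluster dissimilarity.
sil = silhouette_samples(dist, labels, metric="precomputed")

for k in range(4):
    members = np.flatnonzero(labels == k)
    # Centrotype: the member with the smallest total distance to its own class.
    centrotype = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
    print(f"class {k}: {members.size} members, mean silhouette {sil[members].mean():+.2f}, "
          f"centrotype: realisation #{centrotype}")
```

In this scheme only the printed centrotypes would be sent to full fluid-flow simulation, which is what makes the pre-selection step cheap relative to simulating every realisation.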
162

Decision Makers’ Cognitive Biases in Operations Management: An Experimental Study

AlKhars, Mohammed 05 1900
Behavioral operations management (BOM) has gained popularity in the last two decades. The main theme in this stream of research is to include human behavior in operations management (OM) models to increase the effectiveness of such models. BOM is classified into four areas: cognitive psychology, social psychology, group dynamics and system dynamics (Bendoly et al. 2010). This dissertation focuses on the first area, cognitive psychology, which is further divided into heuristics and biases. Tversky and Kahneman (1974) discussed 3 heuristics and 13 cognitive biases that commonly affect decision makers. This dissertation studies six cognitive biases under the representativeness heuristic: insensitivity to prior probability, insensitivity to sample size, misconception of chance, insensitivity to predictability, the illusion of validity and misconception of regression. The model in this dissertation states that an individual's cognitive reflection (Frederick 2005) and training about cognitive biases in the form of a warning (Kaufmann and Michel 2009) will help decision makers make less biased decisions. Six scenarios in OM contexts were used in this study, each corresponding to one cognitive bias. An experimental design was used as the research tool. To assess the impact of training, one group of participants received the scenarios without training and the other group received them with training; the training consisted of a brief description of each cognitive bias together with an example of it. Cognitive reflection was operationalized using the cognitive reflection test (CRT). The survey was distributed to students at the University of North Texas (UNT), and logistic regression was employed to analyze the data. The research shows that participants exhibit the cognitive biases proposed by Tversky and Kahneman. Moreover, CRT is a significant predictor of cognitive bias in two scenarios. Finally, providing training in the form of a warning helped participants make more rational decisions in four scenarios. This means that although cognitive biases are inherent in people's minds, corporate management has a tool to educate its managers and professionals about such biases, helping companies make more rational decisions.
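For illustration only, the following sketch mirrors the analysis setup described above: a logistic regression of a binary "biased response" indicator on CRT score and a training/warning flag. The data, effect sizes and variable names are simulated assumptions, not the dissertation's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
crt = rng.integers(0, 4, size=n)       # CRT score: 0-3 correct answers
trained = rng.integers(0, 2, size=n)   # 1 = received the warning-style training
# Assumed effect sizes, for simulation only: higher CRT and training reduce bias.
p_biased = 1.0 / (1.0 + np.exp(-(0.8 - 0.5 * crt - 0.7 * trained)))
biased = rng.binomial(1, p_biased)     # 1 = the scenario answer showed the bias

X = sm.add_constant(np.column_stack([crt, trained]).astype(float))
fit = sm.Logit(biased, X).fit(disp=False)
print(fit.summary(xname=["const", "crt", "trained"]))
# Significantly negative crt/trained coefficients would indicate less biased decisions.
```

In the dissertation's design each of the six scenarios would get its own regression of this form, which is consistent with the finding that CRT mattered in two scenarios and training in four.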
163

Untersuchung von Holzwerkstoffen unter Schlagbelastung zur Beurteilung der Werkstoffeignung für den Maschinenbau / Investigation of wood-based materials under impact loading to assess their suitability for mechanical engineering

Müller, Christoph 07 October 2015
In der vorliegenden Arbeit werden Holzwerkstoffe im statischen Biegeversuch und im Schlagbiegeversuch vergleichend geprüft. Ausgewählte Holzwerkstoffe werden thermisch geschädigt, zudem wird eine relevante Kerbgeometrie geprüft. Ziel der Untersuchungen ist es, die Eignung verschiedenartiger Werkstoffe für den Einsatz in sicherheitsrelevanten Anwendungen mit Schlagbelastungen zu prüfen. Hierzu werden zunächst die Grundlagen der instrumentierten Schlagprüfung und der Holzwerkstoffe erarbeitet. Der Stand der Technik wird dargelegt und bereits durchgeführte Studien werden analysiert. Darauf aufbauend wird eine eigene Prüfeinrichtung zur zeitlich hoch aufgelösten Kraft-Beschleunigungs-Messung beim Schlagversuch entwickelt. Diese wird anhand verschiedener Methoden auf ihre Eignung und die Messwerte auf Plausibilität geprüft. Darüber hinaus wird ein statistisches Verfahren zur Überprüfung auf ausreichende Stichprobengröße entwickelt und auf die durchgeführten Messungen angewendet. Anhand der unter statischer und schlagartiger Biegebeanspruchung ermittelten charakteristischen Größen wird ein Klassenmodell zum Werkstoffvergleich und zur Werkstoffauswahl vorgeschlagen. Dieses umfasst integral die mechanische Leistungsfähigkeit der geprüften Holzwerkstoffe und ist für weitere Holzwerkstoffe anwendbar. Abschließend wird, aufbauend auf den gewonnenen Erkenntnissen, ein Konzept für die Bauteilprüfung unter Schlagbelastung für weiterführende Untersuchungen vorgeschlagen. / In the present work, wood-based materials are compared under static bending load and impact bending load. Several thermal stress conditions are applied to selected materials; furthermore, one relevant notch geometry is tested. The objective of these tests is to investigate the suitability of different wood-based materials for safety-relevant applications in which impact loads occur. For this purpose, the basics of instrumented impact testing and of wood-based materials are first established. The state of the art is presented and previously published studies are analysed. On this basis, a dedicated impact pendulum was developed that allows force and acceleration measurements at high sampling rates. The apparatus is validated by several methods and the acquired signals are checked for plausibility. In addition, a statistical procedure for verifying that the sample size is adequate is developed and applied to the measurements. Based on the characteristic values obtained from the static bending and impact bending tests, a classification model for material comparison and selection is proposed. The model provides an integral assessment of the mechanical performance of the tested wood-based materials and can be applied to further wood-based materials. In conclusion, building on these findings, a concept for impact testing of components in future studies is introduced.
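The abstract mentions a statistical check for adequate sample size without detailing it; the snippet below sketches one generic variant of such a check, judging the sample adequate when the 95% confidence half-width of the mean stays within a chosen fraction of the mean. The data, tolerance and units are assumptions, and the thesis's own procedure may differ.

```python
import numpy as np
from scipy import stats

def sample_size_adequate(values, rel_tol=0.10, confidence=0.95):
    """True if the CI half-width of the mean is within rel_tol of the mean."""
    values = np.asarray(values, dtype=float)
    n = values.size
    t = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    half_width = t * values.std(ddof=1) / np.sqrt(n)
    return half_width <= rel_tol * abs(values.mean()), half_width

# Example: simulated impact bending strengths (kJ/m^2) of one panel type.
rng = np.random.default_rng(2)
strengths = rng.normal(loc=6.0, scale=0.9, size=12)
ok, hw = sample_size_adequate(strengths)
print(f"n = {strengths.size}, half-width = {hw:.2f} kJ/m^2, adequate: {ok}")
```

If the check fails, more specimens are tested and the check is repeated, which is the usual sequential reading of such a criterion.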
164

[pt] EFEITO DA ESTIMAÇÃO DOS PARÂMETROS SOBRE O DESEMPENHO CONJUNTO DOS GRÁFICOS DE CONTROLE DE X-BARRA E S / [en] EFFECT OF PARAMETER ESTIMATION ON THE JOINT PERFORMANCE OF THE X-BAR AND S CHARTS

LORENA DRUMOND LOUREIRO VIEIRA 09 July 2020
[pt] A probabilidade de alarme falso, alfa, dos gráficos de controle de processos depende dos seus limites de controle, que, por sua vez, dependem de estimativas dos parâmetros do processo. Esta tese apresenta inicialmente uma revisão dos principais trabalhos sobre o efeito dos erros de estimação dos parâmetros do processo sobre alfa quando se utilizam os gráficos de X-barra e S individualmente e em conjunto. O desempenho dos gráficos é avaliado através de medidas de desempenho (número médio de amostras até o sinal, taxa de alarme falso, distribuição do número de amostras até o sinal etc.), que, em geral, são variáveis aleatórias, função dos erros de estimação. Pesquisas recentes têm focado nas propriedades da distribuição condicional do número de amostras até o sinal, ou ainda, nas propriedades da distribuição da taxa de alarme falso condicional. Esta tese adota essa abordagem condicional e analisa o efeito da estimação dos parâmetros do processo no desempenho conjunto dos gráficos de X-barra e S em dois casos: caso KU (média conhecida, variância desconhecida) e caso UU (média desconhecida, variância desconhecida). A quase totalidade dos trabalhos anteriores considerou apenas um gráfico isoladamente; sobre o efeito da estimação dos parâmetros no desempenho conjunto, conhecemos apenas um trabalho, sobre gráficos de X-barra e R, mas nenhum sobre gráficos de X-barra e S. Os resultados da análise mostram que o desempenho dos gráficos pode ser muito afetado pela estimação de parâmetros e que o número de amostras iniciais requerido para garantir um desempenho desejado é muito maior que os números tradicionalmente recomendados na literatura normativa de controle estatístico de processo (livros-texto e manuais). Esse número é, porém, menor que o máximo entre os números requeridos para os gráficos de X-barra e de S individualmente. Questões a serem investigadas como desdobramento desta pesquisa são também indicadas nas Considerações Finais e Recomendações. / [en] The false-alarm probability of control charts, alpha, depends on their control limits, which in turn depend on the estimated process parameters. This dissertation initially presents a review of the main research on the effect of process-parameter estimation errors on alpha when the X-bar and S charts are used separately and jointly. Chart performance is evaluated through performance measures (average run length, false-alarm rate, run-length distribution, etc.), which are, in general, random variables that are functions of the estimation errors. Recent research has focused on the properties of the conditional run-length distribution or, in the case of Shewhart charts, on the properties of the conditional false-alarm rate distribution. This dissertation adopts this conditional approach and investigates the effect of parameter estimation on the joint performance of the X-bar and S charts in two cases: the KU case (known mean, unknown variance) and the UU case (unknown mean, unknown variance). Almost all previous work considered only one chart at a time; on the effect of estimation errors on joint performance, the author is aware of only one study, which concerns the X-bar and R charts, and of none on the X-bar and S charts. The results show that chart performance can be severely affected by parameter estimation and that the number of initial samples required to ensure the desired performance is much greater than the numbers recommended in traditional statistical process control references (textbooks and manuals). This number is, however, smaller than the larger of the numbers of samples required by the X-bar and the S charts separately. Additional issues for follow-up research are indicated in the concluding section.
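The conditional approach described above can be illustrated with a short simulation: estimate the mean and standard deviation from m initial subgroups (the UU case), build X-bar and S limits from those estimates, and compute the conditional joint false-alarm probability given the estimates. The subgroup size, number of initial samples and 3-sigma-style limits below are common textbook choices assumed for illustration; the dissertation's exact estimators and limits may differ.

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

n, m, reps = 5, 25, 10_000            # subgroup size, initial samples, replications
c4 = np.exp(gammaln(n / 2) - gammaln((n - 1) / 2)) * np.sqrt(2.0 / (n - 1))
rng = np.random.default_rng(3)

cfar = np.empty(reps)
for r in range(reps):
    # Phase I: m subgroups of size n from the true in-control process N(0, 1).
    phase1 = rng.normal(size=(m, n))
    xbarbar = phase1.mean()
    sbar = phase1.std(axis=1, ddof=1).mean()
    sigma_hat = sbar / c4

    # Estimated limits: 3-sigma X-bar limits and B3/B4-style S limits.
    half = 3 * sigma_hat / np.sqrt(n)
    s_hi = sbar * (1 + 3 * np.sqrt(1 - c4**2) / c4)
    s_lo = max(0.0, sbar * (1 - 3 * np.sqrt(1 - c4**2) / c4))

    # Conditional per-subgroup signal probability given these (random) limits:
    # in control, X-bar ~ N(0, 1/n) and (n-1)S^2 ~ chi2(n-1).
    p_x_in = (stats.norm.cdf((xbarbar + half) * np.sqrt(n))
              - stats.norm.cdf((xbarbar - half) * np.sqrt(n)))
    p_s_in = (stats.chi2.cdf((n - 1) * s_hi**2, n - 1)
              - stats.chi2.cdf((n - 1) * s_lo**2, n - 1))
    cfar[r] = 1.0 - p_x_in * p_s_in

# Nominal joint false-alarm rate if mu and sigma were known (same limit structure).
u_k = c4 + 3 * np.sqrt(1 - c4**2)
l_k = max(0.0, c4 - 3 * np.sqrt(1 - c4**2))
p_in_known = (stats.norm.cdf(3) - stats.norm.cdf(-3)) * (
    stats.chi2.cdf((n - 1) * u_k**2, n - 1) - stats.chi2.cdf((n - 1) * l_k**2, n - 1))
print(f"nominal alpha = {1 - p_in_known:.4f}")
print(f"conditional alpha with m={m} initial samples: median = {np.median(cfar):.4f}, "
      f"90th percentile = {np.quantile(cfar, 0.90):.4f}")
```

Rerunning the simulation with larger m shows the spread of the conditional false-alarm rate shrinking toward the nominal value, which is exactly the mechanism behind the sample-size recommendations discussed above.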
165

Data-Driven Success in Infrastructure Megaprojects: Leveraging Machine Learning and Expert Insights for Enhanced Prediction and Efficiency / Datadriven framgång inom infrastrukturmegaprojekt: Utnyttja maskininlärning och expertkunskap för förbättrad prognostisering och effektivitet

Nordmark, David E.G. January 2023
This Master's thesis utilizes random forest and leave-one-out cross-validation to predict the success of infrastructure megaprojects, with the goal of enhancing the efficiency of the design and engineering phase in the infrastructure and construction industries. Because of the small number of megaprojects and limited data sharing, the lack of data poses significant challenges for applying artificial intelligence to the evaluation and prediction of megaprojects. This thesis explores how megaprojects can benefit from data collection and machine learning despite small sample sizes. The research focused on analyzing data from thirteen megaprojects and identifying the most influential data for machine learning analysis. The results show that incorporating expert data, representing critical success factors for megaprojects, significantly enhanced the accuracy of the predictive model. The superior performance of expert data over economic data, experience data and documentation data demonstrates the significance of domain expertise. The results also demonstrate the significance of the planning phase, established through feature selection techniques and feature importance scores: a small, dedicated and highly experienced team of project planners proved to be a crucial factor for project success. The thesis concludes that, in order to maximize the utility of machine learning, companies must identify their critical success factors and collect the corresponding data. / Denna magisteruppsats undersöker följande forskningsfråga: Hur kan maskininlärning och insiktsfull dataanalys användas för att öka effektiviteten i infrastruktursektorns planerings- och designfas? Denna utmaning löses genom att analysera data från verkliga megaprojekt och tillämpa avancerade maskininlärningsalgoritmer för att förutspå projektframgång och ta reda på framgångsfaktorerna. Vår forskning är särskilt intresserad av megaprojekt på grund av deras komplicerade natur, unika egenskaper och enorma inverkan på samhället. Dessa projekt slutförs sällan, vilket gör att det är svårt att få tillgång till stora mängder verklig data. Det är uppenbart att AI har potential att vara ett ovärderligt verktyg för att förstå och hantera megaprojekts komplexitet, trots de problem vi står inför. Artificiell intelligens gör det möjligt att fatta beslut som är datadrivna och mer informerade. Uppsatsen lyckas med att hantera det stora problemet som är bristen på data från megaprojekt. Uppsatsen motiveras även av denna brist på data, vilket gör forskningen relevant för andra områden som präglas av små dataurval. Resultaten från uppsatsen visar att utvärderingen av megaprojekt går att förbättra genom smart användning av specifika dataattribut. Uppsatsen inspirerar även företag att börja samla in viktig data för att möjliggöra användningen av artificiell intelligens och maskininlärning till sin fördel.
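Below is a minimal sketch of the modelling setup named in the abstract: a random forest classifier evaluated with leave-one-out cross-validation on a thirteen-project sample, followed by feature importances. The dataset is simulated, and the feature names and the assumption that the expert-rated factors drive the outcome are illustrative stand-ins for the thesis's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(4)
n_projects = 13
# Assumed feature groups: expert-rated success factors vs. generic economic figures.
expert = rng.uniform(size=(n_projects, 3))    # e.g. team experience, planning quality, ...
economic = rng.uniform(size=(n_projects, 2))  # e.g. budget class, cost index
X = np.hstack([expert, economic])
# Simulated outcome driven by the expert-rated factors (median split keeps both classes).
score = expert.mean(axis=1)
y = (score > np.median(score)).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()  # one held-out project per fold
print(f"LOOCV accuracy over {n_projects} projects: {acc:.2f}")

clf.fit(X, y)
names = ["expert_1", "expert_2", "expert_3", "econ_1", "econ_2"]
for name, imp in zip(names, clf.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```

Leave-one-out is the natural cross-validation choice here because with only thirteen observations every data point is too valuable to park in a fixed test set.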
