271

Forward and inverse modeling of fire physics towards fire scene reconstructions

Overholt, Kristopher James 06 November 2013
Fire models are routinely used to evaluate life safety aspects of building design projects and are being used more often in fire and arson investigations as well as reconstructions of firefighter line-of-duty deaths and injuries. A fire within a compartment effectively leaves behind a record of fire activity and history (i.e., fire signatures). Fire and arson investigators can utilize these fire signatures in the determination of cause and origin during fire reconstruction exercises. Researchers conducting fire experiments can utilize this record of fire activity to better understand the underlying physics. In all of these applications, the fire heat release rate (HRR), location of a fire, and smoke production are important parameters that govern the evolution of thermal conditions within a fire compartment. These input parameters can be a large source of uncertainty in fire models, especially in scenarios in which experimental data or detailed information on fire behavior are not available. To better understand fire behavior indicators related to soot, the deposition of soot onto surfaces was considered. Improvements to a soot deposition submodel were implemented in a computational fluid dynamics (CFD) fire model. To better understand fire behavior indicators related to fire size, an inverse HRR methodology was developed that calculates a transient HRR in a compartment based on measured temperatures resulting from a fire source. To address issues related to the uncertainty of input parameters, an inversion framework was developed that has applications towards fire scene reconstructions. Rather than using point estimates of input parameters, a statistical inversion framework based on the Bayesian inference approach was used to determine probability distributions of input parameters. These probability distributions contain uncertainty information about the input parameters and can be propagated through fire models to obtain uncertainty information about predicted quantities of interest. The Bayesian inference approach was applied to various fire problems and coupled with zone and CFD fire models to extend the physical capability and accuracy of the inversion framework. Example applications include the estimation of both steady-state and transient fire sizes in a compartment, material properties related to pyrolysis, and the location of a fire in a compartment.
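
The statistical inversion step lends itself to a compact illustration. Below is a minimal sketch, in Python, of Metropolis-Hastings sampling for a steady-state HRR given noisy compartment temperatures; the power-law forward model, the prior bounds, and every numeric constant are illustrative assumptions, not the dissertation's actual fire models or data.

```python
# Minimal sketch: Bayesian inversion of a steady-state HRR from noisy
# temperature measurements via random-walk Metropolis-Hastings.
# The forward model and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(q_kw):
    """Toy forward model: temperature rise scales as HRR^(2/3) (assumed)."""
    return 20.0 + 15.0 * q_kw ** (2.0 / 3.0)  # degrees C

# Synthetic "measured" temperatures for a true 500 kW fire, with sensor noise.
true_q, sigma = 500.0, 5.0
data = forward_model(true_q) + rng.normal(0.0, sigma, size=20)

def log_posterior(q_kw):
    if not 0.0 < q_kw < 5000.0:              # uniform prior on a physical range
        return -np.inf
    residual = data - forward_model(q_kw)
    return -0.5 * np.sum((residual / sigma) ** 2)  # Gaussian likelihood

# Random-walk Metropolis: the chain's histogram approximates p(HRR | data).
samples, q = [], 1000.0
lp = log_posterior(q)
for _ in range(20000):
    q_new = q + rng.normal(0.0, 25.0)
    lp_new = log_posterior(q_new)
    if np.log(rng.uniform()) < lp_new - lp:
        q, lp = q_new, lp_new
    samples.append(q)

post = np.array(samples[5000:])              # discard burn-in
print(f"posterior HRR: {post.mean():.0f} kW +/- {post.std():.0f} kW")
```

The posterior spread, rather than a single point estimate, is what gets propagated through the fire model to obtain uncertainty in predicted quantities of interest.
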
272

Modeling cross-classified data with and without the crossed factors' random effects' interaction

Wallace, Myriam Lopez 08 September 2015
The present study investigated estimation of the variance of the cross-classified factors’ random effects’ interaction for cross-classified data structures. Results for two different three-level cross-classified random effects models (CCREM) were compared: Model 1 included the estimation of this variance component, and Model 2 assumed the value of this variance component was zero and did not estimate it. The second model is the model most commonly assumed by researchers utilizing a CCREM to estimate cross-classified data structures. These two models were first applied to a real-world data set. Parameter estimates for both estimating models were compared. The results for this analysis served as a guide to provide generating parameter values for the Monte Carlo simulation that followed. The Monte Carlo simulation was conducted to compare the two estimating models under several manipulated conditions and assess their impact on parameter recovery. The manipulated conditions included: classroom sample size, the structure of the cross-classification, the intra-unit correlation coefficient (IUCC), and the cross-classified factors’ variance component values. Relative parameter and standard error bias were calculated for fixed effect coefficient estimates, random effects’ variance components, and the associated standard errors for both. When Model 1 was used to estimate the simulated data, no substantial bias was found for any of the parameter estimates or their associated standard errors, even for conditions with the smallest average within-cell sample size (4 students). When Model 2 was used to estimate the simulated data, substantial bias occurred for the level-1 and level-2 variance components. Several of the manipulated conditions in the study impacted the magnitude of the bias for these variance estimates. Given that level-1 and level-2 variance components can often be used to inform researchers’ decisions about factors of interest, like classroom effects, assessment of possible bias in these estimates is important. The results are discussed, followed by implications and recommendations for applied researchers who are using a CCREM to estimate cross-classified data structures.
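
The difference between the two estimating models is easiest to see in the generating model itself. The sketch below simulates data from the Model 1 world (crossed random effects plus their interaction) and notes where the interaction variance must go if Model 2 is fit instead; all variance values are illustrative placeholders, not the study's generating parameters.

```python
# Minimal sketch of the generating model: students cross-classified by two
# factors j and k, with the crossed factors' random interaction w_jk.
# All variance values are illustrative, not the study's parameters.
import numpy as np

rng = np.random.default_rng(42)
J, K, n_cell = 20, 20, 4                     # factor levels; students per cell
g00 = 50.0                                   # fixed intercept
var_u, var_v, var_w, var_e = 4.0, 4.0, 2.0, 10.0

u = rng.normal(0, np.sqrt(var_u), J)         # row-factor random effects
v = rng.normal(0, np.sqrt(var_v), K)         # column-factor random effects
w = rng.normal(0, np.sqrt(var_w), (J, K))    # their interaction (Model 1 only)

rows = np.repeat(np.arange(J), K * n_cell)
cols = np.tile(np.repeat(np.arange(K), n_cell), J)
e = rng.normal(0, np.sqrt(var_e), rows.size)
y = g00 + u[rows] + v[cols] + w[rows, cols] + e   # data from the Model 1 world

# Model 2 omits w_jk entirely; its variance must then be absorbed by the
# remaining components, which is the source of the bias reported above.
print(f"total variance = {y.var(ddof=1):.1f}; "
      f"components sum to {var_u + var_v + var_w + var_e:.1f}")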
273

Modeling and Analysis of Software Product Line Variability in Clafer

Bak, Kacper 24 October 2013
Both feature and class modeling are used in Software Product Line (SPL) engineering to model variability. Feature models are used primarily to represent user-visible characteristics (i.e., features) of products, whereas class models are often used to model types of components and connectors in a product-line architecture. Previous works have explored the approach of using a single language to express both configurations of features and components. Their goal was to simplify the definition and analysis of feature-to-component mappings and to allow modeling component options as features. A prominent example of this approach is cardinality-based feature modeling, which extends feature models with multiple instantiation and references to express component-like, replicated features. Another example is to support feature modeling in a class modeling language, such as UML or MOF, using their profiling mechanisms and a stylized use of composition. Both examples have notable drawbacks: cardinality-based feature modeling lacks a constraint language and a well-defined semantics; encoding feature models as class models, and evolving those encodings, brings extra complexity. This dissertation presents Clafer (class, feature, reference), a class modeling language with first-class support for feature modeling. Clafer can express rich structural models augmented with complex constraints, i.e., domain, variability, component models, and meta-models. Clafer supports: (i) class-based meta-models, (ii) object models (with uncertainty, if needed), (iii) feature models with attributes and multiple instantiation, (iv) configurations of feature models, (v) mixtures of meta- and feature models and model templates, and (vi) first-order logic constraints. Clafer also makes it possible to arrange models into multiple specialization and extension layers via constraints and inheritance. At the same time, in designing Clafer we wanted to create a language that builds upon as few concepts as possible and is easy to learn. The language is supported by tools for SPL verification and optimization. We propose to unify basic modeling constructs into a single concept, called clafer; in other words, Clafer is not a hybrid language. We identify several key mechanisms allowing a class modeling language to express feature models concisely. We provide Clafer with a formal semantics built in a novel, structurally explicit way. As Clafer subsumes cardinality-based feature modeling with attributes, references, and constraints, we are the first to precisely define the semantics of such models. We also explore the notion of partial instantiation, which allows for modeling with uncertainty and variability. We show that Object-Oriented Modeling (OOM) languages with no direct support for partial instances can support them via class modeling, using subclassing and strengthening multiplicity constraints. We make the encoding of partial instances via subclassing precise and general. Clafer uses this encoding and pushes the idea even further: it provides a syntactic unification of types and (partial) instances via subclassing and redefinition. We evaluate Clafer analytically and experimentally. The analytical evaluation shows that Clafer can concisely express feature and meta-models via a uniform syntax and unified semantics. The experimental evaluation shows that 1) Clafer can express a variety of realistic rich structural models with complex constraints, such as variability models, meta-models, model templates, and domain models; and 2) useful analyses can be performed within seconds.
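
As a loose illustration of the unification idea (not Clafer's actual syntax or semantics), the toy sketch below uses a single node kind in the roles of class, feature, and instance slot, so that checking a configuration reduces to checking multiplicities down the hierarchy.

```python
# Toy sketch of the "single concept" idea: one node kind plays class, feature,
# and instance slot; configuration validity is a multiplicity check over the
# hierarchy. Not Clafer's actual semantics.
from dataclasses import dataclass, field

@dataclass
class Clafer:
    name: str
    lo: int = 1                    # multiplicity lower bound
    hi: int = 1                    # multiplicity upper bound
    children: list = field(default_factory=list)

def valid(model: Clafer, instance: dict) -> bool:
    """instance maps child names to lists of sub-instances (dicts)."""
    for child in model.children:
        copies = instance.get(child.name, [])
        if not child.lo <= len(copies) <= child.hi:
            return False
        if not all(valid(child, c) for c in copies):
            return False
    return True

# A product line in miniature: a car with 0..1 navigation and 2..4 displays.
car = Clafer("car", children=[
    Clafer("navigation", lo=0, hi=1),
    Clafer("display", lo=2, hi=4),
])
print(valid(car, {"display": [{}, {}]}))                      # True
print(valid(car, {"navigation": [{}, {}], "display": [{}]}))  # False
```
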
274

Automatic Generation of Goal Models from Regulations

Rashidi-Tabrizi, Rouzbahan 29 October 2013
Organizations in many domains such as healthcare, finance, telecommunications, education, and software development must comply with an ever-increasing number of regulations, including laws and policies. In order to measure compliance with regulations, several recent approaches propose modelling regulations using goals and indicators. However, creating goal models for regulations is time consuming and prone to errors, especially as this is usually done manually. This thesis tackles this issue by automating some of the steps for creating goal models, and by offering better ways to create graphical views of goal models than what is currently available in goal modelling tools. The notation used in this thesis is the Goal-oriented Requirement Language (GRL), which is part of the User Requirements Notation standard and is supported by the jUCMNav tool. The concepts of regulations and their indicators are captured using a tabular presentation in Comma-Separated Value (CSV) files. An import mechanism is added to jUCMNav to automatically create regulation goal models from such files. The imported GRL model can then be visualized using novel features that enable the addition of multiple views/diagrams in an efficient and usable way.
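
A minimal sketch of the import idea follows; the CSV columns and the goal/indicator layout are hypothetical placeholders for illustration, not jUCMNav's actual import format.

```python
# Minimal sketch: read regulation sections and their indicators from CSV and
# build a small goal graph. Column names and node types are assumptions.
import csv
import io

CSV_TEXT = """section,goal,indicator,target
4.1,Protect patient privacy,Percentage of encrypted records,100
4.2,Retain audit logs,Log retention period in months,24
"""

goals = {}   # goal name -> list of (indicator, target) pairs
for row in csv.DictReader(io.StringIO(CSV_TEXT)):
    goals.setdefault(f"{row['section']} {row['goal']}", []).append(
        (row["indicator"], row["target"]))

# Emit a textual view of the goal model: each regulation section becomes a
# goal, each measure becomes an indicator contributing to it.
for goal, indicators in goals.items():
    print(f"Goal: {goal}")
    for name, target in indicators:
        print(f"  Indicator: {name} (target {target})")
```
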
275

A SIMULATED COMPARISON OF LINEAR AND RANS BASED CFD MODELING IN REGARD TO CRITICAL SLOPE

Robinson, Jeffrey January 2018
The aim of this study is to compare the performance of a linear model to a nonlinear model focusing on flow separation based on a critical slope value. Specifically, the WindPRO WAsP model will be compared with the WindSIM CFD model over a simulated terrain to determine the point at which the two models diverge in relation to the inclination of the terrain. The results of this study will verify whether the proposed critical slope value of roughly 17 degrees is truly representative of the limitation of the WAsP model in producing accurate results as compared to a CFD model. Multiple similar studies have been performed using existing sites with actual met mast data as a comparison to the model outputs. Many of these cases have come up with varying results, due primarily to the large number of uncontrolled factors influencing the data. This study is designed in a fully simulated environment where all variables can be controlled, allowing for the manipulation of a single variable to understand its specific influence on the model. The primary variable tested in this study is the slope of the terrain, with all other factors held constant. Based on the outcome of 7 alternative runs with ridge heights of 100, 120, 140, 160, 180, 200, and 300 meters and respective maximum slope values of 10.31, 12.32, 14.29, 16.23, 18.14, 20, and 28.63 degrees, a defined separation point at a hub height of 94 meters could not be found. Each run demonstrated correlation between wind speeds and terrain slope variations, but a considerable difference in estimated wind resource was present between the linear and non-linear CFD models wherever the terrain sloped. This difference, as expected, grows as terrain inclination increases, but a clearly defined divergence between the two models is not evident at the previously established critical slope value of approximately 17 degrees (30%).
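
The quoted figures can be checked directly. The short sketch below converts the 17-degree critical slope to a percent grade and back-calculates the reported maximum slopes from the ridge heights; the roughly 550 m effective horizontal run is inferred from the listed height/slope pairs, not a stated simulation parameter.

```python
# Quick check of the numbers quoted above: the 17-degree critical slope as a
# percent grade, and the reported maximum slopes as atan(height / run) for an
# effective horizontal run of ~550 m (inferred, not a stated parameter).
import math

print(f"tan(17 deg) = {math.tan(math.radians(17)):.3f}  (~30% grade)")

run = 549.5  # metres, back-calculated from the reported height/slope pairs
for h in (100, 120, 140, 160, 180, 200, 300):
    print(f"ridge {h:3d} m -> max slope {math.degrees(math.atan(h / run)):.2f} deg")
```
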
276

Natural language generation as neural sequence learning and beyond

Zhang, Xingxing January 2017
Natural Language Generation (NLG) is the task of generating natural language (e.g., English sentences) from machine-readable input. In the past few years, deep neural networks have received great attention from the natural language processing community due to impressive performance across different tasks. This thesis addresses NLG problems with deep neural networks from two different modeling views. Under the first view, natural language sentences are modelled as sequences of words, which greatly simplifies their representation and allows us to apply classic sequence modelling neural networks (i.e., recurrent neural networks) to various NLG tasks. Under the second view, natural language sentences are modelled as dependency trees, which are more expressive and allow us to capture linguistic generalisations, leading to neural models which operate on tree structures. Specifically, this thesis develops several novel neural models for natural language generation. Contrary to many existing models which aim to generate a single sentence, we propose a novel hierarchical recurrent neural network architecture to represent and generate multiple sentences. Beyond the hierarchical recurrent structure, we also propose a means to model context dynamically during generation. We apply this model to the task of Chinese poetry generation and show that it outperforms competitive poetry generation systems. Neural natural language generation models usually work well when there is a lot of training data. When the training data is not sufficient, prior knowledge for the task at hand becomes very important. To this end, we propose a deep reinforcement learning framework to inject prior knowledge into neural NLG models and apply it to sentence simplification. Experimental results show promising performance using our reinforcement learning framework. Both poetry generation and sentence simplification are tackled with models following the sequence learning view, where sentences are treated as word sequences. In this thesis, we also explore how to generate natural language sentences as tree structures. We propose a neural model which combines the advantages of syntactic structure and recurrent neural networks. More concretely, our model defines the probability of a sentence by estimating the generation probability of its dependency tree. At each time step, a node is generated based on the representation of the generated subtree. We show experimentally that this model achieves good performance in language modeling and can also generate dependency trees.
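
The dependency-tree view admits a compact illustration: the probability of a sentence is the product of per-node generation probabilities, each conditioned on the subtree generated so far. In the minimal sketch below, a uniform toy distribution stands in for the thesis's learned recurrent model.

```python
# Minimal sketch of the tree-structured view: sentence probability as a
# product of per-node generation probabilities. The uniform toy distribution
# stands in for a learned recurrent model.
import math

def tree_log_prob(node, vocab_size=10_000):
    """node = (word, [child subtrees]); toy conditional p = 1/vocab_size."""
    word, children = node
    logp = -math.log(vocab_size)            # generate this node's word
    for child in children:                  # then, recursively, its dependents
        logp += tree_log_prob(child, vocab_size)
    return logp

# "dogs chase cats": "chase" is the root, with subject and object dependents.
tree = ("chase", [("dogs", []), ("cats", [])])
print(f"log p(sentence) = {tree_log_prob(tree):.2f}")
```
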
277

USE OF UNSTEADY MODELING TO PREDICT FLOODING BY CORRELATING STREAM GAGES: A CASE STUDY

Burke, Michael John 01 August 2011
Scientific studies have suggested an increase in the frequency and intensity of flooding. The research presented herein is focused on a small watershed which has experienced intense flooding of a downstream, urbanized area. For emergency response and preparedness, it is essential to be able to predict the intensity and peak flows of a flood. The Town of Dyer, Indiana has been severely impacted by flooding in the last twenty years. A 37.6-square-mile watershed begins in a rural section of Illinois with tributaries draining into Plum Creek. The creek crosses into Indiana and becomes Hart Ditch, a straight, narrow, deep channel through the urbanized Town of Dyer. A HEC-HMS hydrologic model was used and calibrated based on USGS gage data. Storm events ranging from short, high-intensity to long, intermittent precipitation provided a broad representation of possible scenarios within the watershed. The hydrologic model was paired with an unsteady HEC-RAS hydraulic model to allow for different lateral inflows to the creek, providing variations of flow. A comparison between upstream and downstream stream gage readings was utilized to create a working model that predicts downstream water surface elevations for previous real-time storms and hypothetical storms. These conditions were analyzed at two stream gages, and a correlation between the two gages was developed. This correlation was used to predict downstream water surface elevations. The correlation was also used to determine the time to crest based on readings at the upstream gage for many different storm events. The ability to know downstream water surface elevations for real-time storm events allows a window of time to implement emergency response in areas where flooding is imminent. The downstream area of concern has known flood elevations that represent various damage levels.
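
A minimal sketch of the gage-correlation idea: regress downstream peak stage on upstream peak stage across historical events, and estimate time-to-crest from the average peak-to-peak lag. The stage and lag values below are invented placeholders, not Hart Ditch data.

```python
# Minimal sketch: correlate upstream and downstream gages across historical
# storms to forecast downstream crest and time-to-crest. All values invented.
import numpy as np

# One row per historical storm: upstream peak stage (ft), downstream peak
# stage (ft), lag between the two peaks (hours).
up_peak   = np.array([4.2, 5.1, 6.3, 7.0, 8.4])
down_peak = np.array([6.0, 7.2, 8.9, 9.8, 11.6])
lag_hr    = np.array([5.5, 5.0, 4.6, 4.4, 4.1])

slope, intercept = np.polyfit(up_peak, down_peak, 1)   # linear correlation
print(f"downstream = {slope:.2f} * upstream + {intercept:.2f}")

# Forecast for a new reading at the upstream gage during a real-time event:
upstream_now = 6.8
print(f"predicted downstream crest: {slope * upstream_now + intercept:.1f} ft "
      f"in about {lag_hr.mean():.1f} hours")
```
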
278

Three-dimensional digital modeling for the development of rapid prototyping: a focus on organic modeling

BARROS, Gutenberg Xavier da Silva 28 February 2012
This study gathers data on virtual three-dimensional modeling and on rapid prototyping. From that basis, it produces comparative analyses of the dimensions, element counts, detected errors, and print results of three-dimensional models generated through three organic 3D modeling methods (Poly Modeling, Spline Modeling, and NURBS) in two of the most widely used 3D modeling packages, Autodesk 3ds Max and Autodesk Maya. The process also compares how the construction commands behave across the organic modeling methods. The study seeks the tools, methods, and software recommended for obtaining the best rapid-prototyping results in terms of construction efficiency, formal likeness, and dimensional closeness between the virtual model and the resulting prototype, in addition to the differences between the tools available in the two packages. The investigation is carried out on pieces built from the female model of Rosa's (2005) methodology and printed by the FDM prototyping process. The results select 3ds Max as the ideal software, among other reasons for the real-time feedback of its tool settings and for the ease and precision of exporting the model for prototyping. Poly Modeling emerges as the method with the greatest construction flexibility and the highest formal and dimensional fidelity. Alternative construction and export procedures are also presented for the three methods and for both packages.
279

Python Tools to Aid and Improve Rapid Hydrologic and Hydraulic Modeling with the Automated Geospatial Watershed Assessment Tool (AGWA)

Barlow, Jane E. January 2017
Hydrologic and hydraulic modeling are used to assess watershed function at different spatial and temporal scales. Many tools have been developed to make these types of models more accessible to use and model results easier to interpret. One tool that makes hydrologic models more accessible in a geographic information system (GIS) is the Automated Geospatial Watershed Assessment tool (AGWA); the GIS enables the development of spatially variable model inputs and model results for a variety of applications. Two major applications of AGWA are rangeland watershed assessments and post-wildfire rapid watershed assessments. Each of these applications has primarily utilized the Kinematic Runoff and Erosion model (KINEROS2), which is accessible in AGWA. Two new tools were developed that work within the existing AGWA/KINEROS2 framework in ArcGIS to enhance rangeland and post-wildfire watershed assessments. The first, the Storage Characterization Tool, was developed to work with high-resolution topographic data to characterize existing stock ponds so these features can easily be incorporated into AGWA/KINEROS2 for rangeland hydrologic analysis. The second, the Inundation Tool, simulates reach-scale flood inundation utilizing AGWA/KINEROS2 outputs and local channel properties in Hydrologic Engineering Center (HEC-2) hydraulic calculations to compute flood inundation in post-wildfire environments. Both tools have been validated using multiple datasets, and their intended applications were outlined so that the tools are properly used.
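
A minimal sketch of the hydraulic step such an inundation tool performs: solve Manning's equation for normal depth in a trapezoidal channel at the hydrologic model's peak flow, then compare depth against bank height. The channel dimensions and roughness below are illustrative assumptions, not tool defaults.

```python
# Minimal sketch: normal-depth solve of Manning's equation for a trapezoidal
# channel, then an overbank check. Channel properties are assumed values.
import math

def manning_q(depth, bottom_w=3.0, side_slope=2.0, n=0.035, bed_slope=0.005):
    """Flow (m^3/s) in a trapezoidal channel at a given depth (m), SI units."""
    area = depth * (bottom_w + side_slope * depth)
    wetted = bottom_w + 2.0 * depth * math.sqrt(1.0 + side_slope ** 2)
    radius = area / wetted
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * math.sqrt(bed_slope)

def normal_depth(q_peak, lo=1e-6, hi=10.0, tol=1e-6):
    while hi - lo > tol:               # bisection: manning_q grows with depth
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if manning_q(mid) < q_peak else (lo, mid)
    return 0.5 * (lo + hi)

depth = normal_depth(q_peak=25.0)      # e.g., a 25 m^3/s peak from KINEROS2
bank_height = 1.5
print(f"normal depth {depth:.2f} m -> "
      f"{'OVERBANK' if depth > bank_height else 'in bank'}")
```
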
280

Towards a Unified Model-Based Formalism for Supporting Safety Assessment activities

Forssén, Fredrik January 2010
Safety assessment is a rational and systematic process for assessing the risk associated with the usage of a product. While the safety assessment process is important even when making a simple product, its true importance comes to light when designing, for example, an aircraft, where a failure could possibly lead to the loss of human lives. However, even though this process is vital for certain industries, it is plagued by a lack of tools. The existing tools are focused on specific parts of the process and do not make use of work done in earlier steps, which often means that the safety engineer must manually do work that could have been calculated automatically from information already present from an earlier step in the process. This thesis shows that by creating a model of the product that persists and is augmented throughout every step of the process, many calculations that are currently done by hand can be automated or semi-automated by examining this shared model. The thesis proposes a specification for a modeling formalism that is simple enough to be used as early as the requirements phase of a project, but powerful enough to provide important information all the way throughout the safety assessment process. The thesis also specifically shows how this model can be used to help in the creation and updating of Failure Mode and Effects Analysis (FMEA) documents, as a proof-of-concept implementation based on Sörman Information AB’s product “Uptime BPC Standard”. Algorithms for synchronizing between the model and the FMEA representation, as well as algorithms for automatically calculating the next-level effect and global-level effect of failure modes based on the hierarchy and connections made in the model, are also presented. The prototype implementation shows that even though the entire safety assessment process cannot be automated, it is possible to extract information from the model by analyzing its hierarchy and connections. While more work still needs to be done before the entire safety assessment process can be encompassed, the initial results show that the proposed modeling formalism allows us to create models from which relevant information supporting the safety assessment process can be calculated.
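
A minimal sketch of the effect-propagation idea: once the hierarchy and connections live in one shared model, the next-level and global-level effects of a failure mode fall out of a walk over that model. The data structure below is a guess for illustration, not the proposed formalism or Uptime BPC Standard's data model.

```python
# Minimal sketch: derive FMEA next-level and global-level effects by walking
# a shared product hierarchy, instead of filling them in by hand. The
# structure and component names are illustrative assumptions.
parent = {"pump": "cooling_loop", "cooling_loop": "engine", "engine": None}
effect = {                      # local effect of a failure mode, per component
    "pump": "loss of coolant flow",
    "cooling_loop": "coolant temperature rises",
    "engine": "engine overheats and shuts down",
}

def fmea_effects(component):
    """Next-level effect = parent's effect; global effect = root's effect."""
    nxt = parent[component]
    node = component
    while parent[node] is not None:    # climb the hierarchy to the system root
        node = parent[node]
    return effect.get(nxt, "none"), effect[node]

next_effect, global_effect = fmea_effects("pump")
print(f"pump failure -> next level: {next_effect}; global: {global_effect}")
```
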
