401 |
Development of accurate and efficient models for biological molecules (Wu, Johnny Chung, 08 July 2013)
The abnormal expression or function of biological molecules, such as nucleic acids, proteins, or other small organic molecules, leads to the majority of diseases. Consequently, understanding the structure and function of these molecules through modeling can provide insight and perhaps suggest treatments for diseases. However, biologically relevant molecular phenomena vary vastly in the nature of their interactions, and different classes of models are required to accommodate this diversity. The objective of this thesis is to develop models for small molecules, amino acid peptides, and nucleic acids. A physical polarizable molecular mechanics model is described to accurately represent small molecules and single-atom ions and is applied to predict experimentally measurable thermodynamic properties such as hydration and binding free energies. A novel physical coarse-grain model based on Gay-Berne potentials and electrostatic multipoles has been developed for short peptides. The fraction of residues that adopt the alpha-helix conformation agrees with all-atom molecular dynamics results. Finally, a statistically derived model based on comparative sequence alignments is developed and applied to improve the folding accuracy of RNA molecules.
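As an illustration of the kind of quantity compared above, the sketch below (hypothetical data and function names, not the thesis code) computes the fraction of residues in the alpha-helix conformation from per-residue secondary-structure assignments, averaged over trajectory snapshots.

```python
# Illustrative sketch (not the author's code): helical fraction from
# per-residue secondary-structure assignments over trajectory frames.
# The frame data below are made up for demonstration.

def helical_fraction(frames):
    """Average fraction of residues assigned 'H' (alpha-helix) across frames."""
    per_frame = [
        sum(1 for ss in frame if ss == "H") / len(frame)
        for frame in frames
    ]
    return sum(per_frame) / len(per_frame)

# Hypothetical assignments for a 10-residue peptide over 4 snapshots.
frames = [
    "CCHHHHHHCC",
    "CCHHHHHCCC",
    "CCCHHHHHCC",
    "CCHHHHHHHC",
]
print(f"helical fraction = {helical_fraction(frames):.2f}")
```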
402 |
An ensemble solution for the Earth's time-varying gravitational field from the NASA/DLR GRACE mission (Sakumura, Carly Frances, 02 December 2013)
Several groups produce estimates of the Earth's time-varying gravitational field with data provided by the NASA/DLR Gravity Recovery and Climate Experiment (GRACE) mission. These unprecedented, highly accurate global data sets track the time-variable transport of mass across and underneath the surface of the Earth and give insight into secular, seasonal, and sub-seasonal variations in the global water supply. Knowledge gained from these products can inform and be incorporated into ocean and hydrological models and advise environmental policy planning. Therefore, a complete understanding of the accuracy of and variations between these different fields is necessary, and the most accurate possible solutions are desired. While the various gravity fields are similar, differences in processing strategies and tuning parameters result in solutions with regionally specific variations and error patterns.
This study analyzed the spatial, temporal, and spectral variations between four different gravity field products. The knowledge gained in this analysis was used to develop an ensemble solution that harnesses the best characteristics of each individual field to create an optimal model. Multiple methods were used to combine and analyze the individual and ensemble solutions. First, a simple mean model was created; then the different solutions were weighted based on their formal error estimates as well as on the monthly deviation from the arithmetic-mean ensemble. These ensemble models, as well as the four individual data center solutions, were analyzed for bias, long-term trend, and regional variations between the solutions; evaluated statistically to assess the noise and scatter within the solutions; and compared to independent hydrological models. In this way, the form and cause of the deviations between the models, as well as the impact of these variations, are characterized. The three ensemble solutions constructed in this analysis were all effective at reducing noise in the models and correlated better with hydrological processes than any individual solution. However, the scale of these improvements is constrained by the relative variation between the individual solutions, as the deviation of these individual data products from the hydrological model output is much larger than the variations between the individual and ensemble solutions.
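A minimal sketch of the weighting idea described above, assuming hypothetical coefficient values and formal errors (the real products are full spherical-harmonic fields): each center's solution is combined using either inverse-variance weights from the formal errors or weights based on deviation from the arithmetic mean.

```python
# Illustrative sketch (assumptions, not the thesis code): combining monthly
# solutions from several processing centers into an ensemble. Weights come
# either from formal error estimates (inverse-variance) or from each
# center's deviation from the simple mean.
import numpy as np

def ensemble(solutions, formal_errors=None):
    """solutions: (n_centers, n_coeffs); formal_errors: same shape or None."""
    solutions = np.asarray(solutions, dtype=float)
    mean = solutions.mean(axis=0)                       # simple arithmetic mean
    if formal_errors is not None:
        w = 1.0 / np.square(np.asarray(formal_errors))  # inverse-variance weights
    else:
        dev = np.abs(solutions - mean) + 1e-12          # deviation from the mean
        w = 1.0 / dev
    w /= w.sum(axis=0, keepdims=True)                   # normalize per coefficient
    return mean, (w * solutions).sum(axis=0)

# Hypothetical coefficients and formal errors from four data centers.
sols = [[1.02, -0.48], [0.98, -0.52], [1.05, -0.47], [0.95, -0.53]]
errs = [[0.02, 0.03], [0.01, 0.02], [0.03, 0.05], [0.02, 0.04]]
print(ensemble(sols, errs))
```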
403 |
14 MeV neutron generator dose modeling (McConnell, Kristen Alycia, 18 March 2014)
Modeling and understanding the doses around the neutron generator provides valuable data for radiation safety and protection precautions. Published data can be used to predict doses, but realistic data for the Nuclear Engineering Teaching Laboratory’s Thermo MP 320 Neutron Generator helps health physicists more accurately predict dose rates and protect experimenters against exposure. The goal was to create a model inclusive of the entire setup and the room where the neutron generator is housed.
The Monte Carlo N-Particle (MCNP) code is the preferred method for modeling radiation transport and was used to model the transport of neutrons within the current configuration of the 14 MeV neutron generator facility. This model took into account all shielding materials and their respective dimensions and locations within the concrete room. By utilizing tallies and tally modifiers, the model predicts dose rates that can be used with experimental factors such as irradiation time and flux to predict a dose in millirem.
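As a rough illustration of that last step (all numbers are hypothetical, not NETL measurements), a tally result expressed as dose per source neutron can be scaled by generator yield and irradiation time to give a dose in millirem:

```python
# Illustrative sketch (hypothetical values): converting a point-detector
# tally result with a dose-response modifier (dose per source neutron)
# into an absolute dose for one irradiation.

REM_PER_SV = 100.0

def predicted_dose_mrem(tally_sv_per_neutron, yield_n_per_s, irradiation_s):
    """Dose at the detector location for one irradiation, in mrem."""
    dose_sv = tally_sv_per_neutron * yield_n_per_s * irradiation_s
    return dose_sv * REM_PER_SV * 1000.0   # Sv -> mrem

# Hypothetical numbers: 2e-18 Sv per source neutron at the detector,
# 1e8 n/s generator output, 30-minute run.
print(f"{predicted_dose_mrem(2e-18, 1e8, 30 * 60):.3f} mrem")
```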
Validation experiments were performed in the current setup using Landauer Luxel®+ with Neutrak dosimeters placed in strategic locations to record the neutron dose received, as well as a Ludlum Model 42-41 PRESCILA neutron probe to predict dose rates. The dosimeter and PRESCILA measurement locations matched the positions of the point detector tallies in MCNP. After laboratory analysis, a comparison was performed between the model output and the dosimeter and PRESCILA values to successfully validate the accuracy of the model.
404 |
Applicability of agent-based model to managing roadway infrastructure (Li, Chen [active 2013], 25 March 2014)
In a roadway network, infrastructure conditions determine efficient network operation and traveler safety, and thus roadway engineers need a sophisticated plan to monitor and maintain network performance. Developing a comprehensive maintenance and rehabilitation (M&R) strategy for an infrastructure system, specifically a roadway network, is a complicated process because of system uncertainties and the multiple parties involved. Traditional approaches are mostly top-down and restrict the decision-making process. In contrast, agent-based models, a bottom-up approach, can simulate and analyze the autonomy of each party and their interactions in the infrastructure network. In this thesis, an agent-based model prototype was developed to simulate the operations of a small roadway network with a high degree of simplification. The objective of this study is to assess the applicability of agent-based modeling to infrastructure management problems through the following four aspects: (1) to simulate the user route selection process in the network; (2) to analyze the impact of users’ choices on the congestion levels and structural conditions of roadway sections; (3) to help the engineer determine M&R strategies under a given budget; and (4) to investigate the impact of different toll rates on the infrastructure conditions in the network. This prototype tracked traffic flow and gave appropriate M&R advice for each roadway segment. To improve this model, further investigation should be conducted to increase the level of sophistication of the interaction rules between agents, the route selection, and the budget allocation algorithm. Upon completion, this model can be applied to existing road networks to assist roadway engineers in managing the network with an efficient M&R plan and toll rate.
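A minimal sketch of such a prototype, with invented rules and parameters rather than the thesis model: traveler agents choose between a free road and a toll road based on congestion and toll, segment conditions degrade with traffic, and an engineer agent applies M&R when a segment falls below a condition threshold within a fixed budget.

```python
# Illustrative agent-based sketch; all rules and numbers are hypothetical.
import random

class Segment:
    def __init__(self, name, toll=0.0):
        self.name, self.toll = name, toll
        self.condition = 100.0   # pavement condition index, 100 = new
        self.traffic = 0

    def perceived_cost(self):
        return self.traffic * 0.1 + self.toll   # congestion delay + toll

free_road, toll_road = Segment("free"), Segment("toll", toll=2.0)
budget_per_step, repair_cost = 50.0, 40.0

for step in range(10):
    free_road.traffic = toll_road.traffic = 0
    for _ in range(200):                        # 200 traveler agents per step
        cheaper = min((free_road, toll_road), key=lambda s: s.perceived_cost())
        choice = cheaper if random.random() < 0.9 else random.choice([free_road, toll_road])
        choice.traffic += 1                     # route choice feeds back into cost
    for seg in (free_road, toll_road):
        seg.condition -= 0.05 * seg.traffic     # traffic-driven deterioration
    worst = min((free_road, toll_road), key=lambda s: s.condition)
    if worst.condition < 70 and budget_per_step >= repair_cost:   # M&R decision
        worst.condition = min(100.0, worst.condition + 20.0)
    print(step, {s.name: round(s.condition, 1) for s in (free_road, toll_road)})
```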
405 |
Study on barriers of implementation of building information modeling in facilities management (He, Zhaoqiang 贺照强, January 2012)
Innovation implementation within an organization has always been associated with barriers from all aspects. As a key innovation in the building industry, Building Information Modeling (BIM) has been adopted rapidly in the design and construction process. Facilities management (FM), which contributes far more value than design and construction, however, does not seem to have caught up with this trend. High cost, poor technology, and other factors inherent within organizations are the obstacles most often cited in research papers and by industry. This paper aimed to explore and identify the key organizational barriers in the process of implementing BIM in FM. Three case studies of large FM organizations in Hong Kong were reported through in-depth interviews. Two FM software providers were also interviewed to gain a comprehensive understanding of BIM-FM interfacing technology.
Before the data collection process, two theoretical models were built to guide the data collection and analysis. The first model was based on the information flow during the implementation of BIM in FM, whilst the second model described the conditions required for such a process.
FM managers from three organizations leading BIM implementation in Hong Kong were interviewed. Published documents from the targeted organizations were reviewed to support the research findings. Soft systems analysis was adopted to analyze the barriers that impeded the implementation of BIM in FM. A cross-case study was also conducted to strengthen the findings from the three case studies. Two overseas software providers with successful BIM-in-FM experience were also interviewed. The technology of BIM in FM was found to be ready for importing construction-stage information into FM software packages. Additional functions built on BIM in FM, however, are still not readily available in the market.
The fragmentation between the project and facilities management teams was found to be the most significant barrier to BIM implementation. To overcome this barrier, organizations may consider establishing a coordination platform between the project management team and the FM team. This may be the most efficient approach when the fragmented organizational structure cannot be changed in a short time. A company-wide BIM standard would also help during the coordination process.
406 |
Mechanistic modeling of low salinity water injection (Kazemi Nia Korrani, Aboulghasem, 16 February 2015)
Low salinity waterflooding is an emerging enhanced oil recovery (EOR) technique in which the salinity of the injected water is substantially reduced to improve oil recovery over conventional higher-salinity waterflooding. Although many low salinity experimental results are reported in the literature, publications on modeling this process are rare. While there remains some debate about the mechanisms of low salinity waterflooding, the geochemical reactions that control the wetting of crude oil on the rock are likely to be central to a detailed description of the process. Since no comprehensive geochemistry-based modeling has been applied in this area, we decided to couple a state-of-the-art geochemical package, IPhreeqc, developed by the United States Geological Survey (USGS), with UTCOMP, the compositional reservoir simulator developed at the Center for Petroleum and Geosystems Engineering at The University of Texas at Austin. A step-by-step algorithm is presented for integrating IPhreeqc with UTCOMP. Through this coupling, we are able to simulate homogeneous and heterogeneous (mineral dissolution/precipitation), irreversible, and ion-exchange reactions under non-isothermal, non-isobaric, and both local-equilibrium and kinetic conditions. Consistent with the literature, there are significant effects of water-soluble hydrocarbon components (e.g., CO2, CH4, and acidic/basic components of the crude) on buffering the aqueous pH and, more generally, on the crude oil, brine, and rock reactions. Thermodynamic constraints are used to explicitly include the effect of these water-soluble hydrocarbon components. Hence, this combines the geochemical power of IPhreeqc with the important aspects of hydrocarbon flow and compositional effects to produce a robust, flexible, and accurate integrated tool capable of including the reactions needed to mechanistically model low salinity waterflooding. The geochemical module of UTCOMP-IPhreeqc is further parallelized to enable large-scale reservoir simulation applications. We hypothesize that the total ionic strength of the solution is the controlling factor in the wettability alteration caused by low salinity waterflooding in sandstone reservoirs. Hence, a model that interpolates relative permeability and capillary pressure as functions of total ionic strength is implemented in the UTCOMP-IPhreeqc simulator. We then use our integrated simulator to match and interpret a low salinity experiment published by Kozaki (2012) (conducted on a Berea sandstone core) and the field trial conducted by BP at the Endicott field (a sandstone reservoir). We further believe that during modified salinity waterflooding in carbonate reservoirs, calcite dissolves and liberates the adsorbed oil from the surface, creating a fresh, more water-wet surface. Therefore, in UTCOMP-IPhreeqc we model wettability as dynamically altered as a function of calcite dissolution. We then apply our integrated simulator to model not only the oil recovery but also the entire produced-ion histories of a recently published coreflood by Chandrasekhar and Mohanty (2013) on a carbonate core. We also couple IPhreeqc with UTCHEM, an in-house research chemical flooding reservoir simulator developed at The University of Texas at Austin, to create a mechanistic integrated simulator for alkaline/surfactant/polymer (ASP) floods.
UTCHEM has a comprehensive three-phase (water, oil, microemulsion) flash calculation package for mixtures of surfactant and soap as a function of salinity, temperature, and co-solvent concentration. Similar to UTCOMP-IPhreeqc, we parallelize the geochemical module of UTCHEM-IPhreeqc. Finally, we show how to apply the integrated tool, UTCHEM-IPhreeqc, to match three different reaction-related chemical flooding processes: ASP flooding with an acidic active crude oil, ASP flooding with a non-acidic crude oil, and alkaline/co-solvent/polymer (ACP) flooding.
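A minimal sketch of the sandstone wettability-alteration idea described above (Corey exponents, endpoints, and ionic-strength bounds are assumed values, not those used in UTCOMP-IPhreeqc): water relative permeability is interpolated between a high-salinity curve and a low-salinity curve as a function of total ionic strength.

```python
# Illustrative sketch, not the UTCOMP-IPhreeqc implementation.

def corey_krw(sw, swr=0.2, sor=0.2, krw_end=0.5, n=3.0):
    """Simple Corey-type water relative permeability."""
    s = max(0.0, min(1.0, (sw - swr) / (1.0 - swr - sor)))
    return krw_end * s ** n

def krw_salinity(sw, ionic_strength, i_low=0.05, i_high=0.7):
    """Linear interpolation between low- and high-salinity curves."""
    theta = (ionic_strength - i_low) / (i_high - i_low)
    theta = max(0.0, min(1.0, theta))                         # clamp to [0, 1]
    krw_low = corey_krw(sw, sor=0.15, krw_end=0.35, n=2.5)    # more water-wet
    krw_high = corey_krw(sw, sor=0.30, krw_end=0.60, n=3.5)   # less water-wet
    return theta * krw_high + (1.0 - theta) * krw_low

for i in (0.05, 0.3, 0.7):   # hypothetical injected-brine ionic strengths, mol/kg
    print(i, round(krw_salinity(sw=0.6, ionic_strength=i), 4))
```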
407 |
Efficient simulation techniques for large-scale applications (Huang, Jen-Cheng, 21 September 2015)
Architecture simulation is an important performance modeling approach. Modeling hardware components in sufficient detail helps architects identify both hardware and software bottlenecks. However, the major issue with architectural simulation is the huge slowdown compared to native execution. The slowdown is even greater for emerging workloads that feature high throughput and massive parallelism, such as GPGPU kernels. In this dissertation, three simulation techniques are proposed to simulate emerging GPGPU kernels and data analytic workloads efficiently. First, TBPoint reduces the number of simulated instructions for GPGPU kernels using inter-launch and intra-launch sampling approaches. Second, GPUmech improves the simulation speed of GPGPU kernels by abstracting the simulation model using functional simulation and analytical modeling. Finally, SimProf applies stratified random sampling with performance counters to select representative simulation points for data analytic workloads, dealing with data-dependent performance. This dissertation presents techniques that can be used to simulate emerging large-scale workloads accurately and efficiently.
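A minimal sketch of the stratified-sampling idea behind SimProf, with invented phase data and stratification rules rather than the actual implementation: execution phases are binned into strata by a performance-counter signature, and simulation points are drawn proportionally from each stratum.

```python
# Illustrative sketch; phase data and binning rule are hypothetical.
import random
from collections import defaultdict

def stratified_sample(phase_ipc, n_samples, n_strata=3, seed=0):
    """phase_ipc: {phase_id: ipc}; returns phase ids chosen for detailed simulation."""
    rng = random.Random(seed)
    lo, hi = min(phase_ipc.values()), max(phase_ipc.values())
    width = (hi - lo) / n_strata or 1.0
    strata = defaultdict(list)
    for pid, ipc in phase_ipc.items():
        k = min(n_strata - 1, int((ipc - lo) / width))   # stratum index
        strata[k].append(pid)
    picks = []
    for members in strata.values():
        share = max(1, round(n_samples * len(members) / len(phase_ipc)))
        picks += rng.sample(members, min(share, len(members)))
    return picks

# Hypothetical per-phase IPC values measured with hardware performance counters.
ipc = {p: round(random.Random(p).uniform(0.4, 2.0), 2) for p in range(30)}
print(stratified_sample(ipc, n_samples=6))
```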
408 |
Predictive Modeling Using a Nationally Representative Database to Identify Patients at Risk of Developing Microalbuminuria (Villa Zapata, Lorenzo Andrés, January 2014)
Background: Predictive models allow clinicians to more accurately identify higher- and lower-risk patients and make more targeted treatment decisions, which can help improve efficiency in health systems. Microalbuminuria (MA) is a condition characterized by the presence of albumin in the urine below the threshold detectable by a standard dipstick. Its presence is understood to be an early marker for cardiovascular disease. Therefore, identifying patients at risk for MA and intervening to treat or prevent conditions associated with MA, such as high blood pressure or high blood glucose, may support cost-effective treatment. Methods: The National Health and Nutrition Examination Survey (NHANES) was utilized to create predictive models for MA. This database includes clinical, medical, and laboratory data. The dataset was split into thirds; one-third was used to develop the model, while the other two-thirds were utilized to validate the model. Univariate logistic regression was performed to identify variables associated with MA. Stepwise multivariate logistic regression was performed to create the models. Model performance was evaluated using three criteria: 1) receiver operating characteristic (ROC) curves; 2) pseudo R-squared; and 3) goodness of fit (Hosmer-Lemeshow). The predictive models were then used to develop risk scores. Results: Two models were developed using variables that had significant correlations in the univariate analysis (p-value < 0.05). For Model A, the variables included in the final model were systolic blood pressure (SBP), fasting glucose, C-reactive protein, blood urea nitrogen (BUN), and alcohol consumption. For Model B, the variables were SBP, glycohemoglobin, BUN, smoking status, and alcohol consumption. Both models performed well in the creation dataset, and no significant difference between the models was found when they were evaluated in the validation set. A 0-18 risk score was developed utilizing Model A, and the predictive probability of developing MA was calculated. Conclusion: The predictive models developed provide new evidence about which variables are related to MA and may be used by clinicians to identify at-risk patients and to tailor treatment. Furthermore, the risk score developed using Model A may allow clinicians to more easily measure patient risk. Both predictive models will require external validation before they can be applied to other populations.
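A minimal sketch of how such a model can be turned into a bedside risk score (coefficients, thresholds, and point values below are invented for illustration, not the fitted NHANES model): each significant predictor contributes points proportional to its log-odds coefficient, and the total maps back to a predicted probability through the logistic function.

```python
# Illustrative sketch; the intercept and betas are hypothetical.
import math

INTERCEPT = -3.2
BETAS = {                      # log-odds increase per risk factor present
    "sbp_high": 0.80,          # systolic blood pressure above a threshold
    "glucose_high": 0.95,
    "bun_high": 0.40,
    "alcohol_use": 0.25,
}
POINT_UNIT = min(BETAS.values())   # smallest beta corresponds to 1 point

def risk_points(patient):
    """Integer points: each present factor contributes round(beta / smallest beta)."""
    return sum(round(b / POINT_UNIT) for name, b in BETAS.items() if patient.get(name))

def predicted_probability(points):
    """Approximate probability of MA implied by the total points."""
    log_odds = INTERCEPT + points * POINT_UNIT
    return 1.0 / (1.0 + math.exp(-log_odds))

patient = {"sbp_high": True, "glucose_high": True, "bun_high": False, "alcohol_use": True}
pts = risk_points(patient)
print(pts, f"{predicted_probability(pts):.1%}")
```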
409 |
A Study of Korean Students' Creativity in Science Using Structural Equation Modeling (Jo, Son Mi, January 2009)
Through the review of creativity research I have found that studies lack certain crucial parts: a) a theoretical framework for the study of creativity in science, b) studies considering the unique components related to scientific creativity, and c) studies of the interactions among key components through simultaneous analyses. The primary purpose of this study is to explore the dynamic interactions among four components (scientific proficiency, intrinsic motivation, creative competence, and context supporting creativity) related to scientific creativity under the framework of scientific creativity. A total of 295 Korean middle school students participated. Well-known and commonly used measurements were selected and developed. Two scientific achievement scores and one score measured by performance-based assessment were used to measure students' scientific knowledge and inquiry skills. Six items selected from the study of Lederman, Abd-El-Khalick, Bell, and Schwartz (2002) were used to assess how well students understand the nature of science. Five items were selected from the subscale of the Scientific Attitude Inventory version II (Moore & Foy, 1997) to assess student attitude toward science. The Test for Creative Thinking-Drawing Production (Urban & Jellen, 1996) was used to measure creative competence. Eight items chosen from the 15 items of the Work Preference Inventory (1994) were applied to measure students' intrinsic motivation. To assess the level of context supporting creativity, eight items were adapted from a measurement of the work environment (Amabile, Conti, Coon, Lazenby, & Herron, 1996). To assess scientific creativity, one open-ended science problem was used, and three raters rated the level of scientific creativity through the Consensual Assessment Technique (Amabile, 1996). The results show that scientific proficiency and creative competence correlate with scientific creativity. Intrinsic motivation and the context component do not predict scientific creativity. The strengths of the relationships between scientific proficiency and scientific creativity (parameter estimate = 0.43) and between creative competence and scientific creativity (parameter estimate = 0.17) are similar [Δχ²(.05)(1) = 0.670, p > .05]. In a more specific analysis of the structural model, I found that creative competence and scientific proficiency act as partial mediators among three components (general creativity, scientific proficiency, and scientific creativity). The moderation effects of intrinsic motivation and the context component were investigated, but no moderation effects were found.
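The equality of the two path coefficients is established with a chi-square difference test between nested models; a small sketch of that test (using the Δχ² value reported above, with a generic helper rather than the study's SEM software) is shown below.

```python
# Illustrative sketch of a chi-square difference test with one degree of
# freedom; 0.670 is the statistic reported in the abstract.
from scipy.stats import chi2

def chi_square_difference_test(delta_chi2, delta_df, alpha=0.05):
    """Compare constrained vs. unconstrained SEM fit; a non-significant
    difference means the two path coefficients can be treated as equal."""
    p_value = chi2.sf(delta_chi2, delta_df)
    return p_value, p_value < alpha

p, significant = chi_square_difference_test(0.670, 1)
print(f"p = {p:.3f}, reject equality of paths: {significant}")
```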
410 |
Multi-Layer Cellular DEVS Formalism for Faster Model Development and Simulation Efficiency (Bait Shiginah, Fahad Awadh, January 2006)
Recent research advances in Discrete EVent system Specification (DEVS), as well as in cellular space modeling, have emphasized the need for high-performance modeling methodologies and environments. The growing demand for cellular space models has directed researchers to use different implementation formalisms. Many efforts have been dedicated to developing cellular space models in DEVS in order to exploit the advantages of discrete event systems. Unfortunately, conventional implementations degrade performance in large-scale cellular models because of the huge volume of inter-cell messages generated during simulation. This work introduces a new multi-layer formalism for cellular DEVS models that assures high performance and ease of user specification. It starts with the parallel DEVS specification layer and derives a high-performance cellular DEVS layer using the property of closure under coupling. This is done by converting the parallel DEVS into its equivalent non-modular form, which involves trade-offs between computational and communication overhead. The new specification layer, in contrast to multi-component DEVS, is identical to the modular parallel DEVS in the sense of state trajectories, which are updated according to the modular message-passing methodology. The equivalency of the two forms is verified using simulation methods. Once the equivalency has been ensured, analysis of the models becomes a decisive factor in employing modularity in cellular DEVS models. Non-modular models show significant speedup in simulation runs, given that their event-list handler is implemented based on an analytical and experimental survey involving actual operation counts. However, the new high-performance non-modular specification layer is complicated to implement. Therefore, a third layer of specification is proposed to provide a simple user specification that is automatically converted into the fast but complex cellular DEVS specification, which is finally expressed in the standard parallel DEVS specification. A tool was implemented to accept the user's model specification via a GUI and automatically generate models using the new specifications. The generated models are then tested and verified using automatic DEVS verification methods. As a result, the model development and verification processes are made easier and faster.
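A minimal sketch of the performance intuition behind the non-modular layer (the update rule and grid are stand-ins, not the formalism itself): instead of each cell sending its new state to every neighbor as a message, a single flattened transition function reads neighbor states directly from a shared array, eliminating inter-cell message traffic.

```python
# Illustrative sketch; a Game-of-Life step stands in for a real cell behavior.

def step_non_modular(grid):
    """One global transition over all cells; neighbor states are read directly."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            alive = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            nxt[r][c] = 1 if alive == 3 or (grid[r][c] and alive == 2) else 0
    return nxt

grid = [[0] * 5 for _ in range(5)]
for r in (1, 2, 3):          # a vertical blinker on a 5x5 toroidal grid
    grid[r][2] = 1
print(step_non_modular(grid))
```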
|