21

Implementing inquiry based computational modeling curriculum in the secondary science classroom

Moldenhauer, Theodore Gerald 1970- 16 October 2014 (has links)
Better visualization of micro-level structures and processes can greatly enhance student understanding of key biological functions such as the central dogma. Previous research has demonstrated a need to introduce novel methods that increase student understanding of these concepts. The intention of this report is to show how computational modeling programs (CMPs) can be used successfully as an innovative method of teaching biology concepts that occur at a molecular level. Computers and web-based lessons are not new topics in secondary education research, but little of that research addresses computational modeling alone. We began by reviewing the many studies that have already indicated the benefits of using computers in the classroom, with an emphasis on CMPs and simulations. Of these, we focused mostly on the ones that showed increased student engagement and improved understanding of core science concepts. Based on the literature reviewed, a framework for curriculum designed around CMPs is proposed. Lastly, a model lesson is discussed to provide an example of how these professional-grade tools can be employed in the classroom. This report provides a basis for the continued development of constructivist curriculum built around the use of professional-grade computational tools in secondary science classrooms. / text
22

Neural mechanisms for face and orientation after-effects

Zhao, Chen January 2011 (has links)
Understanding how human and animal visual systems work is an important and still largely unsolved problem. The neural mechanisms for low-level visual processing have been studied in detail, focusing on early visual areas. Much less is known about the neural basis of high-level perception, particularly in humans. An important issue is whether and how lessons learned from low-level studies, such as how neurons in the primary visual cortex respond to oriented edges, can be applied to understanding high-level perception, such as human processing of faces. Visual aftereffects are a useful tool for investigating how stimuli are represented, because they reveal aspects of the underlying neural organisation. This thesis focuses on identifying neural mechanisms involved in high-level visual processing, by studying the relationship between low- and high-level visual aftereffects. Previous psychophysical studies have shown that humans exhibit reliable orientation (tilt) aftereffects, wherein prolonged exposure to an oriented visual pattern systematically biases perception of other orientations. Humans also show face identity aftereffects, wherein prolonged exposure to one face systematically biases perception of other faces. Despite these apparent similarities, previous studies have argued that the two effects reflect different mechanisms, in part because tilt aftereffects show a characteristic S-shaped curve, with the effect magnitude increasing and then decreasing with orientation difference, while face aftereffects appeared to increase monotonically (in various units of face morphing strength) with difference from a norm (average) face. Using computational models of orientation and face processing in the visual cortex, I show that the same computational mechanisms derived from early cortical processing, applied to either orientation-selective or face-selective neurons, are sufficient to replicate both types of effects. However, the models predict that face aftereffects would also be S-shaped, if tested on a sufficiently wide range of face stimuli. Based on the modelling work, I designed psychophysical experiments to test this theory. An identical experimental paradigm was used to test both face gender and tilt aftereffects, with strikingly similar S-shaped curves obtained for both conditions. Combined with the modelling results, this provides evidence that low- and high-level visual adaptation reflect similar neural mechanisms. Other psychophysical experiments have recently shown interactions between low- and high-level aftereffects, whereby orientation and line curvature processing (in early visual areas) can influence judgements of facial emotion (by high-level face-selective neurons). An extended multi-level version of the face processing model replicates this interaction across levels, but again predicts that the cross-level effects will show similar S-shaped aftereffect curves. Future psychophysical experiments can test these predictions. Together, these results help us to understand how stimuli are represented and processed at each level of the visual cortex. They suggest that similar adaptation mechanisms may underlie both high-level and low-level visual processing, which would allow us to apply much of what we know from low-level studies to help understand high-level processing.
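For readers unfamiliar with how gain adaptation in a tuned neural population can yield an S-shaped aftereffect curve, the short Python sketch below illustrates the general mechanism only; it is not the model developed in this thesis, and the tuning widths, adaptation strength, and population-vector decoder are assumptions chosen purely for illustration.

    import numpy as np

    # Illustrative sketch of the general mechanism (not the thesis's model):
    # a bank of orientation-tuned units whose gain is reduced near a previously
    # viewed "adapter" orientation, read out with a population-vector decoder.
    # The decoded bias vs. adapter-test offset traces an S-shaped aftereffect
    # curve of the kind described above. All parameter values are assumptions.

    prefs = np.linspace(-90.0, 90.0, 181)        # preferred orientations (deg)
    SIGMA_TUNE, SIGMA_ADAPT, STRENGTH = 15.0, 20.0, 0.5

    def circ_diff(a, b):
        """Signed orientation difference wrapped to [-90, 90) degrees."""
        return (a - b + 90.0) % 180.0 - 90.0

    def decode(test, adapter=None):
        """Decode perceived orientation from the (possibly adapted) population."""
        gain = np.ones_like(prefs)
        if adapter is not None:                  # adaptation suppresses nearby units
            gain -= STRENGTH * np.exp(-circ_diff(prefs, adapter) ** 2 / (2 * SIGMA_ADAPT ** 2))
        resp = gain * np.exp(-circ_diff(prefs, test) ** 2 / (2 * SIGMA_TUNE ** 2))
        ang = np.deg2rad(2 * prefs)              # doubled angles: orientation is 180-periodic
        return np.rad2deg(np.arctan2((resp * np.sin(ang)).sum(),
                                     (resp * np.cos(ang)).sum())) / 2

    for offset in range(-80, 81, 10):            # adapter fixed at 0 deg, test varies
        bias = circ_diff(decode(float(offset), adapter=0.0), float(offset))
        print(f"test at {offset:+3d} deg -> perceived shift {bias:+.2f} deg")

The printed bias is repulsive, grows with small adapter-test offsets, and then decays back toward zero at large offsets, which is the S-shape referred to in the abstract.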
23

Design and Validation of a Computational Model for Study of Scapholunate Joint Kinematics

Tremols, Edward J 01 January 2014 (has links)
As computational power has increased, computational modeling has become a promising tool for studying the biomechanics of complex joint systems. Musculoskeletal computational models have grown more complex than earlier iterations, which relied on a number of simplifications. This thesis utilized a three-dimensional computational model of the wrist joint structure to investigate scapholunate kinematics. The model accurately represented the bony anatomy of the wrist and hand as well as soft tissue structures such as ligaments, tendons, and other surrounding tissues. The model was created using commercially available computer-aided design and medical image processing software and followed a rigid body modeling methodology. It was validated for scapholunate kinematics against a cadaver study and then used to investigate further measures and surgical procedures. The simulations performed with the model demonstrated an anatomically accurate response of wrist function. As better understanding of the biomechanics of the wrist joint is achieved, this model could prove to be an important tool for further investigating wrist mechanics.
24

Computational modeling of biochemical systems using cellular automata

Apte, Advait 14 December 2009 (has links)
Biological systems exhibit complex behaviors through coordinated responses of individual biological components. With the advent of genome-scale techniques, one focus has been to develop methods to model interactions between components in order to accurately describe intact system function. Mathematical modeling techniques such as constraint-based modeling, agent-based modeling, cellular automata (CA) modeling, and differential equation modeling are employed as computational tools to study biological phenomena. We have shown that cellular automata simulations can be used as a computational tool for predicting the dynamics of biological systems with stochastic behavior. The basic premise for the research was the observations made during a study of biologically important feed-forward motifs, in which CA simulations were compared with differential equation simulations. It was shown for classes of structural motifs with feed-forward architecture that network topology affects the overall rate of a process in a quantitatively predictable manner. The comparison of CA simulations with differential equation modeling showed reasonable agreement in predicted system dynamics, providing enough support to model biological systems at the cellular level and observe dynamic system evolution. The promise shown by CA simulations for modeling biochemical systems was then used to elucidate evolutionary clues as to why biological networks show preference for certain types of motifs and preserve them with higher frequency during evolution. This was followed by modeling apoptotic networks to shed light on the efficacy of inhibitors and modeling cellulose hydrolysis to evaluate the efficiency of different enzyme systems used by cellulolytic bacteria.
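As a concrete illustration of the cellular automaton approach described above (and not of the specific motif, apoptosis, or cellulose-hydrolysis models in the thesis), the following Python sketch runs a stochastic CA in which substrate sites on a lattice convert to product when adjacent to an enzyme site; the lattice size, reaction probability, and species fractions are arbitrary assumptions.

    import numpy as np

    # Minimal stochastic cellular-automaton sketch (illustrative only): substrate
    # sites on a 2D lattice convert to product with probability P_REACT whenever
    # an enzyme occupies a neighboring site. Boundaries are periodic.

    rng = np.random.default_rng(0)
    N, P_REACT, STEPS = 50, 0.2, 100
    EMPTY, SUBSTRATE, ENZYME, PRODUCT = 0, 1, 2, 3

    # initialize the lattice: 60% substrate, 5% enzyme, 35% empty (assumed mix)
    grid = rng.choice([SUBSTRATE, ENZYME, EMPTY], size=(N, N), p=[0.60, 0.05, 0.35])

    def neighbor_has_enzyme(g):
        """True wherever at least one of the 4 von Neumann neighbors is an enzyme."""
        e = (g == ENZYME)
        return (np.roll(e, 1, 0) | np.roll(e, -1, 0) |
                np.roll(e, 1, 1) | np.roll(e, -1, 1))

    for step in range(STEPS):
        react = ((grid == SUBSTRATE) & neighbor_has_enzyme(grid)
                 & (rng.random(grid.shape) < P_REACT))
        grid[react] = PRODUCT
        if step % 20 == 0:
            print(f"step {step:3d}: product fraction = {(grid == PRODUCT).mean():.3f}")

Tracking the product fraction over time gives the kind of stochastic system-level trajectory that the thesis compares against differential equation predictions.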
25

Optimization of Coupled Computational Modeling and Experimentation for Metallic Systems: Systematic Microstructural Feature – Mechanical Property Correlation for Cold-Sprayable Powders

Tsaknopoulos, Derek 17 April 2019 (has links)
Additive manufacturing technologies place materials at the direct point of need of the warfighter, enabling the development of optimal, situation-specific means to produce and repair parts of Army and DoD weapons systems. In the case of solid-state AM, a full understanding of the metallic powder is critical to producing ideal consolidated material properties reliably and repeatably. By iteratively coupling computational models with supportive experimental testing, one can rapidly characterize differences in processing methods, alloy compositions, and heat treatments for metallic powders that serve as feedstock for these AM technologies. Through the combination of thermodynamic models, advanced characterization, and dynamic nano-indentation, representative correlations are established between microstructural features and mechanical properties, enabling the development of enhanced feedstock materials that can meet the specific needs of the warfighter efficiently without forfeiting quality. This represents both a holistic and a materials-by-design approach to AM, using computation deliberately to accelerate the discovery process and allow feedstock powders to be engineered with specific properties dictated by Army performance requirements. In a case study of Al 6061, unique observations were made through the combination of modeling and experimentation. It was discovered that precipitation kinetics were greatly accelerated in powders; therefore, typical heat treatment processes used for cast aluminum alloys were not valid. Due to this shift in precipitation sequences, high-temperature treatment was limited to discourage precipitate and grain coarsening. Additionally, when compared to typical cast Al 6061, the main precipitation hardening phase shifts from Mg2Si to Al4Cu2Mg8Si7, changing how aging mechanisms were accounted for. These conclusions were supported by both the computational models and experimental results. Through the generation of extensive data, the models were calibrated, enabling more efficient and precise development of tailored material characteristics from specific microstructural features to serve as an input to a holistic through-process model for a solid-state AM process and to guide future experimentation.
26

ExoPlex: A New Python Library for Detailed Modeling of Rocky Exoplanet Internal Structure and Mineralogy

January 2018 (has links)
The pace of exoplanet discoveries has rapidly accelerated in the past few decades, and the number of planets with measured mass and radius is expected to grow in the coming years. Many more planets with a size similar to Earth are expected to be found. Currently, software for characterizing rocky planet interiors is lacking. There is no doubt that a planet's interior plays a key role in determining surface conditions, including atmosphere composition and land area. Comparing data with diagrams of mass vs. radius for terrestrial planets provides only a first-order estimate of the internal structure and composition of planets [e.g. Seager et al 2007]. This thesis presents a new Python library, ExoPlex, which has routines to create a forward model of rocky exoplanets between 0.1 and 5 Earth masses. The ExoPlex code offers users the ability to model planets of arbitrary composition of Fe-Si-Mg-Al-Ca-O in addition to a water layer. This is achieved by modeling rocky planets after Earth and other known terrestrial planets, with three distinct layers making up the modeled internal structure: core, mantle, and water. Terrestrial planet cores will be dominated by iron; however, like Earth's, they may include some quantity of light elements, which can serve to enhance expected core volumes. In ExoPlex, these light element inclusions are S-Si-O and are included as iron alloys. Mantles will have a more diverse mineralogy than planet cores. Unlike most other rocky planet models, ExoPlex remains unbiased in its treatment of mantle composition. Si-Mg-Al-Ca oxide components are combined by predicting the mantle mineralogy using a Gibbs free energy minimization software package called Perple_X [Connolly 2009]. By allowing an arbitrary composition, ExoPlex can uniquely model planets using their host star's composition as an indicator of planet composition. This is a proven technique [Dorn et al 2015] which has not yet been widely utilized, possibly due to the lack of easy-to-use software. I present a model sensitivity analysis to indicate the most important parameters to constrain in future observing missions. ExoPlex is currently available on PyPI, so it may be installed using pip or conda on Mac OS or Linux-based operating systems. It requires a specific scripting environment, which is explained in the documentation stored on the ExoPlex GitHub page. / Dissertation/Thesis / Masters Thesis Astrophysics 2018
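To make the idea of a forward interior-structure model concrete, the Python sketch below computes mass-radius points and rough central pressures for a two-layer (iron core plus silicate mantle) planet using constant layer densities. This is a deliberately crude illustration, not ExoPlex's method: ExoPlex uses pressure-dependent equations of state and Perple_X-derived mantle mineralogy, and the densities and core mass fraction below are assumed values.

    import numpy as np

    # Deliberately crude two-layer forward model, for illustration only: constant
    # layer densities stand in for the pressure-dependent equations of state and
    # Perple_X mantle mineralogy that ExoPlex actually uses. The densities and
    # the core mass fraction below are assumed values, not ExoPlex defaults.

    G = 6.674e-11                            # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH, R_EARTH = 5.972e24, 6.371e6     # kg, m
    RHO_CORE, RHO_MANTLE = 11000.0, 4500.0   # rough compressed mean densities, kg/m^3
    CMF = 0.325                              # Earth-like core mass fraction

    def radius(mass_kg, cmf=CMF):
        """Total radius of a constant-density iron core + silicate mantle planet."""
        v_core = cmf * mass_kg / RHO_CORE
        v_mantle = (1.0 - cmf) * mass_kg / RHO_MANTLE
        return (3.0 * (v_core + v_mantle) / (4.0 * np.pi)) ** (1.0 / 3.0)

    def central_pressure(mass_kg, cmf=CMF, n=20000):
        """Integrate hydrostatic equilibrium dP/dr = -G m(r) rho / r^2 inward."""
        r_core = (3.0 * cmf * mass_kg / (4.0 * np.pi * RHO_CORE)) ** (1.0 / 3.0)
        r = np.linspace(radius(mass_kg, cmf), 1.0, n)       # surface -> center
        rho = np.where(r > r_core, RHO_MANTLE, RHO_CORE)
        m_core = cmf * mass_kg
        m = np.where(r > r_core,
                     m_core + (4.0 / 3.0) * np.pi * (r**3 - r_core**3) * RHO_MANTLE,
                     (4.0 / 3.0) * np.pi * r**3 * RHO_CORE)
        dr = -np.gradient(r)                                # positive step, going inward
        return np.sum(G * m * rho / r**2 * dr)

    for m_earths in (0.1, 0.5, 1.0, 2.0, 5.0):              # the 0.1-5 M_E range above
        m = m_earths * M_EARTH
        print(f"{m_earths:4.1f} M_E -> R = {radius(m) / R_EARTH:.2f} R_E, "
              f"P_center ~ {central_pressure(m) / 1e9:.0f} GPa")

Sweeping the core mass fraction in such a sketch shows why mass and radius alone only weakly constrain composition, which is the motivation the abstract gives for using host-star abundances as an additional constraint.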
27

Anthropometric shape parameters in obese subjects: implications for obese total joint arthroplasty patients

Simoens, Kevin James 01 May 2017 (has links)
Obesity is a severe concern worldwide and its prevalence is expected to continue to increase. Linked to diabetes, kidney disease, heart disease, and high blood pressure, among other conditions, obesity has been identified as the forthcoming largest preventable cause of mortality. Osteoarthritis, surgical consequences, distribution of subcutaneous adipose tissue, and alteration of joint biomechanics have vast implications for total joint repair (TJR). Previous studies have linked obesity to increased forces through the weight-bearing lower extremities, alterations in gait, and risk of implant failure. The objectives of this study were to (1) provide a tool to predict lower extremity dimensions and shape variations of subcutaneous adipose tissue, (2) identify the degree to which obesity influences shape variation of the osseous anatomy of the knee joint, and (3) lay a foundation for comparing the knee contact forces of obese patients in activities of daily living. Long-leg EOS films were obtained retrospectively over 5 years from 232 patients seen at the Adult Reconstruction Clinic at the University of Iowa. Using custom Matlab algorithms, measurements of soft tissue distribution and lower extremity osseous anatomy were obtained and analyzed. Additionally, knee contact force measurements were obtained through motion capture analysis and musculoskeletal modeling in AnyBody. Males and females had similar lower extremity shapes, with females having greater knee circumferences than males. The variability of PPT and PTT tended to be greater in females and increased with increasing BMI. Although similar in the anteroposterior direction, males tended to have on average 12 mm wider proximal tibias in the mediolateral direction. Clinical observations of increased post-operative complications are consistent with these findings. Future research into the biomechanics of obesity will rely heavily on anatomic models of the obese lower extremities, which until this work did not exist.
28

Investigation of a HA/PDLGA/Carbon Foam Material System for Orthopedic Fixation Plates Based on Time-Dependent Properties

Rodriguez, Douglas E. 14 January 2010 (has links)
While there is continuing interest in bioresorbable materials for orthopedic fixation devices, the major challenge in utilizing these materials in load-bearing applications is creating materials sufficiently stiff and strong to sustain loads throughout healing while maintaining fracture stability. The primary aim of this study is to quantify the degradation rate of a bioresorbable material system, then use this degradation rate to determine the material response of an orthopedic device made of the same material as healing progresses. The present research focuses on the development and characterization of a material system consisting of carbon foam infiltrated with hydroxyapatite (HA) reinforced poly(D,L-lactide)-co-poly(glycolide) (PDLGA). A processing technique is developed to infiltrate carbon foam with HA/PDLGA and material morphology is investigated. Additionally, short-term rat osteoblast cell studies are undertaken to establish a starting point for material biocompatibility. Degradation experiments are conducted to elicit the time-dependent properties of the material system at the material scale. These properties are then incorporated into computational models of an internal plate attached to a fractured human femur to design and predict the material response to applied physiological loads. Results from this work demonstrate the importance of material dissolution rate as well as material strength when designing internal fixation plates.
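One common way to carry material-scale degradation data into a device-scale simulation is to fit a time-dependent stiffness law and evaluate it at each healing time point; the Python sketch below assumes a simple first-order form purely for illustration, since the measured degradation behavior of the HA/PDLGA/carbon foam system is not specified here.

    import numpy as np

    # Illustrative sketch only: a fitted time-dependent stiffness law evaluated
    # at discrete healing time points, as one way to pass material-scale
    # degradation data to a device-scale finite element model. The exponential
    # form and all values are assumptions, not measurements from this work.

    E0 = 8.0e9     # initial composite modulus, Pa (assumed)
    K = 0.05       # degradation rate constant, 1/week (assumed)

    def modulus(t_weeks):
        """Assumed first-order loss of stiffness during hydrolytic degradation."""
        return E0 * np.exp(-K * t_weeks)

    for t in (0, 4, 8, 12, 16):              # healing time points, in weeks
        print(f"week {t:2d}: E = {modulus(t) / 1e9:.2f} GPa")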
29

Finite Element Studies of an Embryonic Cell Aggregate under Parallel Plate Compression

Yang, Tzu-Yao January 2008 (has links)
Cell shape is important to understanding the mechanics of three-dimensional (3D) cell aggregates. When an aggregate of embryonic cells is compressed between parallel plates, the cell mass and the cells of which it is composed flatten. Over time, the cells typically move past one another and return to their original, spherical shapes, even during sustained compression, although the profile of the aggregate changes little once plate motion stops. Although these internal movements have been attributed to the surface and interfacial tensions of cells, measurements of these properties have largely eluded researchers. Here, an existing 3D finite element model, designed specifically for the mechanics of cell-cell interactions, is enhanced so that it can be used to investigate aggregate compression. The formulation of that model is briefly presented and the enhancements made to its rearrangement algorithms are discussed. Simulations run using the model show that the rounding of interior cells is governed by the ratio between the interfacial tension and cell viscosity, whereas the shape of cells in contact with the medium or the compression plates is dominated by their respective cell-medium or cell-plate surface tensions. The model also shows that as an aggregate compresses, its cells elongate more in the circumferential direction than in the radial direction. Since experimental data from compressed aggregates are anticipated to consist of confocal sections, geometric characterization methods are devised to quantify the anisotropy of cells and to relate cross sections to 3D properties. The average anisotropy of interior cells as found using radial cross sections corresponds more closely with the 3D properties of the cells than data from series of parallel sections. A basis is presented for estimating cell-cell interfacial tensions from the cell shape histories exhibited during the cell reshaping phase of an aggregate compression test.
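The tension-to-viscosity ratio mentioned above sets a visco-capillary timescale for rounding, roughly t ~ mu*L/gamma; the short calculation below plugs in assumed, order-of-magnitude values (not measurements from this work) to show the kind of timescale involved.

    # Order-of-magnitude sketch of the visco-capillary rounding time t ~ mu*L/gamma
    # implied by the tension/viscosity ratio noted above. These values are
    # illustrative assumptions, not measurements from this thesis.

    mu = 1.0e4      # effective cell/aggregate viscosity, Pa*s (assumed)
    gamma = 1.0e-3  # cell-cell interfacial tension, N/m (assumed)
    L = 15e-6       # characteristic cell size, m (assumed)

    t_round = mu * L / gamma
    print(f"characteristic rounding time ~ {t_round:.0f} s")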
30

All cumulative semantic interference is not equal: A test of the Dark Side Model of lexical access

Walker Hughes, Julie 16 September 2013 (has links)
Language production depends upon the context in which words are named. Renaming previous items results in facilitation, while naming pictures semantically related to previous items causes interference. A computational model (Oppenheim, Dell, & Schwartz, 2010) proposes that both facilitation and interference result from using naming events as “learning experiences” to ensure future accuracy. The model successfully simulates naming data from different semantic interference paradigms by implementing a learning mechanism that creates interference and a boosting mechanism that resolves it. This study tested the model’s assumptions that semantic interference effects in naming are created by learning and resolved by boosting. Findings revealed no relationship between individuals’ performance across semantic interference tasks, and measured learning and boosting abilities did not predict performance. These results suggest that learning and boosting mechanisms do not fully characterize the processes underlying semantic interference in naming.
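For orientation, the Python sketch below implements the two mechanisms the abstract refers to, an incremental error-driven learning rule and a booster that amplifies lexical activation until the target can be selected, in the spirit of Oppenheim, Dell, and Schwartz (2010); the vocabulary, feature sets, learning rate, and selection threshold are illustrative assumptions rather than the published parameters.

    import numpy as np

    # Minimal sketch of the two mechanisms described above: incremental
    # error-driven learning plus a "booster" that amplifies activation until
    # the target word wins, in the spirit of Oppenheim, Dell, & Schwartz (2010).
    # The vocabulary, feature sets, learning rate, and selection threshold are
    # illustrative assumptions, not the published parameters.

    words = ["cat", "dog", "horse", "cup"]
    # shared "animal" feature plus one unique feature per word
    F = np.array([[1, 1, 0, 0, 0],     # cat
                  [1, 0, 1, 0, 0],     # dog
                  [1, 0, 0, 1, 0],     # horse
                  [0, 0, 0, 0, 1]])    # cup (semantically unrelated)
    W = 0.3 * F.astype(float)          # semantic-feature -> word weights
    RATE, THRESHOLD, BOOST = 0.05, 0.5, 1.05

    def name(target):
        """Return the number of boost cycles needed to select the target word."""
        sem = F[target].astype(float)
        act0 = W @ sem                 # pre-boost lexical activations
        act, boosts = act0.copy(), 0
        # booster: amplify activations until the target leads its strongest competitor
        while act[target] - np.max(np.delete(act, target)) < THRESHOLD and boosts < 500:
            act *= BOOST
            boosts += 1
        # learning: strengthen the target's connections, weaken co-activated competitors'
        for w in range(len(words)):
            err = (1.0 if w == target else 0.0) - act0[w]
            W[w] += RATE * err * sem
        return boosts

    for label, idx in [("cat", 0), ("cat again", 0), ("dog", 1), ("cup", 3)]:
        print(f"{label:>10}: {name(idx):2d} boost cycles")
    # expected pattern: repeating 'cat' gets faster (facilitation), the related
    # 'dog' gets slower (semantic interference), and unrelated 'cup' is unaffected.

In this toy version, the boost-cycle count serves as a latency proxy, so repetition priming and cumulative semantic interference both fall out of the same weight updates, which is the assumption the study above set out to test behaviorally.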
