241 |
PDEModelica – A High-Level Language for Modeling with Partial Differential Equations. Saldamli, Levon, January 2006.
This thesis describes work on a new high-level mathematical modeling language and framework called PDEModelica for modeling with partial differential equations. It is an extension to the current Modelica modeling language for object-oriented, equation-based modeling based on differential and algebraic equations. The language extensions and the framework presented in this thesis are consistent with the concepts of Modelica while adding support for partial differential equations and space-distributed variables called fields.

The specification of a partial differential equation problem consists of three parts: 1) the description of the definition domain, i.e., the geometric region where the equations are defined, 2) the initial and boundary conditions, and 3) the actual equations. The known and unknown distributed variables in the equation are represented by field variables in PDEModelica. Domains are defined by a geometric description of their boundaries. Equations may use the Modelica derivative operator extended with support for partial derivatives, or vector differential operators such as divergence and gradient, which can be defined for general curvilinear coordinates based on coordinate system definitions.

The PDEModelica system also allows the partial differential equation models to be defined using a coefficient-based approach, where PDE models from a library are instantiated with different parameter values. Such a library contains both continuous and discrete representations of the PDE model. The user can instantiate the continuous parts and define the parameters, and the discrete parts containing the equations are automatically instantiated and used to solve the PDE problem numerically.

Compared to most earlier work in the area of mathematical modeling languages supporting PDEs, this work provides a modern object-oriented, component-based approach to modeling with PDEs, including general support for hierarchical modeling and for general, complex geometries. It is possible to separate the geometry definition from the model definition, which allows geometries to be defined separately, collected into libraries, and reused in new models. It is also possible to separate the analytical continuous model description from the chosen discretization and numerical solution methods. This allows the model description to be reused, independent of different numerical solution approaches.

The PDEModelica field concept allows general declaration of spatially distributed variables. Compared to most other approaches, the field concept described in this work affords a clearer abstraction and defines a new type of variable. Arrays of such field variables can be defined in the same way as arrays of regular, scalar variables. The PDEModelica language supports a clear, mathematical syntax that can be used both for equations referring to fields and for explicit domain specifications, used for example to specify boundary conditions. Hierarchical modeling and decomposition are integrated with a general connection concept, which allows connections between ODE/DAE-based and PDE-based models.

The implementation of a Modelica library needed for PDEModelica and a prototype implementation of field variables are also described in the thesis. The PDEModelica library contains internal and external solver implementations, and uses external software for the mesh generation required for numerical solution of the PDEs. Finally, some examples modeled with PDEModelica and solved using these implementations are presented.
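To illustrate the three-part problem specification described above (domain, initial/boundary conditions, equations) and the separation between the continuous model and its discretization, here is a minimal sketch in plain Python; the class and function names are hypothetical and are not PDEModelica syntax, and the 1-D heat equation with a simple finite-difference scheme is chosen only for brevity.

```python
# Illustrative sketch only: mimics the separation of domain, boundary
# conditions, and equation described above using plain Python and a
# 1-D finite-difference discretization. Names are hypothetical and do
# not correspond to actual PDEModelica syntax.
import numpy as np

class IntervalDomain:
    """Geometric description of the definition domain (here: [0, L])."""
    def __init__(self, length, n_points):
        self.x = np.linspace(0.0, length, n_points)

class HeatModel:
    """Continuous model: du/dt = alpha * d2u/dx2 with Dirichlet boundaries."""
    def __init__(self, domain, alpha, left_bc, right_bc, initial):
        self.domain, self.alpha = domain, alpha
        self.left_bc, self.right_bc = left_bc, right_bc
        self.u = initial(domain.x)          # field variable: u(x)

def solve(model, dt, steps):
    """Discrete part: explicit finite-difference time stepping."""
    u, x, a = model.u.copy(), model.domain.x, model.alpha
    dx = x[1] - x[0]
    for _ in range(steps):
        u[1:-1] += a * dt / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])
        u[0], u[-1] = model.left_bc, model.right_bc   # boundary conditions
    return u

dom = IntervalDomain(length=1.0, n_points=51)
model = HeatModel(dom, alpha=0.01, left_bc=0.0, right_bc=1.0,
                  initial=lambda x: np.zeros_like(x))
print(solve(model, dt=0.01, steps=500)[::10])
```

The point of the sketch is the separation of concerns: the domain and model objects play the role of the continuous description, while the solver stands in for an automatically instantiated discrete part.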
|
242 |
The Impact of Video Self-modeling on Conversational Skills with Adolescent Students with Severe Disabilities. Sangster, Megan Elizabeth, 12 July 2007.
Video self-modeling has been found to be effective in increasing appropriate behaviors, increasing task fluency, and decreasing inappropriate behaviors. During video self-modeling, a student is filmed completing a task, and then mistakes, prompts, and negative behaviors are edited out of the video. When the student views the edited video, he or she sees a perfect model of himself or herself successfully completing the given task. Video self-modeling has been used predominantly with participants with autism spectrum disorder. This study is a replication of a previous study in which the effectiveness of video self-modeling and video peer modeling was compared (Sherer, Paredes, Kisacky, Ingersoll, & Schreibman, 2001). Sherer et al. evaluated these procedures with high-functioning students with autism using a combined multiple-baseline-across-participants and alternating-treatments design. This study differs from Sherer et al.'s study in its use of participants who have multiple disabilities and low cognitive functioning. The results show that video self-modeling is effective for some participants while video peer modeling is effective for others. An individual student's preference for one form of video modeling over the other may indicate the method that is best for that participant. Implications for further research are included.
|
243 |
DEVELOPMENT OF GENERIC GROUND SYSTEMS BY THE USE OF A STANDARD MODELING METHOD. Yamada, Takahiro, October 2005.
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada

This paper presents an approach to the development of generic ground systems to be used for spacecraft testing and operations. The approach makes use of a standard modeling method, which enables virtualization of spacecraft. By virtualizing spacecraft, development of generic systems that are applicable to different spacecraft becomes possible even if the spacecraft themselves are not standardized. This is because such systems can utilize (1) a standard database that can store information on any virtual spacecraft and (2) standard software tools that can be used for any virtual spacecraft. This paper explains the concept of virtualization of spacecraft, introduces the standard model used for virtualization of spacecraft, shows how to manipulate virtual spacecraft with software tools, and presents the core elements of generic ground systems.
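As an illustration of the virtualization idea, the hypothetical Python sketch below shows how a generic tool can decode telemetry from any spacecraft by reading parameter definitions from a standard model database instead of hard-coding them; the parameter names, packet layout, and values are invented for the example and do not come from the paper.

```python
# Hypothetical sketch: a generic telemetry decoder driven entirely by a
# standard model of the spacecraft, so the same code works for any
# spacecraft whose parameters are described in the database.
from dataclasses import dataclass
import struct

@dataclass
class ParameterDef:
    name: str        # e.g. "BATTERY_VOLTAGE"
    offset: int      # byte offset within the telemetry packet
    fmt: str         # struct format code, e.g. ">H" for big-endian uint16
    scale: float     # raw-to-engineering-unit conversion factor

# "Standard database" entry for one virtual spacecraft -- contents are
# invented for illustration.
VIRTUAL_SPACECRAFT = [
    ParameterDef("BATTERY_VOLTAGE", 0, ">H", 0.01),
    ParameterDef("PANEL_TEMP",      2, ">h", 0.1),
]

def decode_packet(packet: bytes, model: list[ParameterDef]) -> dict:
    """Generic tool: interprets a packet using only the model definitions."""
    values = {}
    for p in model:
        (raw,) = struct.unpack_from(p.fmt, packet, p.offset)
        values[p.name] = raw * p.scale
    return values

print(decode_packet(b"\x0b\xb8\x00\xfa", VIRTUAL_SPACECRAFT))
# {'BATTERY_VOLTAGE': 30.0, 'PANEL_TEMP': 25.0}
```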
|
244 |
High-speed performance and power modeling. Sunwoo, Dam, 1 October 2010.
The high cost of designing, testing and manufacturing semiconductor chips makes simulation essential to predict performance and power throughout the design cycle of hardware components. However, standard detailed software performance/power simulators are too slow to finish real-life benchmarks within the design cycle. To compensate, reduced accuracy is often traded for improved simulator performance.
This dissertation explores the FPGA-Accelerated Simulation Technologies (FAST) methodology, which can dramatically improve simulation performance without sacrificing accuracy. Design trade-offs of the functional-model partition of a FAST simulator are discussed, and QUICK, an implementation of a FAST functional model designed to provide fast functional execution as well as the ability to roll back and execute down different paths, is described. QUICK is general enough to be useful beyond FPGA-accelerated simulators and provides complex-ISA (x86) and full-system support. A complete FAST simulator that combines QUICK with an FPGA-based timing model runs at millions of x86 instructions per second, several orders of magnitude faster than software simulators of comparable accuracy, and boots unmodified Windows XP and Linux.
Ideally, one could model power at the same speeds as performance modeling in a FAST simulator. However, traditional software-implemented power estimation techniques are very slow. PrEsto, a new power modeling methodology that automatically generates accurate power models that can efficiently fit and operate within FAST simulators, is proposed. Such models can dramatically improve the accuracy and performance of architectural power estimation.
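As a rough illustration of counter-driven power modeling (a simplified Python sketch, not the PrEsto methodology itself), a linear model can be fitted that maps per-interval activity counts to power, after which power can be estimated cheaply alongside a fast performance simulation; the event types, weights, and training data below are synthetic.

```python
# Minimal sketch of an activity-counter-based power model, in the spirit
# of (but not identical to) the approach described above. Training data
# here is synthetic; in practice it would come from detailed power
# simulation or measurement.
import numpy as np

rng = np.random.default_rng(0)

# Per-interval activity counts: [fetches, cache_misses, fp_ops]
activity = rng.integers(0, 1000, size=(200, 3)).astype(float)

# Synthetic "ground truth": static power + per-event energy weights + noise.
true_weights = np.array([0.02, 0.15, 0.05])
static_power = 5.0
power = static_power + activity @ true_weights + rng.normal(0, 0.1, 200)

# Fit the linear power model (least squares with an intercept term).
X = np.hstack([np.ones((200, 1)), activity])
coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)

def estimate_power(counts):
    """Cheap run-time estimate from event counters."""
    return coeffs[0] + counts @ coeffs[1:]

print("fitted static power:", round(coeffs[0], 3))
print("estimate for [500, 20, 100]:",
      round(estimate_power(np.array([500.0, 20.0, 100.0])), 2))
```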
Improving high-accuracy simulator performance will open research directions that could not be explored economically in the past. The combination of simulation performance, accuracy, and power estimation capabilities extends the usefulness of such simulators, thus enabling the co-design of architecture, hardware implementation, operating systems, and software.
|
245 |
Implementing inquiry based computational modeling curriculum in the secondary science classroom. Moldenhauer, Theodore Gerald, 16 October 2014.
Better visualization of micro-level structures and processes can greatly enhance student understanding of key biological functions such as the central dogma. Previous research has demonstrated a need for novel methods to increase student understanding of these concepts. The intention of this report is to show how computational modeling programs (CMPs) can be successfully used as an innovative method of teaching biology concepts that occur at a molecular level. The use of computers and web-based lessons is not a new topic in secondary education studies, but there is not an abundance of research related to computational modeling alone. We began by reviewing the many studies that have already indicated the benefits of using computers in the classroom, with an emphasis on CMPs and simulations. Of these, we focused mostly on the ones that showed increased student engagement and influenced understanding of core science concepts. Based on the literature reviewed, a framework for curriculum designed around CMPs is proposed. Lastly, a model lesson is discussed to provide an example of how these professional-grade tools can be employed in the classroom. This report provides a basis for the continued development of constructivist curriculum built around the use of professional-grade computational tools in secondary science classrooms.
|
246 |
Multi-scale modeling of damage in masonry walls. Massart, Thierry J., 2 December 2003.
The conservation of historical heritage structures is an increasing concern nowadays for public authorities. The technical design phase of repair operations for these structures is of prime importance. Such operations usually require an estimation of the residual strength and of the potential structural failure modes of structures to optimize the choice of repair techniques.

Although rules of thumb and codes are widely used, numerical simulations now start to emerge as valuable tools. Such alternative methods may be useful in this respect only if they are able to account realistically for the possibly complex failure modes of masonry in structural applications.

The mechanical behaviour of masonry is characterized by the properties of its constituents (bricks and mortar joints) and their stacking mode. Structural failure mechanisms are strongly connected to the mesostructure of the material, with strong localization and damage-induced anisotropy.

The currently available numerical tools for this material are mostly based on approaches incorporating only one scale of representation. Mesoscopic models are used in order to study structural details with an explicit representation of the constituents and of their behaviour. The range of applicability of these descriptions is however restricted by computational costs. At the other end of the spectrum, macroscopic descriptions used in structural computations rely on phenomenological constitutive laws representing the collective behaviour of the constituents. As a result, these macroscopic models are difficult to identify and sometimes lead to wrong failure mode predictions.
The purpose of this study is to bridge the gap between mesoscopic and macroscopic representations and to propose a computational methodology for the analysis of plane masonry walls. To overcome the drawbacks of existing approaches, a multi-scale framework is used which allows mesoscopic behaviour features to be included in macroscopic descriptions, without the need for an a priori postulated macroscopic constitutive law. First, a mesoscopic constitutive description is defined for the quasi-brittle constituents of the masonry material, the failure of which mainly occurs through stiffness degradation. The mesoscopic description is therefore based on a scalar damage model. Plane stress and generalized plane state assumptions are used at the mesoscopic scale, leading to two-dimensional macroscopic continuum descriptions. Based on periodic homogenization techniques and unit cell computations, it is shown that the identified mesoscopic constitutive setting allows the characteristic shape of (anisotropic) failure envelopes observed experimentally to be reproduced. The failure modes corresponding to various macroscopic loading directions are also shown to be correctly captured. The in-plane failure mechanisms are correctly represented by a plane stress description, while the generalized plane state assumption, introducing simplified three-dimensional effects, is shown to be needed to represent out-of-plane failure under biaxial compressive loading. Macroscopic damage-induced anisotropy resulting from the constituents' stacking mode in the material, which is complex to represent properly using macroscopic phenomenological constitutive equations, is here obtained in a natural fashion. The identified mesoscopic description is introduced in a scale transition procedure to infer the macroscopic response of the material. The first-order computational homogenization technique is used for this purpose to extract this response from unit cells. Damage localization eventually appears as a natural outcome of the quasi-brittle nature of the constituents. The onset of macroscopic localization is treated as a material bifurcation phenomenon and is detected from an eigenvalue analysis of the homogenized acoustic tensor obtained from the scale transition procedure, together with a limit point criterion. The macroscopic localization orientations obtained with this type of detection are shown to be strongly related to the underlying mesostructural failure modes in the unit cells.
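For orientation, the two ingredients just mentioned can be written in their generic textbook form (shown below as a sketch; these are not necessarily the exact expressions used in the thesis): a scalar damage law that degrades the elastic stiffness, and a localization condition based on the singularity of the homogenized acoustic tensor.

```latex
% Generic forms, assumed for illustration (not the thesis's exact equations).
% Scalar damage law: the stress follows from the elastic stiffness degraded
% by a damage variable omega growing from 0 (intact) to 1 (fully damaged).
\boldsymbol{\sigma} = (1-\omega)\,\mathbb{C}:\boldsymbol{\varepsilon},
\qquad 0 \le \omega \le 1
% Onset of macroscopic localization: the acoustic tensor built from the
% homogenized tangent stiffness becomes singular for some band normal n.
\det\left(\mathbf{n}\cdot\mathbb{C}^{\mathrm{hom}}_{\mathrm{tan}}\cdot\mathbf{n}\right) = 0
```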
A well-posed macroscopic description is preserved by embedding localization bands at the macroscopic localization onset, with a width directly deduced from the initial periodicity of the mesostructure of the material. This allows the finite size of the fracturing zone to be taken into account in the macroscopic description. As a result of mesoscopic damage localization in narrow zones of the order of a mortar joint, the material response computationally deduced from unit cells may exhibit a snap-back behaviour. This precludes the use of such a response in the standard strain-driven multi-scale scheme.

Adaptations of the multi-scale framework required to treat the mesostructural response snap-back are proposed. This multi-scale framework is finally applied to a typical confined shear wall problem, which allows its ability to represent complex structural failure modes to be verified.
|
247 |
Towards Improving Conceptual Modeling: An Examination of Common Errors and Their Underlying Reasons. Currim, Sabah, January 2008.
Databases are a critical part of Information Technology. Following a rigorous methodology in the database lifecycle ensures the development of an effective and efficient database. Conceptual data modeling is a critical stage in the database lifecycle. However, modeling is hard and error-prone. An error can arise for multiple reasons. Finding the reasons behind errors helps explain why they were made and thus facilitates corrective action to prevent recurrence of that type of error in the future. We examine what errors are made during conceptual data modeling and why. In particular, this research looks at expertise-related reasons behind errors. We use a theoretical approach, grounded in work from educational psychology, followed by a survey study to validate the model. Our research approach includes the following steps: (1) measure expertise level, (2) classify kinds of errors made, (3) evaluate the significance of errors, (4) predict types of errors that will be made based on expertise level, and (5) evaluate the significance of each expertise level. Hypothesis testing revealed which aspects of expertise influence different types of errors. Once we better understand why expertise-related errors are made, future research can design tailored training to eliminate them.
|
248 |
Synthesis and Molecular Modeling Studies of Bicyclic Inhibitors of Dihydrofolate Reductase, Receptor Tyrosine Kinases and Tubulin. Raghavan, Sudhir, 8 March 2016.
The results from this work are reported in the two sections listed below.

Synthesis:

The following structural classes of compounds have been designed, synthesized, and studied as inhibitors of pjDHFR, RTKs, and tubulin:

1. 2,4-Diamino-6-(substituted-arylmethyl)pyrido[2,3-d]pyrimidines
2. 4-((3-Bromophenyl)linked)-6-(substituted-benzyl)-7H-pyrrolo[2,3-d]pyrimidin-2-amines
3. 6-Methyl-5-((substitutedphenyl)thio)-7H-pyrrolo[2,3-d]pyrimidin-2-amines

A total of 35 new compounds (excluding intermediates) were synthesized, characterized, and submitted for biological evaluation. Results from these studies will be presented in due course. Bulk synthesis of the potent lead compound 170 was carried out to facilitate in vivo evaluation.

Docking Studies:

Docking studies were performed using LeadIT, MOE, Sybyl, or FlexX for the target compounds listed above and for other compounds reported by Gangjee et al. against the following targets:

1. Dihydrofolate reductase: human, P. carinii, P. jirovecii (pjDHFR), and T. gondii (tgDHFR)
2. Thymidylate synthase: human (hTS) and T. gondii (tgTS)
3. Receptor tyrosine kinases: VEGFR2, EGFR, and PDGFR-β
4. Colchicine binding site of tubulin

Novel homology models were generated and validated for pjDHFR, tgDHFR, tgTS, PDGFR-β, and the F36C L65P pjDHFR double mutant. The tgTS homology model generated in this study and employed to design novel inhibitors shows remarkable similarity to the recently published X-ray crystal structures. Docking studies were performed to provide a molecular basis for the observed activity of the target compounds against DHFR, RTKs, or tubulin. Results from these studies support structure-based and ligand-based medicinal chemistry efforts to improve the potency and/or selectivity of analogs of the docked compounds against these targets.

Novel topomer CoMFA models were developed for tgTS and hTS using a set of 85 bicyclic inhibitors, and for the RTKs using a set of 60 inhibitors reported by Gangjee et al. The resultant models could be used to explain the potency and/or selectivity differences of selected molecules for tgTS over hTS. Topomer CoMFA maps show differences in steric and/or electronic requirements among the three RTKs and could be used, in conjunction with other medicinal chemistry approaches, to modulate the selectivity and/or potency of inhibitors with multiple-RTK inhibitory potential. Drug design efforts that involve virtual library screening using these topomer CoMFA models in conjunction with traditional medicinal chemistry techniques and docking are currently underway.
|
249 |
Improving Energy-Efficiency of Multicores using First-Order Modeling. Spiliopoulos, Vasileios, January 2016.
In recent decades, power consumption has evolved into one of the most critical resources in a computer system. In the form of the electricity bill in data centers, battery life in mobile devices, or thermal constraints in desktops and laptops, power consumption imposes several limitations on today's processors, and improving power and energy efficiency is one of the most urgent research topics of Computer Architecture. Dynamic Voltage and Frequency Scaling (DVFS) and Cache Resizing are among the most popular energy-saving techniques. Previous work, however, has focused on developing heuristics and trial-and-error methods that yield acceptable savings but fail to provide insight and understanding of how these techniques affect the power and performance of a computer system.

In contrast, this Thesis proposes the use of first-order modeling to improve the energy efficiency of computer systems. A first-order model needs to be (i) accurate enough to efficiently drive DVFS and Cache Resizing decisions, and (ii) simple enough to eliminate the overhead of collecting the required inputs to the model. We show that such models can be constructed and successfully applied in modern systems.

For DVFS, we propose to scale frequency down to exploit applications' memory slack, i.e., periods that the processor spends waiting for data to be fetched from main memory. In such cases, the processor frequency can be scaled down to save energy without an inordinate performance penalty. Our DVFS models can detect slack and predict the impact of DVFS on both power and performance with great accuracy. Cache Resizing, on the other hand, relies on the fact that many applications do not benefit from the vast amount of cache that modern processors are equipped with. In such cases, the cache can be resized to save static energy consumption at a limited performance cost. Since both techniques are related to the memory behavior of applications, we propose a unified model to manage the two techniques in tandem and maximize energy efficiency through synergistic DVFS and Cache Resizing.

Finally, our experience with DVFS in real systems motivated us to contribute to the integration of DVFS into the gem5 simulator. Unlike other simulators that ignore the role of the OS in DVFS, we extend the gem5 simulator by developing the hardware and software components that allow the existing Linux DVFS infrastructure to be seamlessly integrated into the simulator.
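To make the flavor of such a first-order model concrete, here is a small sketch (assumed Python; the constants, voltage/frequency relation, and interval-based formulation are illustrative and not the thesis's exact model) of how memory slack can be used to predict the execution-time and energy impact of running at a lower frequency.

```python
# Illustrative first-order DVFS model (not the exact model from the thesis).
# Idea: only the compute portion of execution time scales with frequency;
# time spent stalled on main memory ("slack") does not, so lowering the
# frequency during memory-bound phases costs little performance.

def predict(f_ghz, compute_cycles, mem_time_s,
            ceff_nf=1.0, v_at=lambda f: 0.6 + 0.1 * f, p_static_w=0.5):
    """Return (execution time [s], energy [J]) at frequency f_ghz.

    compute_cycles : cycles that scale with frequency
    mem_time_s     : frequency-independent memory stall time
    ceff_nf        : effective switched capacitance (illustrative units)
    v_at           : assumed voltage/frequency relation
    p_static_w     : static power
    """
    time_s = compute_cycles / (f_ghz * 1e9) + mem_time_s
    p_dynamic_w = ceff_nf * v_at(f_ghz) ** 2 * f_ghz      # ~ C * V^2 * f
    # Simplifying assumption: dynamic energy is charged only to compute time,
    # while static power is paid over the whole interval.
    energy_j = (p_dynamic_w * (compute_cycles / (f_ghz * 1e9))
                + p_static_w * time_s)
    return time_s, energy_j

# Memory-bound interval: 1e9 compute cycles plus 0.8 s of memory stalls.
for f in (3.0, 2.0, 1.0):
    t, e = predict(f, compute_cycles=1e9, mem_time_s=0.8)
    print(f"{f:.1f} GHz: time = {t:.2f} s, energy = {e:.2f} J")
```

Running the loop shows the basic trade-off: for a memory-bound interval, lowering the frequency increases execution time only slightly while reducing the dynamic power draw.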
|
250 |
Water Quality Study and Plume Behavior Modeling for Lake Pontchartrain at the Mouth of the Tchefuncte River. Leal Castellano, Jeimmy C., 8 May 2004.
Over the last several decades, the Lake Pontchartrain Basin has been impacted by the presence of high levels of fecal coliform bacteria following periods of rainfall. This is a potential problem for recreational uses of the area. In 2003, a field sampling study was initiated in the north shore area of the Lake at the mouth of the Tchefuncte River. The objectives were to determine the water quality in the area and to simulate the plume patterns from the Tchefuncte River. Twenty-eight stations at the mouth of the Tchefuncte River and a station at the Madisonville Bridge were selected for study on the basis of proximity to the mouth of the River. Fecal coliform counts were found to be "wet" weather-dependent at the mouth of the River and unsuitable for primary contact recreation for at least two to three days following a rain event. A 3-D finite volume hydrodynamics model (A Coupled Hydrodynamical-Ecological Model for Regional and Shelf Seas – COHERENS) and the TECPLOT™ equation feature were used for the prediction of contaminant plumes from the Tchefuncte River into Lake Pontchartrain. The field data were used to validate the model. The upper limits predicted by the model and those measured in the field were in good agreement. The model used river flow and tidal forcing without wind shear. The model verified that the wet-weather effect lasted for two to three days after high storm-water discharges at the mouth of the river.
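To illustrate the kind of transport calculation that plume prediction involves, the following is a greatly simplified Python sketch of a 2-D advection-diffusion tracer released at a river mouth; it is not the COHERENS model, and the grid, current, diffusivity, and source strength are invented for the example.

```python
# Greatly simplified 2-D advection-diffusion sketch of a river plume
# entering a lake -- an illustration of the kind of transport calculation
# a full 3-D finite-volume model performs, not the COHERENS model itself.
import numpy as np

nx, ny = 60, 40
dx = dy = 50.0          # grid spacing (m)
dt = 20.0               # time step (s)
u, v = 0.05, 0.0        # ambient current (m/s), eastward
kappa = 1.0             # horizontal diffusivity (m^2/s)

c = np.zeros((ny, nx))  # tracer concentration (e.g., a fecal coliform proxy)

for step in range(2000):
    # continuous source at the "river mouth" cell on the western boundary
    c[ny // 2, 0] = 100.0
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    # upwind advection (current is eastward, so difference against the west neighbor)
    adv = -u * (c - np.roll(c, 1, 1)) / dx - v * (c - np.roll(c, 1, 0)) / dy
    c = c + dt * (adv + kappa * lap)
    c[:, 0] = 0.0       # crude open-boundary reset; np.roll wrap-around is
                        # negligible over this short illustrative run

print("tracer concentration 500 m downstream of the mouth:",
      round(c[ny // 2, 10], 3))
```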
|