341.
Theory and computation on nonlinear vortex/wave interactions in internal and external flows
Patel, Rupa Ashyinkumar, January 1997
No description available.

342.
The prediction of swirling recirculating flow and the fluid flow and mixing in stirred tanks
Al-Wazzan, Yousef Jassim Easa, January 1997
No description available.

343.
Numerical prediction of two fluid systems with sharp interfaces
Ubbink, Onno, January 1997
No description available.

344.
An incremental parser for government-binding theory
Macias, Benjamin, January 1991
No description available.

345.
Applications of Non-Traditional Measurements for Computational Imaging
Treeaporn, Vicha, January 2017
Imaging systems play an important role in many diverse applications. Requirements for these applications, however, can lead to complex or sub-optimal designs. Traditionally, imaging systems are designed to yield a visually pleasing representation, or "pretty picture," of the scene or object, often because a human operator views the acquired image to perform a specific task. With digital computers increasingly being used for automation, a large number of algorithms have been designed to accept a pretty picture as input. This isomorphic representation, however, is neither necessary nor optimal for tasks such as data compression, transmission, pattern recognition, or classification. This disconnect between optical measurement and post-processing for the final system outcome has motivated interest in computational imaging (CI). In a CI system, the optical sub-system and the post-processing sub-system are jointly designed to optimize system performance for a specific task. In these hybrid imagers, the measured image may no longer be a pretty picture but rather an intermediate, non-traditional measurement. In this work, applications of non-traditional measurements are considered for computational imaging: two systems for an image reconstruction task are studied, and one system for a detection task is investigated. First, a CI system to extend the field of view is analyzed and an experimental prototype is demonstrated. This prototype validates the simulation study and is designed to yield a 3x field-of-view improvement relative to a conventional imager. Second, a CI system to acquire time-varying natural scenes, i.e., video, is developed. A candidate system using 8x8x16 spatiotemporal blocks yields about 292x compression compared to a conventional imager. Candidate electro-optical architectures, including charge-domain processing, to implement this approach are also discussed. Lastly, a CI system with x-ray pencil-beam illumination is investigated for a detection task, where system performance is quantified using an information-theoretic metric.
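As a rough illustration of what a non-traditional measurement can look like, the sketch below simulates a generic compressive measurement of a single 8x8x16 spatiotemporal block with a random projection. The projection matrix Phi and the per-block measurement count m are hypothetical stand-ins, not the thesis's actual measurement design, and the 292x figure quoted above refers to the full system rather than to this toy per-block ratio.

    import numpy as np

    # Illustrative sketch only: one 8x8x16 spatiotemporal block measured by a
    # generic random projection. Phi and m are assumptions for illustration.
    rng = np.random.default_rng(0)
    block = rng.random((8, 8, 16))       # one spatiotemporal block of the scene
    x = block.reshape(-1)                # vectorize: n = 1024 voxels
    n = x.size
    m = 32                               # hypothetical measurement count per block
    Phi = rng.standard_normal((m, n))    # non-traditional (projective) measurement
    y = Phi @ x                          # m linear measurements instead of n samples
    print(f"per-block measurement reduction: {n // m}x")

The vector y is no longer a viewable image; a reconstruction or inference algorithm operates on it directly, which is the sense in which measurement and post-processing are co-designed.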

346.
A default logic approach to the derivation of natural language presuppositions
Mercer, Robert Ernest, January 1987
A hearer's interpretation of the meaning of an utterance consists of more than what is conveyed by the sentence alone. Other parts of the meaning are produced as inferences from three knowledge sources: the sentence itself, knowledge about the world, and knowledge about language use. One inference of this type is the natural language presupposition. This category of inference is distinguished by a number of features: the inferences are generated only, though not necessarily, if certain lexical or syntactic environments are present in the uttered sentence; normal interpretations of these presuppositional environments in the scope of a negation in a simple sentence produce the same inferences as the unnegated environment; and the inference can be cancelled by information in the conversational context.
We propose a method for deriving presuppositions of natural language sentences that has its foundations in an inference-based concept of meaning. Whereas standard (monotonic) forms of reasoning are able to capture portions of a sentence's meaning, such as its entailments, non-monotonic forms of reasoning are required to derive its presuppositions. Gazdar's idea that presuppositions must be consistent with the context, together with the usual connection of presuppositions with lexical and syntactic environments, motivates the use of Default Logic as the formal nonmonotonic reasoning system. Not only does the default logic approach provide a natural means to represent presuppositions, but a single (slightly restricted) default proof procedure is all that is required to generate them. The naturalness and simplicity of this method contrast with the traditional projection methods. Also available to the logical approach is a proper treatment of 'or' and 'if ... then ...', which is not available to any of the projection methods.
The default logic approach is compared with four others: three projection methods and one non-projection method. As well as demonstrating empirical and methodological difficulties with the other methods, the detailed investigation provides the motivation for the topics discussed in connection with the default logic approach. Some of the difficulties have been solved using the default logic method, while possible solutions for others have only been sketched.
A brief discussion of a new method for providing corrective answers to questions is presented. The novelty of this method is that the corrective answers are viewed as correcting presuppositions of the answer rather than of the question.
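To make the cancellation behaviour concrete, here is a minimal sketch in Reiter's default-rule notation, using the stock example that "Mary stopped smoking" presupposes "Mary smoked"; the predicate names are illustrative, not the thesis's own formalisation:

    \frac{stop(m,\, smoking) \;:\; smoked(m)}{smoked(m)}

Read: given the presuppositional environment stop(m, smoking), infer the presupposition smoked(m) provided it is consistent with the context. If the context already contains \neg smoked(m), the justification fails and the presupposition is cancelled, which is exactly the nonmonotonic behaviour described above.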

347.
Enriching deontic logic with typicality
Chingoma, Julian, January 2020
Legal reasoning is the method legal practitioners apply to reach legal decisions. For a given scenario, legal reasoning requires not only the facts of the scenario but also the legal rules to be enforced within it. Formal logic has long been used for reasoning tasks in many domains. Deontic logic, with its built-in notions of obligation, permission and prohibition, is often used to formalise legal scenarios. Within the legal domain it is important to recognise that there are many exceptions and conflicting obligations. This motivates enriching deontic logic not only with the notion of defeasibility, which allows for reasoning about exceptions, but with a stronger notion of typicality that builds on defeasibility. KLM-style defeasible reasoning, introduced by Kraus, Lehmann and Magidor (KLM), is a logic system that employs defeasibility, while Propositional Typicality Logic (PTL) serves the same role for the stronger notion of typicality. Deontic paradoxes are often used to examine deontic logic systems: the scenarios arising from the paradoxes' structures produce undesirable results when desirable deontic properties are applied to them, even though the scenarios themselves seem intuitive. This dissertation shows that KLM-style defeasible reasoning and PTL are both effective when applied to the analysis of the deontic paradoxes. We first present the background material, comprising propositional logic, which forms the foundation for the other logic systems, together with KLM-style defeasible reasoning, deontic logic and PTL. We outline the paradoxes, along with the issues they raise, within the presentation of deontic logic. We then show that, for each of the two logic systems, we can intuitively translate the paradoxes, satisfy many of the desirable deontic properties, and produce reasonable solutions to the issues resulting from the paradoxes.
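As an illustration of the kind of scenario at stake, consider Chisholm's paradox, a standard deontic puzzle (whether it is among the paradoxes treated in the dissertation is not stated here). In standard deontic logic the four premises

    O(g), \quad O(g \to t), \quad \neg g \to O(\neg t), \quad \neg g

(Jones ought to go to his neighbours' aid; it ought to be that if he goes, he tells them; if he does not go, he ought not to tell them; he does not go) jointly yield both O(t) and O(\neg t), even though the premises seem mutually consistent. A defeasible reading replaces the material conditionals with defeasible ones, e.g. g \mathrel{|\!\sim} O(t) and \neg g \mathrel{|\!\sim} O(\neg t) in KLM-style notation, so that the obligation appropriate to the actual, exceptional situation can override the other.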

348.
Development of a Micromorphic (Multiscale) Material Model aimed at Cardiac Tissue Mechanics
Dollery, Devin, 21 January 2021
Computational cardiac mechanics has historically relied on classical continuum models; however, classical models amalgamate the behaviour of a material's micro-constituents, and thus only approximate the macroscopically observable material behaviour as a purely averaged response that originates at micro-structural levels. As such, classical models do not directly and independently address the response of the cardiac tissue (myocardium) components, such as the muscle fibres (myocytes) or the hierarchically organized cytoskeleton. Multiscale continuum models have been developed over time to account for some of the micro-architecture of a material, and allow for additional degrees of freedom in the continuum over classical models. The micromorphic continuum [15] is a multiscale model whose additional degrees of freedom, referred to as micro-directors, lend themselves to the description of fibres. The micromorphic model has great potential to replicate certain characteristics of the myocardium in more detail. Specifically, the micromorphic micro-directors can represent the myocytes, thus allowing for non-affine relative deformations of the myocytes and the extracellular matrix (ECM) of the tissue constraining them, which is not directly possible with classical models. The generalized micromorphic approach of Sansour [73, 74, 75] is explored in this study. Firstly, numerical examples are investigated and several novel proofs are devised to understand the behaviour of the micromorphic model with regard to numerical instabilities, micro-director displacements, and macro-traction vector contributions. An alternative micromorphic model is developed by the author for comparison against Sansour's model regarding the handling of micro-boundary conditions and other numerical artifacts. Secondly, Sansour's model is applied to cardiac modelling, whereby a macro-scale strain measure represents the deformation of the ECM of the tissue, a micro-scale strain measure represents the muscle fibres, and a third strain measure describes the interaction of the two constituents. Separate constitutive equations are developed to give unique stiffness responses to the ECM and the myocytes. The micromorphic model is calibrated for cardiac tissue, first against triaxial shear experiments [80] and subsequently against a pressure-volume relationship. The contribution of the micromorphic additional degrees of freedom to the various triaxial shear modes is quantified, and an analytical explanation is provided for the differences in contributions. The passive filling phase of the heart cycle is investigated using a patient-specific left-ventricle geometry supplied by the Cape Universities Body Imaging Centre (CUBIC) [38].
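For orientation, one common (Eringen-style) choice of micromorphic deformation measures is sketched below; the specific measures in Sansour's formulation, and their pairing with ECM, fibres and interaction in this dissertation, may differ in detail.

    F = \partial x / \partial X              (macro deformation gradient)
    C = F^{T} F                              (macro strain measure)
    \Psi = F^{T} \chi                        (macro-micro interaction measure)
    \Gamma = F^{T} \, Grad \, \chi           (micro-deformation gradient measure)

Here \chi is the micro-deformation map acting on the micro-directors. Because \chi need not equal F, the micro-directors (here, the myocytes) can deform non-affinely relative to the surrounding continuum, which is the property exploited above.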

349.
Time integration schemes for piecewise linear plasticity
Rencontre, LJ, January 1991
The formulation of a generalized trapezoidal rule for the integration of the constitutive equations for a convex elastic-plastic solid is presented. This rule, which is based on an internal variable description, is consistent with a generalized trapezoidal rule for creep. It is shown that by suitable linear extrapolation, the standard backward difference algorithm can lead to this generalized trapezoidal rule or to a generalized midpoint rule. In either case, the generalized rules retain the symmetry of the consistent tangent modulus. It is also shown that the generalized trapezoidal and midpoint rules are fully equivalent in the sense that they lead to the establishment of the same minimum principle for the increment. The generalized trapezoidal rule thus inherits the notion of B-stability, and both rules offer the opportunity to exploit the second-order rate of convergence for α = ½. However, in the generalized trapezoidal rule the equilibrium and constitutive equations are fully satisfied at the end of the time increment. This may be more convenient than the generalized midpoint rule, in which equilibrium and plastic consistency are satisfied at the generalized midpoint. A backward difference return algorithm for piecewise linear yield surfaces is then formulated, with attention restricted to an associated flow rule and isotropic material behavior. Both the Tresca and Mohr-Coulomb yield surfaces, with perfectly plastic and linear hardening rules, are considered in detail. The algorithm has the advantage of being fully linked to the governing principles and avoids the inherent problems associated with corners on the yield surface. It is fully consistent in that no heuristic assumptions are made. The algorithm is extended to include the generalized trapezoidal rule in such a way that the general structure of the backward difference algorithm is maintained. This allows both the computational advantages of the generalized trapezoidal rule to be exploited and a basis for comparison between this algorithm and existing backward difference algorithms to be established. Using this fully consistent algorithm, the return paths in stress space for the Tresca and Mohr-Coulomb yield surfaces with perfectly plastic and linear hardening rules are identified. These return paths thus provide a basis against which heuristically developed algorithms can be compared.
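For reference, the generalized trapezoidal rule referred to above, applied to an internal-variable rate equation \dot{q} = f(q), takes the standard form

    q_{n+1} = q_n + \Delta t \, [ (1 - \alpha) f(q_n) + \alpha f(q_{n+1}) ], \quad 0 \le \alpha \le 1,

with α = 0 the explicit forward Euler scheme, α = ½ the trapezoidal rule with its second-order accuracy, and α = 1 the backward difference scheme. The generalized midpoint rule instead evaluates the rate once at an intermediate state,

    q_{n+1} = q_n + \Delta t \, f(q_{n+\alpha}), \quad q_{n+\alpha} = (1 - \alpha) q_n + \alpha q_{n+1},

and the two families coincide at the endpoints α = 0, 1 and, for a linear rate function, at every α.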

350.
Planning of Treatment at Rehabilitation Clinics Using a Two Stage Mixed-Integer Programming Approach
König, Tobias, January 2021
This thesis presents a method for planning patient intake and the assignment of treatment personnel at rehabilitative care clinics. The rehabilitation process requires patients to undergo a series of treatments spanning several weeks, involving therapists of different disciplines. We have developed a two-stage mixed-integer programming model which plans when each admitted patient will receive treatment and assigns therapists. In addition, the model provides support for deciding when to admit new patients and when to hire additional staff in order to maximise the clinic's patient throughput. Numerical results based on a real rehabilitation clinic are presented and discussed.
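To give a flavour of the modelling style (not the thesis's actual two-stage formulation, which is far richer), here is a deliberately tiny admission-planning sketch in Python using PuLP; all sets, parameters, and the single capacity constraint are hypothetical:

    from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, LpStatus, lpSum

    patients = ["p1", "p2", "p3"]
    sessions_needed = {"p1": 3, "p2": 2, "p3": 4}  # weekly therapy sessions (assumed)
    weekly_capacity = 6                            # therapist sessions available (assumed)

    prob = LpProblem("clinic_intake_sketch", LpMaximize)
    admit = {p: LpVariable(f"admit_{p}", cat=LpBinary) for p in patients}

    # Objective: maximise patient throughput, here simply the number admitted.
    prob += lpSum(admit.values())

    # Capacity: sessions demanded by admitted patients must fit within the week.
    prob += lpSum(sessions_needed[p] * admit[p] for p in patients) <= weekly_capacity

    prob.solve()
    print(LpStatus[prob.status], {p: int(admit[p].value()) for p in patients})

A full model in this spirit would index the assignment variables by patient, therapist discipline and week, and add hiring decisions, which is where the two-stage structure described above comes in.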