231

Pure subtype systems : a type theory for extensible software

Hutchins, DeLesley January 2009 (has links)
This thesis presents a novel approach to type theory called “pure subtype systems”, and a core calculus called DEEP which is based on that approach. DEEP is capable of modeling a number of interesting language techniques that have been proposed in the literature, including mixin modules, virtual classes, feature-oriented programming, and partial evaluation. The design of DEEP was motivated by two well-known problems: “the expression problem” and “the tag elimination problem”. The expression problem is concerned with the design of an interpreter that is extensible, and requires an advanced module system. The tag elimination problem is concerned with the design of an interpreter that is efficient, and requires an advanced partial evaluator. We present a solution in DEEP that solves both problems simultaneously, which has never been done before. These two problems serve as an “acid test” for advanced type theories, because they make heavy demands on the static type system. Our solution in DEEP makes use of the following capabilities. (1) Virtual types are type definitions within a module that can be extended by clients of the module. (2) Type definitions may be mutually recursive. (3) Higher-order subtyping and bounded quantification are used to represent partial information about types. (4) Dependent types and singleton types provide increased type precision. The combination of recursive types, virtual types, dependent types, higher-order subtyping, and bounded quantification is highly non-trivial. We introduce “pure subtype systems” as a way of managing this complexity. Pure subtype systems eliminate the distinction between types and objects; every term can behave as either a type or an object depending on context. A subtype relation is defined over all terms, and subtyping, rather than typing, forms the basis of the theory.
We show that higher-order subtyping is strong enough to completely subsume the traditional type relation, and we provide practical algorithms for type checking and for finding minimal types. The cost of using pure subtype systems lies in the complexity of the meta-theory. Unfortunately, we are unable to establish some basic meta-theoretic properties, such as type safety and transitivity elimination, although we have made some progress towards these goals. We formulate the subtype relation as an abstract reduction system, and we show that the type theory is sound if the reduction system is confluent. We can prove that reductions are locally confluent, but a proof of global confluence remains elusive. In summary, pure subtype systems represent a new and interesting approach to type theory. This thesis describes the basic properties of pure subtype systems, and provides concrete examples of how they can be applied. The DEEP calculus demonstrates that our approach has a number of real-world practical applications in areas that have proved to be quite difficult for traditional type theories to handle. However, the ultimate soundness of the technique remains an open question.
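As a loose illustration of the central idea above, that a single subtype relation over all terms can subsume typing, the following toy checker defines one relation over a term grammar in which literals and records serve as both objects and (singleton) types. This is a hedged sketch only: the names `Term`, `Top`, `Nat`, `Record`, and `subtype` are hypothetical and do not come from the DEEP calculus.

```python
# Toy "pure subtype" sketch: one relation over all terms, no separate
# typing judgment. NOT the DEEP calculus; all names are hypothetical.
from dataclasses import dataclass

class Term:
    pass

@dataclass(frozen=True)
class Top(Term):
    """The maximal term: every term is a subtype of Top."""

@dataclass(frozen=True)
class Nat(Term):
    """A numeric literal, which also acts as its own singleton type."""
    value: int

@dataclass(frozen=True)
class Record(Term):
    """A record term; fields is a tuple of (label, Term) pairs."""
    fields: tuple

def subtype(s: Term, t: Term) -> bool:
    """A single relation over all terms; "s has type t" is just s <: t."""
    if isinstance(t, Top):
        return True
    if isinstance(s, Nat) and isinstance(t, Nat):
        return s.value == t.value          # singleton-type behaviour
    if isinstance(s, Record) and isinstance(t, Record):
        sf = dict(s.fields)
        # width + depth subtyping: every field of t must be covered by s
        return all(l in sf and subtype(sf[l], u) for l, u in t.fields)
    return False
```

Here `Record((("x", Nat(1)),))` behaves as a type when it appears on the right of the relation and as an object when it appears on the left, which is the flavour of the types-as-objects unification the abstract describes.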
232

Formal computational framework for the study of molecular evolution

Kwiatkowski, Marek January 2010 (has links)
Over the past 10 years, multiple executable modelling formalisms for molecular biology have been developed in order to address the growing need for a system-level understanding of complex biological phenomena. An important class of these formalisms are biology-inspired process algebras, which offer, among other desirable properties, an almost complete separation of model specification (syntax) from model dynamics (semantics). In this thesis, the similarity between this separation and the genotype-phenotype duality in evolutionary biology is exploited to develop a process-algebraic approach to the study of evolution of biochemical systems. The main technical contribution of this thesis is the continuous π-calculus (cπ), a novel process algebra based on the classical π-calculus of Milner et al. Its two defining characteristics are a continuous, compositional, computationally inexpensive semantics, and a flexible interaction structure of processes (molecules). Both these features are conducive to evolutionary analysis of biochemical systems by, respectively, enabling many variants of a given model to be evaluated, and facilitating in silico evolution of new functional connections. A further major contribution is a collection of variation operators: syntactic model transformation schemes corresponding to common evolutionary events. When applied to a cπ model of a biochemical system, variation operators produce its evolutionary neighbours, yielding insights into the local fitness landscape and neutral neighbourhood. Two well-known biochemical systems are modelled in this dissertation to validate the developed theory. One is the KaiABC circadian clock in the cyanobacterium S. elongatus; the other is a mitogen-activated protein kinase cascade. In each case we study the system itself as well as its predicted evolutionary variants.
Simpler examples, particularly that of a generic enzymatic reaction, are used throughout the thesis to illustrate important concepts as they are introduced.
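The generic enzymatic reaction mentioned above can be given a continuous, ODE-style semantics of the kind cπ assigns to processes. The following is a minimal sketch, not cπ itself: the reaction E + S ⇌ ES → E + P under mass-action kinetics, integrated with a forward-Euler step. The rate constants and initial concentrations are illustrative placeholders, not values from the thesis.

```python
# Mass-action ODEs for E + S <-> ES -> E + P, forward-Euler integration.
# Parameter values are hypothetical placeholders.

def enzyme_step(state, k1=1.0, k_1=0.5, k2=0.3, dt=0.001):
    e, s, es, p = state
    v_bind   = k1 * e * s      # E + S -> ES
    v_unbind = k_1 * es        # ES -> E + S
    v_cat    = k2 * es         # ES -> E + P
    de  = -v_bind + v_unbind + v_cat
    ds  = -v_bind + v_unbind
    des =  v_bind - v_unbind - v_cat
    dp  =  v_cat
    return (e + de * dt, s + ds * dt, es + des * dt, p + dp * dt)

def simulate(state=(1.0, 10.0, 0.0, 0.0), steps=20000):
    """Integrate to t = steps * dt; returns (E, S, ES, P)."""
    for _ in range(steps):
        state = enzyme_step(state)
    return state
```

Because the semantics is compositional in this style, evaluating an evolutionary variant of the model amounts to re-running the integration with a perturbed rate or rewired reaction, which is what makes the many-variant analyses described above computationally feasible.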
233

The Dynamic Foundation of Fractal Operators.

Bologna, Mauro 05 1900 (has links)
The fractal operators discussed in this dissertation are introduced in the form originally proposed in an earlier book by the candidate, which proves to be very convenient for physicists due to its heuristic and intuitive nature. This dissertation proves that these fractal operators are the most convenient tools for addressing a number of problems in condensed matter, in accordance with the point of view of many other authors and with the candidate's earlier book. The microscopic foundation of the fractal calculus on the basis of either classical or quantum mechanics is still unknown, and the second part of this dissertation is devoted to this important task. It proves that the adoption of a master equation approach, and so of probabilistic as well as dynamical arguments, yields a satisfactory solution of the problem, as shown in work by the candidate already published. At the same time, this dissertation shows that the foundation of Lévy statistics is compatible with ordinary statistical mechanics and thermodynamics. The connection with the Kolmogorov-Sinai entropy is a delicate problem that can, however, be successfully solved. The derivation from a microscopic Liouville-like approach based on densities, on the other hand, is shown to be impossible. This dissertation, in fact, establishes the existence of a striking conflict between densities and trajectories. The third part of this dissertation is devoted to establishing the consequences of the conflict between trajectories and densities in quantum mechanics, and triggers a search for the experimental assessment of spontaneous wave-function collapses. The research work of this dissertation has been the object of several papers and two books.
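One common concrete form of such fractional (fractal) operators is the Grünwald-Letnikov derivative, which replaces the integer-order difference quotient with a weighted sum over the function's history. The sketch below is the standard textbook scheme, offered only as background; it is not the candidate's own formulation.

```python
# Grünwald-Letnikov approximation of the fractional derivative D^alpha f(x),
# with lower terminal 0: D^alpha f(x) ~ h^(-alpha) * sum_k w_k f(x - k h),
# where w_k = (-1)^k C(alpha, k) is built by the recurrence
# w_k = w_{k-1} * (k - 1 - alpha) / k.
import math

def gl_fractional_derivative(f, alpha, x, h=1e-3):
    n = int(x / h)
    coeff = 1.0          # w_0 = 1
    total = f(x)
    for k in range(1, n + 1):
        coeff *= (k - 1 - alpha) / k
        total += coeff * f(x - k * h)
    return total / h ** alpha
```

As a sanity check, the half-derivative of f(x) = x is known in closed form: D^{1/2} x = 2 sqrt(x / pi), which the approximation reproduces to within the first-order error of the scheme.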
234

Languages, Logics, Types and Tools for Concurrent System Modelling

Gutkovas, Ramūnas January 2016 (has links)
A concurrent system is a computer system with components that run in parallel and interact with each other. Such systems are ubiquitous and are notably responsible for supporting the infrastructure for transport, commerce and entertainment. They are very difficult to design and implement correctly: many different modelling languages and verification techniques have been devised to reason about them and verify their correctness. However, existing languages and techniques can only express a limited range of systems and properties. In this dissertation, we address some of the shortcomings of established models and theories in four ways: by introducing a general modal logic, extending a modelling language with types and a more general operation, providing automated tool support, and adapting an established behavioural type theory to specify and verify systems with unreliable communication. A modal logic for transition systems is a way of specifying properties of concurrent systems abstractly. We have developed a modal logic for nominal transition systems. Such systems are common and include the pi-calculus and psi-calculi. The logic is adequate for many process calculi with regard to their behavioural equivalence, even for those for which no adequate logic had previously been considered, for example CCS, the pi-calculus, psi-calculi, the spi-calculus, and the fusion calculus. The psi-calculi framework is a parametric framework that subsumes many existing process calculi. We extend psi-calculi with a type system, called sorts, and a more general notion of pattern matching in an input process. This gives additional expressive power, allowing us to capture directly even more process calculi than was previously possible. We have re-established the main results of psi-calculi to show that the extensions are consistent. We have developed a tool based on psi-calculi, called the psi-calculi workbench.
It provides automation for executing the psi-calculi processes and generating a witness for a behavioural equivalence between processes. The tool can be used both as a library and as an interactive application. Lastly, we developed a process calculus for unreliable broadcast systems and equipped it with a binary session type system. The process calculus captures the operations of scatter and gather in wireless sensor and ad-hoc networks. The type system enjoys the usual property of subject reduction, meaning that well-typed processes reduce to well-typed processes. To cope with unreliability, we also introduce a notion of process recovery that does not involve communication. This is the first session type system for a model with unreliable communication.
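To convey what a behavioural-equivalence witness looks like in the simplest possible setting, the toy below checks strong bisimulation on an explicit finite labelled transition system by naive partition refinement. This is only an illustrative sketch under that simplification; the workbench described above operates on psi-calculi processes, not on explicit finite LTSs.

```python
# Naive partition refinement for strong bisimulation on a finite LTS.
# Illustrative sketch only; not the psi-calculi workbench's algorithm.

def bisimulation_classes(states, trans):
    """trans: dict mapping (state, label) -> set of successor states.
    Returns the coarsest partition of states into bisimulation classes."""
    labels = {l for (_, l) in trans}
    blocks = [set(states)]          # start with one block, split until stable
    changed = True
    while changed:
        changed = False

        def sig(s):
            # signature: which current blocks s can reach under each label
            return frozenset(
                (l, i)
                for l in labels
                for i, b in enumerate(blocks)
                if trans.get((s, l), set()) & b
            )

        new = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(sig(s), set()).add(s)
            new.extend(groups.values())
        if len(new) != len(blocks):
            changed = True
        blocks = new
    return blocks
```

The final partition is itself the witness: two states are bisimilar exactly when they land in the same block, and a tool can report the partition (or a distinguishing formula from it) to the user.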
235

Application and analysis of just in time teaching methods in a calculus course

Natarajan, Rekha January 1900 (has links)
Doctor of Philosophy / Department of Mathematics / Andrew G. Bennett / "Just In Time Teaching" (JiTT) is a teaching practice that utilizes web based technology to collect information about students' background knowledge prior to attending lecture. Traditionally, students answer either multiple choice, short answer, or brief essay questions outside of class; based on student responses, instructors adjust their lectures "just-in-time." In this study, modified JiTT techniques in the form of online review modules were applied to a first semester calculus course at a large midwestern state university during the spring 2012 term. The review modules covered algebra concepts and skills relevant to the new material presented in calculus lecture (the "just-in-time" adjustment of the calculus lectures was not implemented in this teaching experiment). The reviews were part of the course grade. Instead of being administered purely "just-in-time," the reviews were assigned ahead of time as part of the online homework component of Calculus-I. While previous studies have investigated the use of traditional JiTT techniques in math courses and reported student satisfaction with such teaching tools, these studies have not addressed gains in student achievement with respect to specific calculus topics. The goal of this study was to investigate the latter, and to determine whether timing of the reviews plays a role in improving student performance. Student progress on weekly Calculus-I online assignments was tracked in spring of 2012 and compared to student scores from weekly Calculus-I online assignments from spring 2011, when modified JiTT instruction was not available. For select Calculus-I online assignments during the spring 2012 term, we discovered that the review modules significantly increased the number of students receiving perfect scores, even when the reviews were not purely administered "just-in-time."
Analysis of performance, success of review assignments, and future implications are also discussed.
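A comparison of the kind described above, whether the proportion of perfect scores differs between two cohorts, is commonly tested with a two-proportion z-test. The sketch below illustrates that computation; the counts are hypothetical placeholders, not data from the study.

```python
# Two-proportion z-test sketch. The counts passed in below are invented
# placeholder numbers, NOT results from the dissertation.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for H0: the two underlying proportions are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# hypothetical cohorts: 180/300 perfect scores vs 140/300
z = two_proportion_z(180, 300, 140, 300)
```

A |z| above roughly 1.96 would indicate a difference significant at the 5% level under the usual normal approximation.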
236

Network on Chip : Performance Bound and Tightness

Zhao, Xueqian January 2015 (has links)
With good scalability, modularity and large bandwidth, Network-on-Chip (NoC) has been widely applied in many-core Chip Multiprocessor (CMP) and Multiprocessor System-on-Chip (MPSoC) architectures. The provision of guaranteed service emerges as an important NoC design problem due to the application requirements in Quality-of-Service (QoS). Formal analysis of performance bounds plays a critical role in ensuring guaranteed service of NoC by giving insights into how the design parameters impact the network performance. The study in this thesis proposes analysis methods for delay and backlog bounds with Network Calculus (NC). Based on xMAS (eXecutable Micro-Architectural Specification), a formal framework to model communication fabrics, the delay bound analysis procedure is presented using NC. The micro-architectural xMAS representation of a canonical on-chip router is proposed with both the data flow and control flow well captured. Furthermore, a well-defined xMAS model for a specific application on an NoC can be created with network and flow knowledge and then be mapped to a corresponding NC analysis model for end-to-end delay bound calculation. The xMAS model effectively bridges the gap between the informal NoC micro-architecture and the formal analysis model. Besides the delay bound, the analysis of the backlog bound is also crucial for buffer dimensioning in on-chip Virtual Channel (VC) routers. In this thesis, basic buffer use cases are identified with corresponding analysis models proposed so as to decompose the complex flow contention in a network. Then we develop a topology-independent analysis technique to carry out the backlog bound analysis step by step. Algorithms are developed to automate this analysis procedure. Accompanying the analysis of performance bounds, tightness evaluation is an essential step to ensure the validity of the analysis models.
However, this evaluation process is often a tedious, time-consuming, and manual simulation process in which many simulation parameters may have to be configured before the simulations run. In this thesis, we develop a heuristics-aided tightness evaluation method for the analytical delay and backlog bounds. The tightness evaluation is abstracted as constrained optimization problems with the objectives formulated as implicit functions with respect to the system parameters. Based on the well-defined problems, heuristics can be applied to guide a fully automated configuration searching process which incorporates cycle-accurate, bit-accurate simulations. As an example of such heuristics, Adaptive Simulated Annealing (ASA) is adopted to guide the search in the configuration space. Experimental results indicate that the performance analysis models based on NC give tight results which are effectively found by the heuristics-aided evaluation process, even when the model has a multidimensional discrete search space and complex constraints. In order to facilitate xMAS modeling and corresponding validation of the performance analysis models, the thesis presents an xMAS tool developed in Simulink. It provides a friendly graphical interface for xMAS modeling and parameter configuring based on the powerful Simulink modeling environment. Hierarchical model build-up and Verilog-HDL code generation are supported to manage complex models and conduct simulations. Owing to its synthesizable xMAS library and good extensibility, this xMAS tool has promising use in application-specific NoC design based on the xMAS components.
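For background, the closed-form Network Calculus bounds that such analyses build on are simple to state: for a token-bucket arrival curve alpha(t) = sigma + rho*t and a rate-latency service curve beta(t) = R*max(t - T, 0) with R >= rho, the delay bound is T + sigma/R and the backlog bound is sigma + rho*T. These are textbook NC results, not the thesis's xMAS-specific derivations.

```python
# Classic single-node Network Calculus bounds for a (sigma, rho) arrival
# curve served by a rate-latency curve beta(t) = R * max(t - T, 0).
# Textbook results; units (e.g. flits and cycles) are up to the caller.

def delay_bound(sigma, rho, R, T):
    """Max delay: the server's latency plus time to drain a full burst."""
    assert R >= rho, "stability requires service rate >= sustained arrival rate"
    return T + sigma / R

def backlog_bound(sigma, rho, R, T):
    """Max backlog: the burst plus what arrives during the latency T."""
    assert R >= rho
    return sigma + rho * T
```

For instance, a flow with a 16-flit burst and sustained rate 0.5 flits/cycle through a router offering 1 flit/cycle after a 4-cycle latency is delayed at most 20 cycles and queues at most 18 flits; the thesis's tightness evaluation asks how close simulated worst cases come to such analytical numbers.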
237

Minimizing the Probability of Ruin in Exchange Rate Markets

Chase, Tyler A. 30 April 2009 (has links)
The goal of this paper is to extend the results of Bayraktar and Young (2006) on minimizing an individual's probability of lifetime ruin; i.e., the probability that the individual goes bankrupt before dying. We consider a scenario in which the individual is allowed to invest in both a domestic bank account and a foreign bank account, with the exchange rate between the two currencies being modeled by geometric Brownian motion. Additionally, we impose the restriction that the individual is not allowed to borrow money, and assume that the individual's wealth is consumed at a constant rate. We derive formulas for the minimum probability of ruin as well as the individual's optimal investment strategy. We also give a few numerical examples to illustrate these results.
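The setup above can be illustrated by brute force: simulate wealth paths under a fixed foreign-account allocation, with geometric-Brownian-motion exchange-rate risk, constant consumption, and an exponentially distributed time of death, and count how often wealth hits zero first. This Monte-Carlo sketch is not the paper's analytic solution, and all parameter values are illustrative placeholders.

```python
# Monte-Carlo estimate of lifetime ruin probability for a FIXED allocation
# pi_frac in [0, 1] (no borrowing). Euler scheme for
#   dX = [(r + pi_frac*(mu - r)) X - c] dt + pi_frac * sigma * X dB,
# death time ~ Exponential(hazard). Parameters are hypothetical.
import math, random

def ruin_probability(x0=10.0, c=1.0, r=0.02, mu=0.05, sigma=0.2,
                     pi_frac=0.5, hazard=0.04, dt=0.01, paths=2000, seed=1):
    rng = random.Random(seed)
    ruins = 0
    for _ in range(paths):
        x, t = x0, 0.0
        t_death = rng.expovariate(hazard)        # exponential lifetime
        while t < t_death:
            drift = (r + pi_frac * (mu - r)) * x - c   # constant consumption c
            dB = rng.gauss(0.0, math.sqrt(dt))
            x += drift * dt + pi_frac * sigma * x * dB
            t += dt
            if x <= 0.0:                          # ruin before death
                ruins += 1
                break
    return ruins / paths
```

Sweeping `pi_frac` over [0, 1] in such a simulation gives a numerical check on the shape of the optimal no-borrowing strategy that the paper derives analytically.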
238

Optimal control of hereditary differential system.

January 1985 (has links)
by Yung Siu-Pang. / Includes bibliographical references / Thesis (M.Ph.)--Chinese University of Hong Kong, 1985
239

Variational and optimal control problems with time delay.

January 1977 (has links)
Tai Chi-hung. / Thesis (M.Phil.)--Chinese University of Hong Kong. / Bibliography: leaves 40-41.
240

Insights from College Algebra Students' Reinvention of Limit at Infinity

McGuffey, William January 2018 (has links)
The limit concept in calculus has received a lot of attention from mathematics education researchers, partly due to its position in mathematics curricula as an entry point to calculus and partly due to its complexities that students often struggle to understand. Most of this research focuses on students who had previously studied calculus or were enrolled in a calculus course at the time of the study. In this study, I aimed to gain insights into how students with no prior experience with precalculus or calculus might think about limits via the concept of limit at infinity, with the goal of designing instructional tasks based on these students’ intuitive strategies and ways of reasoning. In particular, I designed a sequence of instructional tasks that starts with an experientially realistic starting point that involves describing the behavior of changing quantities in real-world physical situations. From there, the instructional tasks build on the students’ ways of reasoning through tasks involving making predictions about the values of the quantity and identifying characteristics associated with making good predictions. These instructional tasks were developed through three iterations of design research experimentation. Each iteration included a teaching experiment in which a pair of students engaged in the instructional tasks under my guidance. Through ongoing and reflective analysis, the instructional tasks were refined to evoke the students’ intuitive strategies and ways of thinking and to leverage these toward developing a definition for the concept of limit at infinity. The final, refined sequence of instructional tasks together with my rationale for each task and expected student responses provides insights into how students can come to understand the concept of limit at infinity in a way that is consistent with the formal definition prior to receiving formal instruction on limits. 
The results presented in this dissertation come from the third and final teaching experiment, in which Jon and Lexi engaged in the sequence of instructional tasks. Although Jon and Lexi did not construct a definition of limit at infinity consistent with a formal definition, they demonstrated many strategies and ways of reasoning that anticipate the formal definition of limit at infinity. These include identifying a limit candidate, defining the notion of closeness, describing the notion of sufficiently large, and coordinating the notion of closeness in the range with the notion of sufficiently large in the domain. On the other hand, Jon and Lexi demonstrated some strategies and ways of reasoning that potentially inhibited their development of a definition consistent with the formal definition. Pedagogical implications on instruction in calculus and its prerequisites are discussed as well as contributions to the field and potential directions for future research.
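For reference, the formal definition toward which strategies such as "identifying a limit candidate", "closeness", and "sufficiently large" build is the standard epsilon-N characterization found in calculus texts (stated here for context, not as a formulation taken from the dissertation):

```latex
% Standard epsilon-N definition of a limit at infinity.
\[
  \lim_{x \to \infty} f(x) = L
  \quad\Longleftrightarrow\quad
  \forall \varepsilon > 0,\ \exists N > 0 \text{ such that }
  x > N \implies |f(x) - L| < \varepsilon .
\]
```

The students' coordination of closeness in the range (the role of epsilon) with sufficiently large inputs in the domain (the role of N) anticipates exactly this quantifier structure.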
