481 | A New Model for Coalition Formation Games (Samman, Alfred, 13 July 2012)
We present two broad categories of games, namely group matching games and bottleneck routing games on grids. Borrowing ideas from coalition formation games, we introduce a new category of games which we call group matching games. We investigate how these games perform when agents are allowed to make selfish decisions that increase their individual payoffs, versus when agents act towards the social benefit of the game as a whole. The Price of Anarchy (PoA) and Price of Stability (PoS) metrics are used to quantify these comparisons. We show that the PoA of a group matching game is at most kα and the PoS is at most k/α, where k is the maximum size of a group and α is a switching cost. Furthermore, we show that the PoA and PoS of the games do not change significantly even if we increase γ, the number of groups that an agent is allowed to join.
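For readers unfamiliar with the two metrics, the following minimal Python sketch shows what PoA and PoS compute given the social costs of a game's equilibria. It is an illustration only, with made-up numbers, not the thesis's group matching model.

```python
# PoA compares the *worst* Nash equilibrium to the optimal coordinated
# outcome; PoS compares the *best* equilibrium to the optimum.

def price_of_anarchy(equilibrium_costs, optimal_cost):
    return max(equilibrium_costs) / optimal_cost

def price_of_stability(equilibrium_costs, optimal_cost):
    return min(equilibrium_costs) / optimal_cost

# Hypothetical game with three equilibria:
eq_costs = [12.0, 9.0, 15.0]
opt = 6.0
print(price_of_anarchy(eq_costs, opt))    # 2.5
print(price_of_stability(eq_costs, opt))  # 1.5
```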
We also consider routing games on grid network topologies. The social cost is the worst congestion on any of the network edges (bottleneck congestion). Each player's objective is to find a path that minimizes the bottleneck congestion along its path. We show that the price of anarchy in bottleneck games on grids is proportional to the number of bends β that the paths are allowed to take in the grid. We present games where the PoA is O(β), and we give a respective lower bound of Ω(β), which shows that our upper bound is within only a poly-log factor of the best achievable price of anarchy. A significant implication of our analysis is that there exist bottleneck routing games with a small number of bends which give a poly-log approximation to the optimal coordinated solution that may use an arbitrary number of bends. To our knowledge, this is the first tight analysis of bottleneck games on grids.
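The bottleneck social cost itself is easy to state in code. The sketch below uses assumed representations (grid nodes as (row, col) pairs, paths as node lists), not the thesis's formal model:

```python
# The congestion of an edge is the number of paths using it; the social
# cost is the most congested edge anywhere in the network.
from collections import Counter

def bottleneck_congestion(paths):
    edge_load = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):          # consecutive node pairs
            edge_load[frozenset((u, v))] += 1     # undirected grid edge
    return max(edge_load.values())

# Two one-bend (L-shaped) paths sharing the edge (0,0)-(0,1):
p1 = [(0, 0), (0, 1), (1, 1)]
p2 = [(0, 0), (0, 1), (0, 2)]
print(bottleneck_congestion([p1, p2]))  # 2
```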
482 | Perpetual Requirements Engineering (Peralta, Manuel Alfonso, 31 October 2012)
This dissertation attempts to make a contribution within the fields of distributed systems, security, and formal verification. We provide a way to formally assess the impact of a given change in three different contexts. We have developed a logic based on Lewis's counterfactual logic. First we show how our approach is applied in a standard sequential programming setting. Then we show how a modified version of the logic can be used in the context of reactive systems and sensor networks. Last but not least, we show how this logic can be used in the context of security systems.

Change impact analysis has traditionally been viewed as an area of software engineering. Software artifacts (usually source code) are modified in response to a change in user requirements. Aside from making sure that the changes are inherently correct (testing and verification), programmers (software engineers) need to make sure that the introduced changes are coherent with those parts of the system that were not affected by the modification. The latter is generally achieved by establishing a dependency relation between software artifacts. In rough lines, the process of change management consists of projecting the transitive closure of this dependency relation from the set of artifacts that have actually changed, and assessing how the related artifacts are affected.
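As a concrete illustration of this projection step (with hypothetical artifact names, not the dissertation's formalism), the dependency relation can be walked to its transitive closure as follows:

```python
# Given a dependency relation (artifact -> artifacts that depend on it)
# and a set of changed artifacts, compute every artifact that may need
# to be re-examined, i.e. the transitive closure of the relation.

def impacted_artifacts(dependents, changed):
    seen = set(changed)
    frontier = list(changed)
    while frontier:
        artifact = frontier.pop()
        for dep in dependents.get(artifact, ()):
            if dep not in seen:
                seen.add(dep)
                frontier.append(dep)
    return seen - set(changed)  # affected but not directly edited

deps = {"parser.c": ["compiler.c"], "compiler.c": ["driver.c", "tests.c"]}
print(impacted_artifacts(deps, {"parser.c"}))
# {'compiler.c', 'driver.c', 'tests.c'} (set order may vary)
```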
This traditional change management process generally takes place after the affected artifacts have been changed. Undesired secondary effects are usually found during the testing phase, after the changes have been incorporated. In cases where a certain level of criticality is involved, there is always a division between production and development environments. Change management (whether automatic, tool-driven, or completely manual) can introduce extraneous defects into any of the changed software life-cycle artifacts. The testing phase tries to eradicate a relatively large portion of the undesired defects introduced by a change. However, traditional testing techniques are limited by their coverage strength; therefore, even when maximum coverage is guaranteed, there is always a non-zero probability that secondary effects of a change remain.
483 | Deductive Formal Verification of Embedded Systems (Lu, Zheng, 20 November 2012)
<div align = "justify">
We combine static analysis techniques with model-based deductive verification using SMT solvers to provide a framework that, given an analysis aspect of the source code, automatically generates an analyzer capable of inferring information about that aspect.
The analyzer is generated by translating the collecting semantics of a program to a formula in first order logic over multiple underlying theories. We import the semantics of the API invocations as first order logic assertions. These assertions constitute the models used by the analyzer. Logical specification of the desired program behavior is incorporated as a first order logic formula. An SMT-LIB solver treats the combined formula as a constraint and solves it. The solved form can be used to identify logical and security errors in embedded programs. We have used this framework to analyze Android applications and MATLAB code.
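A toy instance of this pipeline, using the Z3 SMT solver's Python bindings and a hypothetical one-line program path, looks roughly like this; the thesis's actual framework generates such formulas automatically from the collecting semantics:

```python
# Encode one program path and a desired property as first-order
# constraints, then ask the solver whether the property can be violated.
from z3 import Int, Solver, And, Not, sat

x, y = Int("x"), Int("y")
path = And(y == x + 1, x >= 0)   # semantics of the analyzed path
prop = y > 0                     # desired program behavior

s = Solver()
s.add(path, Not(prop))           # search for a counterexample
if s.check() == sat:
    print("violation:", s.model())
else:
    print("property holds on this path")  # here: unsat, so it holds
```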
We also report the formal verification of the conformance of the open-source Netgear WNR3500L wireless router firmware implementation to RFC 2131. Formal verification of a software system is essential for its deployment in mission-critical environments. The specifications for the development of routers are provided by RFCs, which are described only informally in English. It is prudent to ensure that a router firmware conforms to its corresponding RFC before it can be deployed to manage mission-critical networks. The formal verification process demonstrates the usefulness of inductive types and higher-order logic in software certification.
484 | Opportunistic Lookahead Routing Protocol for Delay Tolerant Networks (Rotti, Priyanka, 20 November 2012)
Delay Tolerant Networks (DTNs) are wireless networks with sporadic connectivity, which renders instantaneous end-to-end paths from a source to a destination difficult or impossible to maintain. Hence, in such networks, message delivery relies heavily on the store-and-forward paradigm. However, limited knowledge of the contact times between the nodes poses a big challenge to effective forwarding of messages.
In this thesis, we discuss several aspects of routing in DTNs and present one algorithm and three variants for addressing the routing problem: (i) the lookahead protocol, in which the forwarding decision at each node to its immediate (one-hop) neighbor is based on the position of the packet/message in the queue of the neighboring node; (ii) backpressure-based lookahead, in which a lookahead factor, accounting for the queue-length differences at the neighbors, is added to the basic backpressure equation; and (iii) a two-step lookahead protocol, in which the forwarding decision is sometimes based on the instantaneous one-hop neighbors of the neighboring node.
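A rough sketch of the forwarding rule in variant (ii), under assumed definitions: the weighting factor `alpha` and the two-hop queue summaries are hypothetical, not the thesis's exact equation.

```python
# Classic backpressure weighs a neighbor by the local-vs-neighbor queue
# differential; the lookahead term additionally credits neighbors whose
# own neighbors have short queues.

def best_next_hop(my_queue, neighbors, alpha=0.5):
    """neighbors: {node: (queue_len, [two_hop_queue_lens])}"""
    def score(info):
        q, two_hop = info
        backpressure = my_queue - q                  # classic differential
        lookahead = min(two_hop) if two_hop else q   # best prospect one hop out
        return backpressure - alpha * lookahead
    candidates = {n: score(info) for n, info in neighbors.items()}
    node = max(candidates, key=candidates.get)
    return node if candidates[node] > 0 else None    # hold if no positive gain

print(best_next_hop(10, {"A": (7, [3, 9]), "B": (4, [8, 8])}))  # B
```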
We also present simulation results for these protocols and compare them to the existing standard routing protocols for DTNs. In all the algorithms, we seek to optimize the amount of network bandwidth used by looking one step ahead before making a forwarding decision. By considering the queues at the neighboring nodes, the amount of network resources consumed decreases. The protocols that we propose may come with a slightly higher hop count per packet than most protocols, but we have tried to maintain a delivery ratio comparable to the existing standard protocols.
485 | A Hybrid Framework of Iterative MapReduce and MPI for Molecular Dynamics Applications (Bai, Shuju, 09 July 2013)
Developing platforms for large-scale data processing has been of great interest to scientists. Hadoop is a widely used computational platform that provides fault-tolerant distributed data storage through HDFS (the Hadoop Distributed File System) and fault-tolerant parallel data processing through the MapReduce framework. Quite often, actual computations require multiple MapReduce cycles, which calls for chained MapReduce jobs. However, Hadoop's design is poor at addressing problems with iterative structure. In many iterative problems, some invariant data is required by every MapReduce cycle; the same data is uploaded to the Hadoop file system in every cycle, causing repeated data delivery and unnecessary transfer time. In addition, although Hadoop can process data in parallel, it does not support MPI in its computations: within any Map/Reduce task, the computation must be serial. This results in inefficient scientific computations wrapped in Map/Reduce tasks, because the computation cannot be distributed over the Hadoop cluster, especially a Hadoop cluster running on a traditional high-performance computing cluster.
Computational technologies have been extensively investigated for application in many domains. Since the advent of Hadoop, scientists have applied the MapReduce framework to biological sciences, chemistry, medical sciences, and other areas to process huge data sets efficiently.
In our research, we proposed a hybrid framework of iterative MapReduce and MPI for molecular dynamics applications and carried out molecular dynamics simulations with the implemented framework. We improved the capability and performance of Hadoop by adding an MPI module, which enables Hadoop to monitor and manage the resources of the Hadoop cluster so that computations incurred in Map/Reduce tasks can be performed in parallel. We also applied a local caching mechanism that avoids redundant data delivery and makes the computing more efficient. Our hybrid framework inherits the features of Hadoop while improving its computing efficiency.
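The caching idea can be illustrated with a pure-Python stand-in for a chained Map/Reduce driver; all names here are hypothetical, and the real framework runs on Hadoop with the MPI module described above.

```python
# In a chained-MapReduce job, the invariant input (here, some fixed
# parameters) is loaded once and reused across iterations instead of
# being re-shipped to the cluster in every cycle.

def run_iterations(initial_state, invariant_params, n_iter, map_fn, reduce_fn):
    cached = invariant_params          # loaded once, like a local cache
    state = initial_state
    for _ in range(n_iter):
        mapped = [map_fn(item, cached) for item in state]   # "Map" phase
        state = reduce_fn(mapped)                           # "Reduce" phase
    return state

# Trivial demo: repeatedly scale and re-center a list of coordinates.
params = {"scale": 0.5}
out = run_iterations([1.0, 2.0, 3.0], params, 3,
                     map_fn=lambda x, p: x * p["scale"],
                     reduce_fn=lambda xs: [x - min(xs) for x in xs])
print(out)
```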
The target application domain of our research is molecular dynamics simulation. However, the potential use of our iterative MapReduce framework with MPI is broad: it can be used by any application that involves single or multiple MapReduce iterations and invokes serial or parallel (MPI) computations in the Map or Reduce phase of Hadoop.
486 | Implementation of a Vertically Integrated Ice Sheet Momentum Balance Model (Campbell, Joshua Charles, 21 August 2013)
A new high-fidelity ice sheet momentum balance model meant for inclusion in the Glimmer community ice-sheet model is presented. As a component of the Community Earth System Model, the newly developed momentum balance will directly benefit from the ice/ocean and ice/atmosphere coupling efforts occurring elsewhere. The objectives of this thesis are to develop a model that converges quickly (at quadratic rates) for non-Newtonian Stokes flow approximations, and to provide a clear, low-level discussion of its derivation, variation and discretization.

The model uses the finite element method to discretize variational forms of the first variation arising from the Galerkin method for vertically integrated Stokes flow. It employs a hybridization of two commonly used approximations to Stokes flow, coupling the Shallow Shelf Approximation (SSA) and the Shallow Ice Approximation (SIA). This approximation is then differentiated symbolically. Efficient sparse matrix formats are manipulated directly to avoid invoking costly sorting routines in the underlying linear solvers. The code was developed not only for standards-compliant FORTRAN 90 compilers but also for automatic differentiation tools. The model is verified against published model intercomparison projects.
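As an illustration of what "manipulating a sparse format directly" means, here is a small sketch using SciPy's CSR layout as a stand-in for the thesis's FORTRAN structures:

```python
# CSR stores a matrix as three flat arrays (data, indices, indptr).
# These can be edited in place, e.g. scaling one row, without rebuilding
# the matrix or triggering any re-sorting of entries.
import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[4.0, 0.0, 1.0],
                         [0.0, 3.0, 0.0],
                         [2.0, 0.0, 5.0]]))

row = 2
start, end = A.indptr[row], A.indptr[row + 1]
A.data[start:end] *= 10.0        # scale row 2 in place, O(nnz in row)
print(A.toarray())
```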
487 | Interpreting and Answering Keyword Queries using Web Knowledge Bases (Pound, Jeffrey, 08 August 2013)
Many keyword queries issued to Web search engines target information about real-world entities, and interpreting these queries over Web knowledge bases can allow a search system to provide exact answers to keyword queries. Such an ability provides a useful service to end users, as their information need can be directly addressed and they need not scour textual results for the desired information. However, not all keyword queries can be addressed by even the most comprehensive knowledge base; equally important, therefore, is the problem of recognizing when a reference knowledge base is not capable of modelling the keyword query's intention. This may be due to a lack of coverage in the knowledge base or a lack of expressiveness in the underlying query representation formalism.

This thesis presents an approach to computing structured representations of keyword queries over a reference knowledge base. Keyword queries are annotated with occurrences of semantic constructs by learning a sequential labelling model from an annotated Web query log. Frequent query structures are then mined from the query log and are used, along with the annotations, to map keyword queries into a structured representation over the vocabulary of the reference knowledge base.
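A toy version of the final mapping step might look as follows; the labels, the mined structure, and the output syntax are all hypothetical placeholders, not the thesis's formalism:

```python
# A keyword query whose tokens have already been labelled with semantic
# constructs is matched against a mined query structure and rewritten as
# a triple-like structured query.

def to_structured(tokens, labels):
    slots = dict(zip(labels, tokens))
    if {"type", "relation", "entity"} <= slots.keys():
        # frequent structure: type + relation + entity
        return f'?x a {slots["type"]} . ?x {slots["relation"]} {slots["entity"]}'
    return None  # query not modelled by the reference structures

print(to_structured(["films", "directed-by", "Kubrick"],
                    ["type", "relation", "entity"]))
# ?x a films . ?x directed-by Kubrick
```

The `None` branch mirrors the recognition problem described above: a query whose annotations match no known structure is reported as not interpretable over the reference knowledge base.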
The proposed approach exploits coarse linguistic structure in keyword queries and combines it with rich structured representations of information needs. As an intermediate representation formalism, a novel query language is proposed that blends keyword search with structured query processing over large Web knowledge bases. The formalism for structured keyword queries combines the flexibility of keyword search with the expressiveness of structured queries. A solution is presented to the disambiguation problem that arises from introducing keywords as primitives in a structured query language. Expressions in our proposed language are rewritten using the vocabulary of the knowledge base, and the different possible rewritings are ranked based on their syntactic relationship to the keywords in the query as well as their semantic coherence in the underlying knowledge base.
The problem of ranking knowledge base entities returned as query results is also explored from the perspective of personalized result ranking. User interest models based on entity types are learned from a Web search session by cross-referencing clicks on URLs with known entity homepages. The user interest model is then used to effectively rerank answer lists for a given user.
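A minimal sketch of this reranking step, with made-up entity types and interest weights standing in for the learned model:

```python
# Each candidate entity is scored by how strongly the user's session
# history suggests interest in its types, then the list is re-sorted.

def rerank(answers, entity_types, interest):
    def score(entity):
        return sum(interest.get(t, 0.0) for t in entity_types.get(entity, ()))
    return sorted(answers, key=score, reverse=True)

types = {"Rush": ["Band"], "Rush_(film)": ["Film"], "Rush_Limbaugh": ["Person"]}
user_interest = {"Film": 0.7, "Band": 0.2}   # e.g. learned from clicked URLs
print(rerank(["Rush", "Rush_(film)", "Rush_Limbaugh"], types, user_interest))
# ['Rush_(film)', 'Rush', 'Rush_Limbaugh']
```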
A methodology for evaluating entity-based search engines is also proposed and empirically evaluated.
488 | Periodicity and Repetition in Combinatorics on Words (Wang, Ming-wei, January 2004)
This thesis concerns combinatorics on words. I present many results in this area, united by the common themes of periodicity and repetition. Most of these results have already appeared in journal or conference articles. Chapters 2 through 5 contain the most significant contributions of this thesis. Below we give a brief synopsis of each chapter.

Chapter 1 introduces the subject area in general and gives some background information.

Chapters 2 and 3 grew out of attempts to prove the Decreasing Length Conjecture (DLC), which states that if h is a morphism over an alphabet of size n, then for any word w there exist 0 ≤ i < j ≤ n such that |h^i(w)| ≤ |h^j(w)|. The DLC was proved by S. Cautis and S. Yazdani in "Periodicity, morphisms, and matrices", Theoret. Comput. Sci. 295 (2003), 107-121. More specifically, Chapter 2 gives two generalizations of the classical Fine and Wilf theorem, which states that if (f_n)_{n≥0} and (g_n)_{n≥0} are two periodic sequences of real numbers, of period lengths h and k respectively, then: (a) if f_n = g_n for 0 ≤ n < h + k - gcd(h, k), then f_n = g_n for all n ≥ 0; and (b) the conclusion in (a) would be false if h + k - gcd(h, k) were replaced by any smaller number. We give similar results in which the equality in (a) is replaced by inequality, and which apply to more than two sequences. These generalizations can be used to prove weak versions of the DLC.

Chapter 3 gives an essentially optimal bound for the following matrix problem. Let A be an n × n matrix with non-negative integer entries, and let f(n) be the smallest integer such that for all A there exist i < j ≤ f(n) with A^i ≤ A^j, where A ≤ B means that each entry of A is less than or equal to the corresponding entry of B. The question is to find good upper bounds on f(n). This problem has been attacked in two different ways. We give a method that proves an essentially optimal upper bound of n + g(n), where g(n) is the maximum order of an element of the symmetric group on n objects. A second approach yields a slightly worse upper bound, but produces a result of independent interest concerning irreducible matrices: a non-negative n × n matrix A is irreducible if ∑_{i=0}^{n-1} A^i has all entries strictly positive. We show in Chapter 3 that if A is an irreducible n × n matrix, then there exists an integer e > 0 with e = O(n log n) such that the diagonal entries of A^e are all strictly positive. These results improve on results in my Master's thesis, give a version of the DLC in the matrix setting, and have direct applications to the growth rate of words in a D0L system.

Chapter 4 gives a complete characterization of two-sided fixed points of morphisms. A weak version of the DLC is used to prove a non-trivial case of the characterization. This characterization completes the previous work of Head and Lando on finite and one-sided fixed points of morphisms.

Chapters 5, 6 and 7 deal with avoiding different kinds of repetitions in infinite words. Chapter 5 deals with simultaneously avoiding cubes and large squares in infinite binary words. We use morphisms and fixed points to construct an infinite binary word that simultaneously avoids cubes and squares xx with |x| ≥ 4. M. Dekking was the first to show that such words exist; his construction used a non-uniform morphism, whereas we use only uniform morphisms, and our construction is somewhat simpler than Dekking's. Chapter 6 deals with simultaneously avoiding several patterns at once, where the patterns are generated by a simple arithmetic operation. Chapter 7 proves a variant of a result of H. Friedman. We say a word y is a subsequence of a word z if y can be obtained by striking out zero or more symbols from z. Friedman proved that over any finite alphabet there exists a longest finite word x = x_1 x_2 ··· x_n such that x_i x_{i+1} ··· x_{2i} is not a subsequence of x_j x_{j+1} ··· x_{2j} for 1 ≤ i < j ≤ n/2. We call such words self-avoiding. We show that if "subsequence" is replaced by "subword" in the definition of self-avoiding, then there are infinite self-avoiding words over a 3-letter alphabet but not over binary or unary alphabets. This solves a question posed by Jean-Paul Allouche.

In Chapter 8 we give an application of the existence of infinitely many square-free words over a 3-letter alphabet. The duplication language generated by a word w is, roughly speaking, the set of words that can be obtained from w by repeatedly doubling subwords of w. We use the existence of infinitely many square-free words over a 3-letter alphabet to prove that the duplication language generated by a word containing at least 3 distinct letters is not regular, solving an open problem due to J. Dassow, V. Mitrana and Gh. Paun. It is known that the duplication language generated by a word over a binary alphabet is regular; it is not known whether such languages are context-free when the generator word contains at least 3 distinct letters. After the defence of my thesis I noticed that essentially the same argument was given by Ehrenfeucht and Rozenberg in "Regularity of languages generated by copying systems", Discrete Appl. Math. 8 (1984), 313-317.

Chapter 9 defines a new "descriptive" measure of the complexity of a word w: the minimal size of a deterministic finite automaton that accepts w (and possibly other words) but no other words of length |w|. Lower and upper bounds for various classes of words are proved in Chapter 9; many of the proofs make essential use of repetitions in words.
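As a small companion to Chapters 5 and 8, the following sketch generates a prefix of an infinite square-free ternary word by iterating a classical square-free morphism (not one of the thesis's morphisms, and not uniform) and then verifies the prefix by brute force:

```python
# The morphism a->abc, b->ac, c->b is a classical example whose fixed
# point starting from "a" is square-free; the brute-force check below
# verifies this property on the generated prefix rather than assuming it.

MORPHISM = {"a": "abc", "b": "ac", "c": "b"}

def iterate(word, n):
    for _ in range(n):
        word = "".join(MORPHISM[ch] for ch in word)
    return word

def has_square(w):
    n = len(w)
    return any(w[i:i + l] == w[i + l:i + 2 * l]
               for l in range(1, n // 2 + 1)
               for i in range(n - 2 * l + 1))

w = iterate("a", 7)  # 192-letter prefix
print(len(w), "square-free" if not has_square(w) else "contains a square")
```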
489 | An Aspect-Oriented Approach to Design and Develop Hypermedia Documents (Zhang, Ping, January 2005)
Hypermedia applications can be defined as collections of interactive multimedia documents organized as a hypertext net. The variety of application domains and the complexity of the relationships among application components make the design and development of hypermedia applications a difficult process. Hypermedia design models help designers complete their conceptualization tasks, provide a conceptual understanding of hypermedia systems, and define the relationships among system components. However, when existing document models are used, features such as logging, error handling, access control and personalized view generation are notoriously difficult to implement in a modular way. The result is that the code related to these features is tangled across a system, which leads to quality, productivity and maintenance problems. In this thesis, we provide an aspect-oriented approach to the design and development of hypermedia documents. A general aspect-based document model is defined to support the separation of the concerns related to the features described above. As a result, each feature is modularized as a different aspect and the code-tangling problem is solved. We have also applied the general document model in a concrete case study combining DOM and AspectJ. The resulting implementation provided a new prototype in which many features, such as access control, logging, view generation, annotation and dynamic event handling, are defined as aspects.
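The flavour of the approach can be conveyed with a loose Python analogue, in which the logging concern is modularized as a decorator rather than tangled through every document operation; the thesis's actual case study uses AspectJ pointcuts over DOM, which this sketch only mirrors:

```python
# The cross-cutting "aspect" (logging) is written once; the core document
# operations stay free of logging code.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        logging.info("calling %s", fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@logged
def render_view(document, user):     # core concern, aspect-free
    return f"<view of {document} for {user}>"

print(render_view("thesis.xml", "alice"))
```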
490 | On the Application of Photoacoustic Absorption Spectral Data to the Modeling of Leaf Optical Properties (Eng, Denise, January 2007)
Due to the importance of plants in the Earth's ecosystem, their photobiological responses have become the subject of extensive research in the life sciences. Leaf optical models have been developed to assist in the analysis of remotely sensed data, deriving information on leaf biochemistry and anatomy from foliar spectral curves (transmittance and reflectance). In this paper, we investigate the implications of using in vitro pigment absorption spectra to model foliar optical properties in the visible domain. Typically, pigment absorption spectra have been determined using light absorption spectroscopy or by applying a data-fitting approach. Alternatively, we propose the use of photoacoustic absorption spectroscopy, which, despite being available in the literature, had not previously been used in the modeling of foliar optical properties. We also perform computational experiments in which modeled foliar reflectance and transmittance spectral curves generated using these different absorption data sets are compared with actual measured data. Our findings indicate that the proposed alternative not only allows key pigments to be individually incorporated into the models, which in turn increases the predictability of the simulations, but also enables the generation of modeled leaf spectra that are closer approximations to measured leaf spectra than those obtained using absorption data derived from standard absorption spectroscopy procedures.
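The comparison of modeled and measured spectral curves can be quantified with a simple wavelength-wise error metric; the sketch below uses made-up spectra, not the paper's data:

```python
# Root-mean-square error between a modeled and a measured reflectance
# curve, sampled at the same wavelengths across the visible domain.
import numpy as np

wavelengths = np.arange(400, 701, 50)  # nm; 7 sample points
measured = np.array([0.05, 0.06, 0.12, 0.30, 0.09, 0.07, 0.40])
modeled  = np.array([0.06, 0.07, 0.10, 0.28, 0.10, 0.08, 0.37])

rmse = np.sqrt(np.mean((modeled - measured) ** 2))
print(f"RMSE over {wavelengths[0]}-{wavelengths[-1]} nm: {rmse:.4f}")
```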