131 |
Perfect Double Roman Domination of Trees. Egunjobi, Ayotunde T., Haynes, Teresa W. 30 September 2020 (has links)
For a graph G with vertex set V(G) and function f:V(G)→{0,1,2,3}, let Vi be the set of vertices assigned i by f. A perfect double Roman dominating function of a graph G is a function f:V(G)→{0,1,2,3} satisfying the conditions that (i) if u∈V0, then u is either adjacent to exactly two vertices in V2 and no vertex in V3 or adjacent to exactly one vertex in V3 and no vertex in V2; and (ii) if u∈V1, then u is adjacent to exactly one vertex in V2 and no vertex in V3. The perfect double Roman domination number of G, denoted γdRp(G), is the minimum weight of a perfect double Roman dominating function of G. We prove that if T is a tree of order n≥3, then γdRp(T)≤9n∕7. In addition, we give a family of trees T of order n for which γdRp(T) approaches this upper bound as n goes to infinity.
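To make the definition above concrete, the following is a minimal sketch (not from the thesis) that checks whether an assignment f satisfies conditions (i) and (ii) on a small graph; the adjacency-list encoding and function names are illustrative assumptions.

```python
# Minimal sketch: verify the perfect double Roman domination conditions
# stated in the abstract for a labelling f : V(G) -> {0, 1, 2, 3}.

def is_perfect_double_roman_dominating(adj, f):
    """adj: dict vertex -> set of neighbours; f: dict vertex -> {0, 1, 2, 3}."""
    for u, label in f.items():
        n2 = sum(1 for v in adj[u] if f[v] == 2)   # neighbours in V2
        n3 = sum(1 for v in adj[u] if f[v] == 3)   # neighbours in V3
        if label == 0:
            # (i) exactly two neighbours in V2 and none in V3,
            #     or exactly one neighbour in V3 and none in V2.
            if not ((n2 == 2 and n3 == 0) or (n3 == 1 and n2 == 0)):
                return False
        elif label == 1:
            # (ii) exactly one neighbour in V2 and none in V3.
            if not (n2 == 1 and n3 == 0):
                return False
    return True

# Example: the path P3 (a tree on n = 3 vertices) with the centre assigned 3.
# The weight is 3, which is below the 9n/7 = 27/7 bound from the abstract.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
f = {0: 0, 1: 3, 2: 0}
print(is_perfect_double_roman_dominating(adj, f))   # True
```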
|
132 |
"Y'all Done Up and Done It": The Semantics of a Perfect Construction in an Upstate South Carolina DialectRuppe, Eric L. 21 May 2013 (has links)
No description available.
|
133 |
Algorithms For Haplotype Inference And Block Partitioning. Vijaya, Satya Ravi 01 January 2006 (has links)
The completion of the human genome project in 2003 paved the way for studies to better understand and catalog variation in the human genome. The International HapMap Project was started in 2002 with the aim of identifying genetic variation in the human genome and studying the distribution of genetic variation across populations of individuals. The information collected by the HapMap project will enable researchers to associate genetic variations with phenotypic variations.

Single Nucleotide Polymorphisms (SNPs) are loci in the genome where two individuals differ in a single base. It is estimated that there are approximately ten million SNPs in the human genome. These ten million SNPs are not completely independent of each other - blocks (contiguous regions) of neighboring SNPs on the same chromosome are inherited together. The pattern of SNPs on a block of the chromosome is called a haplotype. Each block might contain a large number of SNPs, but a small subset of these SNPs is sufficient to uniquely identify each haplotype in the block. The haplotype map or HapMap is a map of these haplotype blocks. Haplotypes, rather than individual SNP alleles, are expected to affect a disease phenotype.

The human genome is diploid, meaning that in each cell there are two copies of each chromosome - i.e., each individual has two haplotypes in any region of the chromosome. With current technology, empirically collecting haplotype data is prohibitively expensive. Therefore, the un-ordered bi-allelic genotype data is collected experimentally. The genotype data gives the two alleles in each SNP locus in an individual, but does not give information about which allele is on which copy of the chromosome. This necessitates computational techniques for inferring haplotypes from genotype data. This computational problem is called the haplotype inference problem.

Many statistical approaches have been developed for the haplotype inference problem. Some of these statistical methods have been shown to be reasonably accurate on real genotype data. However, these techniques are very computation-intensive. With the international HapMap project collecting information from nearly 10 million SNPs, and with association studies involving thousands of individuals being undertaken, there is a need for more efficient methods for haplotype inference. This dissertation is an effort to develop efficient perfect-phylogeny-based combinatorial algorithms for haplotype inference.

The perfect phylogeny haplotyping (PPH) problem is to derive a set of haplotypes for a given set of genotypes with the condition that the haplotypes describe a perfect phylogeny. The perfect phylogeny approach to haplotype inference is applicable to the human genome due to its block structure. An important contribution of this dissertation is an optimal O(nm) time algorithm for the PPH problem, where n is the number of genotypes and m is the number of SNPs involved. The complexity of the earlier algorithms for this problem was O(nm^2). The O(nm) complexity was achieved by applying some transformations to the input data and by making use of the FlexTree data structure that has been developed as part of this dissertation work, which represents all the possible PPH solutions for a given set of genotypes. Real genotype data does not always admit a perfect phylogeny, even within a block of the human genome.
Therefore, it is necessary to extend the perfect phylogeny approach to accommodate deviations from perfect phylogeny. Deviations from perfect phylogeny might occur because of recombination events and repeated or back mutations (also referred to as homoplasy events). Another contribution of this dissertation is a set of fixed-parameter tractable algorithms for constructing near-perfect phylogenies with homoplasy events. For the problem of constructing a near-perfect phylogeny with q homoplasy events, the algorithm presented here takes O(nm^2+m^(n+m)) time. Empirical analysis on simulated data shows that this algorithm produces more accurate results than PHASE (a popular haplotype inference program), while being approximately 1000 times faster than PHASE.

Another important problem when dealing with real genotype or haplotype data is the presence of missing entries. The Incomplete Perfect Phylogeny (IPP) problem is to construct a perfect phylogeny on a set of haplotypes with missing entries. The Incomplete Perfect Phylogeny Haplotyping (IPPH) problem is to construct a perfect phylogeny on a set of genotypes with missing entries. Both the IPP and IPPH problems have been shown to be NP-hard. The earlier approaches for both of these problems dealt with restricted versions of the problem, in which the root is either available or can be trivially reconstructed from the data, or certain assumptions were made about the data. We make some novel observations about these problems, and present efficient algorithms for unrestricted versions of these problems. The algorithms have worst-case exponential time complexity, but have been shown to be very fast on practical instances of the problem.
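As a small illustration of why haplotype inference is needed (a sketch under an assumed genotype encoding, not the dissertation's O(nm) PPH algorithm): a single genotype with h heterozygous SNP sites is consistent with 2^(h-1) unordered haplotype pairs, and constraints such as a perfect phylogeny are what make the resolution identifiable.

```python
# Sketch: enumerate the haplotype pairs consistent with one genotype.
# Encoding assumed here: 0 = homozygous major, 1 = homozygous minor,
# 2 = heterozygous (phase unknown).

from itertools import product

def consistent_haplotype_pairs(genotype):
    het_sites = [i for i, g in enumerate(genotype) if g == 2]
    pairs = []
    for bits in product([0, 1], repeat=len(het_sites)):
        h1, h2 = list(genotype), list(genotype)
        for site, b in zip(het_sites, bits):
            h1[site], h2[site] = b, 1 - b
        # Avoid counting (h1, h2) and (h2, h1) as distinct resolutions.
        if (tuple(h1), tuple(h2)) not in pairs and (tuple(h2), tuple(h1)) not in pairs:
            pairs.append((tuple(h1), tuple(h2)))
    return pairs

# A genotype with 3 heterozygous SNPs admits 2^(3-1) = 4 unordered resolutions.
print(consistent_haplotype_pairs([2, 0, 2, 1, 2]))
```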
|
134 |
A Historical Technique from a Modern Perspective: The Transcription Scordatura in Mozart’s Sinfonia Concertante for Violin, Viola and Orchestra in E-flat major, K. 364. Chiang, I-Chun 30 September 2010 (has links)
No description available.
|
135 |
Perfectly Matched Layer (PML) for Finite Difference Time Domain (FDTD) Computations in Piezoelectric Crystals. Chagla, Farid 08 1900 (has links)
The Finite-Difference Time-Domain (FDTD) method has become a very powerful tool for the analysis of propagating electromagnetic waves. It involves the discretization of Maxwell's equations in both time and space that leads to a numerical solution of the wave propagation problem in the time domain. The technique's main benefits are that it permits the description of wave propagation in non-uniform media, it can easily accommodate a wide range of boundary conditions, and it can be used to model nonlinear effects as well as the wave behaviour near localized structures or material defects. In this study, we extend this technique to mechanical wave propagation in piezoelectric crystals. It is observed to give large reflection artefacts generated by the computational boundaries which interfere with the desired wave propagation. To solve this problem, the renowned absorbing boundary condition called perfectly matched layer (PML) is used. PML was first introduced in 1994 for electromagnetic wave propagation. Our research has further developed this idea for acoustic wave propagation in piezoelectric crystals.
Introducing a finite-thickness PML to suppress these large reflection artefacts reduced the acoustic wave reflections caused by practical errors to less than 0.5 %. However, it is found that PML can generate numerical instabilities in the calculation of acoustic fields in piezoelectric crystals. These observations are also discussed in this report. / Thesis / Master of Applied Science (MASc)
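For orientation, the sketch below shows the general shape of an FDTD update loop with an absorbing boundary layer. It is a heavily simplified 1-D scalar model: the "PML" is reduced to an assumed polynomial damping profile rather than a split-field or stretched-coordinate formulation, and the piezoelectric coupling treated in the thesis is omitted entirely.

```python
# Simplified 1-D acoustic FDTD with a graded absorbing layer at both ends.
import numpy as np

nx, nt = 400, 1000
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                       # CFL-stable time step
npml = 40                               # absorbing-layer thickness in cells

# Damping rises polynomially inside the layer and is zero in the interior.
sigma = np.zeros(nx)
ramp = (np.arange(npml) / npml) ** 3
sigma[:npml] = ramp[::-1] * 0.3 / dt
sigma[-npml:] = ramp * 0.3 / dt

p = np.zeros(nx)                        # pressure-like field
v = np.zeros(nx)                        # velocity-like field (staggered)

for n in range(nt):
    # Update velocity from the pressure gradient, with damping.
    v[:-1] = (1 - sigma[:-1] * dt) * v[:-1] - dt / dx * (p[1:] - p[:-1])
    # Update pressure from the velocity divergence, with damping.
    p[1:] = (1 - sigma[1:] * dt) * p[1:] - c**2 * dt / dx * (v[1:] - v[:-1])
    # Soft Gaussian source in the middle of the grid.
    p[nx // 2] += np.exp(-((n - 100) / 20.0) ** 2)

print("max |p| after absorption:", np.abs(p).max())
```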
|
136 |
Optimal, Multiplierless Implementations of the Discrete Wavelet Transform for Image Compression Applications. Kotteri, Kishore 12 May 2004 (has links)
The use of the discrete wavelet transform (DWT) for the JPEG2000 image compression standard has sparked interest in the design of fast, efficient hardware implementations of the perfect reconstruction filter bank used for computing the DWT. The accuracy and efficiency with which the filter coefficients are quantized in a multiplierless implementation impacts the image compression and hardware performance of the filter bank. A high precision representation ensures good compression performance, but at the cost of increased hardware resources and processing time. Conversely, lower precision in the filter coefficients results in smaller, faster hardware, but at the cost of poor compression performance. In addition to filter coefficient quantization, the filter bank structure also determines critical hardware properties such as throughput and power consumption.
This thesis first investigates filter coefficient quantization strategies and filter bank structures for the hardware implementation of the biorthogonal 9/7 wavelet filters in a traditional convolution-based filter bank. Two new filter bank properties—"no-distortion-mse" and "deviation-at-dc"—are identified as critical to compression performance, and two new "compensating" filter coefficient quantization methods are developed to minimize degradation of these properties. The results indicate that the best performance is obtained by using a cascade form for the filters with coefficients quantized using the "compensating zeros" technique. The hardware properties of this implementation are then improved by developing a cascade polyphase structure that increases throughput and decreases power consumption.
Next, this thesis investigates implementations of the lifting structure for computing the DWT—an orthogonal structure that is more robust to coefficient quantization than the traditional convolution-based filter bank. Novel, optimal filter coefficient quantization techniques are developed for a rational and an irrational set of lifting coefficients. The results indicate that the best quantized lifting coefficient set is obtained by starting with the rational coefficient set and using a "lumped scaling" and "gain compensation" technique for coefficient quantization.
Finally, the image compression properties and hardware properties of the convolution-based and lifting-based DWT implementations are compared. Although the lifting structure requires fewer computations, the cascaded arrangement of the lifting filters requires significant hardware overhead. Consequently, the results show that the convolution-based cascade polyphase structure (with "z₁-compensated" coefficients) gives the best performance in terms of image compression performance and hardware metrics like throughput, latency and power consumption. / Master of Science
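For reference, a minimal sketch of one level of the lifting-based 9/7 DWT discussed above, using the commonly cited CDF 9/7 lifting constants (rounded) and periodic boundary handling for brevity; sign and scaling conventions vary across references, and the thesis's quantized hardware coefficients may differ.

```python
# One decomposition level of the lifting-scheme biorthogonal 9/7 DWT.
import numpy as np

ALPHA, BETA  = -1.586134342, -0.052980118
GAMMA, DELTA =  0.882911076,  0.443506852
K = 1.149604398

def dwt97_lifting(x):
    """One level of the 9/7 lifting DWT; len(x) must be even."""
    s = x[0::2].astype(float)              # even samples
    d = x[1::2].astype(float)              # odd samples
    d += ALPHA * (s + np.roll(s, -1))      # predict 1
    s += BETA  * (d + np.roll(d,  1))      # update 1
    d += GAMMA * (s + np.roll(s, -1))      # predict 2
    s += DELTA * (d + np.roll(d,  1))      # update 2
    return K * s, d / K                    # approximation, detail

approx, detail = dwt97_lifting(np.arange(16))
print(approx.round(3), detail.round(3))
```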
|
137 |
John Wesley's concept of perfect love: a motif analysis. Cubie, David Livingstone January 1965 (has links)
Thesis (Ph.D.)--Boston University / The problem of the dissertation is to discover what John Wesley meant by perfect love. Statements of both approbation and criticism regarding his doctrine are usually made from the vantage of various present-day interpretations. The goal of this study is to describe the type of perfection and love which was uppermost in Wesley's thought.
The method used is motif analysis as it is developed by Anders Nygren in his book, Agape and Eros. Nygren's method and motifs (Agape, the New Testament motif; Eros, the Greek motif; Nomos, the Judaistic motif; and Caritas, Augustine's union of the Greek and New Testament motifs) are examined to determine their usefulness for research. While Nygren's description of Agape or New Testament love is not sufficiently complete, his description of the contrasting ways and systems of thought is sufficiently demonstrated to warrant the use of motif research. The method proved to be valuable in the examination of Wesley's thought [TRUNCATED]
|
138 |
Essays on Economic Decision Making. Lee, Dongwoo 17 May 2019 (has links)
This dissertation focuses on exploring individual and strategic decision problems in Economics. I take a different approach in each chapter to capture various aspects of decision problems. An overview of this dissertation is provided in Chapter 1.
Chapter 2 studies an individual's decision making in extensive-form games under ambiguity, when the individual is ambiguous about an opponent's moves. In this chapter, a player follows Choquet Expected Utility preferences, since standard Expected Utility cannot accommodate situations of ambiguity. I show that dynamically inconsistent decision making can arise in extensive-form games with ambiguity. To cope with this issue, this chapter provides sufficient conditions that recover dynamic consistency.
Chapter 3 analyzes the strategic decision making in signaling games when a player makes an inference about hidden information from the behavioral hypothesis. The Hypothesis Testing Equilibrium (HTE) is proposed to provide an explanation for posterior beliefs from the player. The notion of HTE admits belief updates for all events including zero-probability events. In addition, this chapter introduces well-motivated modifications of HTE.
Finally, Chapter 4 examines a boundedly rational individual who considers selective attributes when making a decision. It is assumed that the individual focuses on a subset of attributes that stand out from a choice set. The selective attributes model can accommodate violations of the choice axioms of Independence of Irrelevant Alternatives (IIA) and Regularity. / Doctor of Philosophy / This dissertation focuses on exploring individual and strategic decision problems in Economics. I take a different approach in each chapter to capture various aspects of decision problems. An overview of this dissertation is provided in Chapter 1. Chapter 2 studies an individual's decision making in extensive-form games under ambiguity. Ambiguity describes the situation in which the information available to a decision maker is too imprecise to be summarized by a probability measure (Epstein, 1999). It is known that ambiguity causes dynamic inconsistency between ex-ante and interim decision making. This chapter provides sufficient conditions under which dynamic consistency is maintained. Chapter 3 analyzes strategic decision making in signaling games in which there are two players: an informed sender and an uninformed receiver. The sender has private information about his type and the receiver makes an inference about the hidden information. This chapter suggests a notion of the Hypothesis Testing Equilibrium (HTE), which provides an alternative explanation for the receiver's beliefs. The idea of the HTE can be used as a refinement of Perfect Bayesian Equilibrium (PBE) in signaling games to cope with the known limitations of PBE. Finally, Chapter 4 examines a boundedly rational individual who considers only salient attributes when making a decision. The individual considers an attribute only when it stands out enough in a choice set. The selective attribute model can accommodate violations of the choice axioms of Independence of Irrelevant Alternatives (IIA) and Regularity.
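As a small numeric illustration of the Choquet Expected Utility preferences used in Chapter 2 (a generic sketch, not the chapter's model): the Choquet integral evaluates payoffs against a non-additive capacity, so a convex, ambiguity-averse capacity pulls the evaluation below the ordinary expected value.

```python
# Choquet integral of a payoff profile with respect to a capacity.
# States are ranked from best to worst payoff; each payoff is weighted by
# the increment of the capacity as the "at least this good" event grows.

def choquet_expectation(payoffs, capacity):
    """payoffs: dict state -> payoff; capacity: function on frozensets of states."""
    states = sorted(payoffs, key=payoffs.get, reverse=True)
    total, prev, cumulative = 0.0, 0.0, set()
    for s in states:
        cumulative.add(s)
        w = capacity(frozenset(cumulative))
        total += payoffs[s] * (w - prev)
        prev = w
    return total

# A convex distortion of the uniform measure models ambiguity aversion.
states = ["a", "b", "c"]
capacity = lambda event: (len(event) / len(states)) ** 2
payoffs = {"a": 10.0, "b": 5.0, "c": 0.0}

print(choquet_expectation(payoffs, capacity))   # ~2.78, below the EU value of 5
```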
|
139 |
Perfect Reconstruction Filter Bank Structure Based On Interpolated FIR Filters. Cadena Pico, Jorge Eduardo 07 July 2016 (has links)
State-of-the-art filter bank structures achieve practically perfect reconstruction with very high computational efficiency. However, computational requirements continue to grow with the need to process increasingly wideband signals. New filter bank structures need to be developed that provide extra information about a signal while achieving the same level of efficiency and the same perfect reconstruction properties. In this work a new filter bank structure, the interpolated FIR (IFIR) filter bank, is developed. This structure combines the concepts of filter banks and interpolated FIR filters. The filter design procedures for the IFIR filter bank are developed and explained.
The resulting structure was compared with the non-maximally-decimated filter bank (NMDFB), achieving the same performance in terms of the number of multiplications required per sample and the overall distortion introduced by the system, when operating with Nyquist prototype filters.
In addition, the IFIR filter bank is tested in both simulated and real communication environments. Bit-error-rate performance was not degraded significantly when the IFIR filter bank system was used for transmission and reception of QPSK symbols. / Master of Science
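The sketch below illustrates the interpolated-FIR idea underlying the proposed filter bank (a generic textbook construction, not the thesis's design): a prototype filter is designed at L times the target bandwidth, its impulse response is zero-stuffed by L, and the result is cascaded with a low-order masking filter that removes the spectral images; all orders and cutoffs here are illustrative assumptions.

```python
# Generic IFIR filter construction with SciPy.
import numpy as np
from scipy import signal

L = 4                                        # interpolation (stretch) factor
fc = 0.05                                    # target passband edge (Nyquist = 1)

# Prototype designed at L times the bandwidth, then zero-stuffed by L.
proto = signal.firwin(31, L * fc)
shaping = np.zeros(L * (len(proto) - 1) + 1)
shaping[::L] = proto                         # insert L-1 zeros between taps

# Low-order masking filter suppresses the images created by zero-stuffing.
masking = signal.firwin(23, fc + 1.0 / L)

ifir = np.convolve(shaping, masking)         # overall IFIR response

w, h = signal.freqz(ifir, worN=1024)
print("worst stopband level (dB):",
      20 * np.log10(np.abs(h[w > (fc + 0.1) * np.pi]).max() + 1e-12))
```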
|
140 |
Essays on Applied Game Theory and Public Economics. Yang, Tsung-Han 01 May 2018 (has links)
The first chapter presents a theoretical model of electoral competition in which two parties can increase campaign contributions by choosing policies benefiting a significant interest group. However, such a decision shrinks their hardcore vote base, in which voters are well informed about the policy. The parties can then allocate the funds between campaigning and personal wealth. Unlike the core voters, independent voters can be attracted by advertisements funded by campaign spending. Using a multi-stage extensive form game, I investigate how electoral competition interacts with diversions and policy distortions. My result shows that a higher level of electoral competition helps mitigate policy distortions but prompts the parties to divert more funds.
Perfectly informed signal senders need to communicate their true type (productivity or ability), which is often private information, to potential receivers. While tests are commonly used as measures of applicants' productivity, their accuracy has been questioned. Beginning with the framework of a two-type labor market signaling game, the second chapter investigates how tests of limited reliability affect the nature of equilibria in signaling games with asymmetric information. Our results show that, if a test is inaccurate and costly, only a pooling PBE exists under certain conditions. Different forms of test inaccuracy may allow a separating PBE to exist. We also study the case of three types and find different PBEs.
The central issue of siting noxious facilities is that the host community absorbs potential costs, while all others can share the benefits without paying as much. The third chapter presents a modified Clarke mechanism to facilitate the siting decision, taking into account all residents' strategies. Suppose that the social planner is able to reasonably estimate the possible costs, depending on the host location, to each resident created by the facility. Our proposed Clarke mechanism is characterized by strategy-proofness and yields an efficient siting outcome. The issue of budget imbalance is mitigated when the compensation scheme is fully funded with the tax revenue based on the benefits. We then use a simple example to show that a weighted version of the Clarke mechanism may yield a different outcome. / Ph. D. / People worry that lobbying may affect legislative decision-making in ways that disadvantage ordinary citizens. The first chapter presents a theoretical model of electoral competition where two parties can increase campaign contributions by choosing policies benefiting a significant interest group. However, such decision will shrink their hardcore vote base where voters are well informed about the policy. The parties can then allocate the funds between campaigning and personal wealth. Different from the core voters, independent voters can be attracted by advertisements funded by campaign spending. The result shows that a higher level of electoral competition helps mitigate policy distortions, but prompts the parties to divert more funds.
Signaling games have been widely used for decades. Perfectly informed signal senders need to communicate their true type (productivity or ability), which is often private information, to potential receivers. While tests are commonly used as measures of applicants’ productivity, their accuracy has been questioned. Beginning with the framework of a two-type labor market signaling game, the second chapter investigates how tests of limited reliability affect the nature of equilibria in signaling games with asymmetric information. Our results show that, if a test is inaccurate and costly, both high- and low-productivity workers voluntarily take the test under certain conditions. Different forms of test inaccuracy may allow the existence of a specific equilibrium in which only high-productivity workers are willing to take the test. We also study the case of three types and find different types of equilibria.
The central issue of siting noxious facilities is that the host community absorbs potential costs, while all others can share the benefits without paying as much. The third chapter presents a modified Clarke mechanism to facilitate the siting decision, taking into account all residents’ strategies. Suppose that a social planner is able to reasonably estimate the possible costs, depending on the host location, to each resident created by the facility. Our proposed mechanism motivates all citizens to honestly report their preferences and yields an efficient siting outcome. The issue of budget imbalance is mitigated when the total cost, including the compensation scheme, is fully funded with tax revenues. We then use a simple example to show that a wealth-weighted version of our proposed Clarke mechanism may yield a different outcome.
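A toy sketch of the pivotal (Clarke) tax computation that the proposed siting mechanism builds on (illustrative only; the modified and weighted mechanisms described in the chapter differ): residents report net values for each candidate site, the site with the highest reported total is chosen, and each resident pays the externality their report imposes on everyone else.

```python
# Pivotal (Clarke) mechanism for choosing a host site from reported net values.

def clarke_siting(reports):
    """reports: list of dicts, one per resident, mapping site -> reported net value."""
    sites = reports[0].keys()
    total = {s: sum(r[s] for r in reports) for s in sites}
    chosen = max(total, key=total.get)

    taxes = []
    for i, _ in enumerate(reports):
        others = [r for j, r in enumerate(reports) if j != i]
        total_wo_i = {s: sum(r[s] for r in others) for s in sites}
        best_wo_i = max(total_wo_i, key=total_wo_i.get)
        # Pivotal tax: others' best attainable value without i,
        # minus what others actually get at the chosen site.
        taxes.append(total_wo_i[best_wo_i] - total_wo_i[chosen])
    return chosen, taxes

# Three residents, two candidate sites; resident 0 lives next to site A.
reports = [{"A": -10, "B": 2}, {"A": 4, "B": 1}, {"A": 3, "B": 2}]
print(clarke_siting(reports))   # ('B', [4, 0, 0]): only the pivotal resident pays
```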
|