221 |
Andreev bound states and tunneling characteristics of a noncentrosymmetric superconductor
Iniotakis, C., Hayashi, N., Sawa, Y., Yokoyama, T., May, U., Tanaka, Y., Sigrist, M. 07 1900 (has links)
No description available.
|
222 |
The Graphs of Häggkvist & Hell
Roberson, David E. January 2008 (has links)
This thesis investigates Häggkvist & Hell graphs. These graphs are an extension of the idea of Kneser graphs, and as such share many attributes with them. A variety of original results on many different properties of these graphs are given.
We begin with an examination of the transitivity and structural properties of Häggkvist & Hell graphs. Capitalizing on known results for Kneser graphs, we derive the exact values of girth, odd girth, and diameter. We also discuss subgraphs of Häggkvist & Hell graphs that are isomorphic to subgraphs of Kneser graphs. We then give some background on graph homomorphisms before giving some explicit homomorphisms of Häggkvist & Hell graphs that motivate many of our results. Using the theory of equitable partitions we compute some eigenvalues of these graphs. Moving on to independent sets, we give several bounds including the ratio bound, which is computed using the least eigenvalue. A bound for the chromatic number is given using the homomorphism to the Kneser graphs, as well as a recursive bound. We then introduce the concept of fractional chromatic number and again give several bounds. Also included are tables of the computed values of these parameters for some small cases. We conclude with a discussion of the broader implications of our results, and give some interesting open problems.
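For context, the ratio bound mentioned above is the standard Hoffman-type eigenvalue bound; a sketch for a d-regular graph G on n vertices with least eigenvalue tau (Häggkvist & Hell graphs are vertex-transitive, hence regular):
\[
\alpha(G) \;\le\; \frac{n\,(-\tau)}{d - \tau},
\]
so the least eigenvalue immediately caps the independence number; and since \(\chi_f(G) = n/\alpha(G)\) for vertex-transitive graphs, the same quantity yields the lower bound \(\chi_f(G) \ge 1 - d/\tau\) on the fractional chromatic number.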
|
224 |
New Conic Optimization Techniques for Solving Binary Polynomial Programming Problems
Ghaddar, Bissan January 2011 (has links)
Polynomial programming, a class of non-linear programming where the objective and the constraints are multivariate polynomials, has attracted the attention of many researchers in the past decade. Polynomial programming is a powerful modeling tool that captures various optimization models. Due to the wide range of applications, a research topic of high interest is the development of computationally efficient algorithms for solving polynomial programs. Even though some solution methodologies are already available and have been studied in the literature, these approaches are often either problem-specific or inapplicable to large-scale polynomial programs. Most of the available methods are based on using hierarchies of convex relaxations to solve polynomial programs; these schemes grow exponentially in size, rapidly becoming computationally expensive. The present work proposes methods and implementations that are capable of solving polynomial programs of large size. First, we propose a general framework to construct conic relaxations for binary polynomial programs; this framework allows us to re-derive previous relaxation schemes and provide new ones. In particular, three new relaxations for binary quadratic polynomial programs are presented. The first two relaxations, based on second-order cone and semidefinite programming, represent a significant improvement over previous practical relaxations for several classes of non-convex binary quadratic polynomial problems. The third relaxation is based purely on second-order cone programming; it outperforms the semidefinite-based relaxations proposed in the literature in terms of computational efficiency while being comparable in terms of bounds. To strengthen the relaxations further, a dynamic inequality generation scheme that produces valid polynomial inequalities for general polynomial programs is presented. When used iteratively, this scheme improves the bounds without incurring an exponential growth in the size of the relaxation. The scheme can be used on any initial relaxation of the polynomial program, whether second-order cone based or semidefinite based. The proposed scheme is specialized for binary polynomial programs and is in principle scalable to large general combinatorial optimization problems. In the case of binary polynomial programs, the proposed scheme converges to the global optimal solution under mild assumptions on the initial approximation of the binary polynomial program. Finally, for binary polynomial programs the proposed relaxations are integrated with the dynamic scheme in a branch-and-bound algorithm to find global optimal solutions.
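To make the relaxation idea concrete, here is a minimal sketch, assuming Python with cvxpy (not the thesis's implementation): the classical semidefinite relaxation of a binary quadratic program max x'Qx over x in {-1, 1}^n replaces the rank-one lift xx' with any positive semidefinite matrix with unit diagonal. The thesis's second-order cone and strengthened relaxations refine this basic lifting.

```python
# Minimal sketch: classical SDP relaxation of max x'Qx, x in {-1, 1}^n.
# Illustrative only -- the thesis develops tighter SOC/SDP relaxations.
import numpy as np
import cvxpy as cp

n = 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2  # symmetrize the objective matrix

X = cp.Variable((n, n), symmetric=True)  # relaxes the rank-one lift x x'
constraints = [X >> 0,                   # positive semidefinite
               cp.diag(X) == 1]          # encodes x_i^2 = 1 for +/-1 variables
prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), constraints)
prob.solve()
print("SDP upper bound on the binary quadratic program:", prob.value)
```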
|
225 |
Eliminating Design Alternatives under Interval-Based Uncertainty
Rekuc, Steven Joseph 19 July 2005 (has links)
Typically, design is approached as a sequence of decisions in which designers select what they believe to be the best alternative in each decision. While this approach can be used to arrive at a final solution quickly, it is unlikely to result in the most-preferred solution. The reason for this is that all the decisions in the design process are coupled. To determine the most preferred alternative in the current decision, the designer would need to know the outcomes of all future decisions, information that is currently unavailable or indeterminate. Since the designer cannot select a single alternative because of this indeterminate (interval-based) uncertainty, a set-based design approach is introduced. The approach is motivated by the engineering practices at Toyota and is based on the structure of the Branch and Bound Algorithm. Instead of selecting a single design alternative that is perceived as being the most preferred at the time of the decision, the proposed set-based design approach eliminates dominated design alternatives: rather than selecting the best, eliminate the worst. Starting from a large initial design space, the approach sequentially reduces the set of non-dominated design alternatives until no further reduction is possible: the remaining set cannot be rationally differentiated based on the available information. A single alternative is then selected from the remaining set of non-dominated designs.
In this thesis, the focus is on the elimination step of the set-based design method: a criterion for rational elimination under interval-based uncertainty is derived. To be efficient, the criterion takes into account shared uncertainty, that is, uncertainty shared between design alternatives. In taking this uncertainty into account, one is able to eliminate significantly more design alternatives, improving the efficiency of the set-based design approach. Additionally, the criterion uses a detailed reference design to allow more elimination of inferior design sets without evaluating each alternative in that set. The effectiveness of this elimination is demonstrated in two examples: a beam design and a gearbox design.
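As a toy illustration of the elimination step, here is a minimal sketch with hypothetical data, using a pure worst-case versus best-case rule; the criterion derived in the thesis is stronger because it also exploits shared uncertainty and reference designs.

```python
# Minimal sketch: eliminate design alternatives under interval uncertainty.
# Each alternative carries an interval (best-case cost, worst-case cost); an
# alternative is eliminated when some rival's worst case beats its best case,
# because it can then never be the preferred choice.
alternatives = {
    "A": (2.0, 5.0),  # hypothetical cost intervals
    "B": (6.0, 9.0),
    "C": (1.0, 8.0),
}

def nondominated(alts):
    keep = {}
    for name, (lo, hi) in alts.items():
        dominated = any(o_hi < lo for other, (o_lo, o_hi) in alts.items()
                        if other != name)
        if not dominated:
            keep[name] = (lo, hi)
    return keep

print(nondominated(alternatives))  # B is eliminated: A's worst case 5.0 < B's best case 6.0
```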
|
226 |
An Isotopic Study of Fiber-Water Interactions
Walsh, Frances Luella 04 August 2006 (has links)
A new technique for measuring the water content of fiber is presented. Tritiated water is added to a pulp/water suspension, whereupon the tritium partitions between the bulk water and the pulp. Through this technique a fiber:water partition coefficient, Kpw, is developed. This thesis covers the development of the Kpw procedure and three different case studies.
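One plausible form of such a coefficient, assuming it compares tritium activity per unit mass of water in the two phases (an illustrative assumption; the thesis gives the exact working definition):
\[
K_{pw} = \frac{a_{\text{fiber}}}{a_{\text{bulk}}},
\]
where \(a_{\text{fiber}}\) and \(a_{\text{bulk}}\) denote the tritium activity per gram of water associated with the fiber and of the bulk water, respectively.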
The first study involves comparing Kpw to traditional methods of fiber water content. The procedure provides a value of ten percent for the tightly bound water content of unrefined hardwood or softwood kraft fiber, either bleached or unbleached. If this water is assumed to cover the fiber surface as a monolayer, then an estimate of the wet surface area of fiber can be obtained. This estimate compares well to independent measurements of surface area.
Kpw has also been found to be valuable in furthering the understanding of refining. Based on the study, it is proposed that refining occurs in three discrete stages. First, refining removes the primary cell wall and S1 layer while beginning to swell the S2 layer. Next, internal delamination occurs within the S2 layer. Finally, fiber destruction occurs at high refining levels. Kpw clearly distinguishes these three stages of refining.
Lastly, Kpw is used to study the effect of hornification on bleached softwood kraft fiber. The recycling effects at three refining levels were characterized by Kpw and followed closely the findings of the refining study. At low and high refining levels, the impact of recycling was minimal according to Kpw results, but at 400 mL CSF the impact of recycling was much more pronounced. This could be attributed to the closing of internal delaminations within the fiber.
|
227 |
On the estimation of time series regression coefficients with long range dependence
Chiou, Hai-Tang 28 June 2011 (has links)
In this paper, we study the parameter estimation of the multiple linear time series regression model with long-memory stochastic regressors and innovations. Robinson and Hidalgo (1997) and Hidalgo and Robinson (2002) proposed a class of frequency-domain weighted least squares estimates, which are shown to achieve the Gauss-Markov bound with the standard convergence rate. In this study, we propose a time-domain generalized LSE approach, in which the inverse autocovariance matrix of the innovations is estimated via autoregressive coefficients. Simulation studies are performed to compare the proposed estimates with those of Robinson and Hidalgo (1997) and Hidalgo and Robinson (2002). The results show that the time-domain generalized LSE is comparable to Robinson and Hidalgo (1997) and Hidalgo and Robinson (2002) and attains higher efficiencies when the autoregressive or moving average coefficients of the FARIMA models have larger values. A variance reduction estimator, called the TF estimator, based on a linear combination of the proposed estimator and Hidalgo and Robinson (2002)'s estimator, is further proposed to improve the efficiency. The bootstrap method is applied to estimate the weights of the linear combination. Simulation results show that the TF estimator outperforms both the frequency-domain and the time-domain approaches.
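As a sketch of the time-domain approach, with notation assumed for illustration rather than taken from the thesis: writing the regression as \(y = X\beta + \varepsilon\), where the innovations \(\varepsilon\) have autocovariance matrix \(\Sigma\), the generalized least squares estimate is
\[
\hat{\beta}_{\mathrm{GLS}} = \bigl(X^{\top}\hat{\Sigma}^{-1}X\bigr)^{-1}X^{\top}\hat{\Sigma}^{-1}y,
\]
with \(\hat{\Sigma}^{-1}\) assembled from the fitted autoregressive coefficients of the innovation process rather than obtained by direct matrix inversion.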
|
228 |
Optimization in Geometric Graphs: Complexity and Approximation
Kahruman-Anderoglu, Sera 2009 December 1900 (has links)
We consider several related problems arising in geometric graphs. In particular, we investigate the computational complexity and approximability properties of several optimization problems in unit ball graphs and develop algorithms to find exact and approximate solutions. In addition, we establish complexity-based theoretical justifications for several greedy heuristics.
Unit ball graphs, which are defined in three-dimensional Euclidean space, have several application areas such as computational geometry, facility location and, particularly, wireless communication networks. Efficient operation of wireless networks involves several decision problems that can be reduced to well-known optimization problems in graph theory. For instance, the notion of a "virtual backbone" in a wireless network is strongly related to a minimum connected dominating set in its graph-theoretic representation.
Motivated by the vastness of application areas, we study several problems including maximum independent set, minimum vertex coloring, minimum clique partition, max-cut, and min-bisection. Although these problems have been widely studied in the context of unit disk graphs, which are the two-dimensional version of unit ball graphs, there is no established result on the complexity and approximation status of some of them in unit ball graphs. Furthermore, unit ball graphs can provide a better representation of real networks since the nodes are deployed in three-dimensional space. We prove complexity results and propose solution procedures for several problems using geometric properties of these graphs.
We outline a matching-based branch-and-bound solution procedure for the maximum k-clique problem in unit disk graphs and demonstrate its effectiveness through computational tests. We propose using the minimum bottleneck connected dominating set problem to determine the optimal transmission range of a wireless network that will ensure a certain size of "virtual backbone". We prove that this problem is NP-hard in general graphs but solvable in polynomial time in unit disk and unit ball graphs.
We also demonstrate work on theoretical foundations for simple greedy heuristics. In particular, similar to the notion of "best" approximation algorithms with respect to their approximation ratios, we prove that several simple greedy heuristics are "best" in the sense that it is NP-hard to recognize the gap between the greedy solution and the optimal solution. We show results for several well-known problems, such as maximum clique, maximum independent set, and minimum vertex coloring, and discuss extensions of these results to a more general class of problems.
In addition, we propose a "worst-out" heuristic based on edge contractions for the max-cut problem and provide analytical and experimental comparisons with a well-known "best-in" approach and its modified versions.
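For concreteness, a minimal sketch of the unit ball graph model on hypothetical points: vertices are points in three-dimensional Euclidean space, adjacent exactly when their distance is at most one.

```python
# Minimal sketch: build the edge set of a unit ball graph from 3-D points.
import itertools
import math

points = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (2.0, 2.0, 2.0), (0.9, 0.0, 0.0)]

def unit_ball_edges(pts):
    edges = []
    for (i, p), (j, q) in itertools.combinations(enumerate(pts), 2):
        if math.dist(p, q) <= 1.0:  # adjacency: Euclidean distance at most 1
            edges.append((i, j))
    return edges

print(unit_ball_edges(points))  # [(0, 1), (0, 3), (1, 3)]
```

Unit disk graphs are the same construction restricted to the plane.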
|
229 |
Spectrum Sharing in Cognitive Radio Systems Under Outage Probability Constraint
Cai, Pei Li 2009 December 1900 (has links)
For traditional wireless communication systems, static spectrum allocation is the major spectrum allocation methodology. However, according to recent investigations by the FCC, this has led to more than 70 percent of the allocated spectrum in the United States being under-utilized. Cognitive radio (CR) technology, which supports opportunistic spectrum sharing, is one idea proposed to improve the overall utilization efficiency of the radio spectrum.
In this thesis we consider a CR communication system based on spectrum sharing schemes, where we have a secondary user (SU) link with multiple transmitting antennas and a single receiving antenna, coexisting with a primary user (PU) link with a single receiving antenna. At the SU transmitter (SU-Tx), the channel state information (CSI) of the SU link is assumed to be perfectly known, while the interference channel from the SU-Tx to the PU receiver (PU-Rx) is not perfectly known due to limited cooperation between the SU and the PU. As such, the SU-Tx is only assumed to know that the interference channel gain can take values from a finite set with certain probabilities. Considering an SU transmit power constraint, our design objective is to determine the transmit covariance matrix that maximizes the SU rate, while we protect the PU by enforcing both a PU average interference constraint and a PU outage probability constraint. This problem is first formulated as a non-convex optimization problem with a non-explicit probabilistic constraint, which is then approximated as a mixed binary integer programming (MBIP) problem and solved with the Branch and Bound (BB) algorithm. The complexity of the BB algorithm is analyzed and numerical results are presented to validate the effectiveness of the proposed algorithm. A key result proved in this thesis is that the rank of the optimal transmit covariance matrix is one, i.e., CR beamforming is optimal under PU outage constraints. Finally, a heuristic algorithm is proposed to provide a suboptimal solution to our MBIP problem by efficiently (in polynomial time) solving a particularly-constructed convex problem.
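One plausible formulation consistent with the abstract, with all symbols assumed for illustration rather than taken from the thesis (transmit covariance S, SU channel h, interference channel g, power budget P, interference thresholds, and outage tolerance):
\[
\begin{aligned}
\max_{\mathbf{S} \succeq 0} \quad & \log_2\!\bigl(1 + \mathbf{h}^{H}\mathbf{S}\mathbf{h}\bigr) \\
\text{s.t.} \quad & \operatorname{tr}(\mathbf{S}) \le P, \\
& \mathbb{E}\bigl[\mathbf{g}^{H}\mathbf{S}\mathbf{g}\bigr] \le \Gamma_{\text{avg}}, \\
& \Pr\bigl(\mathbf{g}^{H}\mathbf{S}\mathbf{g} > \Gamma_{\text{peak}}\bigr) \le \epsilon .
\end{aligned}
\]
Because the interference gain g takes values in a finite set, the outage constraint becomes combinatorial, which is what leads to the MBIP reformulation solved by branch and bound.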
|
230 |
What Is Writing? Student Practices and Perspectives on the Technologies of Literacy in College Composition
Spring, Sarah Catherine 2010 August 1900 (has links)
Despite the increasing presence of technology in composition classrooms, students have not yet accepted the idea of multiple writing technologies; in fact, most students do not yet fully understand the role of the word processor in their individual writing process. The research goal of this dissertation is therefore to examine the physical experience of writing, both in and outside of a computer composition classroom, from the students' perspective by investigating their definitions of writing and how they understand the relationship between writing and technology. To highlight student writing practices, the analysis uses both qualitative and quantitative data from two classes in a PC computer lab at Texas A&M University, one freshman composition and one advanced composition course. Several important patterns have emerged from the analysis of this data, and each of the main chapters focuses on a different student perspective.
Chapter II argues that students tend to view computers simply as instruments or tools, an understanding that affects how they perceive and work with classroom computers. Because how they perceive and approach computers affects their writing, Chapter III examines student theories of writing and technology. The discussion postings indicate that students write differently at home than they do in the classroom, and this distinction creates context-bound theories. They are more familiar with the personal context, often exhibiting an inability to translate their ease with this type of writing or with computer functions into an academic environment. Their makeshift theories lead to writing practices, and Chapter IV examines student responses for patterns regarding how writing happens. Specifically, discomfort with academic writing leads students to compose with a computer because they believe technology makes this process faster and easier; however, their choice of medium can actually derail writing when made for reasons of ease or convenience.
This study finds that the physical set-up of the classroom and the curriculum are factors that have perpetuated these problems. Despite these obstacles, a computer classroom approach has unique advantages, and a new approach is proposed, one that focuses on developing rhetorical flexibility, the ability of students to produce multiple texts in multiple contexts.
|