951. Turbo Equalization for HSPA / Turboutjämning för HSPA. Konuskan, Cagatay (January 2010)
New high-quality mobile telecommunication services are offered every day, and the demand for higher data rates is continuously increasing. To maximize the uplink throughput in HSPA when transmission propagates through a dispersive channel causing self-interference, equalizers are used. One interesting solution for improving equalizer performance is Turbo equalization, in which the equalizer and decoder exchange information iteratively. In this thesis a literature survey of Turbo equalization methods has been performed, and a chosen method has been implemented for the uplink HSPA standard to evaluate its performance in heavily dispersive channels. The selected algorithm has been adapted for multiple receiving antennas, oversampled processing and HARQ retransmissions. The results derived from computer-based link simulations show that the implemented algorithm provides a gain of approximately 0.5 dB when performing up to 7 Turbo equalization iterations. Gains of up to 1 dB have been obtained by disabling power control, not using retransmission combining and utilizing a single receiver antenna. The algorithm has also been evaluated with respect to alternative dispersive channels, Log-MAP decoding, different code rates, the number of Turbo equalization iterations and the number of Turbo decoding iterations. The simulation results do not motivate a real implementation of the chosen algorithm, considering the increased computational complexity and the small gain achieved in a full-featured receiver system. Further studies are needed before drawing final conclusions about the HSPA uplink Turbo equalization approach.
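At the heart of Turbo equalization is the exchange of extrinsic information between the equalizer and the decoder. The abstract does not reproduce the update equations, so the following is the conventional log-likelihood ratio (LLR) formulation from the literature rather than the thesis's own notation:

```latex
L^{e}_{\mathrm{eq}}(c_k) = L_{\mathrm{eq}}(c_k) - L^{e}_{\mathrm{dec}}(c_k),
\qquad
L^{e}_{\mathrm{dec}}(c_k) = L_{\mathrm{dec}}(c_k) - L^{e}_{\mathrm{eq}}(c_k)
```

Here L_eq and L_dec are the a posteriori LLRs of coded bit c_k produced by the equalizer and decoder respectively; each block passes only its extrinsic part (superscript e) to the other, where it serves as a priori information in the next iteration, so that no block has its own information fed back to it.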
952. Explaining SOA Service Granularity: How IT-strategy shapes services. Reldin, Pierre; Sundling, Peter (January 2007)
Today’s competitive business environment forces companies to introduce new product and process innovations at an increasing pace. Almost every aspect of the modern business is supported by information technology systems which, consequently, must evolve at the same pace as the business. A company’s strategic view on IT reflects the strategic importance of IT in the organization, both in terms of the opportunities IT is expected to create and the commitment to IT the business organization is willing to make. SOA is an emerging concept which aims to structure IT in a more flexible manner. The basic idea is to encapsulate distinct units of business logic in reusable services, which can be combined to support business processes. The term service granularity refers to the amount of logic contained in a service. Even though there is immense hype around SOA today, the concept of service granularity is still relatively unexplored. A service should be coarse-grained enough to be reusable, but at the same time specific enough to fit the process. Most SOA literature avoids the subject as too implementation-specific and seldom attempts to concretize the rather abstract term. The research was conducted at Handelsbanken, which for years has worked with service-oriented principles. The researchers were given the opportunity to closely analyze the bank’s service initiative. In order to gain an understanding beyond merely technical aspects, a rich case study was built, based on interviews with professionals at all levels of the organization. The research objective was divided into three parts. The first part was to factorize the notion of service granularity, or in other words to find a number of factors which together precisely describe the granularity of a service. The second part was to explicate how the factors are interrelated, i.e. how changing one factor will affect the others. The final part was to explain how an organization’s strategic view on IT affects the optimal service granularity. It was found that an organization’s strategic view on IT affects the amount of complexity the organization is able to handle, limiting the optimal SOA granularity, which can be precisely described using three factors: reach, range and realm. Reach defines the locations and people the service is capable of connecting, range defines how much functionality the service offers, and realm defines what kind of functionality the service offers.
953. A Parameterized Algorithm for Upward Planarity Testing of Biconnected Graphs. Chan, Hubert (January 2003)
We can visualize a graph by producing a geometric representation of the graph in which each node is represented by a single point on the plane, and each edge is represented by a curve that connects its two endpoints.
Directed graphs are often used to model hierarchical structures; in order to visualize the hierarchy represented by such a graph, it is desirable that a drawing of the graph reflects this hierarchy. This can be achieved by drawing all the edges in the graph such that they all point in an upwards direction. A graph that has a drawing in which all edges point in an upwards direction and in which no edges cross is known as an upward planar graph. Unfortunately, testing if a graph is upward planar is NP-complete.
Parameterized complexity is a technique used to find efficient algorithms for hard problems, and in particular, NP-complete problems. The main idea is that the complexity of an algorithm can be constrained, for the most part, to a parameter that describes some aspect of the problem. If the parameter is fixed, the algorithm will run in polynomial time.
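In standard notation, which the abstract leaves implicit, a problem with input size n and parameter k is fixed-parameter tractable if it admits an algorithm running in time

```latex
f(k) \cdot n^{O(1)}
```

for some computable function f. All of the combinatorial explosion is confined to f(k), so for every fixed value of the parameter the running time is polynomial in n; the running-time bound obtained later in this thesis has exactly this form, with f(k) = k!·8^k.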
In this thesis, we investigate contracting an edge in an upward planar graph that has a specified embedding, and show that we can determine whether or not the resulting embedding is upward planar given the orientation of the clockwise and counterclockwise neighbours of the given edge. Using this result, we then show that under certain conditions, we can join two upward planar graphs at a vertex and obtain a new upward planar graph. These two results expand on work done by Hutton and Lubiw.
Finally, we show that a biconnected graph has at most k!·8^(k-1) planar embeddings, where k is the number of triconnected components. By using an algorithm by Bertolazzi et al. that tests whether a given embedding is upward planar, we obtain a parameterized algorithm, where the parameter is the number of triconnected components, for testing the upward planarity of a biconnected graph. This algorithm runs in O(k!·8^k·n^3) time.
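To get a sense of how completely the parameter dominates the running time, the bound k!·8^(k-1) can be tabulated directly. The following small script is purely illustrative and not part of the thesis:

```python
from math import factorial

def embedding_bound(k: int) -> int:
    """Upper bound k! * 8**(k - 1) on the number of planar embeddings
    of a biconnected graph with k triconnected components."""
    return factorial(k) * 8 ** (k - 1)

for k in range(1, 7):
    print(f"k = {k}: at most {embedding_bound(k):,} embeddings")
```

Even modest parameter values explode (k = 6 already allows over 23 million embeddings) while the polynomial factor n^3 stays tame, which is precisely the trade-off parameterized algorithms exploit.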
954. An Attempt to Automate NP-Hardness Reductions via SO∃ Logic. Nijjar, Paul (January 2004)
We explore the possibility of automating NP-hardness reductions. We motivate the problem from an artificial intelligence perspective, then propose the use of second-order existential (SO∃) logic as a representation language for decision problems. Building upon the theoretical framework of J. Antonio Medina, we explore the possibility of implementing seven syntactic operators. Each operator transforms SO∃ sentences in a way that preserves NP-completeness. We subsequently propose a program which implements these operators. We discuss a number of theoretical and practical barriers to this task. We prove that determining whether two SO∃ sentences are equivalent is as hard as GRAPH ISOMORPHISM, and that determining whether an arbitrary SO∃ sentence represents an NP-complete problem is undecidable.
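As a concrete illustration of the representation language (a textbook example, not one taken from the thesis), graph 3-colourability, a classic NP-complete problem, is expressed by the SO∃ sentence

```latex
\exists R\,\exists G\,\exists B\;\Big[\,
\forall x\,\big(R(x) \lor G(x) \lor B(x)\big)
\;\land\;
\forall x\,\forall y\,\big(E(x,y) \rightarrow
\neg(R(x)\land R(y)) \land \neg(G(x)\land G(y)) \land \neg(B(x)\land B(y))\big)\Big]
```

where the second-order quantifiers guess the three colour classes and the first-order part checks that adjacent vertices never share a colour. By Fagin's theorem every NP problem has such a characterization, which is what makes SO∃ a natural input format for operators that must preserve NP-completeness.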
955. Complexity Reduced Behavioral Models for Radio Frequency Power Amplifiers’ Modeling and Linearization. Fares, Marie-Claude (January 2009)
Radio frequency (RF) communications are limited to a number of frequency bands scattered over the radio spectrum. Applications over such bands increasingly require more versatile, data-intensive wireless communications, which leads to the need for highly bandwidth-efficient interfaces operating over wideband frequency ranges. Whether for a base station or a mobile device, the regulations and adequate transmission of such schemes place stringent requirements on the design of transmitter front-ends. Increasingly strenuous and challenging hardware design criteria must be met, especially in the design of power amplifiers (PAs), the bottleneck of the transmitter’s design tradeoff between linearity and power efficiency. The power amplifier exhibits nonideal behavior, characterized by both nonlinearity and memory effects, which heavily affects that tradeoff and therefore requires an effective linearization technique, namely digital predistortion (DPD). The effectiveness of the DPD is highly dependent on the modeling scheme used to compensate for the PA’s nonideal behavior. In fact, its viability is determined by the scheme’s accuracy and implementation complexity. Generic behavioral models for nonlinear systems with memory have been used, treating the PA as a black box and requiring RF designers to perform extensive testing to determine the minimal-complexity structure that achieves satisfactory results. This thesis first proposes a direct systematic approach based on the parallel Hammerstein structure to determine the exact number of coefficients needed in a DPD. Then a physical explanation of memory effects is detailed, which leads to a closed-form expression for the characteristic behavior of the PA based entirely on circuit properties. The physical expression is implemented and tested as a modeling scheme. Moreover, a link between this formulation and proven behavioral models is explored, namely the Volterra series and the Memory Polynomial. The formulation shows the correlation between parameters of generic behavioral modeling schemes when applied to RF PAs and demonstrates redundancy based on the physical existence or absence of modeling terms, detailed for the proven Memory Polynomial modeling and linearization scheme.
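For reference, the Memory Polynomial model mentioned above has a well-known general form; this is the standard formulation from the behavioral-modeling literature, not the thesis's derived physical expression:

```latex
y(n) = \sum_{k=1}^{K} \sum_{q=0}^{Q} a_{kq}\, x(n-q)\, \lvert x(n-q) \rvert^{\,k-1}
```

where x(n) and y(n) are the complex baseband input and output, K is the nonlinearity order, Q is the memory depth, and the a_kq are complex coefficients. It is the diagonal-kernel special case of the full Volterra series, which is why redundancy among its terms, the question the thesis addresses from circuit physics, arises naturally.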
956. How incentive contracts and task complexity influence and facilitate long-term performance. Berger, Leslie (10 July 2009)
The purpose of this study is to investigate how different incentive contracts that include forward-looking and contemporaneous goals motivate managers to make decisions consistent with the organization’s long-term objectives, in tasks of varying complexity. Two research questions are addressed. First, in a long-term horizon setting, how do incentive contracts based on various combinations of forward-looking and contemporaneous measures influence decisions? Second, how does task complexity influence the expected effect of various incentive contracts on management decisions?
I address my research questions using a multi-period experiment in which I compare the effects of three different incentive structure types and two different levels of task complexity. Results show that in a low-complexity task, individuals perform better when only contemporaneous goal attainment is rewarded in the incentive contract than when both forward-looking and contemporaneous goal attainment is rewarded. In a high-complexity task, individuals perform better when both contemporaneous and forward-looking goal attainment is rewarded, but only when contemporaneous goal attainment is weighted more heavily in the incentive contract.
My research contributes to the existing literature in two ways. First, this is the first study of which I am aware that compares the performance effects of long-term incentive contracts rewarding forward-looking and contemporaneous goal attainment. Second, this study is the first of which I am aware to experimentally test incentive contracts, for employees with a long-term horizon, that incorporate various weightings of forward-looking measures in the contract. In addition, this study is amongst the first to examine the impact of task complexity on incentive contract effectiveness.
957. Quantum Strategies and Local Operations. Gutoski, Gustav (January 2009)
This thesis is divided into two parts.
In Part I we introduce a new formalism for quantum strategies, which specify the actions of one party in any multi-party interaction involving the exchange of multiple quantum messages among the parties.
This formalism associates with each strategy a single positive semidefinite operator acting only upon the tensor product of the input and output message spaces for the strategy.
We establish three fundamental properties of this new representation for quantum strategies and we list several applications, including a quantum version of von Neumann's celebrated 1928 Min-Max Theorem for zero-sum games and an efficient algorithm for computing the value of such a game.
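The classical statement referred to here, for a zero-sum game with payoff matrix A and mixed strategies ranging over the probability simplices Δ_m and Δ_n, is

```latex
\max_{x \in \Delta_m} \min_{y \in \Delta_n} x^{\mathsf{T}} A y
\;=\;
\min_{y \in \Delta_n} \max_{x \in \Delta_m} x^{\mathsf{T}} A y
```

and the quantum generalization established in the thesis replaces the simplices with sets of strategies represented by the positive semidefinite operators described above.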
In Part II we establish several properties of a class of quantum operations that can be implemented locally with shared quantum entanglement or classical randomness.
In particular, we establish the existence of a ball of local operations with shared randomness lying within the space spanned by the no-signaling operations and centred at the completely noisy channel.
The existence of this ball is employed to prove that the weak membership problem for local operations with shared entanglement is strongly NP-hard.
We also provide characterizations of local operations in terms of linear functionals that are positive and "completely" positive on a certain cone of Hermitian operators, under a natural notion of complete positivity appropriate to that cone.
We end the thesis with a discussion of the properties of no-signaling quantum operations.
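As a gloss on the terminology (standard usage, not a definition quoted from the thesis): the completely noisy channel at the centre of the ball is the map that discards its input and emits the maximally mixed state,

```latex
N(\rho) = \operatorname{Tr}(\rho)\, \frac{\mathbb{1}}{\dim \mathcal{Y}}
```

where the normalized identity is the maximally mixed state on the output space. The ball result then says that every operation in the stated span that is sufficiently close to this channel can be implemented locally with shared randomness.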
958. Two Coalitional Models for Network Formation and Matching Games. Branzei, Simina (January 2011)
This thesis comprises two separate game-theoretic models that fall under the general umbrella of network formation games. The first is a coalitional model of interaction in social networks that is based on the idea of social distance, in which players seek interactions with similar others. Our model captures some of the phenomena observed on such networks, such as homophily-driven interactions and the formation of small worlds for groups of players. Using social distance games, we analyze the interactions between players on the network, study the properties of efficient and stable networks, relate them to the underlying graphical structure of the game, and give an approximation algorithm for finding optimal social welfare. We then show that efficient networks are not necessarily stable, and stable networks do not necessarily maximise welfare. We use the stability gap to investigate the welfare of stable coalition structures, and propose two new solution concepts with improved welfare guarantees.
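A common formalization of the social-distance utility (stated here for orientation; the abstract itself does not give the formula) assigns player i in coalition C the value

```latex
u_i(C) = \frac{1}{\lvert C \rvert} \sum_{j \in C \setminus \{i\}} \frac{1}{d_C(i,j)}
```

where d_C(i,j) is the shortest-path distance between i and j within C (with the term taken as 0 when j is unreachable), so close, similar players contribute most and distant ones little, which is what drives the homophily effects described above.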
The second model is a compact formulation of matchings with externalities. Our formulation achieves tractability of the representation at the expense of full expressivity. We formulate a template solution concept that applies to games involving externalities, and instantiate it in the context of optimistic, neutral, and pessimistic reasoning. We then investigate the complexity of the representation in the context of many-to-many and one-to-one matchings, and provide both computational hardness results and polynomial-time algorithms where applicable.
959. Looking beyond: the RNs' experience of caring for older hospitalized patients. Molnar, Gaylene L (9 March 2005)
Older patients comprise a large portion of patients in the acute care setting. Registered Nurses (RNs) are the main care providers in the hospital setting. RNs caring for older hospitalized patients are affected by many factors, including workload pressures, issues related to the acute care environment and attitudes toward older patients. However, a literature review identified only a limited number of studies exploring the RNs' experience of caring for older patients in the acute care setting. This study explored the RNs' experience of caring for older patients (age 65 and older) on an orthopedic unit in an acute care hospital. Saturation was reached with a purposive sample of nine RNs working on the orthopedic unit, including eight females and one male. Participants were interviewed using broad open-ended questions, followed by questions more specific to emerging themes. All interviews were audio-taped and transcribed verbatim. Data were analyzed using Glaser's (1992) grounded theory approach. Participants described the basic social problem as dealing with the complexity of older patients. The basic social process identified was the concept of looking beyond. Looking beyond was described as looking at the big picture to find what lies outside the scope of the ordinary. Three sub-processes of looking beyond were identified: connecting, searching, and knowing. Connecting was described as getting to know the patient as a person by taking time, respecting and understanding the individual. Searching was described as digging deeper, searching for the unknown by looking for clues and mining everywhere for information. Knowing was described as intuitively knowing what is going to happen and what the older patient needs by pulling it all together and knowing what to expect. These dynamic sub-processes provided the RN with the relationship and information required to look beyond and manage the older patient's complexity. The results of this study have implications for nursing practice, education and research. The findings may provide RNs with a process to manage the complex care of a large portion of our population.
960. Low-Complexity Interleaver Design for Turbo Codes. List, Nancy Brown (12 July 2004)
Sub-vector interleaving, a low-complexity method of interleaver design for both parallel and serially concatenated convolutional codes (PCCCs and SCCCs, respectively), is presented here. Because the method is low-complexity, it is particularly suitable for designing long interleavers.
Sub-vector interleaving is based on a dynamical system representation of the constituent encoders employed by PCCCs and SCCCs. Simultaneous trellis termination can be achieved with a single tail sequence using sub-vector interleaving for both PCCCs and SCCCs. In the case of PCCCs, the error floor can be lowered by sub-vector interleaving, which allows for an increase in the weight of the free distance codeword and the elimination of the lowest weight codewords generated by weight-2 terminating input sequences that determine the error floor at low signal-to-noise ratios (SNRs). In the case of SCCCs, sub-vector interleaving lowers the error floor by increasing the weight of the free distance codewords. Interleaver gain can also be increased for SCCCs by interleaving the lowest weight codewords from the outer encoder into non-terminating input sequences to the inner encoder.
Sub-vector constrained S-random interleaving, a method for incorporating S-random interleaving into sub-vector interleavers, is also proposed. Simulations show that short interleavers incorporating S-random interleaving into sub-vector interleavers perform as well as or better than those designed by the best and most complex methods for designing short interleavers. A method for randomly generating sub-vector constrained S-random interleavers that maximizes the spreading factor, S, is also examined.
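The plain S-random construction that the sub-vector constrained variant builds on is straightforward to sketch. The following generator is illustrative only; it assumes nothing about the sub-vector constraint, which is specific to this thesis:

```python
import random

def s_random_interleaver(n: int, s: int, max_restarts: int = 1000) -> list[int]:
    """Generate an S-random permutation of {0, ..., n-1}: any two
    positions within distance s of each other are mapped more than
    s apart.  Greedy rejection construction; restart on failure."""
    for _ in range(max_restarts):
        pool = list(range(n))
        random.shuffle(pool)
        perm: list[int] = []
        for j in range(n):
            # take the first shuffled candidate that keeps the spread
            choice = next(
                (i for i, cand in enumerate(pool)
                 if all(abs(cand - perm[j - d]) > s
                        for d in range(1, min(s, j) + 1))),
                None,
            )
            if choice is None:
                break  # greedy fill got stuck; reshuffle and retry
            perm.append(pool.pop(choice))
        else:
            return perm
    raise RuntimeError("no S-random permutation found; try a smaller s")

# Spreads up to roughly sqrt(n/2) are usually achievable in practice.
print(s_random_interleaver(64, 4))
```

A sub-vector constrained version would additionally restrict which positions each input sub-vector may occupy, on top of the spread condition enforced here.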
The convergence of the turbo decoding algorithm to maximum-likelihood decisions on the decoded input sequence is required to demonstrate the improvement in bit error rate (BER) performance achieved by sub-vector interleavers. Convergence to maximum-likelihood decisions by the decoder does not always occur in the regions where it is feasible to generate the statistically significant numbers of error events required to approximate the BER performance of a particular coding scheme employing a sub-vector interleaver. Therefore, a technique for classifying error events by the mode of convergence of the decoder is used to illuminate the effect of the sub-vector interleaver at SNRs where it is possible to simulate the BER performance of the coding scheme.