1.
On Approximate Isomorphisms Between C*-Algebras. Tzeng, Jez-Hung, 30 June 2004
In this thesis, we will study several problems about approximate mappings between C*-algebras.
2.
Topics in the Notion of Amenability and its Generalizations for Banach Algebras. Makareh Shireh, Miad, 14 September 2010
This thesis has two parts. The first part deals with some questions in amenability. We show that for a Banach algebra A with a bounded approximate identity, the amenability of the projective tensor product of A with itself, the amenability of the projective tensor product of A with A^op, and the amenability of A are all equivalent. Also, if A is a closed ideal in a commutative Banach algebra B, then the (weak) amenability of the projective tensor product of A and B implies the (weak) amenability of A. Finally, we show that if the Banach algebra A is amenable through a multiplication π, then it is also amenable through any multiplication ρ such that the norm of π - ρ is less than 1/11.
The second part deals with questions in generalized notions of amenability such as approximate amenability and bounded approximate amenability. First we prove some new results about approximately amenable Banach algebras. Then we state a characterization of approximately amenable Banach algebras and a characterization of boundedly approximately amenable Banach algebras.
Finally, we prove that B(l^p (E)) is not approximately amenable for Banach spaces E with certain properties. As a corollary of this part, we give a new proof that B(l^2) is not approximately amenable.
4.
Three-level designs robust to model uncertainty. Tsai, Pi-Wen, January 1998
No description available.
5.
SUPPORTING APPROXIMATE COMPUTING ON COARSE GRAINED RE-CONFIGURABLE ARRAY ACCELERATORS. Dickerson, Jonathan, 1 December 2019
Recent research has shown that approximate computing and Coarse-Grained Reconfigurable Arrays (CGRAs) are promising computing paradigms for reducing energy consumption in compute-intensive environments. CGRAs provide a promising middle ground between flexible but energy-inefficient Field-Programmable Gate Arrays (FPGAs) and energy-efficient but inflexible Application-Specific Integrated Circuits (ASICs). By integrating approximate computing into CGRAs, substantial gains in energy efficiency can be achieved at the cost of arithmetic precision. However, some applications require a certain level of accuracy in their calculations to perform their tasks effectively. The ability to control the accuracy of approximate computing at run time is an emerging topic.
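As a purely illustrative sketch of run-time accuracy control (a commonly studied approximate-adder style, not the thesis's CGRA design), here is a lower-part-OR approximate adder in Python, where the number of truncated low bits acts as a run-time knob trading accuracy against the energy that carry propagation would cost in hardware:

```python
def approx_add(a: int, b: int, truncate_bits: int) -> int:
    """Lower-part-OR approximate addition: the low `truncate_bits` bits
    are OR-ed instead of added, skipping carry propagation there."""
    mask = (1 << truncate_bits) - 1
    low = (a & mask) | (b & mask)          # cheap, carry-free low part
    high = ((a >> truncate_bits) + (b >> truncate_bits)) << truncate_bits
    return high | low

# Run-time accuracy knob: more truncated bits, less energy, more error.
exact = 1000 + 777
for k in (0, 4, 8):
    approx = approx_add(1000, 777, k)
    print(k, approx, abs(approx - exact))
```

With `truncate_bits = 0` the adder is exact; raising it widens the carry-free region, which is the kind of accuracy/energy trade the abstract describes controlling at run time.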
6.
Assessing Approximate Arithmetic Designs in the Presence of Process Variations and Voltage Scaling. Naseer, Adnan Aquib, 1 January 2015
As environmental concerns and the portability of electronic devices move to the forefront of priorities, innovative approaches that reduce processor energy consumption are sought. Approximate arithmetic units are one avenue by which significant energy savings can be achieved. Fundamental arithmetic units are approximated by judiciously reducing the number of transistors in the circuit. A satisfactory tradeoff between energy and accuracy can be determined by trial-and-error evaluation of each functional approximation. Although the accuracy of the output is compromised, it is decreased only to an extent that can still fulfill processing requirements. A number of scenarios are evaluated with approximate arithmetic units to cross-check them thoroughly against their accurate counterparts; the attributes evaluated include energy consumption, delay, and process variation. Additionally, novel methods to create such approximate units are developed. One such method uses a Genetic Algorithm (GA), which mimics biologically inspired evolutionary techniques to obtain an optimal solution. The GA employs genetic operators such as crossover and mutation to mix and match several different types of approximate adders, finding the best possible combination of such units for a given input set. As the GA usually consumes a significant amount of time as the size of the input set increases, we tackled this problem by parallelizing the fitness computation, the most compute-intensive task; this improved the computation time from 2,250 seconds to 1,370 seconds with up to 8 threads, using both OpenMP and Intel TBB. Apart from seeding the GA with multiple approximate units, other seeds such as basic logic gates with a limited logic space were used to develop completely new multi-bit approximate adders with good fitness levels. The effect of process variation was also calculated.
As the number of transistors is reduced, the distribution of transistor widths and gate-oxide thickness may shift away from a Gaussian curve. This was demonstrated in different types of single-bit adders, with the delay sigma increasing from 6 psec to 12 psec; when the voltage is scaled to Near-Threshold-Voltage (NTV) levels, sigma increases by up to a further 5 psec. Approximate arithmetic units were not greatly affected by the change in the distribution of gate-oxide thickness. Even at the 3-sigma value, the delay of an approximate adder remains below that of a precise adder with additional transistors. Additionally, it is demonstrated that the GA finds innovative combinations of approximate arithmetic units that achieve a good balance between energy savings and accuracy.
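The GA-based search described above can be sketched in miniature. The chromosome encoding, the two adder-cell types, and the fitness weighting below are illustrative assumptions, not the thesis's actual seeded units or its OpenMP/TBB-parallelised fitness evaluation; the sketch only shows the crossover/mutation loop over per-bit adder choices:

```python
import random

ADDER_TYPES = ("exact", "lower_or")   # per-bit-slice cell choices (illustrative)

def bit_add(a, b, carry, kind):
    if kind == "exact":               # full adder
        return a ^ b ^ carry, (a & b) | (carry & (a ^ b))
    return a | b, 0                   # "lower_or": cheap, ignores carries

def add_with_chromosome(x, y, chrom):
    carry, out = 0, 0
    for i, kind in enumerate(chrom):  # bit 0 is the least significant
        s, carry = bit_add((x >> i) & 1, (y >> i) & 1, carry, kind)
        out |= s << i
    return out

def fitness(chrom, samples):
    # Lower is better: total absolute error plus a cost penalty per exact cell.
    bits = len(chrom)
    err = sum(abs(add_with_chromosome(x, y, chrom) - (x + y) % (1 << bits))
              for x, y in samples)
    return err + 0.1 * chrom.count("exact")

def evolve(bits=8, pop_size=20, gens=15, seed=0):
    rng = random.Random(seed)
    samples = [(rng.randrange(1 << bits), rng.randrange(1 << bits))
               for _ in range(40)]
    pop = [[rng.choice(ADDER_TYPES) for _ in range(bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, samples))
        survivors = pop[:pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                # point mutation
                child[rng.randrange(bits)] = rng.choice(ADDER_TYPES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: fitness(c, samples))

print(evolve())
```

The per-sample error term is what the abstract's parallelisation targets: each chromosome's fitness sums over the whole input set, so fitness evaluation dominates the run time as that set grows.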
7.
Reusing and Updating Preconditioners for Sequences of Matrices. Grim-McNally, Arielle Katherine, 15 June 2015
For sequences of related linear systems, computing a preconditioner for every system can be expensive. Often a fixed preconditioner is used, but this may not be effective as the matrix changes. This research examines the benefits of both reusing and recycling preconditioners, with special focus on ILUTP and factorized sparse approximate inverses, and proposes an update that we refer to as a sparse approximate map, or SAM update. An analysis of the residual and eigenvalues of the map is provided. Applications include the Quantum Monte Carlo method, model reduction, oscillatory hydraulic tomography, diffuse optical tomography, and Helmholtz-type problems. / Master of Science
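The sparse-approximate-map idea can be illustrated with a toy dense computation: find a map N with A_new N ≈ A_old over a fixed sparsity pattern, so a preconditioner built for A_old can be reused for A_new. The function, the column-wise least-squares fits, and the example pattern below are illustrative assumptions, not the thesis's algorithm (a real implementation would exploit sparsity throughout):

```python
import numpy as np

def sam_update(A_new, A_old, pattern):
    """Fit a sparse approximate map N with A_new @ N ~= A_old, column by
    column, allowing nonzeros in column j of N only at rows pattern[j]."""
    n = A_old.shape[0]
    N = np.zeros((n, n))
    for j in range(n):
        rows = pattern[j]
        # Least-squares fit of A_old[:, j] from the allowed columns of A_new.
        coef, *_ = np.linalg.lstsq(A_new[:, rows], A_old[:, j], rcond=None)
        N[rows, j] = coef
    return N

rng = np.random.default_rng(0)
A_old = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
A_new = A_old + 0.05 * rng.standard_normal((5, 5))  # a nearby matrix in the sequence
pattern = [[j, (j + 1) % 5] for j in range(5)]      # two nonzeros per column (illustrative)
N = sam_update(A_new, A_old, pattern)
# If P preconditions A_old, then N @ P can serve as a (right) preconditioner
# for A_new, since A_new @ N ~= A_old.
print(np.linalg.norm(A_new @ N - A_old))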
8.
Top-percentile traffic routing problem. Yang, Xinan, January 2012
Multi-homing is a technology used by Internet Service Providers (ISPs) to connect to the Internet via multiple networks. This connectivity enhances the network reliability and service quality of the ISP, but using multiple networks may also impose multiple costs. To make full use of the underlying networks at minimum cost, ISPs need a routing strategy; the optimal strategy, of course, depends on the pricing regime used by the network providers. In this study we investigate a relatively new pricing regime: top-percentile pricing. Under top-percentile pricing, network providers divide the charging period into several fixed-length time intervals and calculate their charge according to the traffic volume shipped during the θ-th highest time interval. Unlike traditional pricing regimes, network design under top-percentile pricing has not been fully studied. This thesis investigates the optimal routing strategy in the case where network providers charge ISPs according to top-percentile pricing; we call this problem the Top-percentile Traffic Routing Problem (TpTRP). Since the ISP cannot predict the next time interval's traffic volume in a real-world application, in our setting the TpTRP is a multi-stage stochastic optimisation problem: routing decisions must be made at the beginning of every time period, before the amount of traffic to be sent is known. The stochastic nature of the TpTRP forms the central difficulty of this study. Several approaches are investigated in either the modelling or the solution of the problem. We begin by exploring several simplifications of the original TpTRP to gain insight into the features of the problem. Some of these admit analytical solutions, which lead to bounds on the achievable optimal solution. We also establish bounds by investigating several "naive" routing policies.
In the second part of this work, we build the multi-stage stochastic programming model of the TpTRP, which is hard to solve due to the integer variables introduced in the calculation of the top-percentile traffic. A lift-and-project based cutting-plane method is investigated for solving this stochastic mixed-integer program on very small instances of the TpTRP, but it is too inefficient to be applicable to larger ones. As an alternative, we explore the solution of the TpTRP as a Stochastic Dynamic Programming (SDP) problem via a discretization of the state space. The SDP model yields achievable routing policies on small instances of the TpTRP, which of course improve on the naive routing policies; however, the SDP approach suffers from the curse of dimensionality, which restricts its applicability. To overcome this, we use Approximate Dynamic Programming (ADP), which largely avoids the curse of dimensionality by exploiting the structure of the problem to construct parameterized approximations of the value function in SDP, training the model iteratively until the parameters converge. The resulting ADP model, with a discrete parameter for every time interval, works well for medium-size instances of the TpTRP, though it still takes too long to train on large instances. To make realistically sized TpTRP problems solvable, we improve the ADP model by using Bezier curves/surfaces to aggregate over time. This modification accelerates parameter training, which makes the realistically sized TpTRP tractable.
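The top-percentile charge itself is simple to state in code: rank the per-interval traffic volumes and pay for the θ-th highest. A minimal sketch (the variable names and unit price are illustrative, not from the thesis):

```python
def top_percentile_cost(traffic_volumes, theta, unit_price):
    """Charge based on the theta-th highest interval's traffic volume,
    so the theta-1 busiest intervals are effectively free."""
    ranked = sorted(traffic_volumes, reverse=True)
    return unit_price * ranked[theta - 1]

# Ten charging intervals; with theta=2 the single largest burst is ignored.
volumes = [12, 7, 30, 9, 14, 8, 11, 10, 6, 13]
print(top_percentile_cost(volumes, theta=1, unit_price=2))  # 60: pay for the peak
print(top_percentile_cost(volumes, theta=2, unit_price=2))  # 28: the peak is free
```

This discard-the-peaks structure is what makes the routing problem hard: an ISP can ship bursts over a network during intervals that will fall inside that network's free top θ-1, but only by guessing future traffic, hence the multi-stage stochastic formulation.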
9.
Measuring the Approximate Number System. Sabri, Jomard, January 2012
Recent theories in numerical cognition suggest that humans are equipped with a mental system, called the Approximate Number System (ANS), that supports the representation and processing of symbolic and nonsymbolic magnitudes. Prior research also suggests that the acuity of the ANS can predict individuals' mathematical ability. However, results within the field have proven inconsistent with one another, which raises questions about the reliability and validity of the methods used to measure the ANS. The present study attempts to replicate results suggesting that ANS acuity correlates with mathematical ability. The study also investigates the reliability and validity of different tasks that have been used to measure the ANS, and presents a new, adaptive method of measuring it. The results show that two tasks correlate significantly with mathematical ability, and multiple regression analyses show that ANS acuity can predict mathematical ability when controlling for general intelligence. The results also further highlight the issue of methodological flaws in previous studies.
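ANS acuity is commonly summarised by a Weber fraction w. Under the standard linear model (an assumption here, not a description of this study's own analysis), the probability of correctly judging which of two numerosities n1, n2 is larger is Φ(|n1 - n2| / (w * sqrt(n1^2 + n2^2))), where Φ is the standard normal CDF. A minimal sketch:

```python
import math

def p_correct(n1: int, n2: int, w: float) -> float:
    """Predicted accuracy for judging which of two dot sets is larger,
    under the standard linear ANS model with Weber fraction w."""
    if n1 == n2:
        return 0.5                      # no signal: chance performance
    d = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2)))  # standard normal CDF

# A sharper ANS (smaller w) discriminates close ratios better.
for w in (0.1, 0.2, 0.4):
    print(w, round(p_correct(16, 20, w), 3))
```

Fitting w to a participant's accuracy across numerosity ratios is one way such tasks are scored, and an adaptive procedure can concentrate trials near the ratio where accuracy is most informative about w.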
10.
Logical approximation and compilation for resource-bounded reasoning. Rajaratnam, David, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2008
Providing a logical characterisation of rational agent reasoning has been a long-standing challenge in artificial intelligence (AI) research. It is a challenge that is not only of interest for the construction of AI agents, but of equal importance in the modelling of agent behaviour. The goal of this thesis is to contribute to the formalisation of agent reasoning by showing that the computational limitations of agents are a vital component of modelling rational behaviour. To achieve this aim, both motivational and formal aspects of resource-bounded agents are examined. It is a central argument of this thesis that accounting for computational limitations is critical to the success of agent reasoning, yet this has received only limited attention from the broader research community. Consequently, an important contribution of this thesis is its advancing of motivational arguments in support of the need to account for computational limitations in agent reasoning research. As a natural progression from the motivational arguments, the majority of the thesis is devoted to an examination of propositional approximate logics. These logics represent a step towards the development of resource-bounded agents, but are also applicable to other areas of automated reasoning. The thesis makes a number of contributions in mapping the space of approximate logics. In particular, it draws a connection between approximate logics and knowledge compilation by developing an approximate knowledge compilation method based on Cadoli and Schaerf's S-3 family of approximate logics. This method allows for the incremental compilation of a knowledge base, thus reducing the need for a costly recompilation process. Furthermore, each approximate compilation has well-defined logical properties due to its correspondence to a particular S-3 logic. Important contributions are also made in the examination of approximate logics for clausal reasoning.
Clausal reasoning is of particular interest due to the efficiency of modern clausal satisfiability solvers and the related research into problem hardness. In particular, Finger's Logics of Limited Bivalence are shown to be applicable to clausal reasoning; this is subsequently shown to logically characterise the behaviour of the well-known DPLL algorithm for determining boolean satisfiability when it is subjected to restricted branching.
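The DPLL procedure whose behaviour the thesis characterises can be sketched as a small recursive solver with unit propagation (a plain textbook version; the restricted-branching variant and its logical characterisation are beyond this sketch):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: unit propagation plus chronological branching.
    Clauses are lists of nonzero ints; -v means variable v is False."""
    if assignment is None:
        assignment = {}
    clauses = list(clauses)
    changed = True
    while changed:                              # unit propagation to fixpoint
        changed = False
        simplified = []
        for c in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in c):
                continue                        # clause already satisfied
            c = [l for l in c if abs(l) not in assignment]
            if not c:
                return None                     # conflict: empty clause
            if len(c) == 1:                     # unit clause forces a value
                assignment[abs(c[0])] = c[0] > 0
                changed = True
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return assignment                       # all clauses satisfied
    v = abs(clauses[0][0])                      # branch on first unassigned var
    for val in (True, False):
        result = dpll(clauses, {**assignment, v: val})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```

Restricting which variable the branching step may pick, rather than always taking the first unassigned one, is the "restricted branching" regime that the Logics of Limited Bivalence are shown to characterise.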