11 |
Approximation properties of groups. Leung, Cheung Yu. January 2011
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (leaves 85-86). Abstracts in English and Chinese.
Introduction --- p.6
Chapter 1 --- Preliminaries --- p.7
Chapter 1.1 --- Locally compact groups and unitary representations --- p.7
Chapter 1.2 --- Positive definite functions --- p.10
Chapter 1.3 --- Affine isometric actions of groups --- p.23
Chapter 1.4 --- Ultraproducts --- p.29
Chapter 2 --- Amenability --- p.33
Chapter 2.1 --- Reiter's property --- p.33
Chapter 2.2 --- Følner's property --- p.41
Chapter 3 --- Kazhdan's Property (T) --- p.43
Chapter 3.1 --- Definition and basic properties --- p.43
Chapter 3.2 --- Property (FH) --- p.51
Chapter 3.3 --- Spectral criterion for Property (T) --- p.56
Chapter 3.4 --- Property (T) for SL3(Z) --- p.60
Chapter 3.5 --- Expanders --- p.72
Chapter 4 --- Haagerup Property --- p.74
Chapter 4.1 --- Equivalent formulations of Haagerup Property --- p.74
Chapter 4.2 --- Trees and wall structures --- p.82
Bibliography --- p.85
|
12 |
Learning from Observation Using Primitives. Bentivegna, Darrin Charles. 13 July 2004
Learning without any prior knowledge in environments that contain large or continuous state spaces is a daunting task. For robots that operate in the real world, learning must occur in a reasonable amount of time. Providing a robot with domain knowledge, and with the ability to learn from watching others, can greatly increase its learning rate. This research explores learning algorithms that learn quickly and make the best use of information obtained from observing others. Domain knowledge is encoded in the form of primitives: small parts of a task that are executed many times while the task is being performed. This thesis explores and presents many of the challenges involved in programming robots to learn and adapt to environments that humans operate in.
A "Learning from Observation Using Primitives" framework has been created that provides the means to observe primitives as they are performed by others. This information is used by the robot in a three-level process as it performs in the environment. In the first level the robot chooses a primitive to use for the observed state. The second level decides the manner in which the chosen primitive will be performed. This information is then used in the third level to control the robot as necessary to perform the desired action. The framework also provides a means for the robot to observe and evaluate its own actions as it performs in the environment, which allows it to improve its selection and execution of the primitives.
The framework and algorithms have been evaluated on two testbeds: Air Hockey and Marble Maze. The tasks are performed both by actual robots and in simulation. Our robots have the ability to observe humans as they operate in these environments. The software version of Air Hockey allows a human to play against a cyber player, and the hardware version allows the human to play against a 30-degree-of-freedom humanoid robot. The implementation of our learning system in these tasks helps to clearly present many of the issues involved in having robots learn and perform in dynamic environments.
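As a rough sketch of the three-level process described above (the primitive names, toy states, and the nearest-neighbour selection rule below are illustrative assumptions for the example, not the thesis's actual algorithm):

```python
import math

# Hypothetical observations gathered while watching a human play air
# hockey: (observed state, chosen primitive, how it was performed).
OBSERVATIONS = [
    ((0.1, 0.9), "block",  {"target_x": 0.1}),
    ((0.5, 0.2), "strike", {"target_x": 0.9}),
    ((0.8, 0.8), "block",  {"target_x": 0.8}),
]

def nearest_observation(state):
    """Return the stored observation closest to the current state,
    a simple stand-in for the primitive-selection step."""
    return min(OBSERVATIONS, key=lambda obs: math.dist(obs[0], state))

def act(state):
    # Level 1: choose a primitive for the observed state.
    _, primitive, params = nearest_observation(state)
    # Level 2: decide the manner in which the primitive is performed.
    subgoal = params["target_x"]
    # Level 3: hand the sub-goal to the low-level controller (stubbed
    # here as a command dictionary).
    return {"primitive": primitive, "move_to": subgoal}
```

Self-observation, in this picture, would amount to appending the robot's own (state, primitive, parameters, outcome) records back into the observation store.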
|
13 |
Image compression using locally sensitive hashing. Chucri, Samer Gerges. 18 December 2013
The problem of archiving photos is becoming increasingly important as image databases grow more popular and larger in size. Consider any social networking website where users share hundreds of photos each, resulting in billions of images to be stored. Ideally, one would like to archive these images with minimal storage, by making use of the redundancy they share, while not sacrificing quality. We suggest a compression algorithm that compresses across images, rather than compressing images individually, an approach that, to our knowledge, has not been adopted before. This report presents the design of a new image-database compression tool. In addition, we implement a complete system in C++ and show the significant gains we achieve in some cases, removing up to 90% of the initial data. One of the main tools we use is locality-sensitive hashing (LSH), a relatively recent technique used mainly for similarity search in high dimensions.
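The report's own system is not reproduced here; as a rough illustration of the underlying idea, the sketch below uses random-hyperplane LSH (one common LSH family; the variable names and the 16-dimensional toy descriptors are invented for the example) to bucket similar feature vectors together:

```python
import numpy as np

def lsh_signature(x, planes):
    """Random-hyperplane LSH: one bit per hyperplane, set when the
    vector lies on the hyperplane's positive side.  Similar vectors
    tend to share the same bit pattern, and hence the same bucket."""
    return tuple(bool(b) for b in (planes @ x) > 0)

rng = np.random.default_rng(0)
planes = rng.standard_normal((8, 16))   # 8 hyperplanes over 16-dim features

# Toy "image descriptors": b is a rescaled copy of a (e.g. a global
# brightness change), c is unrelated.
a = rng.standard_normal(16)
b = 2.0 * a
c = rng.standard_normal(16)

buckets = {}
for name, vec in [("a", a), ("b", b), ("c", c)]:
    buckets.setdefault(lsh_signature(vec, planes), []).append(name)

# a and b hash to the same bucket, so an archive could store one of
# them in full and the other as a reference plus a small delta.
```

Compressing across images then reduces to storing one representative per bucket and encoding the others against it.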
|
14 |
The Value of Locally Produced Household Cheese: A study about the added value of locally produced on the market of Jönköping, Sweden. Kihlblom, Viktor; Persson, Oscar. January 2014
No description available.
|
15 |
Nonlocally Maximal Hyperbolic Sets for Flows. Petty, Taylor Michael. 01 June 2015
In 2004, Fisher constructed a map on a 2-disc that admitted a hyperbolic set not contained in any locally maximal hyperbolic set. Furthermore, it was shown that this was an open property, and that it was embeddable into any smooth manifold of dimension greater than one. In the present work we show that analogous results hold for flows. Specifically, on any smooth manifold with dimension greater than or equal to three there exists an open set of flows such that each flow in the open set contains a hyperbolic set that is not contained in a locally maximal one.
|
16 |
Optimal weight settings in locally weighted regression: A guidance through cross-validation approach. Puri, Roshan. January 2023
Locally weighted regression (LWR) is a powerful tool that allows a different set of coefficients to be estimated for
each location in the underlying data, challenging the assumption of stationary regression coefficients across
a study region. The accuracy of LWR largely depends on how a researcher establishes the relationship across
locations, which is often constructed using a weight matrix or function. This paper explores the different
kernel functions used to assign weights to observations, including Gaussian, bi-square, and tri-cubic, and
how the choice of weight variables and window size affects the accuracy of the estimates. We guide this
choice through the cross-validation approach and show that the bi-square function outperforms the choice of
other kernel functions. Our findings demonstrate that an optimal window size for LWR models depends on
the cross-validation (CV) approach employed. In our empirical application, full-sample CV guides the
choice of a larger window size, and CV by proxy guides the choice of a smaller one. Since the CV
by proxy approach focuses on the predictive ability of the model in the vicinity of one specific point (usually
a policy point/site), we note that guiding model choice through this approach makes more intuitive sense
when the researcher's aim is to predict the outcome at one specific site (policy or target point). To
identify the optimal weight variables, while we suggest exploring various combinations of weight variables,
we argue that an efficient alternative is to merge all continuous variables in the dataset into a single weight
variable. / M.A.

Locally weighted regression (LWR) is a statistical technique that establishes a relationship between dependent
and explanatory variables, focusing primarily on data points in proximity to a specific point of
interest/target point. This technique assigns varying degrees of importance to the observations that are in
proximity to the target point, thereby allowing for the modeling of relationships that may exhibit spatial
variability within the dataset.
The accuracy of LWR largely depends on how researchers define relationships across different locations,
which is often done using a “weight setting”. We define weight setting as a combination of weight
functions (determines how the observations around a point of interest are weighted before they enter the
model), weight variables (determines proximity between the point of interest and all other observations), and
window sizes (determines the number of observations allowed into the local regression). To find the
optimal weight setting, that is, the combination of weight function, weight variables, and window size that
generates the lowest predictive error, researchers often employ a cross-validation (CV) approach.
Cross-validation is a statistical method used to assess and validate the performance of a predictive model. It
entails removing a host observation (a point of interest), predicting that point from the remaining data, and
evaluating the accuracy of the prediction by comparing it with the actual value.
In our study, we employ two CV approaches. The first one is a full-sample CV approach, where we remove
a host observation, and predict it using the full set of observations used in the given local regression. The
second one is the CV by proxy approach, which checks predictive accuracy in the same way as full-sample CV
but focuses only on the nearby points that share similar characteristics with the target point.
We find that the bi-square function consistently outperforms the Gaussian and tri-cubic weight
functions, regardless of the CV approach. However, the choice of an optimal window size in LWR models
depends on the CV approach that we employ. While the full-sample CV method guides us toward the
selection of a larger window size, the CV by proxy directs us toward a smaller window size. In the context of
identifying the optimal weight variables, we recommend exploring various combinations of weight variables.
However, we also propose an efficient alternative: merging all continuous variables in the dataset into a
single weight variable instead of searching for the best among thousands of different weight-variable settings.
|
17 |
GAPLA: A Globally Asynchronous Locally Synchronous FPGA Architecture. Jia, Xin. 04 April 2007
No description available.
|
18 |
Local compactness and the cofine uniformity with applications to hyperspaces. Burdick, Bruce Stanley. January 1985
No description available.
|
19 |
First l²-Cohomology Groups. Eastridge, Samuel Vance. 15 June 2015
We want to take a look at the first cohomology group H^1(G, ℓ²(G)), in particular when G is locally finite. First, though, we discuss some results about the space H^1(G, ℂG) for G locally finite, as well as the space H^1(G, ℓ²(G)) when G is finitely generated. We show that, although in the case when G is finitely generated the embedding of ℂG into ℓ²(G) induces an embedding of the cohomology groups H^1(G, ℂG) into H^1(G, ℓ²(G)), when G is countably infinite and locally finite the induced homomorphism is not an embedding. However, even though the induced homomorphism is not an embedding, we still have that H^1(G, ℓ²(G)) ≠ 0 when G is countably infinite and locally finite. Finally, we give some sufficient conditions for H^1(G, ℓ²(G)) to be zero or non-zero. / Master of Science
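For a reader meeting this notation for the first time, the standard definition of the first cohomology group with coefficients in a G-module M (here M is ℂG or ℓ²(G)) is the quotient of cocycles by coboundaries:

```latex
Z^1(G, M) = \{\, b : G \to M \;:\; b(gh) = b(g) + g \cdot b(h) \,\}, \qquad
B^1(G, M) = \{\, g \mapsto g \cdot m - m \;:\; m \in M \,\},
\]
\[
H^1(G, M) \;=\; Z^1(G, M) \,/\, B^1(G, M).
```

The abstract's question is thus whether every cocycle into ℓ²(G) is a coboundary, and how this depends on G being finitely generated versus locally finite.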
|
20 |
Problems on nilpotency and local finiteness in infinite groups and infinite dimensional algebras. Derakhshan, Jamshid. January 1996
No description available.
|