11

How the Community Affects a Community-Based Forest Management: Based on a Case Study in Tanzania

Folkesson, Malin January 2008 (has links)
No description available.
12

Determining and characterizing immunological self/non-self

Li, Ying 15 February 2007 (has links)
The immune system has the ability to discriminate self from non-self proteins and to mount appropriate immune responses to pathogens. A fundamental problem is to understand the genomic differences and similarities between the sets of self and non-self peptides. The sequencing of the human, mouse, and numerous pathogen genomes, and the cataloguing of their respective proteomes, allow host self and non-self peptides to be identified. T-cells make this determination at the peptide level, based on peptides displayed by MHC molecules.

In this project, peptides of specific lengths (k-mers) are generated from each protein in the proteomes of various model organisms. The set of unique k-mers for each species is stored in a library and defines its "immunological self". Using the libraries, organisms can be compared to determine their levels of peptide overlap. The observed levels of overlap can also be compared with the levels expected "at random", and statistical conclusions drawn.

A problem with this procedure is that sequence information in public protein databases (Swiss-PROT, UniProt, PIR) often contains ambiguities. Three strategies for dealing with such ambiguities were explored in earlier work; the strategy of removing ambiguous k-mers is used here.

Peptide fragments (k-mers) which elicit immune responses are often localized within the sequences of pathogen proteins. These regions are known as "immunodominant" regions (i.e., hot spots) and are important in immunological work. After investigating the peptide universes and their overlaps, the question of whether known regions of immunological significance (e.g., epitopes) come from regions of low host-similarity is explored. The known epitope regions are compared with the regions of low host-similarity (i.e., non-overlaps) between the HIV-1 and human proteomes at the 7-mer level. Results show that the correlation between these two regions is not statistically significant.

In addition, pairs involving humans and human viruses are explored. For these pairs, one graph per k-mer level is generated, showing the actual numbers of matches between organisms versus the expected numbers. The graphs for the 5-mer and 6-mer levels show that the number of overlapping occurrences increases as the size of the viral proteome increases.

A detailed investigation of the overlaps/non-overlaps between viral and human proteomes reveals that the distribution of the locations of these overlaps/non-overlaps may have "structure" (e.g., locality clustering). Thus, another question explored is whether this locality clustering is statistically significant. A chi-square analysis is used to test it. Results show that the locality clusterings for HIV-1, HIV-2 and Influenza A virus at the 5-mer, 6-mer and 7-mer levels are statistically significant. For the self-similarity of the human protein Desmoglein 3 to the remaining human proteome, the locality clustering is not statistically significant at the 5-mer level, while it is at the 6-mer and 7-mer levels.
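The k-mer library construction and overlap comparison the abstract describes can be sketched in a few lines of Python. This is a toy illustration only: the sequences below are invented, not real proteome data, and the real pipeline additionally filters ambiguous k-mers.

```python
def kmers(sequence, k):
    """Return the set of unique k-mers in a protein sequence."""
    return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

def proteome_library(proteins, k):
    """Union of unique k-mers over all proteins: the 'immunological self'."""
    library = set()
    for protein in proteins:
        library |= kmers(protein, k)
    return library

# Hypothetical toy sequences, not real proteome data.
host = ["MKTAYIAKQR", "GAVLIMKTAY"]
virus = ["AMKTAYLQNG", "PWNSRFDEQK"]

host_lib = proteome_library(host, 5)
virus_lib = proteome_library(virus, 5)
shared = virus_lib & host_lib  # viral 5-mers that look like 'self' to the host
print(sorted(shared))  # ['MKTAY']
```

The non-overlap set (`virus_lib - host_lib`) is what the abstract compares against known epitope regions.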
13

Core-characteristic-aware off-chip memory management in a multicore system-on-chip

Jeong, Min Kyu 30 January 2013 (has links)
Future processors will integrate an increasing number of cores because the scaling of single-thread performance is limited and because smaller cores are more power efficient. Off-chip memory bandwidth that is shared between those many cores, however, scales slower than the transistor (and core) count does. As a result, in many future systems, off-chip bandwidth will become the bottleneck of heavy demand from multiple cores. Therefore, optimally managing the limited off-chip bandwidth is critical to achieving high performance and efficiency in future systems. In this dissertation, I will develop techniques to optimize the shared use of limited off-chip memory bandwidth in chip-multiprocessors. I focus on issues that arise from the sharing and exploit the differences in memory access characteristics, such as locality, bandwidth requirement, and latency sensitivity, between the applications running in parallel and competing for the bandwidth. First, I investigate how the shared use of memory by many cores can result in reduced spatial locality in memory accesses. I propose a technique that partitions the internal memory banks between cores in order to isolate their access streams and eliminate locality interference. The technique compensates for the reduced bank-level parallelism of each thread by employing memory sub-ranking to effectively increase the number of independent banks. For three different workload groups that consist of benchmarks with high spatial locality, low spatial locality, and mixes of the two, the average system efficiency improves by 10%, 7%, 9% for 2-rank systems, and 18%, 25%, 20% for 1-rank systems, respectively, over the baseline shared-bank system. Next, I improve the performance of a heterogeneous system-on-chip (SoC) in which cores have distinct memory access characteristics. I develop a deadline-aware shared memory bandwidth management scheme for SoCs that have both CPU and GPU cores. 
I show that statically prioritizing the CPU can severely constrict GPU performance, and propose to dynamically adapt the priority of CPU and GPU memory requests based on the progress of the GPU workload. The proposed dynamic bandwidth management scheme provides the target GPU performance while prioritizing CPU performance as much as possible, for any CPU-GPU workload combination with different complexities.
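The bank-partitioning idea from the first contribution can be illustrated with a minimal sketch. The bank count and address-to-bank mapping below are invented for illustration, not the dissertation's actual hardware scheme; the point is only that each core's accesses are confined to a private slice of banks, so its access stream never interleaves with another core's.

```python
NUM_BANKS = 8

def shared_bank(address):
    """Baseline: every core hashes addresses into the same bank pool,
    so streams from different cores interleave in every bank."""
    return address % NUM_BANKS

def partitioned_bank(core_id, address, num_cores=2):
    """Bank partitioning: each core gets a private slice of the banks,
    isolating its access stream and preserving its row-buffer locality."""
    banks_per_core = NUM_BANKS // num_cores
    return core_id * banks_per_core + (address % banks_per_core)

# Two cores streaming through consecutive addresses never share a bank.
core0 = {partitioned_bank(0, a) for a in range(16)}
core1 = {partitioned_bank(1, a) for a in range(16)}
print(sorted(core0), sorted(core1))  # [0, 1, 2, 3] [4, 5, 6, 7]
```

Each core now sees fewer independent banks, which is why the dissertation pairs partitioning with memory sub-ranking to recover bank-level parallelism.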
14

A study of the locality-competitiveness relationship of a business cluster in the Epirus-Argolida Region

Κυρίτσης, Τζέιμς Σαμουήλ 01 August 2014 (has links)
This study investigates a cluster of food-processing companies in the Epirus and Argolida regions of Greece. Its aim is to examine why these companies concentrate in these areas: the reasons the older companies choose to remain at their historical base, and what draws new companies to settle in the region. It also examines the kinds of relationships and collaborations the companies develop among themselves, if any, as well as their relationships with their suppliers and customers.
15

Improving OpenMP Productivity with Data Locality Optimizations and High-resolution Performance Analysis

Muddukrishna, Ananya January 2016 (has links)
The combination of high-performance parallel programming and multi-core processors is the dominant approach to meeting the ever-increasing demand for computing performance today. The thesis is centered around OpenMP, a popular parallel programming API standard that enables programmers to get started quickly with writing parallel programs. Despite this quick start, however, writing high-performance OpenMP programs requires high effort and saps productivity. Part of the reason for impeded productivity is OpenMP's lack of abstractions and guidance for exploiting the strong architectural locality exhibited by NUMA systems and manycore processors. The thesis contributes data distribution abstractions that enable programmers to distribute data portably across NUMA systems and manycore processors without being aware of low-level system topology details. Data distribution abstractions are supported by the runtime system and leveraged by the second contribution of the thesis: an architecture-specific, locality-aware scheduling policy that reduces the data access latencies incurred by tasks, allowing programmers to obtain, with minimal effort, up to 69% better performance for scientific programs compared to state-of-the-art work-stealing scheduling. Another reason for reduced programmer productivity is the poor support OpenMP performance analysis tools provide for visualizing, understanding, and resolving problems at the level of grains: task and parallel for-loop chunk instances. The thesis contributes a cost-effective and automatic method to extensively profile and visualize grains. Grain properties and hardware performance are profiled at event notifications from the runtime system, with less than 2.5% overhead, and visualized using a new method called the Grain Graph.
The grain graph shows the program structure that unfolded during execution and highlights problems such as low parallelism, work inflation, and poor parallelization benefit directly at the grain level, with precise links to problem areas in the source code. The thesis demonstrates that grain graphs can quickly reveal performance problems that are difficult to detect and characterize in fine detail using existing tools, in standard programs from SPEC OMP 2012, Parsec 3.0 and the Barcelona OpenMP Tasks Suite (BOTS). Grain profiles are also applied to study the input sensitivity and similarity of BOTS programs. All thesis contributions are assembled into an iterative performance analysis and optimization workflow that enables programmers to achieve the desired performance systematically and more quickly than is possible with existing tools. This reduces pressure on experts and removes the need for tedious trial-and-error tuning, simplifying OpenMP performance analysis.
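As a rough illustration of the kind of signal grain profiling exposes, the sketch below computes average parallelism from a set of hypothetical grain timings. The `Grain` record and the numbers are invented for illustration; the actual profiler hooks runtime event notifications rather than taking a ready-made list.

```python
from collections import namedtuple

# A 'grain' is one task or parallel-for chunk instance observed at runtime.
Grain = namedtuple("Grain", "name start end")

def average_parallelism(grains):
    """Total work divided by schedule span: a quick numeric signal for
    the 'low parallelism' problem a grain graph highlights visually."""
    work = sum(g.end - g.start for g in grains)
    span = max(g.end for g in grains) - min(g.start for g in grains)
    return work / span

profile = [
    Grain("chunk0", 0.0, 4.0),
    Grain("chunk1", 0.0, 4.0),   # runs in parallel with chunk0
    Grain("task_a", 4.0, 8.0),   # serial tail: only one grain active
]
print(average_parallelism(profile))  # 12 units of work over a span of 8 -> 1.5
```

A grain graph would additionally link the serial tail back to the source line that spawned `task_a`.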
16

Learning to hash for large scale image retrieval

Moran, Sean James January 2016 (has links)
This thesis is concerned with improving the effectiveness of nearest neighbour search. Nearest neighbour search is the problem of finding the most similar data-points to a query in a database, and is a fundamental operation that has found wide applicability in many fields. In this thesis the focus is placed on hashing-based approximate nearest neighbour search methods that generate similar binary hashcodes for similar data-points. These hashcodes can be used as the indices into the buckets of hashtables for fast search. This work explores how the quality of search can be improved by learning task specific binary hashcodes. The generation of a binary hashcode comprises two main steps carried out sequentially: projection of the image feature vector onto the normal vectors of a set of hyperplanes partitioning the input feature space followed by a quantisation operation that uses a single threshold to binarise the resulting projections to obtain the hashcodes. The degree to which these operations preserve the relative distances between the datapoints in the input feature space has a direct influence on the effectiveness of using the resulting hashcodes for nearest neighbour search. In this thesis I argue that the retrieval effectiveness of existing hashing-based nearest neighbour search methods can be increased by learning the thresholds and hyperplanes based on the distribution of the input data. The first contribution is a model for learning multiple quantisation thresholds. I demonstrate that the best threshold positioning is projection specific and introduce a novel clustering algorithm for threshold optimisation. The second contribution extends this algorithm by learning the optimal allocation of quantisation thresholds per hyperplane. In doing so I argue that some hyperplanes are naturally more effective than others at capturing the distribution of the data and should therefore attract a greater allocation of quantisation thresholds. 
The third contribution focuses on the complementary problem of learning the hashing hyperplanes. I introduce a multi-step iterative model that, in the first step, regularises the hashcodes over a data-point adjacency graph, which encourages similar data-points to be assigned similar hashcodes. In the second step, binary classifiers are learnt to separate opposing bits with maximum margin. This algorithm is extended to learn hyperplanes that can generate similar hashcodes for similar data-points in two different feature spaces (e.g. text and images). Individually the performance of these algorithms is often superior to competitive baselines. I unify my contributions by demonstrating that learning hyperplanes and thresholds as part of the same model can yield an additive increase in retrieval effectiveness.
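The projection-then-quantisation pipeline described above can be sketched with random hyperplanes and a single zero threshold per bit, i.e. the locality-sensitive-hashing baseline that the thesis improves upon by learning both the hyperplanes and the thresholds from data. The dimensions and vectors below are arbitrary toy values.

```python
import numpy as np

def hashcode(x, hyperplanes, thresholds):
    """Project x onto each hyperplane normal, then binarise each
    projection against its quantisation threshold (one bit per plane)."""
    return ((x @ hyperplanes) > thresholds).astype(int)

def hamming(code_a, code_b):
    """Number of differing bits between two hashcodes."""
    return int(np.sum(code_a != code_b))

rng = np.random.default_rng(0)
planes = rng.standard_normal((4, 8))  # 8 random hyperplanes in a 4-d space
thresholds = np.zeros(8)              # one threshold at zero per projection

query = np.array([1.0, 2.0, 3.0, 4.0])
near = query + 0.01   # a close neighbour: most projections stay on the same side
far = -query          # the opposite point: every projection changes sign

print(hamming(hashcode(query, planes, thresholds), hashcode(far, planes, thresholds)))
```

Hashcodes like these index buckets of a hashtable, so near-identical codes make similar images land in the same bucket.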
17

Non-local Phonological Processes as Multi-tiered Strictly Local Maps

Burness, Phillip 07 March 2022 (has links)
Phonological processes can be characterized as functions from input strings to output strings, and treating them as mathematical objects like this reveals properties that hold regardless of how we implement them (i.e., with rules, constraints, or other tools). For example, Chandlee (2014) found that a vast majority of phonological processes can be modelled as Strictly Local (SL) functions, which are sensitive to a window of finite size. Long-distance processes like vowel and consonant harmony are exceptions to this generalization, although a key observation is that they look local once irrelevant information is ignored. This thesis shows how such selective attention can be modelled by augmenting SL functions with autosegmental tiers (e.g., Goldsmith, 1976). A single tier is sufficient to capture individual long-distance processes, and having multiple tiers available allows us to model multiple long-distance processes simultaneously as well as interactions between local and non-local patterns. Furthermore, probabilistic variants of these tier-based functions allow for a cognitively plausible model of what Zymet (2015) calls distance-based decay. Unrestricted use of multiple tiers is, however, quite powerful and so I additionally argue that tiersets should be defined from the perspective of individual input elements (i.e., potential process targets). Each input element designates a superset-subset hierarchy of tiers and pays attention to them alone; the tiers specified by another input element are either redundant or irrelevant. Restricting tiersets in this manner has beneficial consequences for learnability as it imparts a structure onto the learner's hypothesis space that can be exploited to great effect. 
Furthermore, tier-based functions meeting this restriction fail to generate a number of pathological behaviours that can be characterized as subsequential functions, a type of function that has previously been offered as a model of non-local phonological processes (Heinz and Lai, 2013; Luo, 2017; Payne, 2017). In light of their empirical coverage, their comparative lack of pathological predictions, and their efficient learnability, I conclude that tier-based functions act as a more accurate characterization of long-distance phonology.
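A toy tier-based map can illustrate how projecting onto a vowel tier turns long-distance harmony into a local rule. The vowel inventory and harmony pairs below are invented for illustration and do not come from the thesis.

```python
# Toy vowel inventory: {a, u} are back vowels, {e, i} are front vowels,
# with harmony pairs a<->e and u<->i (invented for illustration).
BACK_TO_FRONT = {"a": "e", "u": "i"}
FRONT_TO_BACK = {v: k for k, v in BACK_TO_FRONT.items()}
VOWELS = set(BACK_TO_FRONT) | set(FRONT_TO_BACK)

def harmonize(word):
    """Progressive backness harmony as a tier-based strictly local map:
    consonants are skipped (they are off the vowel tier), so each vowel
    only needs to see the immediately preceding vowel on the tier."""
    out = []
    prev = None  # last vowel seen on the tier
    for ch in word:
        if ch in VOWELS:
            if prev in BACK_TO_FRONT and ch in FRONT_TO_BACK:
                ch = FRONT_TO_BACK[ch]   # previous vowel is back: make this one back
            elif prev in FRONT_TO_BACK and ch in BACK_TO_FRONT:
                ch = BACK_TO_FRONT[ch]   # previous vowel is front: make this one front
            prev = ch
        out.append(ch)
    return "".join(out)

print(harmonize("patke"))  # -> patka (suffix vowel agrees with the back root vowel)
print(harmonize("petki"))  # -> petki (already front-harmonic; unchanged)
```

However many consonants separate two vowels, the rule stays 2-local on the tier, which is the thesis's central observation.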
18

Optimizing locality and parallelism through program reorganization

Krishnamoorthy, Sriram 07 January 2008 (has links)
No description available.
19

A Data-Locality Aware Mapping and Scheduling Framework for Data-Intensive Computing

Khanna, Gaurav 11 September 2008 (has links)
No description available.
20

Posouzení vlivu lokality na cenu obvyklou nemovitostí - porovnání cen v Jihočeském kraji / Assessing the impact of locality on the usual price of real estate: a price comparison in the South Bohemian Region

CHROBOČEK, Ondřej January 2014 (has links)
The topic of this thesis is assessing the impact of locality on the usual price of real estate, through a price comparison in the South Bohemian Region. The basic aims were to map the theoretical terms commonly used in real-estate valuation and to obtain the data needed for comparing properties. The first part is a literature review covering the definitions used in valuation, namely ownership and the valuation of residential units, houses and buildings. The thesis describes the differences between the concepts of price and value. The last part of the literature review focuses on the impact of locality on real-estate prices. The practical part presents the data obtained on the properties, which are then compared. The main outputs of the thesis are graphical representations of variations across market segments, each accompanied by a written description. This textual part explains the price differences between localities.
