111

Similarity Estimation with Non-Transitive LSH

Lewis, Robert R. 29 September 2021 (has links)
No description available.
112

The perception of local knowledge in development cooperation : A case study of a local NGO in Kibera, Nairobi

Lindberg, Matilda, Wictorin, Kajsa January 2022 (has links)
This thesis aims to study the perception of local knowledge within Wale Wale Kenya, a small local organization operating in Kibera, Nairobi. The people who run the Kenyan organization have all grown up in Kibera and thus have strong local roots. An analysis of the Kenyan organization, however, extends beyond the "local" because of its collaboration with its partner organization Wale Wale Sweden, which contributes, among other things, volunteers and interns from Sweden. The main research question guiding the thesis concerns how local knowledge is perceived by two development NGOs, Wale Wale Kenya and its Swedish partner organization Wale Wale Sweden, and how that relates to their sense of place of Kibera. The thesis is the result of a qualitative field study at Wale Wale Kenya, during which semi-structured interviews and observations were conducted. The findings show that the focus on the local aspect contributes to the creation of representation, understanding and passion within the organization. Furthermore, local knowledge is seen as unique and useful since it is linked to the particular place where it will be used. Local knowledge is also valued for its long-term perspective, its strong anchoring in the local community, and the way it empowers the members who run the organization. However, exposure to other places beyond the locality of Kibera, through the influence of interns and volunteers, is highly valued. Local knowledge is not seen as bounded to the local place but is also a result of the local interacting with global social processes.
113

On The Application Of Locality To Network Intrusion Detection: Working-set Analysis Of Real And Synthetic Network Server Traffic

Lee, Robert 01 January 2009 (has links)
Keeping computer networks safe from attack requires ever-increasing vigilance. Our work on applying locality to network intrusion detection is presented in this dissertation. Network servers that allow connections from both the internal network and the Internet are vulnerable to attack from all sides. Analysis of the behavior of incoming connections for properties of locality can be used to create a normal profile for such network servers. Intrusions can then be detected due to their abnormal behavior. Data was collected from a typical network server both under normal conditions and under specific attacks. Experiments show that connections to the server do in fact exhibit locality, and attacks on the server can be detected through their violation of locality. Key to the detection of locality is a data structure called a working-set, which is a kind of cache of certain data related to network connections. Under real network conditions, we have demonstrated that the working-set behaves in a manner consistent with locality. Determining the reasons for this behavior is our next goal. A model that generates synthetic traffic based on actual network traffic allows us to study basic traffic characteristics. Simulation of working-set processing of the synthetic traffic shows that it behaves much like actual traffic. Attacks inserted into a replay of the synthetic traffic produce working-set responses similar to those produced in actual traffic. In the future, our model can be used to further the development of intrusion detection strategies.
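The working-set structure is only described at a high level in this abstract. The following is a minimal sketch, assuming a fixed capacity, an LRU replacement policy, and a hit-ratio locality measure (all illustrative choices, not the dissertation's design), of tracking connection-source locality so that a sustained drop in the hit ratio can flag abnormal behavior.

```python
from collections import OrderedDict

class WorkingSet:
    """Fixed-capacity cache of recently seen connection sources (LRU order)."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()   # source address -> hit count
        self.hits = 0
        self.misses = 0

    def observe(self, source):
        """Record one incoming connection and update hit/miss statistics."""
        if source in self.entries:
            self.hits += 1
            self.entries.move_to_end(source)        # refresh recency
            self.entries[source] += 1
        else:
            self.misses += 1
            self.entries[source] = 1
            if len(self.entries) > self.capacity:   # evict least recently seen
                self.entries.popitem(last=False)

    def hit_ratio(self):
        """High ratios indicate locality; a sustained drop may signal an attack."""
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```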
114

Distributed resource allocation with scalable crash containment

Pike, Scott Mason 29 September 2004 (has links)
No description available.
115

Data Driven High Performance Data Access

Ramljak, Dusan January 2018 (has links)
Low-latency, high-throughput mechanisms to retrieve data become increasingly crucial as cyber and cyber-physical systems pour out growing amounts of data that must often be analyzed in an online manner. Generally, as the data volume increases, the marginal utility of an ``average'' data item tends to decline, which requires greater effort in identifying the most valuable data items and making them available with minimal overhead. We believe that data-analytics-driven mechanisms have a big role to play in solving this needle-in-the-haystack problem. We rely on the claim that efficient pattern discovery and description, coupled with the observed predictability of complex patterns within many applications, offers significant potential to enable many I/O optimizations. Our research covers exploitation of the storage hierarchy for data-driven caching and tiering, reduction of the distance between data and computations, removal of redundancy in data, use of sparse representations of data, the impact of data access mechanisms on resilience, energy consumption, and storage usage, and the enablement of new classes of data-driven applications. For caching and prefetching, we offer a powerful model that separates the process of access prediction from the data retrieval mechanism. Predictions are made on a per-data-entity basis and use the notion of ``context'' and its aspects, such as ``belief'', to uncover and leverage future data needs. This approach allows truly opportunistic utilization of predictive information. We elaborate on which aspects of context we use in areas other than caching and prefetching, and why each is appropriate in its situation. We present in more detail the two methods we have developed: BeliefCache, for data-driven caching and prefetching, and AVSC, for pattern-mining-based compression of data. In BeliefCache, a belief (an aspect of context representing an estimate of the probability that a storage element will be needed) is used within a modular framework to make unified, informed decisions about that element or a group of elements. For the workloads we examined, we were able to capture complex non-sequential access patterns better than a state-of-the-art framework for optimizing cloud storage gateways. Moreover, our framework adjusts to variations in the workload faster, and it does not require a static workload to be effective, since its modular design allows it to discover and adapt to changes in the workload. In AVSC, an aspect of context is used to gauge the similarity of events, and compression is performed by keeping relevant events intact and approximating the others. This is done in two stages: we first generate a summarization of the data, then approximately match the remaining events with the existing patterns where possible, or add them to the summary otherwise. We show gains over plain lossless compression at a specified accuracy for identifying the state of the system, and a clear tradeoff between compressibility and fidelity. In the other research areas mentioned, we present challenges and opportunities in the hope that they will spur researchers to further examine these issues in the space of rapidly emerging data-intensive applications. We also discuss how ideas from our research in other domains could be applied in our efforts to provide high-performance data access. / Computer and Information Science
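BeliefCache's internals are not spelled out in this abstract. The sketch below is only a generic illustration of the idea it names: admitting and evicting cache entries according to a per-item ``belief'' (an estimated probability of future access). The class name, the belief update rule, and the threshold and decay parameters are all assumptions made for illustration.

```python
class BeliefDrivenCache:
    """Toy cache that admits and evicts items by an estimated access probability."""

    def __init__(self, capacity=4, admit_threshold=0.3, decay=0.9):
        self.capacity = capacity
        self.admit_threshold = admit_threshold
        self.decay = decay
        self.beliefs = {}   # item -> estimated probability of being needed again
        self.cache = set()

    def update_belief(self, item, accessed):
        """Exponentially weighted estimate of future need (illustrative rule)."""
        prior = self.beliefs.get(item, 0.0)
        self.beliefs[item] = self.decay * prior + (1.0 - self.decay) * (1.0 if accessed else 0.0)

    def access(self, item):
        """Return True on a cache hit; admit the item if its belief is high enough."""
        hit = item in self.cache
        self.update_belief(item, accessed=True)
        if not hit and self.beliefs[item] >= self.admit_threshold:
            if len(self.cache) >= self.capacity:
                # evict the cached item we currently believe in least
                victim = min(self.cache, key=lambda i: self.beliefs.get(i, 0.0))
                self.cache.discard(victim)
            self.cache.add(item)
        return hit
```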
116

The authenticity of Swedish Craft Gin : A multimodal analysis of marketing of Swedish Craft Gin for international audiences

Freij, Anton January 2022 (has links)
This thesis aims to investigate how Swedish companies can appropriate products that have strong cultural connections to other nations or cultures and reintroduce them onto an international market in an authentic way, with emphasis on their Swedish locality. To do this, the study used the booming market of Swedish Craft Gin distilleries. With a multimodal approach and a comparative analysis of five Swedish Craft Gin producers and their international websites, the study explored how authenticity and locality are discursively constructed in the branding and promotion of Swedish Craft Gin brands published in English for an international audience and market. Additionally, the multimodal analysis was complemented by an analysis of the intentions behind semantic choices and strategies in the marketing, based on interview material from the Swedish podcast Gin Podden (The Gin Podcast). The study found that all five target distilleries use common themes and values to convey meaning, such as handcraft and small scale, Swedish quality, technology and sustainability. Furthermore, the study found that locality is the key feature in the construction of authenticity on this market, and that it is constructed and strengthened multimodally through written and visual elements.
117

Increasing big data front end processing efficiency via locally sensitive Bloom filter for elderly healthcare

Cheng, Yongqiang, Jiang, Ping, Peng, Yonghong January 2015 (has links)
In support of the growing elderly population, wearable sensors and portable mobile devices capable of monitoring, recording, reporting and alerting are envisaged to enable an independent lifestyle without reliance on intrusive care programmes. However, the big data readings generated by the sensors are multidimensional, dynamic and non-linear, with weak correlation to observable human behaviors and health conditions, which challenges information transmission, storage and processing. This paper proposes using a Locality Sensitive Bloom Filter to increase the efficiency of Instance Based Learning in front-end sensor data pre-processing, so that only relevant and meaningful information is sent on for further processing, relieving the burden of the above big data challenges. The approach is shown to optimize and enhance a popular instance-based learning method, offering faster speed and lower space requirements, and is adequate for the application.
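The abstract does not give the filter's construction. The following is a minimal sketch, assuming random-hyperplane hashing over real-valued sensor vectors, of how a locality-sensitive Bloom filter can let similar readings set and probe the same bits; the class name and all parameters are illustrative, not the paper's design.

```python
import numpy as np

class LocalitySensitiveBloomFilter:
    """Toy Bloom filter whose hash functions are random-hyperplane LSH,
    so that similar sensor readings tend to map to the same bit positions."""

    def __init__(self, dim, num_bits=1024, num_hashes=4, planes_per_hash=10, seed=0):
        rng = np.random.default_rng(seed)
        self.bits = np.zeros(num_bits, dtype=bool)
        self.num_bits = num_bits
        # each "hash" is a bundle of random hyperplanes; the sign pattern is the bucket
        self.planes = [rng.standard_normal((planes_per_hash, dim)) for _ in range(num_hashes)]

    def _indices(self, x):
        for planes in self.planes:
            signs = (planes @ x) >= 0                          # sign pattern of the projection
            bucket = int("".join("1" if s else "0" for s in signs), 2)
            yield bucket % self.num_bits

    def add(self, x):
        for i in self._indices(x):
            self.bits[i] = True

    def probably_seen_similar(self, x):
        """True if all probed bits are set, i.e. a similar reading was likely added before."""
        return all(self.bits[i] for i in self._indices(x))
```

A reading whose filter lookup returns True can then be treated as redundant and dropped at the front end, so that only novel readings are forwarded for further processing.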
118

Predictors of Academic Success in an Early College Entrance Program

Earls, Samuel Wayne 12 1900 (has links)
Early college entrance programs have existed in the United States since the 1950s, but in-depth research on academic success in these programs is lacking. Every year, early college entrance programs utilize a variety of data-gathering and candidate-screening techniques to select hundreds of students for admission into these accelerated programs. However, only a smattering of research articles has discussed the factors that predict academic success in these programs. This exploratory study investigated commonly-relied-upon admissions data points—such as high school GPA and ACT scores—and demographic information—such as sex, ethnicity, and locality—to see if any of these factors predicted academic success: namely, graduation and early college entrance program GPA. Secondary data from nearly 800 students admitted over an 11-year period to a state-supported, residential early college entrance program located at a large Southern university in the United States were utilized for this study. Logistic regression failed to yield a model that could accurately predict whether or not a student would graduate from the program. Multiple regression models showed that high school GPA and ACT scores were predictive of performance, and that factors like locality and ethnicity can have predictive power as well. However, the low variance in performance explained by the variables included in this study demonstrates that high school GPA, standardized test scores, locality, sex, and ethnicity can only tell us so much about a student's likelihood of success in an early college entrance program.
119

Register Transfer Level Simulation Acceleration via Hardware/Software Process Migration

Blumer, Aric David 16 November 2007 (has links)
The run-time reconfiguration of Field Programmable Gate Arrays (FPGAs) opens new avenues to hardware reuse. Through the use of process migration between hardware and software, an FPGA provides a parallel execution cache. Busy processes can be migrated into hardware-based, parallel processors, and idle processes can be migrated out, increasing the utilization of the hardware. The application of hardware/software process migration to the acceleration of Register Transfer Level (RTL) circuit simulation is developed and analyzed. RTL code can exhibit a form of locality of reference in which recently executing processes tend to be executed again. This property is termed executive temporal locality, and it can be exploited by migration systems to accelerate RTL simulation. In this dissertation, process migration is first formally modeled using Finite State Machines (FSMs). Upon these FSMs are built programs, processes, migration realms, and the migration of process state within a realm. From this model, a taxonomy of migration realms is developed. Second, process migration is applied to the RTL simulation of digital circuits. The canonical form of an RTL process is defined, and transformations of HDL code are justified and demonstrated. These transformations allow a simulator to identify the basic active units within the simulation and combine them to balance the load across a set of processors. Through the use of input monitors, executive locality of reference is identified and demonstrated on a set of six RTL designs. Finally, the implementation of a migration system is described which utilizes Virtual Machines (VMs) and Real Machines (RMs) in existing FPGAs. Empirical and algorithmic models are developed from the data collected from the implementation to evaluate the effect of optimizations and migration algorithms. / Ph. D.
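The dissertation's formal FSM model is not reproduced here. The toy sketch below only illustrates the general idea the abstract names: a process whose state is an FSM snapshot, and migration realms between which that snapshot can be transferred. The transition table, class names and realm representation are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessFSM:
    """Toy process modeled as an FSM whose state can be captured and resumed elsewhere."""
    state: str = "idle"
    counters: dict = field(default_factory=dict)

    def step(self, event):
        # illustrative transition table, not the dissertation's model
        transitions = {("idle", "input"): "busy", ("busy", "done"): "idle"}
        self.state = transitions.get((self.state, event), self.state)

    def snapshot(self):
        """Capture the migratable state of the process."""
        return {"state": self.state, "counters": dict(self.counters)}

class Realm:
    """A migration realm: a set of executors (e.g. software or FPGA slots) hosting process state."""
    def __init__(self, name):
        self.name = name
        self.processes = {}

    def admit(self, pid, snapshot):
        self.processes[pid] = snapshot      # resume from the captured FSM state

def migrate(pid, src: Realm, dst: Realm):
    """Move a process between realms by transferring its FSM snapshot."""
    dst.admit(pid, src.processes.pop(pid))
```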
120

[en] LSHSIM: A LOCALITY SENSITIVE HASHING BASED METHOD FOR MULTIPLE-POINT GEOSTATISTICS / [pt] LSHSIM: UM MÉTODO DE GEOESTATÍSTICA MULTIPONTO BASEADO EM LOCALITY SENSITIVITY HASHING

PEDRO NUNO DE SOUZA MOURA 14 November 2017 (has links)
[pt] A modelagem de reservatórios consiste em uma tarefa de muita relevância na medida em que permite a representação de uma dada região geológica de interesse. Dada a incerteza envolvida no processo, deseja-se gerar uma grande quantidade de cenários possíveis para se determinar aquele que melhor representa essa região. Há, então, uma forte demanda de se gerar rapidamente cada simulação. Desde a sua origem, diversas metodologias foram propostas para esse propósito e, nas últimas duas décadas, Multiple-Point Geostatistics (MPS) passou a ser a dominante. Essa metodologia é fortemente baseada no conceito de imagem de treinamento (TI) e no uso de suas características, que são denominadas de padrões. No presente trabalho, é proposto um novo método de MPS que combina a aplicação de dois conceitos-chave: a técnica denominada Locality Sensitive Hashing (LSH), que permite a aceleração da busca por padrões similares a um dado objetivo; e a técnica de compressão Run-Length Encoding (RLE), utilizada para acelerar o cálculo da similaridade de Hamming. Foram realizados experimentos com imagens de treinamento tanto categóricas quanto contínuas que evidenciaram que o LSHSIM é computacionalmente eficiente e produz realizações de boa qualidade, enquanto gera um espaço de incerteza de tamanho razoável. Em particular, para dados categóricos, os resultados sugerem que o LSHSIM é mais rápido do que o MS-CCSIM, que corresponde a um dos métodos componentes do estado-da-arte. / [en] Reservoir modeling is a very important task that permits the representation of a geological region of interest. Given the uncertainty involved in the process, one wants to generate a considerable number of possible scenarios so as to find those which best represent this region. There is thus a strong demand for quickly generating each simulation. Since its inception, many methodologies have been proposed for this purpose and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this work, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. We have performed experiments with both categorical and continuous images which showed that LSHSIM is computationally efficient and produces good-quality realizations, while achieving a reasonable space of uncertainty. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
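The abstract names two ingredients, LSH for finding candidate patterns and RLE for fast Hamming similarity, without giving the actual LSHSIM algorithm. The sketch below only illustrates those two ingredients under simple assumptions (bit-sampling LSH for binary patterns of equal length); it is not the paper's implementation.

```python
from itertools import groupby
import random

def rle(bits):
    """Run-length encode a binary pattern: [(value, run_length), ...]."""
    return [(v, len(list(g))) for v, g in groupby(bits)]

def hamming_rle(a_runs, b_runs):
    """Hamming distance computed directly on two run-length encodings of
    equal-length patterns, by sweeping aligned runs instead of single bits."""
    dist, i, j = 0, 0, 0
    ai, bj = list(a_runs), list(b_runs)
    ra, rb = ai[0], bj[0]
    while True:
        overlap = min(ra[1], rb[1])
        if ra[0] != rb[0]:
            dist += overlap                 # whole overlapping run disagrees
        ra = (ra[0], ra[1] - overlap)
        rb = (rb[0], rb[1] - overlap)
        if ra[1] == 0:
            i += 1
            if i == len(ai):
                break
            ra = ai[i]
        if rb[1] == 0:
            j += 1
            if j == len(bj):
                break
            rb = bj[j]
    return dist

def bit_sampling_hash(bits, positions):
    """LSH for Hamming space: the sampled bits form the bucket key, so
    patterns that agree on the sampled positions land in the same bucket."""
    return tuple(bits[p] for p in positions)

if __name__ == "__main__":
    a = [random.randint(0, 1) for _ in range(64)]
    b = [1 - v if random.random() < 0.1 else v for v in a]   # ~10% flipped copy
    positions = random.sample(range(64), 8)
    same_bucket = bit_sampling_hash(a, positions) == bit_sampling_hash(b, positions)
    print(same_bucket, hamming_rle(rle(a), rle(b)))
```

In an MPS-style workflow, only the patterns falling in the same LSH bucket as the target would be compared with the run-length Hamming distance, which is where the claimed speedup over exhaustive search comes from.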
