231

Low-cost and efficient architectural support for correctness and performance debugging

Venkataramani, Guru Prasadh V. 15 July 2009 (has links)
With rapid growth in computer hardware technologies and architectures, software programs have become increasingly complex and error-prone. This software complexity has resulted in program crashes and even security threats. Correctness debugging ensures that a program does not exhibit unintended behavior at runtime. Performance debugging ensures good performance on hardware platforms; a fully correct program with poor performance brings a software product little commercial success. A number of prior debugging solutions either suffer from huge performance overheads or incur high implementation costs. We propose low-cost and efficient hardware solutions that target three specific correctness and performance problems: memory debugging, taint propagation, and comprehensive cache miss classification. Experiments show that our mechanisms incur low performance overheads and can be designed with minimal changes to existing processor hardware. While architects invest time and resources in designing high-end architectures, we show that it is equally important to incorporate useful debugging features into these processors in order to enhance their ease of use for programmers.
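One of the three targeted problems, taint propagation, can be made concrete with a small software model. The sketch below is illustrative only (the thesis proposes processor-level support, and all names here are hypothetical): data from untrusted sources is marked tainted, taint flows from source operands to destinations, and an alarm is raised if tainted data reaches a sensitive sink.

```python
# Illustrative software model of taint propagation; the thesis itself
# proposes hardware support. All names are hypothetical.

class TaintTracker:
    def __init__(self):
        self.tainted = set()  # registers/addresses currently marked tainted

    def load_untrusted(self, loc):
        """Mark data from an untrusted source (e.g. network input) tainted."""
        self.tainted.add(loc)

    def propagate(self, dst, *srcs):
        """dst becomes tainted iff any source operand is tainted."""
        if any(s in self.tainted for s in srcs):
            self.tainted.add(dst)
        else:
            self.tainted.discard(dst)  # overwrite with clean data clears taint

    def check_sink(self, loc):
        """Raise if tainted data reaches a sensitive sink (e.g. jump target)."""
        if loc in self.tainted:
            raise RuntimeError(f"tainted value used at sensitive sink: {loc}")

tracker = TaintTracker()
tracker.load_untrusted("r1")   # r1 <- read(socket)
tracker.propagate("r2", "r1")  # r2 <- r1 + 4  (taint flows to r2)
tracker.check_sink("r2")       # indirect branch on r2 -> flagged
```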
232

Analysis of Hybrid CSMA/CA-TDMA Channel Access Schemes with Application to Wireless Sensor Networks

Shrestha, Bharat 27 November 2013 (has links)
A wireless sensor network consists of a number of sensor devices and one or more coordinators or sinks. A coordinator collects the sensed data from the sensor devices for further processing. In such networks, sensor devices are generally powered by batteries. Since wireless transmission of packets consumes a significant amount of energy, it is important for a network to adopt a medium access control (MAC) technology that is energy efficient and satisfies the communication performance requirements. Carrier sense multiple access with collision avoidance (CSMA/CA), a popular access technique because of its simplicity, flexibility and robustness, suffers from poor throughput and energy inefficiency in wireless sensor networks. On the other hand, time division multiple access (TDMA) is a collision-free and delay-bounded access technique, but it suffers from a scalability problem. For this reason, this thesis focuses on the design and analysis of hybrid channel access schemes that combine the strengths of both CSMA/CA and TDMA. In a hybrid CSMA/CA-TDMA scheme, the use of the CSMA/CA period and the TDMA period can be optimized to enhance communication performance in the network. If such a hybrid scheme is not designed properly, high congestion during the CSMA/CA period and wasted bandwidth during the TDMA period result in poor throughput and energy efficiency. To address this issue, distributed and centralized channel access schemes are proposed to regulate the activities of the sensor devices (such as transmitting, receiving, idling and entering low power mode). This regulation during the CSMA/CA period, together with the allocation of TDMA slots, reduces traffic congestion and thus improves network performance. Time slot allocation methods in hybrid CSMA/CA-TDMA schemes are also proposed and analyzed. Finally, such hybrid schemes are applied in a cellular layout model for a multihop wireless sensor network to mitigate the hidden terminal collision problem.
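As a rough illustration of the hybrid idea, the sketch below models a coordinator that grants a few collision-free TDMA slots to the most backlogged devices and leaves the rest to contend during the CSMA/CA period. The parameters and the crude CSMA/CA model are invented for illustration and are not taken from the thesis.

```python
# Hypothetical hybrid CSMA/CA-TDMA slot allocation, for illustration only.
import random

TDMA_SLOTS = 2  # collision-free slots per superframe; the rest is CSMA/CA

def allocate_tdma_slots(queue_lengths):
    """Grant the TDMA slots to the devices with the largest backlogs."""
    ranked = sorted(queue_lengths, key=queue_lengths.get, reverse=True)
    return {dev: slot for slot, dev in enumerate(ranked[:TDMA_SLOTS])}

def csma_ca_attempt(p_busy=0.3):
    """Crude CSMA/CA model: transmit only if the channel is sensed idle."""
    return random.random() > p_busy

queues = {"node_a": 9, "node_b": 2, "node_c": 5, "node_d": 7}
schedule = allocate_tdma_slots(queues)

for dev in queues:
    if dev in schedule:
        print(f"{dev}: collision-free TDMA slot {schedule[dev]}")
    else:
        outcome = "sent" if csma_ca_attempt() else "backoff"
        print(f"{dev}: contends in CSMA/CA period -> {outcome}")
```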
233

Unanticipated evolution of web service provision software using generative object communication

Bradford, Lindsay William January 2006 (has links)
Providing service via the Web differs from other service provision environments in that a massive number of service requests can arrive unexpectedly within a small time-frame, a situation commonly referred to as a flash crowd. Events of this nature are beyond the control of the service provider and have the potential to severely degrade service quality or, in the worst case, to deny service to all clients completely. The occurrence, severity and targeted Web content of a flash crowd are beyond the control of service provision software; how that software reacts to a flash crowd, however, is not. Given the short-lived nature of flash crowds, it is unreasonable to expect such systems to increase the system resources they can apply to a particular flash crowd event. It is also difficult to predict the particular nature of any flash crowd, and consequently which system resources will become a bottleneck. The driving hypothesis of this research is that, if we are to reasonably expect software to react effectively to flash crowd events, we need to alter that software at runtime, while the flash crowd event is in progress, to remove system bottlenecks. This is a special case of what is usually known as "unanticipated software evolution". This thesis reports on an investigation into how unanticipated software evolution can be applied to running Web service provision software to remove system bottlenecks. It does so by introducing automated dynamic Web content degradation into running software subjected to simulated flash crowd events. The thesis describes and validates runtime extensions that allow generative object communication architectures (a promising class of architecture for unanticipated software evolution) to be converted initially into a Web application server, and later to accept further runtime behaviour changes. Such changes can relieve system bottlenecks by replacing, at runtime, the programming logic that causes them.
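The core idea, replacing behaviour in running software to relieve a bottleneck, can be illustrated with a much simpler stand-in for the generative-object-communication machinery the thesis actually uses: a live dispatch table whose handler is swapped under load, degrading content without restarting the server. All names below are hypothetical.

```python
# Minimal sketch of runtime behaviour replacement for content degradation.
# A simplified stand-in for the thesis's mechanism; names are hypothetical.

def full_page(req):
    return f"<html>rich page for {req}: images, per-user rendering</html>"

def degraded_page(req):
    return f"<html>lightweight cached page for {req}</html>"

handlers = {"render": full_page}  # live dispatch table, mutable at runtime

def serve(req, load):
    # Under flash-crowd load, swap in the cheaper handler without a restart:
    # this swap is the runtime "evolution" step.
    if load > 0.8 and handlers["render"] is not degraded_page:
        handlers["render"] = degraded_page
    return handlers["render"](req)

print(serve("/home", load=0.2))   # normal operation: full page
print(serve("/home", load=0.95))  # flash crowd: degraded page, same process
```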
234

Towards scalable training : narrowing the research-practice gap in the treatment of eating disorders

Bailey-Straebler, Suzanne January 2015 (has links)
Empirically supported treatments (ESTs) now exist for a variety of psychological disorders; however, few individuals have access to these treatments and even fewer receive them in well-delivered form. This has been termed the research-practice gap. A combination of factors likely contributes to individuals not receiving good-quality ESTs. One major reason is the limited availability of effective training in these treatments. Although many therapists wish to learn such treatments, they seldom have the opportunity, as training relies on scarce expert resources and is costly. Furthermore, relatively little is known about the effectiveness of such training or how best to train clinicians: despite having evidence-based treatments, there are no evidence-based trainings. This dissertation examined one example of an EST - enhanced cognitive behavior therapy for eating disorders (CBT-E) - with the overarching aim of evaluating both existing, commonly accepted training methods and newly developed, more scalable ones. How best to train clinicians in CBT for eating disorders has not been investigated previously. The Kirkpatrick training evaluation framework was adopted to guide the studies. Chapter One provided an overview of the research-practice gap, with a particular emphasis on the obstacles faced in training therapists. Chapter Two reviewed the literature on training in ESTs and highlighted gaps in the research evidence and areas for improvement in future studies. An important conclusion was that, although studies varied in design and in the precise form and content of the training investigated, results were mostly consistent in indicating that knowledge and skills tended to improve following training. However, the outcome measures used to assess training were often poorly described, with unknown psychometric properties. Perhaps most importantly, the lack of clearly defined competence cut-points made interpretation difficult. In addition, much of the training investigated had limitations in terms of scalability. Chapters Three, Four and Five aimed to overcome some of these difficulties and provided a series of studies investigating training in CBT-E. Chapter Three employed qualitative methods to investigate trainees' reactions to a conventional workshop and to more scalable web-based training, and found that although trainees enjoyed training, they had a variety of reasons for not planning to implement the treatment as learned. Chapters Four and Five evaluated the impact of different forms of training on knowledge and skill acquisition respectively. Training in CBT-E was associated with increases in knowledge, especially when paired with supervision or scalable guidance, which proved feasible and acceptable to clinician trainees. The results for skill acquisition were less clear, but the new scalable online training was associated with therapists achieving competence. Finally, Chapter Six discussed the broader implications of the work and highlighted areas for future research.
235

Server hardware health status monitoring : Examining the reliability of a centralized monitoring architecture

Jarlow, Victor January 2018 (has links)
Monitoring of servers over the network is important for detecting anomalies in servers in a datacenter. Systems management software exists that can receive messages from servers on which such anomalies occur. Network monitoring software is often used to periodically poll servers for their hardware health status. A centralized approach to network monitoring is presented in this thesis, in which systems management software receives messages from servers and is in turn polled by network monitoring software. Through an experiment, this thesis examines the reliability of the centralized monitoring approach under varying degrees of traffic, in terms of how accurate its responses are and how long it takes to respond with the correct hardware health status when polled. The results of the experiment show that the monitoring architecture is accurate when exposed to a level of load in line with the scalability guidelines offered by the company developing the systems management software, and that the time it takes for a hardware health status to become poll-able lies within the interval of 0 to 15 seconds for the majority of the measurements.
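The measurement at the heart of the experiment, how long until the correct hardware health status becomes poll-able, can be sketched as follows. The endpoint, status values and timings below are hypothetical and stand in for whatever SNMP or REST query the monitoring software would issue.

```python
# Hypothetical sketch of timing how long a health status takes to become
# poll-able; the status source is a stub, not any real management API.
import time

def poll_health(get_status, expected, timeout=30.0, interval=1.0):
    """Poll until `expected` is returned; return elapsed seconds, else None."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if get_status() == expected:
            return time.monotonic() - start
        time.sleep(interval)
    return None  # status never became poll-able within the timeout

# Stub standing in for a query to the systems management software.
state = {"polls": 0}
def fake_status():
    state["polls"] += 1
    return "WARNING" if state["polls"] >= 4 else "OK"  # fault surfaces later

delay = poll_health(fake_status, expected="WARNING")
print(f"status poll-able after {delay:.1f} s" if delay else "timed out")
```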
236

Topics in Power and Performance Optimization of Embedded Systems

January 2011 (has links)
Embedded computational systems have become ubiquitous in recent years, impacting everything from hand-held computers and automotive driver assistance to battlefield command and control and autonomous systems. Typical embedded computing systems are characterized by highly resource-constrained operating environments. In particular, limited energy resources constrain performance in embedded systems, which often rely on independent fuel or battery supplies. Ultimately, mitigating energy consumption without sacrificing performance in these systems is paramount. This work addresses power/performance optimization for energy-constrained embedded systems, emphasizing prevailing data-centric applications such as video and signal processing. Frameworks are presented that exchange quality of service (QoS) for reduced power consumption, enabling power-aware energy management. Power-aware systems provide users with tools for precisely managing available energy resources in light of user priorities, extending availability when QoS can be sacrificed. Specifically, power-aware management tools are introduced for next-generation bistable electrophoretic displays and for the state-of-the-art H.264 video codec. The multiprocessor system-on-chip (MPSoC) paradigm is examined in the context of next-generation many-core hand-held computing devices. MPSoC architectures promise to breach the power/performance wall that prohibits further advancement of complex high-performance single-core architectures. Several many-core distributed-memory MPSoC architectures are commercially available, while the tools necessary to effectively tap their enormous potential remain largely open for discovery. Adaptable scalability in many-core systems is addressed through a scalable high-performance multicore H.264 video decoder implemented on the representative Cell Broadband Engine (CBE) architecture. The resulting agile, performance-scalable system enables efficient adaptive power optimization via decoding-rate-driven sleep and voltage/frequency state management. The significant problem of mapping applications onto these architectures is additionally addressed from the perspective of instruction mapping for limited distributed-memory architectures, with a code overlay generator implemented on the CBE. Finally, runtime scheduling and mapping of scalable applications in multitasking environments is addressed through a lightweight work-partitioning framework targeting streaming applications, with low latency and near-optimal throughput demonstrated on the CBE. / Dissertation/Thesis / Ph.D. Computer Science 2011
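The decoding-rate-driven voltage/frequency management mentioned above can be illustrated with a minimal governor sketch: pick the lowest frequency state that still sustains the target frame rate. The frequency table and the linear-scaling assumption are invented for illustration and do not come from the dissertation.

```python
# Hypothetical decoding-rate-driven DVFS governor; operating points and the
# linear throughput-vs-frequency model are illustrative assumptions.

FREQ_STATES_GHZ = [0.8, 1.6, 2.4, 3.2]  # invented DVFS operating points

def pick_frequency(fps_at_max, target_fps, states=FREQ_STATES_GHZ):
    """Assume decode throughput scales roughly linearly with frequency."""
    f_max = states[-1]
    for f in states:  # lowest-power state first
        if fps_at_max * (f / f_max) >= target_fps:
            return f
    return f_max  # even the top state may miss the target; run flat out

# Decoder measured at 90 fps at 3.2 GHz; playback needs only 30 fps, so the
# governor can drop to the lowest state that still meets the target.
print(pick_frequency(fps_at_max=90.0, target_fps=30.0))  # -> 1.6
```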
237

Towards effective analysis of big graphs : from scalability to quality

Tian, Chao January 2017 (has links)
This thesis investigates the central issues underlying graph analysis, namely scalability and quality. We first study incremental problems for graph queries, which aim to compute the changes to the old query answer in response to updates to the input graph. An incremental problem is called bounded if its cost is determined by the sizes of the query and the changes only. No matter how desirable, however, our first results are negative: for common graph queries such as graph traversal, connectivity, keyword search and pattern matching, the incremental problems are unbounded. In light of these negative results, we propose two new characterizations for the effectiveness of incremental computation, and show that the incremental computations above can still be conducted effectively, either by reducing the computations on big graphs to small data, or by incrementalizing batch algorithms to minimize unnecessary recomputation. We next study problems related to improving the quality of graphs. To uniquely identify entities represented by vertices in a graph, we propose a class of keys that are recursively defined in terms of graph patterns and interpreted with subgraph isomorphism. As an application, we study the entity matching problem, which is to find all pairs of entities in a graph that are identified by a given set of keys. Although the problem is proved intractable and cannot be parallelized in logarithmic rounds, we provide two parallel scalable algorithms for it. In addition, to catch numeric inconsistencies in real-life graphs, we extend graph functional dependencies with linear arithmetic expressions and comparison predicates, referred to as NGDs. NGDs strike a balance between expressivity and complexity: if we allow non-linear arithmetic expressions, even of degree at most 2, the satisfiability and implication problems become undecidable. A localizable incremental algorithm is developed to detect errors using NGDs, where the cost is determined by small neighborhoods of nodes in the updates instead of the entire graph. Finally, a rule-based method to clean graphs is proposed. We extend graph entity dependencies (GEDs) as data quality rules. Given a graph, a set of GEDs and a block of ground truth, we fix violations of GEDs in the graph by combining data repairing and object identification. The method finds certain fixes to errors detected by GEDs; that is, as long as the GEDs and the ground truth are correct, the fixes are assured to be correct as their logical consequences. Several fundamental results underlying the method are established, and an algorithm is developed to implement the method. We also parallelize the method and guarantee that its running time decreases as processors are added.
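The flavor of incremental computation described above, where cost tracks the size of the change rather than the whole graph, can be illustrated with a toy reachability example. This is not one of the thesis's algorithms, only a minimal sketch of the incrementalization idea.

```python
# Toy sketch: on an edge insertion, repair reachability by exploring only
# the newly reached region instead of rerunning the batch BFS. Illustrative.
from collections import deque

def reachable_from(adj, src):
    """Batch computation: full BFS from src (src included in the result)."""
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def insert_edge(adj, reach, u, v):
    """Incremental step: cost depends on the newly reached area only."""
    adj.setdefault(u, set()).add(v)
    if u in reach and v not in reach:
        reach |= reachable_from(adj, v)  # explore just the new region

adj = {"a": {"b"}, "b": {"c"}, "x": {"y"}}
reach = reachable_from(adj, "a")   # {'a', 'b', 'c'}
insert_edge(adj, reach, "c", "x")  # touches only x's small component
print(sorted(reach))               # ['a', 'b', 'c', 'x', 'y']
```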
238

Scalable Register File Architecture for CGRA Accelerators

January 2016 (has links)
Coarse-grained Reconfigurable Arrays (CGRAs) are promising accelerators capable of accelerating even non-parallel loops and loops with low trip-counts. One challenge in compiling for CGRAs is to manage both recurring and nonrecurring variables in the register file (RF) of the CGRA. Although prior works have managed recurring variables via a rotating RF, they access the nonrecurring variables through either a global RF or a constant memory. The former does not scale well, and the latter degrades the mapping quality. This work proposes a hardware-software codesign approach to manage all the variables in a local nonrotating RF. The hardware provides a modulo-addition-based indexing mechanism to enable correct addressing of recurring variables in a nonrotating RF. The compiler determines the number of registers required for each recurring variable and configures the boundary between the registers used for recurring and nonrecurring variables. The compiler also pre-loads the read-only variables and constants into the local registers in the prologue of the schedule. Synthesis and place-and-route results for the previous and the proposed RF designs show that the proposed solution achieves a 17% better cycle time. Experiments mapping several important, performance-critical loops collected from MiBench show that the proposed approach improves performance (through better mapping) by 18% compared to using constant memory. / Dissertation/Thesis / Masters Thesis Computer Science 2016
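A minimal software model of the modulo-addition indexing described above is sketched below: recurring variables occupy a rotating window below a compiler-configured boundary, while nonrecurring variables are accessed directly above it. Register counts and names are invented for illustration; the real mechanism is implemented in hardware.

```python
# Illustrative model of modulo-addition register indexing in a local,
# nonrotating RF. Register counts and the boundary value are invented.

def rf_index(base, iteration, boundary, recurring):
    """Map a logical register to a physical one in the local RF.

    boundary: first register reserved for nonrecurring variables;
    registers [0, boundary) hold recurring variables, rotated per iteration.
    """
    if recurring:
        return (base + iteration) % boundary  # modulo addition in hardware
    return base  # nonrecurring: direct, nonrotating access

boundary = 6  # compiler decided 6 registers suffice for recurring variables
for it in range(3):
    print("iter", it,
          "recurring r2 ->", rf_index(2, it, boundary, recurring=True),
          "| nonrecurring r9 ->", rf_index(9, it, boundary, recurring=False))
```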
239

Declarative parallel query processing on large scale astronomical databases

Mesmoudi, Amin 03 December 2015 (has links)
This work is carried out within the framework of the PetaSky project, whose objective is to provide a set of tools for managing tens of petabytes of data from astronomical observations. Our work is concerned with the design of a scalable approach. We first analyzed the ability of MapReduce-based systems supporting SQL to manage the LSST data and to optimize certain types of queries. We analyzed the impact of data partitioning, indexing and compression on query performance. From our experiments, it follows that there is no "magic" technique to partition, store and index data; the efficiency of dedicated techniques depends mainly on the type of queries and the typology of the data considered. Based on our benchmarking work, we identified some techniques to be integrated into large-scale data management systems. We designed a new system that supports multiple partitioning mechanisms and several evaluation operators. We used BSP (Bulk Synchronous Parallel) as the parallel computation model. Unlike the MapReduce model, we send intermediate results to workers, which can then continue their processing. Data is logically represented as a graph, and queries are evaluated by exploring the data graph along forward and backward edges. We also offer a semi-automatic partitioning approach: the system administrator is provided with a set of tools for choosing how to partition the data, using the database schema and domain knowledge. The first experiments show that our approach provides a significant performance improvement over MapReduce systems.
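The BSP-style evaluation the abstract outlines can be illustrated with a single-process simulation: in each superstep, every worker expands its local frontier along forward edges, then all workers synchronize before the next superstep. The partitioning and names below are invented for illustration.

```python
# Single-process sketch of BSP-style graph exploration over partitioned
# data; the partitioning scheme and all names are illustrative only.

def bsp_forward_reachability(partitions, sources, max_supersteps=10):
    """partitions: worker_id -> {vertex: [out-neighbours]} (edge cut)."""
    visited, frontier = set(sources), set(sources)
    for _ in range(max_supersteps):
        messages = set()
        for adj in partitions.values():      # each worker, "in parallel"
            for v in frontier & adj.keys():  # expand its local frontier
                messages.update(adj[v])
        frontier = messages - visited        # barrier: exchange and dedup
        if not frontier:
            break
        visited |= frontier
    return visited

workers = {
    0: {"a": ["b"], "b": ["c"]},
    1: {"c": ["d"], "d": []},
}
print(sorted(bsp_forward_reachability(workers, {"a"})))  # ['a','b','c','d']
```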
240

Systematic mapping study on the scalability of i* (ISTAR)

CAVALCANTI, Paulo de Lima 14 September 2015 (has links)
The iStar (i*) language is a modeling framework applied in requirements engineering, proposed two decades ago. i* models relate all participants involved (actors, agents, roles and positions) through relationships of strategic dependencies and intentions (goals, tasks, softgoals and resources). Over the years, scientific reports have described studies on iStar (i*) and variations of the language, pointing out that i* has been used to model various domains such as telecommunications and air traffic control, among others. However, these studies found that several weaknesses and limitations can be observed in the i* language, for example: lack of standardization, differing modeling methods, lack of reusability, and non-professional tools; among many other challenges, recognized researchers in this area highlight the scalability of its models. This research therefore maps studies that address the scalability of i*, with the objective of identifying: the distribution of these studies, definitions of i* scalability, mentions of contributions dealing with the subject, judgments about the scalability of i*, and open questions related to this theme. All information was obtained from a systematic mapping of the literature, based on a protocol focused on the scalability of i*. The returned studies were filtered by exclusion, inclusion, qualification and grouping criteria, and data were extracted from them to support the synthesis and to answer the proposed research questions. In total, 119 studies on i* scalability were found, of which eleven had the scalability of i* itself as their central focus, while ten provided a definition of the term scalability. Nine studies were considered to give the best coverage for answering the research questions. Overall, 150 mentions of contributions associated with i* scalability were identified. Regarding the ease of scaling i*, 62 of the 119 studies stated that i* scalability is not well handled, while open questions regarding i* scalability were identified in 93 of the 119 studies. The mapping summarizes which studies contain information about the scalability of i*. This will be useful for future research by facilitating the grouping and identification of potential data sources and publications, although the coverage of the studies needs to be improved, since only 9 of the 119 studies evaluated actually contributed substantially to the research questions. Finally, the collected definitions of scalability and the list of publications with contributions will allow comparisons and the reuse of techniques for scaling i* models.
