101

Parallelizing hierarchical cache units for ICN routers

Mansilha, Rodrigo Brandão January 2017 (has links)
A key challenge in Information-Centric Networking (ICN) is to develop Content Stores (i.e., cache units) that meet three requirements: large storage space, fast operation, and affordable cost. The so-called Hierarchical Content Store (HCS) is a promising approach to satisfying these requirements jointly. It exploits the temporal correlation between content requests to predict future demands; for example, a user who requests the first minute of a movie is assumed to also request its second minute. In theory, this assumption makes it possible to proactively transfer content from a relatively large but slow cache area (Layer 2 - L2) to a faster but smaller one (Layer 1 - L1). The hierarchical structure has the potential to improve CS performance by an order of magnitude in both throughput and size while keeping cost constant. However, developing an HCS introduces several practical challenges. The L2 and L1 memory levels must be carefully coupled with respect to their transfer rates and sizes, which depend on both hardware specifications (e.g., the L2 read rate, the use of multiple physical SSDs in parallel, bus speed) and software aspects (e.g., the SSD controller and memory management).
In this context, this thesis presents two main contributions. First, we propose an architecture that overcomes the inherent system bottlenecks by parallelizing multiple HCS instances. In summary, the proposed scheme addresses concurrency challenges (specifically, synchronization and race conditions) through deterministic partitioning of content requests among multiple threads, as sketched below. Second, we propose a methodology for investigating HCS development that combines emulation techniques and analytical modeling. This methodology offers advantages over prototyping- and simulation-based approaches: the L2 is emulated, enabling the investigation of a wider variety of boundary scenarios (in terms of both hardware and software) than prototyping would allow with current technologies, while real prototype code is used for the other HCS components (e.g., L1, layer management, and the API), yielding more realistic results than simulation would provide.
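A minimal sketch of the deterministic partitioning idea, under stated assumptions: hashing the content name selects a worker thread, so all requests for a given content are serialized on one private HCS instance and the cache structures need no cross-thread locking. The worker count, queue-based dispatch, and placeholder cache operation are illustrative, not details from the thesis.

    import hashlib
    import queue
    import threading

    NUM_WORKERS = 4  # illustrative; one private HCS instance per thread

    def partition(content_name: str) -> int:
        # Deterministically map a content name to a worker index, so the
        # same content always reaches the same thread.
        digest = hashlib.sha1(content_name.encode()).digest()
        return digest[0] % NUM_WORKERS

    work_queues = [queue.Queue() for _ in range(NUM_WORKERS)]

    def worker(idx: int) -> None:
        cache = {}  # stands in for one private HCS instance (L1 + L2)
        while True:
            name = work_queues[idx].get()
            if name is None:  # shutdown sentinel
                break
            cache[name] = cache.get(name, 0) + 1  # placeholder cache operation

    def dispatch(content_name: str) -> None:
        work_queues[partition(content_name)].put(content_name)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for name in ["movie/minute-1", "movie/minute-2", "news/item-7"]:
        dispatch(name)
    for q in work_queues:
        q.put(None)
    for t in threads:
        t.join()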
102

Intercultural relations in Northern Peru: the north central highlands during the Middle Horizon

Lau, George 10 April 2018 (has links) (PDF)
This contribution surveys the emergence and character of the Middle Horizon in Peru's north highlands. It centers on Ancash department, a region with a rich and unique archaeological record for contextualizing interaction during the period. My discussion begins by detailing the sequence and variability of interregional interaction in Ancash during the latter half of the 1st millennium AD. I then examine the general implications of the available data (especially architecture, long-distance goods, and ceramic style) with a view to identifying current difficulties and encouraging future problem-oriented investigations. Two terms help contextualize the cultural dynamism of the Middle Horizon: bundling (the purposeful acquisition and clustering of objects from long distances) and vector (a distinct cultural predisposition facilitating interaction). Although there is evidence of Wari contact before imperial expansion, trade interaction increased dramatically during the early Middle Horizon, focused on 'bundled' patterns of acquisition. These were followed by new exchange orientations and stylistic emulation. There is very little evidence to indicate territorial control, but Wari strategies highlighted the rich areas of western Ancash while apparently de-emphasising eastern Ancash. Religion and prestige economies appear to have been the most common factors in local engagements with Wari culture.
103

Africa Online : A Study on the Emulation of Chinese Practices and Policy in the Telecom sectors of Ethiopia and Nigeria

Blom, Hampus January 2018 (has links)
In recent years, Chinese development efforts in Africa have increased in scope, making China the second largest investor on the African continent, with Chinese MNCs dominating multiple markets across the continent. The author investigates whether there is empirical support for the assumption that market dominance by Chinese MNCs correlates with similarity to China's policy and practices, an assumption based on Eleanor Westney's study of the emulation of organizational models in late 19th century Japan. To test this correlation and describe where it exists, the author examines the regulation of the telecom markets in Ethiopia and Nigeria, two cases in which Chinese MNCs have varying degrees of control over the telecom market. Whether the studied cases share similarities with the policy and practices of China is examined using the functional method of comparative law as described by Mark Van Hoecke. The study is based on data collected from Freedom House's Freedom on the Net reports, which scrutinize legislation, court cases, and the behaviour of government institutions in 65 countries. The author then discusses similarities and differences between the studied cases and China, concluding that the aforementioned correlation does exist to a certain extent and that further research is required.
104

Emulator for complex sensor-based IT system

Gederin, Ruslan, Mazepa, Viktor January 2013 (has links)
Developing and testing complex information technology (IT) systems is a difficult task. It is even more difficult if parts of the system, whether hardware or software, are not available when needed due to delays or other reasons: the architecture and design cannot be evaluated, existing parts cannot be reliably tested, and thus the whole concept of the system cannot be validated before all parts are in place. To solve this problem in an industrial project, in which the server side had to be finished and tested (cleared for production) while the hardware sensors were designed but not yet implemented, we developed a software emulator for the hardware sensors that meets their exact specification, as well as for parts of the server solution. This allowed server-side development, testing, and system validation to proceed without the hardware sensors in place. Following the exact specification should allow the emulator to be replaced with the real sensors without complications once they are available. In the end, by developing hardware and software in parallel, the project can go into production much earlier than if development were performed in sequence.
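To illustrate the general approach, here is a minimal sketch of a sensor emulator that streams readings to the server under test. The JSON-lines wire format, field names, and one-second sampling interval are illustrative assumptions; the actual sensor specification from the project is not published in the thesis.

    import json
    import random
    import socket
    import time

    def make_reading(sensor_id: str) -> bytes:
        # Hypothetical wire format: one JSON object per line, mimicking
        # what the real sensor would send once implemented.
        reading = {
            "sensor": sensor_id,
            "timestamp": time.time(),
            "value": round(random.uniform(20.0, 25.0), 2),  # e.g. a temperature
        }
        return (json.dumps(reading) + "\n").encode()

    def run_emulator(host: str, port: int, sensor_id: str) -> None:
        # Connect to the server under test and stream emulated readings,
        # so server-side code can be exercised before real sensors exist.
        with socket.create_connection((host, port)) as sock:
            while True:
                sock.sendall(make_reading(sensor_id))
                time.sleep(1.0)  # emulated sampling interval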
105

Improving the Timeliness of SCTP Message Transfers

Hurtig, Per January 2008 (has links)
Due to the cheap and flexible framework that the underlying IP technology of the Internet provides, IP networks are becoming popular in more and more contexts. For instance, telecommunication operators have started to replace fixed legacy telephony networks with IP networks. To support a smooth transition towards IP networks, the Stream Control Transmission Protocol (SCTP) was standardized. SCTP is used to carry telephony signaling traffic and solves a number of problems that would have followed from using the Transmission Control Protocol (TCP) in this context. However, the design of SCTP is still heavily influenced by TCP; in fact, many protocol mechanisms in SCTP are directly inherited from TCP. Unfortunately, many of these mechanisms are not adapted to the kind of traffic that SCTP is intended to transport: time-critical message-based traffic, e.g. telephony signaling. In this thesis we examine and adapt some of SCTP's mechanisms to transport time-critical message-based traffic more efficiently. More specifically, we adapt SCTP's loss recovery and message bundling for timely message transfers. First, we propose and experimentally evaluate two loss recovery mechanisms: a packet-based Early Retransmit algorithm and a modified retransmission timeout management algorithm. We show that these enhancements can reduce loss recovery times by at least 30-50% in some scenarios. In addition, we adapt SCTP's message bundling to better support timely message delivery; the proposed bundling algorithm can in some situations reduce the transfer time of a message by up to 70%. Beyond these proposals, we have also identified and reported mistakes in some of the most popular SCTP implementations. Furthermore, we have continuously developed the network emulation software KauNet to support our experimental evaluations.
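A minimal sketch of the packet-based Early Retransmit idea (in the spirit of what was later standardized as RFC 5827): when fewer than four packets are outstanding, the duplicate-ACK threshold is lowered so a loss at the tail of a message exchange can still be repaired without waiting for a retransmission timeout. The exact rules of the thesis's algorithm may differ.

    def dupack_threshold(outstanding: int) -> int:
        # Classic fast retransmit waits for 3 duplicate ACKs, which can
        # never arrive when fewer than 4 packets are in flight. Early
        # Retransmit lowers the threshold in that case.
        if outstanding < 4:
            return max(outstanding - 1, 1)
        return 3

    def should_fast_retransmit(dupacks: int, outstanding: int) -> bool:
        # Retransmit the oldest outstanding packet as soon as enough
        # duplicate ACKs have been observed for the current window.
        return dupacks >= dupack_threshold(outstanding)

    # With 2 packets outstanding, a single duplicate ACK now triggers
    # recovery instead of a costly timeout:
    assert should_fast_retransmit(dupacks=1, outstanding=2)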
106

Embodied Cognition as Internal Simulation of Perception and Action: Towards a cognitive robotics

Svensson, Henrik January 2002 (has links)
This dissertation discusses the view that embodied cognition is essentially internal simulation (or emulation) of perception and action, and that the same (neural) mechanisms underlie both real and simulated perception and action. More specifically, it surveys evidence supporting the simulation view from different areas of cognitive science (neuroscience, perception, psychology, social cognition, theory of mind). This is integrated with related research in situated robotics, and directions for future work on internal simulation of perception and action in robots are outlined. In sum, the ideas discussed here provide an alternative view of representation, opposed to the traditional correspondence notions of representation that presuppose objectivism and functionalism. Moreover, this view is suggested as a viable route by which situated robotics, which due to its rejection of traditional notions of representation has so far mostly dealt with more or less reactive behavior, can scale up to a cognitive robotics, and thus further contribute to cognitive science and the understanding of higher-level cognition.
107

Fast retransmit inhibitions for TCP

Hurtig, Per January 2006 (has links)
The Transmission Control Protocol (TCP) has been the dominant transport protocol in the Internet for many years. One of the reasons for this is that TCP employs congestion control mechanisms which prevent the Internet from being overloaded. Although TCP's congestion control has evolved over almost twenty years, it remains an active research area, since the environments in which TCP is employed keep changing. One of the congestion control mechanisms TCP uses is fast retransmit, which allows fast retransmission of data that has been lost in the network. Although this mechanism provides the most effective way of retransmitting lost data, it cannot always be employed by TCP due to restrictions in the TCP specification. The primary goal of this work was to investigate when fast retransmit inhibitions occur and how much they affect the performance of a TCP flow. To achieve this goal, a large series of practical experiments was conducted on a real TCP implementation. The results showed that fast retransmit inhibitions exist at the end of TCP flows, and that the increase in total transmission time could be as much as 301% when a loss was introduced at a fast-retransmit-inhibited position in the flow. Even though this increase was large across all of the experiments, ranging from 16-301%, the average performance loss due to an arbitrarily placed loss was not that severe. Because fast retransmit was inhibited in fewer positions of a TCP flow than it was employed, the average increase in transmission time due to these inhibitions was relatively small, ranging from 0.3-20.4%.
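A simplified model of why losses near the end of a flow inhibit fast retransmit: each segment sent after the lost one produces one duplicate ACK, so the standard three-duplicate-ACK rule cannot fire for the last few segments. This toy calculation ignores refinements such as limited transmit and is an illustration, not the thesis's measurement methodology.

    def fast_retransmit_possible(loss_index: int, total_segments: int,
                                 dupack_threshold: int = 3) -> bool:
        # Segments sent after the lost one each trigger one duplicate ACK;
        # fast retransmit needs at least dupack_threshold of them.
        return (total_segments - loss_index - 1) >= dupack_threshold

    # For a 10-segment flow, losses of the last three segments must be
    # recovered by a (slow) retransmission timeout instead:
    inhibited = [i for i in range(10) if not fast_retransmit_possible(i, 10)]
    print(inhibited)  # [7, 8, 9]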
108

A framework for rapid development of dynamic binary translators

Holm, David January 2004 (has links)
Binary recompilation and translation play an important role in computer systems today. They are used by systems such as Java and .NET, and by system emulators like VMware and VirtualPC. A dynamic binary translator has several things in common with a regular compiler, but since it usually has to translate code in real time, several constraints apply, especially when it comes to code optimisations. Designing a dynamic recompiler is a complex process that involves repetitive tasks. Translation tables have to be constructed for the source architecture, containing the data necessary to translate each instruction into binary code that can be executed on the target architecture. This report presents a method that allows a developer to specify how the source and target architectures work using a set of scripting languages. The purpose of these languages is to shift the repetitive tasks to software, so that they do not have to be performed manually by programmers. At the end of the report, a simple benchmark is used to evaluate the performance of a basic IA32 emulator running on a PowerPC target that has been implemented using the system described here. The results of the benchmark are compared to the results of running the same benchmark on other, existing emulators, in order to show that the system presented here can compete with existing methods. Several ongoing research projects are looking into ways of designing binary translators; most of them focus on optimising code in real time and on solving problems related to binary translation, such as handling self-modifying code.
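A toy illustration of the translation-table idea described above: each source opcode maps to an emitter producing equivalent target instructions, and a driver walks a decoded basic block through the table. The opcode names and the textual "target code" are illustrative stand-ins, not the report's actual table format.

    # Each source-architecture opcode maps to an emitter producing
    # equivalent target-architecture instructions (here as text).
    def emit_mov(dst: str, src: str) -> list[str]:
        return [f"mov {dst}, {src}"]

    def emit_add(dst: str, src: str) -> list[str]:
        return [f"mov r0, {dst}", f"add r0, r0, {src}", f"mov {dst}, r0"]

    TRANSLATION_TABLE = {
        "MOV": emit_mov,
        "ADD": emit_add,
    }

    def translate_block(block: list[tuple[str, str, str]]) -> list[str]:
        # Translate a decoded basic block of (opcode, dst, src) tuples.
        out: list[str] = []
        for opcode, dst, src in block:
            out.extend(TRANSLATION_TABLE[opcode](dst, src))
        return out

    print(translate_block([("MOV", "a", "b"), ("ADD", "a", "c")]))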
109

Recompiling DSP applications to x86 using LLVM IR

Stenberg, David January 2014 (has links)
This thesis describes the design and implementation of a prototype LLVM compiler backend, x86-64p, that compiles code written for a DSP architecture, FADER, into executables for the x86-64 architecture. The prototype takes LLVM IR generated for the FADER architecture and compiles x86-64 executables that emulate the properties of the DSP architecture, e.g. the multiple address spaces, the big-endianness, and the support for fixed-point arithmetic. The backend is compared to a previous solution, C-Emu, which converts the DSP code to normal C code that is compiled using a normal x86-64 compiler. The two solutions are compared in terms of correctness, debuggability, and performance. The created prototype handles code containing low-level architectural assumptions better than C-Emu. However, the added emulation reduces the debuggability and performance of the generated executables; we have measured a runtime overhead of up to a factor of two compared to C-Emu. We also present some possible solutions for these issues.
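Two of the emulated DSP properties mentioned above, sketched in isolation: saturating fixed-point multiplication and big-endian loads on a little-endian host. The Q15 format and 32-bit word size are illustrative assumptions; FADER's actual formats are not specified here.

    def q15_mul(a: int, b: int) -> int:
        # Multiply two Q15 fixed-point values (16 bits, 15 fractional
        # bits) with saturation, as DSP hardware typically does.
        product = (a * b) >> 15
        return max(-0x8000, min(0x7FFF, product))

    def load_be32(memory: bytes, addr: int) -> int:
        # Emulate a big-endian 32-bit load on a little-endian x86-64 host
        # by explicitly assembling the bytes in big-endian order.
        return int.from_bytes(memory[addr:addr + 4], byteorder="big")

    half = q15_mul(0x4000, 0x4000)            # 0.5 * 0.5 -> 0.25 (0x2000)
    word = load_be32(b"\x00\x00\x00\x2a", 0)  # -> 42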
110

Combining qualitative and quantitative reasoning to support hazard identification by computer

McCoy, Stephen Alexander January 1999 (has links)
This thesis investigates the proposition that quantitative information must be used to control the reporting of hazard scenarios in automatically generated HAZOP reports. HAZOP is a successful and widely accepted technique for the identification of process hazards; however, it requires an expensive commitment of time and personnel near the end of a project. Using a HAZOP emulation tool before conventional HAZOP could speed up the examination of routine hazards, or identify deficiencies in the design of a plant. Qualitative models of process equipment can efficiently model fault propagation in chemical plants. However, purely qualitative models lack the representational power to model many constraints in real plants, resulting in indiscriminate reporting of failure scenarios. In the AutoHAZID computer program, qualitative reasoning is used to emulate HAZOP. Signed directed graph (SDG) models of equipment are used to build a graph model of the plant. This graph is searched to find links between faults and consequences, which are reported as hazardous scenarios associated with process variable deviations. However, factors not represented in the SDG, such as the fluids in the plant, often affect the feasibility of scenarios. Support for the qualitative model system, in the form of quantitative judgements to assess the feasibility of certain hazards, was investigated and is reported here. This thesis also describes the novel "Fluid Modelling System" (FMS), which now provides this quantitative support mechanism in AutoHAZID. The FMS allows conditions to be attached to SDG arcs; fault paths are validated by testing the conditions along their arcs, and infeasible scenarios are removed. In the FMS, numerical limits on process variable deviations have been used to assess the sufficiency of a given fault to cause any linked consequence. In a number of case studies, use of the FMS in AutoHAZID has improved the focus of the automatically generated HAZOP results. This thesis describes qualitative model-based methods for identifying process hazards by computer, in particular AutoHAZID. It identifies a range of problems where the purely qualitative approach is inadequate and demonstrates how such problems can be tackled by selective use of quantitative information about the plant or the fluids in it. The conclusion is that quantitative knowledge is required to support the qualitative reasoning in hazard identification by computer.
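A minimal sketch of SDG-style fault propagation with conditions attached to arcs, in the spirit of the FMS described above: a search links faults to consequences, and an arc condition evaluated against the fluid context prunes infeasible paths. The node names, condition predicate, and fluid record are illustrative assumptions, not AutoHAZID's actual data model.

    from collections import deque

    # arcs: node -> list of (next_node, condition); a condition inspects
    # the fluid/plant context and returns False to prune the path.
    ARCS = {
        "pump_fails": [("no_flow", lambda ctx: True)],
        "no_flow": [("overheating", lambda ctx: ctx["fluid_flammable"])],
    }

    def scenario_feasible(fault: str, consequence: str, ctx: dict) -> bool:
        # Breadth-first search from fault to consequence, validating the
        # condition on every arc along the way.
        frontier = deque([fault])
        seen = {fault}
        while frontier:
            node = frontier.popleft()
            if node == consequence:
                return True
            for nxt, condition in ARCS.get(node, []):
                if condition(ctx) and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    # With a non-flammable fluid the overheating scenario is pruned:
    print(scenario_feasible("pump_fails", "overheating",
                            {"fluid_flammable": False}))  # False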
