31

Open-Source Parameterized Low-Latency Aggressive Hardware Compressor and Decompressor for Memory Compression

Jearls, James Chandler 16 June 2021 (has links)
In recent years, memory has proven to be a constraining factor in many workloads. Memory is an expensive necessity in many settings, from embedded devices with a few kilobytes of SRAM to warehouse-scale computers with thousands of terabytes of DRAM. Memory compression has existed in all major operating systems for many years; however, while faster than swapping to disk, memory decompression adds latency to data read operations. Companies and research groups have investigated hardware compression to mitigate these problems. Still, open-source low-latency hardware compressors and decompressors do not exist; as such, every group that studies hardware compression must re-implement them. Importantly, because the devices that can benefit from memory compression vary so widely, no single solution can address the area, latency, power, and bandwidth requirements of all devices. This work addresses these issues by implementing hardware accelerators for three popular compression algorithms: LZ77, LZW, and Huffman encoding. Each implementation includes a compressor and a decompressor, and all designs are fully parameterized, with a total of 22 parameters across the designs. All of the designs are open-source under a permissive license. Finally, configurations of this work achieve decompression latencies under 500 nanoseconds, far closer than existing works to the 255 nanoseconds required to read an uncompressed 4 KB page, while still achieving compression ratios comparable to software compression algorithms. / Master of Science / Computer memory, the fast, temporary storage where programs and data are held, is expensive and limited. Compression allows data and programs to be held in memory in a smaller format so they take up less space.
This work implements hardware compression and decompression accelerators so that programs can access compressed data more quickly. It includes three hardware compressor and decompressor designs that can be easily modified and are free for anyone to use however they would like. The included designs are orders of magnitude smaller and less expensive than the existing state of the art, and they reduce decompression time by up to 6x. These smaller areas and latencies come at a relatively small cost in compression ratio: only 13% on average across the tested benchmarks.
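To make the first of the three named algorithms concrete, the following is a minimal software sketch of LZ77's sliding-window matching, which the thesis implements in hardware. The `window` and `lookahead` parameters are illustrative stand-ins for the design's parameterization, not the thesis's actual parameter set.

```python
# Minimal software sketch of LZ77 sliding-window compression.
# Emits (offset, length, next_byte) tokens; the decompressor copies matches
# byte-by-byte, which also handles overlapping (run-length-like) matches.

def lz77_compress(data: bytes, window: int = 255, lookahead: int = 15):
    """Greedy LZ77: find the longest match in the window behind position i."""
    i, tokens = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # Allow matches to extend past i (overlap into the lookahead),
            # but always leave one byte to emit as the literal.
            while (length < lookahead and i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        tokens.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

def lz77_decompress(tokens):
    out = bytearray()
    for off, length, nxt in tokens:
        for _ in range(length):
            out.append(out[-off])  # byte-wise copy supports overlapping matches
        out.append(nxt)
    return bytes(out)
```

A hardware implementation parallelizes the match search that the inner loop performs serially here, which is where the sub-500 ns decompression latencies become possible.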
32

Characterizing Web Response Time

Liu, Binzhang M.S. 07 May 1998 (has links)
It is critical to understand WWW latency in order to design better HTTP protocols. In this study we characterize Web response time and examine the effects of proxy caching, network bandwidth, traffic load, persistent connections for a page, and periodicity. Based on studies with four workloads, we show that at least a quarter of the total elapsed time is spent establishing TCP connections with HTTP/1.0. The distributions of connection time and elapsed time can be modeled using Pearson, Weibull, or log-logistic distributions. We also characterize the effect of a user's network bandwidth on response time: average connection time from a client on a 33.6 Kbps modem is two times longer than that from a client on switched Ethernet. We estimate the elapsed-time savings from using persistent connections for a page to vary from about a quarter to a half. Response times display strong daily and weekly patterns. This study finds that a proxy caching server is sensitive to traffic load. Contrary to conventional wisdom about Web proxy caching, it also finds that a single stand-alone Squid proxy cache does not always reduce response time for our workloads. Implications of these results for future versions of the HTTP protocol and for Web application design are also discussed. / Master of Science
33

The Virginia Tech Phasor Data Concentrator Analysis & Testing System

Dekhane, Kunal Shashikant 20 January 2012 (has links)
The development of the Smart Grid and an increased emphasis on Wide Area Measurement, Automation, Protection and Control (WAMPAC) have led to a substantial increase in the development and use of synchrophasor systems. The Department of Energy, having realized their importance to the power system, has encouraged their deployment through the Smart Grid Investment Grant. With many utilities beginning to deploy large numbers of PMUs across their respective power systems, Phasor Data Concentrators (PDCs) play a crucial part in accurately relaying data from the point of measurement to the operators at the control center. The current synchrophasor standard, IEEE C37.118-2005, adequately covers the steady-state characterization of PMUs but does not specify requirements for PDCs. Having recognized the need for such a standard, the North American Synchrophasor Initiative (NASPI) has developed a guide outlining PDC objectives, functions, and test requirements. Virginia Tech has developed a PDC Test System under these guidelines and per the requirements of the PJM Synchrophasor Systems Deployment Project. This thesis focuses on the testing tools developed and the procedures implemented in the Virginia Tech PDC Test System. / Master of Science
34

Bcl11b, a T-cell commitment factor, and its role in human immunodeficiency virus-1 transcription

Woerner, Andrew James 22 January 2016 (has links)
Advancements in antiretroviral therapy (ART) have made significant strides in reducing human immunodeficiency virus (HIV) viral loads in patients to undetectable levels. Upon interruption of ART, however, viral load rebounds and AIDS symptoms return. Latent reservoirs of virus are responsible for this phenomenon because they contain integrated provirus that is transcriptionally silent, and thus unaffected by ART and hidden from host immune surveillance. A commonly proposed mechanism for HIV latency is the presence of host cell transcription factors that lead to transcriptional silencing. CD4+ T cells and other immune cells, whether due to their subset phenotype, activation state, or stage in development, vary in their battery of transcription factors. Of particular interest is Bcl11b, a critical transcription factor involved in the commitment to a T-cell fate during thymocyte development that has recently been shown to play a role in silencing HIV-1 transcription. Bcl11b is required for inhibiting the development of natural killer cell-like traits during early T-cell development. This repressive zinc-finger transcription factor has recently been shown to inhibit HIV-1 transcription in microglial cells via recruitment of chromatin remodeling factors, and Bcl11b has also been shown to interact with other HIV-1 transcriptional silencing factors such as NuRD and NCoR. Preliminary mass spectrometry results have pointed to a physical interaction of Bcl11b with NELF, another proven repressive factor of HIV transcription. We hypothesize that Bcl11b represses HIV transcription and is recruited to the HIV-1 long terminal repeat (LTR) through a paused RNA polymerase II complex, contributing to the establishment and maintenance of latency. Our studies confirm Bcl11b's repressive role in T cells and investigate its mechanism with NELF.
Transfection of HEK293T cells with HIV-LUC shows a nearly 50% reduction in HIV transcription in the presence of Bcl11b, and analysis of viral protein output by p24 ELISA confirms this result. Furthermore, when co-transfected with NELF-B, the two transcription factors lead to a nearly 90% reduction in HIV transcription, suggesting that these factors cooperate to repress HIV transcriptional elongation. Protein and chromatin immunoprecipitations (ChIP) were also performed to test for a direct interaction between the two transcription factors and the HIV LTR. A physical interaction between the two factors was not observed, while ChIP analysis shows enrichment of RNA polymerase II at the transcriptional start site, suggesting that Bcl11b increases RNA polymerase II pausing. We conclude that Bcl11b plays a repressive role in HIV transcription through promoter-proximal pausing, acting synergistically with NELF, though a yet-to-be-identified factor is responsible for coordinating the two. As an important T-cell commitment factor, Bcl11b may play an important role in establishing and maintaining cellular latency through transcriptional repression via a complex with NELF. Confirming Bcl11b's role as a repressive transcription factor and providing further support for a synergistic role with NELF could highlight a new target for therapeutic strategies against the elusive latent reservoir.
35

QoS-aware content oriented flow routing in optical computer network

Al-Momin, Mohammed M. Saeed Abdullah January 2013 (has links)
This thesis tackles one of the most important issues in the field of network communication: quality of service (QoS). The increasing demand for high-quality applications, together with the rapid growth in the number of Internet users, has led to massive traffic being transmitted over the Internet. This thesis proposes new ideas for managing the flow of this traffic in a manner that improves communication QoS. This can be achieved by replacing conventional application-insensitive routing schemes with schemes that take the type of application into account when making routing decisions. As a first contribution, the effect of potential developments in quality of experience on the loading of the Basra optical network has been investigated. Furthermore, the traffic due to each application was handled differently according to its delay and loss sensitivities. Load-rate distributions over the various links due to the different applications were used to identify the locations of possible congestion in the network and the dominant applications causing it. In addition, OpenFlow and Optical Burst Switching (OBS) techniques were used to provide a wider range of network controllability and management. A centralised routing protocol that takes into account available bandwidth, delay, and security as three important QoS parameters when forwarding traffic of different types was proposed and implemented using the OMNeT++ network simulator. As a novel idea, security has been incorporated into the QoS requirements by using Oyster Optics Technology (OOT) to secure some of the optical links, supplying the network with secure paths for applications that have high privacy requirements. A particular type of traffic is routed according to the importance of these three QoS parameters for that traffic type.
Link utilisation, end-to-end delays, and security levels for the different applications were recorded to demonstrate the feasibility of the proposed system. To decrease the amount of traffic overhead, the same QoS constraints were implemented on a distributed ant-colony-based routing scheme. The traditional ant routing protocol was improved by adopting the idea of Red-Green-Blue (RGB) pheromone routing to incorporate these QoS constraints. Improvements of 11% in load balancing and 9% in security for private data were achieved compared with conventional ant routing techniques. In addition, this ant-based routing was used to propose an improved solution to the routing and wavelength assignment problem in WDM optical computer networks.
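The RGB-pheromone idea above can be sketched as follows. This is a hedged illustration, not the thesis's protocol: the toy topology, class weights, and evaporation rate are all assumptions introduced here, with the three pheromone channels mapped to bandwidth, delay, and security preference.

```python
import random

# Hedged sketch of RGB (three-channel) pheromone routing: each link carries
# three pheromone values, and each traffic class weights the channels
# differently when an ant chooses its next hop. All numbers are illustrative.

pheromone = {             # (node, neighbor) -> [R, G, B] pheromone levels,
    ("A", "B"): [1.0, 0.2, 0.5],  # standing in for bandwidth/delay/security
    ("A", "C"): [0.3, 1.0, 0.9],
}

CLASS_WEIGHTS = {         # per-application weighting of the three channels
    "video":   (0.2, 0.7, 0.1),   # delay-sensitive traffic
    "banking": (0.1, 0.2, 0.7),   # security-sensitive traffic
}

def next_hop(node, neighbors, traffic_class, rng=random):
    """Pick a neighbor with probability proportional to its weighted score."""
    w = CLASS_WEIGHTS[traffic_class]
    scores = [sum(p * wi for p, wi in zip(pheromone[(node, n)], w))
              for n in neighbors]
    total = sum(scores)
    return rng.choices(neighbors, weights=[s / total for s in scores])[0]

def reinforce(edge, channel, amount=0.1, evaporation=0.05):
    """Evaporate all channels on a link slightly, then deposit on one channel."""
    for c in range(3):
        pheromone[edge][c] *= (1 - evaporation)
    pheromone[edge][channel] += amount

hop = next_hop("A", ["B", "C"], "banking")
```

With these illustrative numbers, security-sensitive "banking" ants favor the A-C link (weighted score 0.86 vs. 0.49), while successful deliveries would call `reinforce` on the channel that mattered for that traffic class.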
36

IMPROVING REAL-TIME LATENCY PERFORMANCE ON COTS ARCHITECTURES

Bono, John, Hauck, Preston 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Telemetry systems designed to support the current needs of mission-critical applications often have stringent real-time requirements. These systems must guarantee a maximum worst-case processing and response time when incoming data is received, and these real-time tolerances continue to tighten as data rates increase. At the same time, end-user requirements for COTS pricing efficiencies have forced many telemetry systems to run on desktop operating systems like Windows or Unix. While these desktop operating systems offer advanced user interface capabilities, they cannot meet the real-time requirements of many mission-critical telemetry applications, and attempts to enhance desktop operating systems to support real-time constraints have met with only limited success. This paper presents a telemetry system architecture that offers real-time guarantees while extensively leveraging inexpensive COTS hardware and software components. This is accomplished by partitioning the telemetry system onto two processors. The first is a NetAcquire subsystem running a real-time operating system (RTOS); the second runs a desktop operating system that hosts the user interface. The two processors are connected by a high-speed Ethernet IP internetwork. This architecture affords an improvement of two orders of magnitude over the real-time performance of a standalone desktop operating system.
37

Link Validation and Performance Measurement within the NASA Space Network

Puri, Amit, Lokshin, Kirill, Tao, Felix, Cunniff, David, Glasscock, David, Ramlagan, Raj 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The National Aeronautics and Space Administration (NASA) Space Network (SN) consists of a Space Segment, composed of the Tracking and Data Relay Satellite (TDRS) fleet, and a Ground Segment that includes the White Sands Ground Terminal (WSGT), Second TDRS Ground Terminal (STGT) and the Guam Remote Ground Terminal (GRGT). Collectively, the SN Ground Segment is commonly referred to as the White Sands Complex (WSC). Traditional methods of latency and performance measurement across the component links of the network have relied on simplified test patterns and basic data formats that are often specific to the instruments providing the measurements. These tests often do not correlate with the operational data normally transferred through the network. This paper discusses an alternative approach to performance measurement within the Space Network: by embedding and extracting performance metrics directly within simulated data sets that closely resemble operational traffic, performance measurement can be combined with link verification and validation to provide a single, comprehensive set of test and measurement activities.
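The embed-and-extract idea can be sketched as follows. This is a hedged illustration only: the frame layout below (a sequence number plus a send timestamp packed ahead of the payload) is an assumption introduced here, not the actual SN data format described in the paper.

```python
import struct
import time

# Sketch of embedding measurement metadata inside operationally realistic
# frames: a sequence number and send timestamp are packed into the frame,
# then recovered at the receiver to compute latency and detect loss.
HEADER = struct.Struct(">IQ")  # 4-byte sequence number, 8-byte ns timestamp

def make_frame(seq: int, payload: bytes) -> bytes:
    """Sender side: prepend metrics header to a payload resembling real traffic."""
    return HEADER.pack(seq, time.monotonic_ns()) + payload

def parse_frame(frame: bytes):
    """Receiver side: recover the metrics and measure latency in one pass."""
    seq, sent_ns = HEADER.unpack_from(frame)
    latency_ns = time.monotonic_ns() - sent_ns
    return seq, latency_ns, frame[HEADER.size:]

frame = make_frame(7, b"simulated telemetry data")
seq, latency_ns, payload = parse_frame(frame)
```

Because the payload itself is the simulated operational data, the same frames serve double duty: the receiver validates the link (payload integrity, sequence gaps) while simultaneously collecting latency metrics.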
38

Characterization of the Transcripts that Encode pUL138, a Latency Determinant, During Human Cytomegalovirus Infection

Grainger, Lora Ann January 2010 (has links)
Mechanisms involved in the establishment of HCMV latency are poorly understood; however, work in our laboratory has demonstrated that the ULb'-encoded protein pUL138 is the first viral determinant shown to function in the establishment of HCMV latency in CD34+ hematopoietic progenitor cells (HPCs). This work characterizes the transcripts that encode pUL138, identifies three novel ULb' proteins (pUL133, pUL135, and pUL136), and represents the first demonstration of internal ribosome entry site (IRES)-mediated expression of pUL138. pUL138 is encoded on three polycistronic transcripts of 3.6, 2.7, and 1.4 kb in length. pUL133, pUL135, and a truncated pUL136 are expressed from the 3.6-, 2.7-, and 1.4-kb transcripts, respectively, in addition to pUL138. We demonstrate that pUL138 expression is inducible from the IRES on the 3.6- and 2.7-kb transcripts under conditions of cellular stress, whereas pUL138 expression from the 1.4-kb transcript is inhibited under the same conditions. Differential utilization of the UL138 transcripts and their encoded proteins may regulate the outcome of viral infection in a cell-type- or cell-context-dependent manner. The interaction of these proteins during HCMV latency is the focus of ongoing research. In addition, this work presents preliminary data regarding the type I interferon (IFN) response during productive HCMV infection in MRC5 fibroblasts and during the establishment of HCMV latency in CD34+ HPCs.
39

Development of a novel class of HIV-1 latency-reversing agents targeting the viral protein Tat

Tong, Phuoc Bao Viet 23 July 2019 (has links)
Although antiretroviral therapy (ART) effectively suppresses HIV-1 multiplication in infected patients, it does not cure the infection: if ART is stopped, a viral rebound is observed. This rebound is mainly due to the stochastic activation of latent cells that contain the integrated viral genome but do not produce virus, and are therefore targeted neither by ART nor by the immune system. These latent cells are rare (1-10 per million quiescent CD4+ T cells), but they appear quickly after primary infection and thus constitute a major obstacle to viral eradication. The most promising strategy for eliminating these cells, known as "shock and kill", is to reactivate them so that they can then be targeted by ART and/or lysed by cytotoxic T cells. A number of latency-reversing agents (LRAs) have been developed to reactivate these cells. They target cellular proteins such as histone deacetylases (HDACs) or protein kinase C, and most of them therefore show nonspecific effects and sometimes toxicity. Tat is the HIV-1 protein that drives viral transcription and promotes the translation of viral genes; it is the key protein for latency reversal and for initiating viral protein production in the latent cell. Based on the available NMR structures of Tat, we used molecular dynamics to identify its most stable conformations, which enabled an in silico screen for potential Tat ligands. Ten molecules were selected. One molecule, called D10, binds specifically to Tat and increases its transactivation activity about 4-fold. Moreover, D10 shows LRA activity on the latent cell lines JLat-9.2 and OM-10.1.
The LRA activity of D10 on these lines is 50-70% of that of SAHA (vorinostat), an HDAC inhibitor and candidate LRA currently in Phase 2 clinical trials. On latent cells from treated HIV patients, D10 at 50 nM shows highly efficient LRA activity, 80% greater than that of bryostatin-1, which acts on PKC and is currently considered the most promising LRA. The mechanism of action of D10 appears to be stabilization of the Tat-TAR transcription complex, an effect observed at 30 nM D10. Using a chemoinformatics approach, we selected 11 analogues of D10, called N1-N11. Some of these analogues (N5, N8) show a stronger effect than D10 both on Tat transactivation and on LRA activity in latent cell lines. These results allowed us to outline a chemical structure / LRA activity relationship for these molecules. We have thus identified new HIV-1 latency-reversing agents that target Tat and are more specific than LRAs targeting cellular proteins; they are the first Tat activators identified. / Despite its efficiency in preventing viral multiplication, antiretroviral therapy (ART) is unable to cure patients with HIV-1. Indeed, if ART is stopped, a viral rebound is observed. This increase in blood viral load is due to the activation of HIV-1 reservoirs, among which are latently infected memory CD4+ T cells. These cells are rare (1 per million quiescent T cells) and appear very quickly following infection. To purge this long-lived reservoir, the "shock and kill" approach was developed. This strategy relies on the use of latency-reversing agents (LRAs) to induce reservoir activation. All LRAs developed until now target cellular proteins such as histone deacetylases or protein kinase C; these LRAs are not specific for viral transcription and have displayed modest effects ex vivo.
Here we present a new LRA family that binds to and activates HIV-1 Tat, the key regulator of viral transcription and latency reversal. These compounds are not cytotoxic and specifically activate Tat transcriptional activity. They were less efficient than available LRAs on HIV-1 latent cell lines. Nevertheless, when tested on latent T cells from HIV-1 patients, the lead compound D10 was ~80% more efficient than bryostatin-1, one of the best LRAs available to date. This effect was observed at 50 nM, which corresponds to the D10 concentration required for this compound to stabilize the Tat-TAR transcription complex. These molecules are the first Tat activators available.
40

Survival models to estimate the latency period of cancer

Bettim, Bárbara Beltrame 29 June 2017 (has links)
Cancer is responsible for approximately 13% of all deaths worldwide, occurring mainly in people who are diagnosed late and at advanced stages. Given the disease's devastating characteristics and ever-increasing prevalence, the need for constant investigation and research in this area, aimed at improving early detection and aiding prevention and treatment, is unquestionable. Among the various existing approaches, one alternative is the creation of techniques to estimate the "silent" growth period of cancer, that is, to determine the moment the carcinogenic process begins, also called the latency period. A literature review revealed a scarcity of models that estimate cancer latency, indicating the need for study on the topic. In this context, survival analysis methods emerge as a useful tool for building such models. This work presents a review of an existing model, including its formulation and estimation methods, together with an application to a real data set and a discussion of the results obtained. Because the studied method has some limitations, the need to formulate a new model was identified; three alternative models that address the points raised in the discussion are presented, with their respective applications. / Cancer is responsible for about 13% of all deaths in the world, occurring mainly in people who are diagnosed late and at advanced stages. Due to its devastating characteristics and the growing prevalence of the disease, the need for constant investigation and research in this area, in order to improve early detection and to help in its prevention and treatment, is unquestionable.
Among the existing approaches, one alternative is the creation of techniques to estimate the "silent" growth period of cancer, which means knowing the beginning of the carcinogenic period, also known as the latency period. A literature review found a shortage of models that estimate the latency of cancer, indicating the need for study of this theme. In this context, survival analysis methods appear as a useful tool to build these models. In this study, a review of an existing model is presented, as well as its formulation and estimation methods. Furthermore, an application on real data and a discussion of the obtained results are made. As a result, the need to formulate a new model was identified, because of the limitations of the studied one. We present 3 alternative models that solve the points presented in the discussion, with applications.
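The survival-analysis toolkit the abstract refers to can be illustrated with a minimal Kaplan-Meier estimator. This is a hedged sketch only: the observations below are synthetic "time until detection" values with right-censoring flags, not data from the thesis, and the estimator assumes distinct event times for simplicity.

```python
# Minimal Kaplan-Meier survival estimator: at each observed event time,
# the survival probability is multiplied by (at_risk - 1) / at_risk;
# censored observations only shrink the risk set.

def kaplan_meier(times, events):
    """Return (time, S(t)) pairs; events[i]=1 for observed, 0 for censored.
    Assumes distinct times (no ties), which keeps the sketch short."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    for i in order:
        if events[i]:
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1
    return curve

# Synthetic latency-period observations (years) with censoring flags.
times  = [2.0, 3.5, 4.0, 5.0, 6.5, 7.0, 8.0]
events = [1,   1,   0,   1,   1,   0,   1]
curve = kaplan_meier(times, events)
```

A latency model built on such estimates would then relate the survival curve of the unobserved "silent" growth period to the observed ages at diagnosis, which is where the parametric models discussed in the thesis come in.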
