  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

A Study of Insects Attacking Pinus Flexilis James Cones in Cache National Forest

Nebeker, Thomas Evan 01 May 1970 (has links)
Six species of insects were found attacking limber pine cones from July 26, 1968, through October 4, 1969, in Cache National Forest. The three species considered of major importance are Conophthorus flexilis Hopkins, Dioryctria abietella (D. & S.), and D. sp. near or disclusa Heinrich. The three minor species encountered are Bradysia sp., Trogoderma parabile Beal, and Asynapta keeni (Foote). In addition to the major and minor cone pests, three parasites, Apanteles sp. prob. starki Mason, Elacherus sp., and Hypopteromalus percussor Girault, were found associated with the cone pests. C. flexilis, which completely destroys the cone, was ranked as the number one pest on the basis of numbers present plus severity of damage. During 1968 and 1969, C. flexilis destroyed 11.47 percent of the 1500 cones examined, with a mean of 5.87 larvae per infested cone. The cone moths, D. abietella and D. sp. near or disclusa, were ranked second and third in importance, respectively. D. sp. near or disclusa was potentially the more important cone moth, as it caused total destruction of the seed-bearing portion of the cones. However, D. abietella infested 15.40 percent of the cones, in contrast to 2.00 percent by D. sp. near or disclusa. There were no statistically significant differences in insect populations between 1968 and 1969, although the percent infestation of C. flexilis and D. sp. near or disclusa increased slightly and that of D. abietella decreased.
232

Optimization of instruction memory for embedded systems

Janapsatya, Andhi, Computer Science & Engineering, Faculty of Engineering, UNSW January 2005 (has links)
This thesis presents methodologies for improving system performance and reducing energy consumption by optimizing the memory hierarchy. The processor-memory performance gap is a well-known problem that is predicted to worsen as processor and memory speeds continue to diverge. The author describes a method to estimate the best L1 cache configuration for a given application. In addition, three methods are presented to improve performance and reduce energy in embedded systems by optimizing the instruction memory. Performance estimation is an important procedure for assessing the performance of the system and the effectiveness of any applied optimizations. A cache memory performance estimation methodology is presented in this thesis. The methodology is designed to quickly and accurately estimate the performance of multiple cache memory configurations. Experimental results showed that the methodology is on average 45 times faster than a widely used tool (Dinero IV). The first optimization method, code placement, is a software-only technique for improving the performance of the instruction cache. It involves careful placement of code within memory to ensure a high hit rate when code is brought into the cache, thereby improving cache memory performance. Experimental results show that applying code placement reduces the cache miss rate by up to 71% and energy consumption by up to 63% compared with the application without code placement. The second method involves a novel architecture for utilizing scratchpad memory as a replacement for the instruction cache. A hardware modification was designed to allow data to be written into the scratchpad memory during program execution, enabling dynamic control of the scratchpad memory content. Scratchpad memory has a faster access time and lower energy consumption per access than cache memory, so its use aims to improve performance and lower energy consumption compared with a cache-based system. Experimental results show an average energy reduction of 26.59% and an average performance improvement of 25.63% compared with a system with cache memory. The third method is an application profiling technique that uses statistical information to identify an application's hot-spots. Application profiling is important for identifying sections of the application where performance degradation might occur and/or where maximum performance gain can be obtained through optimization. The method was applied and tested on the scratchpad-based system described in this thesis. Experimental results show the effectiveness of the analysis method in reducing energy and improving performance compared with the previous method for utilizing the scratchpad-memory-based system (an average performance improvement of 23.6% and an average energy reduction of 27.1% are observed).
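
As a rough illustration of comparing candidate cache configurations against an address trace (a minimal sketch, not the author's estimation methodology or the Dinero IV tool), the Python snippet below replays a synthetic instruction-address trace through a few direct-mapped caches and prints their miss rates; the trace, sizes, and function names are assumptions made for illustration only.

    # Minimal sketch, assuming a flat list of instruction addresses as input;
    # this is not the thesis's single-pass estimation method or Dinero IV.
    def miss_rate(trace, cache_size, line_size):
        """Simulate a direct-mapped cache and return its miss rate."""
        num_lines = cache_size // line_size
        tags = [None] * num_lines          # one stored tag per cache line
        misses = 0
        for addr in trace:
            block = addr // line_size      # memory block holding this address
            index = block % num_lines      # cache line the block maps to
            if tags[index] != block:       # tag mismatch -> miss, fill the line
                tags[index] = block
                misses += 1
        return misses / len(trace)

    # Hypothetical instruction-address trace: a loop body executed repeatedly.
    trace = [0x400000 + 4 * i for i in range(1024)] * 8
    for size in (1024, 2048, 4096):        # candidate cache sizes in bytes
        print(size, miss_rate(trace, size, line_size=32))

Sweeping such a loop over associativities and line sizes is the usual way to pick a configuration; the thesis's contribution is doing this estimation far faster than naive per-configuration simulation.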
233

MatRISC : a RISC multiprocessor for matrix applications / Andrew James Beaumont-Smith.

Beaumont-Smith, Andrew James January 2001 (has links)
"November, 2001" / Errata on back page. / Includes bibliographical references (p. 179-183) / xxii, 193 p. : ill. (some col.), plates (col.) ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis proposes a highly integrated SOC (system on a chip) matrix-based parallel processor which can be used as a co-processor when integrated into the on-chip cache memory of a microprocessor in a workstation environment. / Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 2002
236

Application-level traffic monitoring: applications to network security and network engineering

Carlinet, Yannick 30 June 2010 (has links) (PDF)
The work described in this thesis deals with application-layer monitoring of traffic in the network core. We illustrate the value of layer-7 monitoring through three studies that show the benefits obtained for security and for evaluating changes to the network architecture. The first study uses epidemiology, the science of the causes and spread of diseases. Epidemiology provides concepts and methods for analysing the potential infection risks to which the PCs of ADSL customers are exposed. In particular, we want to analyse these risks with respect to the applications the customers use. Through application-level monitoring, in the network core, of the traffic of a large sample of ADSL customers, we build a network-usage profile for each customer and detect those who generate malicious traffic. From these data, we study the link between certain customer characteristics and their risk of being infected. We identify two applications and one operating system that constitute risk factors, and from this we derive a customer profile that carries a high risk of infection by a computer virus. The second study examines whether it is worthwhile for an operator to install P2P caches in its network. Caching P2P content could be a good way to reduce the load on the network. However, cache performance is affected by many factors, related to the caches themselves but also to the properties of the overlay, the P2P content, and the location of the cache. To evaluate the potential usefulness of P2P caches, we perform large-scale monitoring of P2P traffic in France Télécom's operational network. After studying some properties of the observed traffic, we simulate the operation of a cache using data collected over 10 months. We are then able to evaluate the performance of a cache, in terms of bandwidth savings, had a cache actually been deployed at the time our traces were captured. In addition, we study the impact on performance of parameters such as the cache characteristics and, more importantly, the number of customers served by the cache. The results show that P2P file-sharing traffic could be reduced by 23% with a passive cache. Finally, the third study examines whether it is worthwhile for a network operator to cooperate with P2P networks through a P4P-style interface. The P4P approach allows P2P clients to improve their peer selection in the overlay. P2P traffic represents a significant share of the traffic volume in networks, yet the P2P systems most commonly used today take no account of the underlying network infrastructure. Using application-level monitoring, we determine the benefits of P4P, on the one hand for P2P applications and on the other for operators. The results of this experiment indicate that P2P applications need more information than just the AS of origin of potential sources in order to improve their performance. Moreover, we show that inter-domain P2P traffic could be reduced by at least 72% thanks to P4P.
This work therefore shows that application-level monitoring makes it possible to analyse complex phenomena linked to how the network is used, such as contamination by a worm or a computer virus, and to evaluate, precisely and quantitatively, the impact of certain architectural changes on operational traffic. More generally, we illustrate the important role of the network operator in the deployment and operation of the ever more bandwidth-hungry Internet services that will undoubtedly keep appearing in the future. Application-level monitoring is indeed an essential tool, complementary to the other tools in this field, for evaluating the protocols and architectures used in Internet services.
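
As a rough, hypothetical illustration of the kind of trace-driven cache evaluation described above (not the simulator used in the thesis), the sketch below replays a content-request trace through an LRU cache and reports the fraction of bytes that would have been served locally, a proxy for upstream bandwidth savings; the trace, sizes, and replacement policy are assumptions.

    # Minimal sketch, assuming requests arrive as (content_id, size_in_bytes)
    # pairs; the real study replayed 10 months of observed P2P traffic.
    from collections import OrderedDict

    def byte_hit_rate(requests, cache_bytes):
        cache = OrderedDict()               # content_id -> size, kept in LRU order
        used = 0
        hit_bytes = total_bytes = 0
        for cid, size in requests:
            total_bytes += size
            if cid in cache:
                cache.move_to_end(cid)      # refresh LRU position on a hit
                hit_bytes += size           # these bytes need not transit upstream
                continue
            while cache and used + size > cache_bytes:   # evict LRU items
                _, evicted = cache.popitem(last=False)
                used -= evicted
            if size <= cache_bytes:
                cache[cid] = size
                used += size
        return hit_bytes / total_bytes if total_bytes else 0.0

    # Hypothetical trace: a few popular items requested repeatedly.
    trace = [("a", 700), ("b", 300), ("a", 700), ("c", 500), ("a", 700)]
    print(byte_hit_rate(trace, cache_bytes=1000))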
237

Cache-based vulnerabilities and spam analysis

Neve de Mevergnies, Michael 14 July 2006 (has links)
Two problems of computer security are investigated. On one hand, we face a practical problem in modern processors: the cache, an architectural element that brings flexibility and allows efficient utilization of resources, is shown to open security breaches from which secret information can be extracted. This issue required a careful study to understand the problem and the role of the elements involved, to discover the potential of the attacks, and to find effective countermeasures. Because of the intricate behavior of a processor and the limited resources of the cache, it is extremely hard to write constant-time software. This is particularly true for cryptographic applications, which often rely on large precomputed tables and pseudo-random accesses. The principle of time-driven attacks is to analyze the overall execution time of a cryptographic process and extract timing profiles. We show that in the case of AES those profiles depend on the memory lookups, i.e. the addition of the plaintext and the secret key. Correlations between profiles obtained with known inputs and profiles obtained with partially unknown ones (known plaintext but unknown secret key) lead to the recovery of the secret key. We then detail access-driven attacks, another kind of cache-based side channel. This case relies on stronger assumptions regarding the attacker's capabilities: he must be able to run another process concurrently with the security process. Even if the security policies prevent the so-called "spy" process from directly accessing the data of the "crypto" process, the cache is shared between them, and its behavior can allow the spy process to deduce the secrets of the crypto process. Several mitigations are explored, depending on the security level to be reached and on the attacker's capabilities, and their respective performances are given. The focus, however, is on software mitigations, as they can be applied directly to patch programs and reduce the cache leakage. On the other hand, we tackle a problem of computer science that concerns many people and in which important economic interests are at stake: although spam is often considered the other side of the Internet coin, we believe that it can be defeated and avoided. An increasing number of research efforts, for example, explore ways in which cryptographic techniques can prevent spam from being spread. We concentrated on studying the behavior of spammers to understand how e-mail addresses can be prevented from being gathered. The motivation for this work was to produce and make available quantitative results for preventing spam efficiently, as well as to provide a better understanding of spammer behavior. Even if orthogonal, both parts tackle practical problems, and their results can be applied directly.
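
A toy numerical sketch of the correlation principle described above (synthetic timings, not the thesis's measurements or code): because first-round AES table lookups are indexed by plaintext XOR key, a per-plaintext-byte timing profile collected under an unknown key is a permuted copy of a profile collected under a known key, and correlating the two over all 256 guesses recovers the key byte. All values and names below are invented for illustration.

    import random

    def profile(timings_by_index, key_byte):
        # Average timing observed for each plaintext byte value, when the
        # table lookup index is plaintext ^ key_byte.
        return [timings_by_index[p ^ key_byte] for p in range(256)]

    def correlation(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db)

    # Synthetic per-index timing signature (e.g. cache-line dependent).
    base = [random.gauss(100, 5) for _ in range(256)]
    known = profile(base, key_byte=0x00)      # calibration run with a known key
    victim = profile(base, key_byte=0x3C)     # unknown key byte to recover

    guess = max(range(256),
                key=lambda k: correlation([known[p ^ k] for p in range(256)], victim))
    print(hex(guess))                         # expected to recover 0x3c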
238

A pervasive information framework based on semantic routing and cooperative caching

Chen, Weisong, January 2004 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2005. / Title proper from title frame. Also available in printed format.
239

Site Suitability Analysis for an Intermountain Solid Waste Facility: A Study for Cache County, Utah

Campo, Joseph B. 01 January 1996 (has links)
The goal of this project was to analyze Cache County for potential sanitary landfill sites covering the period 2020 to 2120. The county population and per capita solid waste were estimated, and the minimum landfill size was then calculated. A geographic information system (GIS) was used for data storage and analysis. Relevant data were gathered, and areas that would not support a landfill were eliminated. Remaining sites were rated as having slight, moderate, or severe restrictions for use as an area-method sanitary landfill, based on the Natural Resource Conservation Service (NRCS) Sanitary Facility Report and the NRCS Soil Interpretations Rating Guide. Seventeen sites were designated for further evaluation. A landfill ranking system giving a primary and/or secondary rating to data items was developed. Nine prime sites had one secondary rating. These sites should be investigated more closely to determine which are the best potential sites. (136 pages)
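
As a purely hypothetical sketch of the screen-then-rate idea (the actual NRCS criteria and the study's ranking system are not reproduced here), the snippet below drops sites that fail invented exclusion rules and rates the survivors by their most severe individual restriction; all attribute names and rules are assumptions.

    SEVERITY = {"slight": 0, "moderate": 1, "severe": 2}

    def rate_site(site):
        # Hypothetical exclusion rules (e.g. wetland, floodplain) eliminate a site.
        if site["wetland"] or site["in_floodplain"]:
            return None
        # Otherwise the site's overall rating is its worst single restriction.
        return max(site["restrictions"], key=lambda r: SEVERITY[r])

    sites = [
        {"name": "A", "wetland": False, "in_floodplain": False,
         "restrictions": ["slight", "moderate"]},
        {"name": "B", "wetland": True, "in_floodplain": False,
         "restrictions": ["slight"]},
    ]
    for s in sites:
        print(s["name"], rate_site(s))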
240

A cache framework for nomadic clients of web services

Elbashir, Kamaleldin 15 September 2009
This research explores the problems associated with caching SOAP Web Service request/response pairs and presents a domain-independent framework enabling transparent caching of Web Service requests for mobile clients. The framework intercepts method calls intended for the web service and proceeds by buffering and caching the outgoing method call and the inbound responses. This enables a mobile application to use Web Services seamlessly by masking fluctuations in network conditions. The framework addresses two main issues: first, how to enrich the WS standards to enable caching; and second, how to maintain consistency for state-dependent Web Service request/response pairs.
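
A minimal, hypothetical sketch of the interception idea described above (it is not the thesis framework and does not speak SOAP): a client-side proxy keys cached responses on the method name and serialized arguments, so repeated calls can be answered locally when the network is unreliable; the class and service names are invented.

    import json

    class CachingProxy:
        def __init__(self, service):
            self.service = service          # object whose methods call the remote service
            self.cache = {}                 # (method, serialized args) -> response

        def call(self, method, *args, **kwargs):
            key = (method, json.dumps([args, kwargs], sort_keys=True, default=str))
            if key in self.cache:
                return self.cache[key]      # serve the cached response locally
            response = getattr(self.service, method)(*args, **kwargs)
            self.cache[key] = response      # cache the inbound response
            return response

    # Hypothetical usage with a stand-in service object.
    class EchoService:
        def echo(self, text):
            return {"echo": text}

    proxy = CachingProxy(EchoService())
    print(proxy.call("echo", "hello"))      # forwarded to the (stub) service
    print(proxy.call("echo", "hello"))      # answered from the cache

A real deployment would also need the consistency mechanism the abstract mentions, e.g. invalidating or revalidating entries for state-dependent operations rather than caching them indefinitely.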
