1 |
Performance Evaluation of Data Access Latency Hiding Techniques in Processor Design / Jhang, Jia-hao 11 September 2007 (has links)
Due to deep-submicron technology, the gap between processor speed and memory access performance continues to widen. One way to mitigate the resulting performance degradation is to fetch the data the processor is about to access into on-chip buffer memory in advance, thereby reducing the time spent waiting on accesses between main memory and the processor's cache. Previous research uses low-level techniques to pre-fetch data, such as inserting pre-fetch instructions or pre-fetching from data locations predicted by dynamic learning; it does not exploit analysis of a program's high-level data structures to assist data pre-fetching. In this research, we carried out a performance evaluation of our proposed data pre-fetch technique, which is based on the analysis of high-level data structures, and compared it with several existing low-level data pre-fetch techniques. The evaluation metrics include the accuracy of data pre-fetches, memory latency hiding, and overall execution performance.
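The difference between demand fetching and a pre-fetcher that understands the program's data structure can be illustrated with a toy cache model (a hypothetical sketch for intuition, not the simulator evaluated in the thesis):

```python
# Toy model: visiting memory blocks in a pointer-chasing order. A structure-aware
# pre-fetcher knows each node names its successor, so it can bring the next
# block into the cache while the current one is being processed.
import random

def traverse(order, cache_size, prefetch):
    """Count cache hits while visiting blocks in `order`."""
    cache, hits = [], 0  # list used as an LRU queue (front = oldest)
    for i, block in enumerate(order):
        if block in cache:
            hits += 1
            cache.remove(block)
        elif len(cache) >= cache_size:
            cache.pop(0)  # evict least recently used
        cache.append(block)
        # Structure-aware pre-fetch: the data structure tells us the successor,
        # so it is resident before the next access arrives.
        if prefetch and i + 1 < len(order):
            nxt = order[i + 1]
            if nxt in cache:
                cache.remove(nxt)
            elif len(cache) >= cache_size:
                cache.pop(0)
            cache.append(nxt)
    return hits

random.seed(1)
order = [random.randrange(1000) for _ in range(500)]  # scattered accesses
base = traverse(order, cache_size=32, prefetch=False)
pre = traverse(order, cache_size=32, prefetch=True)
print(base, pre)  # pre-fetching hits on every access after the first
```

With perfect pre-fetch accuracy the traversal misses only on the very first access; the interesting trade-offs the thesis measures (accuracy, latency hiding) appear once predictions can be wrong or late.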
|
2 |
Spatial Correlation Between Framework Geology And Shoreline Morphology In Grand Bay, Mississippi / Mullennex, Asa J 12 August 2016 (has links)
The Grand Bay National Estuarine Research Reserve (GBNERR) adjoins two coastal embayments in the eastern Mississippi Sound, Grand Bay and Point Aux Chenes Bay, which encompass a late Pleistocene/Holocene delta of the Pascagoula-Escatawpa fluvial system. Historical maps and aerial imagery indicate that the GBNERR shoreline has experienced long-term retreat at spatially variable rates. The research presented here investigates the relationship between the coastal geomorphological evolution of GBNERR and the underlying geological framework. Coastal morphology and stratigraphy were characterized by analyzing 85 km of chirp sonar sub-bottom seismic profiles and 45 sediment cores. Shoreline retreat rates were determined through geospatial regression analysis of 11 historical shorelines surveyed between 1850 and 2015. Results indicate that Pleistocene paleochannels in the underlying fluvial-distributary ravinement surfaces are spatially correlated with shoreline segments that exhibit elevated retreat rates, and should be accounted for in future models of local as well as regional coastal evolution.
|
3 |
Multiscale habitat use by muskrats in lacustrine wetlands / Larreur, Maximillian Roger 02 August 2018 (has links)
Master of Science / Department of Horticulture and Natural Resources / Adam A. Ahlers / The muskrat (Ondatra zibethicus) is an economically and ecologically important furbearer species that occupies wetlands throughout North America. However, populations across the United States (US) are declining, and there is little evidence as to the cause of this decline. Wetlands in the upper Midwest, US, are shifting into more homogeneous vegetation states as an invasive hybrid cattail, Typha x glauca (hereafter 'T. x glauca'), outcompetes native vegetation; this now-abundant hybrid is a potential resource for muskrats. I investigated how landscape composition and configuration affected multiscale habitat use by muskrats during the summers of 2016 and 2017. Additionally, I assessed how fetch (the impact of wind and wave action), a process dictated by large-scale landscape configuration, influenced muskrat habitat use at a local scale representing a resource patch. I randomly selected 71 wetland sites within Voyageurs National Park, Minnesota, and used presence/absence surveys to assess site occupancy by muskrats. Each year, multiple surveys were conducted at each site, and I used multiseason occupancy modeling to investigate how both local and landscape factors affect site occupancy and turnover. I predicted a positive relationship between muskrat occupancy and colonization rates and local-scale (2 ha) sites characterized by shallower and less open water. I also predicted increased occupancy probabilities and colonization rates in wetlands containing higher amounts of T. x glauca, whereas I expected the amount of fetch at each site to negatively influence site occupancy probabilities and colonization rates. At the landscape scale (2 km), I expected habitat use by muskrats to be positively related to the percentage of T. x glauca and the area of wetlands surrounding sites.
At the local-scale, muskrats occupied wetlands that contained shallower water depths and less open water. As predicted, site occupancy probabilities were greater in areas with greater amounts of T. x glauca coverage. My results revealed a cross-scale interaction between the severity of fetch impacts and percent of T. x glauca coverage at sites. Muskrats were more likely to colonize areas with greater fetch impacts if there was also greater coverage of T. x glauca at these sites. At the landscape-scale, site-occupancy probabilities were positively influenced by the percent of open water and landscape heterogeneity surrounding each site. My study was the first to document how invasive T. x glauca populations can mitigate negative effects that high wave intensity may have on muskrat spatial distributions. I was also the first to identify multiscale factors affecting the spatial distribution of muskrats in lacustrine ecosystems.
|
4 |
Studie över signifikant våghöjds förändring beroende på vind, 'fetch' och varaktighet / Nordin, Lisa January 2009 (has links)
Off Östergarnsholm, east of Gotland, measurements have been carried out since the spring of 1995, using instruments mounted on a 30 m tower as well as wave buoys. The measuring stations are placed such that, with wind blowing from Gotland, the waves are limited by the distance from the measuring station to the shore (called the fetch), whereas when the wind passes north or south of Gotland, the waves can be assumed to come from the open sea. This clear division in fetch is both a limitation and an advantage for the study. A model created and described by Khama (1986) is essentially based on integrating the wave spectrum. It describes how the significant wave height depends on wind, duration (the time the wind has blown at constant speed over an area) and fetch. The model has two parts: one depends on wind and fetch, and the other on wind alone, because the fetch is long enough to lose significance. The dimensionless fetch, x, sets the boundary between fetch-limited waves and waves from the open sea. In the model this boundary is set at 22 000, but according to this study it should be considerably lower. Model results and measurements agree relatively well under fetch-limited conditions, although the model slightly underestimates the wave height, and the underestimate grows with increasing wave height. When the wave and wind directions differ greatly, there is an increased risk of large discrepancies between model and measurements. For the open sea the model overestimates the significant wave height by a factor of 2-3, since it assumes that the waves always come from the same direction and does not account for synoptic changes during the travel time. The model's duration function predicts a larger increase in wave height than is actually observed; it has therefore been revised, but the revision turns out to be valid only for young sea, defined as C<sub>p</sub>/U < 0.9. For swell, where C<sub>p</sub>/U > 1.2, the wave height increases considerably less with time than for young sea. Stability also has some effect on the wave height, but no particular stability-dependent differences between model and measurements were observed.
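The fetch-limited regime described above can be sketched with a standard JONSWAP-type growth law (an illustrative textbook formula, not the specific model evaluated in the thesis):

```python
import math

def significant_wave_height(U10, fetch, g=9.81):
    """Fetch-limited significant wave height (m) from the 10 m wind speed
    U10 (m/s) and the fetch (m), using the JONSWAP-type power law
    Hs = 0.0016 * sqrt(chi) * U10**2 / g, where chi = g * fetch / U10**2
    is the dimensionless fetch."""
    chi = g * fetch / U10**2
    return 0.0016 * math.sqrt(chi) * U10**2 / g

# A 10 m/s wind over a 50 km fetch gives a significant wave height
# on the order of a metre; doubling the fetch raises it by sqrt(2).
print(round(significant_wave_height(10.0, 50_000.0), 2))  # 1.14
```

The square-root dependence on dimensionless fetch is what makes the choice of the fetch-limited/open-sea boundary (22 000 in the model, lower according to the study) matter for the predicted heights.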
|
6 |
Pre-fetch document caching to improve World-Wide Web user response time / Lee, David Chunglin 01 October 2008 (has links)
The World-Wide Web, or the Web, is currently one of the most heavily used network services. Because of this, improvements and new technologies are rapidly being developed and deployed. One important area of study is improving user response time through the use of caching mechanisms. Most prior work considered multiple-user caches running on cache relay systems. These are mostly post-caching systems; they perform no "look ahead," or pre-fetch, functions. This research studies a pre-fetch caching scheme based on Web server access statistics. The scheme employs a least-recently-used replacement policy and allows multiple simultaneous document retrievals. It is based on a combined statistical and locality-of-reference model associated with the links in hypertext systems. Results show that cache hit rates are doubled over schemes that use only post-caching, while results for user response time improvements are mixed. The conclusion is that pre-fetch caching of Web documents offers an improvement over post-caching methods and should be studied in detail for both single-user and multiple-user systems. / Master of Science
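A scheme of this shape, an LRU document cache that also pre-fetches the links most often followed from each page according to server access statistics, can be sketched as follows (names and the one-step statistics are illustrative assumptions, not the thesis's exact design):

```python
from collections import OrderedDict

class PrefetchCache:
    """LRU document cache that, on every request, also pre-fetches the
    documents most often requested next according to access statistics."""

    def __init__(self, capacity, next_doc_stats, prefetch_depth=2):
        self.capacity = capacity
        self.stats = next_doc_stats        # doc -> {next_doc: request_count}
        self.depth = prefetch_depth
        self.cache = OrderedDict()         # doc -> contents, in LRU order
        self.hits = self.misses = 0

    def _insert(self, doc):
        if doc in self.cache:
            self.cache.move_to_end(doc)    # refresh recency
            return
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[doc] = f"<contents of {doc}>"

    def get(self, doc):
        if doc in self.cache:
            self.hits += 1
            self.cache.move_to_end(doc)
        else:
            self.misses += 1
            self._insert(doc)              # demand fetch
        # Pre-fetch the most frequently followed links from this document.
        followers = sorted(self.stats.get(doc, {}).items(),
                           key=lambda kv: -kv[1])[:self.depth]
        for nxt, _count in followers:
            self._insert(nxt)
        return self.cache[doc]

stats = {"index.html": {"news.html": 80, "about.html": 15, "faq.html": 5}}
c = PrefetchCache(capacity=8, next_doc_stats=stats)
c.get("index.html")   # miss, but pre-fetches news.html and about.html
c.get("news.html")    # hit thanks to the pre-fetch
print(c.hits, c.misses)  # 1 1
```

The pre-fetch step is what doubles the hit rate in the thesis's results; the mixed response-time results come from the extra retrievals competing for bandwidth, which this sketch does not model.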
|
7 |
Optimizing Communication Cost in Distributed Query Processing / Optimisation du coût de communication des données dans le traitement des requêtes distribuées / Belghoul, Abdeslem 07 July 2017 (has links)
In this thesis, we take a complementary look at the problem of optimizing the time for communicating query results in distributed query processing, by investigating the relationship between the communication time and the middleware configuration. Indeed, the middleware determines, among other things, how data is divided into batches and messages before being communicated over the network. Concretely, we focus on the following research question: given a query Q and a network environment, what is the best middleware configuration that minimizes the time for transferring the query result over the network? To the best of our knowledge, the database research community does not have well-established strategies for middleware tuning. 
We first present an intensive experimental study that emphasizes the crucial impact of the middleware configuration on the time for communicating query results. We focus on two middleware parameters that we empirically identified as having an important influence on the communication time: (i) the fetch size F (i.e., the number of tuples in a batch that is communicated at once to an application consuming the data) and (ii) the message size M (i.e., the size in bytes of the middleware buffer, which corresponds to the amount of data that can be communicated at once from the middleware to the network layer; a batch of F tuples can be communicated via one or several messages of M bytes). Then, we describe a cost model for estimating the communication time, based on how data is transferred between computation nodes. Precisely, our cost model rests on two crucial observations: (i) batches and messages are communicated differently over the network: batches are communicated synchronously, whereas messages within a batch are communicated in a pipeline (asynchronously), and (ii) due to network latency, communicating the first message in a batch is more expensive than communicating any other message in the same batch. We propose an effective strategy for calibrating the network-dependent parameters of the communication-time estimation function, i.e., the costs of the first and non-first messages in a batch. Finally, we develop an optimization algorithm that computes the values of the middleware parameters F and M minimizing the communication time. The algorithm quickly finds (in a small fraction of a second) values of F and M that strike a good trade-off between low resource consumption and low communication time. The proposed approach has been evaluated using a dataset from an application in Astronomy.
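The two observations behind the cost model (synchronous batches, pipelined messages, an expensive first message per batch) can be turned into a simple estimation function. This is a hypothetical sketch of such a model, not the thesis's exact formula; the per-byte cost constants stand in for the calibrated network-dependent parameters:

```python
import math

def comm_time(result_bytes, tuple_bytes, F, M, c_first, c_next):
    """Estimated time to transfer a query result of `result_bytes`, divided
    into batches of F tuples, each batch sent as messages of M bytes.
    c_first / c_next are per-byte costs of the first / subsequent messages
    in a batch (the first is dearer because of network latency)."""
    n_tuples = result_bytes // tuple_bytes
    n_batches = math.ceil(n_tuples / F)
    msgs_per_batch = math.ceil(F * tuple_bytes / M)
    # Batches are synchronous, so their costs add up; within a batch,
    # messages after the first are pipelined at the cheaper rate.
    per_batch = c_first * M + (msgs_per_batch - 1) * c_next * M
    return n_batches * per_batch

# Larger batches amortize the expensive first message over more pipelined ones:
small_F = comm_time(10_000_000, 100, F=100, M=4096, c_first=5e-7, c_next=1e-7)
large_F = comm_time(10_000_000, 100, F=5000, M=4096, c_first=5e-7, c_next=1e-7)
print(small_F > large_F)  # True
```

An optimizer like the one the thesis proposes would search this F-M surface for the configuration balancing communication time against buffer (resource) consumption.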
|
8 |
High performance instruction fetch using software and hardware co-design / Ramírez Bellido, Alejandro 12 July 2002 (has links)
In recent years, the design of high-performance processors has progressed along two research directions: increasing the depth of the pipeline to allow higher clock frequencies, and widening the pipeline to allow the parallel execution of more instructions. Designing a high-performance processor involves balancing all of its components to ensure that overall performance is not limited by any single component. This means that if we give the processor a faster execution unit, we must make sure we can fetch and decode instructions fast enough to keep that execution unit busy. This thesis explores the challenges posed by the design of the fetch unit from two points of view: the design of software better suited to existing fetch architectures, and the design of hardware adapted to the special characteristics of the new software we have generated. Our approach to designing new software has been to propose a new code-reordering algorithm that aims not only to improve instruction cache performance, but also to increase the effective width of the fetch unit. Using information about program behavior (profile data), we chain the program's basic blocks so that conditional branches tend to be not taken, which favors the sequential execution of code. Once the basic blocks are organized into these traces, we map the traces into memory so as to minimize the amount of space required for the truly useful code and the memory conflicts of that code. 
In addition to describing the algorithm, we have carried out a detailed analysis of the impact of these optimizations on the different aspects of fetch-unit performance: memory latency, the effective width of the fetch unit, and the accuracy of the branch predictor. Based on the analysis of the behavior of the optimized codes, we also propose a modification of the trace cache mechanism that aims to make more effective use of the scarce available storage space. This mechanism uses the trace cache only to store those traces that could not be supplied by the instruction cache in a single cycle. Also building on what we learned about the behavior of the optimized codes, we propose a new branch predictor that makes extensive use of the same information that was used to reorder the code, in this case to improve the predictor's accuracy. Finally, we propose a new architecture for the processor's fetch unit based on exploiting the special characteristics of the optimized codes. Our architecture has a very low complexity, similar to that of an architecture able to fetch a single basic block per cycle, yet it offers much higher performance, comparable to that of a much more costly and complex trace cache.
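The chaining step can be sketched as a greedy algorithm over profiled branch counts (a simplified, hypothetical version of profile-guided code layout, not the thesis's exact algorithm):

```python
def build_traces(edges):
    """Greedily chain basic blocks so the hottest successor of each block is
    laid out immediately after it, making hot conditional branches fall
    through (not taken). `edges` maps (src, dst) -> profiled execution count."""
    placed, traces = set(), []
    # Rank blocks by total profiled activity to pick trace seeds.
    heat = {}
    for (src, dst), n in edges.items():
        heat[src] = heat.get(src, 0) + n
        heat[dst] = heat.get(dst, 0) + n
    for block in sorted(heat, key=heat.get, reverse=True):
        if block in placed:
            continue
        trace = [block]
        placed.add(block)
        while True:  # extend with the hottest not-yet-placed successor
            succs = [(n, dst) for (src, dst), n in edges.items()
                     if src == trace[-1] and dst not in placed]
            if not succs:
                break
            _, best = max(succs)
            trace.append(best)
            placed.add(best)
        traces.append(trace)
    return traces

profile = {("A", "B"): 90, ("A", "C"): 10, ("B", "D"): 85,
           ("B", "E"): 5, ("C", "D"): 10}
print(build_traces(profile))  # [['B', 'D'], ['A', 'C'], ['E']]
```

Laying the hot path B-D out sequentially is what widens the effective fetch: the fetch unit can read past the B-to-D branch in a straight line instead of redirecting.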
|
9 |
Heterogeneity-awareness in multithreaded multicore processors / Acosta Ojeda, Carmelo Alexis 07 July 2009 (has links)
During the last decades, computer architecture has experienced a series of revolutionary changes. The increasing transistor count on a single chip has led to some of the main milestones in the field, from the release of the first superscalar (1965) to state-of-the-art multithreaded multicore architectures like the Intel Core i7 (2009). Moore's Law has continued for almost half a century and is not expected to stop for at least another decade, and perhaps much longer. Moore observed a trend in process-technology advances: the number of transistors that can be placed inexpensively on an integrated circuit has increased exponentially, doubling approximately every two years. Nevertheless, having more transistors available cannot always be directly translated into having more performance. The complexity of state-of-the-art software has reached heights unthinkable in prior ages, both in the amount of computation and in the complexity involved. If we analyze this complexity deeply, we realize that software is composed of smaller execution processes that, although maintaining a certain spatial/temporal locality, exhibit inherently heterogeneous behavior. That is, during execution the hardware runs very different portions of software, with huge differences in behavior and hardware requirements. This heterogeneity in software behavior is not specific to the latest videogame; it is inherent to software programming itself, since the very beginning of algorithmics. In this PhD dissertation we analyze in depth the heterogeneity inherent in software behavior. We identify the main issues and sources of this heterogeneity, which prevent most state-of-the-art processor designs from reaching their maximum potential. Hence, the heterogeneity in software renders most current processors, commonly called general-purpose processors, overdesigned. 
That is, they have many more hardware resources than are really needed to execute the software running on them. This would not be a major problem if we were not concerned about the additional power consumption involved in software computation. The final goal of this PhD dissertation is to assign each portion of software exactly the amount of hardware resources it really needs to fully exploit its potential, without consuming more energy than strictly necessary; that is, to obtain complexity-effective executions using the inherent heterogeneity of software behavior as a steering indicator. We therefore start by analyzing in depth the heterogeneous behavior of software run on general-purpose processors, and then match it onto a heterogeneously distributed hardware that explicitly exploits heterogeneous hardware requirements. Only by being heterogeneity-aware in software, and by appropriately matching this software heterogeneity onto hardware heterogeneity, can we effectively obtain better processor designs. The dissertation comprises four main contributions that cover both multithreaded single-core (hdSMT) and multicore (TCA Algorithm, hTCA Framework and MFLUSH) scenarios, explained in depth in their corresponding chapters. Overall, these contributions cover a significant range of the design space of heterogeneity-aware processors. Within this design space, we have focused on the state-of-the-art trend in processor design: multithreaded multicore (CMP+SMT) processors. We place special emphasis on the MPsim simulation tool, specifically designed and developed for this PhD dissertation. This tool has already gone beyond this dissertation, becoming a reference tool for an important group of researchers spread across the Computer Architecture Department (DAC) at the Polytechnic University of Catalonia (UPC), the Barcelona Supercomputing Center (BSC) and the University of Las Palmas de Gran Canaria (ULPGC).
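The core idea of matching software heterogeneity onto hardware heterogeneity can be illustrated with a greedy assignment of threads to cores (a deliberately simple, hypothetical sketch; it is not one of the dissertation's actual policies such as hdSMT or the TCA Algorithm):

```python
def match_threads_to_cores(thread_demand, core_capacity):
    """Assign each thread to the smallest core that still covers its demand,
    so lightweight threads do not occupy overdesigned big cores. Demands and
    capacities are abstract resource units (e.g., the issue width a thread's
    instruction-level parallelism can actually use)."""
    cores = sorted(core_capacity.items(), key=lambda kv: kv[1])  # small first
    assignment = {}
    for thread, demand in sorted(thread_demand.items(),
                                 key=lambda kv: kv[1], reverse=True):
        for core, cap in cores:
            if core not in assignment.values() and cap >= demand:
                assignment[thread] = core   # smallest sufficient free core
                break
    return assignment

threads = {"crypto": 1, "render": 4, "parser": 2}
cores = {"big0": 4, "mid0": 2, "little0": 1}
print(match_threads_to_cores(threads, cores))
# {'render': 'big0', 'parser': 'mid0', 'crypto': 'little0'}
```

Even this toy policy captures the complexity-effectiveness argument: the low-demand crypto thread gets the little core, saving the big core's power budget for the thread that can exploit it.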
|
10 |
Hydraulics Studies In Port Conception / Hermite, Sophie January 2015 (links)
In the Maritime Works Engineering department of Saipem, studies have been carried out to design an extension to an existing LNG export facility. The scope of work comprises the design of a jetty on piles. For this purpose, wave propagation and ship-mooring computations have been performed, as well as shore protection and abutment studies. These studies were preceded by an analysis of metocean site data and bathymetry.
|