  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Energy-efficient mapping and pipeline for the multi-resource systems with multiple supply voltages

Wu, Kun-Yi 13 August 2007 (has links)
The rapid development of SoC has made reducing power consumption while improving performance a critical issue. A system's power consumption depends on both its hardware and its software. On the hardware side, multi-voltage circuits reduce the power consumed by individual tasks; on the software side, a tool selects the execution voltage for each task to minimize total power consumption and finds a pipelined schedule of the periodic tasks to maximize throughput. In this thesis, a Tabu search is used to solve the voltage-mapping and resource-mapping problems of multi-voltage systems. The goal of the Tabu search is to find the assignment with minimal power consumption while simultaneously satisfying the system's timing and resource constraints. Under the throughput constraints, the Tabu search produces each task's execution voltage and resource mapping, after which list-based pipelined scheduling schedules the tasks and data communication and checks the result's correctness. This method reduces total power consumption. Experimental results show that the proposed algorithm decides the resource mapping and pipeline in seconds and reduces power consumption effectively.
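The optimization loop described above can be illustrated with a minimal Python sketch (not the thesis's actual algorithm): a Tabu search over per-task voltage assignments, minimizing energy under a latency budget. The task table, voltage levels, deadline, and tabu tenure are all invented for the example.

```python
# Hypothetical task set: per-voltage energy and latency (lower voltage
# saves energy but runs slower; all numbers invented for illustration).
TASKS = [
    {"energy": {1.2: 9.0, 1.0: 6.3, 0.8: 4.0}, "delay": {1.2: 2, 1.0: 3, 0.8: 5}},
    {"energy": {1.2: 7.0, 1.0: 4.9, 0.8: 3.1}, "delay": {1.2: 1, 1.0: 2, 0.8: 4}},
    {"energy": {1.2: 5.0, 1.0: 3.5, 0.8: 2.2}, "delay": {1.2: 2, 1.0: 3, 0.8: 4}},
]
LEVELS = [1.2, 1.0, 0.8]
DEADLINE = 10          # latency budget; tasks assumed to run back to back

def cost(assign):
    """Total energy, with a large penalty when the deadline is violated."""
    energy = sum(t["energy"][v] for t, v in zip(TASKS, assign))
    delay = sum(t["delay"][v] for t, v in zip(TASKS, assign))
    return energy + (1000 if delay > DEADLINE else 0)

def tabu_search(iters=100, tenure=5):
    current = [1.2] * len(TASKS)             # start at the fastest voltage
    best, best_cost = list(current), cost(current)
    tabu = {}                                # (task, old voltage) -> expiry

    for it in range(iters):
        moves = [(i, v) for i in range(len(TASKS)) for v in LEVELS
                 if v != current[i]]

        def score(move):
            cand = list(current)
            cand[move[0]] = move[1]
            return cost(cand)

        for i, v in sorted(moves, key=score):
            cand = list(current)
            cand[i] = v
            c = cost(cand)
            # Take the best move that is not tabu; aspiration lets a tabu
            # move through if it improves on the best solution seen so far.
            if tabu.get((i, current[i]), -1) < it or c < best_cost:
                tabu[(i, current[i])] = it + tenure   # forbid undoing it
                current = cand
                if c < best_cost:
                    best, best_cost = list(cand), c
                break
    return best, best_cost
```

The tabu list forbids immediately undoing a recent voltage change, which is what lets the search climb out of local minima that a pure greedy descent would get stuck in.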
282

The development of an animation pipeline : A handbook for the animator who takes part in starting a new game project

Rindevall, Karin January 2009 (has links)
This report is a reflective text on the creation of a digital handbook for animators who take part in starting a new game project. The handbook was produced as a degree project at the University of Skövde during the spring term of 2009. The report describes the process behind the handbook's creation, which comprised practical work at the company Junebug AB in Malmö, a preparatory literature study on how to write clear, pedagogical text, a further literature study of previously written handbooks for animators, interviews with animators in the industry and in education, and the compilation of the handbook itself. The central research question was: how can a handbook best convey how an animation pipeline is developed, maintained and streamlined, and documented? The goal of the work has been to compile a handbook that other animators can use. The purpose has been to learn about a real working environment in the games industry and to convey this in the handbook. A secondary goal has been to encourage other animators in the industry and in education to contribute their experiences, tips, and advice through interviews. The result is a digital handbook published on the Internet for animators who work, or will soon begin working, with animation in the games industry.
283

An equalization technique for high rate OFDM systems

Yuan, Naihua 05 December 2003
In a typical orthogonal frequency division multiplexing (OFDM) broadband wireless communication system, a guard interval using a cyclic prefix is inserted to avoid inter-symbol interference and inter-carrier interference. This guard interval must be at least as long as the maximum channel delay spread. The method is very simple, but it reduces transmission efficiency, and the efficiency is especially low in systems that exhibit a long channel delay spread with a small number of sub-carriers, such as the IEEE 802.11a wireless LAN (WLAN). To increase transmission efficiency, a time-domain equalizer (TEQ) is commonly included in an OFDM system to shorten the effective channel impulse response to within the guard interval. Many TEQ algorithms have been developed for low-rate OFDM applications such as asymmetric digital subscriber line (ADSL); their drawback is a high computational load, and most popular TEQ algorithms are not suitable for the IEEE 802.11a system, a high-data-rate wireless LAN based on the OFDM technique. In this thesis, a TEQ algorithm based on the minimum mean square error criterion is investigated for the high-rate IEEE 802.11a system. The algorithm has comparatively low computational complexity, making it practical for high-data-rate OFDM systems. In forming the model used to design the TEQ, a reduced convolution matrix is exploited to lower the computational complexity. Mathematical analysis and simulation results demonstrate the validity and the advantages of the algorithm. In particular, it is shown that a high performance gain at a data rate of 54 Mbps can be obtained with a moderate-order TEQ finite impulse response (FIR) filter. The algorithm is implemented in a field programmable gate array (FPGA).
The characteristics of, and regularities among, the matrix elements are further exploited to reduce the hardware complexity of the matrix multiplication implementation. The optimum TEQ coefficients can be found in less than 4 µs for a 7th-order TEQ FIR filter; 4 µs is the duration of one OFDM symbol in the IEEE 802.11a system. To compensate for the effective channel impulse response, a 64-point radix-4 pipelined fast Fourier transform block is implemented in the FPGA to perform zero-forcing equalization in the frequency domain. The offsets between the hardware implementation and the mathematical calculations are reported and analyzed, and the system performance loss introduced by the hardware implementation is also tested. Hardware output and simulation results verify that the chips function properly and satisfy the requirements of a system running at a data rate of 54 Mbps.
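The frequency-domain zero-forcing step mentioned above can be illustrated with a toy NumPy sketch (not the thesis's FPGA implementation): once the effective channel fits inside the cyclic prefix, dividing each sub-carrier by the channel's frequency response recovers the transmitted symbols exactly in the noiseless case. The 3-tap channel and QPSK mapping here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # sub-carriers, as in IEEE 802.11a
cp = 16                                  # cyclic prefix length

# Hypothetical 3-tap channel, already shortened to fit inside the CP.
h = np.array([1.0, 0.4, 0.2])

# Random QPSK symbols on each sub-carrier.
bits = rng.integers(0, 2, (2, N))
X = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

# OFDM modulation: IFFT, then prepend the cyclic prefix.
x = np.fft.ifft(X)
tx = np.concatenate([x[-cp:], x])

# Linear convolution with the channel; discard the CP at the receiver.
# Because the channel is shorter than the CP, this equals a circular
# convolution, so the channel is diagonal in the frequency domain.
rx = np.convolve(tx, h)[cp:cp + N]

# Zero-forcing equalization: divide by the channel's frequency response.
H = np.fft.fft(h, N)
X_hat = np.fft.fft(rx) / H
```

In this noiseless sketch `X_hat` matches `X` to machine precision; with noise, zero forcing amplifies noise on sub-carriers where `H` is small, which is one reason the channel-shortening TEQ stage matters.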
284

Turkey's Energy Strategy And Development Of Ceyhan As An Energy Hub

Degirmenci, Deniz 01 May 2010 (has links) (PDF)
This thesis analyzes the Turkish policy of becoming an energy hub. Turkey, geographically close to two thirds of the world's proven oil and natural gas reserves, has a great advantage in leveraging its location, and the purpose of this study is to discuss the measures taken to utilize this advantage. The thesis therefore discusses both the relative weakness of Turkey in comparison to other actors such as Russia, the USA, and the EU, and the strengths of Turkish policy: the geopolitical advantage, the ethnic links between Turkey and the newly independent states of the Caspian, and the already existing oil and natural gas transportation infrastructure, such as the Kirkuk-Yumurtalik Pipeline, the Baku-Tbilisi-Ceyhan Oil Pipeline, the Ceyhan Terminal, and the Baku-Tbilisi-Erzurum Natural Gas Pipeline. In this respect, the study argues that, as a result of existing and planned projects, Ceyhan's claim to become a hub is a realistic objective, and that in addition to the BTC and Kirkuk-Yumurtalik pipelines, the realization of the Samsun-Ceyhan Pipeline will increase Ceyhan's potential as an energy hub.
286

High performance instruction fetch using software and hardware co-design

Ramírez Bellido, Alejandro 12 July 2002 (has links)
In recent years, the design of high-performance processors has progressed along two lines of research: deepening the pipeline to enable higher clock frequencies, and widening the pipeline to enable the parallel execution of more instructions. Designing a high-performance processor means balancing all of its components to ensure that overall performance is not limited by any individual component. In other words, if we give the processor a faster execution unit, we must make sure we can fetch and decode instructions quickly enough to keep that execution unit busy. This thesis explores the challenges posed by the design of the fetch unit from two points of view: designing software better suited to existing fetch architectures, and designing hardware adapted to the special characteristics of the new software we generate. Our approach on the software side is a new code-reordering algorithm that aims not only to improve instruction-cache performance but also to increase the effective width of the fetch unit. Using profile data about the program's behavior, we chain the program's basic blocks so that conditional branches tend to be not taken, which favors sequential execution of the code. Once the basic blocks are organized into these traces, we map the traces into memory so as to minimize both the space required by the genuinely useful code and its memory conflicts.
Beyond describing the algorithm, we analyze in detail the impact of these optimizations on the different aspects of fetch-unit performance: memory latency, the effective width of the fetch unit, and the accuracy of the branch predictor. Based on this analysis of the behavior of the optimized code, we also propose a modification of the trace-cache mechanism that makes more effective use of the scarce storage space available: the trace cache is used only for those traces that the instruction cache could not supply in a single cycle. Also drawing on what we learned about the behavior of the optimized code, we propose a new branch predictor that makes extensive use of the same information that was used to reorder the code, this time to improve the predictor's accuracy. Finally, we propose a new fetch-unit architecture based on exploiting the special characteristics of the optimized code. Our architecture has a very low level of complexity, similar to that of an architecture that can read a single basic block per cycle, yet offers far higher performance, comparable to that of a much more costly and complex trace cache.
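The profile-guided chaining idea described above can be sketched roughly in Python (this is not the thesis's actual algorithm): greedily chain each basic block to its most frequently taken successor, so that the hot path becomes fall-through code. The control-flow graph and edge counts are hypothetical.

```python
# Edge profile: (src_block, dst_block) -> taken count (hypothetical numbers).
EDGES = {
    ("A", "B"): 90, ("A", "C"): 10,
    ("B", "D"): 85, ("B", "E"): 5,
    ("C", "D"): 10, ("D", "F"): 95, ("E", "F"): 5,
}

def build_traces(edges):
    """Greedily chain each block to its hottest unplaced successor, so
    frequent paths become sequential and their branches tend to be not
    taken.  A real implementation would also lay the traces out in
    memory to minimize instruction-cache conflicts."""
    blocks = {b for e in edges for b in e}
    placed, traces = set(), []
    # Seed traces from the hottest edges first.
    for (src, dst), _ in sorted(edges.items(), key=lambda kv: -kv[1]):
        if src in placed:
            continue
        trace, cur = [], src
        while cur is not None and cur not in placed:
            trace.append(cur)
            placed.add(cur)
            succs = [(d, c) for (s, d), c in edges.items()
                     if s == cur and d not in placed]
            cur = max(succs, key=lambda x: x[1])[0] if succs else None
        traces.append(trace)
    for b in sorted(blocks - placed):     # stragglers with no edges left
        traces.append([b])
        placed.add(b)
    return traces
```

On the hypothetical profile above, the hot chain `D -> F` (95 taken) is laid out contiguously, while the cold block `C` ends up in its own trace at the end.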
287

Creation of a Simulation Model based upon Process Mapping within Pipeline Management at Scania

Ovesson, Elin, Stadler, Niklas January 2013 (has links)
This is a Master’s Thesis that has been carried out at the Global Outbound Logistics department at Scania. Scania manufactures trucks, buses and engines. Some trucks and buses are delivered to markets where, due to reduced customs duties and cheaper labour, it is more profitable to do the assembly locally at so-called Regional Product Centres (RPCs). Since the components are produced far from the RPC markets, the lead times become long. In addition, the customers’ buying behaviour on the RPC markets is often not comparable to the European culture, where a customer can accept waiting weeks for a unit to be delivered. The long lead time in combination with this customer behaviour means that the RPCs need to keep a certain selection of standard models of buses and trucks in stock. It has proved difficult for the pipeline managers at the RPCs to place order volumes that correspond well to what will actually be delivered to the business units or distributors later on. The result is high stock levels at the RPCs, which leads to a significant amount of tied-up capital. For these reasons, the purpose of this study is “to create a simulation model, based upon a process mapping, that visualises future volume levels in the pipeline due to different demand and ordering scenarios”. The short-term target, which is also the target of this study, is to increase the RPCs’ understanding of how different demand and ordering scenarios influence future volume levels in the pipeline. The long-term target is to reduce tied-up capital by adjusting buffer levels and lead times, while still ensuring a certain service level. The model should contribute to more accurate decision making with respect to these aspects. First, a high-level process mapping was made in order to select which flows were suitable subjects for a detailed mapping.
Second, a detailed mapping was made, during which several people responsible for RPCs, processes, and functions were interviewed. After the detailed mapping, common denominators between the flows were identified and all activities were clustered into a solution that could be generalised and made suitable for all flows. Factors such as lead times, deviation risks, and capacity limitations were taken into account during the aggregation of activities. Once a common view of the different RPC flows had been created, the mathematical relationships describing how goods move through the process could be established. Then the development and validation of the simulation model, an iterative process, could start. A directive was to build the simulation model in Microsoft Excel. Interviews were conducted with experienced model builders in order to find out how to create a user-friendly and robust model. The creation of the simulation model started with the development of a structure, after which the content of each part was defined. A final validation, consisting of sensitivity analysis and user trials, was done to ensure the simulation model’s functioning and accuracy. To conclude, a simulation model has been created that will serve as a helpful tool for the RPCs when deciding which order volumes to place. By clearly visualising the simulation results, the model will hopefully increase the RPCs’ comprehension of how the pipeline behaves under different ordering and demand scenarios. On top of this, the method used, the process mapping, and the mathematical relationships that have been defined are important input for a possible future development of a more permanent and robust non-Excel solution. Such a solution could be even more precise, automatically updated, and of even higher granularity.
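The pipeline dynamics such a model captures can be sketched with a minimal, hypothetical inventory simulation (the real model, built in Excel, is of course far richer): orders placed now arrive at the RPC after the lead time, and on-hand stock plus in-transit volume evolve period by period under a given ordering and demand scenario.

```python
from collections import deque

def simulate(orders, demand, lead_time=4, initial_stock=30):
    """Roll the pipeline forward one period at a time: an order placed in
    period t arrives at the RPC in period t + lead_time.  All numbers
    (lead time, stock, scenarios) are invented for illustration."""
    in_transit = deque([0] * lead_time)   # units in the pipeline, oldest first
    stock, history = initial_stock, []
    for placed, sold in zip(orders, demand):
        stock += in_transit.popleft()     # this period's arrivals
        in_transit.append(placed)         # new order enters the pipeline
        sold = min(sold, stock)           # cannot deliver more than is on hand
        stock -= sold
        history.append({"stock": stock, "pipeline": sum(in_transit)})
    return history

# Hypothetical scenario: steady orders of 10 per period against lumpy demand.
hist = simulate(orders=[10] * 12,
                demand=[5, 5, 20, 5, 5, 20, 5, 5, 20, 5, 5, 20])
```

Even this toy version shows the core trade-off the thesis describes: with a four-period lead time, 40 units sit permanently in transit, and lumpy demand can empty the buffer before the matching orders arrive.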
288

Turkey’s Foreign Energy Policy and Realist Theory : The Cases of Nabucco and South Stream Gas Pipeline Projects

Akin, Manolya January 2010 (has links)
This paper focuses on Turkey’s foreign energy policy, with special attention to the cases of the Nabucco and South Stream gas pipeline projects, and examines the issue from the perspective of realist theory. The research question aims to uncover the realist tendency in Turkish foreign energy policy and to find out which gas pipeline project is more beneficial in terms of national interest for Turkey and more relevant to meeting the goals of Turkish foreign energy policy. Energy is a key concept in discussions about the future of our world and sustainable development. If energy functions as a subject that increases tensions between countries, it threatens sustainable development, since it jeopardizes peace and makes cooperation between states impossible. Energy also occupies a fundamental place in the national strategies of states, alongside sustainable development. In order to make the theory operational, three main dimensions, security, economics, and strategy, are used as tools, or in other words as filters to look through, in the analysis of foreign and energy policy as well as of the cases of the Nabucco and South Stream gas pipeline projects.
289

Characterization and Avoidance of Critical Pipeline Structures in Aggressive Superscalar Processors

Sassone, Peter G. 20 July 2005 (has links)
In recent years, with only small fractions of modern processors now accessible in a single cycle, computer architects constantly fight against signal-propagation issues across the die. Unfortunately this trend continues to shift inward, and now even the most internal features of the pipeline are designed around communication, not computation. To address the inward creep of this constraint, this work focuses on characterizing communication within the pipeline itself, on architectural techniques to avoid it when possible, and on layout co-design for early detection of problems. I present a novel detection tool for common-case operand movement which can rapidly characterize an application's dataflow patterns. The results are suitable for exploitation, as a small number of patterns describe a significant portion of modern applications. Work on dynamic dependence collapsing takes the observations from the pattern results and shows how certain groups of operations can be dynamically grouped, avoiding unnecessary communication between individual instructions. This technique also amplifies the efficiency of pipeline data structures such as the reorder buffer, increasing both IPC and frequency. I also identify the same sets of collapsible instructions at compile time, producing the same benefits with minimal hardware complexity. This is done in a backward-compatible manner, as the groups are exposed by simply reordering the binary's instructions. I present aggressive pipelining approaches for these resources which avoid the critical timing often presumed necessary in aggressive superscalar processors. As these structures are designed for the worst case, pipelining them can produce a greater frequency benefit than IPC loss. I also use the observation that the dynamic issue order of instructions in aggressive superscalar processors is predictable.
Thus, a hardware mechanism is introduced for efficiently caching the wakeup order for groups of instructions. These wakeup vectors are then used to speculatively schedule instructions, avoiding dynamic scheduling when it is not necessary. Finally, I present a novel approach to fast, high-quality chip layout. By allowing architects to quickly evaluate what-if scenarios during early high-level design, chip designs are less likely to encounter implementation problems later in the process.
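The dependence-collapsing idea can be illustrated with a toy sketch (not the dissertation's mechanism): scan an instruction stream for producer/consumer pairs where the produced value has exactly one consumer, so the pair is a candidate to issue as one fused operation, removing a wakeup/select round trip between them. The instruction stream is invented for the example.

```python
# A toy instruction stream in SSA-like form: (dest, op, sources).
INSNS = [
    ("r1", "add", ("a", "b")),
    ("r2", "add", ("r1", "c")),   # r1's only consumer: fusible with insn 0
    ("r3", "mul", ("a", "c")),
    ("r4", "add", ("r3", "r3")),  # r3 feeds only insn 3: fusible
    ("r5", "add", ("r2", "r4")),
]

def collapsible_pairs(insns):
    """Return candidate (producer, consumer) index pairs where the produced
    value has a single consuming instruction.  A real scheduler would then
    pick a non-overlapping subset of these pairs to actually fuse."""
    consumers = {}
    for i, (_, _, srcs) in enumerate(insns):
        for s in srcs:
            consumers.setdefault(s, []).append(i)
    pairs = []
    for p, (dest, _, _) in enumerate(insns):
        uses = consumers.get(dest, [])
        # One consumer, or the same consumer reading the value twice.
        if len(uses) == 1 or (len(uses) == 2 and uses[0] == uses[1]):
            pairs.append((p, uses[0]))
    return pairs
```

Fusing such pairs shrinks the number of entities competing for issue slots and reorder-buffer entries, which is the efficiency amplification the abstract refers to.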
290

Efficient Verification of Bit-Level Pipelined Machines Using Refinement

Srinivasan, Sudarshan Kumar 24 August 2007 (has links)
Functional verification is a critical problem facing the semiconductor industry: hardware designs are extremely complex and highly optimized, and even a single bug in deployed systems can cost more than $10 billion. We focus on the verification of pipelining, a key optimization that appears extensively in hardware systems such as microprocessors, multicore systems, and cache coherence protocols. Existing techniques for verifying pipelined machines either consume excessive amounts of time, effort, and resources, or are not applicable at the bit-level, the level of abstraction at which commercial systems are designed and functionally verified. We present a highly automated, efficient, compositional, and scalable refinement-based approach for the verification of bit-level pipelined machines. Our contributions include: (1) A complete compositional reasoning framework based on refinement. Our notion of refinement guarantees that pipelined machines satisfy the same safety and liveness properties as their instruction set architectures. In addition, our compositional framework can be used to decompose correctness proofs into smaller, more manageable pieces, leading to drastic reductions in verification times and a high degree of scalability. (2) The development of ACL2-SMT, a verification system that integrates the popular ACL2 theorem prover (winner of the 2005 ACM Software System Award) with decision procedures. ACL2-SMT allows us to seamlessly take advantage of the two main approaches to hardware verification: theorem proving and decision procedures. (3) A proof methodology based on our compositional reasoning framework and ACL2-SMT that allows us to reduce the bit-level verification problem to a sequence of highly automated proof steps. (4) A collection of general-purpose refinement maps, functions that relate pipelined machine states to instruction set architecture states. These refinement maps provide more flexibility and lead to increased verification efficiency.
The effectiveness of our approach is demonstrated by verifying various pipelined machine models, including a bit-level, Intel XScale-inspired processor that implements 593 instructions and includes features such as branch prediction, precise exceptions, and predicated instruction execution.
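The notion of a refinement map can be illustrated on a toy example, far simpler than the dissertation's framework (it is closest in spirit to a Burch-Dill-style flushing map): a two-stage "pipeline" whose in-flight instruction is drained to recover the matching ISA state, plus a check that stepping then mapping agrees with mapping then stepping.

```python
# Toy ISA: one accumulator register; each instruction adds an immediate.
def isa_step(acc, instr):
    return acc + instr

# Toy 2-stage pipelined machine: the stage latch holds an in-flight
# instruction (or None) that commits one cycle after it is fetched.
def pipe_step(state, instr):
    acc, latch = state
    acc = acc + latch if latch is not None else acc   # commit stage
    return (acc, instr)                               # fetch stage

def refinement_map(state):
    """Flush the pipeline: drain the in-flight instruction into the
    accumulator to recover the matching ISA-visible state."""
    acc, latch = state
    return acc + latch if latch is not None else acc

def check_commutes(state, instr):
    """The correctness obligation for one step:
    refinement_map(pipe_step(s, i)) == isa_step(refinement_map(s), i)."""
    lhs = refinement_map(pipe_step(state, instr))
    rhs = isa_step(refinement_map(state), instr)
    return lhs == rhs
```

In this toy the diagram commutes for every state and instruction; the dissertation's contribution is making analogous obligations dischargeable automatically for bit-level machines with hundreds of instructions, where the map and the proof are anything but trivial.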
