291 |
Creation of a Simulation Model based upon Process Mapping within Pipeline Management at Scania. Ovesson, Elin; Stadler, Niklas. January 2013.
This Master's Thesis was carried out at the Global Outbound Logistics department at Scania. Scania manufactures trucks, buses and engines. Some trucks and buses are delivered to markets where, due to reduced customs duties and cheaper manpower, it is more profitable to do the assembly locally at so-called Regional Product Centres (RPCs). Since the components are produced far from the RPC markets, the lead times become long. In addition, the customers' buying behaviour at the RPC markets is often not comparable to the European market, where a customer can accept waiting weeks for a unit to be delivered. The long lead time in combination with this customer behaviour means that the RPCs need to keep a certain selection of standard truck and bus models in stock. It has proved difficult for the pipeline managers at the RPCs to place order volumes that correspond well to what will actually be delivered to the business units or distributors later on. The result is high stock levels at the RPCs, which ties up a significant amount of capital.

Given this background, the purpose of this study is "to create a simulation model, based upon a process mapping, that visualises future volume levels in the pipeline due to different demand and ordering scenarios". The short-term target, which is also the target of this study, is to increase the RPCs' understanding of how different demand and ordering scenarios influence future volume levels in the pipeline. The long-term target is to reduce tied-up capital by adjusting buffer levels and lead times while still ensuring a certain service level. The model should contribute to more accurate decision making with respect to these aspects.

First, a high-level process mapping was made in order to select which flows were suitable for detailed mapping. Second, a detailed mapping was made, during which several people responsible for RPCs, processes and functions were interviewed. After the detailed mapping, common denominators between the flows were identified and all activities were clustered into a generalised solution suitable for all flows. Factors such as lead times, deviation risks and capacity limitations were taken into account during the aggregation of activities. Once a common view of the different RPC flows had been created, the mathematical relationships for how goods move through the process could be established. Then the development and validation of the simulation model, an iterative process, could start. A directive was to build the simulation model in Microsoft Excel. Interviews were held with experienced model creators in order to find out how to create a user-friendly and robust model. The creation of the simulation model started with the development of a structure, after which the content of each part was defined. A final validation, consisting of sensitivity analysis and user trials, was done to ensure the simulation model's functioning and accuracy.

To conclude, a simulation model has been created that will serve as a helpful tool for the RPCs when deciding which order volumes to place. By clearly visualising the simulation results, the model will hopefully increase the RPCs' comprehension of how the pipeline behaves under different ordering and demand scenarios.
On top of this, the method used, the process mapping and the mathematical relationships that have been defined are important input for a possible future development of a more permanent and robust solution outside Microsoft Excel. Such a solution could be more precise, automatically updated and offer even higher granularity.
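A minimal sketch of the kind of pipeline simulation described above, written in Python rather than the Excel model actually built. The function name, the weekly granularity and all parameter values are illustrative assumptions, not the thesis's model.

```python
def simulate_pipeline(orders, demand, lead_time, initial_stock):
    """Roll weekly stock at an RPC forward under an ordering scenario.

    orders[t]  -- units ordered from the factory in week t
    demand[t]  -- units sold to distributors in week t
    lead_time  -- weeks between placing an order and receiving it
    """
    stock = initial_stock
    history = []
    for t in range(len(demand)):
        # Orders placed lead_time weeks ago arrive now.
        arriving = orders[t - lead_time] if t >= lead_time else 0
        stock = max(0, stock + arriving - demand[t])
        # Pipeline volume: orders placed but not yet received.
        in_transit = sum(orders[max(0, t - lead_time + 1):t + 1])
        history.append({"week": t, "stock": stock, "in_transit": in_transit})
    return history

# Example scenario: steady ordering against fluctuating demand, 12 weeks.
result = simulate_pipeline(
    orders=[10] * 12, demand=[8, 12, 9, 15, 7, 10, 11, 6, 14, 9, 10, 8],
    lead_time=4, initial_stock=40)
for row in result:
    print(row)
```

Even a toy model like this makes visible how a fixed ordering policy lets stock drift away from demand over the lead time, which is the effect the thesis's model visualises for the RPCs.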
|
292 |
Turkey's Foreign Energy Policy and Realist Theory: The Cases of Nabucco and South Stream Gas Pipeline Projects. Akin, Manolya. January 2010.
This paper focuses on Turkey's foreign energy policy, with special attention to the cases of the Nabucco and South Stream gas pipeline projects, and examines the issue from the perspective of realist theory. The research question aims to uncover the realist tendency in Turkish foreign energy policy and to determine which gas pipeline project is more beneficial in terms of national interest for Turkey and more relevant for meeting the goals of Turkish foreign energy policy. Energy is a key concept in discussions about the future of our world and sustainable development. If energy acts as a subject that increases tensions between countries, it threatens sustainable development, since it jeopardises peace and makes cooperation between states impossible. Energy also occupies a fundamental place in the national strategies of states, alongside sustainable development. In order to make the theory operational, three main dimensions, security, economics and strategy, are used as tools, or in other words as filters to look through, in the analysis of foreign and energy policy as well as the cases of the Nabucco and South Stream gas pipeline projects.
|
293 |
Characterization and Avoidance of Critical Pipeline Structures in Aggressive Superscalar Processors. Sassone, Peter G. 20 July 2005.
In recent years, with only small fractions of modern processors now accessible in a single cycle, computer architects constantly fight against propagation issues across the die. Unfortunately this trend continues to shift inward, and now even the most internal features of the pipeline are designed around communication, not computation. To address the inward creep of this constraint, this work focuses on the characterization of communication within the pipeline itself, architectural techniques to avoid it when possible, and layout co-design for early detection of problems.
I present work in creating a novel detection tool for common-case operand movement which can rapidly characterize an application's dataflow patterns. The results are suitable for exploitation, as a small number of patterns describe a significant portion of modern applications.
Work on dynamic dependence collapsing takes the observations from the pattern results and shows how certain groups of operations can be dynamically grouped, avoiding unnecessary communication between individual instructions. This technique also amplifies the efficiency of pipeline data structures such as the reorder buffer, increasing both IPC and frequency.
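As a rough illustration of the collapsing idea, the following Python sketch finds single-consumer producer/consumer links in a small instruction window, the kind of pairs that could be fused to avoid broadcasting an intermediate value. The instruction encoding and the fusibility test are invented for the example; they are not the thesis's hardware mechanism.

```python
from collections import defaultdict

def find_collapsible_pairs(window):
    """window: list of (dest_reg, [src_regs]) tuples in program order."""
    consumers = defaultdict(list)
    for i, (_, srcs) in enumerate(window):
        for s in srcs:
            consumers[s].append(i)

    pairs = []
    for i, (dest, _) in enumerate(window):
        users = [j for j in consumers[dest] if j > i]
        # Collapse only when the value has exactly one consumer, so no
        # other instruction needs the intermediate result broadcast.
        if len(users) == 1:
            pairs.append((i, users[0]))
    return pairs

# r1 = r2+r3 ; r4 = r1+r5 ; r6 = r4+r7  -- two fusible links in a chain
window = [("r1", ["r2", "r3"]), ("r4", ["r1", "r5"]), ("r6", ["r4", "r7"])]
print(find_collapsible_pairs(window))  # [(0, 1), (1, 2)]
```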
I also identify the same sets of collapsible instructions at compile time, producing the same benefits with minimal hardware complexity. This technique is backward compatible, as the groups are exposed by simple reordering of the binary's instructions.
I present aggressive pipelining approaches for these resources which avoid the critical timing often presumed necessary in aggressive superscalar processors. As these structures are designed for the worst case, pipelining them can produce greater frequency benefit than IPC loss. I also use the observation that the dynamic issue order of instructions in aggressive superscalar processors is predictable. Thus, a hardware mechanism is introduced for efficiently caching the wakeup order for groups of instructions. These wakeup vectors are then used to speculatively schedule instructions, avoiding dynamic scheduling when it is not necessary.
Finally, I present a novel approach to fast and high-quality chip layout. By allowing architects to quickly evaluate "what if" scenarios during early high-level design, chip designs are less likely to encounter implementation problems later in the process.
|
294 |
Efficient Verification of Bit-Level Pipelined Machines Using Refinement. Srinivasan, Sudarshan Kumar. 24 August 2007.
Functional verification is a critical problem facing the semiconductor industry: hardware designs are extremely complex and highly optimized, and even a single bug in deployed systems can cost more than $10 billion. We focus on the verification of pipelining, a key optimization that appears extensively in hardware systems such as microprocessors, multicore systems, and cache coherence protocols. Existing techniques for verifying pipelined machines either consume excessive amounts of time, effort, and resources, or are not applicable at the bit-level, the level of abstraction at which commercial systems are designed and functionally verified.

We present a highly automated, efficient, compositional, and scalable refinement-based approach for the verification of bit-level pipelined machines. Our contributions include:

(1) A complete compositional reasoning framework based on refinement. Our notion of refinement guarantees that pipelined machines satisfy the same safety and liveness properties as their instruction set architectures. In addition, our compositional framework can be used to decompose correctness proofs into smaller, more manageable pieces, leading to drastic reductions in verification times and a high degree of scalability.

(2) The development of ACL2-SMT, a verification system that integrates the popular ACL2 theorem prover (winner of the 2005 ACM Software System Award) with decision procedures. ACL2-SMT allows us to seamlessly take advantage of the two main approaches to hardware verification: theorem proving and decision procedures.

(3) A proof methodology based on our compositional reasoning framework and ACL2-SMT that allows us to reduce the bit-level verification problem to a sequence of highly automated proof steps.

(4) A collection of general-purpose refinement maps, functions that relate pipelined machine states to instruction set architecture states. These refinement maps provide more flexibility and lead to increased verification efficiency.

The effectiveness of our approach is demonstrated by verifying various pipelined machine models, including a bit-level, Intel XScale inspired processor that implements 593 instructions and includes features such as branch prediction, precise exceptions, and predicated instruction execution.
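As a toy illustration of contribution (4), the Python sketch below shows a refinement map in that sense, a function from pipelined-machine states to ISA states. The two-stage machine and its "flush" map are invented for the example; the thesis itself works with bit-level models inside ACL2-SMT.

```python
def isa_step(state, instr):
    """Reference ISA: execute one (dest, src1, src2) add instruction."""
    regs = dict(state)
    regs[instr[0]] = regs[instr[1]] + regs[instr[2]]
    return regs

def refinement_map(pipe_state):
    """Map a pipelined state to an ISA state by flushing the pipeline:
    complete the partially executed instruction in the latch, if any."""
    regs, latch = pipe_state
    return isa_step(regs, latch) if latch is not None else dict(regs)

# Committed registers plus one in-flight instruction in the pipe latch.
pipe = ({"r1": 3, "r2": 4, "r3": 0}, ("r3", "r1", "r2"))
print(refinement_map(pipe))  # {'r1': 3, 'r2': 4, 'r3': 7}
```

Proving refinement then amounts to showing that stepping the pipelined machine and applying this map commutes, up to stuttering, with stepping the ISA.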
|
295 |
A 1.2V 10bits 100-MS/s Pipelined Analog-to-Digital Converter in 90 nm CMOS Technology. Wu, Chun-Tung. 07 September 2010.
The trend toward higher-level circuit integration is the result of demand for lower cost and smaller feature size. The goal of this trend is to have a single-chip solution, in which analog and digital circuits are placed on the same die with advanced CMOS technology. The complete integration of a system may include a digital processor, memory, ADC, DAC, signal conditioning amplifiers, frequency translation, filtering, reference voltage/current generator, etc.
Although advanced fabrication technology benefits digital circuits, it poses great challenges for analog circuits. For instance, the scaling of CMOS devices degrades important analog properties such as output resistance, lowering amplifier gain. Simply lowering the power supply voltage in analog circuits does not necessarily result in lower power dissipation. The many design constraints common to analog circuits make it difficult to curb their power consumption. This is especially true for already complicated analog systems like ADCs; reducing their appetite for power requires careful analysis of system requirements and special strategies.
This thesis describes a 10-bit 100-MS/s low-voltage pipelined analog-to-digital converter (ADC), which consists of eight pipelined low-resolution ADC stages and a final 2-bit flash ADC. Several critical techniques are adopted to guarantee the resolution and the high sampling and conversion rate, such as 1.5-bit-per-stage conversion, digital correction logic, and folded-cascode gain-boosted amplifiers. The ADC is designed in a 90 nm CMOS technology with a 1.2 V supply voltage.
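A behavioral Python sketch of the 1.5-bit-per-stage conversion and digital correction mentioned above, using the eight-stage-plus-2-bit-flash arrangement of the thesis. The ideal residue amplifier, the comparator thresholds at ±Vref/4 and the Vref = 1 V normalization are textbook idealizations, not the transistor-level design.

```python
def stage_1p5bit(v):
    """One 1.5-bit stage: coarse digit d in {0,1,2} and amplified residue."""
    if v < -0.25:
        d = 0
    elif v < 0.25:
        d = 1
    else:
        d = 2
    return d, 2 * v - (d - 1)   # residue = 2*Vin - (d-1)*Vref

def adc_10bit(v):
    digits = []
    for _ in range(8):                    # 8 pipelined 1.5-bit stages
        d, v = stage_1p5bit(v)
        digits.append(d)
    flash = min(3, int((v + 1) // 0.5))   # final 2-bit flash on the residue

    # Digital correction: overlap-add the redundant digits, then the flash.
    est = sum((d - 1) * 2 ** -(i + 1) for i, d in enumerate(digits))
    est += (-0.75 + 0.5 * flash) * 2 ** -8
    return est

vin = 0.3217
print(abs(adc_10bit(vin) - vin) < 2 / 1024)  # error within 1 LSB: True
```

The redundancy of the {0,1,2} digit set is what lets the digital correction absorb comparator offsets up to ±Vref/4, which is the main reason the 1.5-bit architecture tolerates low supply voltages well.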
|
296 |
Design and implementation of sequential input-output order FFT processor. Huang, Chien-Chih. 17 January 2007.
In this thesis, a new design methodology for a pipeline FFT processor is proposed. A pipeline FFT processor can achieve a high throughput rate and is very suitable for systems in which continuous data sequences requiring FFT processing enter sample by sample. However, the traditional pipeline FFT design based on the common single-path delay feedback approach suffers from low hardware utilization of the butterfly unit. In addition, the resulting transformed sequence is in bit-reversed order, which is not suitable for some FFT applications such as OFDM (Orthogonal Frequency Division Multiplexing). Therefore, this thesis proposes a novel pipelined FFT design that first splits the input sequence into two data streams, which are then applied to an FFT data path based on a feed-forward dual-delay-path data commutator. The resulting FFT architecture achieves full butterfly utilization, such that the required number of adders can be reduced by almost half. One potential drawback of the proposed approach is that an additional large storage buffer is required at the last stage. However, this additional buffer can be re-organized and merged with the output reordering buffer so that the transformed output sequence is generated in normal order. The proposed approach has been applied to the design of an 8K-point FFT in this thesis. The 8K FFT architecture is designed based on the radix-2^4 algorithm, such that the required number of general complex multipliers can be minimized to three. The multiplication by the remaining constant twiddle factors is realized by a dedicated constant-multiplier architecture. By proper data partitioning and allocation, the large buffers required for the data commutators and the output reordering buffer can both be efficiently realized by multi-bank single-port memory modules. Other salient features of the 8K FFT include table reduction for the twiddle factors as well as an optimized variable internal data representation. The proposed FFT processor has been implemented in the TSMC 0.18um 1P6M CMOS process technology with a core area of 8.74 mm^2, which is the smallest design reported in the literature for a normal sequential input/output order FFT.
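The reordering problem described above can be seen in a few lines of Python: a radix-2 decimation-in-frequency FFT (a simpler relative of the radix-2^4 data path used in the thesis) naturally emits its output in bit-reversed order, so a normal-order pipeline FFT needs a reordering buffer at the output. This is illustration only, not the thesis architecture.

```python
import cmath

def dif_fft_bitreversed(x):
    """Radix-2 DIF FFT; returns the spectrum in bit-reversed index order."""
    n = len(x)
    x = list(x)
    span = n
    while span > 1:
        half = span // 2
        for start in range(0, n, span):
            for k in range(half):
                a, b = x[start + k], x[start + k + half]
                tw = cmath.exp(-2j * cmath.pi * k / span)
                x[start + k] = a + b
                x[start + k + half] = (a - b) * tw   # twiddle on the lower leg
        span = half
    return x

def reorder(x):
    """Reordering buffer: map bit-reversed positions back to normal order."""
    bits = len(x).bit_length() - 1
    out = [0j] * len(x)
    for i, v in enumerate(x):
        out[int(format(i, f"0{bits}b")[::-1], 2)] = v
    return out

data = [complex(i) for i in range(8)]
print([round(abs(c), 3) for c in reorder(dif_fft_bitreversed(data))])
```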
|
297 |
Simulation Of Flow Transients In Liquid Pipeline Systems. Koc, Gencer. 01 November 2007.
Koç, Gençer. M.S., Department of Mechanical Engineering. Supervisor: Prof. Dr. O. Cahit Eralp. November 2007, 142 pages.
In liquid pipeline systems, transient flow is the major cause of pipeline damage. Transient flow is a situation where the pressure and flow rate in the pipeline change rapidly with time. Flow transients are also known as surge and water hammer, a name which originates from the hammering sound of the water in taps or valves. In liquid pipelines, preliminary design parameters are chosen for steady-state operation, but a transient check is always necessary. There are various types of transient flow situations, such as valve closures, pump trips and flow oscillations. During a transient flow, the pressure inside the pipe may increase or decrease in an unexpected way that cannot be foreseen by a steady-state analysis. Flow transients should be handled by a complete procedure that simulates possible transient flow scenarios, and precautions should be taken based on the obtained results.

There are different computational methods that can be used to solve and simulate flow transients in a computer environment. All of them utilize the basic flow equations, the continuity and momentum equations. These are nonlinear differential equations, and mathematical tools are necessary to linearise them. In this thesis a computer program is coded that utilizes the "Method of Characteristics", a numerical method for solving partial differential equations. In pipeline hydraulics, the two partial differential equations, continuity and momentum, are solved together in order to obtain the pressure and flow rate values in the pipeline during transient flow. In this thesis, MATLAB 7.1 is used as the programming language, and the obtained code is converted to C# in order to integrate the core of the program with a user-friendly Graphical User Interface (GUI).

The computer program is verified for different scenarios against available real pipeline data and the results of various reputable agencies. The output of the computer program is the tabulated pressure and flow rate values indexed by time, together with graphical representations of these values. There are also prompts warning users about possible dangerous operation modes of the pipeline components.
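For readers unfamiliar with the method, a bare-bones Python sketch of the Method of Characteristics update is given below, in the standard Wylie and Streeter form. The pipe data and the reservoir and instant-valve-closure boundaries are illustrative assumptions, not the thesis's MATLAB/C# implementation.

```python
import math

G = 9.81          # gravity, m/s^2
A_WAVE = 1000.0   # wave speed a, m/s
D, F = 0.5, 0.02  # pipe diameter (m) and Darcy friction factor
AREA = math.pi * D ** 2 / 4
B = A_WAVE / (G * AREA)                 # characteristic impedance
N, L = 10, 1000.0                       # reaches and pipe length (m)
DX = L / N
DT = DX / A_WAVE                        # MoC time step: dx = a*dt
R = F * DX / (2 * G * D * AREA ** 2)    # friction resistance term

def moc_step(H, Q, h_res):
    """Advance heads H and flows Q one time step along both characteristics."""
    Hn, Qn = H[:], Q[:]
    for i in range(1, N):
        cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])  # C+
        cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])  # C-
        Hn[i] = (cp + cm) / 2
        Qn[i] = (cp - Hn[i]) / B
    Hn[0] = h_res                                   # upstream reservoir
    Qn[0] = (Hn[0] - (H[1] - B * Q[1] + R * Q[1] * abs(Q[1]))) / B
    Qn[N] = 0.0                                     # instant valve closure
    Hn[N] = H[N - 1] + B * Q[N - 1] - R * Q[N - 1] * abs(Q[N - 1])
    return Hn, Qn

H = [50.0] * (N + 1)        # initial head (friction slope ignored for brevity)
Q = [0.2] * (N + 1)         # initial steady flow, m^3/s
for _ in range(5):
    H, Q = moc_step(H, Q, 50.0)
print(f"head at valve after closure: {H[N]:.1f} m")  # Joukowsky-scale rise
```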
|
298 |
Turkey. Tasan, Fatma. 01 July 2008.
This thesis analyzes Turkey's energy security and its energy cooperation with the European Union and Russia. The thesis argues that Turkey's energy cooperation with Russia, and the European Union's energy dialogue with Russia, contradict Turkey's claim to be an exclusive energy corridor between the Caspian Sea region and the European Union. The first part of the thesis deals with energy security in terms of the diversification of energy routes and pipeline politics. In the second part, Turkey's energy needs and its potential to become an energy corridor are discussed. Turkey's energy cooperation with the European Union and Russia is explored in the following parts of the thesis. Energy cooperation between the European Union and Russia is analyzed in the fifth chapter. The last chapter is the conclusion.
|
299 |
Investigation Of Waterhammer Problems In Camlidere Dam - Ivedik Water Treatment Plant Pipeline At Various Hydraulic Conditions. Sakabas, Emre. 01 February 2012.
Çamlıdere Dam supplies a significant portion of the potable water demand of the City of Ankara. Consequently, it is very important that the pipelines extending over 60 km between the dam and the treatment plant at Ivedik operate continuously. At present, two composite parallel lines are in operation, and construction of a third line is considered for the future. It is the aim of this study to investigate the water hammer problems to be expected under various scenarios and to suggest safe operating conditions for the system. Water hammer analyses of the pipeline are carried out with computer software named HAMMER. This software employs the Method of Characteristics (MoC), a widely used mathematical procedure for solving the nonlinear differential equations governing unsteady flow. Within this thesis work, the existing tunnels, the prestressed concrete and steel pipes, the third steel pipeline planned for future construction, and the existing and planned valves are modeled, and the model is calibrated. Numerous scenarios and valve closure schedules are set up in order to determine the steady-state conditions and the additional water hammer pressures generated by various excitations along the pipeline. The results of these scenarios are compared with previous studies of the pipeline system, and the most unfavorable cases are identified. Appropriate closure durations are then determined and suggested for the pipe fracture safety valves and the flow control valves at Ivedik, so as not to cause excessive pressures in the system.
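As a hedged illustration of the kind of first-pass check such a study automates before a full MoC run, the Python sketch below compares a valve closure time against the pipeline's critical period 2L/a and bounds the surge with the Joukowsky relation. All parameter values are invented, not the Çamlıdere line's actual data.

```python
G = 9.81

def surge_check(length_m, wave_speed, velocity, closure_time):
    t_critical = 2 * length_m / wave_speed    # wave round-trip time 2L/a
    joukowsky = wave_speed * velocity / G     # max head rise, fast closure
    if closure_time <= t_critical:
        # Closure finishes before the relief wave returns: full surge.
        return joukowsky, "rapid closure: full Joukowsky rise"
    # Slow closure: rough linear reduction (a detailed MoC run is still needed).
    return joukowsky * t_critical / closure_time, "slow closure: reduced surge"

rise, note = surge_check(length_m=60_000, wave_speed=1000.0,
                         velocity=1.5, closure_time=300.0)
print(f"{note}; estimated head rise ~{rise:.0f} m")
```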
|
300 |
Dynamic execution prediction and pipeline balancing of streaming applications. Aleen, Farhana Afroz. 30 August 2010.
The number and scope of data-driven streaming applications is growing. Such streaming applications are promising targets for effectively utilizing multi-cores because of their inherent amenability to pipelined parallelism. While existing methods of orchestrating streaming programs on multi-cores have mostly been static, real-world applications show ample variation in execution time that may cause the achieved speedup and throughput to be sub-optimal. One of the principal challenges in moving towards dynamic pipeline balancing has been the lack of approaches that can efficiently predict upcoming dynamic variations in execution, well before they occur. In this thesis, we propose an automated approach for predicting dynamic execution behavior, based on compiler analysis, that can be used to efficiently estimate the time to be spent in different pipeline stages for upcoming inputs. Our approach first uses dynamic taint analysis to automatically generate an input-based execution characterization of the streaming program, which identifies the key control points where variation in execution might occur with respect to the associated input elements. From this characterization we then automatically generate a light-weight emulator that can predict the execution paths taken for new streaming inputs and provide execution time estimates and possible dynamic variations. The main challenge in devising such an approach is the essential trade-off between the accuracy and the overhead of dynamic analysis. We present experimental evidence that our technique can accurately and efficiently estimate dynamic execution behaviors for several benchmarks with a small error rate. We also show that the error rate can be lowered, at the cost of execution overhead, by generating selective symbolic expressions for each of the complex conditions of control-flow operations. Our experiments show that dynamic pipeline balancing using our predicted execution behavior can achieve considerably higher speedup and throughput, along with more effective utilization of multi-cores, than static balancing approaches.
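A small Python sketch of the balancing step that such predictions enable: given predicted per-stage times for an upcoming input window, consecutive stages are greedily partitioned across cores so the bottleneck core stays close to the ideal even split. The stub input stands in for the generated emulator, and the greedy heuristic is invented for illustration; neither is the thesis's mechanism.

```python
def balance_pipeline(stage_times, num_cores):
    """Greedily partition consecutive stages across cores so the slowest
    core (the pipeline bottleneck) stays close to the ideal even split."""
    target = sum(stage_times) / num_cores
    assignment, load, core = [], 0.0, 0
    for t in stage_times:
        # Open the next core when this stage would overflow the target.
        if load > 0 and load + t > target and core < num_cores - 1:
            core, load = core + 1, 0.0
        assignment.append(core)
        load += t
    return assignment

# Predicted per-stage times for the upcoming input window (stub standing
# in for the generated emulator).
predicted = [4.0, 5.5, 3.0, 6.5, 5.0, 4.5]
print(balance_pipeline(predicted, num_cores=3))  # [0, 0, 1, 1, 2, 2]
```

Re-running this assignment whenever the predictor signals a shift in stage times is the essence of dynamic balancing; a static mapping would keep the original partition even as the bottleneck moves.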
|