721

Modeling the Microstructural Evolution during Hot Deformation of Microalloyed Steels

Bäcke, Linda January 2009 (has links)
This thesis presents the development of a physically-based model describing the microstructural evolution during hot deformation of microalloyed steels, with the main focus on recrystallization kinetics. During hot rolling, repeated deformation and recrystallization progressively refine the recrystallized grains. Recrystallization also enables the material to be deformed more easily, so knowledge of the recrystallization kinetics is important for predicting the required roll forces. Hot strip rolling is generally conducted in a reversing roughing mill followed by a continuous finishing mill. During rolling in the roughing mill the temperature is high and complete recrystallization should occur between passes. In the finishing mill the temperature is lower, which means slower recrystallization kinetics, and partial or no recrystallization often occurs. If microalloying elements such as Nb, Ti or V are present, recrystallization can be further retarded by either solute drag or particle pinning. When recrystallization is completely retarded and strain accumulates between passes, the austenite grains become severely deformed, i.e. pancaking occurs. Pancaking of the grains provides a larger number of nucleation sites for ferrite grains upon transformation, and hence a finer ferrite grain size is achieved. In this work a physically-based model has been used to describe the microstructural evolution of austenite. The model is built up from several sub-models describing dislocation density evolution, recrystallization, grain growth and precipitation. It is based on dislocation density theory, where the dislocations generated during deformation provide the driving force for recrystallization. In the model, subgrains act as nuclei for recrystallization, and the condition for recrystallization to start is that the subgrains reach a critical size and configuration. The retarding effect of elements in solution and as precipitated particles is accounted for in the model. To verify and validate the model, axisymmetric compression tests combined with relaxation were modeled and the results were compared with experimental data. The precipitation sub-model was verified using literature data. In addition, rolling in the hot strip mill was modeled using process data from the hot strip mill at SSAB Strip Products Division. The materials investigated were plain C-Mn steels and Nb-microalloyed steels. The results from the model show good agreement with measured data.
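The abstract does not give the model equations. As a minimal, assumed illustration of the kind of relations such a dislocation-density-based description rests on (not the thesis model itself), the sketch below computes the stored-energy driving pressure P = 0.5·μ·b²·ρ and a JMAK-type recrystallized fraction; all parameter values are placeholders.

```python
import math

def driving_pressure(rho, mu=8.1e10, b=2.5e-10):
    """Stored-energy driving pressure for recrystallization, P = 0.5 * mu * b^2 * rho.
    rho: dislocation density [m^-2], mu: shear modulus [Pa], b: Burgers vector [m]."""
    return 0.5 * mu * b ** 2 * rho

def jmak_fraction(t, t50, n=2.0):
    """JMAK (Avrami) recrystallized fraction, with t50 the time to 50% recrystallization."""
    return 1.0 - math.exp(math.log(0.5) * (t / t50) ** n)

# Austenite deformed to an assumed dislocation density of 1e15 m^-2
p_drive = driving_pressure(1.0e15)          # driving pressure in Pa
x_rex = jmak_fraction(t=5.0, t50=3.0)       # recrystallized fraction after 5 s
print(f"driving pressure = {p_drive / 1e6:.1f} MPa, recrystallized fraction = {x_rex:.2f}")
```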
722

Dynamic Ground Clearance

Hamache, Violette January 2013 (has links)
The purpose of this work is to develop a test method that accounts for the variation of the ground clearance while driving, the so-called dynamic ground clearance. This has been done through the analysis of a specific application: grain-haulage tractors used in Brazil. A series of real-life tests was run to obtain data on tire compression and suspension travel. The tractor used is a 6x4 loaded with a trailer. When investigating critical cases, the minimum dynamic ground clearance is found to be as small as 123 mm at axle 1, 78 mm at the exhaust outlet, 137 mm at the fuel tank, 35 mm at the bumper and 213 mm at axle 2. These data will be transmitted to the engineer responsible for the chassis design to give a better understanding of the motion of the truck relative to the ground.
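A minimal sketch of how a dynamic clearance value follows from the measured quantities named above (static clearance reduced by suspension travel and tire compression at the same instant); the numbers are placeholders, not test data from the thesis.

```python
def dynamic_clearance(static_clearance_mm, suspension_travel_mm, tire_compression_mm):
    """Ground clearance while driving: the static clearance reduced by the measured
    suspension travel and tire compression at the same instant."""
    return static_clearance_mm - suspension_travel_mm - tire_compression_mm

# Placeholder example: 250 mm static clearance, 90 mm suspension travel, 25 mm tire compression
print(dynamic_clearance(250.0, 90.0, 25.0))  # -> 135.0 mm
```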
723

Dygnsvariation av metanemission från en anlagd våtmark / Diurnal patterns of methane emission from a constructed wetland

Heiberg, Lisa January 2000 (has links)
The aim of the study was to investigate whether methane emission from a constructed wetland follows a diurnal pattern correlated with temperature, humidity or light conditions. The gas measurements were carried out with a static chamber technique. The wetland (in Nykvarn outside Linköping, Sweden) treats wastewater to reduce nitrogen loads. Measurements were carried out on three occasions in the summer of 1998 at two sites in the wetland. One site was close to the inflow and inhabited by Lemnaceae; the other was located further downstream and inhabited by the emergent macrophyte Typha latifolia. The results showed variation, but no discernible diurnal pattern. The Typha site had a methane emission rate of 166 mg CH4 m-2 d-1 and the Lemnaceae site a rate of 712 mg CH4 m-2 d-1. In all experiments at the Typha site, the highest methane emission rate was obtained at sunrise.
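The static chamber technique infers the flux from the rate of concentration increase inside a closed chamber. The sketch below is a generic data-reduction example with assumed chamber dimensions and an assumed concentration slope, not the study's own calculation.

```python
def chamber_flux_mg_m2_d(dC_dt_ppm_per_h, chamber_height_m=0.3,
                         temp_c=20.0, pressure_pa=101325.0, molar_mass_g=16.04):
    """Convert a CH4 concentration increase (ppm/h) in a closed static chamber
    into an areal flux in mg CH4 m^-2 d^-1 using the ideal gas law.
    Flux = dC/dt * (V/A) * (P*M)/(R*T); V/A equals the chamber height."""
    R = 8.314                                            # J mol^-1 K^-1
    T = temp_c + 273.15
    mol_per_m3_per_ppm = pressure_pa / (R * T) * 1e-6    # mol CH4 per m^3 of air per ppm
    mg_per_m3_per_ppm = mol_per_m3_per_ppm * molar_mass_g * 1000.0
    flux_per_h = dC_dt_ppm_per_h * chamber_height_m * mg_per_m3_per_ppm
    return flux_per_h * 24.0

# Placeholder: a 1.5 ppm/h CH4 increase in a 0.3 m tall chamber
print(f"{chamber_flux_mg_m2_d(1.5):.1f} mg CH4 m-2 d-1")
```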
724

Ply clustering effect on composite laminates under low-velocity impact using FEA

Liu, Hongquan 01 1900 (has links)
With the development of design and manufacturing technology, composite materials are widely used in the aeronautical industry. However, one of the main concerns affecting the application of composites is foreign object impact. The damage induced by Low-Velocity Impact (LVI), which can significantly reduce the strength of a structure, cannot easily be found by routine inspection. The so-called Barely Visible Impact Damage (BVID) due to LVI typically includes interlaminar delamination, matrix cracks and fibre fracture at the back face. Previous research has shown that the results of LVI tests are similar to those of Quasi-Static Load (QSL) tests. The initiation and propagation of delamination can be detected more easily in the QSL test, and the displacement and reaction force of the impactor can be controlled and measured much more accurately. Moreover, it is easier to model QSL tests than dynamic impacts. To investigate the impact damage induced by LVI, a Finite Element (FE) model employing cohesive elements was used. At the same time, the ply clustering effect, where several plies of the same orientation are stacked together, was represented in the FE model in terms of damage resistance and damage size. A bilinear traction-separation law was introduced in the cohesive elements to simulate the initiation and propagation of impact damage and delamination. Firstly, 2D FE models of the Double Cantilever Beam (DCB) and End Notched Flexure (ENF) specimens were built using the commercial FEM software ABAQUS. The results showed that cohesive elements can simulate mode I and mode II delamination sufficiently and correctly. Secondly, an FE model of a composite plate under QSL, but without simulating damage, was built using continuum shell elements. Agreement between the FEA results and published test results is good enough to validate the capability of continuum shell elements and cohesive elements in modelling a composite laminate under the transverse load condition (QSL). Thirdly, an FE model containing discrete interface delamination and matrix cracks at the back face of the composite plate was built by pre-setting cohesive failure elements at potential damage locations according to experimental observation. A cross-ply laminate, in which fewer interfaces could delaminate, was modelled first. Good agreement was found in terms of the delamination area and the impactor's force-displacement curve. Finally, the effect of ply clustering on impact damage resistance was studied using Quasi-Isotropic (QI) layup laminates. Because of the limited time available for calculation, the simulation was only partly completed for the quasi-isotropic laminates (L2 configuration), which have more delaminated interfaces. The results showed that cohesive elements obeying the bilinear traction-separation law were capable of predicting the reaction force in quasi-isotropic laminates. However, discrepancies with the test results in terms of delamination area were observed for the quasi-isotropic laminates. These discrepancies are mainly attributed to the simplified simulation of matrix cracks and to the compressive load at the interface in the thickness direction, which is not taken into account.
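The bilinear traction-separation law named above can be written compactly. The scalar sketch below shows its standard form (linear rise to the interfacial strength, then linear softening so that the area under the curve equals the fracture toughness); the stiffness, strength and toughness values are illustrative, not the thesis inputs.

```python
def bilinear_traction(delta, K=1e6, t_max=30.0, G_c=0.5):
    """Bilinear traction-separation law for a cohesive interface.
    delta: separation [mm], K: initial stiffness [MPa/mm],
    t_max: interfacial strength [MPa], G_c: fracture toughness [N/mm]."""
    delta_0 = t_max / K            # separation at damage initiation
    delta_f = 2.0 * G_c / t_max    # separation at complete failure (area under curve = G_c)
    if delta <= delta_0:
        return K * delta           # undamaged, linear-elastic branch
    if delta >= delta_f:
        return 0.0                 # fully delaminated
    d = (delta_f * (delta - delta_0)) / (delta * (delta_f - delta_0))  # damage variable
    return (1.0 - d) * K * delta   # linear softening branch

# Traction at a few separations (mm): elastic, just after initiation, softening, failed
for s in (1e-5, 1e-4, 1e-2, 4e-2):
    print(f"delta = {s:.0e} mm -> traction = {bilinear_traction(s):.1f} MPa")
```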
725

Design of Variation-Tolerant Circuits for Nanometer CMOS Technology: Circuits and Architecture Co-Design

Abu-Rahma, Mohamed Hassan 11 1900 (has links)
Aggressive scaling of CMOS technology into sub-90nm nodes has created huge challenges. Variations due to fundamental physical limits, such as random dopant fluctuation (RDF) and line edge roughness (LER), increase significantly with technology scaling. In addition, manufacturing tolerances in process technology are not scaling at the same pace as the transistor channel length due to process control limitations (e.g., sub-wavelength lithography). Therefore, within-die process variations worsen with successive technology generations. These variations have a strong impact on the maximum clock frequency and leakage power of any digital circuit, and can also result in functional yield losses in variation-sensitive digital circuits (such as SRAM). Moreover, in nanometer technologies, digital circuits show an increased sensitivity to process variations due to low-voltage operation requirements, which are aggravated by the strong demand for lower power consumption and cost while achieving higher performance and density. It is therefore not surprising that the International Technology Roadmap for Semiconductors (ITRS) lists variability as one of the most challenging obstacles for IC design in the nanometer regime.

To facilitate variation-tolerant design, we study the impact of random variations on the delay variability of a logic gate and derive simple and scalable statistical models to evaluate delay variations in the presence of within-die variations. This work provides new design insight and highlights the importance of accounting for the effect of input slew on delay variations, especially at lower supply voltages. The derived models are simple, scalable and bias dependent, and only require the knowledge of easily measurable parameters. This makes them useful in early design exploration, circuit/architecture optimization and technology prediction (especially for low-power and low-voltage operation). The derived models are verified against Monte Carlo SPICE simulations in an industrial 90nm technology.

Random variations in nanometer technologies are among the most important design considerations. This is especially true for SRAM, due to the large variations in bitcell characteristics. SRAM bitcells typically have the smallest device sizes on a chip and therefore show the largest sensitivity to different sources of variations. With the drastic increase in memory densities, lower supply voltages and higher variations, statistical simulation methodologies become imperative to estimate memory yield and optimize performance and power. In this research, we present a methodology for statistical simulation of SRAM read access yield, which is tightly related to SRAM performance and power consumption. The proposed flow accounts for the impact of bitcell read current variation, sense amplifier offset distribution, timing window variation and leakage variation on functional yield. The methodology overcomes the pessimism of the conventional worst-case design techniques used in SRAM design. The proposed statistical yield estimation methodology allows early yield prediction in the design cycle, which can be used to trade off performance and power requirements for SRAM. The methodology is verified using measured silicon yield data from a 1Mb memory fabricated in an industrial 45nm technology.

Embedded SRAM dominates modern SoCs, and there is a strong demand for SRAM with lower power consumption while achieving high performance and high density. However, in the presence of large process variations, SRAMs are expected to consume more power to ensure correct read operation and meet yield targets. We propose a new architecture that significantly reduces array switching power for SRAM. The proposed architecture combines built-in self-test (BIST) and digitally controlled delay elements to reduce the wordline pulse width while ensuring correct read operation, hence reducing switching power. A new statistical simulation flow was developed to evaluate the power savings of the proposed architecture. Monte Carlo simulations using a 1Mb SRAM macro from an industrial 45nm technology were used to examine the power reduction achieved by the system. The proposed architecture reduces array switching power significantly and shows large power savings, especially as the chip-level memory density increases. For a 48Mb memory density, a 27% reduction in array switching power can be achieved for a read access yield target of 95%. In addition, the proposed system provides larger power savings as process variations increase, which makes it a very attractive solution for 45nm and below technologies.

In addition to its impact on bitcell read current, the increase in local variations in nanometer technologies strongly affects SRAM cell stability. In this research, we propose a novel single-supply-voltage read assist technique to improve the SRAM static noise margin (SNM). The proposed technique precharges different parts of the bitlines to VDD and GND and uses charge sharing to precisely control the bitline voltage, which improves bitcell stability. In addition to improving the SNM, the proposed technique also reduces memory access time. Moreover, it requires only one supply voltage, eliminating the need for large-area voltage shifters. The proposed technique has been implemented in the design of a 512kb memory fabricated in 45nm technology. Results show improvements in the SNM and the read operation window, which confirms the effectiveness and robustness of the technique.
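The abstract does not detail the statistical read-yield flow. As a rough, assumed illustration of the idea only, the sketch below Monte Carlo samples a bitcell read current and a sense-amplifier offset and counts the fraction of reads that develop enough bitline differential within a fixed sensing window; all parameter values and the pass criterion are simplified placeholders.

```python
import random

def read_access_yield(n_trials=100_000, i_read_mean=20e-6, i_read_sigma=4e-6,
                      sa_offset_sigma=20e-3, t_window=0.2e-9, c_bitline=100e-15):
    """Monte Carlo estimate of SRAM read-access yield: a read passes when the
    bitline differential developed by the cell read current within the sensing
    window exceeds the sense-amplifier offset. All parameters are illustrative."""
    passes = 0
    for _ in range(n_trials):
        i_read = random.gauss(i_read_mean, i_read_sigma)      # bitcell read current [A]
        offset = abs(random.gauss(0.0, sa_offset_sigma))      # sense-amp offset [V]
        dv_bitline = max(i_read, 0.0) * t_window / c_bitline  # developed differential [V]
        if dv_bitline > offset:
            passes += 1
    return passes / n_trials

print(f"estimated read-access yield: {read_access_yield():.4f}")
```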
727

Adaptive Critic Designs Based Neurocontrollers for Local and Wide Area Control of a Multimachine Power System with a Static Compensator

Mohagheghi, Salman 10 July 2006 (has links)
Modern power systems operate much closer to their stability limits than before. With the introduction of highly sensitive industrial and residential loads, the loss of system stability becomes increasingly costly. Reinforcing the power grid by installing additional transmission lines, creating more complicated meshed networks and increasing the voltage level are among the effective, yet expensive, solutions. An alternative approach is to improve the performance of the existing power system components by incorporating more intelligent control techniques. This can be achieved in two ways: introducing intelligent local controllers for the existing components in the power network in order to exploit their full capabilities, and implementing global intelligent schemes that optimize the performance of multiple local controllers based on an objective function associated with the overall performance of the power system. Both aspects are investigated in this thesis. In the first part, artificial neural networks are adopted for designing an optimal nonlinear controller for a static compensator (STATCOM) connected to a multimachine power system. The neurocontroller implementation is based on the adaptive critic designs (ACD) technique and provides an optimal control policy over the infinite time horizon of the problem. The ACD-based neurocontroller outperforms a conventional controller both in improving power system dynamic stability and in reducing the control effort required. The second part investigates further improvement of the power system behavior by introducing an ACD-based neurocontroller for hierarchical control of a multimachine power system. The proposed wide area controller improves power system dynamic stability by generating optimal control signals as auxiliary reference signals for the synchronous generators' automatic voltage regulators and the STATCOM line voltage controller. This multilevel hierarchical control scheme forces the different controllers throughout the power system to respond optimally to any fault or disturbance by reducing a predefined cost function associated with the power system performance.
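Adaptive critic designs train an actor (the controller) and a critic (an approximator of the cost-to-go) together. The toy sketch below shows a heuristic-dynamic-programming-style update loop in its simplest form, with a scalar linear plant and linear approximators; it is an assumed illustration of the general technique, not the neurocontrollers designed in this thesis.

```python
def acd_toy(steps=200, gamma=0.95, lr_critic=0.05, lr_actor=0.01):
    """Toy adaptive-critic loop: the critic learns an approximate cost-to-go
    J(x) ~ w_critic * x^2 by reducing the temporal-difference error, and the
    actor u = -w_actor * x is nudged to lower the critic's estimate of the
    next state's cost-to-go. Scalar linear plant, purely illustrative."""
    w_critic, w_actor = 0.0, 0.0
    x = 1.0                                      # plant state
    for _ in range(steps):
        u = -w_actor * x                         # actor (control action)
        cost = x ** 2 + 0.1 * u ** 2             # local utility (quadratic cost)
        x_next = 0.9 * x + 0.5 * u               # simple linear plant
        td_error = cost + gamma * w_critic * x_next ** 2 - w_critic * x ** 2
        w_critic += lr_critic * td_error * x ** 2        # critic update
        w_actor += lr_actor * w_critic * x_next * x      # actor update (lower J(x_next))
        x = x_next
    return w_critic, w_actor

print(acd_toy())
```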
728

Environment Analysis of Higher-Order Languages

Might, Matthew Brendon 29 June 2007 (has links)
Any analysis of higher-order languages must grapple with the tri-faceted nature of lambda. In one construct, the fundamental control, environment and data structures of a language meet and intertwine. With the control facet tamed nearly two decades ago, this work brings the environment facet to heel, defining the environment problem and developing its solution: environment analysis. Environment analysis allows a compiler to reason about the equivalence of environments, i.e., name-to-value mappings, that arise during a program's execution. In this dissertation, two different techniques, abstract counting and abstract frame strings, make this possible. A third technique, abstract garbage collection, makes both of these techniques more precise and, counter to intuition, often faster as well. An array of optimizations and even deeper analyses that depend upon environment analysis provide motivation for this work. In an abstract interpretation, a single abstract entity represents a set of concrete entities. When the entities under scrutiny are bindings, i.e., single name-to-value mappings, the atoms of environments, then determining when the equality of two abstract bindings implies the equality of their concrete counterparts is the crux of environment analysis. Abstract counting does this by tracking the size of the represented sets and looking for singletons, in order to apply the following principle: if {x} = {y}, then x = y. Abstract frame strings enable environmental reasoning by statically tracking the possible stack change between the births of two environments; when this change is effectively empty, the environments are equivalent. Abstract garbage collection improves precision by intermittently removing unreachable environment structure during abstract interpretation.
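The abstract counting principle can be sketched very compactly: only when an abstract binding represents a singleton set does equality of abstract bindings imply equality of the concrete ones. The code below is an assumed toy illustration of that idea, not the dissertation's analysis.

```python
# Toy abstract counting: counts are drawn from {0, 1, MANY}; only when an
# abstract binding's count is exactly 1 does abstract equality imply
# concrete equality ({x} = {y}  =>  x = y).
MANY = float("inf")

def abstract_increment(count):
    """Record another concrete binding being allocated for the same abstract binding."""
    return 1 if count == 0 else MANY

def must_alias(binding_a, binding_b, counts):
    """Two references must denote the same concrete binding only if they map to
    the same abstract binding and that abstract binding is a singleton."""
    return binding_a == binding_b and counts.get(binding_a, 0) == 1

counts = {}
counts["x@loop"] = abstract_increment(counts.get("x@loop", 0))   # first allocation: count 1
print(must_alias("x@loop", "x@loop", counts))                    # True: singleton
counts["x@loop"] = abstract_increment(counts["x@loop"])          # re-allocated: count MANY
print(must_alias("x@loop", "x@loop", counts))                    # False: cannot conclude equality
```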
729

A Study on Wind Turbine Low Voltage Ride Through Capability Enhancement by STATCOM and DVR

Lin, Chih-peng 05 February 2010 (has links)
When more induction-generator-based wind farms are integrated into the power system, voltage dips and stability problems may arise because the induction generators draw reactive power. Wind turbine trips induced by power system short-circuit events could result in power imbalance and lead to power system instability. This thesis studies the influence of two compensation techniques on wind turbine low voltage ride-through (LVRT) capability: parallel compensation by a static synchronous compensator (STATCOM), and series compensation by a dynamic voltage restorer (DVR). In this study, Matlab tools and models are used to simulate an active-stall-controlled fixed-speed induction generator connected to a power system. Two system configurations are used to simulate three-phase faults and to compare the improvement in wind turbine LVRT capability provided by the two compensation techniques. Simulation results indicate that a wind turbine compensated by a DVR has better LVRT performance than one compensated by a STATCOM in dealing with low-voltage situations caused by system faults.
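As a rough picture of why series compensation can restore the turbine terminal voltage directly during a dip, the sketch below shows a DVR injecting the missing series voltage up to its rating; it is a simplified, magnitude-only illustration with assumed numbers, not the Matlab simulation used in the thesis.

```python
def turbine_voltage_with_dvr(grid_voltage_pu, v_nominal_pu=1.0, dvr_rating_pu=0.5):
    """Series compensation: the DVR injects the missing voltage (limited by its rating)
    so the turbine terminal voltage approaches nominal during a grid voltage dip."""
    injected = min(max(v_nominal_pu - grid_voltage_pu, 0.0), dvr_rating_pu)
    return grid_voltage_pu + injected

for dip in (0.9, 0.6, 0.3):  # remaining grid voltage during three fault severities [pu]
    print(f"grid {dip:.1f} pu -> turbine terminal {turbine_voltage_with_dvr(dip):.1f} pu")
```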
730

Development of Intelligent-Based Solar and Diesel-Wind Hybrid Power Control Systems

Chang-Chien, Nan-Yi 21 June 2010 (has links)
A solar and diesel-wind hybrid power control system is proposed in this thesis. The system consists of solar power, wind power, a diesel engine, a static synchronous compensator and an intelligent power controller. MATLAB/Simulink was used to build the dynamic model and simulate the solar and diesel-wind hybrid power system. The static synchronous compensator is used to supply reactive power and regulate the voltage of the hybrid system. To achieve a fast and stable response of the real power control, an intelligent controller is proposed, which consists of a Radial Basis Function Network (RBFN) and an Elman Neural Network (ENN) for maximum power point tracking (MPPT). The pitch angle control of the wind power system uses the ENN controller, whose output is fed to the wind turbine to achieve MPPT. The solar system uses the RBFN, whose output signal is used to control the DC/DC boost converters to achieve MPPT.
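The thesis uses RBFN and ENN controllers for MPPT. As a simpler stand-in that illustrates what an MPPT loop does, the sketch below implements the classic perturb-and-observe algorithm on an assumed toy PV power curve; it is not the neural controllers described above.

```python
def perturb_and_observe(power_curve, v_start=20.0, step=0.5, iterations=50):
    """Classic perturb-and-observe MPPT: keep perturbing the operating voltage in the
    direction that increased the measured power, and reverse when power drops."""
    v, direction = v_start, +1.0
    p_prev = power_curve(v)
    for _ in range(iterations):
        v += direction * step
        p = power_curve(v)
        if p < p_prev:
            direction = -direction   # power dropped: reverse the perturbation
        p_prev = p
    return v, p_prev

# Assumed toy PV power curve with its maximum near 30 V
toy_curve = lambda v: max(0.0, 100.0 - 0.4 * (v - 30.0) ** 2)
print(perturb_and_observe(toy_curve))   # converges near the 30 V maximum power point
```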
