331

Vibrationally resolved silicon L-edge spectrum of SiCl4 in the static exchange approximation

Jonsson, Johnny January 2008 (has links)
The X-ray absorption spectrum of silicon in SiCl4 has been calculated for the LIII and LII edges. The resulting spectrum has been vibrationally resolved by considering the symmetric stretch vibrational mode, and the results have been compared to experiment [4]. One peak from the experiment was found to be missing from the calculated vibrationally resolved spectrum. The other calculated peaks could be matched to the corresponding experimental peaks, although significant basis-set effects are present. An investigation of one peak beyond the Franck–Condon principle shows the principle to be a good approximation for the studied system.
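The abstract does not reproduce the working equations, but the standard Franck–Condon envelope for a single displaced harmonic mode gives the flavor of how a symmetric-stretch progression is weighted. A minimal sketch, assuming equal ground- and excited-state frequencies and a Huang-Rhys factor S (both assumptions ours, not from the thesis):

```python
import math

def franck_condon_poisson(S, n_max):
    """0 -> n Franck-Condon factors for two displaced harmonic
    oscillators with equal frequency (Huang-Rhys factor S).
    Under these assumptions the factors follow a Poisson law."""
    return [math.exp(-S) * S**n / math.factorial(n) for n in range(n_max + 1)]

# Example: S = 1.2 gives a vibrational progression peaking near n = 1.
for n, fc in enumerate(franck_condon_poisson(1.2, 5)):
    print(f"0 -> {n}: {fc:.3f}")
```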
332

Multipole Moments of Stationary Spacetimes

Bäckdahl, Thomas January 2008 (has links)
In this thesis we study the relativistic multipole moments for stationary asymptotically flat spacetimes as introduced by Geroch and Hansen. These multipole moments give an asymptotic description of the gravitational field in a coordinate-independent way. Because the moments describe the spacetime so well, it is natural to try to construct a spacetime from only the set of multipole moments. Here we present a simple method to do this for the static axisymmetric case. We also give explicit solutions for the cases where the number of non-zero multipole moments is finite. In addition, for the general stationary axisymmetric case, we present methods to generate solutions. It has been a long-standing conjecture that the multipole moments give a complete characterization of the stationary spacetimes. Much progress toward a proof has been made over the years. However, one difficult task remains: to prove that a spacetime exists with an a priori given arbitrary set of multipole moments subject to some given condition. Here we present such a condition for the axisymmetric case, and prove that it is both necessary and sufficient. We also extend this condition to the general case without axisymmetry, but in this case we only prove the necessity of our condition.
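For reference, the moments are usually defined recursively. The following is the standard textbook form of the Geroch-Hansen recursion (not quoted from the thesis), where C[.] takes the totally symmetric, trace-free part and all quantities live on the conformally rescaled 3-manifold of Killing orbits:

```latex
% Standard Geroch-Hansen recursion (textbook form, not quoted from
% the thesis). C[.] = totally symmetric, trace-free part; R_{ab} is
% the Ricci tensor of the conformally rescaled 3-metric.
P = \phi, \qquad
P_{a_1 \dots a_{n+1}} = C\!\left[ \nabla_{a_1} P_{a_2 \dots a_{n+1}}
    - \frac{n(2n-1)}{2}\, R_{a_1 a_2}\, P_{a_3 \dots a_{n+1}} \right]
```

The multipole moments are the values of these tensors at the point representing spatial infinity; the potential φ encodes the mass (and, in the stationary case, angular momentum) information.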
333

Comparison of the 2D and 3D Analysis Methods for CFRDs

Ozel, Halil Firat 01 September 2012 (has links) (PDF)
The purpose is to compare 2D and 3D analysis methodologies for investigating the performance of a Concrete Faced Rockfill Dam (CFRD) under static and dynamic loading conditions. The case study is Çokal Dam, a CFRD located in northwest Turkey on the Thracian Peninsula. The rockfill, the interface, and the faceplate were modeled with a nonlinear modulus of elasticity, a detailed nonlinear tractive behavior, and a total strain rotating crack model, respectively. These models were calibrated against detailed material tests. Analyses that cannot be performed in 2D, such as computing the stress and crack-width distributions along the face slab, are conducted in 3D to determine whether these outcomes are necessary. Since the effect of the valley ends cannot be reproduced by 2D analyses, 3D analyses are needed to check the reliability of the results. A further comparison between detailed 2D models and linear elastic 2D models was also carried out to obtain practical, industry-oriented guidance for the preliminary design of CFRDs.
334

Benchmarking Points-to Analysis

Gutzmann, Tobias January 2013 (has links)
Points-to analysis is a static program analysis that, simply put, computes which objects created at certain points of a given program might show up at which other points of the same program. In particular, it computes possible targets of a call and possible objects referenced by a field. Such information is essential input to many client applications in optimizing compilers and software engineering tools. Comparing experimental results with respect to accuracy and performance is required in order to distinguish the promising from the less promising approaches to points-to analysis. Unfortunately, comparing the accuracy of two different points-to analysis implementations is difficult, as there are many pitfalls in the details. In particular, there are no standardized means to perform such a comparison, i.e., no benchmark suite - a set of programs with well-defined rules of how to compare different points-to analysis results - exists. Therefore, different researchers use their own means to evaluate their approaches to points-to analysis. To complicate matters, even the same researchers do not stick to the same evaluation methods, which often makes it impossible to take two research publications and reliably tell which one describes the more accurate points-to analysis. In this thesis, we define a methodology for benchmarking points-to analysis. We create a benchmark suite, compare three different points-to analysis implementations with each other based on this methodology, and explain differences in analysis accuracy. We also argue for the need for a Gold Standard, i.e., a set of benchmark programs with exact analysis results. Such a Gold Standard is often required to compare points-to analysis results, and it also makes it possible to assess the exact accuracy of points-to analysis results. Since such a Gold Standard cannot be computed automatically, it needs to be created semi-automatically by the research community. We propose a process for creating a Gold Standard based on under-approximating it through optimistic (dynamic) analysis and over-approximating it through conservative (static) analysis. With the help of improved static and dynamic points-to analyses and expert knowledge about benchmark programs, we present a first attempt towards a Gold Standard. We also provide a Web-based benchmarking platform, through which researchers can compare their own experimental results with those of other researchers, and can contribute towards the creation of a Gold Standard.
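The under-/over-approximation idea behind the Gold Standard can be stated as set bounds. A toy sketch (all program points and object names invented here): any Gold Standard entry must contain every target observed dynamically and nothing a sound static analysis rules out; sites where the two bounds meet are exact, the rest need expert review:

```python
# Toy illustration of bounding a points-to Gold Standard
# (all program points and object names are invented here).
dynamic_pts = {            # under-approximation: targets actually observed
    "call@A.foo": {"Obj1"},
    "field@B.f":  {"Obj2"},
}
static_pts = {             # over-approximation: targets a sound analysis allows
    "call@A.foo": {"Obj1", "Obj3"},
    "field@B.f":  {"Obj2"},
}

for site in static_pts:
    lower, upper = dynamic_pts.get(site, set()), static_pts[site]
    assert lower <= upper, f"unsound static result at {site}"
    if lower == upper:
        print(f"{site}: exact, Gold Standard entry {sorted(upper)}")
    else:
        print(f"{site}: undecided targets {sorted(upper - lower)} -> expert review")
```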
335

System Level Exploration of RRAM for SRAM Replacement

Dogan, Rabia January 2013 (has links)
Recently, effective usage of the chip area has come to play an essential role in System-on-Chip (SoC) designs. Nowadays on-chip memories take up more than 50% of the total die area and are responsible for more than 40% of the total energy consumption. Cache memory alone occupies 30% of the on-chip area in the latest microprocessors. This thesis project, "System Level Exploration of RRAM for SRAM Replacement", describes a Resistive Random Access Memory (RRAM) based memory organization for Coarse Grained Reconfigurable Array (CGRA) processors. Compared to a conventional Static Random Access Memory (SRAM) based memory organization, the RRAM-based organization offers benefits in terms of energy and area requirements. Due to the ever-growing problems faced by conventional memories with Dynamic Voltage Scaling (DVS), emerging memory technologies have gained importance. RRAM is typically seen as a possible candidate to replace non-volatile memory (NVM) as Flash approaches its scaling limits. Replacing SRAM in the lowest layers of the memory hierarchies of embedded systems with RRAM is a very attractive research topic; RRAM technology offers reduced energy and area requirements, but it has limitations with regard to endurance and write latency. Because of the technological limitations and restrictions in solving RRAM write-related issues, it becomes beneficial to explore memory access schemes that tolerate the longer write times. Therefore, since the RRAM write time cannot realistically be reduced, we have to derive instruction memory and data memory access schemes that tolerate the longer write times. We present an instruction memory access scheme that copes with these problems. In addition to the modified instruction memory architecture, we investigate the effect of the longer write times on the data memory. Experimental results show that the proposed architectural modifications can reduce read energy consumption by a significant margin without any performance penalty.
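The abstract does not detail the access scheme itself, but the constraint it must tolerate is easy to model. A toy sketch (all cycle counts invented; reads assumed single-cycle, writes assumed to occupy the array exclusively) counts the stall cycles an unbuffered memory would pay for slow writes:

```python
# Toy stall model: counts cycles lost when accesses arrive while a
# slow write is still in flight (all numbers purely illustrative).
def stall_cycles(access_trace, write_latency):
    busy_until, stalls = 0, 0
    for t, kind in access_trace:          # (issue cycle, 'R' or 'W')
        if t < busy_until:                # a previous write still in flight
            stalls += busy_until - t
            t = busy_until
        if kind == 'W':
            busy_until = t + write_latency
    return stalls

trace = [(0, 'W'), (2, 'R'), (5, 'W'), (12, 'R')]
print(stall_cycles(trace, write_latency=1))   # fast, SRAM-like writes
print(stall_cycles(trace, write_latency=8))   # slow, RRAM-like writes
```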
336

Delay Analysis of Digital Circuits Using Prony's Method

Fu, Jingyi J.Y. 28 July 2011 (has links)
This thesis describes possible applications of Prony's method in the timing analysis of digital circuits. Such applications include predicting the future shape of a waveform in DTA (Dynamic Timing Analysis) and building the delay look-up table in STA (Static Timing Analysis). Given some equally spaced output values, the traditional Prony's method can be used to extract the poles and residues of a linear system, i.e., to characterize a waveform as a sum of exponential functions. In this thesis, not only values but also equally spaced derivatives are tested. Still using the same idea as the traditional Prony's method, poles and residues can also be extracted from those values and derivatives. The resulting poles and residues are used to predict the output waveform in DTA. The benefits brought by the use of derivatives include fewer simulation steps and less CPU time than regular constant-step simulation. In fact, Prony's method can precisely approximate a complicated waveform. This property can be applied to STA: the Prony approximation can be used to precisely record an output waveform, which then serves as an entry in the STA look-up table. Since the accuracy of STA relies on the accuracy of the input and output waveforms in the look-up table, the accuracy of the Prony approach is promising.
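A minimal sketch of the traditional, value-only Prony fit that the thesis builds on (function name and interface are ours; the derivative-augmented variant is not reproduced here):

```python
import numpy as np

def prony(x, p, dt=1.0):
    """Fit x[n] ~ sum_i r_i * exp(s_i * n * dt) with p exponentials."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Step 1: linear-prediction coefficients from a least-squares Hankel system.
    A = np.column_stack([x[p - k - 1 : N - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    # Step 2: roots of the characteristic polynomial are the discrete poles.
    z = np.roots(np.concatenate(([1.0], a)))
    # Step 3: residues from a least-squares Vandermonde solve.
    V = np.vander(z, N, increasing=True).T        # V[n, i] = z_i ** n
    r, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return np.log(z) / dt, r                      # continuous poles, residues

# Two decaying exponentials are recovered from 20 equally spaced samples.
n = np.arange(20)
x = 2.0 * np.exp(-0.5 * n) + 1.0 * np.exp(-0.1 * n)
s, r = prony(x, p=2)
print(np.real(s), np.real(r))   # poles near -0.5, -0.1; residues near 2, 1
```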
337

Three dimensional heterogeneous finite element method for static multi‐group neutron diffusion

Aydogdu, Elif Can 01 August 2010 (has links)
Because current full-core neutronic calculations use two-group neutron diffusion and rely on homogenizing fuel assemblies, reconstructing pin powers from such a calculation is an elaborate and not very accurate process, one which becomes more difficult with increased core heterogeneity. A three-dimensional Heterogeneous Finite Element Method (HFEM) is developed to address the limitations of current methods by offering fine-group energy representation and fuel-pin-level spatial detail at modest computational cost. The cost of the method is roughly equal to that of the Finite Difference Method (FDM) using one mesh box per fuel assembly and a comparable number of energy groups. Pin-level fluxes are obtained directly from the method's results without the need for reconstruction schemes.
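As a rough picture of the FDM baseline the cost comparison refers to, a minimal one-group, one-dimensional diffusion eigenvalue solve with power iteration (all material data invented for illustration):

```python
import numpy as np

# One-group, 1D slab diffusion k-eigenvalue problem solved with the
# Finite Difference Method and power iteration (material data invented).
D, sig_a, nu_sig_f = 1.0, 0.07, 0.09       # cm, 1/cm, 1/cm
L, n = 100.0, 200                           # slab width (cm), mesh cells
h = L / n

# -D * phi'' + sig_a * phi = (1/k) * nu_sig_f * phi, phi = 0 at both edges.
main = np.full(n, 2.0 * D / h**2 + sig_a)
off = np.full(n - 1, -D / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

phi, k = np.ones(n), 1.0
for _ in range(500):                        # power iteration on fission source
    src = nu_sig_f * phi
    phi = np.linalg.solve(A, src / k)
    k *= (nu_sig_f * phi).sum() / src.sum()
print(f"k-eff = {k:.4f}")                   # analytic value ~1.27 for this data
```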
338

Circuit Timing and Leakage Analysis in the Presence of Variability

Heloue, Khaled R. 15 February 2011 (has links)
Driven by the need for faster devices and higher transistor densities, technology trends have pushed transistor dimensions into the deep sub-micron regime. This continued scaling, however, has led to many challenges facing digital integrated circuits today. One important challenge is the increased variation in the underlying process and environmental parameters, and the significant impact of this variability on circuit timing and leakage power, making it increasingly difficult to design circuits that achieve a required specification. Given these challenges, there is a need for computer-aided design (CAD) techniques that can predict and analyze circuit performance (timing and leakage) accurately and efficiently in the presence of variability. This thesis presents new techniques for variation-aware timing and leakage analysis that address different aspects of the problem. First, on the timing front, a pre-placement statistical static timing analysis technique is presented. This technique can be applied at an early stage of design, when within-die correlations are still unknown. Next, a general parameterized static timing analysis framework is proposed, which supports a general class of nonlinear delay models and handles both random (process) parameters with arbitrary distributions and non-random (environmental) parameters. Following this, a parameterized static timing analysis technique is presented, which can capture circuit delay exactly at any point in the parameter space. This is enabled by identifying all potentially critical paths in the circuit through novel and efficient pruning algorithms that improve on the state of the art in both theoretical complexity and runtime. Also on the timing front, a novel distance-based metric for robustness is proposed. This metric can be used to quantify the susceptibility of parameterized timing quantities to failure, thus enabling designers to fix the nodes with the smallest robustness values in order to improve the overall design robustness. Finally, on the leakage front, a statistical technique for early-mode and late-mode leakage estimation is presented. The novelty lies in the random gate concept, which allows for efficient and accurate full-chip leakage estimation. In its simplest form, the leakage estimation reduces to finding the area under a scaled version of the within-die channel length auto-correlation function, which can be done in constant time.
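To make the parameterized-timing machinery concrete: statistical STA commonly propagates first-order canonical delay models through sum and max operations, with the max moment-matched via Clark's formulas. The sketch below shows that textbook building block, not the thesis's specific pruning or robustness algorithms:

```python
import math

def clark_max(mu1, s1, mu2, s2, rho=0.0):
    """Mean and std of max(X, Y) for jointly Gaussian X, Y (Clark, 1961)."""
    Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))   # normal CDF
    phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    theta = math.sqrt(max(s1**2 + s2**2 - 2 * rho * s1 * s2, 1e-12))
    a = (mu1 - mu2) / theta
    mean = mu1 * Phi(a) + mu2 * Phi(-a) + theta * phi(a)
    ex2 = ((mu1**2 + s1**2) * Phi(a) + (mu2**2 + s2**2) * Phi(-a)
           + (mu1 + mu2) * theta * phi(a))
    return mean, math.sqrt(max(ex2 - mean**2, 0.0))

# Two path delays in ps, assumed Gaussian and uncorrelated for illustration.
print(clark_max(100.0, 5.0, 98.0, 8.0))
```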
339

Design and implementation of an approximate full adder and its use in FIR filters

Satheesh Varma, Nikhil January 2013 (has links)
Implementation of the polyphase decomposed FIR filter structure involves two steps: the generation of the partial products and the efficient reduction of the generated partial products. The partial products are generated by a constant multiplication of the filter coefficients with the input data, and the reduction of the partial products is done by building a pipelined adder tree using FAs and HAs. To improve the speed and reduce the complexity of the reduction tree, a 4:2 counter is introduced into the tree. The reduction tree is designed using a bit-level optimized ILP problem whose objective function minimizes the overall cost of the hardware used. For this purpose, a layout design for a 4:2 counter has been developed, and its cost function has been derived by comparing the complexity of the design against a standard FA design. The layout design for the 4:2 counter is implemented in a 65 nm process using the static CMOS and DPL logic styles. The average power consumption drawn from a 1 V supply was found to be 16.8 μW for the static CMOS design and 12.51 μW for the DPL design. The worst-case rise or fall time was 350 ps for the DPL design and 260 ps for the static CMOS design. The use of the 4:2 counter in the reduction tree introduced errors into the filter response, but it helped to reduce the number of pipeline stages and to improve the speed of the partial product reduction.
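For reference, the conventional behavior of a 4:2 counter (compressor) is two cascaded full adders; the sketch below models the generic cell, not the specific 65 nm layout developed in the thesis, and verifies its arithmetic exhaustively:

```python
def full_adder(a, b, c):
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def compressor_4_2(a, b, c, d, cin):
    """Generic 4:2 compressor as two cascaded FAs. cout depends only
    on a..c, so the carry chain ripples at most one stage."""
    s1, cout = full_adder(a, b, c)
    s, carry = full_adder(s1, d, cin)
    return s, carry, cout   # weights: s = 1, carry = 2, cout = 2

# Exhaustive check: the outputs always encode a + b + c + d + cin.
from itertools import product
for bits in product((0, 1), repeat=5):
    s, carry, cout = compressor_4_2(*bits)
    assert s + 2 * carry + 2 * cout == sum(bits)
```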
340

Measurements Versus Predictions for the Static and Dynamic Characteristics of a Four-pad Rocker-pivot, Tilting-pad Journal Bearing

Tschoepe, David 1987- 14 March 2013 (has links)
Measured and predicted static and dynamic characteristics are provided for a four-pad, rocker-pivot, tilting-pad journal bearing in the load-on-pad and load-between-pad orientations. The bearing has the following characteristics: 4 pads, 0.57 pad pivot offset, 0.6 L/D ratio, 60.33 mm (2.375 in) pad axial length, 0.08255 mm (0.00325 in) radial clearance in the load-on-pad orientation, and 0.1189 mm (0.00468 in) radial clearance in the load-between-pad orientation. Tests were conducted on a floating test bearing design with unit loads ranging from 0 to 2903 kPa (421.1 psi) and speeds from 6.8 to 13.2 krpm. For all rotor speeds, hot-clearance measurements were taken to show the reduction in bearing clearance due to thermal expansion of the shaft and pads during testing. As the testing conditions get hotter, the rotor, pads, and bearing expand, decreasing the radial bearing clearance. Hot-clearance measurements showed a 16-25% decrease in clearance compared to a measurement at room temperature. For all test conditions, dynamic tests were performed over a range of excitation frequencies to obtain complex dynamic stiffness coefficients as a function of frequency. The direct real dynamic stiffness coefficients were then fitted with a quadratic function of frequency. From the curve fit, the frequency dependence was captured by including a virtual-mass matrix [M] to produce a frequency-independent [K][C][M] model. The direct dynamic stiffness coefficients for the load-on-pad orientation showed significant orthotropy, while the load-between-pad orientation showed only slight orthotropy as load increased. Cross-coupled stiffness coefficients were measured in both load orientations, but were of the same sign as, and significantly smaller than, the direct stiffness coefficients. In both orientations the imaginary part of the measured dynamic stiffness increased linearly with frequency, allowing for frequency-independent direct damping coefficients. The rotordynamic coefficients presented were compared to predictions from two different Reynolds-based models. Both models showed the importance of accounting for pivot flexibility and for the different pad geometries (due to the reduction in bearing clearance during testing) when predicting rotordynamic coefficients. If either of these two inputs was incorrect, predictions of the bearing's impedance coefficients were very inaccurate. The main difference between the prediction codes is that one of them incorporates pad flexibility in predicting the impedance coefficients of a tilting-pad journal bearing. To examine the effect of pad flexibility on the predicted impedance coefficients, a series of predictions was generated by changing the magnitude of the pad's bending stiffness. Increasing the bending stiffness used in predictions by a factor of 10 typically caused a 3-11% increase in predicted Kxx and Kyy, and a 10-24% increase in predicted Cxx and Cyy. In all cases, increasing the calculated bending stiffness from ten to a hundred times the calculated value caused slight, if any, change in Kxx, Kyy, Cxx, and Cyy. For a flexible pad an increase in bending stiffness can have a large effect on predictions; for a more rigid pad the effect is much smaller. The results showed that the pad's structural bending stiffness can be an important factor in predicting impedance coefficients.
Even though the pads tested in this thesis are extremely stiff, changes are still seen in predictions when the magnitude of the pad's bending stiffness is increased, especially in Cxx and Cyy. The code without pad flexibility predicted Kxx and Kyy much more accurately than the code with pad flexibility. The code with pad flexibility predicted Cxx more accurately, while the code without pad flexibility predicted Cyy more accurately. Regardless of the prediction code used, Kxx and Kyy were over-predicted at low loads, but predicted more accurately as load increased. Cxx and Cyy were modeled very well in the load-on-pad orientation, and slightly over-predicted in the load-between-pad orientation. For solid pads, like the ones tested here, both codes do a decent job of predicting impedance coefficients.
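The curve-fitting step described above is simple to reproduce in outline. A minimal sketch (synthetic data; K, C, and M values invented) of extracting a frequency-independent [K][C][M] model from complex dynamic stiffness H(Ω), where Re H ≈ K − MΩ² and Im H ≈ CΩ:

```python
import numpy as np

# Fit a frequency-independent [K][C][M] model to complex dynamic
# stiffness H(w): Re H ~ K - M*w^2, Im H ~ C*w.
# (Synthetic data below; real inputs would come from the test rig.)
w = 2 * np.pi * np.linspace(10, 250, 25)        # excitation freqs (rad/s)
K_true, C_true, M_true = 8.0e7, 1.2e5, 15.0     # N/m, N*s/m, kg (invented)
H = (K_true - M_true * w**2) + 1j * (C_true * w)

# Least-squares fits: quadratic in w for the real part, linear for imaginary.
A_re = np.column_stack([np.ones_like(w), -w**2])
K, M = np.linalg.lstsq(A_re, H.real, rcond=None)[0]
C = np.linalg.lstsq(w[:, None], H.imag, rcond=None)[0][0]
print(f"K = {K:.3e} N/m, C = {C:.3e} N*s/m, M = {M:.2f} kg")
```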
