61

Simulated Annealing : Simulated Annealing for Large Scale Optimization in Wireless Communications / Simulated Annealing using Matlab Software

Sakhavat, Tamim, Grissa, Haithem, Abdalrahman, Ziyad January 2012 (has links)
In this thesis a simulated annealing algorithm is employed as an optimization tool for a large scale optimization problem in wireless communication. In this application, we have 100 candidate positions for transmitting antennas and 100 for receivers, with a channel between each pair of positions in the two areas. Our aim is to find, say, the best 3 positions such that the channel capacity is maximized. The number of possible combinations is huge, so finding the best channel by exhaustive search would take a very long time. To solve this problem, we use a simulated annealing algorithm to estimate the best answer. The algorithm chooses a random element and then, as in a local search, compares the selected element with its neighbourhood. If the selected element is the maximum among its neighbours, it is a local maximum. The strength of simulated annealing is its ability to escape from local maxima by using a random acceptance mechanism that mimics Boltzmann statistics.
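A minimal Python sketch of this approach (the `capacity` function, the swap-based neighbourhood move, and the cooling parameters are placeholders, not the thesis's actual channel model or Matlab code):

```python
import math
import random

def simulated_annealing(capacity, positions, k=3, T0=1.0, alpha=0.95, steps=5000):
    """Pick k positions that (approximately) maximize `capacity`.
    `capacity` is a stand-in for the thesis's channel-capacity model."""
    current = random.sample(positions, k)
    cur_val = capacity(current)
    best, best_val = list(current), cur_val
    T = T0
    for _ in range(steps):
        # Neighbour move: swap one chosen position for an unused one.
        cand = list(current)
        cand[random.randrange(k)] = random.choice(
            [p for p in positions if p not in current])
        cand_val = capacity(cand)
        delta = cand_val - cur_val
        # Boltzmann-style acceptance lets the search escape local maxima:
        # worse moves are accepted with probability exp(delta / T).
        if delta > 0 or random.random() < math.exp(delta / T):
            current, cur_val = cand, cand_val
            if cur_val > best_val:
                best, best_val = list(current), cur_val
        T *= alpha  # geometric cooling schedule
    return best, best_val
```

As the temperature T falls, uphill moves dominate and the search settles into a (hopefully global) maximum; early on, the random acceptance keeps it from getting trapped.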
62

SOLID SOURCE CHEMICAL VAPOR DEPOSITION OF REFRACTORY METAL SILICIDES FOR VLSI INTERCONNECTS.

HEY, HANS PETER WILLY. January 1984 (has links)
Low resistance gate level interconnects can free the design of VLSI circuits from the R-C time constant limitations currently imposed by poly-silicon based technology. The hotwall low pressure chemical vapor deposition of molybdenum and tungsten silicide from their commercially available hexacarbonyls and silane is presented as a deposition method producing IC-compatible gate electrodes of reduced resistivity. Good hotwall deposition uniformity is demonstrated at low temperatures (200 to 300 °C). The as-deposited films are amorphous by x-ray diffraction and can be crystallized in subsequent anneal steps with anneal-induced film shrinkage of less than 12 percent. Surface oxide formation is possible during this anneal cycle. Auger spectroscopy and Rutherford backscattering results indicate that silicon-rich films can be deposited, and that the concentrations of carbon and oxygen incorporated from the carbonyl source are a function of the deposition parameters. At higher deposition temperatures and larger source throughput the impurity incorporation is markedly reduced. Good film adhesion and excellent step coverage are observed. Electrical measurements show that the film resistivities after anneal are comparable to those of sputtered or evaporated silicide films. Bias-temperature capacitance-voltage measurements demonstrate that direct silicide gate electrodes have properties comparable to standard metal-oxide-silicon systems. The substitution of CVD silicides for standard MOS gate metals appears to be transparent in terms of transistor performance, except for work function effects on the threshold voltage. The large wafer throughput and good step coverage of hotwall low pressure silicide deposition thus promise to make it a viable alternative to the poly-silicon technology currently in use.
63

An intelligence driven test system for detection of stuck-open faults in CMOS sequential circuits

Sagahyroon, Assim Abdelrahman January 1989 (has links)
This paper discusses an intelligence driven test system for generation of test sequences for stuck-open faults in CMOS VLSI sequential circuits. The networks used in system evaluation are compiled from an RTL representation of the digital system. To excite a stuck-open fault it is only necessary that the output of the gate containing the fault take on opposite values during two successive clock periods. Excitation of the fault must therefore constrain two successive input/present-state vectors, referred to in the paper as the pregoal and goal nodes respectively. An initialization procedure is used to determine the pregoal state. Two theorems are proved establishing a one-to-one correspondence between stuck-at and stuck-open faults. As a result the D-algorithm may be used to determine the goal node. Determining the nodes was tried on many circuits and a high success rate was achieved. The pregoal is observed to have more "don't care" values. The next step is a "sensitization search" for an input sequence (X(s)) that drives the memory elements to the determined pregoal and goal states over two consecutive clock periods. It is easier for the search to reach the pregoal due to the greater number of "don't cares." Following a "propagation search" for an input sequence (X(p)) to drive the effect of the fault to an external output, the sequences of vectors (X(s)) and (X(p)) are passed to an "All-Fault Simulator" for verification. The simulation is clock-mode but represents the output retention resulting from stuck-open faults. One measure of the value of a special search procedure for stuck-open faults can be obtained by comparing the results employing this search with results obtained by searching only for the analogous stuck-at faults. A first order prediction would be a likelihood less than 0.5 that the predecessor of a stuck-at goal node would excite an opposite output in the gate containing the fault. A comparison of the two methods using the stuck-open "All-Fault Simulator" is presented.
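The two-vector excitation requirement is the key constraint; a hedged sketch (`gate_output` is a hypothetical fault-free gate evaluation function, not part of the paper's system):

```python
def excites_stuck_open(gate_output, pregoal_vector, goal_vector):
    """A stuck-open fault in a CMOS gate is excited only if the gate's
    fault-free output takes opposite values in two successive clock
    periods: first under the pregoal vector, then under the goal vector.
    Under the fault, the gate output floats and retains its previous
    value, so the discrepancy becomes observable on the second clock."""
    return gate_output(pregoal_vector) != gate_output(goal_vector)
```

This is why a single stuck-at-style test vector is insufficient: roughly half the time, the predecessor of a stuck-at goal node will not produce the required opposite output.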
64

Tracing large-scale structure with radio sources

Lindsay, Samuel Nathan January 2015 (has links)
In this thesis, I investigate the spatial distribution of radio sources, and quantify their clustering strength over a range of redshifts, up to z ≈ 2.2, using various forms of the correlation function measured with data from several multi-wavelength surveys. I present the optical spectra of 30 radio AGN (S_1.4 > 100 mJy) in the GAMA/H-ATLAS fields, for which emission-line redshifts could be deduced, from observations of 79 target sources with the EFOSC2 spectrograph on the NTT. The mean redshift of these sources is z = 1.2; 12 were identified as quasars (40 per cent), and 6 redshifts (out of 24 targets) were found for AGN hosts of multiple radio components. While obtaining spectra for hosts of these multi-component sources is possible, their lower success rate highlights the difficulty in achieving a redshift-complete radio sample. Taking an existing spectroscopic redshift survey (GAMA) and radio sources from the FIRST survey (S_1.4 > 1 mJy), I then present a cross-matched radio sample with 1,635 spectroscopic redshifts with a median value of z = 0.34. The spatial correlation function of this sample is used to find the redshift-space (s_0) and real-space correlation lengths (r_0 ≈ 8.2 h^-1 Mpc), and a mass bias of ≈1.9. Insight into the redshift dependence of these quantities is gained by using the angular correlation function and Limber inversion to measure the same spatial clustering parameters. Photometric redshifts from SDSS/UKIDSS are incorporated to produce a larger matched radio sample at z ≈ 0.48 (and low- and high-redshift subsamples at z ≈ 0.30 and z ≈ 0.65), while their redshift distribution is subtracted from that taken from the SKADS radio simulations to estimate the redshift distribution of the remaining unmatched sources (z ≈ 1.55). The observed bias evolution over this redshift range is compared with model predictions based on the SKADS simulations, with good agreement at low redshift. The bias found at high redshift significantly exceeds these predictions, however, suggesting a more massive population of galaxies than expected, either due to the relative proportions of different radio sources, or a greater typical halo mass for the high-redshift sources. Finally, the reliance on a model redshift distribution to reach higher redshifts is removed, as the angular cross-correlation function is used with deep VLA data (S_1.4 > 90 μJy) and optical/IR data from VIDEO/CFHTLS (K_s < 23.5) over 1 square degree. With high-quality photometric redshifts up to z ≈ 4, and a high signal-to-noise clustering measurement (due to the ~100,000 K_s-selected galaxies), I am able to find the bias of a matched sample of only 766 radio sources (as well as of the VIDEO sources), divided into 4 redshift bins reaching a median bias at z ≈ 2.15. Again, at high redshift, the measured bias appears to exceed the prediction made from the SKADS simulations. Applying luminosity cuts to the radio sample at L > 10^23 W Hz^-1 and higher (removing any non-AGN sources), I find a bias of 8–10 at z ≈ 1.5, considerably higher than for the full sample, and consistent with the more numerous FRI AGN having similar mass to the FRIIs (M ≈ 10^14 M_⊙), contrary to the assumptions made in the SKADS simulations. Applying this adjustment to the model bias produces a better fit to the observations for the FIRST radio sources cross-matched with GAMA/SDSS/UKIDSS, as well as for the high-redshift radio sources in VIDEO.
Therefore, I have shown that we require a more robust model of the evolution of AGN, and their relation to the underlying dark matter distribution. In particular, understanding these quantities for the abundant FRI population is crucial if we are to use such sources to probe the cosmological model as has been suggested by a number of authors (e.g. Raccanelli et al., 2012; Camera et al., 2012; Ferramacho et al., 2014).
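The clustering measurements above rest on two-point correlation function estimates; a standard choice for such work is the Landy-Szalay estimator, sketched here in Python (the thesis's exact estimator, weighting, and binning are not reproduced; pair counts per separation bin are assumed precomputed):

```python
import numpy as np

def landy_szalay(dd, dr, rr, n_d, n_r):
    """Landy-Szalay estimator of the two-point correlation function.
    dd, dr, rr: arrays of raw pair counts per separation bin for
    data-data, data-random, and random-random pairs;
    n_d, n_r: numbers of data and random points."""
    # Normalize each pair count by its total number of possible pairs.
    DD = dd / (n_d * (n_d - 1) / 2.0)
    DR = dr / (n_d * n_r)
    RR = rr / (n_r * (n_r - 1) / 2.0)
    # w(theta) (or xi(r)) per bin; RR must be nonzero in every bin.
    return (DD - 2.0 * DR + RR) / RR
```

The bias then follows from the ratio of the measured clustering amplitude to that predicted for the underlying dark matter.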
65

Distributed Interactive Simulation (DIS): An Overview Of The System And Its Potential Uses

Boyd, Edward L., Novits, Charles S., Boisvert, Robert A. 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / The Distributed Interactive Simulation (DIS) concept, since its inception, has been defined in terms of three separate but distinct areas of service:
• Viewing of data in the real-time environment.
• Multiple-range viewing and usage of "real-time data."
• Problems with the sharing of information through DIS.
This paper will discuss the DIS concept and some of the various methods available to display this data to users of the system.
66

Computationally efficient passivity-preserving model order reduction algorithms in VLSI modeling

Chu, Chung-kwan., 朱頌君. January 2007 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Master / Master of Philosophy
67

An intelligent function level backward state justification search for ATPG.

Karunaratne, Maddumage Don Gamini. January 1989 (has links)
This dissertation describes an innovative approach to the state justification portion of the sequential circuit automatic test pattern generation (ATPG) process. Given the absence of a stored fault, an ATPG controller invokes some combinational circuit test generation procedure, such as the D-algorithm, to identify a circuit state (goal state) and input vectors that will sensitize a selected fault. The state justification phase then finds a transfer sequence to the goal from the present state. A forward fault propagation search can be successfully guided through state space from the present state, but the forward justification search is less efficient and its failure rate is high. The backward function level search invokes inverse RTL level primitives and exploits the easy movement of data vectors in structured VLSI circuits. The examples illustrated are in AHPL; the search is equally applicable to an RTL level subset of VHDL. Combinational logic units are treated as functions, and the circuit states are partitioned into control states and data states. The search proceeds backwards over the control state space starting from the goal state node, and data states are transformed according to the control flow. Vectorized data paths in VLSI circuits and search guiding heuristics which favor convenient inverse functions keep the number of search nodes low. Partial covers, conceptually similar to singular covers in the D-algorithm, model the inverse functions of combinational logic units. The search terminates successfully when a child state node logically matches the present state and the present state values can satisfy all the constraints encountered along the search path.
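The backward search can be pictured as a breadth-first walk over predecessors of the goal state; a hedged sketch (`inverse_step` stands in for the dissertation's inverse RTL-level primitives and partial covers, which are not reproduced here):

```python
from collections import deque

def backward_justify(goal, present, inverse_step, max_depth=20):
    """Search backwards from the goal state toward the present state.
    inverse_step(state) yields (predecessor_state, input_vector) pairs,
    i.e. an inverse transition relation over the control state space.
    Returns a forward transfer sequence present -> goal, or None."""
    frontier = deque([(goal, [])])
    seen = {goal}
    while frontier:
        state, seq = frontier.popleft()
        if state == present:
            return list(reversed(seq))  # inputs applied forward in time
        if len(seq) >= max_depth:
            continue
        for pred, inp in inverse_step(state):
            if pred not in seen:
                seen.add(pred)
                frontier.append((pred, seq + [inp]))
    return None
```

In the dissertation the expansion is heuristically ordered toward convenient inverse functions rather than exhaustive, which is what keeps the node count low.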
68

Robust Measurement of the Cosmic Distance Scale Using Baryon Acoustic Oscillations

Xu, Xiaoying January 2012 (has links)
We present techniques for obtaining precision distance measurements using the baryon acoustic oscillations (BAO) through controlling systematics and reducing statistical uncertainties. Using the resulting distance-redshift relation, we can infer cosmological parameters such as w, the equation of state of dark energy. We introduce a new statistic, ω_ℓ(r_s), for BAO analysis that affords better control over systematics. It is computed by band-filtering the power spectrum P(k) or the correlation function ξ(r) to extract the BAO signal. This is conducive to several favourable outcomes. We compute ω_ℓ(r_s) from 44 simulations and compare the results to P(k) and ξ(r). We find that the acoustic scales and theoretical errors we measure are consistent between all three statistics. We demonstrate the first application of reconstruction to a galaxy redshift survey. Reconstruction is designed to partially undo the effects of non-linear structure growth on the BAO, allowing more precise measurements of the acoustic scale. We also present a new method for deriving a smooth covariance matrix based on a Gaussian model. In addition, we develop and perform detailed robustness tests on the ξ(r) model we employ to extract the BAO scale from the data. Using these methods, we obtain spherically-averaged distances to z = 0.35 and z = 0.57 from SDSS DR7 and DR9 with 1.9% and 1.7% precision respectively. Combined with WMAP7 CMB observations, SNLS3 data and BAO measurements from 6dF, we measure w = -1.08 ± 0.08 assuming a wCDM cosmology. This represents a ~8% measurement of w and is consistent with a cosmological constant. The preceding does not capture the expansion history of the universe, H(z), encoded in the line-of-sight distance scale. To disentangle H(z), we exploit the anisotropic BAO signal that arises if we assume the wrong cosmology when calculating the clustering distribution. Since we expect the BAO signal to be isotropic, we can use the magnitude of the anisotropy to separately measure H(z) and D_A(z). We apply our simple models to SDSS DR7 data and obtain a ~3.6% measurement of D_A(z = 0.35) and a ~8.4% measurement of H(z = 0.35).
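To make the distance-redshift connection concrete, here is a hedged Python sketch of the angular diameter distance in a flat wCDM cosmology (parameter defaults are illustrative, not the fits quoted above):

```python
import numpy as np
from scipy.integrate import quad

def D_A(z, H0=70.0, Om=0.3, w=-1.0):
    """Angular diameter distance in Mpc for a flat wCDM cosmology.
    Shows how a measured D_A(z) constrains w: changing w changes the
    dark-energy term (1+z)^(3(1+w)) in the expansion rate E(z)."""
    c = 299792.458  # speed of light, km/s
    E = lambda zp: np.sqrt(Om * (1 + zp)**3
                           + (1 - Om) * (1 + zp)**(3 * (1 + w)))
    # Dimensionless comoving distance integral int_0^z dz'/E(z').
    Dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c / H0) * Dc / (1.0 + z)  # flat universe: D_A = D_C/(1+z)
```

Comparing D_A at the BAO scale against such a model over several redshifts is, schematically, how the w = -1.08 ± 0.08 constraint is obtained.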
69

THE HIGH FREQUENCY AND TEMPERATURE DEPENDENCE OF DIELECTRIC PROPERTIES OF PRINTED CIRCUIT BOARD MATERIALS

Rasafar, Hamid, 1954- January 1987 (has links)
New VLSI and VHSIC devices require increased performance from electronic packages. The major challenge that must be met in materials/process development for high complexity and high speed integrated circuits is the processing of ever larger amounts of signals with low propagation delay. Hence, materials with low dielectric constant and low dissipation factor are being sought. In this investigation the dielectric properties of the most commonly used composite materials for printed circuit boards, Teflon-glass and Epoxy-glass, were measured over the frequency and temperature intervals of 100 Hz to 1 GHz and 25 to 260 °C, respectively. From the measured results, it is concluded that Teflon-glass is more suitable for the board level packaging of high performance circuits due to its lower dielectric constant and low dissipation factor.
70

Efficient reconfiguration by degradation in defect-tolerant VLSI arrays

Chen, Ing-yi, 1962- January 1989 (has links)
This thesis addresses the problem of constructing a flawless subarray from a defective VLSI/WSI array consisting of identical cells such as memory cells or processors. In contrast to the redundancy approach, in which some cells are dedicated as spares, all cells in the degradation approach are treated uniformly. Each cell can be either fault-free or defective, and a subarray containing no faulty cell is derived under the constraints of the switching and routing mechanisms. Although an extensive literature exists concerning spare allocation and reconfiguration in arrays with redundancy, little research has been published on optimal reconfiguration in a degradable array. A systematic method based on graph-theoretic models is developed to deal with the problem. The complexities of reconfiguration are analyzed for schemes using different switching mechanisms. Efficient heuristic algorithms are presented to determine a target subarray from the defective host array.
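As a simplified illustration of degradation-based reconfiguration (ignoring the switching and routing constraints the thesis analyzes), the largest square fault-free subarray of a defect map can be found with classic dynamic programming:

```python
def largest_flawless_square(faulty):
    """Side length of the largest square subarray with no faulty cell.
    `faulty` is a 2-D list of booleans: True marks a defective cell.
    A stand-in for the thesis's graph-theoretic schemes, which also
    model the switch/routing mechanisms connecting surviving cells."""
    rows, cols = len(faulty), len(faulty[0])
    size = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(rows):
        for j in range(cols):
            if not faulty[i][j]:
                # A fault-free square ending at (i, j) extends the
                # smallest of its three neighbouring squares by one.
                if i == 0 or j == 0:
                    size[i][j] = 1
                else:
                    size[i][j] = 1 + min(size[i-1][j],
                                         size[i][j-1],
                                         size[i-1][j-1])
                best = max(best, size[i][j])
    return best
```

Real reconfiguration is harder because the target subarray need not be contiguous: switches may route around faulty cells, which is where the complexity analysis and heuristics come in.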
