291

Improving initial conditions for cosmological N-body simulations

Garrison, Lehman H., Eisenstein, Daniel J., Ferrer, Douglas, Metchnik, Marc V., Pinto, Philip A. 01 October 2016 (has links)
In cosmological N-body simulations, the representation of dark matter as discrete 'macroparticles' suppresses the growth of structure, such that simulations no longer reproduce linear theory on small scales near k_Nyquist. Marcos et al. demonstrate that this is due to sparse sampling of modes near k_Nyquist and that the often-assumed continuum growing modes are not proper growing modes of the particle system. We develop initial conditions (ICs) that respect the particle linear theory growing modes and then rescale the mode amplitudes to account for growth suppression. These ICs also allow us to take advantage of our very accurate N-body code ABACUS to implement second-order Lagrangian perturbation theory (2LPT) in configuration space. The combination of 2LPT and rescaling improves the accuracy of the late-time power spectra, halo mass functions, and halo clustering. In particular, we achieve 1 per cent accuracy in the power spectrum down to k_Nyquist, versus k_Nyquist/4 without rescaling or k_Nyquist/13 without 2LPT, relative to an oversampled reference simulation. We anticipate that our 2LPT will be useful for large simulations where fast Fourier transforms are expensive and that rescaling will be useful for suites of medium-resolution simulations used in cosmic emulators and galaxy survey mock catalogues. Code to generate ICs is available at https://github.com/lgarrison/zeldovich-PLT.
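A minimal sketch of the rescaling idea described in this abstract: boost the initial mode amplitudes by the ratio of continuum to particle-system growth so that, after the suppressed growth near k_Nyquist, the realised power spectrum matches linear theory. The growth-factor functions and numbers below are illustrative assumptions, not the zeldovich-PLT implementation.

```python
import numpy as np

def rescale_mode_amplitudes(delta_k, k, D_continuum, D_particle):
    """Scale Fourier modes delta_k, sampled at wavenumbers k, by the ratio of
    continuum to particle-system growth so the realised power matches linear
    theory at the target epoch."""
    return delta_k * (D_continuum(k) / D_particle(k))

# Toy usage with illustrative growth factors (not the zeldovich-PLT values):
k = np.logspace(-2, 0, 256)                         # wavenumbers, h/Mpc
delta_k = np.random.normal(size=k.size)             # toy Gaussian mode amplitudes
D_cont = lambda kk: np.ones_like(kk)                # continuum growth, normalised
D_part = lambda kk: 1.0 - 0.3 * (kk / kk.max())**2  # toy suppression near k_Nyquist
delta_k_rescaled = rescale_mode_amplitudes(delta_k, k, D_cont, D_part)
```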
292

A novel reliability evaluation method for large engineering systems

Farag, Reda, Haldar, Achintya 06 1900 (has links)
A novel reliability evaluation method is presented for large nonlinear engineering systems excited by dynamic loading applied in the time domain. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- or second-order reliability methods (FORM/SORM) are challenging to apply when estimating the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens, instead of hundreds or thousands, of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach is proposed, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes. The method is illustrated with the help of several numerical examples. (C) 2016 Faculty of Engineering, Ain Shams University. Production and hosting by Elsevier B.V.
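A hedged sketch of the response-surface ingredient of such a hybrid scheme: fit a quadratic surrogate to a handful of deterministic evaluations of an implicit performance function g(x), then estimate the failure probability cheaply by Monte Carlo on the surrogate. The sample design, the stand-in g, and the standard-normal variables are illustrative assumptions, not the authors' SFEM/FORM scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_true(x):
    # Stand-in for one expensive deterministic dynamic analysis: g < 0 means failure.
    return 3.0 - x[..., 0]**2 - 0.5 * x[..., 1]

# A handful of "intelligently selected" points (here a small 3x3 factorial design)
pts = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
y = g_true(pts)

# Quadratic response surface g_hat(x) = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2
A = np.column_stack([np.ones(len(pts)), pts, pts**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def g_hat(x):
    X = np.column_stack([np.ones(len(x)), x, x**2])
    return X @ coef

# Monte Carlo on the cheap surrogate (standard-normal random variables assumed)
x_mc = rng.standard_normal((200_000, 2))
p_f = np.mean(g_hat(x_mc) < 0.0)
print(f"estimated failure probability ~ {p_f:.4f}")
```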
293

Cosmic voids and void lensing in the Dark Energy Survey Science Verification data

Sánchez, C., Clampitt, J., Kovacs, A., Jain, B., García-Bellido, J., Nadathur, S., Gruen, D., Hamaus, N., Huterer, D., Vielzeuf, P., Amara, A., Bonnett, C., DeRose, J., Hartley, W. G., Jarvis, M., Lahav, O., Miquel, R., Rozo, E., Rykoff, E. S., Sheldon, E., Wechsler, R. H., Zuntz, J., Abbott, T. M. C., Abdalla, F. B., Annis, J., Benoit-Lévy, A., Bernstein, G. M., Bernstein, R. A., Bertin, E., Brooks, D., Buckley-Geer, E., Rosell, A. Carnero, Kind, M. Carrasco, Carretero, J., Crocce, M., Cunha, C. E., D'Andrea, C. B., da Costa, L. N., Desai, S., Diehl, H. T., Dietrich, J. P., Doel, P., Evrard, A. E., Neto, A. Fausti, Flaugher, B., Fosalba, P., Frieman, J., Gaztanaga, E., Gruendl, R. A., Gutierrez, G., Honscheid, K., James, D. J., Krause, E., Kuehn, K., Lima, M., Maia, M. A. G., Marshall, J. L., Melchior, P., Plazas, A. A., Reil, K., Romer, A. K., Sanchez, E., Schubnell, M., Sevilla-Noarbe, I., Smith, R. C., Soares-Santos, M., Sobreira, F., Suchyta, E., Tarle, G., Thomas, D., Walker, A. R., Weller, J. 11 February 2017 (has links)
Cosmic voids are usually identified in spectroscopic galaxy surveys, where 3D information about the large-scale structure of the Universe is available. Although an increasing amount of photometric data is being produced, its potential for void studies is limited since photometric redshifts induce line-of-sight position errors of >= 50 Mpc h(-1)which can render many voids undetectable. We present a new void finder designed for photometric surveys, validate it using simulations, and apply it to the high-quality photo-z redMaGiC galaxy sample of the DES Science Verification data. The algorithm works by projecting galaxies into 2D slices and finding voids in the smoothed 2D galaxy density field of the slice. Fixing the line-of-sight size of the slices to be at least twice the photo-z scatter, the number of voids found in simulated spectroscopic and photometric galaxy catalogues is within 20 per cent for all transverse void sizes, and indistinguishable for the largest voids (R-v >= 70 Mpc h(-1)). The positions, radii, and projected galaxy profiles of photometric voids also accurately match the spectroscopic void sample. Applying the algorithm to the DES-SV data in the redshift range 0.2 < z < 0.8, we identify 87 voids with comoving radii spanning the range 18-120 Mpc h(-1), and carry out a stacked weak lensing measurement. With a significance of 4.4 sigma, the lensing measurement confirms that the voids are truly underdense in the matter field and hence not a product of Poisson noise, tracer density effects or systematics in the data. It also demonstrates, for the first time in real data, the viability of void lensing studies in photometric surveys.
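A minimal sketch of the 2D void-finding strategy described above: project the galaxies of one line-of-sight slice onto a grid, smooth the density field, and flag significant local minima as void-centre candidates. Grid size, smoothing scale and the underdensity threshold are illustrative choices, not the DES-SV pipeline's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def find_voids_2d(x, y, box=1000.0, ngrid=256, smooth_pix=2.0, threshold=-0.3):
    """Return candidate void centres: smoothed-density local minima below threshold."""
    counts, _, _ = np.histogram2d(x, y, bins=ngrid, range=[[0, box], [0, box]])
    density = gaussian_filter(counts, smooth_pix)
    contrast = density / density.mean() - 1.0
    # local minimum over the 3x3 neighbourhood (periodic wrap for simplicity)
    neigh_min = np.minimum.reduce(
        [np.roll(np.roll(density, i, 0), j, 1) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    )
    minima = np.argwhere((contrast < threshold) & (density == neigh_min))
    cell = box / ngrid
    return [(ix * cell, iy * cell) for ix, iy in minima]

# Toy usage: a uniform random catalogue should yield few significant underdensities
rng = np.random.default_rng(1)
centres = find_voids_2d(rng.uniform(0, 1000, 50_000), rng.uniform(0, 1000, 50_000))
```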
294

Optimal cosmology from gravitational lensing: utilising the magnification and shear signals

Duncan, Christopher Alexander James January 2015 (has links)
Gravitational lensing studies the distortions of a distant galaxy’s observed size, shape or flux due to the tidal bending of photons by matter between the source and observer. Such distortions can be used to infer knowledge of the mass distribution of the intervening matter, such as the dark matter halos in which clusters of individual galaxies may reside, or of cosmology through the statistics of the matter density of large-scale structure and geometrical factors. In particular, gravitational lensing has the advantage that it is insensitive to the nature of the lensing matter. However, contamination of the signal by correlations between galaxy shape or size and local environment complicates a lensing analysis. Further, measurement of traditional lensing estimators is made more difficult by limitations on observations, in the form of atmospheric distortions or optical limits of the telescope itself. As a result, there has been a large effort within the lensing community to develop methods to either reduce or remove these contaminants, motivated largely by stringent science requirements for current and forthcoming surveys such as CFHTLenS, DES, LSST, HSC, Euclid and others. With the wealth of data from these wide-field surveys, it is more important than ever to understand the full range of independent probes of cosmology at our disposal. In particular, it is desirable to understand how each probe may be used, individually and in conjunction, to maximise the information of a lensing analysis and minimise or mitigate the systematics of each. With this in mind, I investigate the use of galaxy clustering measurements using photometric redshift information, including a contribution from flux magnification, as a probe of cosmology. I present cosmological forecasts when clustering data alone are used, and when clustering is combined with a cosmic shear analysis. I consider two types of clustering analysis: firstly, clustering with only redshift auto-correlations in tomographic redshift bins; secondly, clustering using all available redshift bin correlations. Finally, I consider how inferred cosmological parameters may be biased using each analysis when flux magnification is neglected. Results are presented for a Stage–III ground-based survey and a Stage–IV space-based survey, modelled with photometric redshift errors and values for the slope of the luminosity function inferred from CFHTLenS catalogues. I find that combining clustering information with shear gives a significant improvement in cosmological parameter constraints, with the largest improvement found when all redshift bins are included in the analysis. The addition of galaxy-galaxy lensing gives further improvement, with a full combined analysis improving constraints on dark energy parameters by a factor of > 3. The presence of flux magnification in a clustering analysis does not significantly affect the precision of cosmological constraints when combined with cosmic shear and galaxy-galaxy lensing. However, if magnification is neglected, inferred cosmological parameter values are biased, with biases in some cosmological parameters found to be larger than the statistical errors. We find that a combination of clustering, cosmic shear and galaxy-galaxy lensing can provide a significant reduction in statistical errors relative to each analysis individually; however, care must be taken to measure and model flux magnification.
Finally, I consider how measurements of galaxy size and flux may be used to constrain the dark matter profile of a foreground lens, such as galaxy- or galaxy-cluster-scale dark matter halos. I present a method of constructing probability distributions for halo profile free parameters using Bayes’ Theorem, provided the intrinsic size-magnitude distribution may be measured from data. I investigate the use of this method on mock clusters, with the aim of investigating the precision and accuracy of the returned parameter constraints under certain conditions. As part of this analysis, I quantify the size and significance of inaccuracies in the dark matter reconstruction that result from limitations in the data from which the sample and size-magnitude distribution are obtained. This method is applied to public data from the Space Telescope A901/902 Galaxy Evolution Survey (STAGES), and results are presented for the four STAGES clusters using measurements of source galaxy size and magnitude, and a combination of both. I find results consistent with existing shear measurements when galaxy magnitudes are used, but interestingly inconsistent results when galaxy size measurements are used. The simplifying assumptions and limitations of the analysis are discussed, and extensions to the method are presented.
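A compact sketch of the Bayesian size-magnification idea in the final paragraph: given an intrinsic (unlensed) size distribution measured from data, the posterior for a halo-profile parameter follows from the likelihood of each observed size once it is de-magnified by the model magnification at the galaxy's position. The lognormal intrinsic distribution and the toy magnification profile below are assumptions for illustration, not the STAGES analysis.

```python
import numpy as np
from scipy.stats import lognorm

p_intrinsic = lognorm(s=0.5, scale=1.0)       # assumed intrinsic size distribution

def mu_model(theta, r):
    """Toy magnification profile of a lens of strength theta at projected radius r."""
    return 1.0 + theta / (r + 0.1)

def log_posterior(theta, sizes_obs, radii):
    if theta <= 0:                            # flat positivity prior assumed
        return -np.inf
    mu = mu_model(theta, radii)
    s_int = sizes_obs / np.sqrt(mu)           # de-magnified (intrinsic) sizes
    # Jacobian 1/sqrt(mu) from the change of variables s_obs -> s_int
    return np.sum(p_intrinsic.logpdf(s_int) - 0.5 * np.log(mu))
```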
295

Mezinárodní smlouvy o dodávce investičních celků / International Contract for Large Industrial Works

Kohout, Petr January 2011 (has links)
Large industrial works represent an interesting subject of foreign trade in which countries may hold a competitive advantage, and they also form a large part of a state's economy. Contracts covering such transactions can be very difficult to draft, whether because of the volume and duration of the supply, or because of possible problems with funding, the political situation at the place of delivery, and so on. The aim of this thesis is to introduce the basic concepts and principles connected with international trade in large industrial works. The thesis focuses mainly on contracting options, i.e. the general approaches to contracting for the supply of capital equipment. The main theme is the specification and form of the contract for the supply of capital equipment, as well as other resources that can be used when drafting the contract, such as commercial conventions and more. The thesis is logically divided into six chapters (including introduction and conclusion). The first part of the thesis familiarizes the reader with the basic concepts: "large industrial works" and "international business transactions."...
296

LPCVD TUNGSTEN MULTILAYER METALLIZATION FOR VLSI SYSTEMS.

KRISHT, MUHAMMED HUSSEIN. January 1985 (has links)
Advances in microlithography, dry etching, device scaling, ion implantation, process control, and computer-aided design have brought integrated circuit technology into the era of VLSI circuits. Those circuits are characterized by high packing density, improved performance, complex circuitry, and large chip sizes. Interconnects and their spacing dominate the chip area of VLSI circuits, and they degrade circuit performance through unacceptably high time delays. Multilayer metallization enables shorter interconnects, ease of design, and yet higher packing density for VLSI circuits. It is shown in this dissertation that tungsten films deposited in a cold-wall LPCVD reactor offer a viable solution to the problems of VLSI multilayer interconnects. Experiments showed that LPCVD tungsten films have good uniformity, high purity, low resistivity, low stress, good adherence, and are readily patterned into high-resolution lines. Moreover, a multilayer interconnect system consisting of three layers of tungsten metallization followed by a fourth layer of aluminum metallization has been designed, fabricated and tested. The interlevel dielectric used to separate the metal layers was CVD phosphorus-doped silicon dioxide. Low-resistance ohmic contacts were achieved to heavily doped silicon, and low-resistance tungsten-tungsten intermetallic contacts were obtained. In addition to excellent step coverage, high electromigration resistance of the interconnects was realized. Finally, CMOS devices and logic gates were successfully fabricated and tested using tungsten multilayer metallization schemes.
297

Large-Amplitude Vibration of Imperfect Rectangular, Circular and Laminated Plate with Viscous Damping

Huang, He 18 December 2014 (has links)
Large-amplitude vibration of thin plates and shells has been a critical design issue for many engineering structures. Increasingly stringent safety requirements and the discovery of new materials with remarkably superior properties have further focused research attention on this area. This thesis deals with the vibration problem of rectangular, circular and angle-ply composite plates. The vibration can be triggered by an initial vibration amplitude, an initial velocity, or both. Four types of boundary conditions, combining simply supported and clamped edges with in-plane movable/immovable constraints, are considered. To solve the differential equation arising from the vibration problem, Lindstedt's perturbation technique and the Runge-Kutta method are applied. In previous works, this problem was solved by Lindstedt's perturbation technique, which leads to a quick approximate solution; yet, because of its underlying mathematical assumptions, the solution is no longer accurate for large-amplitude vibration, especially when a significant amount of imperfection is considered. Thus the Runge-Kutta method is introduced to solve the problem numerically. The comparison between the two methods shows that Lindstedt's perturbation technique is generally valid for amplitudes within half the plate thickness. For a structure with a sufficiently large geometric imperfection, the vibration can be represented by the well-known backbone curve transforming from softening-spring to hardening-spring behaviour. Through parameter variation, the effects of imperfection, damping ratio, boundary conditions, wave numbers, Young's modulus and a dozen more related properties are studied. Other interesting results, such as dynamic failure caused by out-of-bound vibration and the change of vibration mode due to damping, are also revealed.
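A hedged sketch of the numerical route described above: a single-mode reduction of the large-amplitude plate equation gives a damped Duffing-type oscillator, which a Runge-Kutta integrator handles directly where a perturbation expansion loses accuracy. The coefficients and the imperfection amplitude below are illustrative, not the thesis values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# damping ratio, cubic nonlinearity, quadratic (imperfection) coupling, imperfection
zeta, beta, gamma, w0 = 0.02, 0.4, 0.6, 0.5

def rhs(t, y):
    w, v = y
    return [v, -2 * zeta * v - w - beta * w**3 - gamma * w0 * w**2]

# initial vibration amplitude of 1.5 plate thicknesses, zero initial velocity
sol = solve_ivp(rhs, (0.0, 100.0), [1.5, 0.0], method="RK45", max_step=0.01)
peak_amplitude = np.max(np.abs(sol.y[0]))
```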
298

Performance analyses for large-scale antennas equipped two-way AF relaying and heterogeneous networks

Dai, Yongyu 14 September 2016 (has links)
In this dissertation, performance analyses are carried out for two-way amplify-and-forward (AF) relaying and heterogeneous networks (HetNets) equipped with large-scale antenna arrays. Energy-efficiency-oriented design becomes more important for the next generation of wireless systems, which motivates us to study strong candidates such as massive multiple-input multiple-output (MIMO) combined with cooperative relaying and HetNets. Based on the achievable-rate analysis for massive MIMO two-way AF relaying, effective power allocation schemes are presented to further improve system performance. Focusing on the MIMO downlinks in the HetNet, mean square error (MSE) based precoding schemes are designed and employed by the macro base station (BS) and the small cell (SC) nodes. Considering a HetNet where both the macro BS and the SC nodes are equipped with large-scale antenna arrays, capacity lower bounds are derived, followed by the proposed user scheduling algorithms. The work on multi-pair two-way AF relaying with linear processing considers a system where multiple sources exchange information via a relay equipped with massive antennas. Given that channel estimation is non-ideal, and that the relay employs either maximum-ratio combining/maximum-ratio transmission (MRC/MRT) or zero-forcing reception/zero-forcing transmission (ZFR/ZFT) beamforming, we derive two corresponding closed-form lower-bound expressions for the ergodic achievable rate of each pair of sources. The closed-form expressions enable us to design an optimal power allocation (OPA) scheme that maximizes the sum spectral efficiency under certain practical constraints. As the antenna array size tends to infinity and the signal-to-noise ratios become very large, asymptotically optimal power allocation schemes in simple closed form are derived. The capacity lower bounds are verified by simulation to be accurate predictors of the system performance, and the proposed OPA outperforms equal power allocation (EPA). It is also found that, in the asymptotic regime, when MRC/MRT is used at the relay and the end-to-end large-scale fading factors among all pairs are equal, the optimal power allocated to a user is inversely proportional to the large-scale fading factor of the channel from the user to the relay, while OPA approaches EPA when ZFR/ZFT is adopted. The work on MSE-based precoding design for MIMO downlinks investigates a HetNet system consisting of a macro tier overlaid with a second tier of SCs. First, a new minimization problem based on the sum MSE of all users is proposed, aiming to design a set of macro cell (MC) and SC transmit precoding matrices or vectors. To solve it, two different algorithms are presented. One is a relaxed-constraints based alternating optimization (RAO), realized by efficient alternating optimization and by relaxing non-convex constraints to convex ones. The other is an unconstrained alternating optimization with normalization (UAON), implemented by introducing the constraints into the iterations through a normalization operation. Second, a two-level precoder based on separate MSE minimization is proposed, considering the signal and interference terms corresponding to the macro tier and the individual SCs separately. Furthermore, robust precoders are designed correspondingly for imperfectly estimated channels. Simulation results show that the sum-MSE based RAO algorithm provides the best MSE performance among the proposed schemes under a number of system configurations.
When the number of antennas at the macro BS is sufficiently large relative to the number of MUEs, the MSE of the separate MSE-based precoding is found to approach those of RAO and UAON. Together, this thesis provides a suite of three new precoding techniques that is expected to meet the needs of a broad range of HetNet environments with a balance between performance and complexity. The work on a large-scale HetNet studies the performance of MIMO downlink systems in which both the macro BS and the SC nodes are equipped with large-scale antenna arrays. The large-scale antenna arrays at both the macro BS and the SC nodes are assumed to employ MRT or ZFT precoding and to transmit data streams to the served users simultaneously. A new pilot reuse pattern among small cells is proposed for channel estimation. Taking into account imperfect CSI, lower capacity bounds for MRT and ZFT are derived, respectively, in closed-form expressions involving only statistical CSI. Asymptotic analyses for massive arrays are then presented, from which we obtain the optimal antenna-number ratio between the BS and the SCs under specific power scaling laws. Subsequently, two user scheduling algorithms, a greedy scheduling algorithm and an asymptotic scheduling algorithm (ASA), are proposed based on the derived capacity lower bounds and the asymptotic analyses, respectively. ASA is demonstrated to be a near-optimal user scheduling scheme in the asymptotic regime and has low complexity. Finally, the derived closed-form achievable rate expressions are verified to be accurate predictors of the system performance by Monte Carlo simulations. Numerical results demonstrate the effectiveness of the asymptotic analysis and the proposed user scheduling schemes.
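A hedged sketch of the two linear processing options named in the abstract above, for a node with M antennas serving K single-antenna users over a channel matrix H (K x M). The dimensions and normalisation are illustrative, not the dissertation's system model.

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 64, 8                                   # relay/BS antennas, single-antenna users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# MRT: matched filter, W = H^H (up to a power normalisation)
W_mrt = H.conj().T
W_mrt /= np.linalg.norm(W_mrt)

# ZFT: pseudo-inverse, W = H^H (H H^H)^-1, cancels inter-user interference
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W_zf /= np.linalg.norm(W_zf)

# Effective channels: H @ W_zf is a scaled identity; MRT leaves residual interference
print(np.round(np.abs(H @ W_zf), 3))
print(np.round(np.abs(H @ W_mrt), 3))
```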
299

Aerodynamic and thermal modeling of effusion cooling systems in Large Eddy Simulation

Bizzari, Romain 05 November 2018 (has links) (PDF)
Numerical simulation is progressively taking on importance in the design of an aeronautical engine. However, for the particular case of cooling devices, the high number of sub-millimetric cooling holes is an obstacle for computational simulations. A classical approach models the effusion cooling by homogenisation; it allows a full combustor to be simulated but fails to represent the jet penetration and mixing. A new approach, named the thickened-hole model, was developed during this thesis to overcome this issue. Work on improving the mesh resolution in key areas through an automatic adaptive method is also presented, leading to a clear breakthrough. In parallel, as the flame-tube temperature is a cornerstone of combustor durability, a low-cost approach is proposed to predict it. To meet design time constraints, it is based on thermal modelling instead of a direct thermal resolution.
300

Energy reconstruction on the LHC ATLAS TileCal upgraded front end: feasibility study for a sROD co-processing unit

Cox, Mitchell Arij 10 May 2016 (has links)
Dissertation presented in fulfilment of the requirements for the degree of Master of Science in Physics, 2016 / The Phase-II upgrade of the Large Hadron Collider at CERN in the early 2020s will enable an order of magnitude increase in the data produced, unlocking the potential for new physics discoveries. In the ATLAS detector, the upgraded Hadronic Tile Calorimeter (TileCal) Phase-II front-end read-out system is currently being prototyped to handle a total data throughput of 5.1 TB/s, up from the current 20.4 GB/s. The FPGA-based Super Read Out Driver (sROD) prototype must perform an energy reconstruction algorithm on 2.88 GB/s of raw data, or 275 million events per second. Due to the very high level of proficiency required and the time-consuming nature of FPGA firmware development, it may be more effective to implement certain complex energy reconstruction and monitoring algorithms on a general-purpose, CPU-based sROD co-processor. Hence, the feasibility of a general-purpose ARM System on Chip based co-processing unit (PU) for the sROD is determined in this work. A PCI-Express test platform was designed and constructed to link two ARM Cortex-A9 SoCs via their PCI-Express Gen-2 x1 interfaces. Test results indicate that the latency of the PCI-Express interface is sufficiently low and the data throughput is superior to that of alternative interfaces such as Ethernet, for use as an interconnect from the SoCs to the sROD. CPU performance benchmarks were performed on five ARM development platforms to determine CPU integer, floating-point and memory system performance as well as energy efficiency. To complement the benchmarks, Fast Fourier Transform and Optimal Filtering (OF) applications were also tested. Based on the test results, in order for the PU to process 275 million events per second with OF, within the 6 μs timing budget of the ATLAS triggering system, a cluster of three Tegra K1, Cortex-A15 SoCs connected to the sROD via a Gen-2 x8 PCI-Express interface would be suitable. A high-level design for the PU is proposed which surpasses the requirements for the sROD co-processor and can also be used in a general-purpose, high-data-throughput system, with 80 Gb/s Ethernet and 15 GB/s PCI-Express throughput, using four X-Gene SoCs.
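A minimal sketch of the Optimal Filtering (OF) reconstruction mentioned above: the deposited energy is recovered as a weighted sum of the digitised samples, E = sum_i a_i * s_i, with the weights a_i chosen offline from the known pulse shape and noise covariance. The weights and sample values below are illustrative assumptions, not the TileCal constants.

```python
import numpy as np

def of_energy(samples, weights):
    """Amplitude (energy) estimate from one set of digitised samples."""
    return float(np.dot(weights, samples))

weights = np.array([-0.38, -0.36, 0.18, 0.81, 0.31, -0.12, -0.22])  # assumed a_i
samples = np.array([52.0, 55.0, 120.0, 310.0, 180.0, 80.0, 56.0])   # assumed ADC counts
energy = of_energy(samples, weights)
```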
