261

Vascular and metabolic profile of 5-year sustained hypertensive versus normotensive black South Africans / Melissa Maritz

Maritz, Melissa January 2014 (has links)
Motivation A close association exists between hypertension and arterial stiffness. Whether the increased arterial stiffness seen in hypertensives is due to structural or functional adaptations in the vasculature is uncertain. Hypertension is more common in black populations, which also show increased arterial stiffness and a higher stroke prevalence than white populations. Arterial stiffening, or a loss of arterial distensibility, increases the risk for cardiovascular events, including stroke and heart failure, as it increases the afterload on the heart and creates a higher pulsatile load on the microcirculation. The stiffness of the carotid artery is associated with cardiovascular events, like stroke, and all-cause mortality. Furthermore, carotid stiffness is independently associated with stroke, probably because stiffening of the carotid artery may lead to a higher pressure load on the brain. Inflammation, endothelial activation, dyslipidaemia, hyperglycaemia and health behaviours may also influence hypertension and arterial stiffness. Limited information is available on these associations in black South Africans. The high prevalence of hypertension and cardiovascular disease in blacks creates the need for effective prevention and intervention programs in South Africa. Aim We aimed to compare the characteristics of the carotid artery between 5-year sustained hypertensive and normotensive black participants. Furthermore, we aimed to determine whether blood pressure, conventional cardio-metabolic risk factors, markers of inflammation, endothelial activation and measures of health behaviours are related to these carotid characteristics. Methodology This sub-study forms part of the South African leg of the multi-national Prospective Urban and Rural Epidemiology (PURE) study. The participants of the PURE-SA study were from the North West Province of South Africa; baseline data collection took place in 2005 (N=2010), and follow-up data were collected five years later, in 2010 (N=1288). HIV-free participants who were either hypertensive or normotensive (N=592) for the 5-year period, and who had complete datasets, were included in this sub-study. The study population thus consists of a group of 5-year sustained normotensive (n=241) and hypertensive (n=351) black participants. Anthropometric measurements included height, weight, waist circumference and the calculation of body mass index (BMI). We included several cardiovascular measurements, namely brachial systolic and diastolic blood pressure, heart rate, central systolic blood pressure (cSBP), central pulse pressure and carotid-dorsalis pedis pulse wave velocity. Carotid characteristics included distensibility, intima-media thickness (IMT), cross-sectional wall area, and maximum and minimum lumen diameter. Biochemical variables that were determined included HIV status, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, triglycerides, fasting glucose, glycated haemoglobin (HbA1c), creatinine clearance, interleukin-6, C-reactive protein, intercellular adhesion molecule-1 and vascular adhesion molecule-1. Health behaviours were quantified by measuring γ-glutamyltransferase and by self-reported alcohol, tobacco and anti-hypertensive, anti-inflammatory and lipid-lowering medication use. We compared the normotensive and hypertensive groups using independent t-tests and chi-square tests.
The carotid characteristics were plotted according to quartiles of central systolic blood pressure using standard analysis of variance (ANOVA) and analysis of covariance (ANCOVA). Pearson correlations performed in the normotensive and hypertensive groups helped to determine covariates for the multiple regression models. We used forward stepwise multiple regression analyses with the carotid characteristics as dependent variables to determine independent associations between variables. Results and Conclusion The cardiovascular measures, including pulse wave velocity, were significantly higher in the hypertensive group (all p≤0.024). The lipid profile, markers of inflammation, endothelial activation and glycaemia, as well as health behaviours, did not differ between the hypertensives and normotensives after adjustment for age, sex, waist circumference, γ-glutamyltransferase, tobacco use and anti-hypertensive medication use. After similar adjustments, all carotid characteristics, except IMT, were significantly different between the groups (all p≤0.008). However, upon additional adjustment for cSBP, significance was lost. The stiffness and functional adaptation seen in this study are not explained by the classic cardio-metabolic risk factors, markers of endothelial activation or health behaviours of the participants. The differences in arterial stiffness between the normotensive and hypertensive groups may instead be explained by the increased distending pressure in the hypertensive group. Despite their hypertensive status, there appear to be no structural adaptations in these hypertensive Africans. / MSc (Physiology), North-West University, Potchefstroom Campus, 2015
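As a rough illustration of the group-comparison and adjustment steps described in this abstract, a minimal Python sketch might look as follows; the file name, column names and model form are illustrative assumptions, not the thesis's actual data or software.

```python
# Minimal sketch (assumed file and column names) of an unadjusted group
# comparison followed by an ANCOVA-style adjusted comparison.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("pure_sa_subset.csv")  # hypothetical one-row-per-participant table

ht = df[df.group == "hypertensive"]
nt = df[df.group == "normotensive"]

# Independent t-test on carotid-dorsalis pedis pulse wave velocity
print(stats.ttest_ind(ht.pwv, nt.pwv, equal_var=False))

# Group difference in carotid distensibility after adjusting for age, sex,
# waist circumference, gamma-glutamyltransferase, tobacco and BP medication use
model = smf.ols(
    "distensibility ~ group + age + sex + waist + ggt + tobacco + bp_meds",
    data=df,
).fit()
print(model.summary())
```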
263

Simulated Annealing : Simulated Annealing for Large Scale Optimization in Wireless Communications / : Simulated Annealing using Matlab Software

Sakhavat, Tamim, Grissa, Haithem, Abdalrahman, Ziyad January 2012 (has links)
In this thesis a simulated annealing algorithm is employed as an optimization tool for a large scale optimization problem in wireless communication. In this application, we have 100 candidate positions for transmitting antennas and 100 positions for receivers, and a channel between each pair of positions in the two areas. Our aim is to find, say, the best 3 positions, such that the channel capacity is maximized. The number of possible combinations is huge, so finding the best channel would take a very long time using an exhaustive search. To solve this problem, we use a simulated annealing algorithm to estimate the best answer. The simulated annealing algorithm chooses a random element and then, as in a local search algorithm, compares the selected element with its neighbourhood. If the selected element is the maximum among its neighbours, it is a local maximum. The strength of the simulated annealing algorithm is its ability to escape from local maxima by using a random mechanism that mimics Boltzmann statistics.
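As a rough sketch of this approach (not the authors' Matlab code), selecting 3 transmitter positions by simulated annealing could look like the following Python example; the random channel matrix and the equal-power log-det capacity objective are illustrative assumptions.

```python
# Hypothetical sketch: simulated annealing to pick 3 transmitter positions
# (out of 100) that maximise a MIMO-style channel capacity.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((100, 100)) + 1j * rng.standard_normal((100, 100))  # assumed channel gains
SNR = 10.0
K = 3  # number of transmitter positions to select

def capacity(selection):
    """Equal-power log-det capacity of the sub-channel formed by the chosen rows."""
    Hs = H[list(selection), :]
    gram = np.eye(K) + (SNR / K) * Hs @ Hs.conj().T
    return np.real(np.log2(np.linalg.det(gram)))

def neighbour(selection):
    """Swap one selected position for a randomly chosen unselected one."""
    sel = list(selection)
    out = [i for i in range(H.shape[0]) if i not in selection]
    sel[rng.integers(K)] = out[rng.integers(len(out))]
    return tuple(sorted(sel))

current = tuple(sorted(rng.choice(100, size=K, replace=False)))
current_val = capacity(current)
best, best_val = current, current_val
T = 1.0
for step in range(20000):
    cand = neighbour(current)
    cand_val = capacity(cand)
    delta = cand_val - current_val
    # Metropolis rule: always accept improvements, sometimes accept worse moves
    if delta > 0 or rng.random() < np.exp(delta / T):
        current, current_val = cand, cand_val
        if current_val > best_val:
            best, best_val = current, current_val
    T *= 0.9995  # geometric cooling schedule

print("best positions:", best, "capacity (bits/s/Hz):", round(best_val, 2))
```

The Metropolis acceptance step is what lets the search climb out of local maxima early on, while the cooling schedule gradually turns it into a greedy local search.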
264

SOLID SOURCE CHEMICAL VAPOR DEPOSITION OF REFRACTORY METAL SILICIDES FOR VLSI INTERCONNECTS.

HEY, HANS PETER WILLY. January 1984 (has links)
Low-resistance gate-level interconnects can free the design of VLSI circuits from the R-C time constant limitations currently imposed by poly-silicon based technology. The hotwall low pressure chemical vapor deposition of molybdenum and tungsten silicide from their commercially available hexacarbonyls and silane is presented as a deposition method producing IC-compatible gate electrodes of reduced resistivity. Good hotwall deposition uniformity is demonstrated at low temperatures (200 to 300 °C). The as-deposited films are amorphous by x-ray diffraction and can be crystallized in subsequent anneal steps with anneal-induced film shrinkage of less than 12 percent. Surface oxide formation is possible during this anneal cycle. Auger spectroscopy and Rutherford backscattering results indicate that silicon-rich films can be deposited, and that the concentrations of carbon and oxygen incorporated from the carbonyl source are a function of the deposition parameters. At higher deposition temperatures and larger source throughput the impurity incorporation is markedly reduced. Good film adhesion and excellent step coverage are observed. Electrical measurements show that the film resistivities after anneal are comparable to those of sputtered or evaporated silicide films. Bias-temperature capacitance-voltage measurements demonstrate that direct silicide gate electrodes have properties comparable to standard metal-oxide-silicon systems. The substitution of CVD silicides for standard MOS gate metals appears to be transparent in terms of transistor performance, except for work function effects on the threshold voltage. With its large wafer throughput and good step coverage, hotwall low pressure silicide deposition thus promises to become a viable alternative to the poly-silicon technology currently in use.
265

An intelligence driven test system for detection of stuck-open faults in CMOS sequential circuits

Sagahyroon, Assim Abdelrahman January 1989 (has links)
This paper discusses an intelligence driven test system for generation of test sequences for stuck-open faults in CMOS VLSI sequential circuits. The networks in system evaluation are compiled from an RTL representation of the digital system. To excite a stuck-open fault it is only necessary that the output of the gate containing the fault take on opposite values during two successive clock periods. Excitation of the fault must therefore constrain two successive input/present-state vectors, referred to in the paper as the pregoal and goal nodes respectively. An initialization procedure is used to determine the pregoal state. Two theorems are proved establishing a 1-1 correspondence between stuck-at and stuck-open faults. As a result the D-algorithm may be used to determine the goal node. Determining the nodes was tried on many circuits and a high success rate was achieved. The pregoal is observed to have more "don't care" values. The next step is a "sensitization search" for an input sequence (X(s)) that drives the memory elements to the determined pregoal and goal states over two consecutive clock periods. It is easier for the search to reach the pregoal due to the greater number of "don't cares." Following a "propagation search" for an input sequence (X(p)) to drive the effect of the fault to an external output, the sequence of vectors (X(s)), (X(p)) will be passed to an "ALL-Fault Simulator" for verification. The simulation will be clock mode but will represent the output retention resulting from the stuck-open faults. One measure of the value of a special search procedure for stuck-open faults can be obtained by comparing the results employing this search with results obtained by searching only for the analogous stuck-at faults. A first order prediction would be a likelihood less than 0.5 that the predecessor of a stuck-at goal node would excite an opposite output in the gate containing the fault. A comparison of the two methods using the stuck-open "All-Fault Simulator" is presented.
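A toy illustration of the excitation condition stated above (not part of the thesis's test system): a candidate pregoal/goal vector pair excites a stuck-open fault in a gate only if it drives that gate's output to opposite values over two successive clock periods. The sketch below assumes a simple two-input NAND gate.

```python
# Hypothetical sketch of the stuck-open excitation condition: the pregoal and
# goal vectors must toggle the output of the gate containing the fault.
def nand(a, b):
    return int(not (a and b))

def excites_stuck_open(gate, pregoal_inputs, goal_inputs):
    """True if the two successive input vectors drive the gate to opposite outputs."""
    return gate(*pregoal_inputs) != gate(*goal_inputs)

# (1,1) -> (0,1) drives a NAND output 0 -> 1, so a stuck-open fault in its
# pull-up network could be exposed; (0,1) -> (1,0) leaves the output at 1.
print(excites_stuck_open(nand, (1, 1), (0, 1)))  # True
print(excites_stuck_open(nand, (0, 1), (1, 0)))  # False
```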
266

Tracing large-scale structure with radio sources

Lindsay, Samuel Nathan January 2015 (has links)
In this thesis, I investigate the spatial distribution of radio sources, and quantify their clustering strength over a range of redshifts, up to z ≈ 2.2, using various forms of the correlation function measured with data from several multi-wavelength surveys. I present the optical spectra of 30 radio AGN (S1.4 > 100 mJy) in the GAMA/H-ATLAS fields, for which emission line redshifts could be deduced, from observations of 79 target sources with the EFOSC2 spectrograph on the NTT. The mean redshift of these sources is z = 1.2; 12 were identified as quasars (40 per cent), and 6 redshifts (out of 24 targets) were found for AGN hosts to multiple radio components. While obtaining spectra for hosts of these multi-component sources is possible, their lower success rate highlights the difficulty in achieving a redshift-complete radio sample. Taking an existing spectroscopic redshift survey (GAMA) and radio sources from the FIRST survey (S1.4 > 1 mJy), I then present a cross-matched radio sample with 1,635 spectroscopic redshifts with a median value of z = 0.34. The spatial correlation function of this sample is used to find the redshift-space (s0) and real-space correlation lengths (r0 ≈ 8.2 h⁻¹ Mpc), and a mass bias of ≈1.9. Insight into the redshift-dependence of these quantities is gained by using the angular correlation function and Limber inversion to measure the same spatial clustering parameters. Photometric redshifts from SDSS/UKIDSS are incorporated to produce a larger matched radio sample at z ≈ 0.48 (and low- and high-redshift subsamples at z ≈ 0.30 and z ≈ 0.65), while their redshift distribution is subtracted from that taken from the SKADS radio simulations to estimate the redshift distribution of the remaining unmatched sources (z ≈ 1.55). The observed bias evolution over this redshift range is compared with model predictions based on the SKADS simulations, with good agreement at low redshift. The bias found at high redshift significantly exceeds these predictions, however, suggesting a more massive population of galaxies than expected, either due to the relative proportions of different radio sources, or a greater typical halo mass for the high-redshift sources. Finally, the reliance on a model redshift distribution to reach higher redshifts is removed, as the angular cross-correlation function is used with deep VLA data (S1.4 > 90 μJy) and optical/IR data from VIDEO/CFHTLS (Ks < 23.5) over 1 square degree. With high-quality photometric redshifts up to z ≈ 4, and a high signal-to-noise clustering measurement (due to the ≈100,000 Ks-selected galaxies), I am able to find the bias of a matched sample of only 766 radio sources (as well as of the VIDEO sources), divided into 4 redshift bins reaching a median bias at z ≈ 2.15. Again, at high redshift, the measured bias appears to exceed the prediction made from the SKADS simulations. Applying luminosity cuts to the radio sample at L > 10²³ W Hz⁻¹ and higher (removing any non-AGN sources), I find a bias of 8–10 at z ≈ 1.5, considerably higher than for the full sample, and consistent with the more numerous FRI AGN having similar mass to the FRIIs (M ≈ 10¹⁴ M⊙), contrary to the assumptions made in the SKADS simulations. Applying this adjustment to the model bias produces a better fit to the observations for the FIRST radio sources cross-matched with GAMA/SDSS/UKIDSS, as well as for the high-redshift radio sources in VIDEO.
Therefore, I have shown that we require a more robust model of the evolution of AGN, and their relation to the underlying dark matter distribution. In particular, understanding these quantities for the abundant FRI population is crucial if we are to use such sources to probe the cosmological model as has been suggested by a number of authors (e.g. Raccanelli et al., 2012; Camera et al., 2012; Ferramacho et al., 2014).
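For readers unfamiliar with the clustering statistics used throughout this abstract, a minimal Landy-Szalay estimator of the angular correlation function, w(θ) = (DD − 2DR + RR)/RR, might be sketched as follows; the mock catalogues, flat-sky geometry and binning are assumptions for illustration, not the thesis's pipeline.

```python
# Hypothetical sketch of the Landy-Szalay angular correlation estimator on
# small mock catalogues, using a flat-sky approximation for separations.
import numpy as np
from scipy.spatial import cKDTree

def cumulative_pairs(a, b, edges):
    """Cumulative ordered pair counts between point sets a and b within each edge."""
    return cKDTree(a).count_neighbors(cKDTree(b), edges).astype(float)

def landy_szalay(data, randoms, edges):
    """w(theta) = (DD - 2 DR + RR) / RR with normalised per-bin pair counts."""
    nd, nr = len(data), len(randoms)
    dd = np.diff(cumulative_pairs(data, data, edges)) / (nd * (nd - 1))
    rr = np.diff(cumulative_pairs(randoms, randoms, edges)) / (nr * (nr - 1))
    dr = np.diff(cumulative_pairs(data, randoms, edges)) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

rng = np.random.default_rng(1)
data = rng.uniform(0, 1, size=(2000, 2))      # mock (x, y) positions in degrees
randoms = rng.uniform(0, 1, size=(20000, 2))  # unclustered random catalogue
edges = np.logspace(-2, -0.5, 8)              # angular separation bin edges
print(landy_szalay(data, randoms, edges))     # ~0 for an unclustered mock
```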
267

Searches for supersymmetric partners of the bottom and top quarks with the ATLAS detector

Dafinca, Alexandru January 2014 (has links)
Supersymmetry is a promising candidate theory that could solve the hierarchy problem and explain the dark matter density in the Universe. The ATLAS experiment at the Large Hadron Collider is sensitive to a variety of such supersymmetric models. This thesis reports on a search for pair production of the supersymmetric scalar partners of bottom and top quarks in 20.1 fb⁻¹ of pp collisions at a centre-of-mass energy of 8 TeV using the ATLAS experiment. The study focuses on final states with large missing transverse momentum, no electrons or muons and two jets identified as originating from a b-quark. This final state can be produced in an R-parity conserving minimal supersymmetric scenario, assuming that the scalar bottom decays exclusively to a bottom quark and a neutralino and the scalar top decays to a bottom quark and a chargino, with a small mass difference with the neutralino. As no signal is observed above the Standard Model expectation, competitive exclusion limits are set on scalar bottom and top production, surpassing previously existing limits. Sbottom masses up to 640 GeV are excluded at 95% CLs for neutralino masses of up to 150 GeV. Differences in mass between the b̃₁ and the χ̃₁⁰ larger than 50 GeV are excluded up to sbottom masses of 300 GeV. In the case of stop pair production and decay t̃₁ → b + χ̃₁± and χ̃₁± → χ̃₁⁰ + W* with mass differences Δm = m(χ̃₁±) − m(χ̃₁⁰) = 5 GeV (20 GeV), stop masses up to 580 GeV (440 GeV) are excluded for m(χ̃₁⁰) = 100 GeV. Neutralino masses up to 280 GeV (230 GeV) are excluded for m(t̃₁) = 420 GeV for Δm = 5 GeV (20 GeV). In an extension of this analysis, sbottom quarks cascade-decaying to at least a Higgs boson are searched for in final states with large missing transverse momentum, at least 3 b-tagged jets and no electrons or muons, using neural network discriminants.
268

CP-violation in beautiful-strange oscillations at LHCb

Currie, Robert Andrew January 2014 (has links)
The LHCb experiment, based at the LHC in Geneva, is dedicated to the study of mesons containing bottom and charm quarks. One of the primary goals of the physics programme at LHCb is to measure CP-violating effects, which lead to the dominance of matter over anti-matter in the universe. This thesis presents the measurement of the CP-violating phase φs, measured in one of the golden channels at LHCb. This phase is observed through the interference between B0s ↔ B̄0s mixing and the decay B0s → J/ψ K+K−. The results, based upon the 1.0 fb⁻¹ dataset collected by LHCb during 2011, are: φs = 0.07±0.09±0.01 rad, ∆Γs = 0.100±0.016±0.002 ps⁻¹, Γs = 0.663±0.005±0.006 ps⁻¹. This analysis is also able to measure the mixing parameter ∆ms = 17.71±0.10±0.01 ps⁻¹. To improve upon this measurement, the B0s → J/ψ K+K− analysis is combined with the B0s → J/ψ π+π− decay channel to make the most accurate measurements to date of φs = 0.01±0.07±0.01 rad, ∆Γs = 0.106±0.011±0.007 ps⁻¹ and Γs = 0.661±0.004±0.006 ps⁻¹. As an integral part of this work, a comprehensive software suite known as RapidFit was developed; it is used by many other physicists and is described in this thesis.
269

Distributed Interactive Simulation (DIS): An Overview Of The System And Its Potential Uses

Boyd, Edward L., Novits, Charles S., Boisvert, Robert A. 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / The Distributed Interactive Simulation (DIS) concept, since its inception, has been divided into three separate but distinct areas of service: • Viewing of data in the real-time environment. • Multiple range viewing and usage of "real-time data." • Problems with the sharing of information through DIS. This paper will discuss the DIS concept and some of the various methods available to display this data to users of the system.
270

Using large eddy simulation to model buoyancy-driven natural ventilation

Durrani, Faisal January 2013 (has links)
The use of Large Eddy Simulation (LES) for modelling air flows in buildings is a growing area of Computational Fluid Dynamics (CFD). Compared to traditional CFD techniques, LES provides a more detailed approach to modelling turbulence in air. This offers the potential for more accurate modelling of low energy natural ventilation, which is notoriously difficult to model using traditional CFD. Currently, very little is known about the performance of LES for modelling natural ventilation, and its computational intensity makes its practical use on desktop computers prohibitive. The objective of this work was to apply LES to a variety of natural ventilation strategies and to compile guidelines for practitioners on its performance, including the trade-off between accuracy and cost.
