21

Nestedness and modularity in bipartite networks

Beckett, Stephen J. January 2015 (has links)
Bipartite networks are a useful way of representing interactions between two sets of entities. Understanding the underlying structures of such networks may give insights into the functionality and behaviour of the systems they represent. Two important structural patterns identified in bipartite networks are nestedness and modularity. Nestedness describes a hierarchical ordering of nodes such that more specialised nodes have interactions with a subset of the partners with which the more generalised nodes interact. Modularity captures the community structure of a network as distinct clusters of interactions, such that there are more connections within communities than between communities. While these network architectures are easy to describe in writing, their quantitative measurement for a given network is a difficult task. Several different methods have been proposed in each case and it is currently unclear which of them should be used in practice. This thesis considers the use, measurement and interpretation of nestedness and modularity in bipartite networks. First, it is shown how bipartite networks can be an effective tool for linking data and theory in community ecology, through use of a coevolutionary model of virus-bacteria interactions. Next, a series of studies is presented that pushes towards clarifying the best procedures for measuring nestedness and modularity in bipartite networks. The robustness of nestedness measures is tested on a synthetic ensemble of networks, showing that apparent nestedness depends strongly on the choice of measure, null model and effect size statistic. Recommendations for performing nestedness analysis are made in relation to individual and cross-network comparisons. Additionally, a new algorithm for identifying weighted modularity is proposed and shown to outperform existing methods. Crucially, it is shown that quantitative modular structures differ from traditional binary modular structures, with implications for how modularity is reported and used. Improving the way in which nestedness and modularity are measured is a necessary step for integrating data and theory in bipartite networks.
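As a point of reference for the modularity measures discussed above, the quantity most commonly maximised for bipartite networks is Barber's bipartite modularity; the weighted algorithm described in the thesis generalises this kind of objective, and the notation below is the standard one rather than necessarily that of the thesis:

$$ Q = \frac{1}{m}\sum_{i=1}^{p}\sum_{j=1}^{q}\left(B_{ij} - \frac{k_i d_j}{m}\right)\delta(g_i, h_j) $$

where $B$ is the $p \times q$ incidence (or weight) matrix, $k_i$ and $d_j$ are the degrees (or strengths) of row node $i$ and column node $j$, $m$ is the total number (or weight) of edges, and $\delta(g_i, h_j) = 1$ when the two nodes are assigned to the same module. Maximising $Q$ over module assignments rewards partitions with more within-module weight than expected under the null term $k_i d_j / m$.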
22

Investigating pathogenesis and virulence of the human pathogen, Vibrio vulnificus

Church, Selina Rebecca January 2015 (has links)
V. vulnificus is a Gram-negative opportunistic pathogen that is ubiquitous in the marine environment. Of the three main biotypes, biotype 1 is most commonly associated with human infection and is the causative agent of septicaemia, gastroenteritis and wound infection. In the United States V. vulnificus is the leading cause of seafood-related deaths and is commonly associated with ingestion of raw or undercooked oysters. However, despite the abundance of this bacterium in the environment, the number of severe human infections is low. This has led to the hypothesis that not all strains of this pathogen are equal in virulence, with some strains better adapted to causing human disease than others. The current study therefore tested a panel of 10 V. vulnificus strains in several phenotypic experiments that assayed the strains for known virulence factors, with the aim of identifying a marker for strains hazardous to human health. However, no single assay correlated with either the virulence potential of the strains, as determined by an in vivo mouse model of virulence, or their source of isolation. As the study hypothesised that the varying virulence potentials displayed by the strains may be due to genetic differences, whole genome sequencing (WGS) was performed. Bioinformatic comparison demonstrated many genetic differences between the strains. In parallel with the WGS comparison, gene annotation was also performed, which identified the presence of two previously undescribed type VI secretion systems (T6SSs). The study therefore continued with an investigation of the T6SSs. The two T6SSs identified were termed T6SS1 and T6SS2. T6SS2 was found in all sequenced isolates, whereas T6SS1 was only present in a subset of strains. As T6SS1 shared synteny with the previously described T6SS in V. cholerae, it was chosen for further investigation. During this study T6SS1 was shown to be functional and thermoregulated. Further investigation of T6SS1, through the construction and characterisation of a T6SS1 mutant, demonstrated that T6SS1 has anti-prokaryotic properties.
23

Studies of hadronic decays of high transverse momentum W and Z bosons with the ATLAS detector at the LHC

Chislett, R. T. January 2014 (has links)
This work is based on data recorded by the ATLAS experiment at the LHC in 2011 at a centre-of-mass energy of 7 TeV and in 2012 at 8 TeV. The work predominantly focusses on searches for hadronically decaying W and Z bosons in the high $p_{T}$ regime, such that both decay products are contained within a single jet. Firstly, a selection using jet shapes boosted back into the centre-of-mass frame of the jet is used to locate a W and Z boson peak above the large QCD background in 2011 data, and from this a cross section is extracted. This peak is then used to study a variety of grooming techniques designed to reduce the effects of pileup and leave only the components of the jet related to the hard scatter. The effect of these techniques on a sample already containing strongly signal-like jets is assessed in terms of data-Monte Carlo agreement, signal-to-background ratio and pileup dependence. They are found to perform well, to a level that will be useful in future measurements at the LHC. Secondly, an analysis designed to search for heavy diboson resonances in the hadronic channel is described. This search is based on an optimised version of a splitting and filtering technique investigated in the W and Z boson analysis, with some additional substructure cuts. The analysis is run on 2012 data and the preliminary results and limits are presented. An excess corresponding to about 3.4σ local significance is observed and this peak is currently undergoing a series of cross checks. This thesis also contains performance studies of the ATLAS forward jet trigger in 2011, in terms of efficiency plots, and work done to investigate applying calibration to the jet triggers. The calibration was seen to improve the jet trigger and was subsequently used in 2012 running.
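Since both decay products are captured in a single jet, the basic observable behind the boson peaks described above is the jet invariant mass, built from the constituent four-momenta. This is only a minimal illustration; the analyses apply grooming and further substructure cuts on top of it:

$$ m_J^2 = \Big(\sum_{i \in J} E_i\Big)^2 - \Big|\sum_{i \in J} \vec{p}_i\Big|^2 $$

For jets containing a genuine hadronic W or Z decay, $m_J$ clusters near the boson masses of roughly 80 and 91 GeV, whereas QCD jets produce a smoothly falling mass spectrum, which is what makes the peak visible above background.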
24

Trapping ultracold argon atoms

Edmunds, P. D. January 2015 (has links)
This thesis describes the dipole trapping of both metastable and ground state argon atoms. Metastable argon atoms are first Doppler-cooled down to ∼80 μK in a magneto-optical trap (MOT) on the 4s[3/2]2 to 4p[5/2]3 transition. These atoms were loaded into dipole traps formed both within the focus of a high-power CO2 laser beam and within an optical build-up cavity. The optical cavity's well depth could be rapidly modulated, allowing efficient loading of the trap, characterisation of the trapped atom temperature, and reduction of intensity noise. Collisional properties of the trapped metastable atoms were studied within the cavity and the Penning and associative losses from the trap calculated. Ground state noble gas atoms were also trapped for the first time. This was achieved by optically quenching metastable atoms to the ground state and then trapping the atoms in the cavity field. Although the ground state atoms could not be directly probed, we detected them by observing the additional collisional loss from co-trapped metastable argon atoms. This trap loss was used to determine an ultra-cold elastic cross section between the ground and metastable states. Using a type of parametric loss spectroscopy we also determined the polarisability of metastable argon at the trapping wavelength of 1064 nm.
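The link between the measured polarisability and the trap itself comes from the standard optical dipole potential, sketched here in the usual far-off-resonance form (standard notation, not necessarily that of the thesis):

$$ U_{\mathrm{dip}}(\mathbf{r}) = -\frac{1}{2\epsilon_0 c}\,\mathrm{Re}\,\alpha(\omega)\, I(\mathbf{r}) $$

where $\alpha(\omega)$ is the dynamic polarisability at the trap laser frequency and $I(\mathbf{r})$ is the local intensity. Because the trap depth and trap frequencies scale with $\mathrm{Re}\,\alpha$, measuring the trap frequencies by parametric loss spectroscopy at a known intensity constrains the polarisability at 1064 nm.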
25

Novel algorithms for the understanding of the chemical cosmos

Makrymallis, A. January 2015 (has links)
Molecular data from the interstellar medium (ISM) contain information that holds the key to understanding our chemically controlled cosmos and to unlocking the secrets of our universe. Observational data, as well as synthetic data from chemical codes, provide a cornucopia of digital information that conceals knowledge of the ISM. Astrochemistry studies the chemical interactions in the ISM and translates this information into knowledge of the physical characteristics of the ISM. As larger datasets and more complex models are being employed in astrochemistry, the need for intelligent data mining algorithms will increase. Machine learning algorithms provide novel methods for human-driven analysis of astrochemical data by augmenting scientific intelligence. The aim of this thesis is to introduce machine learning methods for solving typical astrochemical problems. The main application focus will be the physical parameter profile of dark molecular clouds. Time-dependent chemical codes are typically used as a tool to interpret observations, but their potential to explore a large physical and chemical parameter space is often neglected due to the computational complexity or the complexity of the parameter space. We will present clustering analysis methods, using traditional and probabilistic hierarchical clustering, for the efficient discovery of structure and patterns in vast parameter spaces generated solely from an astrochemical code. Moreover, we will demonstrate how Bayesian methods in conjunction with Markov Chain Monte Carlo sampling algorithms can efficiently solve nonlinear inverse problems for the probabilistic estimation of chemical and physical parameters of dark molecular clouds. The computational cost of sampling algorithms can be prohibitive for a full Bayesian approach in some cases, hence we will also present how artificial neural networks can accelerate the inference process without much loss of accuracy. Finally, we will demonstrate how the Bayesian approach and smart sampling techniques can tackle uncertainty about surface reactions and rate coefficients, even with vague and not very informative observational constraints, and assist laboratory astrochemists by guiding experimental techniques probabilistically.
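To make the Bayesian workflow above concrete, here is a minimal Metropolis-Hastings sketch for estimating physical parameters from a forward model. Everything in it is illustrative: `toy_model`, the parameter bounds and the Gaussian likelihood are placeholders standing in for the thesis's chemical code (or a trained emulator), its observations and its priors.

```python
import numpy as np

def log_posterior(theta, forward_model, obs, obs_err, bounds):
    """Gaussian log-likelihood with flat priors inside `bounds` (illustrative only)."""
    if any(t < lo or t > hi for t, (lo, hi) in zip(theta, bounds)):
        return -np.inf  # outside the prior volume
    pred = forward_model(theta)          # e.g. a chemical code or an emulator of one
    return -0.5 * np.sum(((pred - obs) / obs_err) ** 2)

def metropolis_hastings(forward_model, obs, obs_err, bounds, n_steps=20000,
                        step_scale=0.05, rng=np.random.default_rng(0)):
    theta = np.array([0.5 * (lo + hi) for lo, hi in bounds])  # start mid-prior
    widths = np.array([hi - lo for lo, hi in bounds])
    logp = log_posterior(theta, forward_model, obs, obs_err, bounds)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step_scale * widths * rng.standard_normal(len(theta))
        logp_new = log_posterior(proposal, forward_model, obs, obs_err, bounds)
        if np.log(rng.random()) < logp_new - logp:   # accept/reject step
            theta, logp = proposal, logp_new
        chain.append(theta.copy())
    return np.array(chain)

# Hypothetical usage: infer two cloud parameters from two observed abundances.
# `toy_model` stands in for a chemical code or a neural-network emulator of one.
toy_model = lambda th: np.array([1e-9 * th[0] * th[1], 5e-10 * th[0] / th[1]])
truth = toy_model([2.0, 1.5])
chain = metropolis_hastings(toy_model, truth, 0.1 * np.abs(truth),
                            bounds=[(0.1, 10.0), (0.1, 10.0)])
print(chain[len(chain) // 2:].mean(axis=0))   # posterior mean after burn-in
```

In practice an emulator trained on precomputed chemical-code outputs would replace `toy_model`, which is how the neural-network acceleration mentioned above avoids running the full code at every MCMC step.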
26

Towards fault-tolerant quantum computation with higher-dimensional systems

Anwar, H. January 2014 (has links)
The main focus of this thesis is to explore the advantages of using higher-dimensional quantum systems (qudits) as building blocks for fault-tolerant quantum computation. In particular, we investigate the two main essential ingredients of many state-of-the-art fault-tolerant schemes [133], which are magic state distillation and topological error correction. The theory for both of these components is well established for the qubit case, but little has been known for the generalised qudit case. For magic state distillation, we first present a general numerical approach that can be used to investigate the distillation properties of any stabilizer code. We use this approach to study small three-dimensional (qutrit) codes and classify, for the first time, new types of qutrit magic states. We then provide an analytic study of a family of distillation protocols based on the quantum Reed-Muller codes. We discover a particular five-dimensional code that, by many measures, outperforms all known qubit codes. For topological error correction, we study the qudit toric code serving as a quantum memory. For this purpose we examine an efficient renormalization group decoder to estimate the error correction threshold. We find that when the qudit toric code is subject to generalised bit-flip noise, and for a sufficiently high dimension, a threshold of 30% can be obtained under perfect decoding.
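For readers used to the qubit case, the qudit stabilizer codes referred to above are built from the generalised Pauli (Weyl-Heisenberg) operators on a d-dimensional system; the standard definitions are recalled here for orientation (the thesis develops the full formalism):

$$ X|j\rangle = |j+1 \bmod d\rangle, \qquad Z|j\rangle = \omega^{j}|j\rangle, \qquad \omega = e^{2\pi i/d}, \qquad ZX = \omega\, XZ $$

For $d = 2$ these reduce to the familiar qubit Pauli $X$ and $Z$, and generalised bit-flip noise of the kind mentioned above is typically modelled as random powers of $X$ applied independently to each qudit.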
27

Characterisation of extrasolar planets : applications to radial velocity cataloguing and atmospheric radiative transfer

Hollis, M. D. J. January 2014 (has links)
This thesis concerns the cataloguing and characterisation of extrasolar planets, an important topic given its potential to inform theories of planet formation and evolution, and its relevance for future studies defining and assessing the habitability of other worlds. The first aspect of the study is the calculation of orbits, using radial velocity measurements coupled with Bayesian and Markov chain Monte Carlo methods, to produce a catalogue of orbital elements for a sizeable sample of planets. This constitutes a self-consistent, uniformly derived catalogue, useful for statistical planetary population and formation studies, to be contrasted with other databases of planetary parameters, which are in general compilations of measurements from different sources and using various techniques. The orbital elements determine important star-on-planet forcings (for example ultraviolet irradiation, which has significant impacts on planetary (photo)chemistry and dynamics), and this study also looks at characterising planets explicitly in terms of their atmospheres. A 1D radiative transfer model for planetary transmission spectroscopy has been produced and made freely available for use by the community. This approach is particularly useful since it allows the retrieval of first-order abundances of trace atmospheric molecules, which in turn can be used to estimate parameters such as the C/O ratio, potentially providing further constraints on planetary formation processes. The code in question has been validated by comparison to models in the literature, and applied to several real planetary atmospheres. It has also been extended by incorporating a method to estimate the opacity due to scattering particles in clouds and haze layers. If present in an atmosphere, such phenomena can lead to the persistence of various parameter degeneracies, and limit the extent to which inferences can be drawn from spectra (leading to potentially order-of-magnitude errors in estimates of molecular abundances). Future extensions to this work could include the development of an automated inversion framework, utilising joint Bayesian/Markov chain Monte Carlo techniques to explore the parameter space of all relevant atmospheric quantities in order to retrieve a complete solution that is consistent with observations.
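The orbital elements in the catalogue described above enter the data through the standard Keplerian radial velocity model, sketched here in the usual notation (the thesis's exact parameterisation and priors may differ):

$$ v_r(t) = \gamma + K\left[\cos\big(\nu(t) + \omega\big) + e\cos\omega\right] $$

where $\gamma$ is the systemic velocity, $K$ the semi-amplitude, $e$ the eccentricity, $\omega$ the argument of periastron and $\nu(t)$ the true anomaly, obtained from the period and time of periastron via Kepler's equation. Fitting this model with MCMC yields posterior distributions for the orbital elements rather than single best-fit values.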
28

Bi and Mn nanostructures on the Si(001) surface

Kirkham, C. January 2014 (has links)
With the increasing miniaturisation of electronics, it is becoming important to study nanoscale systems, down to the control and manipulation of individual atoms. This work focuses on several different structures, all on the technologically important Si(001) surface, including individual spin-active Bi adatoms, the Bi nanoline and the Mn nanowire. Research in this area is guided by both experimental results and theoretical simulations. Here I explore the latter, via density functional theory, with a particular focus on simulated STM images, demonstrating both the successes and limitations of these techniques. This work aims both to explain experimental results and to suggest new experimental avenues. Adsorption of individual Bi atoms on Si(001) shows promise for quantum computing applications, due to the existence of spin-active adsorption sites. However, rapid diffusion makes them unsuitable for real-world applications. Selective depassivation of the H:Si(001) surface is shown to be a viable technique for trapping spin-active Bi atoms, and offers the possibility of targeted Bi incorporation. Nanolines of Bi, which spontaneously form on Si(001), have been extensively studied, both experimentally and theoretically. Recent experimental STM results have shown a strong bias dependence in the appearance of the nanolines, and here I present simulations which successfully explain these results. I also present further studies into defects on the nanoline. Finally, I studied the nanowires that form when Mn is adsorbed on Si(001), which offer the possibility of magnetic nanowires. However, at present their physical structure is still unknown, despite prior efforts to address this. Here I present a thorough investigation into potential models for the Mn nanowire, encompassing prior models, their extensions and other surface or subsurface Mn arrangements. This remains an open problem, although identification of specific features in the experimental images, and of deficiencies in previous models, has furthered our understanding of the problem.
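Simulated STM images of the kind mentioned above are most commonly generated from DFT output using the Tersoff-Hamann approximation; the standard relation is sketched here for orientation, with the caveat that the thesis may use this scheme or a refinement of it:

$$ I(\mathbf{r}, V) \propto \int_{E_F}^{E_F + eV} \rho_s(\mathbf{r}, E)\,\mathrm{d}E $$

where $\rho_s(\mathbf{r}, E)$ is the sample's local density of states evaluated at the tip position and $E_F$ is the Fermi energy. Because the sign of the bias $V$ selects filled or empty states, images computed this way naturally exhibit bias-dependent contrast of the kind reported for the Bi nanolines.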
29

Search for double beta decay of 82Se with the NEMO-3 detector and development of apparatus for low-level radon measurements for the SuperNEMO experiment

Mott, J. E. January 2014 (has links)
The 2νββ half-life of 82Se has been measured as (9.93 ± 0.14 (stat) ± 0.72 (syst)) × 10^19 yr using a 932 g sample measured for a total of 5.25 years in the NEMO-3 detector. The corresponding nuclear matrix element is found to be 0.0484 ± 0.0018. In addition, a search for 0νββ in the same isotope has been conducted and no evidence for a signal has been observed. The resulting half-life limit of > 2.18 × 10^23 yr (90% CL) for the neutrino mass mechanism corresponds to an effective Majorana neutrino mass of <mββ> < 1.0 - 2.8 eV (90% CL). Furthermore, constraints on lepton number violating parameters for other 0νββ mechanisms, such as right-handed current and Majoron emission modes, have been set. SuperNEMO is the successor to NEMO-3 and will be one of the next generation of 0νββ experiments. It aims to measure 82Se with a half-life sensitivity of 10^26 yr, corresponding to <mββ> < 50 - 100 meV. Radon can be one of the most problematic backgrounds in any 0νββ search due to the high Q value of its daughter isotope, 214Bi. In order to achieve the target sensitivity, the radon concentration inside the tracking volume of SuperNEMO must be less than 150 μBq/m3. This low level of radon is not measurable with standard radon detectors, so a “radon concentration line” has been designed and developed. This apparatus has a sensitivity to the radon concentration in the SuperNEMO tracker at the level of 40 μBq/m3, and has performed the first measurements of the radon level inside a sub-section of SuperNEMO, which is under construction. It has also been used to measure the radon content of nitrogen and helium gas cylinders, which are found to be in the ranges 70 - 120 μBq/m3 and 370 - 960 μBq/m3, respectively.
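The conversion quoted above between a half-life limit and an effective Majorana mass follows the standard light-neutrino-exchange relation; the spread in the quoted mass ranges reflects the choice of nuclear matrix element, and the notation here is the conventional one:

$$ \left(T^{0\nu}_{1/2}\right)^{-1} = G^{0\nu}\,\big|M^{0\nu}\big|^{2} \left(\frac{\langle m_{\beta\beta}\rangle}{m_e}\right)^{2} $$

where $G^{0\nu}$ is the phase-space factor, $M^{0\nu}$ the nuclear matrix element and $m_e$ the electron mass. A longer half-life limit therefore translates directly into a tighter upper bound on $\langle m_{\beta\beta}\rangle$, which is how the SuperNEMO sensitivity goal of 10^26 yr maps onto 50 - 100 meV.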
30

Technical development and scientific preparation for the e-MERLIN Cygnus OB2 radio survey

Peck, L. W. January 2014 (has links)
e-MERLIN is a recent upgrade to the MERLIN radio array. This enhanced facility utilises recent developments in wide-bandwidth receivers, a new WIDAR correlator, and a new optical fibre network. This upgrade provides an increase in sensitivity and image fidelity, but also results in a significant increase in data volume. This thesis is motivated by the Cygnus OB2 Radio Survey (COBRaS), an e-MERLIN Legacy project observing the core region of the largest OB association in the northern hemisphere. COBRaS has been awarded ~300 hours of observing time, resulting in a total Legacy dataset of tens of terabytes. It is not feasible to calibrate this amount of data manually, highlighting the necessity for automated procedures. This thesis primarily contains technical development for e-MERLIN during the commissioning phase and the early Legacy observations from COBRaS, focusing on the creation of automated flagging and calibration pipelines. This includes an automated RFI-mitigation and reduction tool (SERPent), as well as a full calibration pipeline consisting of phase calibration with fringe fitting, amplitude calibration with the flux calibrator 3C286, bandpass calibration with spectral index and curvature fitting, and automated self-calibration on combined or individual IFs. A program for extracting fluxes of resolved and unresolved sources from radio maps, with a detection significance boosting module, has also been developed. In addition to the technical work, scientific preparations and initial results for COBRaS are also presented. A catalogue amalgamation routine for the Cyg OB2 association cross-correlates previous surveys of Cyg OB2 into one definitive catalogue, from which specific catalogues are then compiled, including an OB star catalogue and a candidate catalogue. The predicted mass-loss rates and radio fluxes from the winds of O-type stars and early B-type supergiants are determined, including predictions from smooth wind models as well as predictions accounting for the effects of clumping in the winds. The inclusion of an X-ray variability study of the Chandra Cyg OB2 Legacy dataset provides a multi-wavelength view of the population of Cyg OB2, which complements COBRaS. The first COBRaS 1.6 GHz and 5 GHz radio images of Cyg OB2 are presented with source and flux lists and some initial analysis. The technical developments presented in this thesis are discussed in the context of COBRaS and of future interferometers such as the SKA and its associated pathfinders.
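Given source fluxes at the two COBRaS frequencies, a two-point spectral index (with $S_\nu \propto \nu^{\alpha}$) is the simplest discriminant between thermal wind emission and non-thermal sources. A minimal sketch of that calculation follows; the flux values are hypothetical and this is not the pipeline code itself.

```python
import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Two-point spectral index alpha, defined via S_nu proportional to nu**alpha."""
    return math.log(s2_mjy / s1_mjy) / math.log(nu2_ghz / nu1_ghz)

# Hypothetical fluxes (mJy) for one source at the two COBRaS bands, 1.6 and 5 GHz.
alpha = spectral_index(s1_mjy=0.12, nu1_ghz=1.6, s2_mjy=0.25, nu2_ghz=5.0)
print(f"alpha = {alpha:.2f}")  # ~ +0.64, near the canonical thermal-wind value of +0.6
```

The often-quoted $\alpha \simeq +0.6$ for a smooth, spherically symmetric ionised wind is the benchmark against which such measured indices are compared; flatter or negative indices point towards non-thermal (for example colliding-wind) emission.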
