51

A feasibility study into total electron content prediction using neural networks

Habarulema, John Bosco. January 2007 (has links)
Thesis (M.Sc. (Physics & Electronics)) - Rhodes University, 2008. / A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science.
52

Forecasting solar cycle 24 using neural networks

Uwamahoro, Jean January 2008 (has links)
Thesis (Ph.D. (Physics & Electronics)) - Rhodes University, 2009 / A thesis submitted in partial fulfilment of the requirements for the degree of Master of Science
53

An investigation into the use of scattered photons to improve 2D Positron Emission Tomography (PET) functional imaging quality

Sun, Hongyan January 2012 (has links)
Positron emission tomography (PET) is a powerful metabolic imaging modality, which is designed to detect two anti-parallel 511 keV photons originating from a positron-electron annihilation. However, it is possible that one or both of the annihilation photons undergo Compton scattering in the object. This is more serious for a scanner operated in 3D mode or with large patients, where the scatter fraction can be as high as 40-60%. When one or both photons are scattered, the line of response (LOR) defined by connecting the two relevant detectors no longer passes through the annihilation position. Thus, scattered coincidences degrade image contrast and compromise quantitative accuracy. Various scatter correction methods have been proposed, but most of them are based on estimating and subtracting the scatter from the measured data or incorporating it into an iterative reconstruction algorithm. By accurately measuring the scattered photon energy and taking advantage of the kinematics of Compton scattering, two circular arcs (TCA) in 2D can be identified, which describe the locus of all the possible scattering positions and encompass the point of annihilation. In the limiting case where the scattering angle approaches zero, the TCA approach the LOR for true coincidences. Based on this knowledge, a Generalized Scatter (GS) reconstruction algorithm has been developed in this thesis, which can use both true and scattered coincidences to extract the activity distribution in a consistent way. The annihilation position within the TCA can be further confined by adding a patient outline as a constraint in the GS algorithm. An attenuation correction method for the scattered coincidences was also developed in order to remove the imaging artifacts. A geometrical model that characterizes the different probabilities of the annihilation positions within the TCA was also proposed. This can speed up image convergence and improve reconstructed image quality. Finally, the GS algorithm has been adapted to deal with non-ideal energy resolutions. In summary, an algorithm that implicitly incorporates scattered coincidences into the image reconstruction has been developed. Our results demonstrate that this eliminates the need for scatter correction and can improve system sensitivity and image quality. / February 2016
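The TCA construction described in this abstract rests on standard Compton kinematics: for an annihilation photon of initial energy 511 keV, the measured scattered-photon energy fixes the scattering angle. A minimal sketch of that relationship (the function name and the 400 keV example are illustrative, not taken from the thesis):

```python
import math

M_E_C2 = 511.0  # electron rest energy (keV); annihilation photons start at this energy


def compton_scattering_angle(scattered_energy_kev):
    """Scattering angle (radians) of a 511 keV photon that Compton-scattered once,
    recovered from its measured energy via E' = 511 / (2 - cos(theta))."""
    cos_theta = 2.0 - M_E_C2 / scattered_energy_kev
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energy not consistent with a single scatter of a 511 keV photon")
    return math.acos(cos_theta)


# A photon detected at 400 keV was deflected by roughly 44 degrees; that angle,
# together with the positions of the two detectors, defines the two circular
# arcs (TCA) on which the scattering point must lie.
print(math.degrees(compton_scattering_angle(400.0)))
```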
54

Investigation of X-ray induced radiation damage in proteins, nucleic acids and their complexes

Bury, Charles S. January 2017 (has links)
Macromolecular X-ray crystallography (MX) is currently the dominant technique for the structural elucidation of macromolecules at near atomic resolution. However, the progression and deleterious effects of radiation damage remain a major limiting factor in the success of diffraction data collection and subsequent structural solution at modern third generation synchrotron facilities. For experiments conducted at 100 K, protein specific damage to particular amino acids has been widely reported at doses of just several MGy, before any observable decay in average diffraction intensities. When undetected, such artefacts of X-ray irradiation can lead to significant modelling errors in protein structures, and ultimately the failure to derive the correct biological function from a model. It is thus vital to develop tools to help MX experimenters to detect and correct for such damage events. This thesis presents the development of an automated program, RIDL, which is designed to objectively quantify radiation-induced changes to electron density at individual atoms, based on F_obs,n − F_obs,1 Fourier difference maps between different dose states for a single crystal. The high-throughput RIDL program developed in this work provides the ability to systematically investigate a wide range of macromolecular systems. To date, damage to the broad class of nucleic acids and nucleoprotein complexes has remained largely uncharacterised, and it is unclear how radiation damage will disrupt the validity of such models derived from MX experiments. This thesis presents the first systematic investigations on a range of nucleic acid, protein-RNA and protein-DNA complex case studies. In general, it is concluded that nucleic acids are highly robust to radiation damage effects at 100 K, relative to control protein counterparts across the tested systems. For protein crystals at 100 K, cleavage of the phenolic C-O bond in tyrosine has disseminated through the MX radiation damage literature as a dominant specific damage event, despite the absence of any energetically favourable cleavage mechanism. To clarify the radiation susceptibility of tyrosine, this thesis presents a systematic investigation on radiation damage to tyrosine in a wide range of MX protein radiation damage series retrieved from the Protein Data Bank. It is concluded that the tyrosine C-O bond remains intact following X-ray irradiation; however, the aromatic side-group can undergo radiation-induced displacement. This thesis also presents further applications of the RIDL program. A protocol is introduced to calculate explicit half-dose values, the dose at which the electron density at an individual atom decays to half of its value at zero absorbed dose. In addition, a methodology is developed to detect radiation-induced changes to electron density occurring over the course of the collection of a single MX dataset of diffraction images, all of which are required for structural solution. These protocols aim to advise experimenters of when previously-undetected site-specific damage effects may have corrupted the quality of their macromolecular model. Overall, the work in this thesis is highly applicable to the future understanding of radiation damage in macromolecular structures and of interest to the wider crystallographic community.
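The half-dose protocol mentioned in this abstract amounts to fitting the per-atom electron density as a function of accumulated dose and reading off the dose at which it has fallen to half its zero-dose value. A minimal sketch assuming a simple exponential decay (the decay model and all numbers are illustrative assumptions, not data or methodology from the thesis):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical per-atom electron-density readings (arbitrary units) at increasing doses (MGy)
doses = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 16.0])
density = np.array([1.00, 0.84, 0.70, 0.50, 0.35, 0.25])


def exp_decay(dose, n0, k):
    """Simple exponential decay of electron density with absorbed dose."""
    return n0 * np.exp(-k * dose)


(n0, k), _ = curve_fit(exp_decay, doses, density, p0=(1.0, 0.1))
half_dose = np.log(2.0) / k  # dose at which the density halves relative to zero dose
print(f"fitted half-dose ~ {half_dose:.1f} MGy")
```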
55

An investigation into improved ionospheric F1 layer predictions over Grahamstown, South Africa

Jacobs, Linda January 2005 (has links)
This thesis describes an analysis of the F1 layer data obtained from the Grahamstown (33.32°S, 26.50°E), South Africa ionospheric station and the use of these data in improving a Neural Network (NN) based model of the F1 layer of the ionosphere. An application for real-time ray tracing through the South African ionosphere was identified, and for this application real-time evaluation of the electron density profile is essential. Raw real-time virtual height data are provided by a Lowell Digisonde (DPS), which employs the automatic scaling software ARTIST, whose output includes the virtual-to-real height data conversion. Experience has shown that there are times when the ray tracing performance is degraded because of difficulties surrounding the real-time characterization of the F1 region by ARTIST. Therefore available DPS data from the archives of the Grahamstown station were re-scaled manually in order to establish the extent of the problem and the times and conditions under which most inaccuracies occur. The re-scaled data were used to update the F1 contribution of an existing NN based ionospheric model, the LAM model, which predicts the values of the parameters required to produce an electron density profile. This thesis describes the development of three separate NNs required to predict the ionospheric characteristics and coefficients that are required to describe the F1 layer profile. Inputs to the NNs include day number, hour and measures of solar and magnetic activity. Outputs include the value of the critical frequency of the F1 layer, foF1, the real height of reflection at the peak, hmF1, as well as information on the state of the F1 layer. All data from the Grahamstown station from 1973 to 2003 were used to train these NNs. Tests show that the predictive ability of the LAM model has been improved by incorporating the re-scaled data.
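The model described in this abstract maps day number, hour and solar/magnetic activity measures to F1 layer characteristics such as foF1. A minimal feed-forward network sketch of that kind of mapping, trained on synthetic data (the toy target function, layer size and library choice are illustrative assumptions; the thesis trains on the re-scaled Grahamstown archive):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1, 366, n),    # day number (seasonal variation)
    rng.uniform(6, 18, n),      # hour of day (the F1 layer is a daytime feature)
    rng.uniform(0, 200, n),     # sunspot number (solar activity)
    rng.uniform(0, 50, n),      # magnetic activity index
])
# Toy foF1 target (MHz): rises with solar activity and peaks near local noon
y = 3.5 + 0.01 * X[:, 2] + 0.8 * np.sin(np.pi * (X[:, 1] - 6.0) / 12.0)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[80, 12.0, 120, 10.0]]))  # near-equinox noon, moderate activity
```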
56

Development of an ionospheric map for Africa

Ssessanga, Nicholas January 2014 (has links)
This thesis presents research pertaining to the development of an African Ionospheric Map (AIM). An ionospheric map is a computer program that is able to display spatial and temporal representations of ionospheric parameters, such as electron density and critical plasma frequencies, for every geographical location on the map. The purpose of this development was to make optimum use of all available data sources, namely ionosondes, satellites and models, and to implement error minimisation techniques in order to obtain the best result at any given location on the African continent. The focus was placed on the accurate estimation of three upper atmosphere parameters which are important for radio communications: the critical frequency of the F2 layer (foF2), Total Electron Content (TEC) and the maximum usable frequency over a distance of 3000 km (M3000F2). The results show that AIM provided a more accurate estimation of the three parameters than the internationally recognised and recommended ionosphere model (IRI-2012) when used on its own. Therefore, the AIM is a more accurate solution than single independent data sources for applications requiring ionospheric mapping over the African continent.
57

A feasibility study into total electron content prediction using neural networks

Habarulema, John Bosco January 2008 (has links)
Global Positioning System (GPS) networks provide an opportunity to study the dynamics and continuous changes in the ionosphere by supplementing ionospheric measurements which are usually obtained by various techniques such as ionosondes, incoherent scatter radars and satellites. Total electron content (TEC) is one of the physical quantities that can be derived from GPS data, and provides an indication of ionospheric variability. This thesis presents a feasibility study for the development of a Neural Network (NN) based model for the prediction of South African GPS derived TEC. The South African GPS receiver network is operated and maintained by the Chief Directorate Surveys and Mapping (CDSM) in Cape Town, South Africa. Three South African locations were identified and used in the development of an input space and NN architecture for the model. The input space includes the day number (seasonal variation), hour (diurnal variation), sunspot number (measure of solar activity), and magnetic index (measure of magnetic activity). An attempt to study the effects of solar wind on TEC variability was carried out using the Advanced Composition Explorer (ACE) data, and it is recommended that more study be done using low altitude satellite data. An analysis was done by comparing predicted NN TEC with TEC values from the IRI-2001 version of the International Reference Ionosphere (IRI), validating GPS TEC with ionosonde TEC (ITEC) and assessing the performance of the NN model during equinoxes and solstices. Results show that NNs predict GPS TEC more accurately than the IRI at South African GPS locations, but that more good-quality GPS data are required before a truly representative empirical GPS TEC model can be released.
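A common way to present cyclical quantities such as day number and hour to a neural network is to split each into sine and cosine components, so that 31 December is adjacent to 1 January and 23:00 is adjacent to 00:00. A sketch of such an input vector for the quantities listed in this abstract (whether the thesis uses exactly this encoding is not stated here):

```python
import numpy as np

def tec_nn_inputs(day_number, hour, sunspot_number, magnetic_index):
    """Assemble one NN input vector: cyclical day number and hour become
    sine/cosine pairs; the solar and magnetic activity measures pass through."""
    return np.array([
        np.sin(2.0 * np.pi * day_number / 365.25),
        np.cos(2.0 * np.pi * day_number / 365.25),
        np.sin(2.0 * np.pi * hour / 24.0),
        np.cos(2.0 * np.pi * hour / 24.0),
        sunspot_number,
        magnetic_index,
    ])

print(tec_nn_inputs(day_number=355, hour=14.0, sunspot_number=85, magnetic_index=12))
```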
58

A global ionospheric F2 region peak electron density model using neural networks and extended geophysically relevant inputs

Oyeyemi, Elijah Oyedola January 2006 (has links)
This thesis presents my research on the development of a neural network (NN) based global empirical model of the ionospheric F2 region peak electron density using extended geophysically relevant inputs. The main principle behind this approach has been to utilize parameters other than simple geographic co-ordinates, on which the F2 peak electron density is known to depend, and to exploit the technique of NNs, thereby establishing and modeling the non-linear dynamic processes (both in space and time) associated with the F2 region electron density on a global scale. Four different models have been developed in this work. These are the foF2 NN model, the M(3000)F2 NN model, a short-term forecasting foF2 NN, and a near-real time foF2 NN model. Data used in the training of the NNs were obtained from the worldwide ionosonde stations spanning the period 1964 to 1986 based on availability, which included all periods of calm and disturbed magnetic activity. Common input parameters used in the training of all four models are day number (day of the year, DN), Universal Time (UT), a 2 month running mean of the sunspot number (R2), a 2 day running mean of the 3-hour planetary magnetic index ap (A16), solar zenith angle (CHI), geographic latitude (q), magnetic dip angle (I), angle of magnetic declination (D), and angle of meridian relative to subsolar point (M). For the short-term and near-real time foF2 models, additional input parameters related to recent past observations of foF2 itself were included in the training of the NNs. The results of the foF2 NN model and M(3000)F2 NN model presented in this work, which compare favourably with the IRI (International Reference Ionosphere) model, successfully demonstrate the potential of NNs for spatial and temporal modeling of the ionospheric parameters foF2 and M(3000)F2 globally. The results obtained from the short-term foF2 NN model and near-real time foF2 NN model reveal that, in addition to the temporal and spatial input variables, short-term forecasting of foF2 is much improved by including past observations of foF2 itself. Results obtained from the near-real time foF2 NN model also reveal that there exists a correlation between measured foF2 values at different locations across the globe. Again, comparisons of the foF2 NN model and M(3000)F2 NN model predictions with those of the IRI model and with observed values at some selected high latitude stations suggest that the NN technique can successfully be employed to model the complex irregularities associated with the high latitude regions. Based on the results obtained in this research and the comparison made with the IRI model (URSI and CCIR coefficients), these results justify consideration of the NN technique for the prediction of global ionospheric parameters. I believe that, after consideration by the IRI community, these models will prove to be valuable to both the high frequency (HF) communication and worldwide ionospheric communities.
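One of the extended inputs listed in this abstract, the solar zenith angle (CHI), is itself derived from geographic position, day number and Universal Time. A rough sketch of that derivation (the declination approximation and the example coordinates are illustrative; the thesis may compute CHI differently):

```python
import numpy as np

def solar_zenith_angle(lat_deg, lon_deg, day_number, ut_hours):
    """Approximate solar zenith angle in degrees from geographic position,
    day of year and Universal Time, using a simple declination formula."""
    decl = -23.44 * np.cos(np.radians(360.0 / 365.0 * (day_number + 10)))
    hour_angle = 15.0 * (ut_hours + lon_deg / 15.0 - 12.0)  # degrees from local solar noon
    cos_chi = (np.sin(np.radians(lat_deg)) * np.sin(np.radians(decl))
               + np.cos(np.radians(lat_deg)) * np.cos(np.radians(decl))
               * np.cos(np.radians(hour_angle)))
    return float(np.degrees(np.arccos(cos_chi)))

# Example: a mid-latitude southern-hemisphere site near local noon at the June solstice
print(solar_zenith_angle(lat_deg=-33.3, lon_deg=26.5, day_number=172, ut_hours=10.0))
```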
59

Challenges in topside ionospheric modelling over South Africa

Sibanda, Patrick January 2010 (has links)
This thesis creates a basic framework and provides the information necessary to create a more accurate description of the topside ionosphere in terms of the altitude variation of the electron density (Ne) over the South African region. The detailed overview of various topside ionospheric modelling techniques, with specific emphasis on their implications for the efforts to model the South African topside, provides a starting point towards achieving the goals. The novelty of the thesis lies in the investigation of the applicability of three different techniques to model the South African topside ionosphere: (1) The possibility of using Artificial Neural Network (ANN) techniques for empirical modelling of the topside ionosphere based on the available, albeit irregularly sampled, topside sounder measurements. The goal of this model was to test the ability of ANN techniques to capture the complex relationships between the various ionospheric variables using irregularly distributed measurements. While this technique is promising, the method did not show significant improvement over the International Reference Ionosphere (IRI) model results when compared with the actual measurements. (2) Application of the diffusive equilibrium theory. Although based on sound physics foundations, the method only operates on a generalised level, leading to results that are not necessarily unique. Furthermore, the approach relies on many ionospheric variables as inputs which are derived from other models whose accuracy is not verified. (3) Attempts to complement the standard functional techniques (Chapman, Epstein, Exponential and Parabolic) with Global Positioning System (GPS) and ionosonde measurements in an effort to provide deeper insights into the actual conditions within the ionosphere. The vertical Ne distribution is reconstructed by linking together the different aspects of the constituent ions and their transition height by considering how they influence the shape of the profile. While this approach has not been tested against actual measurements, results show that the method could be potentially useful for topside ionospheric studies. Due to the limitations of each technique reviewed, this thesis observes that the employment of an approach that incorporates both theoretical considerations and empirical aspects has the potential to lead to a more accurate characterisation of the topside ionospheric behaviour, resulting in improved models in terms of reliability and forecasting ability. The point is made that a topside sounder mission for South Africa would provide the required measured topside ionospheric data, answer the many science questions that this region poses, and resolve a number of the limitations set out in this thesis.
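Of the standard functional forms mentioned in this abstract, the alpha-Chapman layer is one of the most widely used descriptions of the electron density fall-off above the F2 peak. A minimal sketch (the peak parameters and scale height are illustrative values, not results from the thesis):

```python
import numpy as np

def chapman_profile(h_km, nmf2, hmf2_km, scale_height_km):
    """Alpha-Chapman layer: Ne(h) = NmF2 * exp(0.5 * (1 - z - exp(-z))),
    with z = (h - hmF2) / H, where H is the scale height."""
    z = (h_km - hmf2_km) / scale_height_km
    return nmf2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

heights = np.arange(300.0, 1001.0, 100.0)  # topside heights in km
ne = chapman_profile(heights, nmf2=1.0e12, hmf2_km=300.0, scale_height_km=60.0)
for h, n in zip(heights, ne):
    print(f"{h:6.0f} km  {n:.3e} el/m^3")
```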
60

Forecasting solar cycle 24 using neural networks

Uwamahoro, Jean January 2009 (has links)
The ability to predict the future behaviour of solar activity has become of extreme importance due to its effect on the near-Earth environment. Predictions of both the amplitude and timing of the next solar cycle will assist in estimating the various consequences of Space Weather. Several prediction techniques have been applied and have achieved varying degrees of success in the domain of solar activity prediction. These techniques include, for example, neural networks and geomagnetic precursor methods. In this thesis, various neural network based models were developed and the model considered to be optimum was used to estimate the shape and timing of solar cycle 24. Given the recent success of the geomagnetic precursor methods, geomagnetic activity as measured by the aa index is considered among the main inputs to the neural network model. The neural network model developed is also provided with the time input parameters defining the year and the month of a particular solar cycle, in order to characterise the temporal behaviour of sunspot number as observed during the last 10 solar cycles. The structure of input-output patterns to the neural network is constructed in such a way that the network learns the relationship between the aa index values of a particular cycle, and the sunspot number values of the following cycle. Assuming January 2008 as the minimum preceding solar cycle 24, the shape and amplitude of solar cycle 24 are estimated in terms of monthly mean and smoothed monthly sunspot number. This new prediction model estimates an average solar cycle 24, with the maximum occurring around June 2012 [± 11 months], with a smoothed monthly maximum sunspot number of 121 ± 9.
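The training scheme described in this abstract pairs the aa-index record of one cycle with the sunspot-number record of the next, so the network learns a precursor relationship. A sketch of that pattern construction with toy series (real cycles span roughly 11 years of monthly values; the lists here are illustrative only):

```python
import numpy as np

def precursor_training_pairs(aa_by_cycle, ssn_by_cycle):
    """Pair the geomagnetic aa-index series of cycle n with the sunspot-number
    series of cycle n+1, forming the input/target patterns for the network."""
    return [(np.asarray(aa_by_cycle[n]), np.asarray(ssn_by_cycle[n + 1]))
            for n in range(len(aa_by_cycle) - 1)]

# Toy monthly means for three consecutive "cycles"
aa = [[15, 18, 22, 20], [17, 21, 25, 23], [14, 16, 19, 18]]
ssn = [[40, 80, 120, 90], [50, 95, 140, 100], [35, 70, 110, 85]]
for aa_input, ssn_target in precursor_training_pairs(aa, ssn):
    print(aa_input, "->", ssn_target)
```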
