  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Going beyond the Random Phase Approximation: A systematic assessment of structural phase transitions and interlayer binding energies

Sengupta, Niladri January 2018
The random phase approximation (RPA) and beyond-RPA methods based on the adiabatic connection fluctuation-dissipation theorem (ACFD) are tested for structural phase transitions of different groups of materials, including metal-to-metal, metal-to-semiconductor, and semiconductor-to-semiconductor transitions. The performance of semilocal density functionals, with and without empirical long-range dispersion corrections, has also been assessed for the same cases. We have investigated the structural phase transitions of three broad groups of materials: semiconductor-to-metal transitions involving two symmetric structures; semiconductor-to-metal and wide-bandgap semiconductor-to-semiconductor transitions involving at least one lower-symmetry structure; and, lastly, special cases comprising metal-to-metal transitions and transitions between energetically very close structural phases. The first group contains Si (diamond → β-tin), Ge (diamond → β-tin) and SiC (zinc blende → rocksalt); the second group contains GaAs (zinc blende → cmcm) and SiO2 (quartz → stishovite); and the third group contains Pb (fcc → hcp), C (graphite → diamond) and BN (cubic → hexagonal). We have found that the difference in behavior of exchange and correlation in semilocal functionals and ACFD methods is striking. For the former, the exchange potential and energy often comprise the majority of the binding described by density functional approximations, and the addition of the correlation energy and potential often induces only a (relatively) small shift from the exchange-only results. For the ACFD, however, non-self-consistent EXX typically underbinds by a considerable degree, resulting in wildly inaccurate results. Thus the addition of correlation leads to very large shifts in the exchange-only results, in direct contrast to semilocal correlation. This difference in behavior is directly linked to the non-local nature of the EXX, and even though the exchange-only starting point is often nowhere close to experiment, the non-local correlation from the ACFD corrects this deficiency and yields the missing binding needed to produce accurate results. Thus we find the ACFD approach to be vital in the validation of semilocal results and recommend its use in materials where experimental results cannot be straightforwardly compared to other approximate electronic structure calculations. Utilizing the second-order approximation to random phase approximation renormalized (RPAr) many-body perturbation theory for the interacting density-density response function, we have used a so-called higher-order terms (HOT) approximation for the correlation energy. In combination with the first-order RPAr correction, the HOT method faithfully captures the infinite-order correlation for a given exchange-correlation kernel, yielding errors in the total correlation energy on the order of 1% or less for most systems. For exchange-like kernels, our new method has the further benefit that the coupling-strength integration can be eliminated completely, resulting in a modest reduction in computational cost compared to the traditional approach. When the correlation energy is accurately reproduced by the HOT approximation, structural properties and energy differences are also accurately reproduced, as confirmed by computing interlayer binding energies of several periodic solids, comparing them with results for molecular systems, and examining the phase transition parameters of SiC.
Energy differences involving fragmentation have proved challenging for the HOT method, however, owing to errors that do not cancel between a composite system and its constituent pieces, a behavior that is also verified in our work. / Physics
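For context, the ACFD correlation energy referenced in this entry is conventionally written as a coupling-strength and frequency integral over density-density response functions. The following is the standard textbook form, given here as a hedged sketch rather than the thesis' exact working equations:

\[
E_c^{\mathrm{ACFD}} \;=\; -\frac{1}{2\pi}\int_0^1 \! d\lambda \int_0^\infty \! d\omega \; \mathrm{Tr}\Bigl\{ v \bigl[\chi_\lambda(i\omega) - \chi_0(i\omega)\bigr] \Bigr\},
\]

where \(\chi_0\) is the non-interacting (Kohn-Sham) response, \(\chi_\lambda\) the interacting response at coupling strength \(\lambda\), and \(v\) the Coulomb interaction. The RPA corresponds to the Dyson-like closure \(\chi_\lambda = \chi_0 + \chi_0 (\lambda v) \chi_\lambda\); beyond-RPA (RPAr) kernels augment \(\lambda v\) with an exchange-correlation kernel. The HOT approximation mentioned above targets the higher-order terms of this series, and the elimination of the coupling-strength integration for exchange-like kernels refers to the \(\lambda\) integral in this expression.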
132

Analytic 3D Scatter Correction in PET Using the Klein-Nishina Equation

Bowen, Christopher V. 11 1900
In order to perform quantitative 3D positron tomography, it is essential that an accurate means of correcting for the effects of Compton-scattered photons be developed. The two main approaches to compensating for scattered radiation rely on energy considerations or on filtering operations. Energy-based scatter correction methods exploit the reduced energy of scattered photons to differentiate them from unscattered photons. Filtered scatter correction methods require the measurement of scatter point spread functions to be used for convolution with the acquired emission data set. Neither approach has demonstrated sufficient accuracy to be applied in a clinical environment. In this thesis, I have developed the theoretical framework for generating the scatter point spread functions for the general case of any source position within any non-uniform attenuation object. This calculation is based on a first-principles approach using the Klein-Nishina differential cross section for Compton scattering to describe the angular distribution of scattered annihilation photons. The attenuation correction factors from transmission scans are included within the theory as inputs describing the distribution of matter in the object being imaged. The theory has been tested by comparison with experimental scatter profiles of point sources which are either centered or off-center in water-filled cylinders. Monte Carlo simulations have been used to identify the detector energy threshold at which the single-scatter assumption employed by the theory is best satisfied. The validity of a mean-scatter-position assumption, used in the development of the theory, is tested using analytic calculations of a non-uniform attenuation phantom. The physical effects most responsible for determining the shape of the scatter profiles, as well as the assumptions employed by several common scatter correction methods, are revealed using the analytic scatter correction theory. / Thesis / Master of Science (MS)
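For reference, the Klein-Nishina differential cross section invoked above has the standard form (a textbook expression, not a result specific to this thesis):

\[
\frac{d\sigma}{d\Omega} \;=\; \frac{r_e^2}{2}\left(\frac{E'}{E}\right)^{2}\left(\frac{E'}{E} + \frac{E}{E'} - \sin^2\theta\right),
\qquad
\frac{E'}{E} \;=\; \frac{1}{1 + (E/m_e c^2)\,(1-\cos\theta)},
\]

where \(E\) and \(E'\) are the photon energies before and after scattering, \(\theta\) is the scattering angle, \(r_e\) the classical electron radius, and \(m_e c^2 = 511\) keV. For 511 keV annihilation photons, \(E/m_e c^2 = 1\), which fixes the angular distribution of single-scattered photons used to build the scatter point spread functions.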
133

Corrections to and Applications of the Antineutrino Spectrum Generated by Nuclear Reactors

Jaffke, Patrick John 16 November 2015
In this work, the antineutrino spectrum specifically generated by nuclear reactors is studied. The topics covered include corrections and higher-order effects in reactor antineutrino experiments, one of which is covered in Ref. [1] and another of which contributes to Ref. [2]. In addition, a practical application, antineutrino safeguards for nuclear reactors, as summarized in Refs. [3,4] and Ref. [5], is explored to determine its viability and limits. The work focuses heavily on theory, simulation, and statistical analyses to explain the corrections, their origins, and their sizes, as well as the applications of the antineutrino signal from nuclear reactors. Chapter [1] serves as an introduction to neutrinos. Their origin is briefly covered, along with neutrino properties and some experimental highlights. The next chapter, Chapter [2], specifically covers antineutrinos as generated in nuclear reactors. In this chapter, the production and detection methods of reactor neutrinos are introduced, along with a discussion of the theories behind determining the antineutrino spectrum. The mathematical formulation of neutrino oscillation is also introduced and explained. The first half of this work focuses on two corrections to the reactor antineutrino spectrum. These corrections are generated from two specific sources and are thus named the spent nuclear fuel contribution and the non-linear correction for their respective sources. Chapter [3] contains a discussion of the spent fuel contribution. This correction arises from spent nuclear fuel near the reactor site, and the chapter presents a detailed application of spent fuel to current reactor antineutrino experiments. Chapter [4] focuses on the non-linear correction, which is caused by neutron captures within the nuclear reactor environment. Its quantification and impact on future antineutrino experiments are discussed. The research presented in the second half, Chapter [5], focuses on neutrino applications, specifically reactor monitoring. Chapter [5] is a comprehensive examination of the use of antineutrinos as a reactor safeguards mechanism. This chapter includes the theory behind safeguards, the statistical derivation of power and plutonium measurements, the details of reactor simulations, and the future outlook for non-proliferation through antineutrino monitoring. / Ph. D.
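As a brief illustration of the oscillation formalism introduced in Chapter [2], the standard two-flavor survival probability for reactor antineutrinos (a textbook approximation, not the full three-flavor treatment of the thesis) is:

\[
P_{\bar{\nu}_e \rightarrow \bar{\nu}_e}(L, E) \;\approx\; 1 - \sin^2 2\theta \,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)
\;=\; 1 - \sin^2 2\theta \,\sin^2\!\left(1.267\,\frac{\Delta m^2[\mathrm{eV}^2]\; L[\mathrm{m}]}{E[\mathrm{MeV}]}\right),
\]

where \(L\) is the reactor-detector baseline, \(E\) the antineutrino energy, \(\theta\) the mixing angle, and \(\Delta m^2\) the mass-squared splitting. Corrections to the emitted spectrum, such as the spent-fuel and non-linear contributions discussed above, enter the analysis through the flux that multiplies this survival probability.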
134

Contribution à l'étude du traitement des erreurs au niveau lexico-syntaxique dans un texte écrit en français / A contribution to the study of lexical and syntactic error processing in written French text

Strube De Lima, Vera Lucia 15 March 1990
This thesis addresses the processing of errors at the lexical and syntactic levels in text written in French. We first present a general approach to the errors that can appear in a text. We then give the basic elements of a set of methods currently used for error processing at the lexical and syntactic levels, and describe the correction methods proposed in the main studies carried out in the field. After a brief description of the PILAF natural language processing environment, in which this study is embedded, we propose and describe the implementation of a phonetics-based lexical error-correction algorithm applicable to a full-size dictionary. This algorithm performs a phonetic transduction of the word to be corrected, followed by its graphemic reconstitution. We then present the implementation of a pre-prototype for syntactic verification and the correction of agreement errors. Syntactic verification is carried out by feature unification; the detection of an agreement error triggers a correction by morphological generation. A mock-up for error detection/correction at the lexical-syntactic level demonstrates the feasibility of a multi-algorithm system for detection/correction of errors at the lexical-syntactic level.
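To illustrate the general idea of phonetics-based lexical correction described above (this is not the PILAF algorithm itself; the key function and mini-lexicon below are hypothetical simplifications), a minimal sketch can index a dictionary by a crude phonetic key and propose the dictionary words that share the key of the misspelled word:

    import unicodedata
    from collections import defaultdict

    def phonetic_key(word: str) -> str:
        """Very crude French-oriented phonetic key (illustrative only):
        strips accents, lowercases, collapses a few common grapheme groups."""
        w = unicodedata.normalize("NFD", word.lower())
        w = "".join(c for c in w if unicodedata.category(c) != "Mn")  # drop accents
        for src, dst in [("eau", "o"), ("au", "o"), ("ph", "f"), ("qu", "k"),
                         ("ch", "S"), ("ss", "s"), ("c", "k"), ("y", "i")]:
            w = w.replace(src, dst)
        out = []                       # collapse doubled letters
        for c in w:
            if not out or out[-1] != c:
                out.append(c)
        return "".join(out)

    def build_index(lexicon):
        """Index a (potentially full-size) dictionary by phonetic key."""
        index = defaultdict(set)
        for word in lexicon:
            index[phonetic_key(word)].add(word)
        return index

    def correct(word, index):
        """Return dictionary words whose phonetic key matches the misspelled word."""
        return sorted(index.get(phonetic_key(word), set()))

    # toy usage with a hypothetical mini-lexicon
    lexicon = {"bateau", "chapeau", "photo", "qualité"}
    index = build_index(lexicon)
    print(correct("bato", index))   # ['bateau'] -- same phonetic key
    print(correct("foto", index))   # ['photo']

A real system such as the one described in the thesis would use a proper phonetic transducer and a graphemic reconstitution step rather than a flat key lookup; the sketch only shows why a phonetic representation lets orthographically distant misspellings reach the intended dictionary entry.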
135

Corrections for improved quantitative accuracy in SPECT and planar scintigraphic imaging

Larsson, Anne January 2005
A quantitative evaluation of single photon emission computed tomography (SPECT) and planar scintigraphic imaging may be valuable for both diagnostic and therapeutic purposes. For an accurate quantification it is usually necessary to correct for attenuation and scatter, and in some cases also for septal penetration. For planar imaging, a background correction for the contribution from over- and underlying tissues is needed. In this work a few correction methods have been evaluated and further developed. Much of the work relies on the Monte Carlo method as a tool for evaluation and optimisation. A method for quantifying the activity of I-125 labelled antibodies in a tumour inoculated in the flank of a mouse, based on planar scintigraphic imaging with a pin-hole collimator, has been developed, and two different methods for background subtraction have been compared. The activity estimates of the tumours were compared with measurements in vitro. The major part of this work is devoted to SPECT. A method for attenuation and scatter correction of brain SPECT based on computed tomography (CT) images of the same patient has been developed, using an attenuation map calculated from the CT image volume. The attenuation map is utilised not only for attenuation correction, but also for scatter correction with transmission dependent convolution subtraction (TDCS). A registration method based on fiducial markers, placed on three chosen points during the SPECT examination, was evaluated. The scatter correction method, TDCS, was then optimised for regional cerebral blood flow (rCBF) SPECT with Tc-99m, and was also compared with a related method, convolution scatter subtraction (CSS). TDCS has been claimed to be an iterative technique; this, however, requires some modifications of the method, which have been demonstrated and evaluated for a simulation with a point source. When the Monte Carlo method is used for evaluation of corrections for septal penetration, it is important that interactions in the collimator are taken into account. A new version of the Monte Carlo program SIMIND with this capability has been evaluated by comparing measured and simulated images and energy spectra. This code was later used for the evaluation of a few different methods for correction of scatter and septal penetration in I-123 brain SPECT. The methods were CSS, TDCS, and a method in which corrections for scatter and septal penetration are included in the iterative reconstruction. This study shows that quantitative accuracy in I-123 brain SPECT benefits from separate modelling of scatter and septal penetration.
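As a rough sketch of the convolution-subtraction family of corrections discussed above (a generic formulation; the exact kernels and parameters used in the thesis are not reproduced here), the scatter estimate is obtained by convolving the measured projection with a scatter kernel and scaling by a scatter fraction:

\[
\hat{S}(x,y) \;\approx\; k\,\bigl[P_{\mathrm{obs}} \otimes s\bigr](x,y),
\qquad
P_{\mathrm{corr}} \;=\; P_{\mathrm{obs}} - \hat{S},
\]

where \(P_{\mathrm{obs}}\) is the acquired projection, \(s\) a scatter kernel, and \(k\) the scatter fraction. In convolution scatter subtraction (CSS) the scatter fraction is essentially a global constant, while in transmission dependent convolution subtraction (TDCS) it is made a function of the measured transmission factor derived from the attenuation map, \(k = k\bigl(t(x,y)\bigr)\), so that more strongly attenuated regions receive a larger scatter estimate.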
136

An analysis of a relationship between Remuneration and Labour Productivity in South Africa / Johannes Tshepiso Tsoku

Tsoku, Johannes Tshepiso January 2014
This study analyses the relationship between remuneration (real wage) and labour productivity in South Africa at the macroeconomic level, using time series and econometric techniques. The results show significant evidence of a structural break in 1990. The break appears to have affected the employment level and subsequently fed through into employees' remuneration (real wage) and productivity. A long-run cointegrating relationship was found between remuneration and labour productivity for the period 1990 to 2011. In the long run, a 1% increase in labour productivity is associated with an approximately 1.98% rise in remuneration. The coefficient of the error correction term in the labour productivity equation is large, indicating a rapid adjustment of labour productivity to equilibrium. However, remuneration does not Granger-cause labour productivity, and vice versa. / Thesis (M.Com. (Statistics))--North-West University, Mafikeng Campus, 2014
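For readers unfamiliar with the cointegration framework used here, a generic error-correction specification of the kind estimated in such studies (a schematic form; the thesis' variable definitions and lag structure may differ) is:

\[
\Delta w_t \;=\; \alpha + \beta\,\Delta p_t + \gamma\,\bigl(w_{t-1} - \theta_0 - \theta_1\, p_{t-1}\bigr) + \varepsilon_t,
\]

where \(w_t\) is (log) real remuneration, \(p_t\) is (log) labour productivity, the term in parentheses is the lagged deviation from the long-run cointegrating relationship, and \(\gamma < 0\) measures the speed of adjustment back to equilibrium (an analogous equation can be written with \(\Delta p_t\) on the left-hand side). In this notation, the long-run elasticity of about 1.98 reported above corresponds to \(\theta_1\), and the "large" error-correction coefficient refers to the magnitude of \(\gamma\) in the productivity equation.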
137

Complexité du décodage des codes stabilisateurs quantiques / Hardness of decoding stabilizer codes

Iyer Sridharan, Pavithran January 2014
This thesis deals with the study of the computational complexity of decoding stabilizer codes. The first three chapters contain all the necessary background to understand the main result of this thesis. First, we explain the necessary notions in computational complexity, introducing the P, NP and #P classes of problems, along with some examples intended for physicists. Then, we explain the decoding problem in classical error correction, for linear codes on the binary symmetric channel, and discuss the celebrated result of McEliece et al. in [1]. In the third chapter, we study the problem of quantum communication over Pauli channels. Here, using the stabilizer formalism, we discuss the concept of degenerate errors. The decoding problem for stabilizer codes which simply neglects the presence of degenerate errors is called quantum maximum likelihood decoding (QMLD), and it was shown to be NP-complete by Min-Hsiu Hsieh et al. in [2]. We focus on the problem of optimal decoding, called degenerate quantum maximum likelihood decoding (DQMLD), which accounts for the presence of degenerate errors. We highlight some instances of stabilizer codes where the presence of degenerate errors causes drastic variations between the performances of DQMLD and QMLD.
The main contribution of this thesis is to demonstrate that the optimal decoding problem for stabilizer codes is much harder than the previous results had anticipated. In the last chapter, we present our main result (Thm. 5.1.1), establishing that the optimal decoding problem for stabilizer codes is #P-complete. To prove this, we demonstrate that the problem of evaluating the weight enumerator of a binary linear code, which is #P-complete, can be reduced in polynomial time to the DQMLD problem (see Sec. 5.1). Our principal result is also presented as an article in [3], which is currently under review for publication in IEEE Transactions on Information Theory. In addition to the main result, we also show that, under certain conditions, the outputs of DQMLD and QMLD always agree. We consider the conditions developed here to be an improvement over those in [4, 5].
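For reference, the weight enumerator invoked in the reduction above is the standard object from classical coding theory (only the definition is recalled here; the reduction itself is in the thesis and in [3]):

\[
W_C(x) \;=\; \sum_{w=0}^{n} A_w\, x^{w},
\]

where \(C \subseteq \{0,1\}^n\) is a binary linear code and \(A_w\) counts the codewords of Hamming weight \(w\). Evaluating \(W_C\) is #P-complete, and the connection to DQMLD arises because degenerate decoding assigns to each logical error class the summed probability of an entire coset of physical errors, rather than the probability of a single most-likely error as in QMLD.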
138

Étude de la confusion résiduelle et erreur de mesure dans les modèles de régression / A study of residual confounding and measurement error in regression models

Fourati, Mariem January 2015
In this work, I studied linear and logistic regression analysis as methods for handling confounding factors, and used them to determine the effects of measurement error in a confounding variable.
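As a hedged illustration of the kind of effect studied here (the classical-error, single-covariate case; the thesis treats confounders in linear and logistic models), measuring a variable \(Z\) with additive error attenuates its estimated coefficient:

\[
W = Z + U,\quad U \perp Z,\ \mathbb{E}[U]=0
\qquad\Longrightarrow\qquad
\hat{\beta}_W \;\xrightarrow{\;p\;}\; \lambda\,\beta_Z,
\qquad
\lambda = \frac{\sigma_Z^2}{\sigma_Z^2 + \sigma_U^2},
\]

where \(\lambda \le 1\) is the reliability ratio. When the mismeasured variable is a confounder, adjusting for \(W\) instead of \(Z\) removes only part of the confounding, leaving residual confounding in the estimated exposure effect.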
139

Decoding and Turbo Equalization for LDPC Codes Based on Nonlinear Programming

Iltis, Ronald A. 10 1900
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / Decoding and Turbo Equalization (TEQ) algorithms based on the Sum-Product Algorithm (SPA) are well established for LDPC codes. However, there is increasing interest in linear and nonlinear programming (NLP)-based decoders, which may offer computational and performance advantages over the SPA. We present NLP decoders and Turbo equalizers based on an Augmented Lagrangian formulation of the decoding problem. The decoders update estimates of both the Lagrange multipliers and the transmitted codeword while solving an approximate quadratic programming problem. Simulation results show that the NLP decoder performance is intermediate between the SPA and bit-flipping algorithms. The NLP approach may thus be attractive in some applications, as it eliminates the tanh/atanh computations in the SPA.
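As a schematic of the augmented Lagrangian formulation described above (a generic form; the paper's exact relaxation of the parity-check constraints is not reproduced here), decoding is cast as minimizing a channel cost subject to relaxed parity constraints \(g_i(\mathbf{x}) = 0\):

\[
L_{\rho}(\mathbf{x}, \boldsymbol{\lambda}) \;=\; f(\mathbf{x}) \;+\; \sum_i \lambda_i\, g_i(\mathbf{x}) \;+\; \frac{\rho}{2}\sum_i g_i(\mathbf{x})^2,
\]

where \(f(\mathbf{x})\) is a channel-likelihood cost over the relaxed codeword \(\mathbf{x} \in [0,1]^n\), the \(g_i\) encode the LDPC parity checks, \(\boldsymbol{\lambda}\) are the Lagrange multipliers, and \(\rho > 0\) is a penalty weight. A typical scheme alternates approximate quadratic-programming updates of \(\mathbf{x}\) with multiplier updates \(\lambda_i \leftarrow \lambda_i + \rho\, g_i(\mathbf{x})\), which matches the abstract's description of jointly updating the multipliers and the codeword estimate.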
140

The use of rubrics and correction codes in the marking of Grade 10 Sesotho home language creative writing essays

Sibeko, Johannes January 2015
This study investigates the assessment of creative essays in Grade 10 Sesotho home language. Nine participants from a total of six schools took part in the research. For the purposes of this study, no literature could be found on the assessment of Sesotho essays (or essay writing in any other African language), either in general or specific to creative writing in South African high schools. The literature on English first-language and English second-language teaching was therefore used to theoretically contextualise the writing and assessment of creative writing essays in Sesotho home language in South African high schools. Data were collected through questionnaires completed by teachers, an analysis of a sample of marked scripts (representing above-average, average and below-average grades), and interviews with teachers (tailored to investigate the aspects of creativity and style in Sesotho creative writing essays). The researcher manually coded open-ended responses in the questionnaires. Interview responses were coded with Atlas.ti version 7. Frequencies were calculated for the close-ended questions in the questionnaire. Participating teachers perceived their assessment of essays using the rubric and the correction code to be standardised. This was evident in their awarding of marks; it was found that teachers generally award marks around 60%. However, their claim in the questionnaire that they use comments was contradicted by the lack of comments in the scripts analysed in this study. No relationship was observed between the correction-code frequencies in the marked essays and the marks awarded for specific sections of the rubric. This study recommends using the rubric on earlier drafts in the writing process. In addition, it proposes an expansion of the marking grid used, so that clearer feedback can be given to learners via the revised rubric. Given the participating teachers' evident lack of clarity on what style in Sesotho home language essays entails, it was inferred that teachers are not clear on the distinctions between the different essay assessment criteria in the rubric. A further recommendation was the development of a rubric guide that clearly indicates to teachers what each criterion of the rubric assesses.
