771

Bayesian inference for models with infinite-dimensionally generated intractable components

Villalobos, Isadora Antoniano January 2012 (has links)
No description available.
772

Auditory temporal contextual cueing

Doan, Lori Anne 05 September 2014 (has links)
When conducting a visual search task, participants respond faster to targets embedded in a repeated array of visual distractors than to targets embedded in a novel array, an effect referred to as contextual cueing. There are no reports of contextual cueing in audition, and generalizing this effect to the auditory domain would provide a new paradigm to investigate similarities, differences, and interactions in visual and auditory processing. In 4 experiments, participants identified a numerical target embedded in a sequence of alphabetic letter distractors. The training phase (Epochs 1, 2, and 3) of all experiments contained repeated sequences, and the testing phase (Epoch 4) contained novel sequences. Temporal contextual cueing was measured as slower response times in Epoch 4 than in Epoch 3. Repeated context was defined by the order of distractor identities and the rhythmic structure of the portion of the sequence immediately preceding the target digit, either together (Experiments 1 and 2) or separately (Experiments 3 and 4). An auditory temporal contextual cueing effect was obtained in Experiments 1, 2, and 4. This is the first report of an auditory temporal contextual cueing effect and, thus, it extends the contextual cueing effect to a new modality. This new experimental paradigm could be useful in furthering our understanding of fundamental auditory processes and could eventually be used to aid in diagnosing language deficits.
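The cueing measure described above is a per-participant response-time difference between the novel-sequence epoch and the last training epoch. A minimal sketch of that computation follows; the file name, column names and data layout are assumptions for illustration, not details from the thesis.

```python
import pandas as pd

# Hypothetical trial-level data: one row per correct trial, with columns
# 'subject', 'epoch' (1-4) and 'rt' (response time in ms). Layout is assumed.
trials = pd.read_csv("auditory_cueing_trials.csv")

# Mean response time per participant and epoch.
mean_rt = trials.groupby(["subject", "epoch"])["rt"].mean().unstack("epoch")

# Temporal contextual cueing: cost of removing the repeated context,
# i.e. Epoch 4 (novel sequences) minus Epoch 3 (repeated sequences).
cueing_effect = mean_rt[4] - mean_rt[3]
print(cueing_effect.describe())
```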
773

Statistical learning in brain damaged patients: A multimodal impairment

Shaqiri, Albulena January 2013 (has links)
Spatial neglect has mainly been described through its spatial deficits (such as attentional bias, disengagement deficit or exploratory motor behavior), but numerous other studies have reported non-spatial impairments in patients suffering from this disorder. In the present thesis, the non-spatial deficits in neglect are hypothesized to form a core impairment, which can be summarized as a difficulty in learning and benefiting from regularities in the environment. The different studies conducted and reported in the present thesis converge to support the hypothesis that neglect patients have difficulty interacting with environmental statistics. The first two studies, which tested the visual modality (Chapters 2 and 3), demonstrated that neglect patients fail to become faster at responding to targets that appear successively at the same location (position priming). This difficulty is also more general: neglect patients do not learn that some events occur more often than others, for example that a target has a high probability of being repeated in a specific region. These two studies showed that neglect patients are impaired in position priming and statistical learning, that is, they have difficulty benefiting from regularities presented in the visual domain. This difficulty may be explained by the patients' impairments in working memory or temporal processing. Several studies have reported the involvement of these two mechanisms in statistical learning: if patients tend to underestimate the time that a target is presented on the screen and have difficulty keeping its preceding location in memory, this translates into a difficulty benefiting from repetitions of the target position as well as from transition probabilities. To verify whether the priming and statistical learning impairments were specific to the visual modality or whether neglect patients present a multimodal difficulty in learning transition probabilities in general, brain-damaged patients were also tested in the auditory domain (Chapter 5), with a paradigm that has demonstrated statistical learning in infants. This study confirmed that brain-damaged patients are impaired in statistical learning in the auditory modality as well. The results of the studies reported in Chapters 2, 3, 4 and 5 converge to support the hypothesis that spatial neglect patients have difficulty benefiting from regularities in their environment. Nevertheless, this impairment is not irreversible, as demonstrated by a chronic neglect patient who was trained over three sessions distributed across three days (Chapter 2). Although his results in the first session were similar to those of the other patients, this patient's performance improved over the sessions, with faster reaction times for targets presented in the high-probability region (his contralesional side). Therefore, the priming and statistical learning effects investigated in this thesis are worth exploring further for their potential applications in rehabilitation.
774

Statistical downscaling prediction of sea surface winds over the global ocean

Sun, Cangjie 28 August 2012 (has links)
The statistical prediction of local sea surface winds at a number of locations over the global ocean (Northeast Pacific, Northwest Atlantic and Pacific, tropical Pacific and Atlantic) is investigated using a surface wind statistical downscaling model based on multiple linear regression. The predictands (mean and standard deviation of both vector wind components and wind speed) calculated from ocean buoy observations on daily, weekly and monthly temporal scales are regressed on upper level predictor fields (derived from zonal wind, meridional wind, wind speed, and air temperature) from reanalysis products. The predictor fields are subject to a combined Empirical Orthogonal Function (EOF) analysis before entering the regression model. It is found that in general the mean vector wind components are more predictable than mean wind speed in the North Pacific and Atlantic, while in the tropical Pacific and Atlantic the difference in predictive skill between mean vector wind components and wind speed is not substantial. The predictability of wind speed relative to vector wind components is interpreted by an idealized Gaussian model of the wind speed probability density function, which indicates that the wind speed is more sensitive to the standard deviations (which generally are not well predicted) than to the means of the vector wind components in the midlatitude region, and vice versa in the tropical region. This sensitivity of wind speed statistics to those of the vector wind components can be characterized by a simple scalar quantity θ = arctan(μ/σ) (in which μ is the magnitude of the average vector wind and σ is the isotropic standard deviation of the vector winds). The quantity θ is found to depend on season, geographic location and the averaging timescale of the wind statistics. While the idealized probability model does a good job of characterizing month-to-month variations in the mean wind speed based on those of the vector wind statistics, month-to-month variations in the standard deviation of speed are not well modelled. A series of Monte Carlo experiments demonstrates that this inconsistency in the characterization of the wind speed standard deviation is the result of differences in sampling variability between the vector wind and wind speed statistics.
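As a minimal sketch of the sensitivity parameter defined above, θ = arctan(μ/σ) can be computed directly from buoy records of the two vector wind components. The synthetic data and the choice of estimator for the isotropic standard deviation below are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def wind_speed_sensitivity_theta(u, v):
    """theta = arctan(mu / sigma): mu is the magnitude of the mean vector wind,
    sigma the isotropic standard deviation of the vector wind components."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    mu = np.hypot(u.mean(), v.mean())              # |mean vector wind|
    sigma = np.sqrt(0.5 * (u.var() + v.var()))     # isotropic std dev (assumed estimator)
    return np.arctan2(mu, sigma)

# Hypothetical daily buoy components (m/s). Large theta: speed statistics are
# controlled mainly by the mean vector wind (tropical regime); small theta:
# controlled by its variability (midlatitude regime).
rng = np.random.default_rng(0)
u = 6.0 + 1.5 * rng.standard_normal(365)   # steady trade-wind-like zonal component
v = -1.0 + 1.5 * rng.standard_normal(365)  # weaker meridional component
print("theta =", np.degrees(wind_speed_sensitivity_theta(u, v)), "degrees")
```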
775

Statistical mechanics of neural networks

Whyte, William John January 1995 (has links)
We investigate five different problems in the field of the statistical mechanics of neural networks. The first three problems involve attractor neural networks that optimise particular cost functions for storage of static memories as attractors of the neural dynamics. We study the effects of replica symmetry breaking (RSB) and attempt to find algorithms that will produce the optimal network if error-free storage is impossible. For the Gardner-Derrida network we show that full RSB is necessary for an exact solution everywhere above saturation. We also show that, no matter which cost function is optimised, if the distribution of stabilities has a gap then the Parisi replica ansatz that has been made is unstable. For the noise-optimal network we find a continuous transition to replica symmetry breaking at the AT line, in line with previous studies of RSB for different networks. The change to RSB1 improves the agreement between "experimental" and theoretical calculations of the local stability distribution ρ(λ) significantly. The effect on observables is smaller. We show that if the network is presented with a training set which has been generated from a set of prototypes by some noisy rule, but neither the noise level nor the prototypes are known, then the perceptron algorithm is the best initial choice to produce a network that will generalise well. If additional information is available, more sophisticated algorithms will be faster and give a smaller generalisation error. The remaining problems deal with attractor neural networks with separable interaction matrices which can be used (under parallel dynamics) to store sequences of patterns without the need for time delays. We look at the effects of correlations on a single-sequence network, and numerically investigate the storage capacity of a network storing an extensive number of patterns in such sequences. When correlations are implemented along with a term in the interaction matrix designed to suppress some of the effects of those correlations, the competition between the two produces a rich range of behaviour. Contrary to expectations, increasing the correlations and the operating temperature proves capable of improving the sequence-processing behaviour of the network. Finally, we demonstrate that a network storing a large number of sequences of patterns using a Hebb-like rule can store approximately twice as many patterns as the network trained with the Hebb rule to store individual patterns.
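Sequence storage under parallel dynamics, as mentioned at the end of the abstract, is conventionally achieved with an asymmetric Hebb-like rule in which each pattern points to its successor, J = (1/N) Σ_μ ξ^{μ+1} (ξ^μ)ᵀ. The sketch below is a generic illustration of that idea with random patterns; it is not the specific separable interaction matrices or correlated patterns studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 500, 10                          # neurons, patterns in one stored sequence
xi = rng.choice([-1, 1], size=(P, N))   # random +/-1 patterns

# Asymmetric Hebb-like rule: each pattern drives the network towards its
# successor (cyclically), so parallel dynamics steps through the sequence.
J = sum(np.outer(xi[(mu + 1) % P], xi[mu]) for mu in range(P)) / N

s = xi[0].copy()                        # initialise on the first pattern
for t in range(1, P):
    s = np.sign(J @ s)                  # parallel (synchronous) update
    s[s == 0] = 1
    overlap = s @ xi[t] / N             # overlap with the expected next pattern
    print(f"step {t}: overlap with pattern {t} = {overlap:.2f}")
```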
776

Karl Pearson : evolutionary biology and the emergence of a modern theory of statistics (1884-1936)

Magnello, Eileen January 1994 (has links)
This thesis examines the development of modern statistical theory and its emergence as a highly specialised mathematical discipline at the end of the nineteenth century. The statistical work of the mathematician and statistician Karl Pearson (1857-1936), who almost singularly created the modern theory of statistics, is the focus of the thesis. The impact of the statistical and experimental work of the Darwinian zoologist W.F.R. Weldon (1860-1906), on the emergence and construction of Pearsonian statistical innovation, is central to the arguments developed in this thesis. Contributions to the Pearsonian corpus from such statisticians as Francis Ysidro Edgeworth (1845-1926), Francis Galton (1822-1911), and George Udny Yule (1871-1951) are also addressed. The scope of the thesis does not involve a detailed account of every technical contribution that Pearson made to statistics. Instead, it provides a unifying assessment of Pearson's most seminal and innovative contributions to modern statistical theory devised in the Biometric School, at University College London, from 1892 to 1903. An assessment of Pearson's statistical contributions also entails a comprehensive examination of the two separate methodologies he developed in the Drapers' Biometric Laboratory (from 1903 to 1933) and in the Galton Eugenics Laboratory (from 1907 to 1933). This thesis arises, in part, from a desire to reassess the state of the historiography of Pearsonian statistics over the course of the last half century. Some of the earliest work on Pearson came from his former students, who emphasised his achievements as a statistician, usually from the perspective of the state of the discipline in their time. The conventional view has presumed that Pearson's relationship with Galton, and thus Galton's work on simple correlation, simple regression, inheritance and eugenics, provided the impetus to Pearson's own statistical work. This approach, which focuses on a part of Pearson's statistical work, has provided minimal insight into the complexity of the totality of Pearsonian statistics. Another approach, derived from the sociology of knowledge in the 1970s, espoused this conventional view and linked Pearson's statistical work to eugenics by placing his work in a wider context of social and political ideologies. This has usually entailed frequent recourse to Pearson's social and political views vis-a-vis his popular writings on eugenics. This approach, whilst indicating the political and social dimensions of science, has produced a rather mono-causal or uni-dimensional view of history. The crucial question of the relation between his technical contributions and his ideology in the construction of his statistical methods has not yet been adequately considered. This thesis argues that the impetus to Pearson's earliest statistical work was given by his efforts to tackle the problems of asymmetrical biological distributions (arising from Weldon's dimorphic distribution of the female shore crab in the Bay of Naples). Furthermore, it argues that the fundamental developments and construction of Pearsonian statistics arose from the Darwinian biological concepts at the centre of Weldon's statistical and experimental work on marine organisms in Naples and in Plymouth.
Charles Darwin's recognition that species comprised different sets of 'statistical' populations (rather than consisting of 'types' or 'essences') led to a reconceptualisation of statistical populations by Pearson and Weldon which, in turn, led to their attempts to find a statistical resolution of the pre-Darwinian Aristotelian essentialistic concept of species. Pearson's statistical developments thus involved a greater consideration of speciation and of Darwin's theory of natural selection than hitherto considered. This has, therefore, entailed a reconstruction of the totality of Pearsonian statistics to identify the mathematical and biological developments that underpinned his work and to determine other sources of influence in this development. Pearson's writings are voluminous: as principal author he published more than 540 papers and books, of which 361 are statistical. The other publications include 67 literary and historical writings, 49 eugenics publications, 36 pure mathematics and physics papers and 27 reports on university matters. He also published at least 111 letters, notes and book reviews. His collected papers and letters at University College London consist of 235 boxes of family papers, scientific manuscripts and 14,000 letters. One of the most extensive sets of letters in the collection is that of W.F.R. Weldon and his wife, Florence Joy Weldon, consisting of nearly 1,000 pieces of correspondence. No published work on Pearson to date has properly utilised the correspondence between Pearson and the Weldons. Particular emphasis has been given to this collection as these letters indicate (in tandem with Pearson's Gresham lectures and the seminal statistical published papers) that Pearson's earliest statistical work started in 1892 (rather than 1895-1896) and that Weldon's influence and work during these years was decisive in the development and advancement of Pearsonian statistics. The approach adopted in this thesis is essentially that of an intellectual biography which is thematic and is broadly chronological. This approach has been adopted to make greater use of primary sources in an attempt to provide a more historically sensitive interpretation of Pearson's work than has been used previously. It has thus been possible to examine these three (as yet unexamined) key Pearsonian developments: (1) his earliest statistical work (from 1892 to 1895), (2) his joint biometrical projects with Weldon (from 1898-1906) and a shift in the focus of research in the Drapers' Biometric Laboratory following Weldon's death in 1906 and (3) the later work in the twentieth century when he established the two laboratories which were underpinned by two separate methodologies. The arguments, which follow a chronological progression, have been built around Darwin's ideas of biological variation, 'statistical' populations, his theory of natural selection and Galton's law of ancestral inheritance. The first two chapters provide background material to the arguments developed in the thesis. Weldon's use of correlation (for the identification of species) in 1889 is examined in Chapter III. It is argued that Pearson's analysis of Weldon's dimorphic distribution led to their work on speciation, which in turn led to Pearson's earliest innovative statistical work.
Weldon's most productive research with Pearson, discussed in Chapter IV, came to fruition when he showed empirical evidence of natural selection by detecting disturbances (or deviations) in the distribution from normality as a consequence of differential mortality rates. This research enabled Pearson to further develop his theory of frequency distributions. The central part of the thesis broadens out to examine further issues not adequately examined. Galton's statistical approach to heredity is addressed in Chapter V, and it is shown that Galton adumbrated Pearson's work on multiple correlation and multiple regression with his law of ancestral heredity. This work, in conjunction with Weldon's work on natural selection, led to Pearson's introduction of the use of determinantal matrix algebra into statistical theory in 1896: this (much neglected) development was pivotal in the professionalisation of the emerging discipline of mathematical statistics. Pearson's work on goodness of fit testing provided the machinery for reconstructing his most comprehensive statistical work which spanned four decades and encompassed his entire working life as a statistician. Thus, a greater part of Pearsonian statistics has been examined than in previous studies.
777

Statistical mechanics of fluids

Severin, E. S. January 1981 (has links)
The statistical mechanics of the interfacial region is studied using the Monte Carlo and molecular dynamics simulation techniques. The penetrable-sphere model of the liquid/vapour interface is simulated using the Monte Carlo method. The pressure equation of state is calculated in the one-phase region and compared to analytic virial expansions of the system. Density profiles of the gas/liquid surface in the two-phase region are calculated and are compared to profiles solved in the mean-field approximation. The effect of the profile near a hard wall is investigated and as a consequence the theory is modified to account for a hard wall. The theory agrees well with the computer result. This is a simple model for adsorption of a gas at a solid surface. A model for methane adsorbed on graphite is proposed. A number of simplifying assumptions are made. The surface is assumed to be perfectly smooth and rigid, and quantum effects are neglected. An effective site-site pair potential for the methane-graphite interaction is adjusted to fit the rotational barriers at 0K. The isosteric enthalpy at zero coverage is predicted in the range 0K to 200K, by averaging the configurational energy during a molecular dynamics simulation of one methane molecule. The surface second virial coefficients are calculated in the range 225K to 300K and agree with the experimental measurements. The effective pairwise potential predicts the height of the monolayer above the surface and the vibrational frequency against the surface. The translational and rotational behaviour of a single methane molecule are examined. Solid √3 × √3 epitaxial methane is studied at a constant coverage of θ = 0.87 by molecular dynamics simulation. The specific heat and configurational energy are monitored. A slow phase transition occurs between 0K and 30K and a sharp transition is observed at 90K. Calculation of the centre-centre distribution functions and order parameters indicates the first transition is due to a slow rotational phase change. At 90K some molecules evaporate from the surface and the remaining bound molecules relax into a 2-d liquid. Between 10K and 25K the adsorbed methane floats across the surface and the question remains open whether this phenomenon is an artifact of the model system or does occur in nature. The dynamical behaviour of adsorbed methane is compared to incoherent inelastic neutron scattering. The principal peaks in the self part of the incoherent structure factor S_s(Q,ω) should correspond to the peaks in the Fourier transforms of the velocity and angular velocity auto-correlation functions. The peaks calculated from the Fourier transform of the auto-correlation functions agree with all the assignments in the experiments. The reorientational motion in the monolayer is monitored and the reorientational auto-correlation functions characterize the slow phase transition from 0K to 30K. Three methane molecules are scattered on top of the θ = 0.87 monolayer at 30K. Reorientational correlation functions are compared for the single adsorbed molecule, the monolayer and a few particles in the bilayer. Rotation is less hindered in the monolayer than for a single adsorbed molecule and least hindered in the second layer. Adsorbed methane is studied at coverages of θ < 0.87 over a wide range of temperature in order to unravel various conflicting solid and liquid phases predicted by experiment.
By careful monitoring of the structure via changes in the specific heat, the distribution functions and order parameters, a liquid/gas coexistence is not observed in the region 56K to 75K. This result is confirmed by calculating the self diffusion coefficients over two isotherms at 65K and 95K. The diffusion coefficients decrease with increasing coverage over both isotherms. If liquid and gas coexist, the diffusion coefficient should not change with increasing coverage. The statistical mechanical expression for the spreading pressure of an adsorbed fluid is derived and reported over a wide range of temperature and coverage. Experimental techniques are not as yet sufficiently highly developed to measure this quantity directly. An expression for the coherent neutron scattering structure factor for a model of liquid benzene adsorbed on graphite is derived. This expression is a function of the 2-dimensional centre-centre distribution function and we solve the Ornstein-Zernike equation in the Percus-Yevick approximation to obtain the 2-d distribution functions for hard discs. Agreement with present experimental results is reasonable, but a more highly orientated substrate needs to be used in experiment before a more exact comparison can be made.
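The comparison with inelastic neutron scattering described above rests on Fourier transforms of velocity auto-correlation functions computed from the simulation trajectory. A minimal sketch of that analysis is given below; the array layout, file name and time step are assumptions, and the same routine would be applied to angular velocities for the rotational spectrum.

```python
import numpy as np

def vacf_spectrum(vel, dt):
    """vel: (n_steps, n_molecules, 3) centre-of-mass velocities from an MD run,
    dt: time step. Returns lag times, the normalised velocity auto-correlation
    function, and its Fourier transform (frequencies, spectral amplitude)."""
    n_steps = vel.shape[0]
    n_lag = n_steps // 2
    vacf = np.array([
        np.mean(np.sum(vel[: n_steps - lag] * vel[lag:], axis=-1))
        for lag in range(n_lag)
    ])
    vacf /= vacf[0]                           # normalise so C(0) = 1
    spectrum = np.abs(np.fft.rfft(vacf))      # peaks mark vibrational/librational frequencies
    freq = np.fft.rfftfreq(n_lag, d=dt)
    return np.arange(n_lag) * dt, vacf, freq, spectrum

# Hypothetical usage with a stored trajectory (shape and file name are assumptions):
# vel = np.load("methane_monolayer_velocities.npy")
# t, c, f, s = vacf_spectrum(vel, dt=2.0e-15)
```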
778

Dynamics on scale-invariant structures

Christou, Alexis January 1987 (has links)
We investigate dynamical processes on random and regular fractals. The (static) problem of percolation in the semi-infinite plane introduces many pertinent ideas including real space renormalisation group (RSRG) fugacity transformations and scaling forms. We study the percolation probability to determine the surface critical behaviour and to establish exponent relations. The fugacity approach is generalised to study random walks on diffusion-limited aggregates (DLA). Using regular and random models, we calculate the walk dimensionality and demonstrate that it is consistent with a conjecture by Aharony and Stauffer. It is shown that the kinetically grown DLA is in a distinct dynamic universality class to lattice animals. Similarly, the speculation of Helman-Coniglio-Tsallis regarding diffusion on self-avoiding walks (SAWs) is shown to be incorrect. The results are corroborated by an exact enumeration analysis of the internal structure of SAWs. A 'spin' and field theoretic Hamiltonian formulation for the conformational and resistance properties of random walks is presented. We consider Gaussian random walks, SAWs, spiral SAWs and valence walks. We express resistive susceptibilities as correlation functions and hence ε-expansions are calculated for the resistance exponents. For SAWs, the local crosslinks are shown to be irrelevant and we calculate corrections to scaling. A scaling description is introduced into an equation-of-motion method in order to study spin wave damping in d-dimensional isotropic Heisenberg ferro-, antiferro- and ferrimagnets near p_c. Dynamic scaling is shown to be obeyed by the Lorentzian spin wave response function and lifetime. The ensemble of finite clusters and multicritical behaviour is also treated. In contrast, the relaxational dynamics of the dilute anisotropic Heisenberg model is shown to violate conventional dynamic scaling near the percolation bicritical point but satisfies instead a singular scaling behaviour arising from activation of Bloch walls over percolation cluster energy barriers.
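The walk dimensionality discussed above is defined through the scaling of the mean-squared displacement, ⟨R²(t)⟩ ~ t^{2/d_w}. As a rough illustration of how such an exponent is estimated numerically (and only that), the sketch below runs 'blind-ant' random walks on a site-diluted square lattice near the percolation threshold, a stand-in structure rather than the DLA clusters or SAWs analysed in the thesis, and fits the log-log slope.

```python
import numpy as np

rng = np.random.default_rng(2)
L, p_c = 256, 0.5927                    # lattice size; 2-d site-percolation threshold
occupied = rng.random((L, L)) < p_c
occupied[L // 2, L // 2] = True         # ensure the walkers have a starting site

def blind_ant_msd(steps, walkers=200):
    """Mean squared displacement of blind-ant walks started at the lattice centre.
    (Simplification: walks are not restricted to the incipient spanning cluster.)"""
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    r2 = np.zeros(steps)
    for _ in range(walkers):
        x = y = L // 2
        for t in range(steps):
            dx, dy = moves[rng.integers(4)]
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L and occupied[nx, ny]:
                x, y = nx, ny           # blind ant: step only onto occupied sites
            r2[t] += (x - L // 2) ** 2 + (y - L // 2) ** 2
    return r2 / walkers

steps = 2000
r2 = blind_ant_msd(steps)
t = np.arange(1, steps + 1)
slope, _ = np.polyfit(np.log(t[100:]), np.log(r2[100:] + 1e-9), 1)
print("anomalous diffusion exponent 2/d_w ~", slope, "-> d_w ~", 2.0 / slope)
```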
779

Sound transmission through lightweight parallel plates

Smith, R. Sean January 1997 (has links)
This thesis examines the transmission of sound through lightweight parallel plates (plasterboard double wall partitions and timber floors). Statistical energy analysis was used to assess the importance of individual transmission paths and to determine the overall performance. Several different theoretical models were developed, the choice depending on the frequency range of interest and the method of attachment of the plates, whether point or line, to the structural frame. It was found that for a line connected double wall there was very good agreement between the measured and predicted results, where the dominant transmission path was through the frame and the cavity path was weak. The transition frequency at which the coupling changes from a line to a point connection occurs when the first half wavelength fits between the spacings of the nails. For point connected double walls, where the transmission through the frame was weaker than for line connection, the cavity path was dominant unless there was absorption present. When the cavity was sufficiently deep, such that it behaved more like a room, the agreement between the measured and predicted results was good. As the cavity depth decreased, the plates of the double wall were closer together and the agreement between the measured and predicted results was not as good. Detailed experiments were carried out to determine the transmission into the double wall cavities and isolated cavities. It was found that the transmission into an isolated cavity could be predicted well. However, for transmission into double wall cavities the existing theories could not predict transmission accurately when the cavity depth was small. Extensive parametric surveys were undertaken to analyse changes to the sound transmission through these structures when the material or design parameters were altered. The SEA models are able to identify the dominant mechanisms of transmission and will be a useful tool in the design of lightweight partitions and timber floors.
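In statistical energy analysis the subsystem energies follow from a power balance between input power, internal dissipation and coupling losses. The sketch below solves that balance for a generic two-subsystem case (a source plate and a receiving plate coupled through one path); all loss factors, modal densities and the input power are invented illustrative numbers, not values from the thesis.

```python
import numpy as np

# Two-subsystem SEA power balance in one frequency band:
#   P1 = omega * (eta1 * E1 + eta12 * E1 - eta21 * E2)
#   0  = omega * (eta2 * E2 + eta21 * E2 - eta12 * E1)
# with the reciprocity relation n1 * eta12 = n2 * eta21.

omega = 2 * np.pi * 1000.0     # band centre frequency, rad/s (assumed)
eta1, eta2 = 0.02, 0.02        # internal loss factors (assumed)
eta12 = 0.005                  # coupling loss factor, plate 1 -> plate 2 (assumed)
n1, n2 = 0.1, 0.1              # modal densities (assumed)
eta21 = eta12 * n1 / n2        # reciprocity
P1 = 1.0                       # input power to the source plate, W (assumed)

A = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12, eta2 + eta21]])
E1, E2 = np.linalg.solve(A, [P1, 0.0])
print("energy ratio E2/E1 =", E2 / E1)   # measure of transmission between the plates
```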
780

Non-linear projection to latent structures

Baffi, Giuseppe January 1998 (has links)
This thesis focuses on the study of multivariate statistical regression techniques which have been used to produce non-linear empirical models of chemical processes, and on the development of a novel approach to non-linear Projection to Latent Structures regression. Empirical modelling relies on the availability of process data and sound empirical regression techniques which can handle variable collinearities, measurement noise, unknown variable and noise distributions and high data set dimensionality. Projection based techniques, such as Principal Component Analysis (PCA) and Projection to Latent Structures (PLS), have been shown to be appropriate for handling such data sets. The multivariate statistical projection based techniques of PCA and linear PLS are described in detail, highlighting the benefits which can be gained by using these approaches. However, many chemical processes exhibit severely non-linear behaviour and non-linear regression techniques are required to develop empirical models. The derivation of an existing quadratic PLS algorithm is described in detail. The procedure for updating the model parameters which is required by the quadratic PLS algorithms is explored and modified. A new procedure for updating the model parameters is presented and is shown to perform better than the existing algorithm. The two procedures have been evaluated on the basis of the performance of the corresponding quadratic PLS algorithms in modelling data generated with a strongly non-linear mathematical function and data generated with a mechanistic model of a benchmark pH neutralisation system. Finally, a novel approach to non-linear PLS modelling is presented, combining the general approximation properties of sigmoid neural networks and radial basis function networks with the new weight-updating procedure within the PLS framework. These algorithms are shown to outperform existing neural network PLS algorithms and the quadratic PLS approaches. The new neural network PLS algorithms have been evaluated on the basis of their performance in modelling the same data used to compare the quadratic PLS approaches.
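As a rough illustration of the kind of model discussed above, and not of the specific algorithms developed in the thesis, the sketch below extracts one PLS latent variable for a single response and fits the inner score-to-response relation with a quadratic polynomial instead of a straight line; the function name and synthetic data are assumptions.

```python
import numpy as np

def quadratic_pls_component(X, y):
    """One latent variable: outer projection as in linear PLS (single-response case),
    with the inner relation between the X-scores and the response fitted by a
    quadratic polynomial rather than a straight line."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    w = Xc.T @ yc
    w /= np.linalg.norm(w)              # X-weight vector
    t = Xc @ w                          # X-scores (latent variable)
    inner = np.polyfit(t, yc, 2)        # quadratic inner relation y ~ f(t)
    p = Xc.T @ t / (t @ t)              # X-loadings (used to deflate X for further components)
    return w, p, inner

# Synthetic example with a curved score-to-response relation
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
z = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = z + 0.5 * z**2 + 0.1 * rng.standard_normal(200)
w, p, inner = quadratic_pls_component(X, y)
y_hat = np.polyval(inner, (X - X.mean(axis=0)) @ w) + y.mean()
print("R^2 of one quadratic latent variable:", 1 - np.var(y - y_hat) / np.var(y))
```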
