721 |
Accommodating temporal semantics in data mining and knowledge discovery / Rainsford, Chris P. January 1999 (has links)
Thesis (PhD) -- University of South Australia, 1999
|
722 |
Secure location services: Vulnerability analysis and provision of security in location systems / Pozzobon, O. Unknown Date (has links)
No description available.
|
723 |
On semiparametric regression and data mining / Ormerod, John T, Mathematics & Statistics, Faculty of Science, UNSW January 2008 (has links)
Semiparametric regression is playing an increasingly large role in the analysis of datasets exhibiting various complications (Ruppert, Wand & Carroll, 2003). In particular, semiparametric regression plays a prominent role in the area of data mining, where such complications are numerous (Hastie, Tibshirani & Friedman, 2001). In this thesis we develop fast, interpretable methods addressing many of the difficulties associated with data mining applications, including model selection, missing value analysis, outliers and heteroscedastic noise. We focus on function estimation using penalised splines via mixed model methodology (Wahba, 1990; Speed, 1991; Ruppert et al., 2003). In dealing with the difficulties associated with data mining applications, many of the models we consider deviate from typical normality assumptions. These models lead to likelihoods involving analytically intractable integrals. Thus, in keeping with the aim of speed, we seek analytic approximations to such integrals, which are typically faster than numeric alternatives. These analytic approximations include not only popular penalised quasi-likelihood (PQL) approximations (Breslow & Clayton, 1993) but also variational approximations. Originating in physics, variational approximations are a class of approximations relatively new to statistics which are simple, fast, flexible and effective. They have recently been applied to statistical problems in machine learning, where they are rapidly gaining popularity (Jordan, Ghahramani, Jaakkola & Saul, 1999; Corduneanu & Bishop, 2001; Ueda & Ghahramani, 2002; Bishop & Winn, 2003; Winn & Bishop, 2005). We develop variational approximations to generalized linear mixed models (GLMMs), Bayesian GLMMs, simple missing values models, and outlier and heteroscedastic noise models; to the best of our knowledge, these approximations are new. These methods are quite effective and extremely fast, with fitting taking minutes if not seconds on a typical 2008 computer. We also make a contribution to variational methods themselves. Variational approximations often underestimate the variance of posterior densities in Bayesian models (Humphreys & Titterington, 2000; Consonni & Marin, 2004; Wang & Titterington, 2005). We develop grid-based variational posterior approximations. These approximations combine a sequence of variational posterior approximations, can be extremely accurate and are reasonably fast.
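To make the penalised-spline machinery referred to above concrete, the following is a minimal sketch, not code from the thesis, of fitting a scatterplot smoother with a truncated-linear spline basis and a ridge penalty, in the spirit of Ruppert, Wand & Carroll (2003). The function names, knot placement and GCV-based smoothing-parameter selection are illustrative assumptions; a full mixed-model treatment would instead estimate the smoothing parameter via (restricted) maximum likelihood.

```python
# Minimal sketch (assumptions, not the thesis code): penalised-spline regression
# with a truncated-linear basis, a ridge penalty on the knot coefficients and a
# smoothing parameter chosen by generalised cross-validation (GCV).
import numpy as np

def truncated_linear_basis(x, knots):
    """Design matrix [1, x, (x - k)_+ for each knot k]."""
    X = np.column_stack([np.ones_like(x), x])
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)
    return np.hstack([X, Z])

def fit_penalised_spline(x, y, num_knots=20, lambdas=np.logspace(-4, 4, 41)):
    # Interior quantiles of x as knots (an illustrative choice).
    knots = np.quantile(x, np.linspace(0, 1, num_knots + 2)[1:-1])
    C = truncated_linear_basis(x, knots)
    # Penalise only the spline (knot) coefficients, not the intercept or slope.
    D = np.diag([0.0, 0.0] + [1.0] * num_knots)
    best = None
    n = len(y)
    for lam in lambdas:
        A = C.T @ C + lam * D
        beta = np.linalg.solve(A, C.T @ y)
        fitted = C @ beta
        # GCV score: n * RSS / (n - tr(H))^2, with H the smoother matrix.
        tr_H = np.trace(np.linalg.solve(A, C.T @ C))
        gcv = n * np.sum((y - fitted) ** 2) / (n - tr_H) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, beta, knots)
    return best

# Toy usage on simulated data.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
gcv, lam, beta, knots = fit_penalised_spline(x, y)
```

The quadratic penalty on the knot coefficients is exactly what identifies them as random effects in the mixed-model formulation, which is the connection the penalised-spline approach exploits.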
|
724 |
Design and evaluation of database access paths / Keen, Christopher David January 1978 (has links)
196 leaves : ill., diagrs., graphs, tables ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Thesis (Ph.D.)--University of Adelaide, Dept. of Computing Science, 1979
|
725 |
Computer aided optimisation of combinational logic / Nettle, Christopher William January 1979 (has links)
Typescript (photocopy) / vii, 190 leaves ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Thesis (Ph.D.) Dept. of Electrical and Electronic Engineering, University of Adelaide, 1979
|
726 |
The structure of the background errors in a global wave model / Greenslade, Diana J. M. January 2004 (has links)
Title page, table of contents and abstract only. The complete thesis in print form is available from the University of Adelaide Library. / One of the main limitations to current wave data assimilation systems is the lack of an accurate representation of the structure of the background errors. For example, the current operational wave data assimilation system at the Australian Bureau of Meteorology (BoM) prescribes globally uniform background error correlations of Gaussian shape with a length scale of 300 km, and the error variance of both the background and observation errors is defined to be 0.25 m². This thesis describes an investigation into the determination of the background errors in a global wave model. There are two methods that are commonly used to determine background errors: the observational method and the 'NMC method'. The observational method is the main tool used in this thesis, although the 'NMC method' is considered also. The observational method considers correlations of the differences between observations and the background, in this case, the modelled Significant Wave Height (SWH) field. The observations used are satellite altimeter estimates of SWH. Before applying the method, the effect of the irregular satellite sampling pattern is examined. This is achieved by constructing a set of anomaly correlations from modelled wave fields. The modelled wave fields are then sampled at the locations of the altimeter observations and the anomaly correlations are recalculated from the simulated altimeter data. The results are compared to the original anomaly correlations. It is found that, in general, the altimeter sampling pattern underpredicts the spatial scale of the anomaly correlation. Observations of SWH from the ERS-2 altimeter are used in this thesis. To ensure that the observations used are of the highest quality possible, a validation of the European Remote Sensing Satellite 2 (ERS-2) SWH observations is performed. The altimeter data are compared to waverider buoy observations over a time period of approximately 4.5 years. With a set of 2823 co-located SWH estimates, it is found that, in general, the altimeter overestimates low SWH and underestimates high SWH. A two-branched linear correction to the altimeter data is found, which reduces the overall rms error in SWH to approximately 0.2 m. Results from the previous sections are then used to calculate the background error correlations. Specifically, correlations of the differences between modelled SWH and the bias-corrected ERS-2 data are calculated. The irregular sampling pattern of the altimeter is accounted for by adjusting the correlation length scales according to latitude and the calculated length scale. The results show that the length scale of the background errors varies significantly over the globe, with the largest scales at low latitudes and the shortest scales at high latitudes. Very little seasonal or year-to-year variability is detected. Conversely, the magnitude of the background error variance is found to have considerable seasonal and year-to-year variability. By separating the altimeter ground tracks into ascending and descending tracks, it is possible to examine, to a limited extent, whether any anisotropy exists in the background errors. Some of the areas on the globe that exhibit the most anisotropy are the Great Australian Bight and the North Atlantic Ocean.
The background error correlations are also briefly examined via the 'NMC method', i.e., by considering differences between SWH forecasts of different ranges valid at the same time. It is found that the global distribution of the length scale of the error correlation is similar to that found using the observational method. It is also shown that the size of the correlation length scale increases as the forecast period increases. The new background error structure that has been developed is incorporated into a data assimilation system and evaluated over two month-long time periods. Compared to the current operational system at the BoM, it is found that this new structure improves the skill of the wave model by approximately 10%, with considerable geographical variability in the amount of improvement. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1113813 / Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2004
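As a rough illustration of the observational (innovation) method used in this work, the sketch below (not the thesis code) bins products of observation-minus-background SWH differences by separation distance and selects the Gaussian length scale that best matches the binned correlations. The function names, bin edges, haversine distance and candidate length-scale grid are assumptions for illustration; the thesis additionally corrects for the altimeter sampling pattern and allows the scale to vary with latitude.

```python
# Minimal sketch of the observational method for background-error correlations:
# bin products of centred innovations by separation distance and fit the Gaussian
# correlation model exp(-d^2 / (2 L^2)) to estimate the length scale L.
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometres between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(np.radians(lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def binned_innovation_correlation(lat, lon, innov, bin_edges_km):
    """Correlation of (centred) innovations as a function of pairwise separation."""
    bin_edges_km = np.asarray(bin_edges_km, dtype=float)
    innov = innov - np.mean(innov)            # centre the innovations
    i, j = np.triu_indices(len(innov), k=1)   # all distinct pairs
    d = great_circle_km(lat[i], lon[i], lat[j], lon[j])
    prod = innov[i] * innov[j]
    var = np.var(innov)
    corr = np.full(len(bin_edges_km) - 1, np.nan)
    for b, (lo, hi) in enumerate(zip(bin_edges_km[:-1], bin_edges_km[1:])):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            corr[b] = np.mean(prod[mask]) / var
    centres = 0.5 * (bin_edges_km[:-1] + bin_edges_km[1:])
    return centres, corr

def fit_gaussian_length_scale(centres, corr, scales_km=np.arange(50, 1001, 10)):
    """Choose the length scale whose Gaussian curve best fits the binned correlations."""
    ok = ~np.isnan(corr)
    sse = [np.sum((corr[ok] - np.exp(-centres[ok] ** 2 / (2.0 * L ** 2))) ** 2)
           for L in scales_km]
    return scales_km[int(np.argmin(sse))]
```

A length scale estimated this way, varying with location, is the kind of quantity that replaces the globally uniform 300 km Gaussian correlation described in the abstract.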
|
730 |
Video analysis in MPEG compressed domain / Gu, Lifang January 2003 (has links)
The amount of digital video has been increasing dramatically due to the technology advances in video capturing, storage, and compression. The usefulness of vast repositories of digital information is limited by the effectiveness of the access methods, as shown by the Web explosion. The key issues in addressing the access methods are those of content description and of information space navigation. While textual documents in digital form are somewhat self-describing (i.e., they provide explicit indices, such as words and sentences, that can be directly used to categorise and access them), digital video does not provide such an explicit content description. In order to access video material in an effective way, without looking at the material in its entirety, it is therefore necessary to analyse and annotate video sequences, and provide an explicit content description targeted to the user needs. Digital video is a very rich medium, and the characteristics in which users may be interested are quite diverse, ranging from the structure of the video to the identity of the people who appear in it, their movements and dialogues and the accompanying music and audio effects. Indexing digital video, based on its content, can be carried out at several levels of abstraction, ranging from indices like the video program name and the name of the subject to much lower-level aspects of video, like the location of edits and motion properties of video. Manual video indexing requires the sequential examination of the entire video clip. This is a time-consuming, subjective, and expensive process. As a result, there is an urgent need for tools to automate the indexing process. In response to such needs, various video analysis techniques from the research fields of image processing and computer vision have been proposed to parse, index and annotate the massive amount of digital video data. However, most of these video analysis techniques have been developed for uncompressed video. Since most video data are stored in compressed formats for efficiency of storage and transmission, it is necessary to perform decompression on compressed video before such analysis techniques can be applied. Two consequences of having to first decompress before processing are incurring computation time for decompression and requiring extra auxiliary storage. To save on the computational cost of decompression and lower the overall size of the data which must be processed, this study attempts to make use of features available in compressed video data and proposes several video processing techniques operating directly on compressed video data. Specifically, techniques of processing MPEG-1 and MPEG-2 compressed data have been developed to help automate the video indexing process. This includes the tasks of video segmentation (shot boundary detection), camera motion characterisation, and highlights extraction (detection of skin-colour regions, text regions, moving objects and replays) in MPEG compressed video sequences. The approach of performing analysis on the compressed data has the advantage of dealing with a much reduced data size and is therefore suitable for computationally intensive low-level operations. Experimental results show that most analysis tasks for video indexing can be carried out efficiently in the compressed domain.
Once intermediate results, which are dramatically reduced in size, are obtained from the compressed domain analysis, partial decompression can be applied to enable high resolution processing to extract high level semantic information.
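As one concrete example of the kind of compressed-domain processing described above, the sketch below (not the thesis implementation) detects abrupt shot boundaries from the luminance DC coefficients of MPEG I-frames, which already form a reduced-size thumbnail of each frame and so can be compared without full decompression. The function names, histogram size, cut threshold and synthetic test data are illustrative assumptions.

```python
# Minimal sketch of compressed-domain shot boundary detection: compare histograms
# of the luminance DC coefficients of successive frames and flag a cut when the
# histogram difference jumps. Real DC "thumbnails" would come from parsing the
# MPEG stream; here synthetic arrays stand in for them.
import numpy as np

def dc_histogram(dc_image, bins=64, value_range=(0, 255)):
    """Normalised histogram of a frame's luminance DC coefficients."""
    hist, _ = np.histogram(dc_image, bins=bins, range=value_range)
    return hist / hist.sum()

def detect_shot_boundaries(dc_frames, threshold=0.4):
    """Return frame indices where the DC-histogram difference suggests a cut."""
    boundaries = []
    prev = dc_histogram(dc_frames[0])
    for idx in range(1, len(dc_frames)):
        cur = dc_histogram(dc_frames[idx])
        # Half the L1 distance between normalised histograms lies in [0, 1];
        # large values indicate an abrupt change of scene content.
        if 0.5 * np.abs(cur - prev).sum() > threshold:
            boundaries.append(idx)
        prev = cur
    return boundaries

# Toy usage with two synthetic "scenes" of differing brightness.
rng = np.random.default_rng(1)
scene_a = [rng.normal(80, 5, (36, 44)).clip(0, 255) for _ in range(10)]
scene_b = [rng.normal(180, 5, (36, 44)).clip(0, 255) for _ in range(10)]
print(detect_shot_boundaries(scene_a + scene_b))  # expect a boundary near frame 10
```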
|