1

Resource-aware cloud-based elastic content delivery network with cost minimisation and QoS guarantee

Blair, Alistair January 2014 (has links)
The distribution of digital multimedia, namely audio, video, documents, images and Web pages, is commonplace across today's Internet. The successful distribution of such multimedia, in particular video, can be achieved using a number of proven architectures such as Internet Protocol Television (IPTV) and Over-The-Top (OTT) services. The rapid uptake of multimedia streaming across the plethora of Internet-enabled devices that both architectures encompass has created a need to combine aspects of the two in order to maximise the scope and reach of this multimedia. Content Delivery Networks (CDNs) have been proposed as an effective means to facilitate this unification, distributing multimedia in an efficient manner that enhances end-users' Web experience by replicating or copying content to edge-of-network locations in proximity to the end-user. However, CDNs often face resource over-provisioning, performance degradation and Service Level Agreement (SLA) violations, thus incurring high operational costs, hardware under-utilisation and a limited scope and scale of their services. The emergence of Cloud computing as a commercial reality has created an opportunity whereby Internet Service Providers (ISPs) can leverage their Cloud resources to disseminate multimedia. However, Cloud resource provisioning techniques can still result in over-provisioning and under-utilisation. To move beyond these shortcomings, this thesis sets out to establish the basis for developing advanced and efficient techniques that enable the utilisation of Cloud-based resources in a highly scalable and cost-effective manner, reducing over-provisioning and under-utilisation while minimising latency and therefore maintaining the QoS/QoE expected by end-users for streaming multimedia.
2

The interpretation of tables in texts

Hurst, Matthew Francis January 2000 (has links)
This thesis looks at the issues relating to the development of technology capable of processing tables as they appear in textual documents, so that their contents may be accessed and further interpreted by standard information extraction and natural language processing systems. The thesis offers a formal model of the table, together with the description and evaluation of a system which produces instances of that model for example tables. There are three parts to the thesis. The first looks at tables in general terms, suggests where their complexities are to be found, and reviews the literature dealing with research into tables in other fields. The second part introduces a layered model of the table and provides some notational equipment for encoding tables in these component layers. The final part discusses the design, implementation and evaluation of a system which produces an instance of the model for the tables found in a document. It also discusses the design and collection of a corpus of tables used for the training and evaluation of the system. The thesis catalogues a large number of phenomena discovered in the corpus collected during the research and provides appropriate terminology.
3

Learning methodologies for information access and representation

Lodhi, Huma Mahmood January 2002 (has links)
No description available.
4

A new approach to securing passwords using a probabilistic neural network based on biometric keystroke dynamics

Shorrock, Steven Richard January 2003 (has links)
Passwords are a common means of identifying an individual user on a computer system. However, they are only as secure as the computer user is vigilant in keeping them confidential. This thesis presents new methods for strengthening password security by employing the biometric feature of keystroke dynamics. Keystroke dynamics refers to the unique rhythm generated when keys are pressed as a person types on a computer keyboard. The aim is to make the positive identification of a computer user more robust by analysing the way in which a password is typed and not just the content of what is typed. Two new methods for implementing a keystroke dynamics system utilising neural networks are presented. The probabilistic neural network is shown to perform well and to be more suited to the application than the traditional backpropagation method. An improvement of 6% in the false acceptance and false rejection errors is observed, along with a significant decrease in training time. A novel time-sequenced method using a cascade forward neural network is demonstrated. This is a totally new approach to the subject of keystroke dynamics and is shown to be a very promising method. The problems encountered in the acquisition of keystroke dynamics, which are often ignored in other research in this area, are explored, including timing considerations and keyboard handling. The features inherent in keystroke data are explored and a statistical technique for dealing with the problem of outlier data is implemented.
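As a rough illustration of the kind of timing features keystroke dynamics relies on (a hypothetical sketch, not code from the thesis; the event format and feature choice are assumptions), dwell and flight times can be derived from key press/release timestamps:

```python
# Hypothetical sketch: deriving keystroke-dynamics features from raw key events.
# Not taken from the thesis; the event format and feature names are assumptions.
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: str
    press_ms: float    # timestamp of key down
    release_ms: float  # timestamp of key up

def timing_features(events: list[KeyEvent]) -> list[float]:
    """Dwell time for each key plus flight time between consecutive keys."""
    features = []
    for i, e in enumerate(events):
        features.append(e.release_ms - e.press_ms)        # dwell time
        if i + 1 < len(events):
            nxt = events[i + 1]
            features.append(nxt.press_ms - e.release_ms)  # flight time
    return features

# Example: the password "cat" typed as three key events.
sample = [KeyEvent("c", 0, 95), KeyEvent("a", 140, 230), KeyEvent("t", 300, 380)]
print(timing_features(sample))  # [95, 45, 90, 70, 80]
```

A classifier such as the probabilistic neural network described above would then be trained on these feature vectors, collected over many typings of the same password.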
5

Design & optimisation of the flux switching motor and drive with genetic algorithms

Chai, Kao Siang January 2004 (has links)
The flux switching motor is a new class of reluctance machine that has demonstrated potential as a possible replacement for the brushed DC motor in many applications. However, the design and optimisation of the motor and its drive system are rather complicated, and little prior knowledge or guidance is available to aid the engineer in the design of the machine.

The development of flexible and versatile design optimisation software to facilitate the design and optimisation of the FS motor and drive is presented. The design optimisation software incorporates a genetic algorithm optimisation tool and a dynamic simulation model with third-party finite element analysis software.

The developed genetic algorithm optimisation program, integrated with finite element analysis software, provides the engineer with the necessary optimisation tools capable of interfacing with the FEA software. This has allowed many FSM lamination designs to be created without any requirement for user feedback once the program is initialised. In addition, the developed design tool can also be extended to other electromagnetic devices.

A dynamic simulation model of the FS motor drive system has been developed. The model can either be used as a standalone program or be integrated into the optimisation software. The dynamic simulation model consists of a simple time-stepping electrical equivalent circuit coupled with a switch control algorithm, a winding optimisation model and an iron loss model. When interfaced with the FEA software it supports rapid estimation of the motor's dynamic performance. The developed optimisation software has been used to design and optimise FS motors, and the results have demonstrated the potential of genetic algorithms in design optimisation of the machine.
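As a loose illustration of the optimisation loop described above (a hypothetical sketch: the real system drives third-party FEA software, which is stubbed out here, and the geometry encoding is an assumption):

```python
# Hypothetical sketch of a genetic-algorithm design loop with an external
# fitness evaluation, standing in for the FEA-coupled optimiser described
# above. The parameter encoding and toy objective are illustrative assumptions.
import random

N_PARAMS = 4          # e.g. simplified lamination geometry parameters
POP_SIZE, GENERATIONS, MUT_RATE = 20, 50, 0.1

def evaluate(design: list[float]) -> float:
    # Stand-in for a finite element analysis run; in the real system this
    # would invoke third-party FEA software on the candidate lamination.
    return -sum((x - 0.5) ** 2 for x in design)  # toy objective

def mutate(design):
    return [min(1.0, max(0.0, x + random.gauss(0, 0.05)))
            if random.random() < MUT_RATE else x for x in design]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)           # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scored = sorted(population, key=evaluate, reverse=True)
    parents = scored[: POP_SIZE // 2]             # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=evaluate)
print("best design:", [round(x, 3) for x in best])
```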
6

Knowledge based image processing

Sharman, David Buchanan January 1989 (has links)
No description available.
7

Digital processing of images using integer arithmetic transformations

Horne, D. A. January 1977 (has links)
This thesis describes an investigation into the uses of small computers in digital image processing, with specific reference to the two-dimensional Fourier transform. The motivation for this work stems from a natural curiosity in determining the effectiveness and limitations of minicomputers in an area where large storage and high speed requirements predominate, and more importantly in developing effective image transformations suitable for small machines with the use of future generations of high speed microprocessors in mind. Chapters one and two are introductory in nature. Chapter one serves as an introduction to the complete range of image processing activities. It is also hoped that this chapter provides a concise summary of image processing techniques of interest to the user of small computers, which is lacking elsewhere. The necessary mathematical tools required for an understanding of digital image processing using orthogonal transformations are developed in chapter two, partly in the historical context of analogue Fourier processing. The remaining chapters describe the development of an image processing system, using a typical minicomputer, based on the two-dimensional fast Fourier transform, and the application of this system to the processing of side scan sonar imagery. Of particular interest is the problem of measuring imaging errors resulting from processing, or of measuring observable differences between images; an approach to this problem utilising some knowledge of the way in which a human observer's visual system composes images, and using the two-dimensional Fourier transform, is described in chapter four. There is much scope for further research in this topic. The importance of repeatable reproduction of digitally processed images is frequently referred to, and consequently the practical apparatus and photographic methods used are also briefly described in the appendices.
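For readers unfamiliar with the core operation, a minimal sketch of two-dimensional Fourier-domain filtering follows. This is illustrative only: it uses a modern floating-point FFT, whereas the thesis investigates integer-arithmetic transforms suited to minicomputers.

```python
# Minimal sketch of two-dimensional Fourier-domain filtering with NumPy.
# Illustrative only: the thesis studies integer-arithmetic transforms for
# minicomputers, whereas this uses a modern floating-point FFT.
import numpy as np

def lowpass_filter(image: np.ndarray, cutoff: float) -> np.ndarray:
    """Zero out spatial frequencies above `cutoff` (fraction of Nyquist)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[-rows // 2 : rows - rows // 2, -cols // 2 : cols - cols // 2]
    radius = np.sqrt((y / (rows / 2)) ** 2 + (x / (cols / 2)) ** 2)
    spectrum[radius > cutoff] = 0                  # ideal low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

image = np.random.rand(64, 64)   # stand-in for a digitised sonar image
smoothed = lowpass_filter(image, 0.25)
print(smoothed.shape)            # (64, 64)
```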
8

Online predictions for spatio-temporal systems using time-varying RBF networks

Su, Jionglong January 2011 (has links)
In this work, we propose a unified framework called Kalman filter based Radial Basis Functions (KF-RBF) for online functional prediction based on the Radial Basis Functions and the Kalman Filter. The data are nonstationary spatio-temporal observations irregularly sampled in the spatial domain. We shall assume that a Functional Auto-Regressive (FAR) model is generating the system dynamics. Therefore, to account for the spatial variation, a Radial Basis Function (RBF) network is fitted to the spatial data at every time step. To capture the temporal variation, the regression surfaces are allowed to change with time. This is achieved by proposing a linear state space model for the RBF weight vectors to evolve temporally. With a fixed functional basis expressing all regressions, the FAR model can then be re-formulated as a Vector Auto-Regressive (VAR) model embedded in a Kalman Filter. Therefore functional predictions, normally taking place in the Hilbert space, can now be easily implemented on a computer. The advantages of our approach are as follows. First, it is computationally simple: using the KF, we can obtain the posterior and predictive distributions in closed form. This allows for quick implementation of the model, and provides for full probabilistic inference for the forecasts. Second, the model requires no restrictive assumptions such as stationarity, isotropy or separability of the space/time correlation functions. Third, the method applies to non-lattice data, in which the number and location of sensors can change over time. The framework proposed is further extended by generalizing the real-valued, scalar weights in the functional autoregressive model to operators in the Reproducing Kernel Hilbert Space (RKHS). This essentially implies that a larger, more intricate class of functions can be represented by this functional autoregressive approach. In other words, the unknown function is expressed as a sum of transformed functions mapped from the past functions in the RKHS. This bigger class of functions can potentially yield a better candidate that is "closer", in the norm sense, to the unknown function. In our research, the KF is used even though the system and observational noise covariances are both unknown. These uncertainties may significantly impact the filter performance, resulting in sub-optimality or divergence. A multiple-model strategy is proposed in view of this. It is motivated by the Interacting Multiple Model (IMM) algorithm, in which a collection of filters with different noise characteristics is run in parallel. This strategy avoids the problems associated with the estimation of the noise covariance matrices. Furthermore, it also allows future measurements to be predicted without the assumption of time stationarity of the disturbance terms.
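As a bare-bones sketch of the central idea (under simplifying assumptions; this is not the thesis implementation, and the random-walk weight dynamics, basis centres and noise covariances are all illustrative choices): the RBF weight vector is treated as the latent state of a linear Kalman filter, so each new batch of irregularly placed observations updates the regression surface in closed form.

```python
# Hypothetical sketch of the KF-RBF idea: RBF regression weights evolve as the
# latent state of a Kalman filter. Basis centres, random-walk dynamics and
# noise covariances are illustrative assumptions, not the thesis's model.
import numpy as np

centres = np.linspace(0.0, 1.0, 5)                 # fixed RBF centres (1-D space)

def design(locs: np.ndarray, width: float = 0.15) -> np.ndarray:
    """RBF design matrix at irregular sensor locations."""
    return np.exp(-((locs[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

k = len(centres)
w, P = np.zeros(k), np.eye(k)                      # weight mean and covariance
F, Q, r = np.eye(k), 0.01 * np.eye(k), 0.1         # dynamics and noise levels

rng = np.random.default_rng(0)
for t in range(50):
    locs = rng.uniform(0, 1, size=8)               # sensors move between steps
    y = np.sin(2 * np.pi * locs) * np.cos(0.1 * t) + 0.1 * rng.standard_normal(8)
    H = design(locs)
    w, P = F @ w, F @ P @ F.T + Q                  # predict step
    S = H @ P @ H.T + r * np.eye(len(locs))        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    w = w + K @ (y - H @ w)                        # update step
    P = (np.eye(k) - K @ H) @ P

print("final weight estimate:", np.round(w, 2))    # predicted surface: design(x) @ w
```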
9

Updating RDF in the semantic web

Azwari, Sana Al January 2016 (has links)
RDF is widely used in the Semantic Web for representing ontology data. Many real-world RDF collections are large and contain complex graph relationships that represent knowledge in a particular domain. Such large RDF collections evolve as a consequence of their representation of the changing world. Evolution in Semantic Web content produces difference files (deltas) that track changes between ontology versions. These changes may represent ontology modifications or simply changes in application data. An ontology is typically expressed in a combination of the OWL, RDFS and RDF knowledge representation languages. A data repository that represents an ontology may be large and may be duplicated over the Internet, often in the form of a relational data store. Although this data may be distributed over the Internet, it needs to be managed and updated in the face of such evolutionary changes. In view of the size of typical collections, it is important to derive efficient ways of propagating updates to distributed datastores. The deltas can be used to reduce the storage and bandwidth overhead involved in disseminating ontology updates. Minimising the delta size can be achieved by reasoning over the underlying knowledge base. OWL 2 is a development of the OWL 1 standard that incorporates new features to aid application construction. Among the sublanguages of OWL 2, OWL 2 RL/RDF provides an enriched rule set that extends the semantic capability of the OWL environment. This additional semantic content can be exploited in change detection approaches that strive to minimise the alterations to be made when ontologies are updated. The presence of blank nodes (i.e. nodes that are neither a URI nor a literal) in RDF collections provides a further challenge to ontology change detection. This is a consequence of the practical problems they introduce when comparing data structures before and after an update. The contribution of this thesis is a detailed analysis of the performance of RDF change detection techniques. In addition, the work proposes a new approach to maintaining the consistency of RDF by using knowledge embedded in the structure to generate efficient update transactions. The evaluation of this approach indicates that it reduces the overall update size, at the cost of increasing the processing time needed to generate the transactions. In the light of OWL 2 RL/RDF, this thesis examines the potential for reducing the delta size by pruning the application of unnecessary rules from the reasoning process and using an approach to delta generation that produces a small number of updates. It also assesses the impact of alternative approaches to handling blank nodes during the change detection process in ontology structures. The results indicate that pruning the rule set is a potentially expensive process but has the benefit of reducing the joins over relational data stores when carrying out the subsequent inferencing.
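A minimal sketch of triple-level delta computation between two ontology versions (using rdflib for illustration; the file names are hypothetical, and the thesis's change-detection techniques go well beyond this naive set difference, e.g. reasoning over OWL 2 RL/RDF rules to prune inferable triples and handling blank nodes):

```python
# Minimal illustrative sketch of a naive triple-level RDF delta using rdflib.
# The thesis's techniques go beyond this: they exploit OWL 2 RL/RDF reasoning
# to shrink the delta and deal with blank-node matching, which plain set
# difference over triples does not. The input file names are hypothetical.
from rdflib import Graph

old_g, new_g = Graph(), Graph()
old_g.parse("ontology_v1.ttl", format="turtle")
new_g.parse("ontology_v2.ttl", format="turtle")

old_triples, new_triples = set(old_g), set(new_g)
deletions = old_triples - new_triples      # triples to remove downstream
additions = new_triples - old_triples      # triples to insert downstream

print(f"delta: {len(deletions)} deletions, {len(additions)} additions")
# A distributed store could then apply the delta as one update transaction,
# e.g. via SPARQL: DELETE DATA { ... } ; INSERT DATA { ... }
```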
10

Studies in source identification and video authentication for multimedia forensics

Al-Athamneh, Mohammad Hmoud January 2017 (has links)
Nowadays, powerful and easy-to-use editing software available to almost everyone allows forgers to create convincing digital forgeries. As multimedia applications that require a certain level of trust in the integrity and authenticity of the data become more common, there is an increasing need to restore some of the lost trustworthiness of digital media. In multimedia forensics, Digital Signature and Digital Watermarking have long been commonly used in video authentication, but these methods have proven to have shortcomings. The main drawback of these techniques is that information must generally be inserted at the time of video capture or before video broadcasting. Both techniques require two stages, one at the sender side and one at the receiver side, which in some real-world applications is not feasible. For the problem of source type identification, digital fingerprints are usually extracted and then compared with a dataset of possible fingerprints to determine the acquisition device. Photo-Response Non-Uniformity (PRNU), which is caused by the different sensitivity of pixels to light, has proven to be a distinctive link between the camera and its images/videos. With this in mind, this thesis proposes several new digital forensic techniques to detect evidence of manipulation in digital video content based on blind techniques (Chapter 4 and Chapter 5), where there is no need for pre-embedded watermarks or a pre-generated digital signature. These methods showed potential to be reliable techniques for digital video authentication based on local video information. For the problem of determining the source of digital evidence, this thesis proposes a G-PRNU method (in Chapter 3) that improves on the accuracy of the PRNU method for digital video source type identification and is less computationally expensive. Each proposed method was tested on a dataset of videos and detailed experimental results are presented.
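A rough sketch of the basic PRNU pipeline mentioned above (fingerprint estimated from averaged denoising residuals, matched by normalised correlation); this is a simplified illustration, not the G-PRNU method proposed in the thesis, and the Gaussian denoiser is an assumed stand-in:

```python
# Simplified illustration of the basic PRNU pipeline: estimate a sensor
# fingerprint by averaging denoising residuals over many frames, then match a
# query frame by normalised correlation. Not the thesis's G-PRNU method; the
# Gaussian denoiser and toy data are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(frame: np.ndarray) -> np.ndarray:
    """Noise residual = frame minus its denoised version."""
    return frame - gaussian_filter(frame, sigma=1.0)

def fingerprint(frames: list) -> np.ndarray:
    """Average residuals over frames from one camera to suppress scene content."""
    return np.mean([residual(f) for f in frames], axis=0)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
sensor_pattern = 0.02 * rng.standard_normal((32, 32))   # toy camera PRNU
frames = [rng.random((32, 32)) + sensor_pattern for _ in range(40)]
query = rng.random((32, 32)) + sensor_pattern

score = ncc(fingerprint(frames), residual(query))
print(f"correlation with camera fingerprint: {score:.3f}")  # high if same sensor
```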
