  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Multi-resolution time-domain modelling technique and its applications in electromagnetic band gap enhanced antennas

Wang, Xiaojing January 2010 (has links)
Newly emerged Electromagnetic Band Gap (EBG) structures possess multiple frequency bands that prohibit wave propagation; such stop bands are essentially determined by the periodicity of the structure. These desirable features make the EBG hybrid antenna an interesting research topic. Traditional full-wave techniques lack the efficiency to cope fully with the complexity of these hybrid structures, since the periodic elements are often much smaller than the accompanying antenna components. The Haar-wavelet-based Multi-Resolution Time-Domain (MRTD) technique provides improved numerical resolution over the conventional Finite-Difference Time-Domain (FDTD) method, as well as simplicity of formulation. One-dimensional, two-dimensional and three-dimensional level-one codes are developed to support the numerical modelling of the hybrid EBG antennas. An explicit form of Perfectly Matched Layer (PML) configuration is proposed and proved; as a generic approach, its extensions suit every level of Haar wavelet functions. A source expansion scheme is then proposed. The concept of a multi-band multi-layer EBG hybrid antenna is presented. The theoretical prediction of antenna resonances is achieved through an effective medium model, and has been verified via numerical simulations and measurements. The 3D MRTD code is subsequently applied to simulate such a structure. In addition, EBG-enhanced circularly polarized photonic patch antennas have been studied. It is demonstrated that split-ring resonators (SRRs) and similar elements in EBG antennas can lead to antenna gain enhancement, backward radiation reduction and harmonic suppression. Furthermore, a circularly polarized two-by-two antenna array with spiral EBG elements is presented. The spiral element with a ground via is more compact than the traditional mushroom structure and proves very efficient in blocking unwanted surface waves; hence it significantly reduces the mutual coupling of the array antenna.
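The conventional FDTD baseline named in the abstract can be illustrated with a minimal one-dimensional leapfrog update (the MRTD scheme replaces these point samples with Haar wavelet expansions, which is not reproduced here). Grid size, time-step count and the Gaussian source below are made-up illustrative values, in free space with normalised units:

```python
# Illustrative 1-D FDTD leapfrog update; hypothetical sizes, free space.
import math

NZ, NT = 200, 300          # grid cells, time steps (illustrative)
S = 0.5                    # Courant number, S = c*dt/dz <= 1 for stability

ez = [0.0] * NZ            # electric field samples
hy = [0.0] * NZ            # magnetic field samples (staggered half-cell)

for n in range(NT):
    # H update: hy sits half a cell and half a time step away from ez
    for k in range(NZ - 1):
        hy[k] += S * (ez[k + 1] - ez[k])
    # E update
    for k in range(1, NZ):
        ez[k] += S * (hy[k] - hy[k - 1])
    # soft Gaussian pulse injected at the grid centre
    ez[NZ // 2] += math.exp(-((n - 30) ** 2) / 100.0)
```

The stability condition S ≤ 1 is what MRTD relaxes per resolution level; the PML termination discussed in the thesis would replace the hard grid boundaries above.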
262

Proving termination using abstract interpretation

Chawdhary, Aziem A. January 2010 (has links)
One way to develop more robust software is to use formal program verification, which requires the construction of a formal mathematical proof of the program's correctness. In the past ten years or so there has been much progress in the use of automated tools to formally prove properties of programs. However, many such tools focus on proving safety properties: that something bad does not happen. Liveness properties, where we try to prove that something good will happen, have received much less attention. Program termination is an example of a liveness property. It has long been known that to prove program termination we need to discover some function which maps program states to a well-founded set; essentially, we need to find one global argument for why the program terminates. Finding such an argument that overapproximates the entire program is very difficult. Recently, Podelski and Rybalchenko discovered a more compositional proof rule based on disjunctive termination arguments. A disjunctive termination argument is a series of termination arguments that individually may cover only part of the program but, put together, give a reason why the entire program terminates. Thus we do not need to search for one overall reason for termination; we can break the problem down and focus on smaller parts of the program. This thesis develops a series of abstract interpreters for proving the termination of imperative programs. We make three contributions, each of which makes use of the Podelski-Rybalchenko result. First, we present a technique to re-use domains and operators from abstract interpreters for safety properties to produce termination analysers. This technique produces some very fast termination analysers, but is limited by the underlying safety domain used. We then take the natural step forward: we design an abstract domain for termination.
This abstract domain is built from ranking functions: in essence, the abstract domain keeps track of only the information necessary to prove program termination. However, it is limited to proving termination for languages with iteration. In order to handle recursion we use metric spaces to design an abstract domain which can handle recursion over the unit type. We define a framework for designing abstract interpreters for liveness properties such as termination. The use of metric spaces allows us to model the semantics of infinite computations for programs with recursion over the unit type, so that we can design an abstract interpreter in a systematic manner. We have to ensure that the abstract interpreter is well behaved with respect to the metric space semantics, and our framework gives a way to do this.
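The ranking-function idea can be sketched concretely: a loop terminates if some function of its state maps into a well-founded set (here, the non-negative integers) and strictly decreases on every iteration. The loop, the candidate ranking function and the dynamic check below are hypothetical illustrations, not the static abstract interpretation the thesis constructs:

```python
# Sketch of the ranking-function argument for a hypothetical loop.

def loop_body(x, y):
    """One iteration of: while x > 0: x, y = x - 1, y + x"""
    return x - 1, y + x

def rank(x, y):
    """Candidate ranking function r(x, y) = x, mapping states to naturals."""
    return x

def check_termination(x, y, guard=lambda x, y: x > 0):
    """Check non-negativity and strict decrease of `rank` along one run."""
    while guard(x, y):
        r_before = rank(x, y)
        x, y = loop_body(x, y)
        assert r_before >= 0 and rank(x, y) < r_before, "not a ranking function"
    return True
```

A disjunctive termination argument, in the Podelski-Rybalchenko sense, would allow several such `rank` functions, each only required to decrease on part of the program's transitions.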
263

Multi modal multi-semantic image retrieval

Kesorn, Kraisak January 2010 (has links)
The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users' ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in order to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared and supports multiple semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to 'unannotated' images. Local feature analysis of visual content, namely using Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the 'Bag of Visual Words' (BVW) model as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based upon an unstructured visual-word model and a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content than a vector space model, through exploiting local conceptual structures and their relationships.
The key contributions of this framework in using local features for image representation are: first, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes the term weight and spatial locations of keypoints into account, so that semantic information is preserved. Second, a technique to detect domain-specific 'non-informative visual words' which are ineffective at representing the content of visual data and degrade its categorisation. Third, a method to combine an ontology model with a visual-word model to resolve synonym (visual heterogeneity) and polysemy problems. The experimental results show that this approach can discover semantically meaningful visual content descriptions and efficiently recognise specific events, e.g. sports events, depicted in images. Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhance visual content interpretation is to use any textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation, e.g. using XML, RDF, OWL or MPEG-7. Although text and image are distinct types of information representation and modality, there are some strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first exploited to extract concepts from image captions; next, an ontology-based knowledge model is deployed to resolve natural language ambiguities. To deal with the accompanying text, two methods to extract knowledge from textual information have been proposed.
First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of LSI in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) of metadata. The ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage these to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisation.
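The Bag of Visual Words quantisation step described above can be sketched as follows. Two-dimensional points stand in for 128-dimensional SIFT descriptors, and both the codebook and the descriptors are invented toy data; the thesis's SLAC clustering and ontology layer are not reproduced:

```python
# Toy sketch: quantise local descriptors against a codebook of 'visual
# words', so an image becomes a histogram of word counts.

def nearest_word(desc, codebook):
    """Index of the codebook centre closest to a descriptor (Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(desc, codebook[i])))

def bag_of_words(descriptors, codebook):
    """Histogram of visual-word occurrences for one image."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]          # 3 'visual words'
image_descs = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9), (0.1, 0.2)]
hist = bag_of_words(image_descs, codebook)               # -> [2, 2, 0]
```

The 'non-informative visual word' detection in the thesis would correspond to pruning histogram bins that carry no discriminative weight across categories.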
264

Scalable image retrieval based on hand drawn sketches and their semantic information

Bozas, Konstantinos January 2014 (has links)
The research presented in this thesis aims to extend the capabilities of traditional content-based image retrieval systems towards more expressive and scalable interactions. The study focuses on machine sketch understanding and its applications, in particular sketch-based image retrieval (SBIR), a form of image search where the query is a user-drawn picture (sketch), and freehand sketch recognition. SBIR provides a platform for the user to express image search queries that would otherwise be difficult to describe with text. The research builds upon two main axes: extension of the state of the art, and scalability. Three novel approaches for sketch recognition and retrieval are presented. Notably, a patch hashing algorithm for scalable SBIR is introduced, along with a manifold learning technique for sketch recognition and a horizontal flip-invariant sketch matching method to further enhance recognition accuracy. The patch hashing algorithm extracts several overlapping patches of an image. Similarities between a hand-drawn sketch and the images in a database are ranked through a voting process where patches with similar shape and structure configuration arbitrate for the result. Patch similarity is efficiently estimated with a hashing algorithm. A spatially aware index structure built on the hashing keys ensures the scalability of the scheme and allows for real-time re-ranking upon query updates. Sketch recognition is achieved through a discriminant manifold learning method named Discriminant Pairwise Local Embeddings (DPLE). DPLE is a supervised dimensionality reduction technique that generates structure-preserving discriminant subspaces. This objective is achieved through a convex optimisation formulation in which Euclidean distances between data pairs that belong to the same class are minimised, while those of pairs belonging to different classes are maximised.
A scalable one-to-one sketch matching technique invariant to horizontal mirror reflections further improves recognition accuracy without high computational cost. The matching is based on structured feature correspondences and produces a dissimilarity score between two sketches. Extensive experimental evaluation of our methods demonstrates the improvements over the state of the art in SBIR and sketch recognition.
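The patch-voting idea can be sketched in miniature: overlapping binary patches of a query sketch are hashed, and each database image accumulates one vote per patch hash it shares with the query. Patch size, the toy 3x3 'images' and the use of Python's built-in `hash` are all illustrative; the spatially aware index of the thesis is not reproduced:

```python
# Hedged sketch of patch hashing with voting for sketch retrieval.

def patches(grid, size=2):
    """All overlapping size x size patches of a binary grid, as tuples."""
    h, w = len(grid), len(grid[0])
    return [tuple(grid[r + i][c + j] for i in range(size) for j in range(size))
            for r in range(h - size + 1) for c in range(w - size + 1)]

def rank_by_votes(query, database, size=2):
    """Rank database images by the number of patch hashes shared with the query."""
    q = set(hash(p) for p in patches(query, size))
    votes = {name: len(q & set(hash(p) for p in patches(img, size)))
             for name, img in database.items()}
    return sorted(votes, key=votes.get, reverse=True)

square = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]
db = {"same":  [[0, 1, 1], [0, 1, 1], [0, 0, 0]],
      "blank": [[0, 0, 0], [0, 0, 0], [0, 0, 0]]}
ranking = rank_by_votes(square, db)   # identical drawing ranks first
```

Hashing makes each patch lookup constant-time, which is what lets the scheme scale to large image databases.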
265

A credit-based approach to scalable video transmission over a peer-to-peer social network

Asioli, Stefano January 2013 (has links)
The objective of the research work presented in this thesis is to study scalable video transmission over peer-to-peer networks. In particular, we analyse how a credit-based approach and the exploitation of social networking features can play a significant role in the design of such systems. Peer-to-peer systems are nowadays a valid alternative to the traditional client-server architecture for the distribution of multimedia content, as they transfer the workload from the service provider to the final user, with a consequent reduction of management costs for the former. On the other hand, scalable video coding helps in dealing with network heterogeneity, since the content can be tailored to the characteristics or resources of the peers. First of all, we present a study that evaluates the subjective video quality perceived by the final user under different transmission scenarios. We also propose a video chunk selection algorithm that maximises received video quality under different network conditions. Furthermore, challenges in building reliable peer-to-peer systems for multimedia streaming include the optimisation of resource allocation and the design of mechanisms, based on rewards and punishments, that give users incentives to share their own resources. Our solution relies on a credit-based architecture, where peers do not interact with users that have proven to be malicious in the past. Finally, if peers are allowed to build a social network of trusted users, they can share the local information they have about the network and gain a more complete understanding of the type of users they are interacting with. Therefore, in addition to a local credit, a social credit or social reputation is introduced. This thesis concludes with an overview of future developments of this research work.
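The credit-based interaction rule can be sketched as a per-peer ledger: completed chunk uploads earn credit, failures lose it, and peers whose credit falls below a threshold are refused. The reward values and threshold are hypothetical, and the social-credit exchange between trusted peers is not modelled:

```python
# Minimal sketch of local credit tracking in a P2P streaming peer.

class CreditLedger:
    def __init__(self, threshold=-2):
        self.credit = {}            # peer id -> local credit (hypothetical scale)
        self.threshold = threshold

    def record(self, peer, success):
        """+1 credit for a served chunk, -1 for a failed or withheld one."""
        self.credit[peer] = self.credit.get(peer, 0) + (1 if success else -1)

    def allows(self, peer):
        """Interact only with peers not proven malicious in the past."""
        return self.credit.get(peer, 0) >= self.threshold

ledger = CreditLedger()
for _ in range(3):
    ledger.record("freerider", success=False)   # repeatedly fails to serve
ledger.record("helper", success=True)
```

Unknown peers start at zero credit, so newcomers are admitted; the social reputation described above would let a peer seed these scores from its trusted neighbours' ledgers.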
266

Perceptual mixing for musical production

Terrell, Michael John January 2012 (has links)
A general model of music mixing is developed, which enables a mix to be evaluated as a set of acoustic signals. A second model describes the mixing process as an optimisation problem, in which the errors are evaluated by comparing sound features of a mix with those of a reference mix, and the parameters are the controls on the mixing console. Initial focus is placed on live mixing, where the practical issues of live acoustic sources, multiple listeners and acoustic feedback increase the technical burden on the mixing engineer. Using the two models, a system is demonstrated that takes reference mixes as input and automatically sets the controls on the mixing console to recreate their objective, acoustic sound features for all listeners, taking into account the practical issues outlined above. This reduces the complexity of mixing live music to that of recorded music, and unifies future mixing research. Sound features evaluated from audio signals are shown to be unsuitable for describing a mix, because they do not incorporate the effects of listening conditions, or masking interactions between sounds. Psychophysical test methods are employed to develop a new perceptual sound feature, termed the loudness balance, which is the first loudness feature to be validated for musical sounds. A novel perceptual mixing system is designed, which allows users to directly control the loudness balance of the sounds they are mixing, for both live and recorded music, and which can be extended to incorporate other perceptual features. The perceptual mixer is also employed as an analytical tool to allow direct measurement of mixing best practice and to provide fully automatic mixing functionality, and is shown to be an improvement over current heuristic models. Based on the conclusions of the work, a framework for future automatic mixing is provided, centred on perceptual sound features that are validated using psychophysical methods.
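The idea of mixing by controlling a feature rather than raw fader gains can be illustrated with a deliberately crude stand-in: the thesis's loudness balance is a psychophysically validated perceptual feature, whereas the sketch below uses a plain RMS level difference in dB, with made-up sine-wave 'stems', purely to show the shape of a balance feature:

```python
# Crude stand-in for a balance feature between two mix stems.
import math

def rms_db(samples):
    """RMS level of a signal in dB (assumes non-silent input)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms)

def balance_db(stem_a, stem_b):
    """Positive result: stem_a sits above stem_b (in this crude RMS sense)."""
    return rms_db(stem_a) - rms_db(stem_b)

vocal = [0.5 * math.sin(0.1 * n) for n in range(1000)]
guitar = [0.25 * math.sin(0.13 * n) for n in range(1000)]
bal = balance_db(vocal, guitar)   # roughly +6 dB: vocal above guitar
```

A perceptual mixer in the spirit of the thesis would invert such a feature: given a target balance, solve for the console gains that achieve it, with masking and listening conditions folded into the feature itself.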
267

Topology and congestion invariant in global internet-scale networks

Huang, Zhijia January 2010 (has links)
Infrastructures like telecommunication systems, power transmission grids and the Internet are complex networks that are vulnerable to catastrophic failure. A common mechanism behind this kind of failure is an avalanche-like breakdown of the network's components. If a component fails due to overload, its load is redistributed, causing other components to overload and fail; this failure can propagate throughout the entire network. From studies of catastrophic failures in different technological networks, the consensus is that the occurrence of a catastrophe is due to the interaction between the connectivity and the dynamical behaviour of the networks' elements. The research in this thesis focuses particularly on packet-oriented networks. In these networks the traffic (dynamics) and the topology (connectivity) are coupled by the routing mechanisms. The interactions between a network's topology and its traffic are complex, as they depend on many parameters, e.g. Quality of Service, congestion management (queuing), link bandwidth, link delay, and types of traffic. It is not straightforward to predict whether a network will fail catastrophically or not. Furthermore, even for a very simplified version of packet networks, there are still fundamental questions about catastrophic behaviour that have not been studied, such as: will a network become unstable and fail catastrophically as its size increases; do catastrophic networks have specific connectivity properties? One of the main difficulties when studying these questions is that, in general, we do not know in advance whether a network is going to fail catastrophically. In this thesis we study how to build catastrophic networks. The motivation behind the research is that once we have constructed networks that will fail catastrophically, we can study their behaviour before the catastrophe occurs, for example the dynamical behaviour of the nodes before an imminent catastrophe.
Our theoretical and algorithmic approach is based on the observation that for many simple networks there is a topology-traffic invariant for the onset of congestion. We have extended this approach to consider cascading congestion, and have developed two methods to construct catastrophes. The main results in this thesis are that there is a family of catastrophic networks that have a scale invariant; hence at the break point it is possible to predict the behaviour of large networks by studying a much smaller network. The results also suggest that if the traffic on a network increases exponentially, then there is a maximum size that a network can have; beyond that, the network will always fail catastrophically. To verify whether catastrophic networks built using our algorithmic approach can reflect real situations, we evaluated the performance of a small catastrophic network. By building the scenario in the open-source network simulation software OMNeT++, we were able to simulate a router network using the Open Shortest Path First routing protocol and carrying User Datagram Protocol traffic. Our results show that this kind of network can collapse as a cascade of failures. Furthermore, the recent failure of Google Mail routers [1] confirms that this kind of catastrophic failure does occur in real situations.
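The avalanche mechanism described above can be sketched on a toy graph: each node carries a load and a capacity, and when a node fails its load is redistributed evenly to surviving neighbours, which may overload in turn. Topology, loads and the even-redistribution rule are illustrative; the thesis's OMNeT++ router scenario is not reproduced:

```python
# Toy cascade of overload failures on an undirected graph.

def cascade(neighbours, load, capacity, start):
    """Return the set of failed nodes after triggering `start`."""
    failed = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        alive = [n for n in neighbours[node] if n not in failed]
        for n in alive:
            load[n] += load[node] / len(alive)   # even redistribution
        load[node] = 0.0
        for n in alive:
            if load[n] > capacity[n]:            # newly overloaded -> fails
                failed.add(n)
                frontier.append(n)
    return failed

# a small ring where every node already runs close to capacity
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
load = {n: 0.9 for n in neighbours}
capacity = {n: 1.0 for n in neighbours}
failed = cascade(neighbours, load, capacity, start=0)   # whole ring collapses
```

With every node at 90% of capacity, a single triggered failure propagates around the entire ring, which is the qualitative behaviour the thesis's construction methods aim to engineer and then study.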
268

Radio resource allocation in relay based OFDMA cellular networks

Xiao, Lin January 2010 (has links)
Adding relay stations (RSs) between the base station (BS) and the mobile stations (MSs) in a cellular system can extend network coverage, overcome multi-path fading and increase the capacity of the system. This thesis considers radio resource allocation schemes in relay-based cellular networks to ensure high-speed and reliable communication. The goal of this research is to investigate user fairness, system throughput and power consumption in wireless relay networks by considering how best to manage the radio resource. This thesis proposes a two-hop proportional fairness (THPF) scheduling scheme for fair allocation, which considers both the first time sub-slot, between direct-link users and relay stations, and the second time sub-slot, among relay-link users. A load-based relay selection algorithm is also proposed for fair resource allocation: the transmission mode (direct or relayed) of each user is adjusted based on the load of the transmission node. Power allocation is very important for resource efficiency and system performance, and this thesis proposes a two-hop power allocation algorithm for energy efficiency, which adjusts the transmission power of the BS and RSs so that the data rates on the two hop links of an RS match each other. The power allocation problem for multiple cells with inter-cell interference is also studied. A new multi-cell power allocation scheme is derived from non-cooperative game theory; it coordinates the inter-cell interference and operates in a distributed manner, and its utility function can be designed for throughput improvement or for user fairness. Finally, the proposed algorithms are combined and the overall system performance is evaluated. The joint radio resource allocation algorithm achieves a very good trade-off between throughput and user fairness, and can also significantly improve energy efficiency.
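The proportional-fairness rule underlying a THPF-style scheduler can be sketched as follows: on each scheduling opportunity, serve the user with the highest ratio of instantaneous achievable rate to long-term average rate, then update the moving average. The rates and the smoothing factor are made-up numbers, and the two-hop sub-slot split of the thesis is not modelled:

```python
# Sketch of single-hop proportional-fair user selection.

def pf_pick(instant_rate, average_rate):
    """User maximising instantaneous/average rate (proportional fairness)."""
    return max(instant_rate, key=lambda u: instant_rate[u] / average_rate[u])

def schedule(instant_rate, average_rate, alpha=0.1):
    """Pick a user, then update every user's moving-average served rate."""
    u = pf_pick(instant_rate, average_rate)
    for v in average_rate:
        served = instant_rate[v] if v == u else 0.0
        average_rate[v] = (1 - alpha) * average_rate[v] + alpha * served
    return u

inst = {"edge_user": 2.0, "cell_centre": 10.0}   # Mbit/s, illustrative
avg = {"edge_user": 1.0, "cell_centre": 20.0}
winner = schedule(inst, avg)   # edge user wins: ratio 2/1 beats 10/20
```

The ratio metric is what yields fairness: a user starved for a long time has a low average rate, so even a modest instantaneous rate eventually wins it the slot.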
269

Simplifying large-scale communication networks with weights and cycles

Liu, Ling January 2010 (has links)
A communication network is a complex network designed to transfer information from a source to a destination. One of the most important properties of a communication network is the existence of alternative routes between a source and a destination node; the robustness and resilience of a network are related to its path diversity (alternative routes). Describing all the components and interactions of a large communication network is not feasible. In this thesis we develop a new method, the deforestation algorithm, to simplify very large networks; we call the simplified network the skeleton network. The method is general: it conserves the number of alternative paths between all sources and destinations during the simplification, and it takes into consideration the properties of the nodes and the links (capacity and direction). When simplifying very large networks, the skeleton networks can themselves be large, so it is desirable to split the skeleton network into different communities. In this thesis we introduce a community-detection method which is fast and efficient for skeleton networks. Another property that can easily be extracted from the skeleton network is the cycle basis, which can suffice to describe the cycle structure of a complex network. We have tested our algorithms on the Autonomous System (AS) level and the Internet Protocol address (IPA) level of the Internet. We also show that the deforestation algorithm can be extended to take traffic directions and the traffic demand matrix into consideration when simplifying medium-scale networks. Commonly, the structure of large complex networks is characterised using statistical measures. These measures can give a good description of the network connectivity but do not provide a practical way to explore the interaction between dynamical processes and network connectivity. The methods presented in this thesis are a first step towards addressing this practical problem.
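The intuition behind 'deforestation' can be sketched in its simplest unweighted form: degree-one nodes sit on tree branches and contribute no alternative routes, so repeatedly stripping them leaves a skeleton containing the network's cycles. This toy sketch ignores the node and link properties (capacity, direction) that the thesis's algorithm takes into account:

```python
# Toy tree-pruning pass: strip degree-1 nodes from an undirected edge list.

def skeleton(edges):
    """Iteratively remove leaf nodes; return the remaining (sorted) edges."""
    edges = set(map(frozenset, edges))
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        leaves = {v for v, d in degree.items() if d == 1}
        if not leaves:
            return sorted(tuple(sorted(e)) for e in edges)
        edges = {e for e in edges if not e & leaves}

# a triangle (the cycle) with a dangling two-edge branch
core = skeleton([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)])   # triangle only
```

Between any two surviving nodes, the number of alternative paths is unchanged by the pruning, since a leaf can never lie on a path between other nodes; this is the invariant the full deforestation algorithm generalises to weighted, directed links.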
270

Strategies for image visualisation and browsing

Janjusevic, Tijana January 2010 (has links)
The exploration of large information spaces has remained a challenging task even though database management systems and state-of-the-art retrieval algorithms are becoming pervasive. Significant research attention in the multimedia domain is focused on finding automatic algorithms for organising digital image collections into meaningful structures and providing high-semantic image indices. On the other hand, the utilisation of graphical and interactive methods from the information visualisation domain provides a promising direction for creating efficient user-oriented systems for image management. Methods such as exploratory browsing and query, as well as intuitive visual overviews of an image collection, can assist users in finding patterns and developing an understanding of structures and content in complex image data-sets. The focus of the thesis is combining the features of automatic data processing algorithms with information visualisation. The first part of this thesis focuses on the layout method for displaying a collection of images indexed by low-level visual descriptors. The proposed solution generates a graphical overview of the data-set as a combination of similarity-based visualisation and a random layout approach. The second part of the thesis deals with the problem of visualisation and exploration for hierarchical organisations of images. Due to the absence of semantic information, the images themselves are considered the only source of high-level information. Content preview and display of the hierarchical structure are combined in order to support image retrieval. In addition, novel exploration and navigation methods are proposed to enable the user to find their way through the database structure and retrieve the content. On the other hand, semantic information is available in cases where automatic or semi-automatic image classifiers are employed. The automatic annotation of image items provides what is referred to as higher-level information.
This type of information is the cornerstone of the multi-concept visualisation framework developed in the third part of this thesis. This solution enables the dynamic generation of user queries by combining semantic concepts, supported by content overview and information filtering. Comparative analysis and user tests, performed to evaluate the proposed solutions, focus on the ways information visualisation affects image content exploration and retrieval: how efficient and comfortable users are with different interaction methods, and how users seek information through different types of database organisation.
