About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
611

Resource allocation for heterogeneous radio-frequency and visible-light networks

Jin, Fan January 2015 (has links)
In recent years, mobile data traffic demands have increased exponentially, and conventional cellular systems can no longer support them. A potential solution for meeting such demands is Heterogeneous Network (HetNet) techniques. A HetNet may integrate diverse radio access technologies (RATs) such as UMTS Terrestrial Radio Access Networks (UTRAN), GSM/EDGE Radio Access Networks (GERAN), Wireless Local Area Networks (WLAN) and possibly Visible Light Communication (VLC) networks. The improved channel gain of HetNet techniques is achieved by employing small cells and thereby reducing the transmission distance. However, the deployment of HetNet techniques also imposes several technical challenges, for example interference management, handovers, resource management and the modelling of HetNets. A HetNet relies on multiple types of access node in a wireless network. These access nodes can use either the same technology or different technologies. When the access nodes employ the same technology and use the same frequency band, a major problem is the Co-Channel Interference (CCI) between them. We first investigate a Radio-Frequency (RF) based HetNet in Chapter 3, constituted by macrocells and femtocells. More explicitly, the impact of femtocells on traditional macrocells is studied when the macrocells rely on Fractional Frequency Reuse (FFR). The design, performance analysis and optimization of this FFR-aided two-tier HetNet are investigated. We found that the advantage of FFR erodes in dense femtocell scenarios and that the optimized network tends to become a Unity Frequency Reuse (UFR) aided system. In order to mitigate the cross-tier interference, we propose a static spectrum allocation scheme, namely Swapping Spectrum Access (SSA).
Both the Outage Probability (OP) of femtocell Mobile Terminals (MTs) in the cell-centre region and that of the macrocell MTs in the cell-edge region are reduced by the proposed SSA. The optimized network using our SSA is more robust to the detrimental impact of femtocells. Another constitution of a HetNet may rely on integrating different wireless communication technologies. We focus our attention on a HetNet composed of an RF femtocell and a VLC network in Chapters 4 and 5. An important component of this architecture is its Resource Management (RM). We investigate the Resource Allocation (RA) problems under diverse Quality of Service (QoS) requirements in terms of data rate, fairness and statistical delay. Two types of MTs, multi-homing MTs and multi-mode MTs, are considered: multi-homing MTs are capable of aggregating resources from different networks, while multi-mode MTs always select a single network for their connection. We propose a sub-optimal decentralized method for solving the RA problems of both multi-homing and multi-mode MTs. The simulation results confirm that the conceived method is capable of satisfying the QoS requirements. Furthermore, we employ more sophisticated transmission strategies for the VLC network and study their performance in Chapter 5. Again, the RA problems of the HetNet relying on these different transmission strategies are investigated.
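The multi-homing versus multi-mode distinction can be illustrated in miniature (a toy sketch with hypothetical rate numbers, not taken from the thesis): a multi-homing MT aggregates the rate granted by each network, whereas a multi-mode MT obtains only the rate of the single network it selects.

```python
def served_rate(rate_rf, rate_vlc, multi_homing):
    """Rate (Mb/s) seen by one mobile terminal, given the rate each network
    would grant it. A multi-homing MT aggregates both grants; a multi-mode
    MT connects only to the single better network."""
    if multi_homing:
        return rate_rf + rate_vlc
    return max(rate_rf, rate_vlc)

# An MT granted 10 Mb/s by the RF femtocell and 40 Mb/s by the VLC network:
print(served_rate(10, 40, multi_homing=True))   # 50: resources aggregated
print(served_rate(10, 40, multi_homing=False))  # 40: best single network
```

This also hints at why the RA problems differ: the multi-mode case introduces a discrete network-selection variable, which is what makes a decentralized sub-optimal method attractive.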
612

Towards unconstrained ear recognition

Bustard, John January 2011 (has links)
Humans can recognise individuals in many different situations. Automated vision-based biometric systems, which identify individuals from an image of a particular physical feature, aspire to a similar level of performance but currently have to impose constraints to achieve satisfactory recognition rates. These include limitations on the background of the image in which a feature is located, the lighting on the feature, its degree of occlusion, its viewing angle, and the properties of the camera that captures it. The computational cost of any recognition system is also an issue. This thesis examines ways of reducing such constraints. Its particular focus is the recognition of individuals from the unique signature provided by their ears. Specifically, the work develops techniques to support the hypothesis that the constraints on the use of ear-based biometric systems can be relaxed significantly through the introduction of robust recognition techniques. Two novel techniques designed to improve robustness are described: (i) a fully automated 2D recognition system to reduce sensitivity to noise and occlusion; and (ii) the use of a 3D model to allow for variations in both pose and lighting. The thesis begins by summarising current progress in the general field of biometrics and in the associated techniques for robust recognition. Each technique is then described in successive chapters, identifying related work, explaining the technique in detail and evaluating its performance. Future work will focus on developing algorithms to enable the 3D model to be accurately fitted to images. A number of developments in this area are outlined in the appendix. While these techniques have been developed for ear recognition, they also contribute to the general research challenge of recognising any object in any environment.
613

Multi-agent coordination for dynamic decentralised task allocation

Macarthur, Kathryn January 2011 (has links)
Coordination of multiple agents for dynamic task allocation is an important and challenging problem, which involves deciding how to assign a set of agents to a set of tasks, both of which may change over time (i.e., it is a dynamic environment). Moreover, it is often necessary for heterogeneous agents to form teams to complete certain tasks in the environment. In these teams, agents can often complete tasks more efficiently or accurately, as a result of their synergistic abilities. In this thesis we view these dynamic task allocation problems as a multi-agent system and investigate coordination techniques for such systems. In more detail, we focus specifically on the distributed constraint optimisation problem (DCOP) formalism as our coordination technique. A DCOP consists of agents, variables and functions; the agents must work together to find the optimal configuration of variable values. Given its ubiquity, a number of decentralised algorithms for solving such problems exist, including DPOP, ADOPT, and the GDL family of algorithms. In this thesis, we examine the anatomy of the above-mentioned DCOP algorithms and highlight their shortcomings with regard to their application to dynamic task allocation scenarios. We then explain why the max-sum algorithm (a member of the GDL family) is the most appropriate for our setting, and define specific requirements for performing multi-agent coordination in a dynamic task allocation scenario: namely, scalability, robustness, efficiency in communication, adaptiveness, solution quality, and boundedness. In particular, we present three dynamic task allocation algorithms: fast-max-sum, branch-and-bound fast-max-sum and bounded fast-max-sum, which build on the basic max-sum algorithm. The first introduces storage and decision rules at each agent to reduce the overheads incurred by re-running the algorithm every time the environment changes.
However, the overall computational complexity of fast-max-sum is exponential in the number of agents that could complete a task in the environment. Hence, in branch-and-bound fast-max-sum, we give fast-max-sum significant new capabilities: namely, an online pruning procedure that simplifies the problem, and a branch-and-bound technique that reduces the search space. This allows us to scale to problems with hundreds of tasks and agents, at the expense of additional storage. Despite this, fast-max-sum is only proven to converge to an optimal solution on instances where the underlying graph contains no cycles. In contrast, bounded fast-max-sum builds on techniques found in bounded max-sum, another extension of max-sum, to find bounded approximate solutions on arbitrary graphs. Given such a graph, bounded fast-max-sum first runs our iGHS algorithm, which computes a maximum spanning tree on subsections of a graph, in order to reduce overheads when there is a change in the environment. Bounded fast-max-sum then runs fast-max-sum on this maximum spanning tree in order to find a solution. We have found that fast-max-sum reduces the size of messages communicated and the amount of computation by up to 99% compared with the original max-sum. We also found that, even in large environments, branch-and-bound fast-max-sum finds a solution using 99% less computation and up to 58% fewer messages than fast-max-sum. Finally, we found that bounded fast-max-sum reduces the communication and computation cost of bounded max-sum by up to 99%, while obtaining 60-88% of the optimal utility, at the expense of more communication than using fast-max-sum alone. Thus, fast-max-sum or branch-and-bound fast-max-sum should be used where communication is expensive and provable solution quality is not necessary, and bounded fast-max-sum where communication is less expensive and provable solution quality is required.
In order to achieve such improvements over max-sum, fast-max-sum exploits a particularly expressive model of the environment by modelling tasks as function nodes in a factor graph, each of which needs some communication and computation performed on its behalf. An equivalent problem can be found in operations research, where it is known as scheduling jobs on unrelated parallel machines (also known as R||Cmax). In this thesis, we draw parallels between unrelated parallel machine scheduling and the computation distribution problem and, in so doing, present the spanning tree decentralised task distribution algorithm (ST-DTDA), the first decentralised solution to R||Cmax. Empirical evaluation of a number of heuristics for ST-DTDA shows that the solution quality achieved is up to 90% of the optimal on sparse graphs, in the best case, whilst worst-case quality bounds can be estimated within 5% of the solution found, in the best case.
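The maximum spanning tree at the heart of the bounded approach can be sketched centrally (an illustrative sketch using Prim's algorithm with negated weights; this is not the thesis's decentralised iGHS algorithm, and the example graph is hypothetical):

```python
import heapq

def max_spanning_tree(adj):
    """Prim's algorithm on an undirected weighted graph, keeping the
    *heaviest* edges by negating weights to reuse Python's min-heap.
    adj maps each node to a list of (weight, neighbour) pairs."""
    start = next(iter(adj))
    visited = {start}
    heap = [(-w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    tree = []
    while heap and len(visited) < len(adj):
        neg_w, u, v = heapq.heappop(heap)
        if v in visited:
            continue  # edge would close a cycle; skip it
        visited.add(v)
        tree.append((u, v, -neg_w))
        for w, x in adj[v]:
            if x not in visited:
                heapq.heappush(heap, (-w, v, x))
    return tree

# Three agents with pairwise link qualities; the heaviest tree keeps a-b and b-c.
adj = {"a": [(3, "b"), (1, "c")],
       "b": [(3, "a"), (2, "c")],
       "c": [(1, "a"), (2, "b")]}
print(max_spanning_tree(adj))  # [('a', 'b', 3), ('b', 'c', 2)]
```

Running fast-max-sum on such a tree is what lets bounded fast-max-sum guarantee convergence, since the cycle-free structure satisfies max-sum's optimality condition.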
614

Quantified evaluation of the significance of higher order effective moments and dielectrophoretic forces

Nili, Hossein January 2012 (has links)
In analyses of electric field interactions with dielectrics, higher order moments and dielectrophoretic force terms are commonly ignored in what has become known as the dipole approximation. The very few multipolar studies in the literature have either confined their analysis to spherical particles or modelled non-spherical particles as spheres of similar dimensions. A major obstacle in analysing the significance of higher order moments has been the limited availability of multipole moment determination techniques: analytic derivations for higher order moments are only available for spherical particles. This work addresses this roadblock and presents a hybrid numerical-analytical method for determining the first three effective moments of particles of any shape subjected to electric fields of arbitrary geometry. Results of applying this method to determine higher order dielectrophoretic force terms have been verified by comparison against total force calculations using the Maxwell stress tensor method, known for its mathematical rigour in accounting for all interactions between an applied electric field and the subject dielectric(s). It is shown that the dipole approximation is particularly unreliable for non-spherical particles, which importantly comprise the vast majority of bioparticles. It is shown that higher order terms can constitute up to half the dielectrophoretic force on dielectric particles in suspension. With the current trend toward micro- and nano-electrode geometries used for single-particle analysis, and a consequent increase in the number of instances where invoking the dipole approximation can be highly inaccurate, this work offers a computationally inexpensive and verifiably accurate means of determining higher order moments and dielectrophoretic forces.
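For context, the dipole approximation interrogated above is the standard textbook result (quoted here for reference, not taken from the thesis itself): the time-averaged DEP force on a homogeneous sphere of radius $R$ in a medium of permittivity $\varepsilon_m$ is

```latex
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
  = 2\pi \varepsilon_m R^{3}\,
    \operatorname{Re}\!\left[K(\omega)\right]
    \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}
                 {\varepsilon_p^{*} + 2\,\varepsilon_m^{*}},
\qquad
\varepsilon^{*} = \varepsilon - \frac{i\sigma}{\omega},
```

where $K(\omega)$ is the Clausius-Mossotti factor and $\varepsilon_p^{*}$, $\varepsilon_m^{*}$ are the complex permittivities of particle and medium. It is the higher-order corrections to this single dipole term that the thesis quantifies.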
615

Run-time compilation techniques for wireless sensor networks

Ellul, Joshua January 2012 (has links)
Wireless sensor networks research in the past decade has seen substantial initiative, support and potential. The true adoption and deployment of such technology is highly dependent on the workforce available to implement such solutions. However, embedded systems programming for severely resource-constrained devices, such as those used in typical wireless sensor networks (with tens of kilobytes of program space and around ten kilobytes of memory), is a daunting task which is usually left to experienced embedded developers. Recent initiatives to support higher-level programming abstractions for wireless sensor networks by adopting a Java programming paradigm for resource-constrained devices demonstrate the development benefits achieved. However, results have shown that an interpreter approach suffers greatly from execution overheads. Run-time compilation techniques are often used in traditional computing to make up for such execution overheads. However, the general consensus in the field is that run-time compilation techniques are either impractical, impossible, complex, or too resource-hungry for such resource-limited devices. In this thesis, I propose techniques to enable run-time compilation for such severely resource-constrained devices. Moreover, I show not only that run-time compilation is in fact both practical and possible, using simple techniques which require no more resources than interpreters do, but also that run-time compilation substantially increases execution efficiency when compared to an interpreter.
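The interpretation-versus-compilation trade-off can be sketched in miniature (a host-language analogy, not the thesis's on-device compiler): an interpreter pays a dispatch cost per opcode on every execution, while a one-off translation pays that cost once and then runs straight-line code.

```python
def interpret(bytecode):
    """Toy stack-machine interpreter: dispatches on every opcode, every run."""
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == "PUSH":
            pc += 1
            stack.append(bytecode[pc])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        pc += 1
    return stack[-1]

def compile_bytecode(bytecode):
    """Translate the bytecode once into a host-language closure; subsequent
    calls execute with no per-opcode dispatch at all."""
    exprs, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == "PUSH":
            pc += 1
            exprs.append(str(bytecode[pc]))
        elif op == "ADD":
            b, a = exprs.pop(), exprs.pop()
            exprs.append(f"({a} + {b})")
        elif op == "MUL":
            b, a = exprs.pop(), exprs.pop()
            exprs.append(f"({a} * {b})")
        pc += 1
    return eval("lambda: " + exprs[-1])

prog = ["PUSH", 2, "PUSH", 3, "ADD", "PUSH", 4, "MUL"]  # (2 + 3) * 4
print(interpret(prog))           # 20
print(compile_bytecode(prog)())  # 20
```

On a sensor node the translation target would be native machine code rather than a host-language closure, but the saving is the same: dispatch overhead is paid at translation time instead of on every instruction executed.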
616

Inference and learning in state-space point process models : algorithms and applications

Yuan, Ke January 2013 (has links)
Physiological signals such as neural spikes and heart beats are discrete events in time, driven by a continuous underlying system. A recently introduced data-driven model for analysing such systems is the state-space model with point process observations (SSPP), whose parameters and underlying state sequence are simultaneously identified in a maximum-likelihood setting using an approximate expectation-maximization (EM) algorithm. This thesis provides a detailed study of the properties of the SSPP under the EM setting. The results strongly suggest that a Bayesian treatment is more appropriate for avoiding biased estimation. For this we develop variational methods, and a range of efficient Markov chain Monte Carlo methods. The performance of these inference mechanisms is thoroughly tested on both synthetic and real-world datasets.
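The generative structure of such a model can be sketched as follows (an illustrative sketch only: the AR(1) latent dynamics, log-linear intensity and parameter names are my assumptions, not the thesis's specific model):

```python
import numpy as np

def simulate_sspp(T=500, rho=0.98, sigma=0.1, beta=1.0, mu=-3.0, seed=0):
    """Draw from a toy state-space point process: a latent AR(1) state x_t
    drives the conditional intensity lambda_t = exp(mu + beta * x_t), and
    spikes are Bernoulli events in unit-width time bins (a discretised
    approximation of the point process)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
    lam = np.exp(mu + beta * x)                  # intensity per time bin
    spikes = (rng.random(T) < lam).astype(int)   # at most one event per bin
    return x, spikes

x, spikes = simulate_sspp()
```

Inference for such a model must recover both the continuous path `x` and the parameters `(rho, sigma, beta, mu)` from the binary `spikes` alone, which is why the choice between EM and fully Bayesian treatments matters.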
617

Plasmonic mirror for light-trapping in thin film solar cells

Sesuraj, Rufina January 2014 (has links)
Microcrystalline silicon solar cells require enhanced absorption of photons in the near-bandgap region between 700-1150 nm. Conventional textured mirrors scatter light and increase the path length of photons in the absorber by total internal reflection. However, these mirrors exhibit a high surface roughness which degrades the performance of the microcrystalline silicon device. An alternative solution is to use metal nanoparticles with low surface roughness to scatter light. An illuminated metal nanoparticle exhibits a resonant or plasmonic excitation which can be tuned to enable strong scattering of light. This work aims to develop an efficient near-infrared light-scattering system using randomly arranged metal nanoparticles near a mirror. Situating the nanoparticles at the rear of the solar cell helps to target weakly absorbed photons and eliminates out-coupling losses through the inclusion of a rear mirror. Simulation results show that the electric field driving the plasmonic resonance can be tuned via the particle-mirror separation distance. The plasmonic scattering is maximised when the peak of the driving field intensity coincides with the intrinsic resonance of the nanoparticle. An e-beam lithography process was developed to fabricate a pseudo-random array of Ag nanodiscs near a Ag mirror. The optimised plasmonic mirror, with 6% coverage of 200 nm Ag discs, shows higher diffusive reflectivity than a conventional textured mirror in the near-infrared region, over a broad angular range. Unlike a mirror with self-organised Ag islands, the mirror with Ag nanodiscs exhibits a low surface roughness of 13.5 nm and low broadband absorption losses of around 10%. An 8.20% efficient thin n-i-p μc-Si:H solar cell, with the plasmonic mirror integrated at the rear, has been successfully fabricated.
The optimised plasmonic solar cell showed an increase of 2.3 mA in the short-circuit current density (Jsc), 6 mV in the open-circuit voltage (Voc) and 0.97% in the efficiency (η), compared to the planar cell counterpart with no nanodiscs. The low surface roughness of the plasmonic mirror ensures no degradation in the electrical quality of the μc-Si:H layer; this is also confirmed by the constant value of the fill factor (FF). The increase in Jsc is demonstrated, by detailed calculation of the exact photogenerated current in the plasmonic and planar devices for the 700-1150 nm wavelength range, to be mainly due to optical absorption enhancement in the near-infrared region as a result of plasmonic scattering.
618

Shaped apertures enhance the stability of suspended lipid bilayers

Kalsi, Sumit January 2014 (has links)
A biological membrane not only forms a protective outer boundary for cells and organelles but also houses ion channels that are attractive drug targets. The characterisation of membrane-embedded ion channels is hence of prime importance, but in vivo studies have been hindered by the complexity of natural membranes. Lipid bilayers suspended in apertures have provided a simple and controlled model membrane system for ion channel studies, but the short lifetimes and poor mechanical stability of suspended bilayers have limited the experimental throughput of bilayer electrophysiology experiments. Although suspended bilayers are more stable when smaller apertures are used, ion channel incorporation through vesicle fusion with the suspended bilayer then becomes increasingly difficult. In this project, as an alternative bilayer stabilization approach, shaped apertures with tapered sidewalls have been fabricated with serial two-photon laser lithography and high-throughput grayscale lithography in photoresist. Bilayers formed at the 2 µm thin tip of the shaped apertures, with either the painting or the folding method, displayed drastically increased lifetimes, typically >20 hours, and mechanical stability, being able to withstand extensive perturbation of the buffer solution, as compared to the control shapes. Single-channel electrical recordings of the peptide alamethicin, the water-soluble protein α-hemolysin and the proteoliposome-delivered potassium and sodium channels KcsA, hERG and NavSp pore domains demonstrate channel conductance with low noise, made possible by the small capacitance of the 50 µm thick resist septum, which is only thinned around the aperture, and unimpeded proteoliposome fusion, enabled by the large aperture diameter of 80 µm. Optically accessible horizontal bilayers in shaped apertures were developed to visualize suspended bilayers and incorporated ion channels.
It is anticipated that these shaped apertures with micrometre edge thickness can substantially enhance the throughput of channel characterisation by bilayer lipid membrane electrophysiology, especially in combination with automated parallel bilayer platforms.
619

Segmentation of lungs from volumetric CT-scan images using prior knowledge (shape and texture)

Liu, Wanmu January 2014 (has links)
This thesis presents a hierarchical scheme for the segmentation of lungs from volumetric CT images that draws on variational segmentation methods, namely geodesic active surfaces (GAS) and active surfaces without edges (ASWE), a volumetric similarity registration technique, statistical shape modelling using principal component analysis (PCA), and volumetric texture modelling. GAS and ASWE are 3-D extensions of their 2-D counterparts, geodesic active contours (GAC) and active contours without edges (ACWE). The two models are generalized into a unified framework, referred to as integrated active contours (IAS). Numerical implementation methods are derived for 3-D, and experiments are conducted in both 2-D and 3-D on synthetic and CT images. Global and local properties of active contours/surfaces under different parameter settings are presented, and several applications of these models are proposed based on experimental results. The similarity registration technique aims to find an optimal match between shapes with respect to rotation, scale and translation parameters. In this registration method, PCA is initially employed to calculate the principal axes of the shapes. These principal axes are used to obtain a coarse match between the shapes to be registered. Then geometric moments are exploited to estimate the isotropic scale parameter. The rotation and translation parameters are estimated by phase correlation techniques which take advantage of the fast Fourier transform (FFT). Experimental results demonstrate that the proposed technique, compared with the standard iterative gradient descent method, is fast, robust in the presence of severe noise, and suitable for registering various types of topologically complex volumetric shapes. Shape decomposition using PCA is the current state of the art and is widely drawn on in building deformable shape templates.
The major problem to be solved in the modelling is to find the PCA shape parameters that best approximate a novel shape of the same class. A comparison of popular methods for parameter estimation in the literature is presented, and a hybrid coarse-to-fine method based on previous works is proposed. The method achieves satisfactory accuracy compared with previous works and is validated on a database of lung shapes. A hierarchical shape-based segmentation method that incorporates GAS, ASWE, similarity registration, and statistical shape modelling is proposed to extract lungs from volumetric low-dose CT images. The method is extensively tested on a large variety of images, including synthetic images with noise and occlusions, low-dose CT images with artificial noise and synthetic tumors, and a low-dose CT database. The results indicate that the method is robust against noise and occlusions. Last but not least, a novel volumetric texture modelling technique based on the isotropic Gaussian Markov random field (IGMRF) is developed and applied to low-dose CT images of lungs. Based on the proposed texture modelling, a hard classification approach is suggested to provide proper initializations for the shape-based segmentation method, enabling the segmentation to achieve a higher degree of automation. The method is evaluated on low-dose CT images with synthetic tumors and the low-dose CT database. The experimental results suggest its suitability for providing proper initializations for shape-based segmentation.
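The FFT-based phase correlation step can be sketched for the simplest case of a pure 2-D integer translation (an illustrative sketch; the thesis's volumetric, rotation- and scale-aware registration is more involved):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer shift d such that b == np.roll(a, d, axis=(0, 1)),
    from the phase of the normalised cross-power spectrum."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F /= np.maximum(np.abs(F), 1e-12)   # keep phase, discard magnitude
    corr = np.fft.ifft2(F).real         # a delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold shifts beyond half the image size into negative displacements
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = np.roll(a, (3, 5), axis=(0, 1))
print(phase_correlation_shift(a, b))  # (3, 5)
```

Because the correlation is computed with two FFTs and one inverse FFT, the cost is O(N log N) in the number of voxels, which is what makes this step fast compared with iterative gradient descent registration.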
620

From requirement document to formal modelling and decomposition of control systems

Yeganefard, Sanaz January 2014 (has links)
Formal modelling of control systems can help with identifying missing requirements and design flaws before they are implemented. However, modelling using formal languages can be challenging and time-consuming. Therefore, intermediate steps may be required to simplify the transition from informal requirements to a formal model. In this work we first provide a four-stage approach for structuring and formalising the requirements of a control system. This approach is based on monitored, controlled, mode and commanded (MCMC) phenomena. In this approach, requirements are partitioned into MCMC sub-problems, which will then be formalised as independent sub-models. The formal language used in this thesis is Event-B, although the MCMC approach can be applied to other formal languages. We also provide guidelines and patterns which can be used to facilitate the process of modelling in the Event-B language. The second contribution of this work is to extend the structure of machines in the Event-B language and to provide an approach for composing the formal MCMC sub-models in order to obtain the overall specification. The composition deals with phenomena that are shared amongst the formal sub-models. In our third contribution, patterns and guidelines are provided to refine the overall formal specification further in order to define design details. In addition, we discuss the decomposition of a formal model of a control system. As practical examples, the MCMC approach is applied to the requirements of three automotive control systems, namely a cruise control system, a lane departure warning system, and a lane centering controller.
