151

All-dielectric superlens and applications

Yan, Bing January 2018 (has links)
One of the great challenges in optics is to break the diffraction limit to achieve optical super-resolution for applications in imaging, sensing, manufacturing and characterization. In recent years we have witnessed a number of exciting developments in this field, including super-resolution fluorescence microscopy, negative-index metamaterial superlenses and superoscillation lenses. However, none of these could perform white-light super-resolution imaging until the development of the microsphere nanoscopy technique, which was pioneered by the research group in which this PhD was undertaken. The technique is based on all-dielectric microsphere superlenses, which are fundamentally different from metal-based superlenses. In this research, we aim to significantly advance the technology by: (1) increasing superlens resolution to the sub-50 nm scale and (2) improving superlens usability and demonstrating applications in a wider context, including lab-on-chip devices. Our longer-term vision is to bring all-dielectric superlens technology to market so that every microscope user can have a superlens in hand for their daily examination of nanoscale objects, including viruses. To improve the superlens resolution, a systematic theoretical study was first carried out on the optical properties of the dielectric microsphere superlens. New approaches were proposed to obtain precise control of the focusing properties of the microsphere lens. Using pupil-mask engineering and a two-material composite superlens design, one can precisely control the focusing properties of the lens and effectively surpass the diffraction limit λ/2n. To further improve the resolution, we incorporated the metamaterial concept in our superlens design. A new all-dielectric nanoparticle metamaterial superlens design was proposed, realized by 3D stacking of high-index nanoparticles to form a micro-sized particle lens. This man-made superlens has unusual optical properties not found in nature: highly effective conversion of evanescent waves to propagating waves for unprecedented optical super-resolution. Using 15 nm TiO2 nanoparticles as building blocks, the fabricated 3D all-dielectric metamaterial-based solid immersion lens (mSIL) can produce a sharp image with a super-resolution of at least 45 nm under a white-light optical microscope, significantly exceeding the classical diffraction limit and previous near-field imaging techniques. In addition to the mSIL, in which only one kind of nanoparticle was used, we also studied a two-nanomaterial hybrid system: high-quality microspheres consisting of ZrO2/polystyrene elements were synthesised and studied. We show that precise tuning of the refractive index of the microspheres can effectively enhance the imaging resolution and quality. To increase superlens usability and application scope, we proposed and demonstrated a new microscope objective lens that features a two-fold resolution improvement over a conventional objective. This is accomplished by integrating a conventional microscope objective lens with a superlensing microsphere lens via a customised lens adaptor. The new objective lens was successfully demonstrated for label-free super-resolution static and scanning imaging of 100 nm features in engineering and biological samples. In an effort to lower the entry barrier to superlens technology, we studied several spider silks as naturally occurring optical superlenses. These spider silks are naturally transparent and have a micron-scale cylindrical structure.
They can distinctly resolve λ/6 features with a large field of view under a conventional white-light microscope. This discovery opens a new door to developing biology-based optical systems and enriches the superlens category. Because microsphere superlenses are small in size, their application can be extended to lab-on-chip devices. In this thesis, a microsphere superlens was integrated into a microfluidic channel to build an on-chip microfluidic superlensing device for real-time, high-resolution imaging of biological objects. Several biological samples differing in size, transparency, contrast and mobility have been visualised. This integrated device provides a new way for researchers to directly visualise details of biological specimens in real time under a conventional white-light microscope. The work carried out in this research has significantly improved microsphere superlens technology, opening the door for commercial exploitation.
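For orientation, the resolution figures quoted in this abstract can be set against the classical Abbe limit. The wavelength used below (550 nm, mid-visible) is an assumed illustrative value, not a figure taken from the thesis.

```latex
% Classical Abbe limit next to the resolutions quoted above; the wavelength is an
% assumed mid-visible value (550 nm), not a figure from the thesis.
\[
  d_{\min} \approx \frac{\lambda}{2\,\mathrm{NA}} = \frac{\lambda}{2 n \sin\theta}
  \;\approx\; 275\ \mathrm{nm} \quad (\lambda = 550\ \mathrm{nm},\ \mathrm{NA} \approx 1),
\]
\[
  \text{whereas } 45\ \mathrm{nm} \approx \lambda/12
  \quad\text{and}\quad \lambda/6 \approx 92\ \mathrm{nm}.
\]
```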
152

Energy-efficient cooperative resource allocation for OFDMA

Monteiro, Valdemar Celestino January 2017 (has links)
Energy is increasingly becoming a costly commodity in next-generation wireless communication systems; even in legacy systems, mobile operators' operational expenditure is largely attributed to the energy bill. As the amount of mobile traffic is expected to double over the next decade as we enter the next-generation communications era, addressing energy-efficient protocols will be a priority. We will therefore need to revisit the design of the mobile network in order to adopt a proactive stance towards reducing its energy consumption. Future emerging communication paradigms will evolve towards next-generation mobile networks that will not only consider a new air interface for high-broadband connectivity, but will also integrate legacy communication networks (LTE/LTE-A, IEEE 802.11x, among others) to provide a ubiquitous communication platform, one that can host a multitude of rich services and applications. In this context, the radio access network will predominantly be OFDMA based, providing the impetus for further research on how this technology can be optimized towards energy efficiency. Advanced approaches towards both energy- and spectral-efficient design will continue to dominate the research agenda. Taking a step in this direction, LTE/LTE-A (Long Term Evolution / LTE-Advanced) has already investigated cooperative paradigms such as SON (Self-Organizing Networks), network sharing, and CoMP (Coordinated Multipoint) transmission. Although these technologies have provided promising results, some are still in their infancy and lack an interdisciplinary design approach, limiting their potential gain. In this thesis, we aim to advance these emerging paradigms from a resource allocation perspective on two accounts. In the first scenario, we address the challenge of load balancing (LB) in OFDMA networks, which is employed to redistribute the traffic load in the network so as to use spectral resources effectively throughout the day. We aim to re-engineer the LB approach through interdisciplinary design to develop an integrated, energy-efficient solution based on SON and network sharing, which we refer to as SO-LB (Self-Organizing Load Balancing). Simulation results show that, by employing the SO-LB algorithm in a shared network, it is possible to achieve up to 15-20% savings in energy consumption compared to LTE-A non-shared networks. The second approach considers CoMP transmission, which is currently used to enhance cell coverage and capacity at the cell edge. Legacy approaches mainly consider fundamental scheduling policies for assigning users to CoMP transmission. We build on these scheduling approaches towards a cross-layer design that provides enhanced resource utilization, fairness and energy saving whilst maintaining low complexity, in particular for broadband applications.
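To make the energy-saving idea concrete, the sketch below shows a toy consolidation step in the spirit of energy-aware load balancing: users in lightly loaded cells are moved onto neighbours with spare capacity so that the emptied cells can sleep. It is not the thesis's SO-LB algorithm, and all figures (idle power, per-user power, capacity, user counts) are made-up illustrative values.

```python
# Toy consolidation sketch (not the thesis's SO-LB algorithm): offload users from
# lightly loaded cells onto neighbours, put the emptied cells to sleep, and
# compare the total power drawn before and after.

P_IDLE, P_PER_USER, CAPACITY = 120.0, 1.5, 50   # hypothetical base-station figures (W, W/user, users)

def total_power(loads):
    """Power of all active cells: idle floor plus a per-user increment."""
    return sum(P_IDLE + P_PER_USER * n for n in loads if n > 0)

def consolidate(loads, sleep_threshold=10):
    """Greedy consolidation: empty cells at or below the threshold into busier neighbours."""
    loads = sorted(loads)                                # work on a copy, lightest cells first
    for i, n in enumerate(loads):
        if 0 < n <= sleep_threshold:
            for j in range(len(loads) - 1, i, -1):       # fill the busiest cells first
                moved = min(CAPACITY - loads[j], loads[i])
                loads[j] += moved
                loads[i] -= moved
                if loads[i] == 0:
                    break
    return loads

cells = [3, 8, 25, 30, 40, 45]                           # off-peak user counts (made up)
before, after = total_power(cells), total_power(consolidate(cells))
print(f"energy saving: {100 * (1 - after / before):.1f}%")
```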
153

Knowledge based methodologies for planning and operation of distribution system

Ananthapadmanabha, T 05 1900 (has links)
Knowledge based methodologies
154

Integrating sensors and actuators for robotic assembly

Johnson, David Gary January 1986 (has links)
This thesis addresses the problem of integrating sensors and actuators for closed-loop control of a robotic assembly cell. In addition to the problems of interfacing the physical components of the work-cell, the difficulties of representing sensory feedback at a high level within the robot control program are investigated. A new level of robot programming, called sensor-level programming, is introduced. In this, the movements of the actuators are not given explicitly, but rather are inferred by the programming system to achieve new sensor conditions given by the programmer. Control of each sensor and actuator is distributed through a master-slave hierarchy, with each sensor and actuator having its own slave controller. A protocol for information interchange between each controller and the master is defined. Where possible, the control of the kinematics of a robot arm is achieved through the manufacturer's existing control system. Under these circumstances, the actuator slave acts as an interface between the generic command codes issued from the central controller and the syntax of the corresponding control instructions required by the commercial system. Sensor information is preprocessed in the sensor slaves and a set of high-level descriptors, called attributes, is sent to the central controller. Closed-loop control is achieved on the basis of these attributes. The processing of sensor information that is corrupted by noise is investigated. Sources of sensor noise are identified and new algorithms are developed to quantify the noise based on information obtained from the closed-loop servoing. Once the relative magnitudes of the system and measurement noise have been estimated, a Kalman filter is used to weight the sensor information and hence reduce the credibility given to noisy sensors, in the limit ignoring the information completely. The improvements in system performance obtained by processing the sensor information in this way are demonstrated. The sensor-level representation and automatic error processing are embedded in a software control system, which can be used to interface commercial systems as well as purpose-built devices. An industrial research project associated with the lay-up of carbon fibre provides an example of its operation. A list of publications resulting from the work in this thesis is given in Appendix E.
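As a rough illustration of the Kalman weighting described above, the scalar sketch below shows how the filter gain shrinks as a sensor's estimated measurement noise grows, so that readings from a noisy sensor are weighted less and, in the limit, ignored. This is a minimal textbook example, not the controller implemented in the thesis; all numerical values are assumptions.

```python
# Minimal scalar Kalman-filter sketch (illustrative, not the thesis's controller):
# the gain K shrinks as the measurement-noise estimate R grows, so a noisy
# sensor's readings are trusted less and, as R -> infinity, ignored entirely.

def kalman_update(x, P, z, Q, R):
    """One predict/update cycle for a scalar state observed directly."""
    P = P + Q                      # predict: state assumed constant, add process noise
    K = P / (P + R)                # gain: credibility given to the measurement
    x = x + K * (z - x)            # update estimate with the weighted innovation
    P = (1.0 - K) * P              # update estimate variance
    return x, P, K

measurements = [0.9, 1.1, 1.0, 0.95, 1.05]       # made-up sensor attribute values
for R in (0.01, 100.0):                          # trusted sensor vs. very noisy sensor
    xe, Pe = 0.0, 1.0                            # initial estimate and variance
    for z in measurements:
        xe, Pe, K = kalman_update(xe, Pe, z, Q=1e-4, R=R)
    print(f"R={R}: estimate={xe:.3f}, final gain={K:.4f}")
```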
155

A study of low force fabric characteristics and vibrational behaviour for automated garment handling

Pollet, Didier Michel January 1998 (has links)
One of the fundamental concepts in automated garment assembly is that the orientation of a fabric panel should never be lost. However, if a panel does become distorted, several techniques, such as vision, air flotation tables and vibratory conveyors, are available to restore the orientation. This thesis has investigated the behaviour of a fabric panel on a vibratory table. Several table parameters, such as amplitude of vibration, frequency and angle of inclination, together with important fabric properties such as friction and compressibility, are required to understand this behaviour. However, most work on friction in textiles considers fibre-fibre or fabric-fabric friction, which is not appropriate here, and so the low-force frictional properties between unloaded fabric and engineering surfaces (i.e. aluminium, Formica and rubber) have been studied. The influence of several experimental variables on friction is demonstrated, in particular the effects of humidity and velocity. Further, an in-depth study is made of the stick-slip behaviour of fabric panels, for which a novel measuring technique is introduced. An estimate of the damping, which is required to model the fabric, has been obtained from an in-plane vibration test. The second significant fabric property to be studied is compression, both static and impact. Again, only low-force compression tests are carried out, since these are the typical forces experienced by fabrics on a vibrating table. The static compressibility of knitted and woven materials is verified with van Wyk's equation, which gives a near-indistinguishable fit with the experimental data.
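For reference, a commonly quoted form of van Wyk's compression relation for a fibrous assembly is given below; the notation follows the usual textile-mechanics convention and is not reproduced from the thesis itself.

```latex
% A commonly quoted form of van Wyk's (1946) relation for the compression of a
% fibrous assembly; symbols follow the usual textile-mechanics convention.
\[
  P = \frac{K E m^{3}}{\rho^{3}} \left( \frac{1}{V^{3}} - \frac{1}{V_{0}^{3}} \right)
\]
% P: applied pressure, E: fibre Young's modulus, m: fibre mass, \rho: fibre density,
% V: compressed volume of the assembly, V_0: volume at zero pressure, K: empirical constant.
```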
156

Time domain threshold crossing for signals in noise

Al-Jajjoka, Sam Nooh K. January 1995 (has links)
This work investigates the discrimination of times between threshold crossings for deterministic periodic signals with added band-limited noise. The methods cover very low signal-to-noise ratios (one or less). The investigation has concentrated on the theory of double threshold crossings, with special care taken over the effects of correlations in the noise and their influence on the probability of detection of double crossings. A computer program has been written to evaluate these probabilities for a wide range of signal-to-noise ratios, a wide range of signal-to-bandwidth ratios, and a range of times between crossings of up to two signal periods. Correlations due to the extreme cases of a brick-wall filter and a second-order Butterworth filter have been included; other filters can easily be added to the program. The method is simulated and demonstrated by implementation on a digital signal processor (DSP), a TMS32020. Results from the DSP technique are in agreement with the theoretical evaluations. The probability results could be used to determine optimum time thresholds and windows for signal detection and frequency discrimination, to determine the signal length needed for adequate discrimination, and to evaluate channel capacities. The ability to treat high noise, including the exact effects of time correlations, promises new applications in electronic signal detection, communications, and pulse-discrimination neural networks.
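The sketch below is a simple Monte Carlo counterpart (not the thesis's analytical program or its DSP implementation) of the double-crossing problem: a sinusoid at a signal-to-noise ratio of one is buried in noise shaped by a second-order Butterworth filter, and the empirical distribution of times between upward threshold crossings is inspected. The sample rate, signal frequency, cutoff and acceptance window are illustrative assumptions.

```python
# Monte Carlo sketch of threshold-crossing intervals for a sinusoid in
# band-limited noise (second-order Butterworth shaping), SNR = 1.
import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(0)
fs, f0, snr, thresh = 1000.0, 10.0, 1.0, 0.0     # sample rate, signal freq, SNR, threshold
t = np.arange(0, 20.0, 1.0 / fs)
sig = np.sin(2 * np.pi * f0 * t)

b, a = butter(2, 0.1)                            # second-order low-pass, cutoff = 0.05 * fs
noise = lfilter(b, a, rng.standard_normal(t.size))
noise *= np.sqrt(np.var(sig) / snr) / noise.std()   # scale noise power to the chosen SNR

x = sig + noise
up = np.flatnonzero((x[:-1] < thresh) & (x[1:] >= thresh))   # upward crossing indices
intervals = np.diff(up) / fs                                  # times between successive crossings

# Fraction of crossing intervals landing within +-10% of one signal period:
window = np.abs(intervals - 1.0 / f0) < 0.1 / f0
print(f"{window.mean():.2f} of intervals fall in the one-period window")
```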
157

Video object segmentation and tracking.

Murugas, Themesha. 31 March 2014 (has links)
One of the more complex video processing problems currently vexing researchers is that of object segmentation. This involves identifying semantically meaningful objects in a scene and separating them from the background. While the human visual system is capable of performing this task with minimal effort, research and development in machine vision has yet to yield techniques that perform the task as effectively and efficiently. The problem is difficult not only because of the complexity of the mechanisms involved but also because it is ill-posed: no unique segmentation of a scene exists, as what is of interest as a segmented object depends very much on the application and the scene content. In most situations a priori knowledge of the nature of the problem is required, often depending on the specific application in which the segmentation tool is to be used. This research presents an automatic method of segmenting objects from a video sequence. The intent is to extract and maintain both the shape and contour information as the object changes dynamically over time in the sequence. A priori information is incorporated by requesting the user to tune a set of input parameters prior to execution of the algorithm. Motion is used as a semantic for video object extraction, subject to the assumption that there is only one moving object in the scene and that the only motion in the video sequence is that of the object of interest. It is further assumed that there is constant illumination and no occlusion of the object. A change detection mask is used to detect the moving object, followed by morphological operators to refine the result. The change detection mask yields a model of the moving components; this is then compared to a contour map of the frame to extract a more accurate contour of the moving object, which is then used to extract the object of interest itself. Since the video object is moving as the sequence progresses, it is necessary to update the object over time. To accomplish this, an object tracker has been implemented based on the Hausdorff object-matching algorithm. The dissertation begins with an overview of segmentation techniques and a discussion of the approach used in this research. This is followed by a detailed description of the algorithm covering initial segmentation, object tracking across frames and video object extraction. Finally, the semantic object extraction results for a variety of video sequences are presented and evaluated. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005
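A rough sketch of the change-detection-plus-morphology stage described above is given below, written with OpenCV for brevity. It is not the dissertation's original implementation; the threshold, kernel size and file names are illustrative assumptions.

```python
# Illustrative change detection followed by morphological refinement (OpenCV).
import cv2

def moving_object_mask(prev_frame, curr_frame, diff_thresh=25, kernel_size=5):
    """Binary mask of moving pixels: frame difference, threshold, then open/close."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)                      # change detection mask
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)         # remove isolated noise pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)        # fill small holes in the object
    return mask

# Example usage with two consecutive frames of a sequence (file names are placeholders):
# prev, curr = cv2.imread("frame_000.png"), cv2.imread("frame_001.png")
# obj = cv2.bitwise_and(curr, curr, mask=moving_object_mask(prev, curr))
```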
158

Traffic modeling in mobile internet protocol : version 6.

Mtika, Steve Davidson. January 2005 (has links)
Mobile Internet Protocol Version 6 (IPv6) is the new version of the Internet Protocol (IP), born out of the great success of Internet Protocol version 4 (IPv4). The motivation behind the development of the Mobile IPv6 standard stems from users' demand for mobile devices that can connect and move seamlessly across a growing number of connectivity options. It is suitable for mobility between subnets across both homogeneous and inhomogeneous media. The protocol allows a mobile node to communicate with other hosts after changing its point of attachment from one subnet to another. The huge address space available meets the requirements for the rapid development of the internet, as the number of mobile nodes increases tremendously with its rapid expansion. The integration of mobility, security and quality of service (QoS) in Mobile IPv6 makes it an important foundation stone for building the mobile information society and the future internet. Convergence between current network technologies, the internet and mobile telephony, is taking place, but the internet's IP routing was designed to work with conventional static nodes. Mobile IPv6 is therefore considered to be one of the key technologies for realizing convergence, enabling seamless communication between fixed and mobile access networks. For this reason, there are numerous works on location registration and mobility management, traffic modeling, QoS, routing procedures, etc. To meet the increased demand for mobile telecommunications, traffic modeling is an important step towards understanding and solving performance problems in future wireless IP networks. Understanding the nature of this traffic, identifying its characteristics and developing appropriate traffic models, coupled with appropriate mobility management architectures, are of great importance to the traffic engineering and performance evaluation of these networks. It is imperative that the mobility management used keeps providing good performance to mobile users and keeps the network load due to signaling and packet delivery as low as possible. To reduce this load, the Internet Engineering Task Force (IETF) proposed regional mobility management. The load is reduced by allowing local migrations to be handled locally, transparently to the Home Agent and the Correspondent Node, as the mobile nodes roam freely around the network. This dissertation tackles two major aspects. Firstly, we propose the dynamic regional mobility management (DRMM) architecture, with the aim of minimizing network load while keeping an optimal number of access routers in the region. The mobility management is dynamic, based on the movement and population of the mobile nodes around the network. Most traffic models in telecommunication networks have been based on exponential Poisson processes. This model, unfortunately, has been shown to be unsuitable for modeling bursty IP traffic. Several approaches to modeling IP traffic with Markovian processes have been developed using the Batch Markovian Arrival Process (BMAP), by characterizing arrivals as batches whose sizes follow different distributions. The BMAP is constructed by generalizing batch Poisson processes to allow for non-exponential times between arrivals of batches while maintaining an underlying Markovian structure. The second aspect of this dissertation covers traffic characterization. We analyse an access router as a single-server queue with unlimited waiting space under a non-pre-emptive priority queuing discipline.
We model the arrival process as a superposition of BMAP processes and characterize the superimposed arrival process using the BMAP representation. We derive the queue length and waiting time for this type of queuing system. The performance of this traffic model is evaluated by obtaining numerical results in terms of queue length and waiting time, and their distributions, for high- and low-priority traffic. We finally present a call admission control scheme that supports QoS. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2005.
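For readers unfamiliar with the model, the standard textbook parameterisation of a BMAP is summarised below; this is the generic formulation, not notation reproduced from the dissertation. Transitions of the underlying Markov chain that carry no arrival are governed by D_0, and transitions accompanied by a batch of size k by D_k.

```latex
% Generic BMAP parameterisation: D_0 (no arrival) and D_k (batch of size k) sum to
% the generator D of the underlying Markov chain, with stationary vector \pi.
\[
  D = \sum_{k \ge 0} D_k, \qquad \pi D = 0, \quad \pi \mathbf{1} = 1,
\]
\[
  \lambda = \pi \sum_{k \ge 1} k\, D_k \mathbf{1}
  \qquad \text{(mean aggregate arrival rate of the superposed traffic).}
\]
```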
159

Experiments in thin film deposition : plasma-based fabrication of carbon nanotubes and magnesium diboride thin films.

Coetsee, Dirk. January 2004 (has links)
A simple, low-cost plasma reactor was developed for the purpose of carrying out thin film deposition experiments. The reactor is based largely on the Atmospheric Pressure Nonequilibrium Plasma (APNEP) design with a simple modification. It was used in an attempt to fabricate magnesium diboride thin films via a novel, but unsuccessful, CVD process in which plasma etching provides a precursor boron flux. Carbon nanotubes were successfully synthesised with the apparatus using a plasma-based variation of the floating catalyst or vapour-phase growth method. The effect of various parameters and chemicals on the quality of nanotube production was assessed. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2004.
160

Super-orthogonal space-time turbo codes in Rayleigh fading channels.

Pillai, Jayesh Narayana. January 2005 (has links)
The vision of anytime, anywhere communications, coupled with the rapid growth of wireless subscribers and increased volumes of internet users, suggests that the widespread demand for always-on data access is sure to be a major driver for the wireless industry in the years to come. Among many cutting-edge wireless technologies, a new class of transmission techniques, known as Multiple-Input Multiple-Output (MIMO) techniques, has emerged as an important technology, promising severalfold gains in link capacity, data rates and spectral efficiency. While the use of MIMO techniques in the third generation (3G) standards is minimal, it is anticipated that these technologies will play an important role in the physical layer of fixed and fourth generation (4G) wireless systems. Concatenated codes, a class of forward error correction codes of which turbo codes are a classical example, have been shown to achieve reliable performance approaching the Shannon limit. An effective and practical way to approach the capacity of MIMO wireless channels is to employ space-time coding (STC). Space-time coding is based on introducing joint correlation in the transmitted signals in both the space and time domains. Space-Time Trellis Codes (STTCs) have been shown to provide the best trade-off in terms of coding gain advantage, improved data rates and computational complexity. Super-Orthogonal Space-Time Trellis Coding (SOSTTC) is a recently proposed form of space-time trellis coding which outperforms its predecessor. The code has a systematic design method to maximize the coding gain for a given rate, constellation size and number of states. Simulation and analytical results are provided to justify the improved performance. The main focus of this dissertation is on STTCs, SOSTTCs and their concatenated versions in quasi-static and rapid Rayleigh fading channels. Turbo codes and space-time codes have made a significant impact in theory and practice by closing the gap on the Shannon limit and exploiting the large capacity gains provided by the MIMO channel, respectively. A convincing way to exploit the capabilities of a MIMO channel is therefore to build the turbo processing principle into the design of MIMO architectures. The field of concatenated STTCs has already received much attention and has shown improved performance over conventional STTCs. Recently, simple and double concatenated STTC structures have been shown to provide a further improvement in performance. Motivated by this, two concatenated SOSTTC structures, called super-orthogonal space-time turbo codes, are proposed. The performance of these new concatenated SOSTTCs is compared with that of concatenated STTCs and conventional SOSTTCs through simulations in Rayleigh fading channels. It is seen that the SOST-CC system outperforms the ST-CC system in rapid fading channels, whereas it maintains similar performance in quasi-static channels. The SOST-SC system has improved performance for larger frame lengths and overall maintains performance similar to ST-SC systems. A further investigation of these codes with channel estimation errors is also provided. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005.
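For context, the orthogonal building block behind space-time designs of this kind is the Alamouti transmission matrix; super-orthogonal codes enlarge the set of such matrices by means of a rotation angle θ. The form shown below is the standard one from the literature, given here only for illustration and not reproduced from the thesis (rows correspond to time slots, columns to transmit antennas).

```latex
% Rotated Alamouti block as used in super-orthogonal space-time code constructions
% (standard literature form, shown for illustration).
\[
  X(s_1, s_2, \theta) =
  \begin{pmatrix}
     s_1 e^{j\theta} & s_2 \\
    -s_2^{*} e^{j\theta} & s_1^{*}
  \end{pmatrix},
  \qquad
  X^{H} X = \bigl(|s_1|^{2} + |s_2|^{2}\bigr) I_2 .
\]
```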
