  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Optical studies of the pre-breakdown mechanism in dielectric liquids

McGrath, P. B. January 1977 (has links)
The work described in this thesis provides an optical study of pre-breakdown events in liquid dielectrics. A small scale rig employing a 50 Ω test cell with viewing windows, as part of a high voltage co-axial line, enabled short rise time pulses to be applied to a non-uniform test gap. For the liquid dielectric, changes of refractive index resulting from the applied voltage were rendered visible by means of a Schlieren optical system. A high speed image converter camera enabled rapidly changing pre-breakdown phenomena to be photographically recorded at framing rates up to 10^7 frames per second. Scattered light photographs were taken by orthogonal flash illumination under both pulse and direct voltage conditions, allowing large format macrophotography. Using a piezoelectric transducer placed within the test cell, and a photomultiplier to view the gap region, a relationship has also been established between the generation of mechanical waves, light scintillation and conduction current pulses. From the photographic records and conventional parameter measurements, there is strong evidence for the presence of a gaseous phase in the processes leading to the electrical breakdown of liquid dielectrics, even under pulse conditions.
82

Trajectory based video analysis in multi-camera setups

Anjum, Nadeem January 2010 (has links)
This thesis presents an automated framework for activity analysis in multi-camera setups. We start with the calibration of cameras, particularly those without overlapping views. An algorithm is presented that exploits trajectory observations in each view and works iteratively on camera pairs. First, outliers are identified and removed from the observations of each camera. Next, spatio-temporal information derived from the available trajectory is used to estimate unobserved trajectory segments in areas uncovered by the cameras. The unobserved trajectory estimates are used to estimate the relative position of each camera pair, whereas the exit-entrance direction of each object is used to estimate their relative orientation. The process continues and iteratively approximates the configuration of all cameras with respect to each other. Finally, we refine the initial configuration estimates with bundle adjustment, based on the observed and estimated trajectory segments. For cameras with overlapping views, state-of-the-art homography based approaches are used for calibration. Next, we establish object correspondence across multiple views. Our algorithm consists of three steps, namely association, fusion and linkage. For association, local trajectory pairs corresponding to the same physical object are estimated using multiple spatio-temporal features on a common ground plane. To disambiguate spurious associations, we employ a hybrid approach that utilises the matching results on the image plane and ground plane. The trajectory segments after association are fused by adaptive averaging. Trajectory linkage then integrates segments and generates a single trajectory of an object across the entire observed area. Finally, for activity analysis, clustering is applied to complete trajectories. Our clustering algorithm is based on four main steps, namely the extraction of a set of representative trajectory features, non-parametric clustering, cluster merging and information fusion for the identification of normal and rare object motion patterns. First, we transform the trajectories into a set of feature spaces on which Meanshift identifies the modes and the corresponding clusters. Furthermore, a merging procedure is devised to refine these results by combining similar adjacent clusters. The final common patterns are estimated by fusing the clustering results across all feature spaces. Clusters corresponding to reoccurring trajectories are considered normal, whereas sparse trajectories are associated with abnormal and rare events. The performance of the proposed framework is evaluated on standard datasets and compared with state-of-the-art techniques. Experimental results show that the proposed framework outperforms state-of-the-art algorithms in terms of both accuracy and robustness.
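As a purely illustrative sketch (not taken from the thesis), the non-parametric clustering step described above can be approximated with off-the-shelf mean-shift on a simple trajectory feature space; the feature choice here (start point, end point and mean velocity) and the rarity threshold are assumptions made for the example.

```python
# Illustrative sketch: mean-shift clustering of trajectories in a simple
# hand-picked feature space (start point, end point, mean velocity).
# The feature choice is an assumption, not the thesis's representative features.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def trajectory_features(traj):
    """traj: (N, 2) array of ground-plane points, ordered in time."""
    traj = np.asarray(traj, dtype=float)
    velocity = np.diff(traj, axis=0).mean(axis=0) if len(traj) > 1 else np.zeros(2)
    return np.concatenate([traj[0], traj[-1], velocity])

def cluster_trajectories(trajectories, quantile=0.3):
    X = np.vstack([trajectory_features(t) for t in trajectories])
    bandwidth = estimate_bandwidth(X, quantile=quantile)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
    # Sparse clusters stand in for the "rare event" patterns mentioned above.
    counts = np.bincount(labels)
    rare = {lbl for lbl, c in enumerate(counts) if c < 0.05 * len(labels)}
    return labels, rare
```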
83

Packet level measurement over wireless access

Naqvi, Syeda Samana January 2011 (has links)
Performance measurement of IP packet networks mainly comprises monitoring the network performance in terms of packet losses and delays. If used appropriately, these network parameters (delay, loss, bandwidth, etc.) can indicate the performance status of the network, and they can be used in fault and performance monitoring, network provisioning, and traffic engineering. Globally, there is a growing need for accurate network measurement to support the commercial use of IP networks. In wireless networks, transmission losses and communication delays strongly affect the performance of the network. Compared to wired networks, wireless networks experience higher levels of data dropouts and corruption due to channel fading, noise, interference and mobility. Performance monitoring is a vital element in the commercial future of broadband packet networking, and the ability to guarantee quality of service in such networks is implicit in Service Level Agreements. Active measurements are performed by injecting probes, and this is widely used to determine the end to end performance. End to end delay in wired networks has been extensively investigated, and in this thesis we report on the accuracy achieved by probing for end to end delay over a wireless scenario. We have compared two probing techniques, Periodic and Poisson probing, and estimated the absolute error for both. The simulations have been performed for single hop and multi-hop wireless networks. In addition to end to end latency, active measurements have also been performed for packet loss rate. The simulation based analysis has been carried out under different traffic scenarios using Poisson traffic models. We have sampled the user traffic using Periodic probing at different rates for single hop and multiple hop wireless scenarios. Active probing becomes critical at higher values of load, forcing the network to saturation much earlier. We have evaluated the impact of monitoring overheads on the user traffic, and show that even a small amount of probing overhead in a wireless medium can cause large degradation in network performance. Although probing at a high rate provides a good estimate of the delay distribution of user traffic with large variance, there is a critical tradeoff between the accuracy of measurement and the packet probing overhead. Our results suggest that active probing is highly affected by probe size, rate, pattern, traffic load, the nature of the shared medium, available bandwidth and the burstiness of the traffic.
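For readers unfamiliar with the two probing patterns compared above, the following minimal sketch (not from the thesis; the probe rate, packet size and observation window are arbitrary assumptions) generates Periodic and Poisson probe send times; the Poisson stream uses exponentially distributed inter-probe gaps, and the rough probing overhead is the probe size times the probe rate.

```python
# Minimal sketch of the two active-probing schedules compared in the thesis:
# periodic probes vs. a Poisson probe stream (exponential inter-probe gaps).
# Probe rate, packet size and observation window are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)

def periodic_probes(rate_hz, duration_s):
    return np.arange(0.0, duration_s, 1.0 / rate_hz)

def poisson_probes(rate_hz, duration_s):
    gaps = rng.exponential(1.0 / rate_hz, size=int(rate_hz * duration_s * 2))
    times = np.cumsum(gaps)
    return times[times < duration_s]

probe_size_bytes = 64          # assumed probe packet size
rate_hz, duration_s = 10, 60   # assumed probe rate and observation window
for name, sched in [("periodic", periodic_probes(rate_hz, duration_s)),
                    ("poisson", poisson_probes(rate_hz, duration_s))]:
    print(f"{name}: {len(sched)} probes, mean gap {np.diff(sched).mean():.3f} s, "
          f"overhead ~{probe_size_bytes * rate_hz} B/s")
```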
84

Fuzzy knowledge based reliability evaluation and its application to power generating system

Wang, Lei January 1994 (has links)
The method of using Fuzzy Set Theory (FST) and Fuzzy Reasoning (FR) to aid reliability evaluation in a complex and uncertain environment is studied, with special reference to electrical power generating system reliability evaluation. Device (component) reliability prediction contributes significantly to a system's reliability through its ability to identify sources and causes of unreliability. The main factors which affect reliability are identified in the Reliability Prediction Process (RPP). However, the relation between reliability and each affecting factor is not a necessary and sufficient one, and it is difficult to express this kind of relation precisely in terms of quantitative mathematics. It is acknowledged that human experts possess special characteristics that enable them to learn and reason in a vague and fuzzy environment based on their experience. Therefore, reliability prediction can be classified as a human engineer oriented decision process. A fuzzy knowledge based reliability prediction framework, in which speciality rather than generality is emphasised, is proposed in the first part of the thesis. For this purpose, the various factors affecting device reliability are investigated and knowledge trees for predicting three reliability indices, i.e. failure rate, maintenance time and human error rate, are presented. Human experts' empirical and heuristic knowledge is represented by fuzzy linguistic rules, and the fuzzy compositional rule of inference is employed as the inference tool. Two approaches to system reliability evaluation are presented in the second part of this thesis. In the first approach, fuzzy arithmetic is used as the foundation for system reliability evaluation under the fuzzy environment. The objective is to extend the underlying fuzzy concepts into a strict mathematical framework in order to arrive at a decision on system adequacy based on imprecise and qualitative information. To achieve this, various reliability indices are modelled as Trapezoidal Fuzzy Numbers (TFN) and are processed by extended fuzzy arithmetic operators. In the second approach, the knowledge of system reliability evaluation is modelled in the form of fuzzy combination production rules and a device combination sequence control algorithm, and system reliability is evaluated using a fuzzy inference system. A comparison of the two approaches is carried out through case studies. As an application, power generating system reliability adequacy is studied. Under the assumption that both unit reliability data and load data are subjectively estimated, these fuzzy data are modelled as triangular fuzzy numbers, and a fuzzy capacity outage model and a fuzzy load model are developed by using fuzzy arithmetic operations. Power generating system adequacy is evaluated by convolving the fuzzy capacity outage model with the fuzzy load model. A fuzzy risk index named "Possibility Of Load Loss" (POLL) is defined based on the concept of fuzzy containment. The proposed new index is tested on the IEEE Reliability Test System (RTS) and satisfactory results are obtained. Finally, the implementation issues of a Fuzzy Rule Based Expert System Shell (FRBESS) are reported, and the application of FRBESS to device reliability prediction and system reliability evaluation is discussed.
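As a hedged illustration of the fuzzy arithmetic mentioned in the first approach (a toy sketch, not the thesis's implementation), a trapezoidal fuzzy number can be represented by its four defining points (a, b, c, d); addition of TFNs is exact point-wise under the extension principle, while the product of non-negative TFNs is only approximated point-wise here. The example reliability values are placeholders.

```python
# Toy trapezoidal fuzzy number (TFN) arithmetic, as often used in fuzzy
# reliability work: a TFN is (a, b, c, d) with support [a, d] and core [b, c].
# Addition is exact under the extension principle; the product of
# non-negative TFNs is only approximated here by multiplying the four points.
from dataclasses import dataclass

@dataclass
class TFN:
    a: float
    b: float
    c: float
    d: float

    def __add__(self, other):
        return TFN(self.a + other.a, self.b + other.b,
                   self.c + other.c, self.d + other.d)

    def __mul__(self, other):  # assumes both TFNs are non-negative
        return TFN(self.a * other.a, self.b * other.b,
                   self.c * other.c, self.d * other.d)

# Example with placeholder values: a subjectively estimated failure rate
# (per year) and repair time (hours) combine into a fuzzy annual downtime.
failure_rate = TFN(0.10, 0.15, 0.20, 0.30)
repair_time = TFN(4.0, 6.0, 8.0, 12.0)
print(failure_rate * repair_time)
```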
85

Antenna study and design for ultra wideband communication applications

Liang, Jianxin January 2006 (has links)
Since the release by the Federal Communications Commission (FCC) of a bandwidth of 7.5 GHz (from 3.1 GHz to 10.6 GHz) for ultra wideband (UWB) wireless communications, UWB has been rapidly advancing as a high data rate wireless communication technology. As is the case in conventional wireless communication systems, the antenna also plays a crucial role in UWB systems. However, there are more challenges in designing a UWB antenna than a narrowband one. A suitable UWB antenna should be capable of operating over the ultra wide bandwidth allocated by the FCC. At the same time, satisfactory radiation properties over the entire frequency range are also necessary. Another primary requirement of the UWB antenna is good time domain performance, i.e. a good impulse response with minimal distortion. This thesis focuses on UWB antenna design and analysis. Studies have been undertaken covering the areas of UWB fundamentals and antenna theory. Extensive investigations were also carried out on two different types of UWB antennas. The first type of antenna studied in this thesis is the circular disc monopole antenna. The vertical disc monopole originates from the conventional straight wire monopole by replacing the wire element with a disc plate to enhance the operating bandwidth substantially. Based on the understanding of the vertical disc monopole, two more compact versions featuring a low profile and compatibility with printed circuit boards are proposed and studied. Both of them are printed circular disc monopoles, one fed by a micro-strip line and the other by a co-planar waveguide (CPW). The second type of UWB antenna is the elliptical/circular slot antenna, which can also be fed by either a micro-strip line or a CPW. The performances and characteristics of the UWB disc monopole and elliptical/circular slot antennas are investigated in both the frequency domain and the time domain. The design parameters for achieving optimal operation of the antennas are also analyzed extensively in order to understand the antenna operation. It has been demonstrated numerically and experimentally that both types of antennas are suitable for UWB applications.
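As a small numeric aside (my own illustration, not from the thesis), the FCC band quoted above corresponds to a very large fractional bandwidth, which is what makes the design problem hard; the snippet simply evaluates the standard definition.

```python
# Fractional bandwidth of the FCC-allocated UWB band (3.1-10.6 GHz).
# FCC UWB definition: fractional bandwidth > 0.20 or bandwidth > 500 MHz.
f_low, f_high = 3.1e9, 10.6e9
centre = (f_low + f_high) / 2
fractional_bw = (f_high - f_low) / centre
print(f"bandwidth = {(f_high - f_low) / 1e9:.1f} GHz, "
      f"fractional bandwidth = {fractional_bw:.1%}")   # ~109%
```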
86

Improving relay based cellular networks performance in highly user congested and emergency situations

Mei, Haibo January 2012 (has links)
Relay based cellular networks (RBCNs) are technologies that incorporate multi-hop communication into traditional cellular networks. An RBCN can potentially support higher data rates, more stable radio coverage and more dynamic services. In reality, RBCNs still suffer from performance degradation in terms of high user congestion, base station failure and overloading in emergency situations. The focus of this thesis is to explore the potential to improve the performance of IEEE 802.16j supported RBCNs in user congestion and emergency situations, using adjustments to the RF layer (by antenna adjustments or extensions using multi-hop) and cooperative adjustment algorithms, e.g. based on controlling frequency allocation centrally and using distributed approaches. The first part of this thesis designs and validates network reconfiguration algorithms for RBCNs, including a cooperative antenna power control algorithm and a heuristic antenna tilting algorithm. The second part of this thesis investigates centralized and distributed dynamic frequency allocation for higher RBCN frequency efficiency, network resilience and computational simplicity. It is demonstrated that these benefits mitigate user congestion and base station failure problems significantly. Additionally, interweaving coordinated dynamic frequency allocation and antenna tilting is investigated in order to obtain the benefits of both actions. The third part of this thesis incorporates Delay Tolerant Networking (DTN) technology into RBCNs to let users self-organize to connect to functional base stations through multi-hop paths supported by other users. Through the use of DTN, RBCN coverage and performance are improved. This thesis explores the augmentation of DTN routing protocols to let more uncovered users connect to base stations and improve network load balancing.
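To make the centralized dynamic frequency allocation idea concrete, here is a deliberately simplified sketch (my own illustration, not the thesis's algorithm): cells and relay stations are nodes of an interference graph, and channels are assigned greedily so that neighbours never share a channel while one remains available. The node names and channel list are made up for the example.

```python
# Simplified illustration of centralized frequency allocation as greedy
# graph colouring: nodes are cells/relay stations, edges mark pairs that
# would interfere if given the same channel. Not the algorithm of the thesis.
def greedy_frequency_allocation(interference_graph, channels):
    """interference_graph: dict node -> set of neighbouring (interfering) nodes."""
    assignment = {}
    # Visit the most constrained (highest-degree) nodes first.
    for node in sorted(interference_graph, key=lambda n: -len(interference_graph[n])):
        used = {assignment[nb] for nb in interference_graph[node] if nb in assignment}
        free = [ch for ch in channels if ch not in used]
        # Fall back to the least-used channel if the palette is exhausted.
        assignment[node] = free[0] if free else min(
            channels, key=lambda ch: list(assignment.values()).count(ch))
    return assignment

# Hypothetical tiny topology: one base station with two relays, plus a neighbour.
cells = {"BS1": {"RS1", "RS2"}, "RS1": {"BS1", "RS2"},
         "RS2": {"BS1", "RS1", "BS2"}, "BS2": {"RS2"}}
print(greedy_frequency_allocation(cells, channels=[1, 2, 3]))
```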
87

Design and applications of optical transformation devices

Bao, Di January 2012 (has links)
This thesis provides an insight into the design, physical realization and characterization of optical transformation devices. It begins with an introduction to the discrete coordinate transformation, with a design example of a carpet cloak. The realization and characterization of materials, namely a dielectric disk matrix and a polyurethane/BaTiO3 foam composite, for constructing transformation optics devices are studied both numerically and experimentally. Two different kinds of low loss and broadband all-dielectric realisations of optical transformation devices are designed and experimentally demonstrated. First, a cloaking structure made of a high-permittivity dielectric-loaded foam mixture is reported. A polyurethane foam, mixed with different ratios of barium titanate, is used to produce the required range of permittivities, and the invisibility cloak is demonstrated to work for all incident angles over a wide range of microwave frequencies. Then, based on a study of the properties of periodic dielectric particles, a cloak realized with periodic dielectric cylinders is proposed. The required dielectric map for the cloak is achieved by manipulating the dimensions, or spatial density, of the periodically distributed dielectric cylinders embedded in the host medium, whose permittivity is close to one. The scattering reduction effects are verified through both simulation and experimental results. The performances of the two different kinds of cloak are also compared quantitatively. Last, but not least, an extraordinary-transmission (ET) device made from commercially available ceramics and Teflon is designed, which exhibits broadband transmission through a sub-wavelength aperture. It is verified both numerically and experimentally that the device can provide transmission with a -3 dB bandwidth of more than 1 GHz, in a region which would otherwise be a stop band caused by the sub-wavelength aperture in an X-band waveguide.
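For a rough feel of how the barium titanate loading fraction maps to an effective permittivity, a classical mixing rule can be used; the Maxwell Garnett formula below is my illustrative stand-in (the thesis characterises its composite experimentally rather than via a mixing rule), and the permittivity values are assumed placeholders.

```python
# Maxwell Garnett effective-medium estimate for spherical inclusions of
# permittivity eps_i (volume fraction f) in a host of permittivity eps_h.
# Purely illustrative: the thesis measures its polyurethane/BaTiO3 foam
# directly, and the eps values here are placeholders, not measured data.
def maxwell_garnett(eps_h, eps_i, f):
    num = eps_i + 2 * eps_h + 2 * f * (eps_i - eps_h)
    den = eps_i + 2 * eps_h - f * (eps_i - eps_h)
    return eps_h * num / den

eps_foam = 1.1       # low-permittivity polyurethane foam host (assumed)
eps_batio3 = 150.0   # BaTiO3 inclusion permittivity at microwaves (assumed)
for f in (0.02, 0.05, 0.10, 0.20):
    print(f"loading {f:.0%}: eps_eff ~ {maxwell_garnett(eps_foam, eps_batio3, f):.2f}")
```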
88

Traffic control mechanisms with cell rate simulation for ATM networks

Freire, Fonseca Paula Christina January 1996 (has links)
No description available.
89

Computer models for musical instrument identification

Chetry, Nicolas D. January 2006 (has links)
A particular aspect of the perception of sound is concerned with what is commonly termed texture or timbre. From a perceptual perspective, timbre is what allows us to distinguish sounds that have similar pitch and loudness. Indeed, most people are able to discern a piano tone from a violin tone, or to distinguish different voices or singers. This thesis deals with timbre modelling. Specifically, the formant theory of timbre is the main theme throughout. This theory states that acoustic musical instrument sounds can be characterised by their formant structures. Following this principle, the central point of our approach is to propose a computer implementation for building musical instrument identification and classification systems. Although the main thrust of this thesis is to propose a coherent and unified approach to the musical instrument identification problem, it is oriented towards the development of algorithms that can be used in Music Information Retrieval (MIR) frameworks. Drawing on research in speech processing, a complete supervised system taking into account both physical and perceptual aspects of timbre is described. The approach is composed of three distinct processing layers. Parametric models that allow us to represent signals through mid-level physical and perceptual representations are considered first. Next, the use of Line Spectrum Frequencies as spectral envelope and formant descriptors is emphasised. Finally, the use of generative and discriminative techniques for building instrument and database models is investigated. Our system is evaluated under realistic recording conditions using databases of isolated notes and melodic phrases.
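As an illustration of the Line Spectrum Frequencies mentioned above (a sketch under my own assumptions about the test signal, frame length and LPC order, not the thesis's implementation), LSFs can be obtained from LPC coefficients by splitting the prediction polynomial into its symmetric and antisymmetric parts and taking the angles of their unit-circle roots.

```python
# Sketch: Line Spectrum Frequencies (LSFs) from LPC coefficients.
# A(z) is split into P(z) = A(z) + z^-(p+1) A(1/z) and
# Q(z) = A(z) - z^-(p+1) A(1/z); the LSFs are the angles of their roots
# on the unit circle. Frame construction and LPC order are arbitrary choices.
import numpy as np
import librosa

def lpc_to_lsf(a):
    """a: LPC coefficients [1, a1, ..., ap], e.g. as returned by librosa.lpc."""
    p_poly = np.append(a, 0.0) + np.append(0.0, a[::-1])   # symmetric part
    q_poly = np.append(a, 0.0) - np.append(0.0, a[::-1])   # antisymmetric part
    angles = np.concatenate([np.angle(np.roots(p_poly)),
                             np.angle(np.roots(q_poly))])
    # One angle per conjugate pair; the trivial roots at 0 and pi are dropped.
    return np.sort(angles[(angles > 1e-6) & (angles < np.pi - 1e-6)])

# Toy frame: two harmonically related tones plus a little noise stand in for
# an instrument note (a real system would frame an audio recording instead).
sr = 16000
t = np.arange(1024) / sr
rng = np.random.default_rng(0)
frame = (np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
         + 0.01 * rng.standard_normal(t.size)) * np.hanning(t.size)

lsf = lpc_to_lsf(librosa.lpc(frame, order=12))
print(np.round(lsf * sr / (2 * np.pi)))   # LSFs mapped to Hz
```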
90

Space-variant picture coding

Popkin, Timothy John January 2010 (has links)
Space-variant picture coding techniques exploit the strong spatial non-uniformity of the human visual system in order to increase coding efficiency in terms of perceived quality per bit. This thesis extends space-variant coding research in two directions. The first of these directions is in foveated coding. Past foveated coding research has been dominated by the single-viewer, gaze-contingent scenario. However, for research into the multi-viewer and probability-based scenarios, this thesis presents a missing piece: an algorithm for computing an additive multi-viewer sensitivity function based on an established eye resolution model, and, from this, a blur map that is optimal in the sense of discarding frequencies in least-noticeable-first order. Furthermore, for the application of a blur map, a novel algorithm is presented for the efficient computation of high-accuracy smoothly space-variant Gaussian blurring, using a specialised filter bank which approximates perfect space-variant Gaussian blurring to arbitrarily high accuracy and at greatly reduced cost compared to the brute force approach of employing a separate low-pass filter at each image location. The second direction is that of artificially increasing the depth-of-field of an image, an idea borrowed from photography with the advantage of allowing an image to be reduced in bitrate while retaining or increasing overall aesthetic quality. Two synthetic depth-of-field algorithms are presented herein, with the desirable properties of aiming to mimic occlusion effects as occur in natural blurring, and of handling any number of blurring and occlusion levels with the same level of computational complexity. The merits of this coding approach have been investigated by subjective experiments to compare it with single-viewer foveated image coding. The results found the depth-based preblurring to generally be significantly preferable to the same level of foveation blurring.
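To give a concrete (and much cruder) flavour of space-variant blurring than the filter-bank method described above, the sketch below blurs an image at a few fixed scales and, per pixel, picks the level indicated by a blur map that grows with eccentricity from a fixation point; all parameter values and the blur-map shape are illustrative assumptions, not the thesis's model.

```python
# Crude space-variant blur: precompute a few Gaussian-blurred copies and pick,
# per pixel, the level given by a blur map derived from distance to a fixation
# point. Only a flavour of the (far more accurate and cheaper) filter-bank
# approach in the abstract; parameters are arbitrary example values.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveated_blur(image, fixation, sigmas=(0.0, 1.0, 2.0, 4.0, 8.0)):
    """image: 2-D grayscale array; fixation: (row, col) of the gaze point."""
    rows, cols = np.indices(image.shape)
    ecc = np.hypot(rows - fixation[0], cols - fixation[1])
    ecc /= ecc.max()                               # normalised eccentricity in [0, 1]
    level = np.rint(ecc * (len(sigmas) - 1)).astype(int)
    blurred = np.stack([image.astype(float) if s == 0 else
                        gaussian_filter(image.astype(float), s) for s in sigmas])
    return np.take_along_axis(blurred, level[None], axis=0)[0]

img = np.random.rand(128, 128)                     # stand-in for a real frame
out = foveated_blur(img, fixation=(64, 64))
print(out.shape)
```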
