  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

A novel detector micro-module for computed tomography

Chen, W. (Wu) 05 October 2004 (has links)
Abstract To realize faster and more precise treatment of patients, CT technology urgently demands larger detector arrays that cover a larger section of the body during one X-ray imaging scan. A novel detector micro-module is developed in this thesis to meet this demand. In the novel detector micro-module, photocurrent signals are read out from the bottom side of the photodiode array chip. By avoiding the use of the top surface of the chip for routeing, as is the case in conventional CT modules, rectangular detector building blocks containing a certain number of detector elements can be produced. By tiling such building blocks in both the x- and y-directions in a plane, detector arrays with any number of detector elements (in multiples of the number in a single building block) can be built. This is not achievable with the conventional method. The novel detector micro-module developed in this thesis consists of an array of 16×16 active elements, and the size of the array is 21×21 mm². The array of detector elements is soldered to a multilayer LTCC (low-temperature co-fired ceramics) substrate via a BGA (ball grid array), with each element soldered onto one solder sphere from which its photocurrent signal is read out. In this thesis, the working principle and the evolution of CT detector modules are reviewed and the necessity of developing the novel detector modules is justified. The concept and the structure of the novel detector micro-module are presented. Thermo-mechanical stress modeling and simulation of the structure are performed. The design and the process technology of the photodiode array for the novel detector micro-modules are discussed. The electronic characteristics of the novel detector micro-modules and the related front-end electronics are analyzed theoretically. The LTCC multilayer substrate and the assembly process of the novel detector micro-module are also developed.
The basic detector characteristics and light-response measurement results of the novel micro-module are presented and discussed. By improving the photodiode silicon process technology, a dark-current density as low as 33 pA/cm² is achieved. Excellent mechanical accuracy is achieved with the LTCC substrate: the dimensional tolerance is +/-10 μm and the flatness is less than 50 μm over a distance of 30.5 mm. A 64-slice detector module is produced successfully by tiling several novel micro-modules. The novel detector micro-modules are superior to conventional CT modules in many respects while also being tileable. Their light-sensitivity curve is smoother, and they exhibit extremely low signal cross-talk. They have nearly zero wiring capacitance, compared to up to 20 pF in commercial CT detector modules, and almost zero wiring resistance, compared to tens to more than one hundred ohms in present products. This result will have a significant impact on CT technology and the CT industry, because the detector will no longer be the limiting factor in CT system performance.
552

Array-CGH bei Kindern mit Entwicklungsstörung oder geistiger Behinderung: Bei welcher Konstellation finden sich gehäuft klinisch relevante Chromosomenaberrationen? / Array CGH in patients with developmental or intellectual disability: are there phenotypic clues to pathogenic copy number variations?

Klein, Nina 25 January 2017 (has links)
No description available.
553

Advanced radio interferometric simulation and data reduction techniques

Makhathini, Sphesihle January 2018 (has links)
This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking at the Distant Universe with the MeerKAT Array), which aims to achieve its survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline that quantifies the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam, and consequently a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147.
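The quoted dynamic range follows directly from the peak flux density and the image noise level. As a quick sanity check (both values taken from the abstract above):

```python
# Dynamic range = peak flux density / 1-sigma image noise.
peak_jy = 22.58        # Jy/beam, flux density of 3C147 (from the abstract)
noise_jy = 2.87e-6     # Jy/beam, 1-sigma noise (2.87 uJy/beam)

dynamic_range = peak_jy / noise_jy
print(f"{dynamic_range:.2e}")   # 7.87e+06, i.e. the quoted ~8x10^6:1
```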
554

Normalization and statistical methods for cross-platform expression array analysis

Mapiye, Darlington S January 2012 (has links)
>Magister Scientiae - MSc / A large volume of gene expression data exists in public repositories such as the NCBI's Gene Expression Omnibus (GEO) and the EBI's ArrayExpress, and there is a significant opportunity to re-use these data in various combinations for novel in-silico analyses that would otherwise be too costly to perform, or for which equivalent numbers of samples would be difficult to collect. For example, combining and re-analysing large numbers of data sets from the same cancer type would increase statistical power while weakening the effects of study-specific variability, resulting in more reliable gene expression signatures. Similarly, as the number of normal control samples associated with various cancer datasets is often limiting, datasets can be combined to establish a reliable baseline for accurate differential expression analysis. However, combining different microarray studies is hampered by the fact that different studies use different analysis techniques, microarray platforms and experimental protocols. We have developed and optimised a method which transforms gene expression measurements from continuous to discrete data points by grouping similarly expressed genes into quantiles on a per-sample basis. Each probe on each chip is first cross-mapped to the gene it represents, enabling us to integrate experiments across different platforms based on the genes they have in common. We optimised the quantile discretization method on previously published prostate cancer datasets produced on two different array technologies and then applied it to a larger breast cancer dataset of 411 samples from 8 microarray platforms. Statistical analysis of the breast cancer datasets identified 1371 differentially expressed genes.
Cluster, gene set enrichment and pathway analysis identified functional groups that were previously described in breast cancer, and we also identified a novel module of genes encoding ribosomal proteins that has not been previously reported, but whose overall functions have been implicated in cancer development and progression. The former indicates that our integration method does not destroy the statistical signal in the original data, while the latter is strong evidence that the increased sample size increases the chances of finding novel gene expression signatures. Such signatures are also robust to inter-population variation and show promise for translational applications such as tumour grading, disease subtype classification, informing treatment selection and molecular prognostics.
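The per-sample quantile discretization described in this abstract can be sketched as follows. This is an illustrative reconstruction rather than the authors' pipeline; the function name, the number of bins, and the toy matrices are assumptions.

```python
import numpy as np

def quantile_discretize(expr, n_bins=10):
    """Discretize a (genes x samples) expression matrix per sample:
    within each sample (column), genes are ranked and assigned to one
    of n_bins quantile bins, replacing continuous, platform-dependent
    intensities with ordinal values comparable across platforms."""
    expr = np.asarray(expr, dtype=float)
    n_genes, n_samples = expr.shape
    binned = np.empty(expr.shape, dtype=int)
    for j in range(n_samples):
        ranks = expr[:, j].argsort().argsort()     # 0 .. n_genes-1
        binned[:, j] = ranks * n_bins // n_genes   # bin 0 .. n_bins-1
    return binned

# Two "platforms" measuring the same genes on different scales map to
# the same discrete profile, which is what makes integration possible:
a = np.array([[1.0], [5.0], [3.0], [9.0]])
b = np.array([[10.0], [50.0], [30.0], [90.0]])   # same order, other scale
print(quantile_discretize(a, 2).ravel())         # [0 1 0 1]
print(quantile_discretize(b, 2).ravel())         # [0 1 0 1]
```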
555

Discrete pulse transform of images and applications

Fabris-Rotelli, Inger Nicolette 02 May 2013 (has links)
The LULU operators Ln and Un operate on neighbourhoods of size n. The Discrete Pulse Transform (DPT) of images is obtained via recursive peeling of so-called local maximum and minimum sets with the LULU operators as n increases from 1 to the maximum number of elements in the array. The DPT provides a new nonlinear decomposition of a multidimensional array. This thesis investigates the theoretical and practical soundness of the decomposition for image analysis. Properties justifying the DPT theoretically are provided, namely consistency of the decomposition (a pseudo-linear property) and its setting as a nonlinear scale-space, the LULU scale-space. A formal axiomatic theory for scale-space operators and scale-spaces is also presented. The practical soundness of the DPT is investigated in image sharpening, best approximation of an image, noise removal in signals and images, feature-point detection (with ideas for extending the work to object tracking in videos), and image segmentation. LULU theory on multidimensional arrays and the DPT is now at a point where concrete signal, image and video analysis algorithms can be developed for a wide variety of applications. / Thesis (PhD)--University of Pretoria, 2013. / Mathematics and Applied Mathematics / unrestricted
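A minimal one-dimensional sketch of the LULU operators and the recursive pulse peeling may help fix ideas. The thesis works on multidimensional arrays with connected neighbourhoods; the window-based 1D form of Ln/Un below is a simplification, and the composition Un∘Ln as the peeling step is an assumption of this sketch.

```python
import numpy as np

def L(x, n):
    """LULU lower operator: removes upward pulses of width <= n.
    (L_n x)_i = max over windows of length n+1 containing i of the
    window minimum."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    out = np.empty(N)
    for i in range(N):
        best = -np.inf
        for s in range(max(0, i - n), min(i, N - 1 - n) + 1):
            best = max(best, x[s:s + n + 1].min())
        out[i] = best
    return out

def U(x, n):
    """LULU upper operator (dual of L): removes downward pulses."""
    return -L(-np.asarray(x, dtype=float), n)

def dpt(x):
    """Peel pulses scale by scale; pulses[n] holds what was removed
    at scale n, and the peeled pulses plus the remainder reconstruct
    the input (the consistency property mentioned above)."""
    x = np.asarray(x, dtype=float)
    pulses = {}
    for n in range(1, len(x)):
        smoothed = U(L(x, n), n)
        pulses[n] = x - smoothed
        x = smoothed
    return pulses, x

signal = [0, 0, 5, 0, 2, 2, 0, 0]
pulses, rest = dpt(signal)
# The width-1 spike of height 5 is peeled at scale 1, the width-2
# plateau of height 2 at scale 2.
```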
556

Null Synthesis and Implementation of Cylindrical Microstrip Patch Arrays

Niemand, Philip 16 May 2005 (has links)
As wireless communications networks expand, the number of both unwanted directional interferences and strong nearby sources increases, which degrades system performance. The signal-to-interference ratio (SIR) can be improved by placing multiple nulls in the directions of the interferences while maintaining omnidirectional coverage in the direction of the network users. For the communication system considered, the interferences are static and their spatial positions are known, so a non-adaptive antenna array suffices to provide spatial filtering in a static wireless environment. Omnidirectional arrays, such as cylindrical arrays, are the most suitable for providing the omnidirectional coverage and are capable of suppressing interferences when nulls are inserted in the radiation pattern. In this thesis, a cylindrical microstrip patch antenna array is investigated as an antenna that provides an omnidirectional radiation pattern with nulls at specified angular locations to suppress interference from directional sources. Three null synthesis methods are described and used to produce the omnidirectional array pattern with nulls, using the radiation characteristics of the cylindrical microstrip patch antenna elements. The orthogonal projection method is extended to incorporate the directive radiation patterns of the cylindrical microstrip patch elements. Using this method, an optimal pattern that minimises the squared pattern error with respect to the ideal pattern is obtained. Instead of only minimising the array pattern error, a multi-objective optimisation approach is also followed: the objective weighting method is applied in null pattern synthesis to improve the amplitude pattern characteristics of the cylindrical patch arrays. As a third null synthesis technique, a constraint optimisation method is applied to obtain a constrained pattern with the desired amplitude pattern characteristics.
The influence of the array attributes on the characteristics of the amplitude patterns obtained from the null synthesis methods is also studied. In addition, the implementation of the cylindrical microstrip patch array is investigated. The influence of mutual coupling on the characteristics of the null patterns of the cylindrical patch arrays is investigated using simulations and measurements. A mutual coupling compensation technique is used to provide matched and equal driving impedances for all the patch antenna elements, given a required set of excitations. Test cases in which this technique is used are discussed, and the consequent improvements in the bandwidth and reflection coefficient of a linear patch array are shown. The characteristics of the resulting null pattern for the cylindrical microstrip patch array are also improved using the compensation technique. / Thesis (PhD (Electronic Engineering))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
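The orthogonal projection idea behind the first null synthesis method can be illustrated for an idealised circular array of isotropic elements. The thesis extends this to the directive patterns of cylindrical microstrip patches; the element count, radius, and null directions below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

N = 16                       # elements around the cylinder (assumed)
radius = 0.7                 # array radius in wavelengths (assumed)
phi_el = 2 * np.pi * np.arange(N) / N

def steering(phi):
    """Response of the circular array of isotropic elements toward
    azimuth phi (phase-mode model)."""
    return np.exp(1j * 2 * np.pi * radius * np.cos(phi - phi_el))

# Desired excitation: uniform (roughly omnidirectional).
w_des = np.ones(N, dtype=complex)
null_dirs = np.deg2rad([60.0, 135.0])          # interference directions
C = np.stack([steering(p) for p in null_dirs], axis=1)   # N x K

# Project the desired excitation onto the null space of the
# constraints: w = (I - C (C^H C)^-1 C^H) w_des.  The result is the
# excitation closest (in least squares) to w_des whose pattern is
# exactly zero in the null directions.
P = np.eye(N) - C @ np.linalg.solve(C.conj().T @ C, C.conj().T)
w = P @ w_des

for p in null_dirs:
    print(abs(w.conj() @ steering(p)))   # effectively zero at each null
```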
557

Pixel-parallel image processing techniques and algorithms

Wang, Bin January 2014 (has links)
The motivation of the research presented in this thesis is to investigate image processing algorithms that utilise various SIMD parallel devices, especially massively parallel Cellular Processor Arrays (CPAs), to accelerate their processing speed. Various SIMD processors with different architectures are reviewed, and their features are analysed. The different types of parallelism contained in image processing tasks are also analysed, and methodologies to exploit data-level parallelism are discussed. The efficiency of the pixel-per-processor architecture in computer vision scenarios is discussed, as well as its limitations. To address the problem that CPA array dimensions are usually smaller than the resolution of the images to be processed, a "coarse grain mapping method" is proposed. It gives CPAs the ability to process images with higher resolution than the arrays themselves by allowing each processing element to handle multiple pixels. It is completely software based, easy to implement, and easy to program. To demonstrate the efficiency of the pixel-level parallel approach, two image processing algorithms specially designed for pixel-per-processor arrays are proposed: a parallel skeletonization algorithm based on two-layer trigger-wave propagation, and a parallel background detection algorithm. Implementations of the proposed algorithms on different platforms (i.e. CPU, GPU and CPA) are presented and evaluated. The evaluation results indicate that the proposed algorithms have advantages both in terms of processing speed and result quality. This thesis concludes that the pixel-per-processor architecture can be used in image processing (or computer vision) algorithms that emphasise analysing pixel-level information, to significantly boost the processing speed of these algorithms.
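The coarse grain mapping idea, in which each processing element holds a block of pixels rather than a single pixel, can be sketched in NumPy. The function names, array sizes and block layout below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def to_coarse_grid(img, P):
    """Map an (H x W) image onto a P x P processor array by giving
    each processing element a k x k block of pixels (k = H // P).
    Returns shape (P, P, k, k): entry [i, j] is PE (i, j)'s block."""
    H, W = img.shape
    k = H // P
    assert H == P * k and W == P * k, "image must tile the PE array"
    return img.reshape(P, k, P, k).transpose(0, 2, 1, 3)

def from_coarse_grid(grid):
    """Inverse mapping: reassemble the full-resolution image."""
    P, _, k, _ = grid.shape
    return grid.transpose(0, 2, 1, 3).reshape(P * k, P * k)

img = np.arange(16).reshape(4, 4)   # toy 4x4 "image"
grid = to_coarse_grid(img, 2)       # 2x2 PE array, 2x2 block per PE
assert grid[0, 0].tolist() == [[0, 1], [4, 5]]   # PE (0,0)'s block
assert (from_coarse_grid(grid) == img).all()     # lossless round trip
```

A per-PE operation (e.g. a local filter step) then iterates over the k×k block sequentially on each PE, which is the software-based trade-off the method accepts in exchange for handling images larger than the array.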
558

Vertical axis wind turbine acoustics

Pearson, Charlie January 2014 (has links)
Increasing awareness of the issues of climate change and sustainable energy use has led to growing levels of interest in small-scale, decentralised power generation. Small-scale wind power has seen significant growth in the last ten years, partly due to the political support for renewable energy and the introduction of Feed In Tariffs, which pay home owners for generating their own electricity. Due to their ability to respond quickly to changing wind conditions, small-scale vertical axis wind turbines (VAWTs) have been proposed as an efficient solution for deployment in built up areas, where the wind is more gusty in nature. If VAWTs are erected in built up areas they will be inherently close to people; consequently, public acceptance of the turbines is essential. One common obstacle to the installation of wind turbines is noise annoyance, so it is important to make the VAWT rotors as quiet as possible. To date, very little work has been undertaken to investigate the sources of noise on VAWTs. The primary aim of this study was therefore to gather experimental data of the noise from various VAWT rotor configurations, for a range of operating conditions. Experimental measurements were carried out using the phased acoustic array in the closed section Markham wind tunnel at Cambridge University Engineering Department. Beamforming was used in conjunction with analysis of the measured sound spectra in order to locate and identify the noise sources on the VAWT rotors. Initial comparisons of the spectra from the model rotor and a full-scale rotor showed good qualitative agreement, suggesting that the conclusions from the experiments would be transferable to real VAWT rotors. One clear feature observed in both sets of spectra was a broadband peak around 1-2 kHz, which spectral scaling methods demonstrated was due to laminar boundary layer tonal noise. 
Application of boundary layer trips to the inner surfaces of the blades on the model rotor was found to eliminate this noise source, and reduced the amplitude of the spectra by up to 10 dB in the region of the broadband peak. This method could easily be applied to a full-scale rotor and should result in measurable noise reductions. At low tip speed ratios (TSR) the blades on a VAWT experience dynamic stall and it was found that this led to significant noise radiation from the upstream half of the rotor. As the TSR was increased the dominant source was seen to move to the downstream half of the rotor; this noise was thought to be due to the interaction of the blades in the downstream half of the rotor with the wake from the blades in the upstream half. It was suggested that blade wake interaction is the dominant noise source in the typical range of peak performance for the full-scale QR5 rotor. Different solidity rotors were investigated by using 2-, 3- and 4-bladed rotors and it was found that increasing the solidity had a similar effect to increasing the TSR. This is due to the fact that the induction factor, which governs the deflection of the flow through the rotor, is a function of both the rotor solidity and the TSR. With a large body of experimental data for validation, it was possible to investigate computational noise prediction methods. A harmonic model was developed that aimed to predict the sound radiated by periodic fluctuations in the blade loads. This model was shown to agree with similar models derived by other authors, but to make accurate predictions very high resolution input data was required. Since such high resolution blade loading data is unlikely to be available, and due to the dominance of stochastic sources, the harmonic model was not an especially useful predictive tool. However, it was used to investigate the importance of the near-field components of the sound radiated by the wind tunnel model to the acoustic array. 
It was shown that the near-field terms were significant over a wide range of frequencies, and the total spectrum was always greater than that of the far-field component. This implied that the noise levels measured by the acoustic array represented an upper bound on the sound radiated to the far-field, and hence that the latter would also be dominated by stochastic components. An alternative application of the harmonic model, which attempted to determine the blade loading harmonics from the harmonics in the sound field, was proposed. This inversion method utilised a novel convex optimisation technique that was found to generate good solutions in the simulated test cases, even in the presence of significant random noise. The method was found to be insensitive at low frequencies, which made it ineffective for inverting the real microphone data, although this was shown to be at least partly due to the limitations imposed by the array size. In addition to the harmonic models, an empirical noise prediction method using the spectral scaling laws derived by Brooks et al. (1989) was trialled, and was found to be capable of making predictions that were in agreement with the measured data. The model was shown to be sensitive to the exact choice of turbulence parameters used and was also found to require good quality aerodynamic data to make accurate noise predictions. If such data were available, however, it is expected that this empirical model would be able to make useful predictions of the noise radiated by a VAWT rotor.
559

Vision-Based Localization Using Reliable Fiducial Markers

Stathakis, Alexandros January 2012 (has links)
Vision-based positioning systems are founded primarily on a simple image processing technique: identifying visually significant key-points in an image and relating them to a known coordinate system in the scene. Fiducial markers are used as a means of providing the scene with a number of specific key-points, or features, such that computer vision algorithms can quickly identify them within a captured image. This thesis proposes a reliable vision-based positioning system which utilizes a unique pseudo-random fiducial marker. The marker itself offers 49 distinct feature points to be used in position estimation. Detection of the designed marker proceeds through an integrated process of adaptive thresholding, k-means clustering, color classification, and data verification. The ultimate goal of such a system is indoor localization on low-cost autonomous mobile platforms.
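The first stage of the detection pipeline named above, adaptive thresholding, can be sketched as a comparison of each pixel against its local mean. The window size, offset, and integral-image formulation below are assumptions for illustration, not the thesis implementation; the real pipeline continues with k-means clustering, color classification, and data verification.

```python
import numpy as np

def adaptive_threshold(gray, win=15, offset=5):
    """Binarize: a pixel is foreground if it is darker than its local
    mean minus an offset. The local mean over a win x win window is
    computed with an integral image (summed-area table)."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    # Integral image with a leading zero row/column.
    ii = np.cumsum(np.cumsum(
        np.pad(padded, ((1, 0), (1, 0)), mode="constant"), axis=0), axis=1)
    H, W = gray.shape
    window_sum = (ii[win:win + H, win:win + W] - ii[:H, win:win + W]
                  - ii[win:win + H, :W] + ii[:H, :W])
    local_mean = window_sum / (win * win)
    return gray < local_mean - offset

# A dark square (e.g. one marker cell) on a bright background:
gray = np.full((30, 30), 200)
gray[10:20, 10:20] = 50
mask = adaptive_threshold(gray)   # True inside the dark square
```

Comparing against a local rather than a global mean is what makes this stage robust to the uneven indoor lighting a mobile platform encounters.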
560

Graph-dependent Covering Arrays and LYM Inequalities

Maltais, Elizabeth Jane January 2016 (has links)
The problems we study in this thesis are all related to covering arrays. Covering arrays are combinatorial designs, widely used as templates for efficient interaction-testing suites. They have connections to many areas including extremal set theory, design theory, and graph theory. We define and study several generalizations of covering arrays, and we develop a method which produces an infinite family of LYM inequalities for graph-intersecting collections. A common theme throughout is the dependence of these problems on graphs. Our main contribution is an extremal method yielding LYM inequalities for $H$-intersecting collections, for every undirected graph $H$. Briefly, an $H$-intersecting collection is a collection of packings (or partitions) of an $n$-set in which the classes of every two distinct packings in the collection intersect according to the edges of $H$. We define "$F$-following" collections which, by definition, satisfy a LYM-like inequality that depends on the arcs of a "follow" digraph $F$ and a permutation-counting technique. We fully characterize the correspondence between "$F$-following" and "$H$-intersecting" collections. This enables us to apply our inequalities to $H$-intersecting collections. For each graph $H$, the corresponding inequality inherently bounds the maximum number of columns in a covering array with alphabet graph $H$. We use this feature to derive bounds for covering arrays with the alphabet graphs $S_3$ (the star on three vertices) and $K_3$ with loops. The latter improves a known bound for classical covering arrays of strength two. We define covering arrays on column graphs and alphabet graphs, which generalize covering arrays on graphs. The column graph encodes which pairs of columns must be $H$-intersecting, where $H$ is a given alphabet graph. 
Optimizing covering arrays on column graphs and alphabet graphs is equivalent to a graph-homomorphism problem to a suitable family of targets which generalize qualitative independence graphs. When $H$ is the two-vertex tournament, we give constructions and bounds for covering arrays on directed column graphs. FOR arrays are the broadest generalization of covering arrays that we consider. We define FOR arrays to encompass testing applications where constraints must be considered, leading to forbidden, optional, and required interactions of any strength. We model these testing problems using a hypergraph. We investigate the existence of FOR arrays, the compatibility of their required interactions, critical systems, and binary relational systems that model the problem using homomorphisms.
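The classical strength-two coverage condition that these generalizations build on can be checked mechanically. The sketch below implements the graph-free definition (every pair of columns exhibits every ordered symbol pair), not the graph-dependent variants of the thesis; the small binary example is illustrative.

```python
from itertools import combinations

def is_covering_array(rows, v, t=2):
    """Check whether `rows` (a list of equal-length tuples over the
    alphabet {0, ..., v-1}) is a covering array of strength t: every
    choice of t columns must contain all v**t ordered symbol tuples."""
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(r[c] for c in cols) for r in rows}
        if len(seen) < v ** t:        # some t-tuple of symbols missing
            return False
    return True

CA = [(0, 0, 0),
      (0, 1, 1),
      (1, 0, 1),
      (1, 1, 0)]
print(is_covering_array(CA, v=2))      # True: 4 rows cover k=3, v=2
print(is_covering_array(CA[:3], v=2))  # False: dropping a row loses (1,1)
```

Each row is a test case and each column a factor, which is why bounding the maximum number of columns for a given number of rows, as the LYM inequalities above do, translates directly into limits on interaction-testing suites.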
