171

EXPERIMENTAL RESULTS FOR MULTI-SYMBOL DETECTION OF PCM/FM

Geoghegan, Mark 10 1900 (has links)
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / It has been previously shown, through computer simulations, that a multiple symbol detector can provide substantial gains in detection efficiency (nearly 3 dB) over traditional PCM/FM detectors. This is accomplished by performing correlations over multiple symbol intervals to take advantage of the memory inherent in the continuous phase PCM/FM signal. This paper presents measured hardware results, from a prototype developed for the Advanced Range Telemetry (ARTM) Project, that substantiate the previously published performance and sensitivity predictions. Furthermore, this work confirms the feasibility of applying this technology to high-speed commercial and military telemetry applications.
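As a rough, hedged illustration of the multi-symbol idea this abstract describes (correlating the received signal over several symbol intervals to exploit the phase memory of PCM/FM), the Python sketch below decides each bit from the best-matching candidate phase trajectory over a short window. The rectangular frequency pulse, modulation index, and window length are illustrative assumptions, not the ARTM prototype's parameters.

```python
import numpy as np

def pcmfm_phase(bits, sps, h=0.7):
    # Cumulative phase trajectory for a bit pattern, assuming a rectangular
    # frequency pulse (CPFSK-style approximation of PCM/FM) and 'sps' samples
    # per symbol; h is the modulation index (0.7 is the conventional PCM/FM value).
    freq = np.repeat(2.0 * np.asarray(bits, dtype=float) - 1.0, sps)  # NRZ +/-1
    return np.pi * h * np.cumsum(freq) / sps

def multi_symbol_detect(rx, sps, n_sym=3, h=0.7):
    # Decide the middle bit of each n_sym-symbol window by correlating the
    # received complex baseband samples against every candidate trajectory.
    patterns = [[(k >> b) & 1 for b in range(n_sym)] for k in range(2 ** n_sym)]
    templates = [np.exp(1j * pcmfm_phase(p, sps, h)) for p in patterns]
    decisions = []
    for i in range(len(rx) // sps - (n_sym - 1)):
        window = rx[i * sps:(i + n_sym) * sps]
        # Non-coherent metric: magnitude of the correlation with each template.
        metrics = [abs(np.vdot(t, window)) for t in templates]
        decisions.append(patterns[int(np.argmax(metrics))][n_sym // 2])
    return decisions
```

Longer windows trade computation (2^N candidate trajectories) for detection gain, which is the trade-off a multi-symbol detector exploits.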
172

Structure in time-frequency binary masking

Kressner, Abigail A. 27 May 2016 (has links)
Understanding speech in noisy environments is a challenge for normal-hearing and impaired-hearing listeners alike. However, it has been shown that speech intelligibility can be improved in these situations using a strategy called the ideal binary mask. Because this approach requires knowledge of the speech and noise signals separately though, it is ill-suited for practical applications. To address this, many algorithms are being designed to approximate the ideal binary mask strategy. Inevitably though, these algorithms make errors, and the implications of these errors are not well-understood. The main contributions of this thesis are to introduce a new framework for investigating binary masking algorithms and to present listener studies that use this framework to illustrate how certain types of algorithm errors can affect speech recognition outcomes with both normal-hearing listeners and cochlear implant recipients.
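For readers unfamiliar with the strategy being approximated, the short sketch below computes and applies an ideal binary mask in the usual way: it keeps the time-frequency units whose local SNR exceeds a local criterion and discards the rest. The STFT parameters and the 0 dB criterion are illustrative assumptions, and the sketch presumes separate access to the clean speech and noise, which is exactly why the ideal mask is impractical outside the laboratory.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, lc_db=0.0, nperseg=256):
    # Mask is 1 where the local speech-to-noise ratio exceeds the local
    # criterion (lc_db), 0 elsewhere; requires the clean signals separately.
    _, _, S = stft(speech, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    snr_db = 10.0 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
    return (snr_db > lc_db).astype(float)

def apply_mask(mixture, mask, fs, nperseg=256):
    # Zero the discarded time-frequency units of the mixture and resynthesize.
    _, _, M = stft(mixture, fs, nperseg=nperseg)
    _, out = istft(M * mask, fs, nperseg=nperseg)
    return out
```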
173

Pt-TiO2 binary electrodes for alcohol oxidation in low temperature fuel cells

Hasa, Bjorn 11 October 2013 (has links)
In this study Pt-TiO2 binary electrodes were prepared by thermal decomposition of chloride precursors on Ti substrates, characterised by X-ray Diffraction (XRD), Scanning Electron Microscopy (SEM), X-ray Photoelectron Spectroscopy (XPS), electrochemical techniques and CO stripping, and used as anodes for alcohol oxidation. The minimization of the Pt loading without electrocatalytic activity losses was also explored. TiO2 was chosen due to its chemical stability, low cost and excellent properties as a substrate for Pt dispersion. It was found that TiO2 loading up to 50% results in an increase of the Electrochemically Active Surface (EAS). The EAS of Pt(50%)-TiO2(50%) was found to be almost one order of magnitude higher than that of pure Pt, while for Pt loadings lower than 30% the EAS was diminished. The above conclusion has been confirmed both by following the charge of the reduction peak of platinum oxide and by CO stripping experiments. All samples have been evaluated during the electrochemical oxidation of methanol and ethanol. In both cases the Pt(50%)-TiO2(50%) electrode exhibited better electrocatalytic activity than the pure Pt anode. The observed higher performance of the binary electrodes has been attributed to the enhanced Pt dispersion as well as to the formation of smaller Pt particles by the addition of TiO2.
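As a hedged note on how such EAS values are commonly obtained (the abstract does not give the thesis's exact conversion factors), the stripping or reduction charge is divided by a nominal monolayer charge density for polycrystalline Pt:

```latex
% Conventional charge-to-area conversion; 420 uC/cm^2 is the commonly assumed
% monolayer charge density for CO stripping (and for PtO reduction) on Pt,
% and is an illustrative assumption here.
\[
  \mathrm{EAS}\ [\mathrm{cm^2}] \;\approx\;
  \frac{Q_{\mathrm{CO\,stripping}}\ [\mu\mathrm{C}]}{420\ \mu\mathrm{C\,cm^{-2}}}
\]
```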
174

Parallelization of virtual machines for efficient multi-processor emulation

Chakravarthy, Ramapriyan Sreenath 09 November 2010 (has links)
Simulation is an essential tool for computer systems research. The speed of the simulator has a first-order effect on what experiments can be run. The recent shift in the industry to multi-core processors has put even more pressure on simulation performance, especially since most simulators are single-threaded. In this thesis, we present QEMU-MP, a parallelized version of a fast functional simulator called QEMU.
175

Space astrometry of unresolved binaries: From Hipparcos to Gaia

Pourbaix, Dimitri 13 September 2007 (has links)
Building upon its success with the Hipparcos space astrometry mission launched in 1989, the European Space Agency has agreed to fund the construction of its successor, Gaia, and its launch in 2011. Despite the similarities between the two missions, Gaia will be orders of magnitude more powerful, more sensitive, but also more complex in terms of data processing. Growing from 120,000 stars with Hipparcos to roughly 1.2 billion stars with Gaia does not simply mean pushing the computing resources to their limits (1 second of processing per star yields 38 years for the whole Gaia-sky). It also means facing situations that did not occur with Hipparcos either by luck or because those cases were carefully removed from the Hipparcos Input Catalogue. This manuscript illustrates how some chunks of the foreseen Gaia data reduction pipeline can be trained and assessed using the Hipparcos observations. This is especially true for unresolved binaries because they pop up so far down in the Gaia pipeline that, by the time they get there, there is essentially no difference between Hipparcos and Gaia data. Only the number of such binaries is different, going from two thousand to ten million. Although the computing time clearly becomes an issue, one cannot sacrifice the robustness and correctness of the reduction pipeline for the sake of speed. However, owing to the requirement that everything must be Gaia-based (no help from ground-based results), the very robustness of the reduction has to be assessed as well. For instance, the underlying assumptions of some statistical tests used to assess the quality of the fits used in the Hipparcos pipeline might no longer hold with Gaia. That may not affect the fit itself but rather the quality indicators usually accompanying those fits. For the final catalogue to be a success, these issues must be addressed as soon as possible.
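The processing-time remark in the abstract can be checked with a back-of-the-envelope calculation (the 1.2 billion figure is the approximate Gaia target count implied by the text):

```latex
% One second of processing per star over the full Gaia sky:
\[
  1.2\times10^{9}\ \mathrm{stars} \times 1\ \mathrm{s\,star^{-1}}
  \;=\; 1.2\times10^{9}\ \mathrm{s}
  \;\approx\; \frac{1.2\times10^{9}\ \mathrm{s}}{3.15\times10^{7}\ \mathrm{s\,yr^{-1}}}
  \;\approx\; 38\ \mathrm{yr}.
\]
```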
176

Statistical discrimination with disease categories subject to misclassification

Hilliam, Rachel M. January 2000 (has links)
No description available.
177

Memory Footprint Reduction of Operating System Kernels

He, Haifeng January 2009 (has links)
As the complexity of embedded systems grows, there is an increasing use of operating systems (OSes) in embedded devices, such as mobile phones, media players and other consumer electronics. Despite their convenience and flexibility, such operating systems can be overly general and contain features and code that are not needed in every application context, which incurs unnecessary performance overheads. In most embedded systems, resources, such as processing power, available memory, and power consumption, are strictly constrained. In particular, the amount of memory on embedded devices is often very limited. This, together with the popular usage of operating systems in embedded devices, makes it important to reduce the memory footprint of operating systems. This dissertation addresses this challenge and presents automated ways to reduce the memory footprint of OS kernels for embedded systems. First, we present kernel code compaction, an automated approach that reduces the code size of an OS kernel statically by removing unused functionality. OS kernel code tends to be different from ordinary application code, including the presence of a significant amount of hand-written assembly code, multiple entry points, implicit control flow paths involving interrupt handlers, and frequent indirect control flow via function pointers. We use a novel "approximated compilation" technique to apply source-level pointer analysis to hand-written assembly code. A prototype implementation of our idea on an Intel x86 platform and a minimally configured Linux kernel obtains a code size reduction of close to 24%. Even though code compaction can remove a portion of the entire OS kernel code, when exercised with typical embedded benchmarks, such as MiBench, most kernel code is executed infrequently if at all. Our second contribution is on-demand code loading, an automated approach that keeps the rarely used code on secondary storage and loads it into main memory only when it is needed. In order to minimize the overhead of code loading, a greedy node-coalescing algorithm is proposed to group closely related code together. The experimental results show that this approach can reduce memory requirements for the Linux kernel code by about 53% with little degradation in performance. Last, we describe dynamic data structure compression, an approach that reduces the runtime memory footprint of dynamic data structures in an OS kernel. A prototype implementation for the Linux kernel reduces the memory consumption of the slab allocators in Linux by 17.5% when running the MediaBench suite while incurring only minimal increases in execution time (1.9%).
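The abstract names a greedy node-coalescing step without spelling it out; the Python sketch below shows one plausible reading, in which code regions connected by the heaviest observed transition edges are merged as long as the group still fits in a single load unit (a hypothetical 4 KB page here). It is an assumption-laden illustration, not the dissertation's algorithm.

```python
def coalesce(sizes, edges, max_unit=4096):
    # Greedily merge the pair of code groups joined by the heaviest transition
    # edge, as long as the merged group still fits in one load unit.
    # 'sizes' maps node -> bytes; 'edges' maps (u, v) -> observed transition count.
    rep = {n: n for n in sizes}            # node -> its group's representative
    members = {n: [n] for n in sizes}      # representative -> group members
    size = dict(sizes)                     # representative -> total group size
    for (u, v), _count in sorted(edges.items(), key=lambda kv: -kv[1]):
        ru, rv = rep[u], rep[v]
        if ru == rv or size[ru] + size[rv] > max_unit:
            continue
        for n in members[rv]:              # fold rv's group into ru's
            rep[n] = ru
        members[ru] += members.pop(rv)
        size[ru] += size.pop(rv)
    return list(members.values())
```

Grouping hot callers and callees into the same load unit is what would keep an on-demand loader from repeatedly fetching code from secondary storage.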
178

Deobfuscation of Packed and Virtualization-Obfuscation Protected Binaries

Coogan, Kevin Patrick January 2011 (has links)
Code obfuscation techniques are increasingly being used in software for such reasons as protecting trade secret algorithms from competitors and deterring license tampering by those wishing to use the software for free. However, these techniques have also grown in popularity in less legitimate areas, such as protecting malware from detection and reverse engineering. This work examines two such techniques - packing and virtualization-obfuscation - and presents new behavioral approaches to analysis that may be relevant to security analysts whose job it is to defend against malicious code. These approaches are robust against variations in obfuscation algorithms, such as changing encryption keys or virtual instruction byte code. Packing refers to the process of encrypting or compressing an executable file. This process "scrambles" the bytes of the executable so that byte-signature matching algorithms commonly used by anti-virus programs are ineffective. Standard static analysis techniques are similarly ineffective since the actual byte code of the program is hidden until after the program is executed. Dynamic analysis approaches exist, but are vulnerable to dynamic defenses. We detail a static analysis technique that starts by identifying the code used to "unpack" the executable, then uses this unpacker to generate the unpacked code in a form suitable for static analysis. Results show we are able to correctly unpack several encrypted and compressed malware samples, while still handling several dynamic defenses. Virtualization-obfuscation is a technique that translates the original program into virtual instructions, then builds a customized virtual machine for these instructions. As with packing, the byte-signature of the original program is destroyed. Furthermore, static analysis of the obfuscated program reveals only the structure of the virtual machine, and dynamic analysis produces a dynamic trace where original program instructions are intermixed with, and often indistinguishable from, virtual machine instructions. We present a dynamic analysis approach whereby all instructions that affect the external behavior of the program are identified, thus building an approximation of the original program that is observationally equivalent. We achieve good results both at identifying instructions from the original program and at eliminating instructions known to be part of the virtual machine.
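As a hedged illustration of the behavioral idea in the second half of the abstract (keeping only the instructions on which externally visible effects depend), the sketch below performs a backward dependence walk over a recorded dynamic trace. The trace format and the restriction to straight-line dependence are simplifying assumptions, not the thesis's implementation.

```python
def relevant_instructions(trace):
    # Each trace entry is a dict with 'defs' and 'uses' (sets of register or
    # memory locations) and a 'syscall' flag marking externally visible effects.
    # Walking backwards, keep every instruction that defines a value those
    # effects transitively depend on (a simple dependence slice).
    live = set()        # locations whose most recent definition is still needed
    keep = []           # indices of instructions judged relevant
    for i in range(len(trace) - 1, -1, -1):
        ins = trace[i]
        if ins.get("syscall") or (ins["defs"] & live):
            keep.append(i)
            live -= ins["defs"]      # those definitions are now accounted for
            live |= ins["uses"]      # but their inputs become needed
    return list(reversed(keep))
```

Instructions that belong only to the virtual machine's dispatch loop tend not to feed any externally visible value, so a walk of this kind filters them out of the approximation.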
179

Observational properties of brown dwarfs : the low-mass end of the mass function

Cardoso, Catia Vanessa Varejao January 2012 (has links)
Brown dwarfs are objects with sub-stellar masses that are unable to sustain hydrogen burning, cooling down throughout their lifetimes. This thesis presents two projects, the study of the IMF of the double cluster, h & χ Persei, and the determination of the dynamical masses of the brown dwarf binary, ε Indi Ba, Bb. The study of a cluster’s population distribution gives us the opportunity to study a statistically meaningful population of objects over a wide range of masses (from massive stars to brown dwarfs), with a similar age and chemical composition, providing formation and dynamical evolution constraints. h & χ Persei is the largest double cluster known in our galaxy. Using optical and infrared photometric data we have produced the deepest mass function for the system. A study of the radial distribution shows evidence of mass segregation, while the mass function shows that these clusters may be suffering from accelerated dynamical evolution due to their interaction, triggering the ejection of brown dwarfs. The physical parameterization of brown dwarfs is reliant on the use of interior and atmospheric models. The study of brown dwarf binaries can provide crucial model-independent measurements, especially masses. ε Indi Ba, Bb (spectral types T1 and T6) is the closest known brown dwarf binary to Earth. The brown dwarf binary itself orbits a main sequence star, allowing us to constrain the distance, metallicity and age of the system and making it possible to break the sub-stellar mass-age-luminosity degeneracy. The relative motion of the brown dwarf binary has been studied with precision astrometry from infrared AO data, allowing the determination of the system mass, 121.16 ± 0.17 ± 1.08 MJup. The individual masses of the binary components were derived from the absolute movement of the binary to be MBa = 68.04 ± 0.94 MJup and MBb = 53.12 ± 0.32 MJup. We concluded that the isochronally-derived masses were underestimating the system mass by ∼ 60%, due to the likely underestimation of the age of the system. The evolutionary models are consistent with the parameters measured observationally if the system has an age of ∼ 4 Gyr.
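For context, dynamical masses of this kind follow from two standard relations, sketched below in generic symbols (the thesis's fitted orbital elements are not reproduced here): Kepler's third law gives the total mass from the relative orbit, and the ratio of the components' barycentric semi-major axes gives the mass ratio.

```latex
% Generic relations behind astrometric dynamical masses (illustrative symbols,
% not the thesis's fitted values): masses in solar masses, a in AU, P in years.
\[
  M_{Ba} + M_{Bb} \;=\; \frac{a_{\mathrm{rel}}^{3}}{P^{2}},
  \qquad
  \frac{M_{Ba}}{M_{Bb}} \;=\; \frac{a_{Bb}}{a_{Ba}},
  \qquad
  a_{\mathrm{rel}} \;=\; a_{Ba} + a_{Bb},
\]
% where a_Ba and a_Bb are the semi-major axes of the two components' orbits
% about their common barycentre, measured from the absolute motion.
```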
180

A Bayesian expected error reduction approach to Active Learning

Fredlund, Richard January 2011 (has links)
There has been growing recent interest in the field of active learning for binary classification. This thesis develops a Bayesian approach to active learning which aims to minimise the objective function on which the learner is evaluated, namely the expected misclassification cost. We call this approach the expected cost reduction approach to active learning. In this form of active learning, queries are selected by performing a 'lookahead' to evaluate the associated expected misclassification cost.

Firstly, we introduce the concept of a query density to explicitly model how new data is sampled. An expected cost reduction framework for active learning is then developed which allows the learner to sample data according to arbitrary query densities. The model makes no assumption of independence between queries, instead updating model parameters on the basis of both which observations were made and how they were sampled. This approach is demonstrated on the probabilistic high-low game, which is a non-separable extension of the high-low game presented by Seung et al. (1993). The results indicate that the Bayes expected cost reduction approach performs significantly better than passive learning even when there is considerable overlap between the class distributions, covering 30% of input space. For the probabilistic high-low game, however, narrow queries appear to consistently outperform wide queries. We therefore conclude the first part of the thesis by investigating whether or not this is always the case, demonstrating examples where sampling broadly is preferable to a single input query.

Secondly, we explore the Bayesian expected cost reduction approach to active learning within the pool-based setting. This is where learning is limited to a finite pool of unlabelled observations from which the learner may select observations to be queried for class-labels. Our implementation of this approach uses Gaussian process classification with the expectation propagation approximation to make the necessary inferences. The implementation is demonstrated on six benchmark data sets and again demonstrates superior performance to passive learning.
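A minimal sketch of the one-step lookahead described above is given below, with a plain logistic-regression classifier standing in for the thesis's Gaussian process model with expectation propagation; the cost values, the reference set used to score risk, and the classifier choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bayes_risk(model, X_ref, c_fp=1.0, c_fn=1.0):
    # Expected misclassification cost on a reference set under the model's own
    # predictive probabilities, assuming Bayes-optimal decisions are made.
    p1 = model.predict_proba(X_ref)[:, 1]
    return float(np.mean(np.minimum(c_fn * p1, c_fp * (1.0 - p1))))

def select_query(X_lab, y_lab, X_pool, X_ref, c_fp=1.0, c_fn=1.0):
    # One-step lookahead: for every candidate in the pool, average the
    # post-update risk over the current model's predictive label distribution,
    # then query the candidate with the lowest expected risk.
    model = LogisticRegression().fit(X_lab, y_lab)
    scores = []
    for x in X_pool:
        p1 = model.predict_proba(x.reshape(1, -1))[0, 1]
        risk = 0.0
        for label, weight in ((0, 1.0 - p1), (1, p1)):
            updated = LogisticRegression().fit(np.vstack([X_lab, x]),
                                               np.append(y_lab, label))
            risk += weight * bayes_risk(updated, X_ref, c_fp, c_fn)
        scores.append(risk)
    return int(np.argmin(scores))
```

Averaging the post-update risk over the predicted label distribution is the 'lookahead' the abstract refers to; the selected index points at the pool observation whose label is expected to reduce the misclassification cost the most.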
