171

High Rate Digital Demodulator ASIC

Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew (October 1998)
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / This paper presents the architecture of the High Rate (600 Megabits per second) Digital Demodulator (HRDD) ASIC, capable of demodulating BPSK and QPSK modulated data. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require processing rates so high that a hardware technology other than CMOS, such as Gallium Arsenide (GaAs), would be needed, with its attendant cost and power penalties. It is more desirable to use CMOS technology, with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms that process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver can demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 Megabits per second (Mbps) per channel. This paper provides an overview of the parallel architecture and features of the HRDD ASIC, as well as of the hardware architectures that give it flexibility beyond conventional high-rate analog or hybrid receivers: a wide range of data rates, modulation schemes, and operating environments. In conclusion, it is shown how this high-rate digital demodulator can be combined with an off-the-shelf A/D converter and a flexible analog front end, both numerically computer controlled, to produce a very flexible, low-cost, high-rate digital receiver.
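The parallel-processing idea lends itself to a small illustration. The sketch below is a conceptual NumPy model, assuming an ideal, carrier-locked baseband at 2 samples per symbol; it is not the actual HRDD architecture, but it shows the general principle of demultiplexing a high-rate stream into N lanes that each run at 1/N of the input rate, which is what lets a CMOS implementation keep up with a 600 Mbps input.

```python
# Conceptual sketch of parallelized BPSK demodulation, assuming an ideal,
# carrier-locked real baseband at 2 samples per symbol. Illustrative only,
# not the HRDD architecture.
import numpy as np

def parallel_bpsk_demod(samples: np.ndarray, n_lanes: int = 8,
                        sps: int = 2) -> np.ndarray:
    """Demodulate BPSK by splitting the symbol stream across parallel lanes."""
    n_symbols = len(samples) // sps
    # Integrate-and-dump matched filter: sum the samples of each symbol.
    metrics = samples[:n_symbols * sps].reshape(n_symbols, sps).sum(axis=1)
    # Demultiplex the decisions into n_lanes streams; in hardware each lane
    # would be an independent processing path clocked at rate / n_lanes.
    pad = (-n_symbols) % n_lanes
    lanes = np.pad(metrics, (0, pad)).reshape(-1, n_lanes).T
    bits = (lanes > 0).astype(np.uint8)        # hard decision, per lane
    return bits.T.reshape(-1)[:n_symbols]      # re-interleave the lanes

# Example: a 600 Mbps stream split into 8 lanes of 75 Msym/s each.
rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, 1024)
tx = np.repeat(2.0 * tx_bits - 1.0, 2)         # NRZ, 2 samples/symbol
rx_bits = parallel_bpsk_demod(tx + 0.1 * rng.standard_normal(tx.size))
assert np.array_equal(rx_bits, tx_bits)
```

In the ASIC each lane is dedicated hardware; here the lanes are merely vectorized, but the demultiplex/re-interleave bookkeeping is the same.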
172

IMPROVING THE DETECTION EFFICIENCY OF CONVENTIONAL PCM/FM TELEMETRY BY USING A MULTI-SYMBOL DEMODULATOR

Geoghegan, Mark (October 2000)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / Binary PCM/FM has been widely adopted as a standard by the telemetry community. It offers a reasonable balance between detection efficiency and spectral efficiency, with very simple implementation in both the transmitter and receiver. Current technology, however, allows practical implementations of more sophisticated demodulators, which can substantially improve the detection efficiency of the waveform, with no changes to the modulator. This is accomplished by exploiting the memory inherent in the phase continuity of the waveform. This paper describes the implementation and performance of a noncoherent multi-symbol demodulator for PCM/FM. Sensitivity to offsets in carrier frequency, timing, and modulation index is also examined. Simulation results are presented which demonstrate improvements in detection efficiency of approximately 2.5 dB over traditional noncoherent single symbol detectors.
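As a rough illustration of the technique (not the ARTM hardware design), the sketch below implements a toy noncoherent multi-symbol detector for binary CPFSK; PCM/FM is CPFSK with modulation index near 0.7. Each 3-symbol window of complex baseband is correlated against all eight candidate phase trajectories, the magnitude of the correlation removes the unknown carrier phase, and the middle bit of the best match is emitted. The window length, sample rate, and test harness are illustrative assumptions.

```python
# Hedged sketch of noncoherent multi-symbol detection for binary CPFSK.
# Exploits the waveform's phase memory: within any window, the phase
# accumulated by earlier symbols is a constant offset, which |correlation|
# discards, so only the trajectory shape inside the window matters.
import itertools
import numpy as np

H, SPS, WIN = 0.7, 8, 3          # mod index, samples/symbol, window length

def cpfsk_phase(bits, sps=SPS, h=H):
    """Continuous phase trajectory for a 0/1 bit sequence."""
    symbols = np.repeat(2.0 * np.asarray(bits) - 1.0, sps)
    return np.pi * h * np.cumsum(symbols) / sps

PATTERNS = list(itertools.product([0, 1], repeat=WIN))
TEMPLATES = [np.exp(1j * cpfsk_phase(p)) for p in PATTERNS]

def multi_symbol_detect(rx: np.ndarray) -> np.ndarray:
    """Sliding 3-symbol noncoherent detector; decides each middle bit."""
    decisions = []
    for k in range(len(rx) // SPS - WIN + 1):
        seg = rx[k * SPS:(k + WIN) * SPS]
        scores = [abs(np.vdot(t, seg)) for t in TEMPLATES]
        decisions.append(PATTERNS[int(np.argmax(scores))][WIN // 2])
    return np.array(decisions)

# Noiseless sanity check with an unknown carrier-phase offset.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 200)
rx = np.exp(1j * (cpfsk_phase(bits) + 0.3))
assert np.array_equal(multi_symbol_detect(rx), bits[1:-1])
```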
173

EXPERIMENTAL RESULTS FOR MULTI-SYMBOL DETECTION OF PCM/FM

Geoghegan, Mark (October 2001)
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / It has been previously shown, through computer simulations, that a multiple symbol detector can provide substantial gains in detection efficiency (nearly 3 dB) over traditional PCM/FM detectors. This is accomplished by performing correlations over multiple symbol intervals to take advantage of the memory inherent in the continuous phase PCM/FM signal. This paper presents measured hardware results, from a prototype developed for the Advanced Range Telemetry (ARTM) Project, that substantiate the previously published performance and sensitivity predictions. Furthermore, this work confirms the feasibility of applying this technology to high-speed commercial and military telemetry applications.
174

Structure in time-frequency binary masking

Kressner, Abigail A. (27 May 2016)
Understanding speech in noisy environments is a challenge for normal-hearing and impaired-hearing listeners alike. However, it has been shown that speech intelligibility can be improved in these situations using a strategy called the ideal binary mask. Because this approach requires separate knowledge of the speech and noise signals, though, it is ill-suited for practical applications. To address this, many algorithms are being designed to approximate the ideal binary mask strategy. Inevitably, these algorithms make errors, and the implications of these errors are not well understood. The main contributions of this thesis are to introduce a new framework for investigating binary masking algorithms and to present listener studies that use this framework to illustrate how certain types of algorithm errors can affect speech recognition outcomes with both normal-hearing listeners and cochlear implant recipients.
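For readers unfamiliar with the strategy, a minimal sketch of the ideal binary mask follows. It assumes separate access to the clean speech and noise signals, which is precisely the requirement that makes the mask "ideal" and impractical; the FFT size, hop, and 0 dB local criterion are illustrative choices.

```python
# Minimal sketch of the ideal binary mask (IBM). Requires the speech and
# noise signals separately, which is only possible in the laboratory.
import numpy as np

def stft(x, n_fft=512, hop=256):
    """Short-time Fourier transform with a Hann window."""
    win = np.hanning(n_fft)
    frames = [win * x[i:i + n_fft] for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def ideal_binary_mask(speech, noise, lc_db=0.0):
    """Keep the time-frequency units whose local SNR exceeds lc_db."""
    S, N = stft(speech), stft(noise)
    snr_db = 20.0 * np.log10(np.abs(S) / (np.abs(N) + 1e-12))
    return (snr_db > lc_db).astype(float)

# In use, the mask multiplies the STFT of the speech-plus-noise mixture,
# and the masked STFT is inverted back to a time-domain signal.
```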
175

Pt-TiO2 binary electrodes for alcohol oxidation in low temperature fuel cells

Hasa, Bjorn (11 October 2013)
In this study, Pt-TiO2 binary electrodes were prepared by thermal decomposition of chloride precursors on Ti substrates; characterised by X-ray Diffraction (XRD), Scanning Electron Microscopy (SEM), X-ray Photoelectron Spectroscopy (XPS), electrochemical techniques, and CO stripping; and used as anodes for alcohol oxidation. The minimization of the Pt loading without electrocatalytic activity losses was also explored. TiO2 was chosen due to its chemical stability, low cost, and excellent properties as a substrate for Pt dispersion. It was found that TiO2 loading up to 50% results in an increase of the Electrochemically Active Surface (EAS). The EAS of Pt(50%)-TiO2(50%) was found to be almost one order of magnitude higher than that of pure Pt, while for Pt loadings lower than 30% the EAS was diminished. This conclusion was confirmed both by following the charge of the reduction peak of platinum oxide and by CO stripping experiments. All samples were also evaluated during the electrochemical oxidation of methanol and ethanol. In both cases the Pt(50%)-TiO2(50%) electrode exhibited better electrocatalytic activity than the pure Pt anode. The higher performance of the binary electrodes is attributed to the enhanced Pt dispersion, as well as to the formation of smaller Pt particles, caused by the addition of TiO2.
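As a worked illustration of how a CO-stripping measurement yields an EAS figure (with a hypothetical measured charge, and the commonly assumed 420 uC/cm^2 monolayer-oxidation charge density for CO on polycrystalline Pt):

```python
# Back-of-the-envelope EAS estimate from a CO-stripping charge, the kind
# of measurement used above to compare the Pt-TiO2 anodes.
Q_CO = 8.4e-3          # measured CO-stripping charge, C (hypothetical)
Q_ML = 420e-6          # charge per cm^2 of a CO monolayer on Pt (assumed)

eas_cm2 = Q_CO / Q_ML  # electrochemically active surface area
print(f"EAS = {eas_cm2:.0f} cm^2")   # EAS = 20 cm^2
```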
176

Parallelization of virtual machines for efficient multi-processor emulation

Chakravarthy, Ramapriyan Sreenath (9 November 2010)
Simulation is an essential tool for computer systems research. The speed of the simulator has a first-order effect on what experiments can be run. The recent shift in the industry to multi-core processors has put even more pressure on simulation performance, especially since most simulators are single-threaded. In this thesis, we present QEMU-MP, a parallelized version of a fast functional simulator called QEMU.
177

Space astrometry of unresolved binaries: From Hipparcos to Gaia

Pourbaix, Dimitri (13 September 2007)
Building upon its success with the Hipparcos space astrometry mission launched in 1989, the European Space Agency has agreed to fund the construction of its successor, Gaia, and its launch in 2011. Despite the similarities between the two missions, Gaia will be orders of magnitude more powerful and more sensitive, but also more complex in terms of data processing. Growing from 120,000 stars with Hipparcos to about 1.2 billion stars with Gaia does not simply mean pushing the computing resources to their limits (one second of processing per star amounts to 38 years for the whole Gaia sky). It also means facing situations that did not occur with Hipparcos, either by luck or because those cases were carefully removed from the Hipparcos Input Catalogue. This manuscript illustrates how some chunks of the foreseen Gaia data reduction pipeline can be trained and assessed using the Hipparcos observations. This is especially true for unresolved binaries, because they pop up so far down in the Gaia pipeline that, by the time they get there, there is essentially no difference between Hipparcos and Gaia data. Only the number of such binaries is different, going from two thousand to ten million. Although the computing time clearly becomes an issue, one cannot sacrifice the robustness and correctness of the reduction pipeline for the sake of speed. However, owing to the requirement that everything must be Gaia-based (no help from ground-based results), the very robustness of the reduction has to be assessed as well. For instance, the underlying assumptions of some statistical tests used to assess the quality of the fits in the Hipparcos pipeline might no longer hold with Gaia. That may not affect the fit itself, but rather the quality indicators usually accompanying those fits. For the final catalogue to be a success, these issues must be addressed as soon as possible.
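The parenthetical processing-time claim is easily checked: at one second per star, a catalogue of roughly 1.2 billion stars indeed takes about 38 years of serial computation.

```python
# Sanity check of the claim above: one CPU-second per star over the full
# Gaia catalogue (~1.2 billion stars) is about 38 years.
stars = 1.2e9
seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 s
print(stars / seconds_per_year)         # ~38.0 years
```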
178

Statistical discrimination with disease categories subject to misclassification

Hilliam, Rachel M. (January 2000)
No description available.
179

Memory Footprint Reduction of Operating System Kernels

He, Haifeng (January 2009)
As the complexity of embedded systems grows, there is an increasing use of operating systems (OSes) in embedded devices, such as mobile phones, media players, and other consumer electronics. Despite their convenience and flexibility, such operating systems can be overly general and contain features and code that are not needed in every application context, which incurs unnecessary performance overheads. In most embedded systems, resources such as processing power, available memory, and power consumption are strictly constrained. In particular, the amount of memory on embedded devices is often very limited. This, together with the popular usage of operating systems in embedded devices, makes it important to reduce the memory footprint of operating systems. This dissertation addresses this challenge and presents automated ways to reduce the memory footprint of OS kernels for embedded systems. First, we present kernel code compaction, an automated approach that reduces the code size of an OS kernel statically by removing unused functionality. OS kernel code tends to differ from ordinary application code: it includes a significant amount of hand-written assembly code, multiple entry points, implicit control flow paths involving interrupt handlers, and frequent indirect control flow via function pointers. We use a novel "approximated compilation" technique to apply source-level pointer analysis to hand-written assembly code. A prototype implementation of our idea on an Intel x86 platform and a minimally configured Linux kernel obtains a code size reduction of close to 24%. Even though code compaction can remove a portion of the entire OS kernel code, when exercised with typical embedded benchmarks, such as MiBench, most kernel code is executed infrequently, if at all. Our second contribution is therefore on-demand code loading, an automated approach that keeps the rarely used code on secondary storage and loads it into main memory only when it is needed. In order to minimize the overhead of code loading, a greedy node-coalescing algorithm is proposed to group closely related code together. The experimental results show that this approach can reduce memory requirements for the Linux kernel code by about 53% with little degradation in performance. Last, we describe dynamic data structure compression, an approach that reduces the runtime memory footprint of dynamic data structures in an OS kernel. A prototype implementation for the Linux kernel reduces the memory consumption of the slab allocators in Linux by 17.5% when running the MediaBench suite while incurring only minimal increases in execution time (1.9%).
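The greedy node-coalescing step can be sketched in a few lines. The version below is an assumed illustration, not the dissertation's algorithm: functions joined by heavily weighted call edges are merged into the same load unit, subject to a size cap, so that one load from secondary storage brings in closely related code.

```python
# Hedged sketch of greedy node coalescing over a weighted call graph.
# Data structures, the cap, and the example are illustrative assumptions.
def coalesce(sizes, edges, cap=4096):
    """sizes: {func: bytes}; edges: [(weight, caller, callee)] -> groups."""
    parent = {f: f for f in sizes}          # union-find forest
    bytes_of = dict(sizes)                  # root -> cluster size in bytes

    def find(f):
        while parent[f] != f:
            parent[f] = parent[parent[f]]   # path halving
            f = parent[f]
        return f

    for _, a, b in sorted(edges, reverse=True):   # heaviest edges first
        ra, rb = find(a), find(b)
        if ra != rb and bytes_of[ra] + bytes_of[rb] <= cap:
            parent[rb] = ra                 # merge the two clusters
            bytes_of[ra] += bytes_of.pop(rb)

    groups = {}
    for f in sizes:
        groups.setdefault(find(f), []).append(f)
    return list(groups.values())

# Hot caller/callee pairs land in one unit; cold or oversized ones do not.
units = coalesce({"read": 900, "copy": 700, "log": 3000, "init": 500},
                 [(120, "read", "copy"), (2, "read", "log")])
# e.g. [['read', 'copy'], ['log'], ['init']]
```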
180

Deobfuscation of Packed and Virtualization-Obfuscation Protected Binaries

Coogan, Kevin Patrick (January 2011)
Code obfuscation techniques are increasingly being used in software for such reasons as protecting trade secret algorithms from competitors and deterring license tampering by those wishing to use the software for free. However, these techniques have also grown in popularity in less legitimate areas, such as protecting malware from detection and reverse engineering. This work examines two such techniques, packing and virtualization-obfuscation, and presents new behavioral approaches to analysis that may be relevant to security analysts whose job it is to defend against malicious code. These approaches are robust against variations in obfuscation algorithms, such as changing encryption keys or virtual instruction byte code. Packing refers to the process of encrypting or compressing an executable file. This process "scrambles" the bytes of the executable so that the byte-signature matching algorithms commonly used by anti-virus programs are ineffective. Standard static analysis techniques are similarly ineffective, since the actual byte code of the program is hidden until after the program is executed. Dynamic analysis approaches exist, but are vulnerable to dynamic defenses. We detail a static analysis technique that starts by identifying the code used to "unpack" the executable, then uses this unpacker to generate the unpacked code in a form suitable for static analysis. Results show we are able to correctly unpack several encrypted and compressed malware samples while still handling several dynamic defenses. Virtualization-obfuscation is a technique that translates the original program into virtual instructions, then builds a customized virtual machine for these instructions. As with packing, the byte-signature of the original program is destroyed. Furthermore, static analysis of the obfuscated program reveals only the structure of the virtual machine, and dynamic analysis produces a trace where original program instructions are intermixed with, and often indistinguishable from, virtual machine instructions. We present a dynamic analysis approach whereby all instructions that affect the external behavior of the program are identified, thus building an approximation of the original program that is observationally equivalent. We achieve good results both at identifying instructions from the original program and at eliminating instructions known to be part of the virtual machine.
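The behavioral idea behind the virtualization-obfuscation analysis can be sketched with a toy backward slice over a dynamic trace; the real analysis is of course far more involved. The trace format, register names, and seed set below are assumptions for illustration only.

```python
# Minimal sketch: starting from an instruction with an external effect
# (e.g., a system-call argument), walk the dynamic trace backwards and
# keep only instructions whose results feed that behavior; virtual-machine
# bookkeeping falls away. Toy three-address trace format assumed.
def backward_slice(trace, live):
    """trace: [(dest, srcs)] in execution order; live: seed value names."""
    keep, live = [], set(live)
    for dest, srcs in reversed(trace):
        if dest in live:            # this write feeds the external behavior
            keep.append((dest, srcs))
            live.discard(dest)      # now explained by its own sources...
            live.update(srcs)       # ...which become live in their turn
    return list(reversed(keep))

# Toy trace: virtual-program-counter updates ('vpc') are sliced away and
# only the computation reaching the syscall argument 'r0' survives.
trace = [("vpc", ["vpc"]), ("r1", ["mem"]), ("vpc", ["vpc"]),
         ("r0", ["r1"]), ("vpc", ["vpc"])]
print(backward_slice(trace, live={"r0"}))   # [('r1', ['mem']), ('r0', ['r1'])]
```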
