31

Zpracování 3D modelů scény / Processing of 3D Scene Models

Zdráhal, Lukáš January 2008 (has links)
The purpose of this document is to acquaint the reader with the basic principles of 3D model digitization. This work gives a general overview of 3D scanning devices, their physical principles and measurement methods. The next part of this document describes basic methods for polygonal mesh processing, such as smoothing and decimation, which are necessary for 3D model processing. This document also contains a description of the implemented algorithms, the user interface and the publication part through WWW. The fundamental essence of this diploma thesis is an introduction to the general principles of 3D scanning and work with the Minolta VIVID-700 3D digitizer located at our faculty. At the end, an evaluation of the results, demonstration examples and the intended further development of the project are presented.
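As a concrete illustration of the mesh-smoothing step mentioned above, here is a minimal Laplacian smoothing sketch. It is a generic textbook method, not necessarily the algorithm implemented in the thesis, and the function and parameter names are illustrative only.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Move each vertex a fraction `lam` toward the centroid of its neighbors.

    vertices  : (N, 3) array of vertex positions
    neighbors : dict mapping vertex index -> list of adjacent vertex indices
    """
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        new_v = v.copy()
        for i, nbrs in neighbors.items():
            if nbrs:
                centroid = v[nbrs].mean(axis=0)        # average of neighbor positions
                new_v[i] = v[i] + lam * (centroid - v[i])
        v = new_v
    return v
```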
32

Design Of Polynomial-based Filters For Continuously Variable Sample Rate Conversion With Applications In Synthetic Instrumentation

Hunter, Matthew 01 January 2008 (has links)
In this work, the design and application of Polynomial-Based Filters (PBF) for continuously variable Sample Rate Conversion (SRC) are studied. The major contributions of this work are summarized as follows. First, an explicit formula for the Fourier transform of both a symmetrical and a nonsymmetrical PBF impulse response with variable basis function coefficients is derived. In the literature, only one explicit formula has been given, and only for a symmetrical, even-length filter with fixed basis function coefficients. The frequency-domain optimization of PBFs via linear programming has been proposed in the literature; however, the algorithm was not detailed, nor were explicit formulas derived. In this contribution, a minimax optimization procedure is derived for the frequency-domain optimization of a PBF with time-domain constraints. Explicit formulas are given for direct input to a linear programming routine. Additionally, accompanying Matlab code implementing this optimization in terms of the derived formulas is given in the appendix.
In the literature, it has been pointed out that the frequency response of the Continuous-Time (CT) filter decays as frequency goes to infinity. It has also been observed that when implemented in SRC, the CT filter is sampled, resulting in CT frequency response aliasing. Thus, for example, the stopband sidelobes of the Discrete-Time (DT) implementation rise above the designed CT level. Building on these observations, it is shown how the rolloff rate of the frequency response of a PBF can be adjusted by adding continuous derivatives to the impulse response. This is of great advantage, especially when the PBF is used for decimation, as the aliasing-band attenuation can be made to increase with frequency. It is shown how this technique can be used to dramatically reduce the effect of alias build-up in the passband. In addition, it is shown that as the number of continuous derivatives of the PBF increases, the resulting DT implementation more closely matches the CT design.
When implemented for SRC, samples from a PBF impulse response are computed by evaluating the polynomials using a so-called fractional interval, µ. In the literature, the effect of quantizing µ on the frequency response of the PBF has been studied, and formulas have been derived to determine the number of bits required to keep frequency response distortion below prescribed bounds. Elsewhere, a formula has been given to compute the number of bits required to represent µ to obtain a given SRC accuracy for rational-factor SRC. In this contribution, it is shown that these two apparently competing requirements are in fact quite independent: the wordlength required for SRC accuracy need only be kept in the µ generator, which is a single accumulator, and the output of the µ generator may then be truncated prior to polynomial evaluation. This results in significant computational savings, as polynomial evaluation can require several multiplications and additions.
Under the heading of applications, a new Wideband Digital Downconverter (WDDC) for Synthetic Instruments (SI) is introduced. DDCs first tune to a signal's center frequency using a numerically controlled oscillator and mixer, and then zoom in to the bandwidth of interest using SRC. The SRC is required to produce continuously variable output sample rates from a fixed input sample rate over a large range. Current implementations accomplish this using a pre-filter, an arbitrary-factor resampler, and integer decimation filters. In this contribution, the SRC of the WDDC is simplified, reducing the computational requirements by a factor of three or more. In addition, it is shown how this system can be used to develop a novel, computationally efficient FFT-based spectrum analyzer with continuously variable frequency spans. Finally, after giving the theoretical foundation, a real Field Programmable Gate Array (FPGA) implementation of a novel Arbitrary Waveform Generator (AWG) is presented. The new approach uses a fixed Digital-to-Analog Converter (DAC) sample clock in combination with an arbitrary-factor interpolator. Waveforms created at any sample rate are interpolated to the fixed DAC sample rate in real time. As a result, the additional lower-performance analog hardware required in current approaches, namely multiple reconstruction filters and/or additional sample clocks, is avoided. Measured results are given confirming the performance of the system predicted by the theoretical design and simulation.
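To make the role of the fractional interval concrete, the sketch below implements a simple arbitrary-factor resampler with a Catmull-Rom cubic kernel, one elementary example of a piecewise-polynomial (Farrow-style) interpolator. It is not the optimized PBF designed in the thesis; it only illustrates keeping full precision in the µ accumulator while truncating µ before polynomial evaluation, as described above.

```python
import numpy as np

def resample_cubic(x, ratio, mu_bits=10):
    """Arbitrary-factor resampler using a Catmull-Rom cubic interpolation kernel.

    ratio   : output rate / input rate (may be irrational)
    mu_bits : wordlength to which the fractional interval mu is truncated before
              polynomial evaluation; full precision is kept only in the position
              accumulator (the "mu generator").
    """
    step = 1.0 / ratio            # input-sample advance per output sample
    acc = 1.0                     # high-precision position accumulator
    out = []
    while acc < len(x) - 2:
        base = int(acc)
        mu = acc - base                                   # fractional interval in [0, 1)
        mu = np.floor(mu * 2**mu_bits) / 2**mu_bits       # truncate mu only here
        xm1, x0, x1, x2 = x[base - 1], x[base], x[base + 1], x[base + 2]
        # Catmull-Rom polynomial coefficients for this sample neighborhood
        c0 = x0
        c1 = 0.5 * (x1 - xm1)
        c2 = xm1 - 2.5 * x0 + 2.0 * x1 - 0.5 * x2
        c3 = 0.5 * (x2 - xm1) + 1.5 * (x0 - x1)
        out.append(((c3 * mu + c2) * mu + c1) * mu + c0)  # Horner evaluation
        acc += step
    return np.array(out)
```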
33

The Fulani of Northern Nigeria / Some general notes by F.W. de St Croix

St. Croix, F.W. De. January 1945 (has links)
74 pages
34

A new approach to Decimation in High Order Boltzmann Machines

Farguell Matesanz, Enric 20 January 2011 (has links)
The Boltzmann Machine (BM) is a stochastic neural network with the ability to both learn and extrapolate probability distributions. However, it has never been as widely used as other neural networks such as the perceptron, due to the complexity of both the learning and recalling algorithms, and to the high computational cost required in the learning process: the quantities that are needed at the learning stage are usually estimated by Monte Carlo (MC) through the Simulated Annealing (SA) algorithm. This has led to a situation where the BM is rather considered an evolution of the Hopfield Neural Network or a parallel implementation of the Simulated Annealing algorithm. Despite this relative lack of success, the neural network community has continued to progress in the analysis of the dynamics of the model. One remarkable extension is the High Order Boltzmann Machine (HOBM), where weights can connect more than two neurons at a time. Although the learning capabilities of this model have already been discussed by other authors, a formal equivalence between the weights in a standard BM and the high order weights in a HOBM has not yet been established. We analyze this latter equivalence between a second order BM and a HOBM by proposing an extension of the method known as decimation. Decimation is a common tool in statistical physics that may be applied to some kinds of BM and can be used to obtain analytical expressions for the n-unit correlation elements required in the learning process. In this way, decimation avoids using the time-consuming Simulated Annealing algorithm. However, as it was first conceived, it could only deal with sparsely connected neural networks. The extension that we define in this thesis allows computing the same quantities irrespective of the topology of the network. This method is based on adding enough high order weights to a standard BM to guarantee that the system can be solved.
Next, we establish a direct equivalence between the weights of a HOBM model, the probability distribution to be learnt and Hadamard matrices. The properties of these matrices can be used to easily calculate the value of the weights of the system. Finally, we define a standard BM with a very specific topology that helps us better understand the exact equivalence between hidden units in a BM and high order weights in a HOBM.
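For readers unfamiliar with decimation, the following textbook statistical-physics identity (not the general high-order construction developed in the thesis) shows the simplest case: summing out a single bias-free hidden unit connected to two visible units yields an effective direct coupling between them.

```latex
\sum_{h=\pm 1} e^{\,h\,(w_1 s_1 + w_2 s_2)}
  \;=\; 2\cosh(w_1 s_1 + w_2 s_2)
  \;=\; A\, e^{\,J'\, s_1 s_2},
\qquad
J' \;=\; \tfrac{1}{2}\ln\!\frac{\cosh(w_1 + w_2)}{\cosh(w_1 - w_2)},
\qquad
A \;=\; 2\sqrt{\cosh(w_1 + w_2)\,\cosh(w_1 - w_2)}\,.
```

With hidden-unit biases or denser connectivity such closed-form eliminations generally cannot be carried out, which is the limitation the thesis removes by adding high-order weights until the decimation equations become solvable.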
35

Low-Power Low-Noise CMOS Analog and Mixed-Signal Design towards Epileptic Seizure Detection

Qian, Chengliang 03 October 2013 (has links)
About 50 million people worldwide suffer from epilepsy and one third of them have seizures that are refractory to medication. In the past few decades, deep brain stimulation (DBS) has been explored by researchers and physicians as a promising way to control and treat epileptic seizures. To make DBS therapy more efficient and effective, a feedback loop for titrating the therapy is required. This means the implantable DBS devices should be smart enough to sense brain signals and then adjust the stimulation parameters adaptively. This research proposes a signal-sensing channel configurable to various neural applications, which is a vital part of a future closed-loop epileptic seizure stimulation system. This doctoral study has two main contributions: 1) a micropower low-noise neural front-end circuit, and 2) a low-power configurable neural recording system for both neural action-potential (AP) and fast-ripple (FR) signals. The neural front end consists of a preamplifier followed by a bandpass filter (BPF). This design focuses on improving the noise-power efficiency of the preamplifier and the power/pole merit of the BPF at ultra-low power consumption. In measurement, the preamplifier exhibits 39.6-dB DC gain, 0.8 Hz to 5.2 kHz of bandwidth (BW), and 5.86-μVrms input-referred noise in AP mode, while showing 39.4-dB DC gain, 0.36 Hz to 1.3 kHz of BW, and 3.07-μVrms noise in FR mode. The preamplifier achieves a noise efficiency factor (NEF) of 2.93 and 3.09 for AP and FR modes, respectively. The preamplifier power consumption is 2.4 μW from 2.8 V for both modes. The 6th-order follow-the-leader feedback elliptic BPF passes FR signals and provides -110 dB/decade attenuation to out-of-band interferers. It consumes 2.1 μW from 2.8 V (or 0.35 μW/pole) and is one of the most power-efficient high-order active filters reported to date. The complete front-end circuit achieves a mid-band gain of 38.5 dB, a BW from 250 to 486 Hz, and a total input-referred noise of 2.48 μVrms while consuming 4.5 μW from the 2.8 V power supply. The front-end NEF achieved is 7.6. The power efficiency of the complete front end is 0.75 μW/pole. The chip is implemented in a standard 0.6-μm CMOS process with a die area of 0.45 mm^2. The neural recording system incorporates the front-end circuit and a sigma-delta analog-to-digital converter (ADC). The ADC has scalable BW and power consumption for digitizing both AP and FR signals captured by the front end. Various design techniques are applied to improve the power and area efficiency of the ADC. At 77-dB dynamic range (DR), the ADC has a peak SNR and SNDR of 75.9 dB and 67 dB, respectively, while consuming 2.75 mW of power in AP mode. It achieves 78-dB DR, 76.2-dB peak SNR, 73.2-dB peak SNDR, and 588-μW power consumption in FR mode. Both analog and digital power supply voltages are 2.8 V. The chip is fabricated in a standard 0.6-μm CMOS process. The die size is 11.25 mm^2. The proposed circuits can be extended to a multi-channel system, with the ADC shared by all channels, as the sensing part of a future closed-loop DBS system for the treatment of intractable epilepsy.
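For context, the noise efficiency factor quoted above is not defined in the abstract; the commonly used definition is shown below. Plugging in the reported AP-mode numbers (5.86 μVrms input-referred noise, 2.4 μW from 2.8 V, roughly 5.2 kHz bandwidth) gives a value near 2.9, consistent with the stated NEF of 2.93.

```latex
% V_{ni,rms}: input-referred noise; I_{tot}: total supply current;
% U_T = kT/q; BW: -3 dB bandwidth.
\mathrm{NEF} \;=\; V_{\mathrm{ni,rms}}\,
  \sqrt{\frac{2\, I_{\mathrm{tot}}}{\pi \cdot U_T \cdot 4kT \cdot \mathrm{BW}}}
```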
36

An Introduction to Tensor Networks and Matrix Product States with Applications in Waveguide Quantum Electrodynamics

Khatiwada, Pawan 26 July 2021 (has links)
No description available.
37

Zpracování signálu z digitálního mikrofonu / Digital microphone signal processing

Vykydal, Martin January 2011 (has links)
The aim of this work is to implement digital filters in a programmable gate array. The work also includes a description of MEMS technology, including a comparison of MEMS microphones from various manufacturers. Another part is devoted to sigma-delta modulation. The main section covers the design and implementation of digital CIC and FIR filters for processing the digital microphone signal, including simulation and verification of the properties of the proposed filters in Matlab.
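As a rough illustration of the CIC stage named above (not the thesis' actual FPGA design; rates and parameters are illustrative), a minimal N-stage CIC decimator can be sketched as follows.

```python
import numpy as np

def cic_decimate(x, R=64, N=3):
    """Minimal N-stage CIC decimator: N integrators at the input rate,
    decimation by R, then N comb (differentiator) stages at the output rate.
    The DC gain is R**N, so a scaling/compensation FIR normally follows."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):                              # integrator section
        y = np.cumsum(y)
    y = y[R - 1::R]                                 # decimate by R
    for _ in range(N):                              # comb section (differential delay M = 1)
        y = np.concatenate(([y[0]], np.diff(y)))
    return y

# e.g. a 1-bit PDM microphone stream at 3.072 MHz decimated by R = 64 lands at
# 48 kHz, after which a compensating FIR flattens the CIC passband droop.
```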
38

Vyhodnocení příbuznosti organismů pomocí číslicového zpracování genomických dat / Evaluation of Organisms Relationship by Genomic Signal Processing

Škutková, Helena January 2016 (has links)
This dissertation deals with alternative techniques for the analysis of genetic information of organisms. The theoretical part presents two different approaches for evaluating the relationship between organisms based on the mutual similarity of the genetic information contained in their DNA sequences. The first approach is the currently standardized phylogenetic analysis of character-based records of DNA sequences. Although this approach is computationally expensive due to the need for multiple sequence alignment, it allows evaluation of both global and local similarity of DNA sequences. The second approach is represented by techniques for classification of DNA sequences in the form of numerical vectors representing characteristic features of their genetic information. These methods, known as "alignment-free", allow fast evaluation of global similarity but cannot evaluate local changes. The new method presented in this dissertation combines the advantages of both approaches. It utilizes a numerical representation similar to a 1D digital signal, i.e. a representation that contains a specific trend along the x-axis. The experimental part of the dissertation deals with the design of a set of appropriate tools for genomic signal processing that allow evaluation of the mutual similarity of taxonomically specific trends. On the basis of the mutual similarity of genomic signals, classification in the form of a dendrogram is applied, corresponding to the phylogenetic trees used in standard phylogenetics.
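As an example of the kind of numerical representation discussed above, the sketch below turns a DNA character string into a 1D signal with a position-dependent trend using a cumulative GC skew. The thesis does not commit to this particular mapping; it is shown only to illustrate the signal-based approach.

```python
import numpy as np

def cumulated_gc_skew(sequence):
    """Cumulative GC skew: +1 for G, -1 for C, 0 otherwise, accumulated along the
    sequence. The result is a 1D signal whose shape can be compared between
    organisms with ordinary signal-processing tools (e.g. after resampling to a
    common length, by a correlation-based distance)."""
    step = {'G': 1, 'C': -1}
    return np.cumsum([step.get(base, 0) for base in sequence.upper()])
```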
39

Non-equilibrium strongly-correlated dynamics

Johnson, Tomi Harry January 2013 (has links)
We study non-equilibrium and strongly-correlated dynamics in two contexts. We begin by analysing quantum many-body systems out of equilibrium through the lens of cold atomic impurities in Bose gases. Such highly-imbalanced mixtures provide a controlled arena for the study of interactions, dissipation, decoherence and transport in a many-body quantum environment. Specifically we investigate the oscillatory dynamics of a trapped and initially highly-localised impurity interacting with a weakly-interacting trapped quasi low-dimensional Bose gas. This relates to and goes beyond a recent experiment by the Inguscio group in Florence. We witness a delicate interplay between the self-trapping of the impurity and the inhomogeneity of the Bose gas, and describe the dissipation of the energy of the impurity through phononic excitations of the Bose gas. We then study the transport of a driven, periodically-trapped impurity through a quasi one-dimensional Bose gas. We show that placing the weakly-interacting Bose gas in a separate periodic potential leads to a phononic excitation spectrum that closely mimics those in solid state systems. As a result we show that the impurity-Bose gas system exhibits phonon-induced resonances in the impurity current that were predicted to occur in solids decades ago but never clearly observed. Following this, allowing the bosons to interact strongly, we predict the effect of different strongly-correlated phases of the Bose gas on the motion of the impurity. We show that, by observing the impurity, properties of the excitation spectrum of the Bose gas, e.g., gap and bandwidth, may be inferred along with the filling of the bosonic lattice. In other words the impurity acts as a probe of its environment. To describe the dynamics of such a strongly-correlated system we use the powerful and near-exact time-evolving block decimation (TEBD) method, which we describe in detail. The second part of this thesis then analyses, for the first time, the performance of this method when applied to simulate non-equilibrium classical stochastic processes. We study its efficacy for a well-understood model of transport, the totally-asymmetric exclusion process, and find it to be accurate. Next, motivated by the inefficiency of sampling-based numerical methods for high variance observables we adapt and apply TEBD to simulate a path-dependent observable whose variance increases exponentially with system size. Specifically we calculate the expected value of the exponential of the work done by a varying magnetic field on a one-dimensional Ising model undergoing Glauber dynamics. We confirm using Jarzynski's equality that the TEBD method remains accurate and efficient. Therefore TEBD and related methods complement and challenge the usual Monte Carlo-based simulators of non-equilibrium stochastic processes.
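For reference, the consistency check mentioned above uses Jarzynski's equality, which relates the exponentiated work W accumulated over a non-equilibrium protocol to the equilibrium free-energy difference ΔF between the initial and final states:

```latex
\left\langle e^{-\beta W} \right\rangle \;=\; e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_B T}.
```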
40

[en] CONTRIBUTIONS TO ARRAY SIGNAL PROCESSING: SPACE AND SPACE-TIME REDUCED-RANK PROCESSING AND RADAR-EMBEDDED COMMUNICATIONS / [pt] CONTRIBUIÇÕES AO PROCESSAMENTO EM ARRANJOS DE SENSORES: PROCESSAMENTO ESPACIAL E ESPÁCIO-TEMPORAL COM POSTO REDUZIDO E RADARES COM COMUNICAÇÕES INCORPORADAS

ALINE DE OLIVEIRA FERREIRA 17 July 2017 (has links)
[en] Array processing is an area with many civilian and military applications, e.g. sonar, radar, seismology and wireless communications. By means of space and space-time processing it is possible to enhance their features and explore new possibilities. This area has been attracting increasingly more attention and effort from the scientific community, especially now that phased-array antennas are established as a commercial and mature technology. Within this context, we address the problem of reduced-rank processing in space and space-time radar signal processing, and the new area of dual-function radar-communications (DFRC), which may be summarized as embedding communication messages into radar emissions as a secondary task for the radar. In this thesis, we investigate the application of a new joint interpolation and decimation rank-reducing scheme in two different areas: beamforming and space-time radar processing. This rank-reducing algorithm had never been tested within these contexts before and shows impressive results. We also propose simplifications for decreasing the computational complexity of the algorithm in beamforming. In the topic of DFRC, we propose two original robust radar-embedded sidelobe phase/amplitude modulation methods which have simple closed-form equations. The proposed methods are much simpler than the state of the art and have superior performance in terms of robustness and real-time applicability. We also provide several other analyses, comparisons and contributions to this new area.
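For orientation only, a generic reduced-rank MVDR beamformer can be sketched as below; the D x M projection T plays the role of the interpolation-and-decimation stage, whose joint optimization is the thesis' actual contribution and is not reproduced here. All names are illustrative.

```python
import numpy as np

def reduced_rank_mvdr(snapshots, steering, T, loading=1e-3):
    """Generic reduced-rank MVDR: project M-dimensional snapshots onto a
    D-dimensional subspace with T (D x M), then beamform in the reduced space.

    snapshots : (M, K) complex array-snapshot matrix
    steering  : (M,)  steering vector toward the look direction
    T         : (D, M) rank-reducing matrix (e.g. an interpolator followed by decimation)
    """
    Z = T @ snapshots                                      # reduced-dimension snapshots
    a_r = T @ steering                                     # reduced-dimension steering vector
    D, K = Z.shape
    R_r = Z @ Z.conj().T / K                               # reduced-rank sample covariance
    R_r += loading * np.trace(R_r).real / D * np.eye(D)   # diagonal loading for robustness
    w_r = np.linalg.solve(R_r, a_r)
    w_r /= a_r.conj().T @ w_r                              # w = R^{-1} a / (a^H R^{-1} a)
    return w_r                                             # apply as y[k] = w_r^H (T x[k])
```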
