  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
531

Learning Stochastic Nonlinear Dynamical Systems Using Non-stationary Linear Predictors

Abdalmoaty, Mohamed January 2017 (has links)
The estimation problem of stochastic nonlinear parametric models is recognized to be very challenging due to the intractability of the likelihood function. Recently, several methods have been developed to approximate the maximum likelihood estimator and the optimal mean-square error predictor using Monte Carlo methods. Albeit asymptotically optimal, these methods come with several computational challenges and fundamental limitations. The contributions of this thesis can be divided into two main parts. In the first part, approximate solutions to the maximum likelihood problem are explored. Both analytical and numerical approaches, based on the expectation-maximization algorithm and the quasi-Newton algorithm, are considered. While analytic approximations are difficult to analyze, asymptotic guarantees can be established for methods based on Monte Carlo approximations. Yet, Monte Carlo methods come with their own computational difficulties; sampling in high-dimensional spaces requires an efficient proposal distribution to reduce the number of required samples to a reasonable value. In the second part, relatively simple prediction error method estimators are proposed. They are based on non-stationary one-step ahead predictors which are linear in the observed outputs, but nonlinear in the (assumed known) input. These predictors rely only on the first two moments of the model, and the computation of the likelihood function is not required. Consequently, the resulting estimators are defined via analytically tractable objective functions in several relevant cases. It is shown that, under mild assumptions, the estimators are consistent and asymptotically normal. In cases where the first two moments are analytically intractable due to the complexity of the model, it is possible to resort to vanilla Monte Carlo approximations. Several numerical examples demonstrate good performance of the suggested estimators in cases that are usually considered challenging.
/ QC 20171128
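The prediction error idea described in the abstract can be illustrated with a minimal sketch. The model, parameter values, and noise level below are hypothetical choices for illustration, not taken from the thesis: a stochastic nonlinear model whose mean-based one-step predictor is nonlinear in the known input but needs no likelihood evaluation, so the estimator reduces to an analytically tractable least-squares criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stochastic nonlinear model: y_t = theta * u_t^2 + e_t.
# The mean-based one-step predictor yhat_t(theta) = theta * u_t^2 is
# nonlinear in the known input u_t, uses only the first moment of the
# model, and requires no likelihood computation.
theta_true = 2.5
N = 1000
u = rng.standard_normal(N)
y = theta_true * u**2 + 0.3 * rng.standard_normal(N)

# Prediction error method: minimize sum_t (y_t - theta * u_t^2)^2.
# For this model the criterion is quadratic in theta, so the minimizer
# has a closed form.
phi = u**2
theta_hat = (phi @ y) / (phi @ phi)

print(theta_hat)  # should land close to theta_true = 2.5
```

For richer models the criterion is no longer quadratic, but as the abstract notes it remains analytically tractable in several relevant cases, and intractable moments can be replaced by vanilla Monte Carlo estimates.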
532

Diagnosis of electric induction machines in non-stationary regimes working in randomly changing conditions

Vedreño Santos, Francisco Jose 02 December 2013 (has links)
Traditionally, fault detection in electrical machines has relied on the Fast Fourier Transform, since most faults can be reliably diagnosed with it when the machines operate in stationary conditions for a reasonable interval of time. However, for applications in which machines operate under fluctuating load and speed (non-stationary conditions), such as wind generators, the Fast Fourier Transform must be replaced by other techniques. This thesis develops a new methodology for the diagnosis of squirrel-cage and wound-rotor induction machines operating in non-stationary conditions, based on the analysis of the fault components of the currents in the slip-frequency plane. The technique is applied to the diagnosis of stator asymmetries, rotor asymmetries, and the mixed-eccentricity fault. Diagnosing electrical machines in the slip-frequency domain gives the methodology a universal character, since it can diagnose electrical machines regardless of their characteristics, of the way in which the machine speed varies, and of their operating mode (motor or generator). The development of the methodology comprises the following stages: (i) Characterization of the evolution of the fault components for stator asymmetry, rotor asymmetry, and mixed eccentricity in squirrel-cage and wound-rotor induction machines as a function of speed (slip) and of the supply frequency of the grid to which the machine is connected. (ii) Given the importance of signal processing, an introduction to its basic concepts is presented before focusing on current signal-processing techniques for the diagnosis of electrical machines.
(iii) The fault components are extracted with three different filtering techniques: filters based on the Discrete Wavelet Transform, filters based on the Wavelet Packet Transform, and a new filtering technique proposed in this thesis, Spectral Filtering. The first two techniques extract the fault components in the time domain, while the new technique performs the extraction in the frequency domain. (iv) In some cases, extracting the fault components requires shifting their frequency; this shift is carried out with two techniques, the Frequency Shifting Theorem and the Hilbert Transform. (v) Unlike other existing techniques, the proposed methodology is not based exclusively on computing the energy of the fault components; it also studies the evolution of their instantaneous frequency, computed with two different techniques (the Hilbert Transform and the Teager-Kaiser operator), against slip. Plotting the instantaneous frequency against slip eliminates false-positive diagnoses, improving the precision and quality of the diagnosis, and also enables qualitative diagnoses that are fast and computationally inexpensive. (vi) Finally, given the importance of automating industrial processes, and to avoid the possible divergence of qualitative diagnosis, three objective diagnostic parameters are developed: the energy parameter, the similarity coefficient, and the regression parameters.
The energy parameter quantifies the severity of the fault through its value and is computed in both the time domain and the frequency domain (a consequence of extracting the fault components in the frequency domain). The similarity coefficient and the regression parameters are objective parameters that allow false-positive diagnoses to be discarded, increasing the robustness of the proposed methodology. The proposed diagnosis methodology is validated experimentally for stator and rotor asymmetry faults and for the mixed-eccentricity fault in squirrel-cage and wound-rotor induction machines fed from the grid and from frequency converters under stochastic non-stationary conditions. / Vedreño Santos, FJ. (2013). Diagnosis of electric induction machines in non-stationary regimes working in randomly changing conditions [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34177
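Stage (v), tracking the instantaneous frequency of a fault component, can be sketched with a numpy-only analytic signal (the same one-sided-spectrum construction the Hilbert Transform yields). The synthetic swept component below is an assumption for illustration, not a real machine current:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via a one-sided FFT spectrum, equivalent to x + j*Hilbert(x)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0                      # sampling frequency, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
f0, f1 = 40.0, 60.0              # a fault-like component sweeping 40 -> 60 Hz over 2 s
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / 4.0)
x = np.cos(phase)

z = analytic_signal(x)
# Instantaneous frequency = derivative of the unwrapped phase / (2*pi).
inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)

# At mid-signal (t = 1 s) the true instantaneous frequency is 50 Hz.
print(float(inst_freq[len(inst_freq) // 2]))
```

In the thesis the resulting frequency trajectory is plotted against slip rather than time, which is what removes speed dependence and makes the diagnosis valid under varying operating conditions.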
533

Determination of the fire resistance limit of reinforced concrete structures by calculation methods (master's thesis)

Дубинская, И. Ю., Dubinskaya, I. Yu. January 2024 (has links)
A method has been developed for determining the fire resistance limit with respect to loss of bearing capacity using a nonlinear deformation model, and the fire resistance limit with respect to thermal insulation capacity and loss of integrity has been assessed using the example of a reinforced concrete slab. Methods for solving the heat-transfer problem of fire resistance are presented.
534

Joint Source-Channel Coding Reliability Function for Single and Multi-Terminal Communication Systems

Zhong, Yangfan 15 May 2008 (has links)
Traditionally, source coding (data compression) and channel coding (error protection) are performed separately and sequentially, resulting in what we call a tandem (separate) coding system. In practical implementations, however, tandem coding might involve a large delay and a high coding/decoding complexity, since one needs to remove the redundancy in the source coding part and then insert certain redundancy in the channel coding part. On the other hand, joint source-channel coding (JSCC), which coordinates source and channel coding or combines them into a single step, may offer substantial improvements over the tandem coding approach. This thesis deals with the fundamental Shannon-theoretic limits for a variety of communication systems via JSCC. More specifically, we investigate the reliability function (which is the largest rate at which the coding probability of error vanishes exponentially with increasing blocklength) for JSCC for the following discrete-time communication systems: (i) discrete memoryless systems; (ii) discrete memoryless systems with perfect channel feedback; (iii) discrete memoryless systems with source side information; (iv) discrete systems with Markovian memory; (v) continuous-valued (particularly Gaussian) memoryless systems; (vi) discrete asymmetric 2-user source-channel systems. For the above systems, we establish upper and lower bounds for the JSCC reliability function and we analytically compute these bounds. The conditions for which the upper and lower bounds coincide are also provided. We show that the conditions are satisfied for a large class of source-channel systems, and hence exactly determine the reliability function. We next provide a systematic comparison between the JSCC reliability function and the tandem coding reliability function (the reliability function resulting from separate source and channel coding). 
We show that the JSCC reliability function is substantially larger than the tandem coding reliability function for most cases. In particular, the JSCC reliability function is close to twice as large as the tandem coding reliability function for many source-channel pairs. This exponent gain provides a theoretical underpinning and justification for JSCC design as opposed to the widely used tandem coding method, since JSCC will yield a faster exponential rate of decay for the system error probability and thus provides substantial reductions in complexity and coding/decoding delay for real-world communication systems. / Thesis (Ph.D, Mathematics & Statistics) -- Queen's University, 2008-05-13 22:31:56.425
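As a toy illustration of a reliability function (channel coding only, not the joint source-channel exponents established in the thesis), Gallager's random-coding exponent for a binary symmetric channel with uniform inputs can be evaluated by a grid search over the tilting parameter. The channel and rates below are arbitrary choices:

```python
import numpy as np

def random_coding_exponent(R, p, rhos=np.linspace(0.0, 1.0, 1001)):
    """E_r(R) = max_{0 <= rho <= 1} [E_0(rho) - rho*R] for a BSC(p), rates in bits.

    For uniform inputs on a BSC, E_0(rho) = rho - (1+rho)*log2(p^(1/(1+rho)) + (1-p)^(1/(1+rho))).
    """
    s = p ** (1.0 / (1.0 + rhos)) + (1.0 - p) ** (1.0 / (1.0 + rhos))
    E0 = rhos - (1.0 + rhos) * np.log2(s)
    return float(np.max(E0 - rhos * R))

p = 0.1
capacity = 1.0 + p * np.log2(p) + (1 - p) * np.log2(1 - p)   # ~0.531 bits/use
print(capacity)
print(random_coding_exponent(0.3, p) > 0.0)   # positive exponent below capacity
print(random_coding_exponent(0.6, p))          # exponent is zero above capacity
```

The exponent is the guaranteed exponential decay rate of the error probability with blocklength; the thesis's comparison of JSCC versus tandem coding is a comparison of exactly such exponents, with JSCC often achieving close to twice the tandem value.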
535

Optical and Thermal Behaviour of a Linear Solar Concentrator with a Stationary Reflector and a Moving Focus

Pujol Nadal, Ramon 30 July 2012 (has links)
The Fixed Mirror Solar Concentrator (FMSC) appeared during the 1970s with the aim of reducing costs in the production of electricity in solar thermal power plants. This design consists of a concentrator with a fixed reflector and a moving receiver; it integrates well into building roofs and can reach temperatures between 100 and 200 °C with acceptable efficiency. This thesis presents a methodology for determining the behaviour of the FMSC.
A simulation tool based on the forward ray-tracing method has been developed, which simulates the paths of solar rays through the optical system. The optical and thermal behaviour of the FMSC and of its curved-mirror variant, the Curved Slats Fixed Mirror Solar Concentrator (CSFMSC), have been analyzed with this tool. A parametric analysis has been carried out to determine the influence of the different parameters on the Incidence Angle Modifier (IAM) and to obtain the optimal designs at a temperature of 200 °C for three climates at different latitudes. The theoretical values obtained from the ray-tracing code have been compared with two experimental prototypes, showing a good fit in both cases. The tests were used to determine the efficiency curve of one of the prototypes, applying the method proposed in the standard EN 12975-2:2006 in combination with IAM values obtained by ray-tracing. It is shown that this combination can be effectively used to obtain the efficiency curve of complex collectors with a bi-axial IAM model.
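The core step of a forward ray-tracing code is specular reflection at the mirror surface. A minimal sketch for a generic parabolic cross-section (an illustrative geometry, not the FMSC/CSFMSC layout): vertical rays reflected off y = x²/(4f) must all cross the optical axis at the focal point (0, f).

```python
import numpy as np

f = 0.5  # focal length of the hypothetical parabolic reflector, in metres

def reflect_vertical_ray(x0):
    """Reflect a vertical downward ray hitting y = x^2/(4f) at x0; return its y-axis crossing."""
    y0 = x0**2 / (4 * f)
    # Unit normal of the surface y - x^2/(4f) = 0 at x0 is proportional to (-x0/(2f), 1).
    n = np.array([-x0 / (2 * f), 1.0])
    n /= np.linalg.norm(n)
    d = np.array([0.0, -1.0])             # incoming ray direction (straight down)
    r = d - 2 * np.dot(d, n) * n          # specular reflection law
    t = -x0 / r[0]                        # ray parameter where the reflected ray reaches x = 0
    return y0 + t * r[1]

crossings = [reflect_vertical_ray(x) for x in (0.1, 0.25, 0.4)]
print(crossings)  # all crossings equal the focal height f = 0.5
```

A full tool adds sun-position-dependent incidence directions, mirror slope errors, and receiver intercept tests on top of this reflection primitive, which is how IAM curves are built up numerically.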
536

An Empirical Analysis of Market Indicators for TAIEX Options

Wu, Jian-Min Unknown Date (has links)
This study systematically collects closing data from the Taiwan index futures and options markets over the 495 trading days from August 12, 2003 to September 30, 2005, covering the put/call volume ratio (P/C volume), the put/call open-interest ratio (P/C open interest), implied volatility (AIV), and historical volatilities over different horizons, and examines the relationships between these factors and market trends as well as their interactions. The results confirm that analysis of TAIEX option indicators must distinguish periods before and after major financial shocks, as well as uptrend, downtrend, and consolidation periods; the option indicators carry different implications in each period. The thesis shows that a Chow-ARMA(2,1) model with structural breaks may better match the simulated index, and that a GARCH(1,1) model is also well suited to describing the volatility of Taiwan index futures. As for the option indicators: the P/C volume ratio and AIV are negatively correlated with the index futures, while the P/C open-interest ratio is positively correlated. The P/C open-interest ratio has the largest influence on index movements, followed by the P/C volume ratio, with AIV having the smallest influence. Splitting implied volatility into separate call and put volatilities predicts market trends more effectively; during sharp declines the call and put implied volatilities show superior predictive power, and the put implied volatility (PIV) of the previous two periods is an especially efficient indicator. The empirical results indicate that a 20-day historical volatility tracks changes in the options market more closely than the 90-day horizon conventionally used in textbooks. Comparing historical and implied volatility, the conclusion is that when historical volatility exceeds the call implied volatility (CIV) during a sharp decline, calls are undervalued; none of the other hypothesized conditions hold. There are two reasons: first, market efficiency determines whether the relation between implied and historical volatility can be exploited; second, market information is reflected more sharply during sharp declines than during sharp rises, and put prices react faster and more sharply than call prices during sharp declines. The Chow-ARMA(2,1) index futures model, the GARCH(1,1) volatility model, the multivariate P/C volume-P/C open interest-AIV model, and the FMA20/XIV model derived in this study are of reference value for judging index movements and can serve as one basis for option trading strategies.
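The GARCH(1,1) volatility recursion used in the study can be sketched as follows; the parameter values are illustrative assumptions, not the thesis's estimates for TAIEX futures:

```python
import numpy as np

rng = np.random.default_rng(42)

# GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
omega, alpha, beta = 0.1, 0.1, 0.8           # alpha + beta < 1 => covariance stationary
uncond_var = omega / (1.0 - alpha - beta)    # long-run variance = 1.0 here

N = 100_000
r = np.empty(N)
sigma2 = np.empty(N)
sigma2[0] = uncond_var
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, N):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# The time average of the conditional variance should be near the long-run variance.
print(float(sigma2.mean()))
```

In practice the three parameters are fitted by maximum likelihood to the futures returns; the recursion then yields one-step-ahead volatility forecasts of the kind compared against 20-day and 90-day historical volatilities in the thesis.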
537

A general non-stationarity measure: Application to biomedical image and signal processing

Xu, Yanli 04 October 2013 (has links)
Intensity variation is often used in signal and image processing algorithms after being quantified by a measurement method. The method for measuring and quantifying intensity variation is called a "change measure"; it is commonly used in signal change detection, image edge detection, edge-based segmentation models, feature-preserving smoothing, and related methods. In these methods the change measure plays such an important role that their performance is strongly affected by the result of the measurement of changes. When processing biomedical images or signals with a high noise level or strong randomness, the existing change measures may provide inaccurate information on changes, which leads to undesirable artifacts in the results of the methods that rely on them. On the other hand, new medical imaging techniques produce new, multi-valued data types that require suitably adapted change measures; how to robustly measure changes in these tensor-valued data is a new problem in image and signal processing.
In this context, a change measure called the Non-Stationarity Measure (NSM) is improved and extended into a general and robust change measure able to quantify changes in multidimensional data of different types (scalar, vector, tensor) with respect to different statistical parameters. An NSM-based change detection method and an NSM-based edge detection method are proposed and applied, respectively, to detect changes in ECG and EEG signals and to detect edges in cardiac diffusion-weighted (DW) images. Experimental results show that the NSM-based detection methods provide accurate positions of change points and edges while effectively reducing false detections. An NSM-based geometric active contour (NSM-GAC) model is proposed and applied to segment ultrasound images of the carotid. Segmentation results show that the NSM-GAC model yields better results with fewer iterations and less computation time than existing tools, and reduces false contours and leakages. Last, and most important, a new feature-preserving smoothing approach called Non-stationarity Adaptive Filtering (NAF) is proposed and applied to enhance human cardiac DW images. Experimental results show that the proposed method achieves a better compromise between smoothing homogeneous regions and preserving desirable features such as edges and boundaries, leading to more homogeneous and consistent tensor fields and, consequently, more coherently reconstructed cardiac fibers.
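The NSM itself is developed in the thesis; as a generic stand-in, a sliding-window statistic based on a log-variance ratio illustrates the kind of change detection involved. The signal, window size, and statistic below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D signal whose variance (a statistical parameter) jumps at sample 500.
x = np.concatenate([0.1 * rng.standard_normal(500),
                    1.0 * rng.standard_normal(500)])

def change_statistic(x, w=50):
    """|log variance ratio| between the w samples left and right of each index."""
    stat = np.zeros(len(x))
    for i in range(w, len(x) - w):
        v_left = np.var(x[i - w:i])
        v_right = np.var(x[i:i + w])
        stat[i] = abs(np.log(v_right / v_left))
    return stat

stat = change_statistic(x)
change_point = int(np.argmax(stat))
print(change_point)  # should land near the true change at index 500
```

A measure of this family responds to a change in a chosen statistical parameter rather than to raw amplitude, which is what makes it usable on noisy ECG/EEG signals where simple gradient-based measures produce false detections.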
538

The influence of internal thermal storage mass used in passive houses' construction systems on their summer thermal stability

Němeček, Martin January 2018 (has links)
In recent years we have observed growth in the construction of passive and low-energy houses using lightweight structures, such as modern timber houses. It is commonly assumed that timber houses overheat more than brick houses during the summer. Given the lack of research in this field, this thesis investigates the influence of the internal thermal storage mass of passive-house constructions on their summer thermal stability under Czech climatic conditions. Only sensible heat accumulation, without the use of phase-change materials, is examined. Differences between timber houses and brick-built houses are emphasized. The objects of research are mostly residential passive houses built to low-energy standards; however, the results may be applied to other types of buildings as well. The first section outlines the theoretical fundamentals. The research itself combines several scientific methods: basic mathematical calculations, experimental temperature measurements in two buildings (detached houses in Dubňany and Moravany), and numerical simulations. The thesis's own contributions include, above all, a discussion of thermal accumulation and of the calculation of structures' heat capacity. The experimental measurements provided conclusive evidence of the importance of internal thermal storage mass with respect to summer interior overheating. The research confirmed that the highest interior temperature reached is mostly influenced by solar gains through unshaded windows; however, the influence of internal thermal storage mass is not negligible. Comparing a standard timber-framed house with a house built of hollow ceramic bricks, the timber house overheats by about 0.5 °C more during a standard day. A wider spread in the maximum temperatures reached was measured for lightweight timber constructions without any internal thermal storage mass. Such structures should therefore have an additional layer of thermal storage mass.
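The effect of thermal storage mass on the summer temperature swing can be sketched with a lumped-capacitance model; all parameter values below are illustrative assumptions, not measurements from the Dubňany or Moravany houses:

```python
import math

def daily_swing(C, UA=150.0, q_peak=2000.0, days=10, dt=60.0):
    """Euler-integrate C*dT/dt = q_solar(t) - UA*(T - T_out); return the last day's swing in K.

    C is the lumped heat capacity (J/K), UA the heat loss coefficient (W/K),
    q_peak the peak solar gain (W). T_out is held constant for simplicity.
    """
    T, T_out = 24.0, 24.0
    temps = []
    steps_per_day = int(86400 / dt)
    for step in range(days * steps_per_day):
        hour = (step * dt / 3600.0) % 24.0
        # crude solar gain profile: half-sine between 6:00 and 18:00, zero at night
        q = q_peak * max(0.0, math.sin(math.pi * (hour - 6.0) / 12.0))
        T += dt * (q - UA * (T - T_out)) / C
        if step >= (days - 1) * steps_per_day:
            temps.append(T)
    return max(temps) - min(temps)

light = daily_swing(C=5.0e6)   # lightweight timber structure
heavy = daily_swing(C=5.0e7)   # heavy masonry structure, 10x the heat capacity
print(light > heavy)           # more storage mass -> smaller daily temperature swing
```

The larger capacity raises the thermal time constant C/UA well above one day, so the daily solar forcing is smoothed out, which is the mechanism behind the measured 0.5 °C difference in the thesis.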
539

Advanced Stochastic Signal Processing and Computational Methods: Theories and Applications

Robaei, Mohammadreza 08 1900 (has links)
Compressed sensing has been proposed as a computationally efficient method to estimate finite-dimensional signals. The idea is to develop an undersampling operator that can sample large but finite-dimensional sparse signals at a rate well below the required Nyquist rate. In other words, considering the sparsity level of the signal, compressed sensing samples the signal at a rate proportional to the amount of information hidden in the signal. In this dissertation, we first employ compressed sensing for physical-layer signal processing of directional millimeter-wave communication. Second, we go through the theoretical aspects of compressed sensing by running a comprehensive theoretical analysis to address two main unsolved problems: (1) continuous-extension compressed sensing in locally convex spaces and (2) computing the optimum subspace and its dimension using the idea of equivalent topologies and Köthe sequence spaces. In the first part of this thesis, we employ compressed sensing to address various problems in directional millimeter-wave communication. In particular, we focus on the stochastic characteristics of the underlying channel to characterize, detect, estimate, and track angular parameters of doubly directional millimeter-wave communication. For this purpose, we employ compressed sensing in combination with other stochastic methods such as Correlation Matrix Distance (CMD), spectral overlap, autoregressive processes, and fuzzy entropy to (1) study the (non-)stationary behavior of the channel and (2) estimate and track channel parameters. This class of applications involves finite-dimensional signals. Compressed sensing demonstrates great capability in sampling finite-dimensional signals; nevertheless, it does not show the same performance in sampling semi-infinite and infinite-dimensional signals. The second part of the thesis is more theoretical work on compressed sensing toward application.
In chapter 4, we leverage group Fourier theory and the stochastic nature of directional communication to introduce linear and quadratic families of displacement operators that track joint-distribution signals by mapping the old coordinates to the predicted new coordinates. We show that the continuous linear time-variant millimeter-wave channel can be represented as the product of the channel Wigner distribution and the doubly directional channel. We note that the localization operators in the given model are non-associative structures. The structures of the linear and quadratic localization operators, considered as group and quasigroup structures, are studied thoroughly. In the last two chapters, we propose continuous compressed sensing to address infinite-dimensional signals and apply the developed methods to a variety of applications. In chapter 5, we extend the Hilbert-Schmidt integral operator to the compressed sensing Hilbert-Schmidt integral operator through the Kolmogorov conditional extension theorem. Two solutions for the compressed sensing Hilbert-Schmidt integral operator are proposed: (1) through Mercer's theorem and (2) through Green's theorem. We call the solution space the Compressed Sensing Karhunen-Loève Expansion (CS-KLE) because of its deep relation to the conventional Karhunen-Loève Expansion (KLE). The close relation between CS-KLE and KLE is studied in the Hilbert space, with some additional structures inherited from the Banach space. We examine CS-KLE through a variety of finite-dimensional and infinite-dimensional compressible vector spaces. Chapter 6 proposes a theoretical framework to study the uniform convergence of a compressible vector space by formulating compressed sensing in locally convex Hausdorff space, also known as Fréchet space.
We examine the existence of an optimum subspace comprehensively and propose a method to compute the optimum subspace of both finite-dimensional and infinite-dimensional compressible topological vector spaces. To the author's best knowledge, this is the first work to propose continuous compressed sensing that does not require any information about the local infinite-dimensional fluctuations of the signal.
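Finite-dimensional compressed sensing recovery, the starting point of the dissertation, can be sketched with orthogonal matching pursuit, a standard greedy solver (the dimensions, sparsity, and random operator below are arbitrary illustrative choices, not the dissertation's millimeter-wave setup):

```python
import numpy as np

rng = np.random.default_rng(7)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, re-fitting by least squares each step."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with the residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

n, m, k = 100, 60, 3                          # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random undersampling operator
x = np.zeros(n)
x[[5, 37, 80]] = [1.0, -2.0, 0.5]             # k-sparse ground truth
y = A @ x                                     # m << n measurements

x_hat = omp(A, y, k)
print(float(np.linalg.norm(x_hat - x)))       # recovery error, expected to be tiny
```

The operator samples at m = 60 far below the ambient dimension n = 100, yet the 3-sparse signal is recovered, which is the finite-dimensional behavior the later chapters seek to extend to infinite-dimensional, locally convex settings.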
