321

Relação entre desempenho social corporativo e desempenho financeiro de empresas no Brasil / The relation between corporate social performance and financial performance of firms in Brazil

Borba, Paulo da Rocha Ferreira 29 June 2005 (has links)
The main objective of this dissertation is to analyze the relation between the corporate social performance and the financial performance of Brazilian firms. More specifically, it investigates the causal sequence and the direction (positive or negative) of the relationship between the variables that represent the two performance concepts. For that purpose, two variables were used to represent financial performance at market values, three variables were used to represent financial performance at book values, and a Corporate Social Performance Indicator, constructed in this dissertation, was used to represent the social performance of firms, based on the Social Balance Sheets published according to the model of the Instituto Brasileiro de Análises Sociais e Econômicas (IBASE). In addition, two control variables considered relevant by the theory, firm size and industry, were included in the statistical model. The analysis period (2000 to 2002) and the research sample were relatively small, mainly because of the fragility of the available corporate social performance data and the early stage of the topic in Brazilian management research. The analyses were carried out over yearly periods, with and without a one-year lag, so that six alternative hypotheses proposed by an existing model could be tested. The statistical analysis relied on a correlation matrix, multiple linear regression models estimated by ordinary least squares, regressions with heteroskedasticity-robust standard errors, and robust regression. Most of the results were unable to reject the null hypothesis of the model, that is, that there is no statistically significant relation between corporate social and financial performance. However, the regressions that used accounting indicators of financial performance suggested, in some analysis periods, a positive relation between the two forms of performance, partly supporting the idea that stakeholder management leads to superior financial performance. The causal sequence of the relationship was not clear, though: better or worse corporate social performance appeared as a cause of better or worse financial performance, but the reverse also held. In turn, the relation between the market-based indicators of financial performance and the corporate social performance indicator was quite contradictory, which is consistent with the results of previous research on the topic. Finally, the firm-size control variable was not significant in the model, while the industry control variable produced very mixed results. The results are therefore largely inconclusive, which the existing literature explains by conceptual limitations, such as the lack of definition of key concepts, and empirical limitations, such as the absence or inadequacy of databases, that affect most research on the topic and are aggravated in the Brazilian context.
For future research, special attention should be paid to the representation of corporate social performance, making it more comprehensive and robust, and to the use of different time windows in the statistical model, which would allow more conclusive results. Recent initiatives by institutions in the country meet the needs of researchers interested in the topic, which stimulates and favors future research.
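As a rough, hypothetical illustration of the estimation strategy described in this abstract (not the author's code or data), the sketch below regresses an accounting performance measure on a social performance indicator with size and industry controls, using OLS with heteroskedasticity-robust standard errors; all variable names and the synthetic data are invented.

```python
# Minimal sketch of the regression setup described in the abstract (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # hypothetical firm-year observations
df = pd.DataFrame({
    "csp_indicator": rng.uniform(0, 1, n),                # corporate social performance score
    "firm_size": rng.normal(14, 2, n),                    # e.g. log of total assets
    "industry": rng.choice(["manufacturing", "utilities", "finance"], n),
})
# Synthetic accounting-based performance measure (e.g. ROA), for illustration only.
df["roa"] = 0.02 + 0.05 * df["csp_indicator"] + 0.001 * df["firm_size"] + rng.normal(0, 0.03, n)

# OLS with heteroskedasticity-robust (HC3) standard errors, industry as a fixed effect.
model = smf.ols("roa ~ csp_indicator + firm_size + C(industry)", data=df)
result = model.fit(cov_type="HC3")
print(result.summary())
```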
322

Estimation of the prevalence of psychiatric mental disorders in the Shatin community.

January 2001 (has links)
Leung Siu-Ngan.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 72-74).
Abstracts in English and Chinese.
Contents:
  Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Background --- p.1
    Chapter 1.2 --- Structure and Contents of Data Sets --- p.6
  Chapter 2 --- Estimation of Prevalence of Mental Disorders --- p.10
    Chapter 2.1 --- Likelihood Function Approach --- p.10
    Chapter 2.2 --- Maximum Likelihood Estimation via EM Algorithm --- p.13
    Chapter 2.3 --- The SEM Algorithm --- p.16
  Chapter 3 --- Estimation of Lifetime Comorbidity --- p.24
    Chapter 3.1 --- What is Comorbidity? --- p.24
    Chapter 3.2 --- Likelihood Function Approach --- p.25
      Chapter 3.2.1 --- Likelihood Function Model --- p.27
      Chapter 3.2.2 --- Maximum Likelihood Estimation via EM Algorithm --- p.28
      Chapter 3.2.3 --- Odds Ratio --- p.31
  Chapter 4 --- Logistic Regression --- p.35
    Chapter 4.1 --- Imputation Method of Missing Values --- p.35
      Chapter 4.1.1 --- Hot Deck Imputation --- p.35
      Chapter 4.1.2 --- A Logistic Regression Imputation Model for Dichotomous Response --- p.40
    Chapter 4.2 --- Combining Results from Different Imputed Data Sets --- p.47
    Chapter 4.3 --- Itemization on Screening --- p.60
      Chapter 4.3.1 --- Methods of Weighting on the Screening Questions --- p.61
      Chapter 4.3.2 --- Statistical Analysis --- p.62
  Chapter 5 --- Conclusion and Discussion --- p.68
  Appendix: SRQ Questionnaire --- p.69
  Bibliography --- p.72
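For readers unfamiliar with the EM approach listed in the contents, here is a minimal, generic sketch of maximum likelihood prevalence estimation under partial observation, assuming a simplified two-phase design in which every subject has a screening result but only some receive the diagnostic assessment (missing at random given the screen). It illustrates the idea only, not the model developed in this thesis; the counts are invented.

```python
# EM for prevalence when diagnoses are observed only for a verified subsample
# (missingness assumed at random given the screening result).
import numpy as np

# Hypothetical counts per screening stratum s (0 = screen-negative, 1 = screen-positive):
# n_verified[s]  -> subjects with an observed diagnosis
# k_positive[s]  -> diagnosed cases among the verified
# n_missing[s]   -> subjects whose diagnosis is missing
n_verified = np.array([400.0, 100.0])
k_positive = np.array([20.0, 60.0])
n_missing = np.array([300.0, 80.0])

theta = np.array([0.5, 0.5])  # initial P(disorder | screen = s)
for _ in range(200):
    # E-step: expected number of cases among subjects with a missing diagnosis.
    expected_cases = n_missing * theta
    # M-step: update the stratum-specific probabilities.
    theta = (k_positive + expected_cases) / (n_verified + n_missing)

weights = (n_verified + n_missing) / (n_verified + n_missing).sum()
prevalence = float(np.sum(weights * theta))
print("P(disorder | screen=s):", theta, " estimated prevalence:", prevalence)
```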
323

Robust utility maximization, f-projections, and risk constraints

Gundel, Anne 01 June 2006 (has links)
Finding payoff profiles that maximize the expected utility of an agent under a budget constraint is a key problem in financial mathematics. We characterize optimal contingent claims for an agent who is uncertain about the market model. The dual approach that we use leads to a minimization problem for a certain convex functional over two sets of probability measures, which we first have to solve. Finally, we incorporate a second constraint that limits the risk the agent is allowed to take. We proceed as follows. Chapter 1: Given a convex function f, we consider the problem of minimizing the f-divergence f(P|Q) over these two sets of measures. We show that, if the first set is closed and the second set is weakly compact, a minimizer exists if f(infinity)/infinity = infinity. Furthermore, we show that if the second set of measures is weakly compact and f(infinity)/infinity = 0, then there is a minimizer in a class of extended martingale measures. Chapter 2: The existence results of Chapter 1 yield the existence of a contingent claim that maximizes the robust utility functional inf E_Q[u(X)] over a set of affordable contingent claims, where the infimum is taken over a set of subjective or model measures. The key idea is to identify the minimizing measures from the first chapter as certain worst-case measures. Chapter 3: Finally, we require the risk of the contingent claims to be bounded. We solve the robust problem in an incomplete market for a utility function that is only defined on the positive half-line.
In an example we compare the optimal claim under this risk constraint with the optimal claims without a risk constraint and under a value-at-risk constraint.
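To make the f-projection step concrete, a small numerical sketch for discrete measures follows: the relative entropy (f(x) = x log x) of P with respect to a reference measure Q is minimized over a convex set of P defined by a moment constraint. The distributions, the payoff levels and the constraint are invented, and scipy's generic optimizer stands in for the analytic projection.

```python
# f-projection sketch: minimize the f-divergence f(P|Q) = sum_i q_i * f(p_i / q_i)
# over a convex set of discrete measures P (here: the simplex plus a moment constraint).
import numpy as np
from scipy.optimize import minimize

q = np.array([0.4, 0.3, 0.2, 0.1])       # reference measure Q
payoff = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical payoff levels

def f_divergence(p, f=lambda x: x * np.log(x)):
    ratio = np.clip(p, 1e-12, None) / q
    return float(np.sum(q * f(ratio)))

# Constraint set: probability vectors whose expected payoff equals a target value.
target = 3.0
constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p @ payoff - target},
]
res = minimize(f_divergence, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=constraints)
print("f-projection of Q onto the constraint set:", np.round(res.x, 4))
```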
324

Algorithmes de mise à l'échelle et méthodes tropicales en analyse numérique matricielle / Scaling algorithms and tropical methods in numerical matrix analysis

Sharify, Meisam 01 September 2011 (has links) (PDF)
Tropical algebra can be considered a relatively new field of mathematics. It appears in several areas such as optimization, synchronization of production and transport, discrete event systems, optimal control, operations research, etc. The first part of this manuscript is devoted to applications of tropical algebra to numerical matrix analysis. We first consider the classical problem of estimating the roots of a univariate polynomial. We prove several new bounds on the absolute values of the roots of a polynomial by exploiting tropical methods. These results are particularly useful for polynomials whose coefficients have different orders of magnitude. We then examine the problem of computing the eigenvalues of a matrix polynomial. Here we introduce a general scaling technique, based on tropical algebra, which applies in particular to the companion form. This scaling relies on the construction of an auxiliary tropical polynomial function that depends only on the norms of the matrices. The roots (the points of non-differentiability) of this tropical polynomial provide a priori estimates of the absolute values of the eigenvalues. This is justified in particular by a new result showing that, under certain assumptions on the conditioning, there is a group of eigenvalues that are bounded in norm, with the order of magnitude of these bounds given by the largest root of the auxiliary tropical polynomial. A similar result holds for a group of small eigenvalues. We show experimentally that this scaling improves numerical stability, in particular when the data have different orders of magnitude. We also study the problem of computing the tropical eigenvalues (the points of non-differentiability of the characteristic polynomial) of a tropical matrix polynomial. From a combinatorial point of view, this problem is equivalent to computing a coupling function: the value of a maximum-weight matching in a bipartite graph whose arcs are weighted by convex piecewise-linear functions. We have developed an algorithm that computes these tropical eigenvalues in polynomial time. In the second part of this thesis, we are interested in solving very large optimal assignment problems, for which classical sequential algorithms are not efficient. We propose a new approach that exploits the connection between the optimal assignment problem and the entropy maximization problem. This approach leads to a preprocessing algorithm for the optimal assignment problem based on an iterative method that eliminates the entries that do not belong to an optimal assignment. We consider two iterative variants of the preprocessing algorithm, one using the Sinkhorn method and the other using Newton's method. This preprocessing reduces the initial problem to a much smaller one in terms of memory requirements. We also introduce a new iterative method based on a modification of the Sinkhorn algorithm, in which a deformation parameter is slowly increased.
We prove that this iterative method (the deformed Sinkhorn iteration) converges to a matrix whose nonzero entries are exactly those belonging to the optimal permutations. An estimate of the convergence rate is also given.
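The tropical roots used above can be computed directly: for a polynomial sum_k a_k x^k they are the points of non-differentiability of u -> max_k (log|a_k| + k u), read off the slopes of the upper Newton polygon of the points (k, log|a_k|). The sketch below applies this standard construction to an invented polynomial with coefficients of very different magnitudes and compares the tropical roots with the moduli of the ordinary roots; it is not the thesis's own code.

```python
# Tropical roots of a univariate polynomial via the upper Newton polygon.
import numpy as np

def upper_hull(points):
    """Upper convex hull of points sorted by x (left to right)."""
    hull = []
    for p in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop while the middle point lies on or below the chord (non-right turn).
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def tropical_roots(coeffs):
    """coeffs[k] is the coefficient of x**k; returns (root, multiplicity) pairs."""
    pts = [(k, np.log(abs(a))) for k, a in enumerate(coeffs) if a != 0]
    hull = upper_hull(pts)
    roots = []
    for (k1, c1), (k2, c2) in zip(hull[:-1], hull[1:]):
        slope = (c2 - c1) / (k2 - k1)
        roots.append((np.exp(-slope), k2 - k1))  # root with multiplicity k2 - k1
    return roots

# Coefficients with very different orders of magnitude (illustrative only).
coeffs = [1e-8, 2.0, 1e3, 5e-2, 1.0]   # p(x) = 1e-8 + 2x + 1e3 x^2 + 5e-2 x^3 + x^4
print("tropical roots:", tropical_roots(coeffs))
print("moduli of the classical roots:",
      sorted(abs(r) for r in np.roots(coeffs[::-1])))
```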
325

Ordonnancement en régime permanent sur plates-formes hétérogènes / Steady-state scheduling on heterogeneous platforms

Gallet, Matthieu 20 October 2009 (has links) (PDF)
The work presented in this thesis deals with scheduling applications on large-scale heterogeneous platforms. Since the general problem is too complex to be solved exactly, we consider two relaxations. Divisible load: the first part is devoted to divisible loads, which are perfectly parallel applications that can be arbitrarily subdivided and distributed over many processors. We seek to minimize the total execution time when several applications with different characteristics are executed on a linear network of processors, given that the data may be distributed in several rounds. With the number of rounds fixed, we describe an optimal algorithm that determines these rounds precisely, and we show that any optimal solution requires an infinite number of rounds, a result that remains true on star-shaped rather than linear platforms. We also compare our method with existing ones. Steady-state scheduling: the second part deals with scheduling many copies of the same task graph representing a given application. Instead of trying to minimize the total execution time, we optimize only the core of the schedule. First, we study cyclic schedules of these applications on heterogeneous platforms, based on a single allocation to make them easier to use. Since this problem is NP-complete, we give not only an optimal algorithm but also several heuristics that quickly produce efficient schedules. We compare them with classical scheduling methods such as HEFT. Second, we study simpler applications made of many independent tasks to be executed on a star-shaped platform. Since the characteristics of these tasks vary, we assume that they can be modeled by random variables. This allows us to propose an epsilon-approximation in the clairvoyant setting, in which the scheduler has all the necessary information. We also present heuristics for the non-clairvoyant setting. These methods show that, despite the dynamic nature of the tasks, it remains worthwhile to use a static schedule rather than more dynamic strategies such as On-Demand. We then consider applications in which several tasks are replicated on several processors of the computing platform in order to improve the total throughput. In that case, even if the different instances are distributed to the processors in turn, computing the throughput is difficult. Modeling the problem with timed Petri nets, we show how to compute it, and we prove that this computation can be done in polynomial time under the Strict One-Port model. Finally, the last chapter is devoted to applying these techniques to a heterogeneous multicore processor, IBM's Cell. We present a theoretical model of this processor and an adapted scheduling algorithm. A real implementation of this scheduler was carried out, achieving good throughput while simplifying the use of this processor and validating our theoretical model.
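For the steady-state part, a classical and much simpler illustration than the models of the thesis is the bandwidth-centric allocation of independent, identical tasks on a star platform with a one-port master: workers are served in non-decreasing order of communication cost until the master's outgoing link saturates. The speeds and costs below are invented.

```python
# Bandwidth-centric steady-state throughput on a star platform (one-port master).
# c[i]: time for the master to send one task to worker i
# w[i]: time for worker i to process one task
c = [1.0, 2.0, 3.0, 4.0]
w = [3.0, 2.0, 5.0, 1.0]

throughput = 0.0
link_busy = 0.0  # fraction of time the master's link is busy, at most 1
# Serve workers in non-decreasing communication cost: each fully fed worker
# consumes tasks at rate 1/w and occupies the link for a fraction c/w.
for ci, wi in sorted(zip(c, w)):
    need = ci / wi                     # link occupation needed to keep this worker busy
    if link_busy + need <= 1.0:
        rate = 1.0 / wi                # worker fully utilized
        link_busy += need
    else:
        rate = (1.0 - link_busy) / ci  # worker only partially fed
        link_busy = 1.0
    throughput += rate
    if link_busy >= 1.0:
        break

print("steady-state throughput (tasks per time unit):", throughput)
```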
326

Probabilistic models in noisy environments: and their application to a visual prosthesis for the blind

Archambeau, Cédric 26 September 2005 (has links)
In recent years, probabilistic models have become fundamental techniques in machine learning. They are successfully applied to various engineering problems, such as robotics, biometrics, brain-computer interfaces or artificial vision, and will gain in importance in the near future. This work deals with the difficult but common situation where the data are either very noisy or scarce compared to the complexity of the process to be modeled. We focus on latent variable models, which can be formalized as probabilistic graphical models and learned by the expectation-maximization algorithm or its variants (e.g., variational Bayes).

After carefully studying a non-exhaustive list of multivariate kernel density estimators, we established that in most applications locally adaptive estimators should be preferred. Unfortunately, these methods are usually sensitive to outliers and often have too many parameters to set. Therefore, we focus on finite mixture models, which do not suffer from these drawbacks provided some structural modifications are made.

Two questions are central in this dissertation: (i) how to make mixture models robust to noise, i.e., deal efficiently with outliers, and (ii) how to exploit side-channel information, i.e., additional information intrinsic to the data. To tackle the first question, we extend the training algorithms of the popular Gaussian mixture models to Student-t mixture models. The Student-t distribution can be viewed as a heavy-tailed alternative to the Gaussian distribution, the robustness being tuned by an extra parameter, the degrees of freedom. Furthermore, we introduce a new variational Bayesian algorithm for learning Bayesian Student-t mixture models. This algorithm leads to very robust density estimation and clustering. To address the second question, we introduce manifold constrained mixture models. This new technique exploits the information that the data live on a manifold of lower dimension than the feature space. Taking the implicit geometrical arrangement of the data into account results in better generalization on unseen data.

Finally, we show that the latent variable framework used for learning mixture models can be extended to construct probabilistic regularization networks, such as Relevance Vector Machines. Subsequently, we make use of these methods in the context of an optic nerve visual prosthesis to restore partial vision to blind people whose optic nerve is still functional. Although visual sensations can be induced electrically in the blind's visual field, the coding scheme of the visual information along the visual pathways is poorly known. Therefore, we use probabilistic models to link the stimulation parameters to the features of the visual perceptions. Both black-box and grey-box models are considered. The grey-box models take advantage of known neurophysiological information and are more instructive to medical doctors and psychologists.
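The robustness argument for heavy-tailed models can be illustrated with a toy one-dimensional example (not the mixture models of the thesis): fit a Gaussian and a Student-t by maximum likelihood to data contaminated with outliers and compare the location estimates. The data are synthetic.

```python
# Toy illustration: a Student-t fit resists outliers better than a Gaussian fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
clean = rng.normal(loc=0.0, scale=1.0, size=200)
outliers = rng.normal(loc=15.0, scale=1.0, size=10)   # 5% gross outliers
data = np.concatenate([clean, outliers])

mu_gauss, sigma_gauss = stats.norm.fit(data)           # ML Gaussian fit
df_t, mu_t, sigma_t = stats.t.fit(data)                # ML Student-t fit (df estimated)

print(f"Gaussian location estimate : {mu_gauss:.3f}")  # pulled toward the outliers
print(f"Student-t location estimate: {mu_t:.3f} (df ~ {df_t:.1f})")
```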
327

Evaluating the benefits and effectiveness of public policy

Sandström, F. Mikael January 1999 (has links)
The dissertation consists of four essays that treat different aspects of the evaluation of public policy. Two essays are applications of the travel cost method. In the first of these, recreational travel to the Swedish coast is studied to obtain estimates of the social benefits from reduced eutrophication of the sea. The second travel cost essay estimates how the probability that a woman will undergo mammographic screening for breast cancer is affected by the distance she has to travel to undergo such an examination. Using these estimated probabilities, the woman's valuation of the examination is obtained. The two other essays deal with automobile taxation. One essay analyzes how taxation and the Swedish eco-labeling system for automobiles have affected the sales of different car models. The last essay treats the effects of taxes and of scrappage premiums on the lifetime of cars. / Diss. Stockholm : Handelshögskolan, 1999
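As a sketch of the second travel cost application (with invented data and variable names, not the essay's), one could model the probability of undergoing screening as a logit in travel cost and read an implied valuation off the cost coefficient:

```python
# Sketch of a travel cost style participation model: screening uptake vs. travel cost.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
travel_cost = rng.uniform(0, 50, n)                 # hypothetical round-trip cost
latent = 2.0 - 0.08 * travel_cost + rng.logistic(size=n)
attended = (latent > 0).astype(int)

df = pd.DataFrame({"attended": attended, "travel_cost": travel_cost})
logit = smf.logit("attended ~ travel_cost", data=df).fit(disp=False)
print(logit.params)

# Under the standard logit consumer-surplus formula, the money value attached to
# the examination is the log-sum divided by the (absolute) cost coefficient.
beta_cost = logit.params["travel_cost"]
v = logit.params["Intercept"]                        # utility at zero travel cost
value = np.log(1 + np.exp(v)) / abs(beta_cost)
print("implied per-person value at zero travel cost:", round(value, 2))
```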
328

Interrogation of Nucleic Acids by Parallel Threading

Pettersson, Erik January 2007 (has links)
Advancements in the field of biotechnology are expanding the scientific horizon, and a promising era of personalized medicine for improved health is envisioned. The amount of genetic data is growing at an ever-escalating pace thanks to novel technologies that allow massively parallel sequencing and whole-genome genotyping, supported by advances in computer science and information technology. As the amount of information stored in databases throughout the world grows and our knowledge deepens, genetic signatures of significant importance are discovered. Such a set, surfaced in the data mining process, may include causative or marker single nucleotide polymorphisms (SNPs), revealing predisposition to disease, or gene expression signatures profiling a pathological state. When targeting a reduced set of signatures in a large number of samples for diagnostic or fine-mapping purposes, efficient interrogation and scoring require appropriate preparations. These needs are met by miniaturized and parallelized platforms that allow low sample and template consumption. This doctoral thesis describes an attempt to tackle some of these challenges through the design and implementation of a novel assay denoted Trinucleotide Threading (TnT). The method permits multiplex amplification of a medium-sized set of specific loci and was adapted to genotyping, gene expression profiling and digital allelotyping. Utilizing a reduced number of nucleotides permits specific amplification of targeted loci while preventing the generation of spurious amplification products. This method was applied to genotype 96 individuals for 75 SNPs. In addition, the accuracy of genotyping from minute amounts of genomic DNA was confirmed. The procedure was performed using a robotic workstation running custom-made scripts, and a software tool was implemented to facilitate the assay design. Furthermore, a statistical model was derived from the molecular principles of the genotyping assay, and an Expectation-Maximization algorithm was chosen to automatically call the generated genotypes. The TnT approach was also adapted to profiling signature gene sets for the Swedish Human Protein Atlas Program. Here, 18 protein epitope signature tags (PrESTs) were targeted in eight different cell lines employed in the program, and the results demonstrated high concordance with real-time PCR approaches. Finally, an assay for digital estimation of allele frequencies in large cohorts was set up by combining the TnT approach with a second-generation sequencing system. Allelotyping was performed by targeting 147 polymorphic loci in a genomic pool of 462 individuals. Subsequent interrogation was carried out on a state-of-the-art massively parallel Pyrosequencing instrument. The experiment generated more than 200,000 reads and, with bioinformatic support, clonally amplified fragments and the corresponding sequence reads were converted to a precise set of allele frequencies. / QC 20100813
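A highly simplified sketch of EM-based genotype calling (not the statistical model derived in the thesis): each sample's B-allele signal fraction is treated as a draw from a three-component Gaussian mixture (genotypes AA, AB, BB), and EM assigns each sample to the component with the highest posterior responsibility. The data and starting values are invented.

```python
# EM genotype calling from per-sample B-allele signal fractions (toy example).
import numpy as np

rng = np.random.default_rng(3)
# Synthetic signal fractions for genotypes AA (~0), AB (~0.5), BB (~1).
x = np.concatenate([rng.normal(0.05, 0.03, 40),
                    rng.normal(0.50, 0.05, 30),
                    rng.normal(0.95, 0.03, 26)])

means, sigmas, weights = np.array([0.1, 0.5, 0.9]), np.full(3, 0.1), np.full(3, 1/3)
for _ in range(100):
    # E-step: posterior responsibility of each genotype cluster for each sample.
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / sigmas) ** 2) / sigmas
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixture parameters.
    nk = resp.sum(axis=0)
    means = (resp * x[:, None]).sum(axis=0) / nk
    sigmas = np.sqrt((resp * (x[:, None] - means) ** 2).sum(axis=0) / nk)
    weights = nk / nk.sum()

calls = np.array(["AA", "AB", "BB"])[resp.argmax(axis=1)]
print("estimated cluster centers:", np.round(means, 3))
print("genotype counts:", dict(zip(*np.unique(calls, return_counts=True))))
```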
329

Modélisation gaussienne de rang plein des mélanges audio convolutifs appliquée à la séparation de sources / Full-rank Gaussian modeling of convolutive audio mixtures applied to source separation

Duong, Quang-Khanh-Ngoc 15 November 2011 (has links) (PDF)
We consider the problem of separating determined and under-determined reverberant audio mixtures, that is, extracting the signal of each source from a multichannel mixture. We propose a general Gaussian modeling framework in which the contribution of each source to the mixture channels in the time-frequency domain is modeled by a zero-mean Gaussian random vector whose covariance encodes both the spatial and the spectral characteristics of the source. To better model reverberation, we drop the classical narrowband assumption, which leads to a rank-1 spatial covariance, and we compute the theoretical performance bound achievable with a full-rank spatial covariance. Experimental results show an increase of 6 dB in signal-to-distortion ratio (SDR) in weakly to highly reverberant environments, which validates this generalization. We also consider the use of quadratic time-frequency representations and of the auditory ERB (equivalent rectangular bandwidth) frequency scale to increase the amount of exploitable information and decrease the overlap between sources in the time-frequency representation. After this theoretical validation of the proposed framework, we focus on estimating the model parameters from a given mixture signal in a practical blind source separation scenario. We propose a family of Expectation-Maximization (EM) algorithms to estimate the parameters in the maximum likelihood (ML) or maximum a posteriori (MAP) sense. We propose a family of spatial location priors inspired by room acoustics theory, as well as a spatial continuity prior. We also study the use of two spectral priors previously used in a single-channel or rank-1 multichannel context: a spectral continuity prior and a nonnegative matrix factorization (NMF) model. The source separation results obtained with the proposed approach are compared with several baseline and state-of-the-art algorithms on simulated mixtures and on real-world recordings in various scenarios.
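Once the parameters are known, the core estimation step in this framework is a multichannel Wiener filter applied per time-frequency bin: each source image is recovered as v_j R_j (sum_k v_k R_k)^{-1} x. The sketch below applies that formula to a single invented bin with two stereo sources; it is a schematic of the filtering step, not the thesis's EM implementation.

```python
# Multichannel Wiener filtering for one time-frequency bin (2 channels, 2 sources).
import numpy as np

# Hypothetical model parameters for this bin:
# v[j] : short-term spectral variance of source j
# R[j] : full-rank spatial covariance of source j (2x2 Hermitian)
v = [2.0, 0.5]
R = [np.array([[1.0, 0.6 + 0.2j], [0.6 - 0.2j, 0.8]]),
     np.array([[0.7, -0.3 + 0.1j], [-0.3 - 0.1j, 1.1]])]

x = np.array([1.2 + 0.4j, -0.3 + 0.9j])          # observed mixture STFT coefficients

Sigma_x = sum(vj * Rj for vj, Rj in zip(v, R))   # mixture covariance
Sigma_x_inv = np.linalg.inv(Sigma_x)

# Posterior mean (Wiener) estimate of each source's spatial image in this bin.
for j, (vj, Rj) in enumerate(zip(v, R)):
    W_j = vj * Rj @ Sigma_x_inv
    c_j = W_j @ x
    print(f"estimated image of source {j}:", np.round(c_j, 3))
```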
330

System Availability Maximization and Residual Life Prediction under Partial Observations

Jiang, Rui 10 January 2012 (has links)
Many real-world systems experience deterioration with usage and age, which often leads to low product quality, high production cost, and low system availability. Most previous maintenance and reliability models in the literature do not incorporate condition monitoring information for decision making, which often results in poor failure prediction for partially observable deteriorating systems. For that reason, the development of fault prediction and control schemes using condition-based maintenance techniques has received considerable attention in recent years. This research presents a new framework for predicting failures of a partially observable deteriorating system using Bayesian control techniques. A time series model is fitted to a vector observation process representing partial information about the system state. Residuals are then calculated using the fitted model, which are indicative of system deterioration. The deterioration process is modeled as a 3-state continuous-time homogeneous Markov process. States 0 and 1 are not observable, representing healthy (good) and unhealthy (warning) system operational conditions, respectively. Only the failure state 2 is assumed to be observable. Preventive maintenance can be carried out at any sampling epoch, and corrective maintenance is carried out upon system failure. The form of the optimal control policy that maximizes the long-run expected average availability per unit time has been investigated. It has been proved that a control limit policy is optimal for decision making. The model parameters have been estimated using the Expectation Maximization (EM) algorithm. The optimal Bayesian fault prediction and control scheme, considering long-run average availability maximization along with a practical statistical constraint, has been proposed and compared with the age-based replacement policy. The optimal control limit and sampling interval are calculated in the semi-Markov decision process (SMDP) framework. Another Bayesian fault prediction and control scheme has been developed based on the average run length (ARL) criterion. Comparisons with traditional control charts are provided. Formulae for the mean residual life and the distribution function of system residual life have been derived in explicit form as functions of a posterior probability statistic. The advantage of the Bayesian model over the well-known 2-parameter Weibull model in system residual life prediction is shown. The methodologies are illustrated using simulated data, real data obtained from the spectrometric analysis of oil samples collected from transmission units of heavy hauler trucks in the mining industry, and vibration data from a planetary gearbox machinery application.
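A simplified, discrete-time sketch of the Bayesian control-limit idea follows (the thesis works in continuous time within an SMDP formulation; the transition and observation parameters below are invented): the posterior probability of the unobservable warning state is updated after each residual, and preventive maintenance is triggered when it crosses a control limit.

```python
# Bayesian fault prediction with a control-limit policy (toy, discrete-time version).
import numpy as np
from scipy import stats

P = np.array([[0.95, 0.05],     # healthy -> healthy / warning
              [0.00, 1.00]])    # warning is absorbing (failure ignored in this sketch)
obs_dist = [stats.norm(0.0, 1.0),   # residual distribution in the healthy state
            stats.norm(2.0, 1.0)]   # residual distribution in the warning state
control_limit = 0.8                 # maintain preventively once P(warning) exceeds this

rng = np.random.default_rng(4)
residuals = np.concatenate([rng.normal(0, 1, 15), rng.normal(2, 1, 10)])  # shift at t=15

belief = np.array([1.0, 0.0])       # start in the healthy state
for t, r in enumerate(residuals):
    belief = belief @ P                                   # predict one step ahead
    lik = np.array([d.pdf(r) for d in obs_dist])          # observation likelihoods
    belief = belief * lik
    belief /= belief.sum()                                # posterior after residual r
    if belief[1] >= control_limit:
        print(f"t={t}: P(warning)={belief[1]:.2f} -> trigger preventive maintenance")
        break
else:
    print("control limit never reached")
```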
