231

Adaptation de modèles statistiques pour la séparation de sources mono-capteur : application à la séparation voix / musique dans les chansons

Ozerov, Alexey 15 December 2006
Single-sensor source separation is a very recent problem that is attracting growing attention in the scientific community. However, it is far from being solved and, moreover, it cannot be solved in full generality. The main difficulty is that, since the problem is extremely underdetermined, strong prior knowledge about the sources is needed in order to separate them. For a large class of separation methods, this knowledge is represented by statistical models of the sources, notably Gaussian Mixture Models (GMMs), which are learned beforehand from training examples. The purpose of this thesis is to study separation methods based on statistical models in general, and then to apply them to a concrete problem, namely separating the voice from the music in single-channel recordings of songs. Providing solutions to this problem, which is rather difficult and so far little studied, can be very useful for facilitating the analysis of song content, for example in the context of audio indexing. Existing separation methods perform well provided that the characteristics of the statistical models used are close to those of the sources to be separated. Unfortunately, it is not always possible to build and use such models in practice, because representative training examples and computational resources are lacking. To remedy this, the thesis proposes to adapt the models a posteriori to the sources to be separated. A general adaptation formalism is developed. Inspired by similar techniques used in speech recognition, this formalism is introduced in the form of a Maximum A Posteriori (MAP) adaptation criterion, and it is shown how to optimize this criterion with the EM algorithm at different levels of generality. The adaptation formalism is then applied in particular forms to voice / music separation. The results show that for this task, using adapted models significantly improves separation performance (by at least 5 dB) compared with non-adapted models. Furthermore, it is observed that separating the singing voice facilitates the estimation of its fundamental frequency (pitch), and that model adaptation further improves this result.
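To illustrate the kind of model-based filtering this line of work builds on, here is a minimal sketch that applies two pre-trained GMMs of log-power spectra to a mixture and derives a Wiener-like soft mask. It is a simplified hard-assignment variant under assumed variable names and shapes, not the thesis's adapted algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal sketch: GMM-based single-channel separation via an adaptive Wiener-like mask.
# X: magnitude-squared STFT of the mixture, shape (frames, freq_bins) -- assumed layout.
# gmm_voice / gmm_music: GMMs trained beforehand on log-power spectra of examples.

def separate(X, gmm_voice, gmm_music, eps=1e-10):
    logX = np.log(X + eps)
    # Most likely spectral state of each model for every frame (hard assignment
    # for brevity; the full approach sums over all pairs of states).
    kv = gmm_voice.predict(logX)
    km = gmm_music.predict(logX)
    # Interpret the GMM means as per-state power spectral densities.
    psd_v = np.exp(gmm_voice.means_)[kv]   # (frames, freq_bins)
    psd_m = np.exp(gmm_music.means_)[km]
    mask = psd_v / (psd_v + psd_m + eps)   # Wiener-like soft mask in [0, 1]
    return mask * X, (1.0 - mask) * X      # voice and music power estimates
```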
232

A Contribution in Stochastic Control Applied to Finance and Insurance

Moreau, Ludovic 25 September 2012
The aim of this thesis is to contribute to the problem of pricing derivative products in incomplete markets. We first consider the stochastic target problems introduced by Soner and Touzi (2002) to treat the super-replication problem, and recently extended to more general approaches by Bouchard, Elie and Touzi (2009). We generalize the work of Bouchard et al. to a broader framework in which the diffusions are subject to jumps. In this case we must consider controls that take the form of unbounded functions, which has a non-trivial impact on the derivation of the corresponding PDEs. Our second contribution is to establish a version of stochastic targets that is robust to model uncertainty. In an abstract framework, we establish a weak version of the geometric dynamic programming principle of Soner and Touzi (2002), and, in a case of controlled SDEs, we derive the corresponding partial differential equation in the viscosity sense. We then consider an example of partial hedging under Knightian uncertainty. Finally, we focus on the pricing of hybrid derivative products (derivatives combining market finance and insurance). More specifically, we seek to establish a sufficient condition under which a pricing rule that is popular in the industry, consisting in combining the actuarial mutualization approach with an arbitrage approach, is valid.
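For readers unfamiliar with stochastic targets, a generic formulation in the spirit of Soner and Touzi (2002) is sketched below; the notation is illustrative rather than the thesis's own.

\[
v(t,x) \;=\; \inf\Big\{\, y \in \mathbb{R} \;:\; \exists\, \nu \in \mathcal{U} \ \text{such that} \ Y^{\nu}_{t,x,y}(T) \,\ge\, g\big(X^{\nu}_{t,x}(T)\big) \ \text{a.s.} \Big\},
\]

where $X^{\nu}$ is the controlled state (e.g. the underlying prices), $Y^{\nu}$ the controlled wealth and $g$ the payoff. Super-replication corresponds to this target constraint, and the geometric dynamic programming principle is what allows $v$ to be characterized as a viscosity solution of a nonlinear PDE.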
233

Graphical Models for Robust Speech Recognition in Adverse Environments

Rennie, Steven J. 01 August 2008
Robust speech recognition in acoustic environments that contain multiple speech sources and/or complex non-stationary noise is a difficult problem, but one of great practical interest. The formalism of probabilistic graphical models constitutes a relatively new and very powerful tool for better understanding and extending existing models, learning, and inference algorithms; and a bedrock for the creative, quasi-systematic development of new ones. In this thesis a collection of new graphical models and inference algorithms for robust speech recognition are presented. The problem of speech separation using multiple microphones is first treated. A family of variational algorithms for tractably combining multiple acoustic models of speech with observed sensor likelihoods is presented. The algorithms recover high quality estimates of the speech sources even when there are more sources than microphones, and have improved upon the state-of-the-art in terms of SNR gain by over 10 dB. Next the problem of background compensation in non-stationary acoustic environments is treated. A new dynamic noise adaptation (DNA) algorithm for robust noise compensation is presented, and shown to outperform several existing state-of-the-art front-end denoising systems on the new DNA + Aurora II and Aurora II-M extensions of the Aurora II task. Finally, the problem of speech recognition in speech using a single microphone is treated. The Iroquois system for multi-talker speech separation and recognition is presented. The system won the 2006 Pascal International Speech Separation Challenge, and amazingly, achieved super-human recognition performance on a majority of test cases in the task. The result marks a significant first in automatic speech recognition, and a milestone in computing.
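One standard ingredient in this family of model-based separation systems is the interaction model relating the log-power spectrum of the mixture to those of the sources; a commonly used max approximation is sketched below, with notation chosen here only for illustration.

\[
y_f \;\approx\; \max\big(a_f,\; b_f\big), \qquad
p\big(y \mid s^{a}, s^{b}\big) \;=\; \prod_f p\big(y_f \mid s^{a}, s^{b}\big),
\]

where $y_f$, $a_f$, $b_f$ denote the log-power spectra of the mixture and of the two speech sources in frequency band $f$, and $s^{a}$, $s^{b}$ index the acoustic states of the two speaker models. Inference must then search over the joint state space $(s^{a}, s^{b})$, which is precisely where variational or factorial approximations become essential.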
234

Duality theory for optimal mechanism design

Giannakopoulos, Ioannis January 2015
In this work we present a general duality-theory framework for revenue maximization in additive Bayesian auctions involving multiple items and many bidders whose values for the goods follow arbitrary continuous joint distributions over some multi-dimensional real interval. Although the single-item case has been resolved in a very elegant way by the seminal work of Myerson [1981], optimal solutions involving more items still remain elusive. The framework extends linear programming duality and complementarity to constraints with partial derivatives. The dual system reveals the natural geometric nature of the problem and highlights its connection with the theory of bipartite graph matchings. We demonstrate the power of the framework by applying it to various special monopoly settings where a seller of multiple heterogeneous goods faces a buyer with independent item values drawn from various distributions of interest, to design both exact and approximately optimal selling mechanisms. Previous optimal solutions were only known for up to two and three goods, and a very limited range of distributional priors. The duality framework is used not only for proving optimality, but perhaps more importantly, for deriving the optimal mechanisms themselves. Some of our main results include: the proposal of a simple deterministic mechanism, which we call Straight-Jacket Auction (SJA) and is defined in a greedy, recursive way through natural geometric constraints, for many uniformly distributed goods, where exact optimality is proven for up to six items and general optimality is conjectured; a scheme of sufficient conditions for exact optimality for two-good settings and general independent distributions; a technique for upper-bounding the optimal revenue for arbitrarily many goods, with an application to uniform and exponential priors; and the proof that offering deterministically all items in a single full bundle is the optimal way of selling multiple exponentially i.i.d. items.
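For context, the single-item benchmark of Myerson [1981] mentioned above can be summarized as follows (standard notation, regular distributions assumed):

\[
\varphi_i(v_i) \;=\; v_i - \frac{1 - F_i(v_i)}{f_i(v_i)}, \qquad
\mathrm{Rev}(\mathcal{M}) \;=\; \mathbb{E}\Big[\sum_i \varphi_i(v_i)\, x_i(v)\Big],
\]

so expected revenue equals expected virtual surplus, and the optimal auction awards the item to the bidder with the highest non-negative virtual value $\varphi_i(v_i)$. It is exactly this clean structure that fails to generalize beyond one item, which motivates the duality framework described above.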
235

La faiblesse de volonté : conceptions classiques et dynamiques

Labonté, Jean-François 09 1900
This thesis explains, analyses and examines the classical and modern positions on the nature and causes of weakness of will. Since Plato and Aristotle's identification of the problem, many principles and propositions concerning practical rationality in general and motivation in particular have been examined in detail. These principles and propositions are discussed insofar as they remain relevant to modern theories. An emphasis is placed on what is now known as the standard conception of strict akrasia and its supposedly paradoxical nature. We argue that a skeptical position toward strict akrasia cannot be based on one version or another of revealed-preference theory, and we show that a description of the decision process is necessary to attribute an overall preference or a better judgment. We discuss the philosophical debate between internalist and externalist conceptions of the connection between better judgment and decision, and argue, on the basis of experimental results in cognitive psychology and neuroscience, that the externalist conception, although imperfect, is more robust. These results are not, however, incompatible with the hypothesis that agents are maximizers with respect to the satisfaction of their preferences, a hypothesis that continues to justify a form of skepticism toward strict akrasia. We present strong arguments against this hypothesis and show why maximization is not necessarily required for rational choice; consequently, the standard conception of strict akrasia must be revised. We then discuss Richard Holton's influential theory of non-strictly akratic weakness of will. Although compatible with a non-maximizing conception, Holton's theory reduces too many episodes of weakness of will to cases of irresolution. Finally, we present the theory of intertemporal choice, a more powerful theory that describes and explains, within a single conceptual scheme, both strict and non-strict akrasia. This scheme concerns the properties of the temporal distributions of the consequences of akratic decisions and the prospective attitudes that motivate agents to make them. The structure of these distributions, coupled with the devaluation of the future, also provides a simple and elegant explanation of why weakness of will is irrational. We discuss the hypothesis that a pure time preference underlies this devaluation, and mention several critical points and rival hypotheses more in keeping with a cognitivist approach to the problem.
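To make the role of future devaluation concrete, here is a standard formalization from the intertemporal-choice literature (hyperbolic discounting in Mazur's form), given purely for illustration rather than as the thesis's own notation:

\[
V(A, D) \;=\; \frac{A}{1 + kD},
\]

where $A$ is the reward amount, $D$ the delay and $k > 0$ the discount rate. With $k = 0.2$, a smaller-sooner reward of $50$ at delay $D$ and a larger-later reward of $100$ at delay $D + 10$ yield $V = 50$ versus $33.3$ at $D = 0$ (the agent yields to the immediate option) but $16.7$ versus $20$ at $D = 10$ (the agent prefers the larger-later option). This preference reversal is the signature pattern associated with weakness of will, and it cannot arise under constant-rate exponential discounting.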
236

Aprendizado semi-supervisionado para o tratamento de incerteza na rotulação de dados de química medicinal / Semi supervised learning for uncertainty on medicinal chemistry labelling

Souza, João Carlos Silva de 09 March 2017
In the last 30 years, the area of machine learning has developed in a way comparable to physics in the early twentieth century. This breakthrough has made it possible to solve real-world problems that previously could not be solved by machines, because purely statistical models had difficulty fitting the training data satisfactorily. Among these advances is the use of machine learning techniques in medicinal chemistry, involving methods for analysing, representing and predicting molecular information through computational resources. The data used in this biological context have some particular characteristics that can influence the result of their analysis. These include the complexity of molecular information, the imbalance of the classes involved, and the existence of incomplete or uncertainly labeled data. If not properly treated, such adversities may harm the process of identifying candidate compounds for new drugs. In this work, a semi-supervised machine learning technique was used to reduce the impact caused by uncertainty in the data labeling, applying a method to estimate more reliable labels for the chemical compounds in the training set. To mitigate the effects of class imbalance, a cost-sensitive approach was incorporated into the label estimation process in order to avoid bias in favor of the majority class. After addressing the label uncertainty problem, classifiers based on Extreme Learning Machines were built, aiming for good approximation capability at a reduced processing time compared with other commonly applied classification approaches. Finally, the performance of the classifiers was evaluated by analyzing the results obtained, comparing the scenario with the original data against scenarios with the new labels obtained during the semi-supervised estimation process.
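As background on the classifier family used here, the following is a minimal Extreme Learning Machine sketch: a random hidden layer whose output weights are fitted in closed form by regularized least squares. The class name, hyperparameters and the ridge term are illustrative assumptions, and the thesis's cost-sensitive label estimation step is not shown.

```python
import numpy as np

# Minimal Extreme Learning Machine sketch (assumed generic form, not the
# thesis's exact implementation): random hidden layer + least-squares readout.

class ELM:
    def __init__(self, n_hidden=200, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Input weights and biases are drawn at random and never trained.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Ridge-regularised least squares for the output weights (closed form).
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```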
237

Dinâmicas de propagação de informações e rumores em redes sociais / Information and rumor propagation in social networks

Oliveros, Didier Augusto Vega 12 May 2017
Online social networks have become a new and important medium for exchanging information and ideas and for communication, bringing relatives and friends closer regardless of distance. Given the open nature of the Internet, information can flow very easily and quickly through the population. A network can be represented as a graph, where individuals or organizations are the set of vertices and the relationships or connections among them are the set of edges. Moreover, social networks intrinsically represent the structure of a more complex system, society itself. These structures are related to the characteristics of the individuals: for example, the most popular individuals are those with the most connections, and correlation in the connectivity of vertices is a trace of the homophily phenomenon, among many others. In particular, it is well accepted that the structure of the network can affect the way information propagates on social networks. However, it is still unclear how the structure influences propagation, how to measure that impact, and which strategies can be used to control the diffusion process. In this thesis, we seek to contribute to the analysis of the interplay between the dynamics of information and rumor spreading and the structure of the networks. We propose a more realistic propagation model that considers the heterogeneity of individuals in the transmission of ideas or information. We confirm the presence of influential spreaders in the rumor propagation process and find that, by selecting a very small fraction of influential spreaders, it is possible to substantially improve or reduce the diffusion of a piece of information on the network. When the goal is to select a set of initial spreaders that maximizes information diffusion, the simplest and best option is to select the most central or important individuals within the network's communities. However, if the connection pattern of the network is negatively correlated, the best alternative is to choose among the most central individuals of the whole network. On the other hand, using topological measures and machine learning techniques, we identify the least influential spreaders and show that they act as a firewall in the propagation process. We propose an adaptive method that rewires one edge of a least influential vertex to a central individual of the network, without affecting the degree distribution. Applying our method to a small fraction of the least influential spreaders, we observe a substantial increase in the spreading capacity of these vertices and of the whole network. Our results come from a wide range of simulations on artificial and real-world data sets and from comparison with classical propagation models from the literature. The propagation of information on networks is of great relevance for advertising and marketing, education, and political or health campaigns, among others. The results of this thesis can be applied and extended to different research fields, such as biological networks and models of animal social behavior, epidemic spreading models, and public health.
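To make the kind of experiment discussed above concrete, below is a minimal Maki–Thompson-style rumor simulation seeded from high-degree vertices. Parameter names and the seeding heuristic are assumptions; the thesis's heterogeneous-transmission model and community-based seeding are not reproduced here.

```python
import random
import networkx as nx

# Minimal sketch of a Maki-Thompson-style rumor dynamic (ignorant/spreader/stifler),
# used here as a generic baseline rather than the heterogeneous model of the thesis.

def rumor_spread(G, seeds, lam=1.0, alpha=1.0, rng=random.Random(42)):
    state = {v: "I" for v in G}            # I: ignorant, S: spreader, R: stifler
    for s in seeds:
        state[s] = "S"
    spreaders = set(seeds)
    while spreaders:
        v = rng.choice(tuple(spreaders))
        u = rng.choice(list(G[v]))          # contact a random neighbour
        if state[u] == "I" and rng.random() < lam:
            state[u] = "S"                  # the rumor is transmitted
            spreaders.add(u)
        elif state[u] in ("S", "R") and rng.random() < alpha:
            state[v] = "R"                  # spreader loses interest, becomes stifler
            spreaders.discard(v)
    return sum(1 for v in G if state[v] == "R")   # final number of informed vertices

# Example: seed from the five highest-degree vertices of a scale-free graph.
G = nx.barabasi_albert_graph(1000, 3)
top = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]
print(rumor_spread(G, [v for v, _ in top]))
```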
238

Functional analytic approaches to some stochastic optimization problems

Backhoff, Julio Daniel 17 February 2015
In this thesis we deal with utility maximization and stochastic optimal control from several points of view. We are interested in understanding how such problems behave under parameter uncertainty, under the robustness and the sensitivity paradigms respectively. We then leave the single-agent world and tackle a two-agent problem in which the first agent delegates her investments to the second through a contract. First, we consider the robust utility maximization problem in financial market models, where we formulate conditions for its solvability without assuming compactness of the densities of the uncertainty set, the set of measures over which the maximizing agent robustifies her investments. These conditions are stated in terms of functional spaces which generally correspond to modular spaces, through which we prove a minimax equality and the existence of optimal strategies. In complete markets the space is an Orlicz space, and after explicitly verifying its reflexivity we additionally obtain the existence of a worst-case measure, which we fully characterize. Secondly, we turn our attention to stochastic optimal control, where we provide a sensitivity analysis for some parameterized variants of such problems. The main tool is the correspondence between the adjoint states appearing in a (weak) stochastic Pontryagin principle and the Lagrange multipliers associated with the controlled equation when viewed as a constraint. The sensitivity analysis is then deployed in the case of convex problems with additive or multiplicative perturbations. In a final part, we turn to Principal-Agent problems in discrete time. Here we apply, in great generality, the tools of conditional analysis to the case of linear contracts, and show that most results known in the literature for very specific instances of the problem carry over to a much broader setting. In particular, we obtain the existence of a first-best optimal contract and its implementability by the Agent.
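For orientation, the robust utility maximization problem discussed above is typically written as follows (generic notation, not the thesis's):

\[
u(x) \;=\; \sup_{\pi \in \mathcal{A}(x)} \; \inf_{Q \in \mathcal{Q}} \; \mathbb{E}_{Q}\big[\, U\big(X_T^{x,\pi}\big) \,\big],
\]

where $\mathcal{A}(x)$ is the set of admissible strategies with initial capital $x$ and $\mathcal{Q}$ the uncertainty set of measures. The minimax equality mentioned above asserts that the supremum and infimum may be interchanged, and a worst-case measure is a $\hat{Q} \in \mathcal{Q}$ attaining the inner infimum.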
239

Utility maximization and quadratic BSDEs under exponential moments

Mocha, Markus 08 March 2012
In this thesis we consider the problem of maximizing the power utility of terminal wealth when the stocks have continuous semimartingale dynamics and the agent's strategies are subject to investment and information constraints. The main focus is on the backward stochastic differential equation (BSDE) that encodes the dynamic value process and on transferring new results on quadratic semimartingale BSDEs to the portfolio choice problem. This is accomplished under the assumption of finite exponential moments of the mean-variance tradeoff, generalizing previous results which require boundedness. We first recall the relationship between the duality and BSDE approaches to solving the problem and then study the associated quadratic semimartingale BSDE when the market price of risk is of BMO type. We show that there is always a continuum of distinct solutions to this BSDE with square-integrable martingale part. We then provide a new sharp condition on the dynamic exponential moments of the mean-variance tradeoff which guarantees the boundedness of BSDE solutions in a general filtration. In a subsequent step we establish existence, uniqueness, stability and measure-change results for general quadratic continuous BSDEs under an exponential moments condition. We use these results to study the portfolio selection problem under conic investment constraints. Building on a decomposition result for the elements of the so-called dual domain, we derive the associated BSDE and show that the value process is contained in a specific space in which BSDE solutions are unique. A consequence of the stability result for BSDEs is then the continuity of the optimizers, in the semimartingale topology, with respect to the input parameters of the model. Finally, we study the optimal investment problem under exponential moments, compact constraints and restricted information, relying on BSDE results only.
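For reference, a quadratic BSDE of the type referred to above has the generic form (illustrative notation):

\[
Y_t \;=\; \xi \;+\; \int_t^T f(s, Y_s, Z_s)\, \mathrm{d}s \;-\; \int_t^T Z_s\, \mathrm{d}W_s,
\qquad
|f(s,y,z)| \;\le\; c\,\big(1 + |y| + |z|^2\big),
\]

with an additional orthogonal martingale term appearing in the general-filtration setting. The quadratic growth in $z$ is exactly what makes boundedness or exponential-moment conditions on the data (here, on the mean-variance tradeoff) the crucial ingredient for existence and uniqueness; in the power-utility problem the value process solves a BSDE of this type with a driver quadratic in $Z$.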
240

Finite sample analysis of profile M-estimators

Andresen, Andreas 02 September 2015
This thesis presents a new approach to the analysis of profile M-estimators for finite samples. The results of Spokoiny (2011) are refined and adapted to the estimation of components of a finite dimensional parameter via the maximization of a criterion functional. Finite sample versions of the Wilks phenomenon and of the Fisher expansion are obtained, and the critical ratio of parameter dimension to sample size is derived in the setting of i.i.d. observations and a sufficiently smooth criterion functional. The results are extended to parameters in infinite dimensional Hilbert spaces using the sieve approach of Grenander (1981). The sieve bias is controlled via common regularity assumptions on the parameter and the functional, but the results do not rely on a basis that is orthogonal in the inner product induced by the model. Furthermore, the thesis presents two convergence results for the alternating maximization procedure used to approximate the profile estimator. All results are illustrated in an application to the Projection Pursuit procedure of Friedman (1981); under a set of natural and common assumptions, all theoretical results can be applied using Daubechies wavelets.
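To fix ideas, the objects studied above can be written as follows (standard notation, given here only for orientation):

\[
\tilde{\theta} \;=\; \operatorname*{arg\,max}_{\theta} \; \sup_{\eta} \, L(\theta, \eta),
\qquad
2\Big\{ \sup_{\theta,\eta} L(\theta,\eta) \;-\; \sup_{\eta} L(\theta^{*},\eta) \Big\},
\]

where $L$ is the criterion functional, $\theta$ the target component, $\eta$ the nuisance parameter and $\theta^{*}$ the true target value. The second expression is the profile (quasi) likelihood-ratio statistic; the classical Wilks phenomenon states that it is asymptotically $\chi^2$ with $\dim(\theta)$ degrees of freedom, and the thesis quantifies how accurately such statements hold for a finite sample as a function of the ratio of parameter dimension to sample size.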
