  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Market completion and robust utility maximization

Müller, Matthias 28 September 2005
The first part of the thesis proposes a method to find prices and hedging strategies for risky claims exposed to a risk factor that is not hedgeable on a financial market. In the second part we calculate the maximal utility and optimal trading strategies on incomplete markets using Backward Stochastic Differential Equations. We consider agents whose incomes are exposed to a non-hedgeable external source of risk. These agents complete the market either by creating a bond or by signing contracts with each other; another possibility is a bond issued by an insurance company. The sources of risk we have in mind are insurance, weather or climate risk. Stock prices are taken as exogenously given. We calculate prices for the additional securities such that supply equals demand: the market clears partially. The preferences of the agents are described by expected utility. In Chapters 2 through 4 the agents use exponential utility functions and the model is placed in a Brownian filtration. In order to find the equilibrium price, we use Backward Stochastic Differential Equations. Chapter 5 provides a one-period model where the agents use utility functions satisfying the Inada conditions. The second part of this thesis considers the robust utility maximization problem on an incomplete financial market. Either the probability measure or the drift and volatility of the stock price process are uncertain. We apply a martingale argument and solve a saddle point problem. The solution of a Backward Stochastic Differential Equation describes the maximizing trading strategy as well as the probability measure that is used in the robust utility. We consider the exponential, power and logarithmic utility functions. For the exponential utility function we calculate utility indifference prices of not perfectly hedgeable claims. Finally, we maximize the expected utility with respect to a single probability measure. We apply a martingale argument and solve the maximization problems. This allows us to consider closed, in general non-convex constraints on the values of the trading strategies.
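For orientation, the exponential utility indifference price mentioned in this abstract is commonly defined by the following indifference relation (a standard textbook definition, not a formula quoted from the thesis): the selling price p of a claim H makes the agent indifferent between trading with the claim and the extra capital, and trading without them,

```latex
\sup_{\pi} \mathbb{E}\!\left[-\exp\!\big(-\alpha\,(X_T^{x+p,\pi} - H)\big)\right]
\;=\;
\sup_{\pi} \mathbb{E}\!\left[-\exp\!\big(-\alpha\,X_T^{x,\pi}\big)\right],
```

where X_T^{x,\pi} denotes terminal wealth from initial capital x under trading strategy \pi and \alpha > 0 is the risk aversion parameter.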
242

Essays on Utility maximization and Optimal Stopping Problems in the Presence of Default Risk

Feunou, Victor Nzengang 09 August 2018
This thesis studies stochastic control problems faced by agents making decisions in financial markets. The first part focuses on the maximization of expected utility from terminal wealth for an investor trading in a financial market. Of utmost concern to the investor are a description of the optimal trading strategy that is amenable to numerical approximation, and a stability analysis of the optimal trading strategy with respect to small misspecifications of his utility function and initial capital. In the setting of a continuous market model, we prove stability results for the optimal wealth process in the Emery topology and the uniform topology on semimartingales, and stability results for the optimal trading strategy in suitable topologies. For sufficiently differentiable utility functions, we obtain a description of the optimal trading strategy in terms of the solution of a system of forward-backward stochastic differential equations (FBSDEs). The second part of the thesis deals with the optimal stopping problem for an agent whose reward process is exposed to a default event. Our main concern is to describe the solutions before and after the default event and thereby better understand the behavior of the agent in the presence of default. We show how the stopping problem can be decomposed into two individual stopping problems: one with an information flow in which the default event is not visible, and another with an information flow that captures the default event. We build on this decomposition of the optimal stopping problem, and on the link between the theories of optimal stopping and reflected backward stochastic differential equations (RBSDEs), to derive a corresponding decomposition approach for solving RBSDEs with a single jump. This decomposition allows us to establish existence and uniqueness results for RBSDEs with drivers of quadratic growth.
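The utility maximization problem treated in the first part can be stated, in a standard frictionless form (generic notation, not taken from the thesis), as

```latex
u(x) \;=\; \sup_{\pi \in \mathcal{A}} \mathbb{E}\!\left[\,U\!\big(X_T^{x,\pi}\big)\right],
\qquad
X_T^{x,\pi} \;=\; x + \int_0^T \pi_t \,\mathrm{d}S_t,
```

where S is the (semimartingale) price process, \mathcal{A} the set of admissible strategies, and U the investor's utility function; the stability questions above concern how the optimizer \pi depends on U and x.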
243

Modelos de mistura de distribuições na segmentação de imagens SAR polarimétricas multi-look / Multi-look polarimetric SAR image segmentation using mixture models

Horta, Michelle Matos 04 June 2009
The main focus of this thesis is the application of mixture models to multi-look polarimetric SAR image segmentation. Within this context, the SEM algorithm, together with the method of moments, is applied to estimate the parameters of Wishart, Kp and G0p mixture models. Each of these distributions has specific parameters that allow it to fit data with different degrees of homogeneity. The Wishart distribution is suitable for modeling homogeneous regions, such as crop fields, and is widely used in multi-look polarimetric SAR data analysis. The Kp and G0p distributions have a roughness parameter that allows them to describe both heterogeneous regions, such as vegetation and urban areas, and homogeneous regions. Besides mixture models based on a single family of distributions, the use of a dictionary containing all three families is proposed and analyzed. A comparison between the performance of the proposed SEM method, considering the different models on real L-band images, and two widely known techniques from the literature (the k-means and EM algorithms) is also shown and discussed. The proposed SEM method, considering a G0p mixture model combined with an outlier removal stage, provided the best classification results. The G0p distribution was the most flexible in fitting the different kinds of data. The Wishart distribution was robust to different initializations. The k-means algorithm with the Wishart distribution is robust for segmentation of SAR images containing outliers, but is not very flexible to the variability of heterogeneous regions. The mixture model based on the dictionary of families improves the log-likelihood of the SEM method, but presents results similar to those of the G0p mixture model. For all types of initializations and clusters, the G0p distribution prevailed in the distribution selection process of the dictionary of families.
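The thesis fits Wishart, Kp and G0p mixtures; as a much simpler illustration of the Stochastic EM (SEM) idea it relies on, the sketch below fits a two-component one-dimensional Gaussian mixture. The E-step computes posterior responsibilities, the S-step draws a hard label for every point (this random imputation is what distinguishes SEM from plain EM), and the M-step re-estimates each component by the method of moments on the imputed complete data. The function name and initialization scheme are illustrative, not from the thesis.

```python
import math
import random

def sem_gaussian_mixture(x, n_iter=50, seed=0):
    """Fit a 2-component 1-D Gaussian mixture with the Stochastic EM
    (SEM) algorithm: E-step = posterior responsibilities, S-step =
    sample a hard label per point, M-step = moment estimates."""
    rng = random.Random(seed)
    x = sorted(x)
    n = len(x)
    # crude initialisation: component means at the sample quartiles
    mu = [x[n // 4], x[3 * n // 4]]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        groups = ([], [])
        for xi in x:
            # E-step: unnormalised posterior weight of each component
            p = [w[k] / sd[k] * math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2)
                 for k in (0, 1)]
            # S-step: sample a label instead of keeping soft weights
            k = 0 if rng.random() * (p[0] + p[1]) < p[0] else 1
            groups[k].append(xi)
        # M-step: moment estimates on the simulated complete data
        for k in (0, 1):
            if groups[k]:
                m = sum(groups[k]) / len(groups[k])
                v = sum((g - m) ** 2 for g in groups[k]) / len(groups[k])
                mu[k], sd[k] = m, max(math.sqrt(v), 1e-3)
                w[k] = len(groups[k]) / n
    return mu, sd, w
```

With well-separated components the sampled labels stabilize quickly and the estimates approach the true parameters; in the thesis the same scheme is driven by Wishart/Kp/G0p densities and matrix-valued moment estimators instead.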
244

[en] YIELD MANAGEMENT IN RIO DE JANEIRO HOTELS: SURVEY AND ANALYSIS / [pt] YIELD MANAGEMENT NOS HOTÉIS DO RIO DE JANEIRO: LEVANTAMENTO E ANÁLISE

LUIZ GUSTAVO ALCURE DE MORAIS 10 March 2003
[en] As a consequence of the deregulation of the American airline industry in the 1970s, Yield Management (YM) was created as a managerial tool to maximize profits and maintain the competitive advantages of companies in the sector. With time, this tool was adopted by other service companies in which variable demand meets fixed capacity and idle capacity is costly, helping managers maximize their operating revenues. Yield Management, or Revenue Management, is the process of allocating the right type of capacity to each type of customer at the right price, so as to maximize the sales revenues of services or of highly perishable goods. One can also say that YM is a systematic way of practising a form of price discrimination driven by demand data, occupation data and the marginal cost of using the resource (airplane, hotel, or another service system). The study presented in this thesis aimed at verifying how the main tourist hotels of Rio de Janeiro take advantage of this tool within their reservation and sales processes. To accomplish this objective, a sample of eleven hotels was selected from among the fifteen most important hotels of the main tourist area of Rio de Janeiro City. Interviews were carried out with the managers responsible for the reservation and/or sales departments, who answered a questionnaire about the current application of YM elements within the organization. The results indicate that YM is still not widespread among these hotels, being recognized, and applied only incipiently, in large hotels belonging to chains, mainly international ones. Some obstacles pointed out by the respondents and inferred from their responses are discussed.
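The capacity-allocation idea described above is often illustrated with Littlewood's two-fare rule, a classic yield-management result (a standard textbook building block, not a method from this dissertation): protect y* seats for the high fare, where the probability that high-fare demand exceeds y* equals the low-to-high fare ratio. The sketch below assumes normally distributed high-fare demand and inverts the survival function by bisection; names and parameters are illustrative.

```python
import math

def littlewood_protection(p_high, p_low, mu, sigma):
    """Littlewood's rule for two fare classes: return the protection
    level y* solving P(D_high > y*) = p_low / p_high, where high-fare
    demand D_high ~ Normal(mu, sigma)."""
    target = p_low / p_high
    # invert the normal survival function by bisection on the CDF
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    for _ in range(100):
        mid = (lo + hi) / 2
        surv = 0.5 * math.erfc((mid - mu) / (sigma * math.sqrt(2)))
        if surv > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, with a high fare twice the low fare the ratio is 0.5, so the protection level is exactly the mean of high-fare demand; as the high fare rises relative to the low fare, more seats are protected.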
245

Distributed Cooperative Communications and Wireless Power Transfer

Wang, Rui 22 February 2018
In telecommunications, distributed cooperative communications refer to techniques that allow different users in a wireless network to share or combine their information in order to increase diversity gain or power gain. Unlike conventional point-to-point communications, which maximize the performance of an individual link, distributed cooperative communications enable multiple users to collaborate with each other to achieve an overall improvement in performance, e.g., improved range and data rates. The first part of this dissertation focuses on the problem of jointly decoding binary messages from a single distant transmitter at a cooperative receive cluster. The outage probability of distributed reception with binary hard decision exchanges is compared with the outage probability of ideal receive beamforming with unquantized observation exchanges. Low-dimensional analysis and numerical results show, via two simple but surprisingly good approximations, that the outage probability of distributed reception with hard decision exchanges is well predicted by the SNR of ideal receive beamforming after subtracting a hard decision penalty of slightly less than 2 dB. These results, developed in non-asymptotic regimes, are consistent with prior asymptotic results (for a large number of nodes and low per-node SNR) on hard decisions in binary communication systems. We next consider the problem of estimating and tracking channels in a distributed transmission system with multiple transmitters and multiple receivers. In order to track and predict the effective channel between each transmit node and each receive node so as to facilitate coherent transmission, a linear time-invariant state-space model is developed and is shown to be observable but not stabilizable. To quantify the steady-state performance of a Kalman filter channel tracker, two methods are developed to efficiently compute the steady-state prediction covariance.
An asymptotic analysis is also presented, for the homogeneous oscillator case, for systems with a large number of transmit and receive nodes, with closed-form results for all of the elements of the asymptotic prediction covariance as a function of the carrier frequency, oscillator parameters and channel measurement period. Numerical results confirm the analysis and demonstrate the effect of the oscillator parameters on the ability of the distributed transmission system to achieve coherent transmission. In recent years, the development of efficient radio frequency (RF) radiation wireless power transfer (WPT) systems has become an active research area, motivated by the widespread use of low-power devices that can be charged wirelessly. We next consider a time division multiple access scenario in which a wireless access point transmits to a group of users which harvest the energy and then use this energy to transmit back to the access point. Past approaches have found the optimal time allocation that maximizes the sum throughput under the assumption that the users must use all of their harvested power in each block of the "harvest-then-transmit" protocol. This dissertation considers optimal time and energy allocation to maximize the sum throughput when the nodes can save energy for later blocks. To maximize the sum throughput over a finite horizon, the initial optimization problem is separated into two sub-problems and finally formulated as a standard box-constrained optimization problem, which can be solved efficiently. A tight upper bound is derived by relaxing the energy harvesting causality constraint. A disadvantage of RF-radiation-based WPT is that path loss effects can significantly reduce the amount of power received by energy harvesting devices.
To overcome this problem, recent investigations have considered the use of distributed transmit beamforming (DTB) in wireless communication systems, where two or more individual transmit nodes pool their antenna resources to emulate a virtual antenna array. To bring the advantages of DTB to WPT, this dissertation studies the optimization of the feedback rate to maximize the energy efficiency of the WPT system. Since periodic feedback improves the beamforming gain but requires the receivers to expend energy, there is a fundamental tradeoff between the feedback period and the efficiency of the WPT system. We develop a new model that combines WPT and DTB and explicitly accounts for independent oscillator dynamics and the cost of feedback energy from the receive nodes. We then formulate a "Normalized Weighted Mean Energy Harvesting Rate" (NWMEHR) maximization problem to select the feedback period that maximizes the weighted average amount of net energy harvested by the receive nodes per unit of time as a function of the oscillator parameters. We develop an explicit method to numerically calculate the globally optimal feedback period.
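The hard-decision penalty discussed in the first part can be made concrete with a toy model (a simplified BPSK majority-vote sketch, not the dissertation's outage analysis): each of n receive nodes makes an independent hard BPSK decision and the cluster takes a majority vote, versus ideal beamforming that coherently combines the n observations for an n-fold SNR gain. Function names are illustrative.

```python
from math import comb, erfc, sqrt

def bpsk_error(snr):
    """Bit error probability of coherent BPSK at linear SNR."""
    return 0.5 * erfc(sqrt(snr))

def majority_vote_error(snr, n):
    """Error probability when n nodes (n odd, so no ties) each make a
    hard BPSK decision and the cluster takes a majority vote: the vote
    errs when more than half of the per-node decisions are wrong."""
    p = bpsk_error(snr)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))
```

Evaluating both at the same per-node SNR shows the expected ordering: majority voting over hard decisions is far better than a single link but worse than ideal beamforming at n times the SNR, and the gap between the two combining schemes is the kind of hard-decision penalty the dissertation quantifies (slightly less than 2 dB in its setting).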
246

Magnetic Resonance Image segmentation using Pulse Coupled Neural Networks

Swathanthira Kumar, Murali Murugavel M 08 May 2009
The Pulse Coupled Neural Network (PCNN) was developed by Eckhorn to model the observed synchronization of neural assemblies in the visual cortex of small mammals such as the cat. In this dissertation, three novel PCNN-based automatic segmentation algorithms were developed to segment Magnetic Resonance Imaging (MRI) data: (a) PCNN image 'signature' based single-region cropping; (b) PCNN - Kittler-Illingworth minimum error thresholding; and (c) PCNN - Gaussian Mixture Model - Expectation Maximization (GMM-EM) based multiple-material segmentation. Among other control tests, the proposed algorithms were tested on three T2-weighted acquisition configurations comprising a total of 42 rat brain volumes, 20 T1-weighted MR human brain volumes from Harvard's Internet Brain Segmentation Repository and 5 human MR breast volumes. The results were compared against manually segmented gold standards, Brain Extraction Tool (BET) V2.1 results, published results and single-threshold methods. The Jaccard similarity index was used for numerical evaluation of the proposed algorithms. Our quantitative results demonstrate conclusively that PCNN-based multiple-material segmentation strategies can approach a human eye's intensity delineation capability in grayscale image segmentation tasks.
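The evaluation metric used above, the Jaccard similarity index, compares a segmentation against a gold standard as the ratio of intersection to union. A minimal implementation for binary masks might look like this (the function name and flat-sequence representation are illustrative):

```python
def jaccard_index(mask_a, mask_b):
    """Jaccard similarity of two binary segmentation masks,
    |A intersect B| / |A union B|, with masks given as flat
    sequences of 0/1 labels; two empty masks count as identical."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0
```

A score of 1.0 means the segmentation matches the gold standard exactly; 0.0 means the two regions do not overlap at all.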
247

Efficient modularity density heuristics in graph clustering and their applications

Santiago, Rafael de January 2017
Modularity Density Maximization is a graph clustering problem that avoids the resolution limit degeneracy of the Modularity Maximization problem. This thesis aims at solving larger instances than current Modularity Density heuristics do, and at showing how close the obtained solutions are to the expected clustering. Three main contributions arise from this objective. The first consists of theoretical results on properties of Modularity Density based prioritizers. The second is the development of eight Modularity Density Maximization heuristics. Our heuristics are compared with optimal results from the literature, and with the GAOD, iMeme-Net, HAIN and BMD- heuristics. Our results are also compared with CNM and Louvain, heuristics for Modularity Maximization that solve instances with thousands of nodes. The tests were carried out on graphs from the "Stanford Large Network Dataset Collection". The experiments showed that our eight heuristics found solutions for graphs with hundreds of thousands of nodes, and that five of them surpassed the current state-of-the-art Modularity Density Maximization heuristic solvers for large graphs. The third contribution is the proposal of six column generation methods. These methods use exact and heuristic auxiliary solvers and an initial variable generator. Comparisons among our proposed column generation methods and state-of-the-art algorithms were also carried out. The results showed that (i) two of our methods surpassed the state-of-the-art algorithms in terms of time, and (ii) our methods proved the optimal value for larger instances than current approaches can tackle. Our results suggest clear improvements to the state of the art for the Modularity Density Maximization problem.
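The objective maximized above can be computed directly for any candidate partition. The sketch below evaluates the commonly used modularity density D (the Li et al. definition: for each cluster, twice its internal edge count minus its external edge count, divided by the cluster size, summed over clusters); I assume this is the variant the thesis targets, and the function name is illustrative.

```python
def modularity_density(edges, clusters):
    """Modularity density D of a partition of an undirected graph:
    sum over clusters c of (2 * internal_edges(c) - external_edges(c))
    / |c|.  `edges` is a list of (u, v) pairs, `clusters` a list of
    node lists covering the graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0.0
    for c in clusters:
        cset = set(c)
        # each internal edge is seen from both endpoints, hence // 2
        internal = sum(1 for u in c for v in adj.get(u, ()) if v in cset) // 2
        external = sum(1 for u in c for v in adj.get(u, ()) if v not in cset)
        total += (2 * internal - external) / len(cset)
    return total
```

For two triangles joined by a single bridge edge, splitting at the bridge gives each cluster (2*3 - 1)/3 = 5/3, so D = 10/3; a heuristic for the problem searches the space of partitions for the one maximizing this value.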
248

EM algorithm for Markov chains observed via Gaussian noise and point process information: Theory and case studies

Damian, Camilla, Eksi-Altay, Zehra, Frey, Rüdiger January 2018
In this paper we study parameter estimation via the Expectation Maximization (EM) algorithm for a continuous-time hidden Markov model with diffusion and point process observation. Inference problems of this type arise for instance in credit risk modelling. A key step in the application of the EM algorithm is the derivation of finite-dimensional filters for the quantities that are needed in the E-Step of the algorithm. In this context we obtain exact, unnormalized and robust filters, and we discuss their numerical implementation. Moreover, we propose several goodness-of-fit tests for hidden Markov models with Gaussian noise and point process observation. We run an extensive simulation study to test speed and accuracy of our methodology. The paper closes with an application to credit risk: we estimate the parameters of a hidden Markov model for credit quality where the observations consist of rating transitions and credit spreads for US corporations.
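The filters derived in the paper are continuous-time objects for diffusion and point-process observations; as a discrete-time analogue (a sketch for orientation, not the paper's actual filter), the unnormalized forward recursion for a finite-state hidden Markov model computes exactly the kind of quantities an E-step builds on. Names are illustrative.

```python
def forward_filter(obs, pi, A, B):
    """Unnormalised forward (filtering) recursion for a discrete HMM:
    alpha_t(i) = P(o_1, ..., o_t, X_t = i), given initial distribution
    pi, transition matrix A and emission matrix B (B[i][o] is the
    probability of observing symbol o in state i).  Returns the list
    of alpha vectors; summing the last one gives the likelihood."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    history = [alpha[:]]
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
        history.append(alpha[:])
    return history
```

In the EM iteration these forward quantities (together with a backward pass) yield the expected state occupancies and transition counts needed to re-estimate the parameters; the paper's contribution is deriving the analogous finite-dimensional, unnormalized and robust filters in continuous time.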
249

Structural and functional brain plasticity for statistical learning

Karlaftis, Vasileios Misak January 2018
Extracting structure from initially incomprehensible streams of events is fundamental to a range of human abilities: from navigating in a new environment to learning a language. These skills rely on our ability to extract spatial and temporal regularities, often with minimal explicit feedback, an ability known as statistical learning. Despite the importance of statistical learning for making perceptual decisions, we know surprisingly little about the brain circuits involved and how they change when we learn temporal regularities. In my thesis, I combine behavioural measurements, Diffusion Tensor Imaging (DTI) and resting-state fMRI (rs-fMRI) to investigate the structural and functional circuits that are involved in statistical learning of temporal structures. In particular, I compare structural connectivity as measured by DTI and functional connectivity as measured by rs-fMRI before vs. after training to investigate learning-dependent changes in human brain pathways. Further, I combine the two imaging modalities using graph theory and regression analyses to identify key predictors of individual learning performance. Using a prediction task in the context of sequence learning without explicit feedback, I demonstrate that individuals adapt to the environment's statistics as they change over time from simple repetition to probabilistic combinations. Importantly, I show that learning of temporal structures relates to decision strategy, which varies among individuals between two prototypical distributions: matching the exact sequence statistics, or selecting the most probable outcome in a given context (i.e. maximising). Further, combining DTI and rs-fMRI, I show that learning-dependent plasticity in dissociable cortico-striatal circuits relates to decision strategy. In particular, matching relates to connectivity between visual cortex, hippocampus and caudate, while maximisation relates to connectivity between frontal and motor cortices and striatum.
These findings have potential translational applications, as alternate brain routes may be re-trained to support learning ability when specific pathways (e.g. memory-related circuits) are compromised by age or disease.
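The matching-versus-maximising distinction above has a classic quantitative counterpart (a textbook illustration, not an analysis from the thesis): for a binary stream where one outcome occurs with probability q, the expected prediction accuracy of each strategy can be computed directly.

```python
def matching_accuracy(q):
    """Expected accuracy of probability matching on a Bernoulli(q)
    stream: the observer predicts outcome 1 with probability q,
    so accuracy is q^2 + (1 - q)^2."""
    return q * q + (1 - q) * (1 - q)

def maximising_accuracy(q):
    """Expected accuracy of always predicting the more probable
    outcome: max(q, 1 - q)."""
    return max(q, 1 - q)
```

For q = 0.8, matching yields 68% expected accuracy while maximising yields 80%; maximising is never worse, which is why the strategy an individual settles on is informative about their learning.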
250

Gestão de recursos de rádio para otimização da qualidade de experiência em sistemas sem fio / Radio resource management for quality of experience optimization in wireless networks

Victor Farias Monteiro 15 July 2015
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico / Ericsson Brasil / A new generation of wireless networks, the 5th Generation (5G), is predicted for beyond 2020. For 5G, a huge number of services based on Machine-Type Communications (MTCs) is foreseen in different fields, such as health care, smart metering and security, each one requiring different throughput rates, latency, processing capacity, energy efficiency, etc. Independently of the service type, the customers still need to be satisfied, which is imposing a shift of paradigm towards incorporating the user as the most important factor in wireless network management. This shift of paradigm drove the creation of the Quality of Experience (QoE) concept, which describes the service quality subjectively perceived by the users. QoE is generally evaluated by a Mean Opinion Score (MOS) ranging from 1 to 5. In this context, QoE concepts can be considered with different objectives, such as increasing battery life, optimizing handover decisions, enhancing access network selection and improving Radio Resource Allocation (RRA). Regarding RRA, in this master's thesis we consider QoE requirements when managing the limited resources of a communication system, such as frequency spectrum and transmit power. More specifically, we study a radio resource assignment and power allocation problem that aims at maximizing the minimum MOS of the users in a system, subject to attaining a minimum number of satisfied users.
Initially, we formulate a new optimization problem taking into account constraints on the total transmit power and on the fraction of users that must be satisfied, which is an important topic from an operator's point of view. The referred problem is non-linear and hard to solve. However, we transform it into a simpler form, a Mixed Integer Linear Problem (MILP), that can be optimally solved using standard numerical optimization methods. Due to the complexity of obtaining the optimal solution, we also propose a heuristic solution to this problem, called Power and Resource Allocation Based on Quality of Experience (PRABE). We evaluate the proposed method by means of simulations and the obtained results show that it outperforms some existing algorithms and performs close to the optimal solution.
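The max-min MOS objective with a satisfaction constraint, formulated above as an MILP, can be illustrated on a toy scale by exhaustive search over resource-block assignments (the function, the MOS table and its values are all made up for illustration; the thesis solves the real problem with standard MILP solvers and the PRABE heuristic).

```python
from itertools import product

def max_min_mos(mos_table, n_blocks, min_satisfied, mos_threshold=3.0):
    """Exhaustively assign n_blocks resource blocks to users so as to
    maximise the minimum MOS, subject to at least min_satisfied users
    reaching mos_threshold.  mos_table[u][b] is the MOS user u gets
    when holding b blocks (b = 0..n_blocks)."""
    n_users = len(mos_table)
    best, best_alloc = None, None
    for alloc in product(range(n_blocks + 1), repeat=n_users):
        if sum(alloc) != n_blocks:
            continue  # must hand out exactly the available blocks
        scores = [mos_table[u][alloc[u]] for u in range(n_users)]
        if sum(s >= mos_threshold for s in scores) < min_satisfied:
            continue  # satisfaction constraint violated
        worst = min(scores)
        if best is None or worst > best:
            best, best_alloc = worst, alloc
    return best, best_alloc
```

Even this toy version shows the tension the thesis exploits: concentrating blocks on one user satisfies that user but starves the minimum, while the max-min optimum spreads resources to lift the worst-off user as far as the satisfaction constraint allows.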
