
Minimally invasive assessment of lymphatic pumping pressure using near infrared imaging

Akin, Ryan E. 14 January 2013
Although the major functions of the lymphatic system are fairly well defined, the lymphatic vasculature has yet to be characterized as thoroughly as its blood vasculature counterpart. Recent advances in optical imaging techniques have allowed more detailed and quantitative evaluations of lymph flow dynamics and mechanisms. The rat tail is often used for investigations of lymph flow because of the simple geometry, superficial location, and available disease progression models of its collecting lymphatic vessels. In this study, a pressure cuff system was fabricated and coupled with an existing functional near-infrared (NIR) imaging system to measure the overall pumping pressure of the lymphatic vessels of a rat tail. In addition to adapting the system for use on rodents, it improves in several ways on previous systems used for measuring lymphatic pumping pressure in humans: the system described here employs closed-loop feedback control of pressure application at smaller, more precise intervals. Using this device, a significant difference in lymphatic vessel pumping pressure was detected between a control case and a treatment case in which a vasoactive substance containing a nitric oxide donor (GTNO ointment) was applied to the tail. Although nitric oxide is known to play a crucial physiological role in the propagation of flow through lymphatic vessels, this study is the first to quantify the significant pharmacological reduction in pumping pressure that it produces.
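The abstract mentions closed-loop feedback control of cuff pressure but gives no implementation details, so the following is a purely illustrative sketch: a proportional controller driving an invented first-order leaky "cuff" model toward a target pressure. The dynamics, gain, leak rate and units are all assumptions for illustration, not the thesis hardware.

```python
def run_cuff(target_mmhg, kp=0.5, dt=0.05, steps=200, leak=0.02):
    """Toy closed-loop cuff: proportional control of inflation against a
    leaky first-order 'plant'. All dynamics here are hypothetical."""
    p = 0.0          # current cuff pressure, mmHg
    trace = []
    for _ in range(steps):
        error = target_mmhg - p            # feedback: target minus measurement
        p += dt * (kp * error - leak * p)  # proportional command vs. leak
        trace.append(p)
    return trace

trace = run_cuff(40.0)
print(round(trace[-1], 2))
```

With pure proportional control and a leak term, the pressure rises monotonically and settles slightly below the target; real controllers typically add integral action to remove that steady-state error.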

Optimal Reinsurance Designs: from an Insurer’s Perspective

Weng, Chengguo 09 1900
The research on optimal reinsurance design dates back to the 1960s. For nearly half a century, the quest for optimal reinsurance designs has remained a fascinating subject, drawing significant interest from both academics and practitioners. Its fascination lies in its potential as an effective risk management tool for insurers. There are many ways of formulating the optimal design of reinsurance, depending on the chosen objective and constraints. In this thesis, we address the problem of optimal reinsurance design from an insurer's perspective. For an insurer, appropriate use of reinsurance helps to reduce adverse risk exposure and improve the overall viability of the underlying business. On the other hand, reinsurance incurs additional cost to the insurer in the form of the reinsurance premium. This implies a classical risk-and-reward tradeoff faced by the insurer. The primary objective of the thesis is to develop theoretically sound and yet practical solutions in the quest for optimal reinsurance designs. To achieve this objective, the thesis is divided into two parts. In the first part, a number of reinsurance models are developed and their optimal reinsurance treaties are derived explicitly. This part focuses on risk measure minimization models and derives the optimal reinsurance treaties under two of the most common risk measures, Value-at-Risk (VaR) and the Conditional Tail Expectation (CTE). Additional important economic factors, such as the reinsurance premium budget and the insurer's profitability, are also considered. The second part proposes an innovative method of formulating reinsurance models, which we refer to as the empirical approach because it exploits the insurer's empirical loss data directly. The empirical approach has the advantage of being practical and intuitively appealing.
It is motivated by the difficulty that reinsurance models are often infinite-dimensional optimization problems, so explicit solutions are achievable only in special cases. The empirical approach effectively reformulates the optimal reinsurance problem as a finite-dimensional optimization problem. Furthermore, we demonstrate that second-order cone programming can be used to obtain the optimal solutions for a wide range of reinsurance models formulated by the empirical approach.
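As a concrete illustration of the ingredients of such an empirical approach, here is a minimal sketch of plug-in VaR and CTE estimates computed directly from a loss sample. The lognormal loss model, the 95% level, and the function names are assumptions chosen for illustration; the thesis's actual optimization models are far richer.

```python
import random

def empirical_var(losses, alpha=0.95):
    """Empirical Value-at-Risk: the alpha-quantile of the loss sample."""
    s = sorted(losses)
    return s[int(alpha * len(s))]

def empirical_cte(losses, alpha=0.95):
    """Empirical Conditional Tail Expectation: mean loss beyond VaR."""
    s = sorted(losses)
    tail = s[int(alpha * len(s)):]
    return sum(tail) / len(tail)

random.seed(0)
# Hypothetical heavy-tailed loss sample (lognormal)
losses = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]
var95, cte95 = empirical_var(losses), empirical_cte(losses)
print(round(var95, 2), round(cte95, 2))
```

Both estimators are simple order-statistic quantities; in an empirical reinsurance model they would appear inside the objective or constraints of a finite-dimensional program over the observed losses.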

Underground in the Cloud: A qualitative study of the digital music platform Soundcloud

Kuylenstierna, Adam January 2011
During the 2000s, the media and music climate has been in flux: the conditions for sharing, distributing and discovering music have changed fundamentally. Soundcloud, a digital music platform launched in the autumn of 2008, stands in many ways at the center of this development. The overall aim of this thesis is to thematically analyze and problematize the digital music platform Soundcloud, drawing on qualitative interviews with artists, producers and DJs active on the electronic scene, around a number of themes connected to technology and music, social functions and relations, a changing music landscape, and Soundcloud's partnership with Audible Magic. More specifically, the thesis examines how users relate to the experience of music through the particular technology and graphical framing that Soundcloud provides, as well as the social functions it has implemented. It further problematizes the view of Soundcloud as a community, the relation between global and local belonging, and how artists and listeners relate to a new media and music climate. Finally, the thesis highlights Soundcloud's recently initiated partnership with Audible Magic, a company specializing in automated identification of media content. The review of earlier research covers the so-called mod scene and work on the complex relationship between music, technology and social practices. The theoretical framework is based on Chris Anderson's "Long Tail" theory, with emphasis on the democratization of the means of distribution and production and on new filters for selection and for matching supply with demand, together with Gerd Leonhard's contemporary research on the new media and music climate that lies ahead. The study highlights Soundcloud's distinctive graphical interface and social functions and their impact, in both a positive and a negative sense, on how music is experienced.
The study shows that Soundcloud can be seen as a step forward in making the sharing of music in the digital age more social, a development that is nevertheless slowed by the fact that the ties between users are in many cases far too weak, partly because of Soundcloud's overly powerful filtering and selection mechanisms. One consequence is that Soundcloud itself is often not seen as a community, but merely as a tool or a link between local nodes. This conclusion is strengthened by the fact that all informants emphasized local belonging as fundamental: musical collaborations via a digital music platform such as Soundcloud are, in their view, never going to replace the creative exchange that takes place physically between people. The partnership with Audible Magic is still at an early stage. The study shows that the collaboration may have both negative and positive consequences: negative in that DJ mixes may disappear from Soundcloud, positive in that stronger copyright enforcement may act as an incentive for artists to be more original in their work.

On Target Marketing in Mobile Devices: MBA thesis in marketing

Wessén, Fredrik, Forsberg, Mats January 2010
In the best of worlds, all marketing is relevant. This study sheds light on, and creates understanding of, how to capture the opportunities for target marketing opened up by recent technical development. Customers often perceive marketing on the internet as annoying, embarrassing, repetitive and sometimes even noisy, and companies struggle to focus their marketing efforts on the areas that return the most value on the marketing investment. In the conflict between companies pushing marketing messages and customers trying to avoid them, a growing mistrust is fed. Well-established businesses are challenged by new companies cutting in between the content providers and their customers, and a new business model built on the Long Tail phenomenon is shaking the old media houses' market position. Smartphones and netbooks are merging into mobile devices, which opens a number of opportunities for target marketing. This study argues that mobile devices are personal and, as a consequence, open up the possibility of target marketing aimed at individuals. There are, however, obstacles to overcome: one challenge lies in balancing marketing benefits against violations of the customers' personal integrity. From literature and case studies, light is shed on the state of practice of rules and regulations, the old media houses Schibsted and Aftonbladet, the search engine provider Google, and a marketing agency, Mobiento Mobile Marketing. Trends and best practices stand out as the most important factors for a company seeking to become a successful target marketing actor. A "target marketing house" concept points out four significant areas in which companies can benefit from the power of target marketing in mobile devices. Throughout this study, protection of personal integrity and personal data has emerged as a key factor for a mutual and trustful customer relationship. This is considered a precondition both for behavioural segmentation and for a mutually rewarding customer dialogue.

Comparing Approximations for Risk Measures Related to Sums of Correlated Lognormal Random Variables

Karniychuk, Maryna 09 January 2007
In this thesis the performances of different approximations are compared for a standard actuarial and financial problem: the estimation of quantiles and conditional tail expectations of the final value of a series of discrete cash flows. To calculate risk measures such as quantiles and Conditional Tail Expectations, one needs the distribution function of the final wealth. The final value of a series of discrete payments in the considered model is a sum of dependent lognormal random variables. Unfortunately, its distribution function cannot be determined analytically, so one usually has to resort to time-consuming Monte Carlo simulations. Since computational time remains a serious drawback of Monte Carlo simulation, several analytical techniques for approximating the distribution function of final wealth are proposed within this thesis: the widely used moment-matching approximations and innovative comonotonic approximations. Moment-matching methods approximate the unknown distribution function by a given one in such a way that some characteristics (in the present case the first two moments) coincide. The ideas of two well-known approximations are described briefly, and analytical formulas for valuing quantiles and Conditional Tail Expectations are derived for both. Recently, a group of researchers at the Catholic University of Leuven in Belgium derived comonotonic upper and lower bounds for sums of dependent lognormal random variables; these are bounds in the sense of convex order. To provide the theoretical background for the comonotonic approximations, several fundamental ordering concepts, such as stochastic dominance, stop-loss order and convex order, and some important relations between them are introduced. The last two concepts are closely related: both stochastic orders express which of two random variables is the "less dangerous/more attractive" one.
The central idea of the comonotonic upper bound approximation is to replace the original sum, representing final wealth, by a new sum whose components have the same marginal distributions as the components of the original sum but a "more dangerous/less attractive" dependence structure. The upper bound, or, mathematically, the convex-largest sum, is obtained when the components of the sum are the components of a comonotonic random vector. Fundamental concepts of comonotonicity theory that are important for the derivation of the convex bounds are therefore introduced, and the most widespread examples of comonotonicity arising in a financial context are described. In addition to the upper bound, a lower bound can be derived as well; this provides a measure of the reliability of the upper bound. The lower bound approach is based on the technique of conditioning: it is obtained by applying Jensen's inequality for conditional expectations to the original sum of dependent random variables. Two slightly different versions of the conditioning random variable are considered in this thesis, giving rise to two approaches referred to as the comonotonic lower bound and the comonotonic "maximal variance" lower bound. Special attention is given to the class of distortion risk measures. It is shown that the quantile risk measure, as well as the Conditional Tail Expectation (under some additional conditions), belongs to this class. It is proved that both risk measures under consideration are additive for a sum of comonotonic random variables, i.e., the quantile and Conditional Tail Expectation of the comonotonic upper and lower bounds can easily be obtained by summing the corresponding risk measures of the marginals involved. A special subclass of distortion risk measures, the concave distortion risk measures, is also considered.
It is shown that the quantile risk measure is not a concave distortion risk measure, while the Conditional Tail Expectation (under some additional conditions) is. A theoretical justification is given for the fact that the "concave" Conditional Tail Expectation preserves the convex order relation between random variables, and it is shown that this property does not necessarily hold for the quantile risk measure, as it is not a concave risk measure. Finally, the accuracy and efficiency of the two moment-matching approximations and of the comonotonic upper bound, comonotonic lower bound and "maximal variance" lower bound approximations are examined for a wide range of parameters by comparison with results obtained by Monte Carlo simulation. The numerical results show that, in the setting considered, the lower bound approach generally outperforms the other methods. Moreover, the numerical results confirm that the Conditional Tail Expectation preserves the convex order relation between the convex bounds for the final wealth, and that this property does not necessarily hold for the quantile.
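The additivity of quantiles for comonotonic sums, which the abstract highlights, is easy to check numerically. This sketch (with assumed lognormal(0, sigma) marginals, levels and sample sizes chosen for illustration) compares the sum of marginal 95% quantiles against a Monte Carlo quantile of a comonotonic sum driven by one common normal factor.

```python
import math
import random
from statistics import NormalDist

nd = NormalDist()
alpha = 0.95
sigmas = [0.3, 0.5, 0.8]   # assumed lognormal(0, sigma) marginals

# Quantile additivity: for a comonotonic sum, the alpha-quantile of the
# sum equals the sum of the marginal alpha-quantiles.
additive = sum(math.exp(s * nd.inv_cdf(alpha)) for s in sigmas)

# Monte Carlo check: one common normal driver Z makes the vector
# comonotonic, since every component exp(sigma * Z) is increasing in Z.
random.seed(1)

def comonotonic_sum():
    z = random.gauss(0.0, 1.0)
    return sum(math.exp(s * z) for s in sigmas)

samples = sorted(comonotonic_sum() for _ in range(200_000))
mc = samples[int(alpha * len(samples))]
print(round(additive, 3), round(mc, 3))
```

The two numbers agree up to Monte Carlo error, illustrating why the comonotonic bounds make the risk measures of the approximating sums cheap to evaluate.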

The Principle of Scaling of Geographic Space and its Application in Urban Studies

Liu, Xintao January 2012
Geographic space is the large-scale, continuous space that encircles the earth and in which human activities occur. The study of geographic space has drawn attention in many different fields and has been applied in a variety of studies, including those on cognition, urban planning and navigation systems. A scaling property indicates that small objects are far more numerous than large ones, i.e., that the sizes of objects are extremely diverse. The concept of scaling resembles a fractal in geometric terms and a power law distribution from the perspective of statistical physics, but it differs from both in terms of application. Combining the concepts of geographic space and scaling, this thesis proposes the concept of the scaling of geographic space, which refers to the phenomenon that small geographic objects or representations are far more numerous than large ones. From the perspectives of statistics and mathematics, the scaling of geographic space can be characterized by the fact that the sizes of geographic objects follow heavy-tailed distributions, i.e., special non-linear relationships between the values of variables and their probabilities. In this thesis, the heavy-tailed distributions comprise the power law, lognormal, exponential, power law with an exponential cutoff, and stretched exponential; the first three are basic distributions and the last two are degenerate versions of them. If the measurements of a set of geographic objects follow a heavy-tailed distribution, then their mean value divides them into two groups: large ones (a low percentage) whose values lie above the mean, and small ones (a high percentage) whose values lie below it. This regularity is termed the head/tail division rule; that is, a two-tier hierarchical structure can be obtained naturally. The scaling property of geographic space and the head/tail division rule are verified at the city and country levels from the perspectives of axial lines and blocks, respectively.
In the study of geographic space, the most important concept is the geographic representation, which represents or partitions a large-scale geographic space into numerous small pieces, e.g., the vector and raster data of conventional spatial analysis. In different contexts, each geographic representation carries different geographic implications and a rich partial knowledge of space. The emergence of geographic information science (GIScience) and volunteered geographic information (VGI) has greatly enabled the generation of new types of geographic representations. In addition to the old axial lines, this thesis generates several types of representations of geographic space: (a) blocks decomposed from road segments, each of which forms a minimum cycle, such as city and field blocks; (b) natural streets generated from street center lines using the Gestalt principle of good continuity; (c) new axial lines, defined as the least number of individual straight line segments mutually intersected along natural streets; (d) the fewest-turn map directions (routes), which possess a hierarchical structure and indicate the scaling of geographic space; and (e) spatio-temporal clusters of the stop points in the trajectories of large-scale floating car data. Based on the generated geographic representations, this thesis further applies the scaling property and the head/tail division rule to these representations for urban studies. First, all of the above geographic representations demonstrate the scaling property, which indicates the scaling of geographic space. Furthermore, the head/tail division rule performs well in obtaining the hierarchical structures of geographic objects. In a sense, the scaling property reveals the hierarchical structure of geographic objects.
Based on the above analysis and findings, several urban studies are performed: (1) new axial lines are generated based on natural streets for a better understanding of urban morphologies; (2) the fewest-turn and shortest map directions are computed; (3) urban sprawl patches are identified based on the statistics of blocks and natural cities; (4) spatio-temporal clusters of long stop points are categorized into hotspots and traffic jams; and (5) an across-country comparison of hierarchical spatial structures is performed. The overall contribution of this thesis is, first, to propose the principle of the scaling of geographic space together with the head/tail division rule, which provide a new, quantitative perspective for efficiently reducing the high degree of complexity of, and effectively solving issues in, urban studies. Several successful applications show that the scaling of geographic space and the head/tail division rule can be applied as a universal law, in particular to urban studies and other fields. The data sets generated via an intensive geo-computation process are as large as hundreds of gigabytes and will be of great value to further data mining studies. (Hägerstrand project: "GIS-based mobility information for sustainable urban planning and design".)
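The head/tail division rule as stated can be expressed in a few lines. The Pareto toy data below is an assumption standing in for real geographic object sizes; the point is only that, for heavy-tailed data, the head above the mean is a small minority.

```python
import random

def head_tail_division(values):
    """Split a list at its mean: head = values above the mean (the large
    minority), tail = values at or below it (the small majority)."""
    m = sum(values) / len(values)
    head = [v for v in values if v > m]
    tail = [v for v in values if v <= m]
    return head, tail

random.seed(7)
# Assumed Pareto-distributed object sizes standing in for geographic data
sizes = [random.paretovariate(1.2) for _ in range(50_000)]
head, tail = head_tail_division(sizes)
print(round(len(head) / len(sizes), 3))   # the head is a low percentage
```

Applying the same split recursively to the head is what yields the multi-tier hierarchical structures the thesis exploits.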

Analysis of the contagion effect of transnational financial crises: a copula approach

莊旭明, Chuang, Shiu Ming Unknown Date
This thesis examines whether the dependence between the U.S. and Asian stock markets changed noticeably after the 2008 global financial crisis. Using stock market data for the United States, Singapore, Taiwan, Japan and Thailand from 2005 to 2012, we investigate whether the dependence between the markets exhibits asymmetry. We first test whether the U.S. market had a contagion effect on the other four markets, and use data from different periods to test for contagion and to detect tail dependence between the markets under extreme conditions. Finally, we fit different copula functions to find the model that best suits the data.
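A crude empirical check for the kind of tail dependence the thesis tests for might look like the following sketch. The one-factor model generating the two "market" series, the 5% threshold, and the function name are all illustrative assumptions, not the thesis's copula estimation procedure.

```python
import random

def lower_tail_dependence(x, y, q=0.05):
    """Empirical estimate of P(Y below its q-quantile | X below its
    q-quantile). Near q suggests tail independence; near 1, joint crashes."""
    xq = sorted(x)[int(q * len(x))]
    yq = sorted(y)[int(q * len(y))]
    in_x_tail = [(a, b) for a, b in zip(x, y) if a <= xq]
    return sum(1 for _, b in in_x_tail if b <= yq) / len(in_x_tail)

random.seed(3)
n = 100_000
# Assumed one-factor model for two markets' daily returns
common = [random.gauss(0.0, 1.0) for _ in range(n)]
us = [c + 0.5 * random.gauss(0.0, 1.0) for c in common]
asia = [c + 0.5 * random.gauss(0.0, 1.0) for c in common]
ltd = lower_tail_dependence(us, asia)
print(round(ltd, 3))
```

Copula fitting goes further than this plug-in estimate by choosing a parametric dependence structure whose implied tail behavior matches such empirical counts.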

Site blocking effects on adsorbed polyacrylamide conformation

Brotherson, Brett Andrew 06 November 2007
The use of polymers as flocculating additives is common practice in many manufacturing environments. However, exactly how these polymers interact with surfaces remains poorly understood. One property thought to be very important to flocculation is the adsorbed polymer's conformation. A substantial amount of previous work, mainly based on simulations, has been performed to develop the theory of adsorbed polymer conformations, yet there is little experimental work that directly verifies current theory. To optimize the use of polymer flocculants in industrial applications, a better understanding of an adsorbed polymer's conformation on a surface, beyond theoretical simulations, is necessary. This work looks specifically at site blocking, which has a broad impact on flocculation, adsorption and surface modification, and investigates its effects on the resulting adsorbed polymer conformation. Experimental methods that would allow direct determination of adsorbed polymer conformational details, and be comparable with previous experimental results, were first identified or developed. The conformation of an adsorbed polymer was then characterized using dynamic light scattering, a currently accepted experimental technique, to allow comparison of this work's results with past literature. Next, a new technique using atomic force microscopy was developed, building on previous experimental techniques, to allow direct determination of an adsorbed polymer's loop lengths; this method was also able to quantify changes in the length of adsorbed polymer tails. Finally, mesoscopic simulation was attempted using dissipative particle dynamics.
To learn more about an adsorbed polymer's conformation, three different environments were analyzed: an adsorbed polymer on a surface in water, in aqueous solutions of varying ionic strength, and on a surface functionalized with site blocking additives. This work investigated these scenarios using a low-charge-density, high-molecular-weight cationic polyacrylamide. Three different substrates for polymer adsorption were analyzed: mica, anionic latex and glass. It was determined that, consistent with previous studies, the adsorbed polymer layer thickness in water is relatively small even for high-molecular-weight polymers, on the order of tens of nanometers. The loop length distribution of a single polymer, experimentally verified for the first time, revealed a broad span of loop lengths, as long as 1.5 microns, although the bulk of the distribution lay between 40 and 260 nanometers. For the first time, previous theoretical predictions regarding the effect of salt on adsorbed polymer conformation were confirmed experimentally: the adsorbed polymer layer thickness expanded with increasing ionic strength of the solvent, and atomic force microscopy showed that the adsorbed polymer loop lengths and tail lengths increased with increasing ionic strength, supporting the results found using dynamic light scattering. The effect of site blocking additives on a single polymer's conformation was also investigated for the first time; their addition produced results strikingly similar to those of adding salt to the medium. The changes in an adsorbed polymer's loop lengths were found to be inconsistent and minimal, but the adsorbed polymer's free tail length was found to increase with increasing site blocking additive levels. These results were obtained using either PDADMAC or cationic nanosilica as the site blocking additive.

Tail asymptotics of queueing networks with subexponential service times

Kim, Jung-Kyung 06 July 2009
This dissertation is concerned with the tail asymptotics of queueing networks with subexponential service time distributions. Our objective is to investigate the tail characteristics of key performance measures, such as cycle times and waiting times, in a variety of queueing models that arise in applications such as communication and manufacturing systems. First, we focus on a general class of closed feedforward fork-and-join queueing networks under the assumption that the service time distribution of at least one station is subexponential. Our goal is to derive the tail asymptotics of transient cycle times and waiting times, and we argue that under certain conditions the asymptotic tail distributions remain the same for stationary cycle times and waiting times. We also provide numerical experiments to examine how quickly the tail probabilities of cycle times and waiting times converge to their asymptotic counterparts. Next, we consider closed tandem queues with finite buffers between stations, again assuming that at least one station has a subexponential service time distribution. We analyze this system under both communication blocking and manufacturing blocking rules; we are interested in the tail asymptotics of transient cycle times and waiting times, and we study under which conditions on the system parameters a stationary regime exists and the transient results can be generalized to their stationary counterparts, providing numerical examples to illustrate the convergence behavior. Finally, we study open tandem queueing networks with subexponential service time distributions, assuming that the number of customers in front of the first station is infinite and that there is infinite room for finished customers after the last station, while the buffer between two consecutive stations is finite.
Using (max,+) linear recursions, we investigate the tail asymptotics of transient response times and waiting times under both the communication blocking and manufacturing blocking schemes. We also discuss under which conditions these results can be generalized to the tail asymptotics of stationary response times and waiting times, and we provide numerical examples to investigate the convergence of the tail probabilities of transient response times and waiting times to their asymptotic counterparts.
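The (max,+) style recursion referred to above can be sketched directly for the simplest open tandem case with infinite buffers (the blocking variants the thesis studies add extra max-terms). The deterministic arrivals, the Pareto service law (a standard subexponential example) and all parameter values are illustrative assumptions.

```python
import random

def tandem_departures(service, arrivals):
    """Departure epochs via the (max,+) recursion with infinite buffers:
    D[j][i] = max(upstream departure, previous departure at j) + S[j][i]."""
    m, n = len(service), len(service[0])
    D = [[0.0] * n for _ in range(m)]
    for i in range(n):
        for j in range(m):
            upstream = arrivals[i] if j == 0 else D[j - 1][i]
            previous = D[j][i - 1] if i > 0 else 0.0
            D[j][i] = max(upstream, previous) + service[j][i]
    return D

random.seed(11)
n_cust, n_stat = 5_000, 3
arrivals = [2.0 * i for i in range(n_cust)]          # deterministic arrivals
# Pareto(2.5) service times: subexponential, mean 5/3 < interarrival 2.0
service = [[random.paretovariate(2.5) for _ in range(n_cust)]
           for _ in range(n_stat)]
D = tandem_departures(service, arrivals)
sojourn = [D[n_stat - 1][i] - arrivals[i] for i in range(n_cust)]
print(round(max(sojourn), 2))
```

With heavy-tailed service times, the largest sojourn times are driven by single huge service requirements, which is exactly the "single big event" intuition behind subexponential tail asymptotics.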
