  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Towards a Framework For Resource Allocation in Networks

Ranasingha, Maththondage Chamara Sisirawansha 26 May 2009 (has links)
Network resources (such as bandwidth on a link) are not unlimited and must be shared by all networked applications in some manner of fairness. This calls for effective strategies that enable optimal utilization of these scarce network resources among the applications that share the network. Although several rate controllers have been proposed in the literature to address the issue of optimal rate allocation, they do not appear to capture other factors of critical concern. For example, consider a battlefield data-fusion application where a fusion center wishes to allocate more bandwidth to incoming flows that are perceived to be more accurate and important. For such applications, network users should consider the transmission rates of other users during rate allocation. Hence, a rate controller should honor application-specific rate coordination directives given by the underlying application. The work reported herein addresses how a rate controller may establish and maintain such application-specific rate coordination directives. We identify three major challenges in meeting this objective. First, the application-specific performance measures must be formulated as rate coordination directives. Second, these rate coordination directives must be incorporated into a rate controller; of course, the resulting controller must co-exist with ordinary rate controllers, such as TCP Reno, in a shared network. Finally, a mechanism must be put in place for identifying those flows that require the rate allocation directives. The first challenge is addressed by means of a utility function that allows the performance of the underlying application to be maximized. The second challenge is addressed within the Network Utility Maximization (NUM) framework: the standard utility function (i.e., the utility function of the standard rate controller) is augmented by inserting the application-specific utility function as an additive term. The rate allocation problem is then formulated as a constrained optimization problem whose objective is to maximize the aggregate utility of the network. The gradient projection algorithm is used to solve this problem, and the resulting solution is formulated and implemented as a window update function. To address the final challenge we resort to a machine learning algorithm: we demonstrate how data features estimated from only a fraction of the flow can be used as evidential input to a series of Bayesian Networks (BNs), and we account for the uncertainty introduced by partial flow data through the Dempster-Shafer (DS) evidential reasoning framework.
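The NUM machinery this abstract builds on can be illustrated with a toy sketch. This is not the author's implementation: the capacity, utility weights, and step size below are invented, and weighted logarithmic utilities stand in for the augmented application-specific utilities. Two flows share one link, and a dual gradient-projection iteration on the link price drives the rates to the aggregate-utility optimum.

```python
# Toy NUM rate allocation via dual gradient projection (illustrative only).
# Two flows share one link of capacity C; utilities are U_s(x) = w_s * log(x).
# Each source picks the rate maximizing U_s(x) - price * x, i.e. x_s = w_s / price;
# the link adjusts its price by a projected gradient step on the excess demand.

C = 10.0               # link capacity (made-up value)
w = [1.0, 3.0]         # per-flow utility weights (assumption)
price = 1.0            # initial link price
step = 0.05            # gradient step size

for _ in range(2000):
    rates = [ws / price for ws in w]           # each source's best response
    excess = sum(rates) - C                    # link congestion signal
    price = max(price + step * excess, 1e-9)   # projected price update

# At the optimum the link is fully used and capacity splits in proportion
# to the weights: rates converge to 2.5 and 7.5.
```

At the fixed point the price equals sum(w)/C, so the allocation is weighted proportionally fair, which is the flavor of solution the NUM framework produces for logarithmic utilities.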
3

Utility maximization in incomplete markets with random endowment

Cvitanic, Jaksa, Schachermayer, Walter, Wang, Hui January 2000 (has links) (PDF)
This paper solves a long-standing open problem in mathematical finance: to find a solution to the problem of maximizing utility from terminal wealth of an agent with a random endowment process, in the general, semimartingale model for incomplete markets, and to characterize it via the associated dual problem. We show that this is indeed possible if the dual problem and its domain are carefully defined. More precisely, we show that the optimal terminal wealth is equal to the inverse of marginal utility evaluated at the solution to the dual problem, which is in the form of the regular part of an element of (L∞)* (the dual space of L∞). (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
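The dual characterization stated in the abstract can be written out in generic notation (the symbols below are the standard ones for this duality and are assumed here, not quoted from the paper):

```latex
% \widehat{X}_T : optimal terminal wealth;  U : utility function;
% I := (U')^{-1} : inverse marginal utility;
% \widehat{Q}^{\,r} : regular part of the dual optimizer, an element of
% (L^\infty)^*;  \widehat{y} > 0 : optimal dual multiplier.
\[
  \widehat{X}_T \;=\; I\!\left( \widehat{y}\,\frac{d\widehat{Q}^{\,r}}{dP} \right),
  \qquad I := (U')^{-1}.
\]
```

That is, the optimal terminal wealth is the inverse marginal utility evaluated at (a multiple of) the density of the regular part of the dual solution, exactly as the abstract describes in words.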
4

How potential investments may change the optimal portfolio for the exponential utility

Schachermayer, Walter January 2002 (has links) (PDF)
We show that, for a utility function U: R → R having reasonable asymptotic elasticity, the optimal investment process H · S is a supermartingale under each equivalent martingale measure Q such that E[V(dQ/dP)] < ∞, where V is conjugate to U. Similar results for the special case of the exponential utility were recently obtained by Delbaen, Grandits, Rheinländer, Samperi, Schweizer, Stricker as well as Kabanov, Stricker. This result gives rise to a rather delicate analysis of the "good definition" of "allowed" trading strategies H for the financial market S. One offspring of these considerations leads to the following, at first glance paradoxical, example. There is a financial market consisting of a deterministic bond and two risky financial assets (S_t^1, S_t^2)_{0<=t<=T} such that, for an agent whose preferences are modeled by expected exponential utility at time T, it is optimal to constantly hold one unit of asset S^1. However, if we pass to the market consisting only of the bond and the first risky asset S^1, leaving the information structure unchanged, this trading strategy is no longer optimal: in this smaller market it is optimal to invest the initial endowment in the bond. (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
5

Network Traffic Control Based on Modern Control Techniques: Fuzzy Logic and Network Utility Maximization

Liu, Jungang 30 April 2014 (has links)
This thesis presents two modern control methods to address Internet traffic congestion control. They are based on a distributed traffic management framework for the fast-growing Internet traffic in which routers are deployed with intelligent or optimal data rate controllers to manage the traffic load. The first is the IntelRate (Intelligent Rate) controller, based on fuzzy logic theory. Unlike other explicit traffic control protocols that have to estimate network parameters (e.g., link latency, bottleneck bandwidth, packet loss rate, or the number of flows), our fuzzy-logic-based explicit controller measures the router queue size directly. It thus avoids the potential performance problems arising from parameter estimation while reducing computation and memory consumption in the routers. Communication QoS (Quality of Service) is assured by the scheme's strong performance, including max-min fairness, low queueing delay, and good robustness to network dynamics. Using Lyapunov's direct method, this controller is proved to be globally asymptotically stable. The second is the OFEX (Optimal and Fully EXplicit) controller, based on convex optimization. This scheme provides not only optimal bandwidth allocation but also a fully explicit congestion signal to sources. It uses the congestion signal from the most congested link, instead of the cumulative signal along a flow path. In this way it overcomes the drawback of relatively explicit controllers, which bias against multi-bottlenecked users, and significantly improves their convergence speed and throughput performance. Furthermore, the OFEX controller design considers a dynamic model by proposing a remedial measure against unpredictable bandwidth changes in contention-based multi-access networks (such as shared Ethernet or IEEE 802.11). Compared with earlier controllers, this remedy also effectively reduces the instantaneous queue size in a router, and thus significantly improves queueing delay and packet loss performance. Finally, the applications of these two controllers to wireless local area networks have been investigated, and design guidelines and limits for them are provided based on our experience.
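The core idea of a fuzzy-logic rate controller that works from the measured queue size can be sketched as follows. This is only an illustration in the spirit of IntelRate, not the thesis's controller: the membership functions, target queue, and rule outputs below are all made up.

```python
# Illustrative fuzzy-logic rate controller: the router measures its queue
# size directly and maps the deviation from a target queue into a rate
# multiplier for the sources. All shapes and constants are invented.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rate_factor(queue, target=50.0):
    """Fuzzy inference: queue below target -> speed up, above -> slow down."""
    e = queue - target                      # queue-size error (no estimation needed)
    # Fuzzify the error into three linguistic sets.
    low  = tri(e, -100, -50, 0)             # queue well below target
    ok   = tri(e, -50, 0, 50)               # queue near target
    high = tri(e, 0, 50, 100)               # queue well above target
    # Each rule outputs a crisp rate multiplier; defuzzify by weighted average.
    num = low * 1.5 + ok * 1.0 + high * 0.5
    den = low + ok + high
    return num / den if den > 0 else 1.0

# A congested router (queue above target) tells sources to slow down:
print(rate_factor(80.0))   # multiplier below 1.0
```

Because the controller acts on the directly measured queue, it needs none of the per-flow parameter estimation the abstract lists as a source of performance problems.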
6

Market with transaction costs: optimal shadow state-price densities and exponential utility maximization

Nakatsu, Hitoshi 11 1900 (has links)
This thesis discusses the financial market model with proportional transaction costs considered in Cvitanic and Karatzas (1996) (hereafter CK (1996)). For a modified dual problem introduced by Choulli (2009), I discuss solutions under weaker conditions than those of CK (1996); the solutions obtained furthermore generalize the examples treated in CK (1996). I then consider the exponential utility, which does not belong to the family of utilities considered by CK (1996) because of the Inada condition. Finally, I establish the analogues of the CK (1996) results for the exponential utility and derive further related results that exploit the specific form of the exponential utility function. This leads to a method different from that of CK (1996) for our utility maximization problem, as well as to a different notion of admissibility for financial strategies. / Mathematical Finance
8

Stepping stone or career move? A case study of rural K–12 music educators and their job attrition

Kuntzelman, Richard Ian 07 November 2016 (has links)
Teachers of rural K–12 music education are subject to attrition rates higher than those of many other professions and teaching specialties (Goldring, Taie & Riddles, 2014; Harmon, 2001; Ingersoll, 2001). Because of this, many music teachers hired to teach in rural schools are inexperienced educators who are often unaware of the specific demands unique to these jobs. Upon earning a teaching certification, many new graduates are hired in rural locations with unfamiliar teaching conditions, which can lead to workplace dissatisfaction and may contribute to the higher-than-average attrition rates (Bates, 2013; Hancock, 2008; Monk, 2007; Isbell, 2005). This dissertation is a case study of in-service music educators in the rural Western United States designed to help understand this trend. Within a theoretical framework of utility maximization to find a satisfactory person-job fit, I observed, interviewed, and collected journals from 5 participants with current or previous rural K–12 music teaching experience to determine: 1) what reasons educators consider influential in a decision to stay in or move from a teaching position, 2) what changes teachers report in their perception of job utility maximization over their careers, and 3) what some benefits and challenges of teaching in a rural music setting are. Reasons for attrition both specific to rural music education and generic to teaching were discussed in terms of each participant's perception of job satisfaction and their decisions to stay in or leave rural K–12 music teaching jobs. Participants listed five themes as influential in their attrition decisions: 1) disproportionate emphasis on athletics and pep band, 2) teacher and student absenteeism, 3) spillover work time, 4) family, and 5) administrative rapport. No individual theme was a singular indicator of attrition, nor was any theme more prominent than others in influencing a participant to keep or leave a job. Rather, the perception of each reason for attrition had a cumulative effect, and jobs were maintained or sought anew based on a combination of views of each theme. Also, participants reported stable preferences for their musical specialty, but the perception of each theme as a reason for attrition changed with time and teaching experience. Ultimately, participants revealed that rural K–12 music teaching jobs can be highly rewarding if a person is professionally flexible, willing to regularly travel long distances (with students and alone), and can appreciate the idiosyncrasies of living in remote communities.
9

Topics in Network Utility Maximization : Interior Point and Finite-step Methods

Akhil, P T January 2017 (has links) (PDF)
Network utility maximization has emerged as a powerful tool for studying flow control, resource allocation, and other cross-layer optimization problems. In this work, we study a flow control problem in the optimization framework. The objective is to maximize the sum utility of the users subject to the flow constraints of the network. The utility maximization is solved in a distributed setting: the network operator does not know the user utility functions, and the users know neither the rate choices of other users nor the flow constraints of the network. We build upon a popular decomposition technique proposed by Kelly [Eur. Trans. Telecommun., 8(1), 1997] to solve the utility maximization problem in this distributed setting. The technique decomposes the utility maximization problem into a user problem, solved by each user, and a network problem, solved by the network. We propose an iterative algorithm based on this decomposition. In each iteration, the users communicate to the network their willingness to pay for network resources, and the network allocates rates in a proportionally fair manner based on the prices communicated by the users. The new feature of the proposed algorithm is that the rates allocated by the network remain feasible at all times. We show that the iterates put out by the algorithm asymptotically track a differential inclusion, and that the solution to the differential inclusion converges to the system optimal point via Lyapunov theory. As a benchmark we use a popular algorithm due to Kelly et al. [J. of the Oper. Res. Soc., 49(3), 1998] that involves fast user updates coupled with slow network updates in the form of additive increase and multiplicative decrease of the user flows. The proposed algorithm may be viewed as one with fast user updates and fast network updates that keeps the iterates feasible at all times. Simulations suggest that our proposed algorithm converges faster than this benchmark. When the flows originate or terminate at a single node, the network problem is the maximization of a so-called d-separable objective function over the bases of a polymatroid. The solution is the lexicographically optimal base of the polymatroid. We map the problem of finding the lexicographically optimal base of a polymatroid to the geometric problem of finding the concave cover of a set of points on a two-dimensional plane, and we describe an algorithm that finds the concave cover in linear time. Next, we consider the minimization of a more general objective function, a separable convex function, over the bases of a polymatroid with a special structure. We propose a novel decomposition algorithm and prove its correctness and optimality via the theory of polymatroids. Further, motivated by the need to handle piecewise-linear concave utility functions, we extend the decomposition algorithm to the case where the separable convex functions are not continuously differentiable or not strictly convex, and again prove its correctness and optimality.
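The geometric step mentioned above, finding the concave cover of points in the plane, admits a linear-time pass when the points are already sorted by x-coordinate. The sketch below shows one standard way to do it (a Graham-scan-style stack); the point set is invented, and the thesis's own algorithm may differ in details.

```python
# Concave cover (upper concave envelope) of x-sorted points in one linear
# pass: each point is pushed once and popped at most once.

def concave_cover(points):
    """Upper concave envelope of points sorted by x-coordinate."""
    hull = []
    for p in points:
        # Pop trailing points that fall on or below the chord from
        # hull[-2] to p: cross >= 0 means a left turn or collinear triple,
        # which would violate concavity of the envelope.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

pts = [(0, 0), (1, 2), (2, 3), (3, 2.5), (4, 4), (5, 1)]
print(concave_cover(pts))   # (3, 2.5) lies below the envelope and is dropped
```

The amortized O(n) bound follows from the push-once/pop-once accounting, matching the linear-time claim in the abstract.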
10

Extending Complex Event Processing for Advanced Applications

Wang, Di 30 April 2013 (has links)
Recently, numerous emerging applications, ranging from on-line financial transactions, RFID-based supply-chain management, and traffic monitoring to real-time object monitoring, generate high-volume event streams. To meet the need for processing event data streams in real time, Complex Event Processing (CEP) technology has been developed, with a focus on detecting occurrences of particular composite patterns of events. By analyzing and constructing several real-world CEP applications, we found that CEP needs to be extended with advanced services beyond detecting pattern queries. We summarize these emerging needs in three orthogonal directions. First, for applications that require access to both streaming and stored data, we need to provide clear semantics and efficient schedulers in the face of concurrent access and failures. Second, when a CEP system is deployed in a sensitive environment such as health care, we wish to mitigate possible privacy leaks. Third, when input events do not carry the identification of the object being monitored, we need to infer the probabilistic identification of events before feeding them to a CEP engine. This dissertation therefore discusses the construction of a framework for extending CEP to support these critical services. First, existing CEP technology is limited in its capability to react to opportunities and risks detected by pattern queries. We propose to tackle this unsolved problem by embedding active-rule support within the CEP engine. The main challenge is to handle interactions between queries and reactions to queries during high-volume stream execution. We hence introduce a novel stream-oriented transactional model along with a family of stream-transaction scheduling algorithms that ensure the correctness of concurrent stream execution. We then demonstrate the proposed technology by applying it to a real-world healthcare system and evaluate the stream-transaction scheduling algorithms extensively using real-world workloads. Second, we are the first to study the privacy implications of CEP systems. Specifically, we consider how to suppress events on a stream so as to reduce the disclosure of sensitive patterns, while ensuring that nonsensitive patterns continue to be reported by the CEP engine. We formally define the problem of utility-maximizing event suppression for privacy preservation, and we design a suite of real-time solutions that eliminate private pattern matches while maximizing overall utility. Our first solution optimally solves the problem at the event-type level. The second solution, at the event-instance level, further optimizes the event-type-level solution by exploiting runtime event distributions using advanced pattern-match cardinality estimation techniques. Our experimental evaluation over both real-world and synthetic event streams shows that our algorithms are effective in maximizing utility yet efficient enough to offer near-real-time system responsiveness. Third, we observe that in many real-world object-monitoring applications where CEP technology is adopted, not all sensed events carry the identification of the object whose action they report on; we call these "non-ID-ed" events. Such non-ID-ed events prevent us from performing object-based analytics, such as tracking, alerting, and pattern matching. We propose a probabilistic inference framework that tackles this problem by inferring the missing object identification associated with an event. Specifically, as a foundation we design a time-varying graphical model to capture correspondences between sensed events and objects. Upon this model, we show how to adapt the state-of-the-art forward-backward inference algorithm to continuously infer probabilistic identifications for non-ID-ed events. More importantly, we propose a suite of strategies for optimizing the performance of inference. Our experimental results, using large-volume streams from a real-world health-care application, demonstrate the accuracy, efficiency, and scalability of the proposed technology.
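The composite-pattern detection that CEP engines perform, and that the services above extend, can be illustrated with a toy sequence matcher. This is only a sketch of the primitive, not the dissertation's engine: the event types, stream, and consumption policy below are invented.

```python
# Toy CEP primitive: detect the composite pattern SEQ(A, B) over an event
# stream, i.e. every occurrence of an event of type `first` that is later
# followed by an event of type `second`.

def match_seq(stream, first, second):
    """Return (index_first, index_second) pairs where `first` precedes `second`."""
    matches, pending = [], []
    for i, event_type in enumerate(stream):
        if event_type == first:
            pending.append(i)            # open a partial match
        elif event_type == second:
            matches.extend((a, i) for a in pending)  # complete all partials
            pending.clear()              # consume the completed partial matches
    return matches

stream = ["A", "C", "A", "B", "C", "B"]
print(match_seq(stream, "A", "B"))   # [(0, 3), (2, 3)]
```

The `pending.clear()` call implements one possible event-consumption policy; real CEP engines offer several (e.g., reusing events across matches), and it is exactly such pattern matches that the privacy work above selectively suppresses.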
