1

Top-percentile traffic routing problem

Yang, Xinan January 2012 (has links)
Multi-homing is a technology used by Internet Service Providers (ISPs) to connect to the Internet via multiple networks. This connectivity enhances the network reliability and service quality of the ISP. However, using multiple networks may impose multiple costs on the ISP. To make full use of the underlying networks at minimum cost, ISPs require a routing strategy. Naturally, the optimal routing strategy depends on the pricing regime used by the network providers. In this study we investigate a relatively new pricing regime – top-percentile pricing. Under top-percentile pricing, network providers divide the charging period into several fixed-length time intervals and calculate their charge according to the traffic volume shipped during the θ-th highest time interval. Unlike traditional pricing regimes, network design under top-percentile pricing has not been fully studied. This thesis investigates the optimal routing strategy in the case where network providers charge ISPs according to top-percentile pricing. We call this problem the Top-percentile Traffic Routing Problem (TpTRP). Since in real-world applications the ISP cannot predict the next time interval's traffic volume, in our setting the TpTRP is a multi-stage stochastic optimisation problem: routing decisions must be made at the beginning of every time interval, before the amount of traffic to be sent is known. The stochastic nature of the TpTRP is the critical difficulty of this study. Several approaches are investigated in both the modelling and solution steps of the problem. We begin by exploring several simplifications of the original TpTRP to gain insight into the features of the problem. Some of these allow analytical solutions, which lead to bounds on the achievable optimal solution. We also establish bounds by investigating several "naive" routing policies.
In the second part of this work, we build the multi-stage stochastic programming model of the TpTRP, which is hard to solve due to the integer variables introduced in the calculation of the top-percentile traffic. A lift-and-project based cutting plane method is investigated for solving this stochastic mixed-integer programme (SMIP) on very small instances of the TpTRP; however, it is too inefficient to be applicable to larger instances. As an alternative, we explore the solution of the TpTRP as a Stochastic Dynamic Programming (SDP) problem via a discretisation of the state space. This SDP model yields achievable routing policies on small instances of the TpTRP, which improve on the naive routing policies. However, the SDP approach suffers from the curse of dimensionality, which restricts its applicability. To overcome this we suggest using Approximate Dynamic Programming (ADP), which largely avoids the curse of dimensionality by exploiting the structure of the problem to construct parameterised approximations of the SDP value function, training the model iteratively until the parameters converge. The resulting ADP model, with discrete parameters for every time interval, works well on medium-sized instances of the TpTRP, though it still takes too long to train on large instances. To make realistically sized TpTRP problems solvable, we improve the ADP model by using Bézier curves/surfaces to aggregate parameters over time. This modification accelerates parameter training in the solution of the ADP model, making the realistically sized TpTRP tractable.
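The billing rule described above can be sketched directly: sort the per-interval traffic volumes and charge for the θ-th highest. The interval volumes, θ value, and unit price below are hypothetical illustrations, not figures from the thesis.

```python
def top_percentile_charge(traffic_volumes, theta, unit_price):
    """Charge based on the theta-th highest interval's traffic volume.

    traffic_volumes: traffic shipped in each fixed-length time interval
    theta: rank of the interval used for billing (1 = busiest interval)
    unit_price: price per unit of traffic in the billed interval
    """
    if not 1 <= theta <= len(traffic_volumes):
        raise ValueError("theta must be between 1 and the number of intervals")
    ranked = sorted(traffic_volumes, reverse=True)
    return unit_price * ranked[theta - 1]

# Hypothetical charging period of 10 intervals; theta=2 ignores the single
# busiest interval, so a one-interval traffic spike does not drive the bill.
volumes = [12, 7, 30, 18, 5, 22, 9, 14, 11, 25]
print(top_percentile_charge(volumes, theta=2, unit_price=1.0))  # bills the 2nd-highest interval
```

This is also why the ISP's routing problem is hard: shifting traffic in one interval changes which interval ends up at rank θ at the end of the charging period.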
2

Exploring the Efficacy of Percentile Schedules with the Amplitude of Muscular Contractions

Goodhue, Rob 05 1900 (has links)
Percentile reinforcement schedules have been used to systematically alter inter-response times, behavioral variability, breath carbon monoxide levels, duration of social behaviors, and various other properties of behavior. However, none of the previous studies have examined the effectiveness of percentile schedules in relation to the magnitude of muscular contractions. This control over magnitude of muscular responding has important implications relating to the strengthening of muscles and correct movements for patients receiving physical rehabilitation. There would be great utility in percentile schedules that can be implemented in rehabilitation situations by physical therapists and patients themselves to improve treatment outcomes – all of which could be possible without any behavioral training if the procedure is implemented via body sensors and smartphone applications. Using healthy adults and the aforementioned technology, this thesis focused on the design and testing of three percentile reinforcement schedule procedures to increase the strength of the vastus medialis muscle. Results indicate that the magnitude of muscular responses can be shaped using body sensors and contingent feedback, and the percentile schedule procedures have promising applications in the domain of physical therapy.
3

On the Estimation of Lower-end Quantiles from a Right-skewed Distribution

Wang, Hongjun 13 April 2010 (has links)
No description available.
4

The Effects of Topography on Spatial Tornado Distribution

Cox, David Austin 12 May 2012 (has links)
The role of topography in the spatial distribution of tornadoes was assessed through geospatial and statistical techniques. A 100-m digital elevation model was used to create slope, aspect, and surface roughness maps, and tornado beginning and ending points and paths were used to extract terrain information. Tornado touchdowns, liftoffs, paths, and path-land angles were examined to determine whether tornado paths occur more frequently in or along certain terrain or slopes. Statistical analyses, such as bootstrapping, were used to analyze tornado touchdowns, liftoffs, paths, and path-relative terrain angles. Results show that tornado paths are more commonly associated with downhill movement; tornadoes are less likely to move uphill, as the 73.6 percent northeast path bias represents the highest frequency of path angles. Tornado touchdowns and paths occur more often in smooth terrain than in rough terrain. Complex topographic variability appears to have no effect on the spatial distribution of tornadoes.
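Bootstrapping of the kind mentioned above resamples the observations with replacement and reads a confidence interval off the percentiles of the resampled statistic. The sketch below is a generic percentile bootstrap; the downhill-path data and the mean statistic are invented placeholders, not the study's data.

```python
import random

def bootstrap_ci(data, stat, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a statistic.

    Resamples `data` with replacement `n_resamples` times, computes `stat`
    on each resample, and returns the empirical (alpha/2, 1 - alpha/2)
    percentiles of those statistics.
    """
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_resamples))
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical: 70 of 100 tornado paths coded 1 (downhill), 30 coded 0.
paths = [1] * 70 + [0] * 30
print(bootstrap_ci(paths, lambda xs: sum(xs) / len(xs), n_resamples=2000))
```

If the interval excludes 0.5, the observed downhill bias would be unlikely under a no-preference null.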
5

The Relationship of Scoring Above or Below the 75th Percentile on the Kuder Preference Record to General Aptitude, Vocational Attitudes and Occupational Values

Orme, Terry Joseph 01 May 1973 (has links)
This study was designed to investigate the relationship of general aptitude, vocational attitudes, and occupational values to scoring above or below the 75th percentile on the Kuder Preference Record among ninth-grade students in rural southwestern Utah and southeastern Idaho. The sample consisted of 248 students who participated in Project Mace. The students were divided into two groups according to their Kuder percentile scores. The G scale of the General Aptitude Test Battery, the Vocational Development Inventory, and the Occupational Values Inventory were also administered to the subjects. The data were analyzed by a simple correlation matrix and analysis of variance. The results indicated no significant difference between the two groups on any of the instruments. The implications were that the 75th percentile probably should not be used as a criterion for strong interests, at least when relating interests to the general aptitude, attitudes, and values measured in this study; that more research is needed to fully understand the relationship of interests, aptitudes, attitudes, and values; and that more research is needed on the instruments used in this study, especially the Occupational Values Inventory and the Vocational Development Inventory.
6

Data-Driven Robust Optimization in Healthcare Applications

January 2018 (has links)
abstract: Healthcare operations have enjoyed reduced costs, improved patient safety, and innovation in healthcare policy across a huge variety of applications by tackling problems via the creation and optimization of descriptive mathematical models to guide decision-making. Despite these accomplishments, models are stylized representations of real-world applications, reliant on accurate estimations from historical data to justify their underlying assumptions. To protect against unreliable estimations, which can adversely affect the decisions generated from applications dependent on fully-realized models, techniques that are robust against misspecifications are utilized while still making use of incoming data for learning. Hence, new robust techniques are applied that (1) allow the decision-maker to express a spectrum of pessimism against model uncertainties while (2) still utilizing incoming data for learning. Two main applications are investigated with respect to these goals: the first is a percentile optimization technique for a multi-class queueing system, applied to hospital Emergency Departments; the second studies the use of robust forecasting techniques in improving developing countries' vaccine supply chains via (1) an innovative outside-of-cold-chain policy and (2) a district-managed approach to inventory control. Both of these research application areas utilize data-driven approaches that feature learning and pessimism-controlled robustness. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2018
7

A Confidence Interval Estimate of Percentile

Jou, How Coung 01 May 1980 (has links)
The confidence interval estimate of a percentile and its applications were studied. Three methods of estimating a confidence interval were introduced, and some properties of order statistics were reviewed. The Monte Carlo method, used to estimate the confidence interval, was the most important of the three. The generation of ordered random variables and the estimation of parameters are discussed in detail. A comparison of the three methods showed that the Monte Carlo method always works, whereas the K-S and simplified methods do not.
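One standard construction consistent with the order-statistics properties reviewed above is the distribution-free confidence interval for a quantile: since the number of observations at or below the p-th quantile is Binomial(n, p), a pair of order statistics whose ranks bracket enough binomial mass forms a confidence interval. The rank-search strategy below is a sketch, not the thesis's exact procedure.

```python
import math

def binom_cdf(k, n, p):
    """P(Bin(n, p) <= k), computed exactly with math.comb."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def percentile_ci(sample, p, alpha=0.05):
    """Distribution-free CI for the p-th quantile via order statistics.

    Searches for 1-based ranks (l, u) around the expected rank such that
    P(X_(l) <= x_p <= X_(u)) = F(u-1) - F(l-1) >= 1 - alpha, where F is
    the Binomial(n, p) CDF, then returns that pair of order statistics.
    """
    xs = sorted(sample)
    n = len(xs)
    center = max(1, min(n, round(p * n)))  # expected rank of the quantile
    for half in range(1, n + 1):
        l, u = max(1, center - half), min(n, center + half)
        coverage = binom_cdf(u - 1, n, p) - binom_cdf(l - 1, n, p)
        if coverage >= 1 - alpha:
            return xs[l - 1], xs[u - 1]
    return xs[0], xs[-1]  # widest interval available

# Example: 95% CI for the median of a sample of 100 values.
print(percentile_ci(list(range(1, 101)), 0.5))
```

Unlike the Monte Carlo approach, this interval needs no distributional assumption, but it can only end at observed sample values.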
8

Evaluating the Efficacy of Shaping with a Percentile Schedule to Increase the Duration of Sustained Interaction Following a Bid for Joint Attention in Children with Autism

Gutbrod, Therese 11 June 2014 (has links)
This study examined the use of shaping with a percentile schedule to increase the duration of the interaction following a bid for joint attention in children with autism. Specifically, the therapist initiated a bid for joint attention and reinforced longer successive approximations in seconds of sustained interaction with the therapist and activity. A percentile schedule ranked the most recent 10 observations and reinforcement was provided if the current observation equaled the sixth ranking. Most-to-least prompting was used if the child failed to meet the calculated criterion. Shaping with a percentile schedule of reinforcement was effective at increasing the duration of sustained interaction following a bid for joint attention, for all participants from an average baseline duration of 13 s to an average intervention duration of 215 s.
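The ranking rule above (reinforce when the current observation meets the sixth rank among the most recent ten) can be sketched as follows. This is a simplified sketch: the window size and rank match the m = 10, k = 6 setup described, but details such as whether the current observation is included in the ranked window are assumptions.

```python
from collections import deque

def make_percentile_schedule(window=10, rank=6):
    """Percentile reinforcement schedule: deliver reinforcement when the
    current observation is at least the rank-th smallest value among the
    most recent `window` observations (current observation included)."""
    history = deque(maxlen=window)

    def observe(value):
        history.append(value)
        if len(history) < window:
            return False  # too few observations to set a criterion yet
        criterion = sorted(history)[rank - 1]
        return value >= criterion

    return observe

# Hypothetical interaction durations in seconds fed to the schedule.
schedule = make_percentile_schedule()
for duration in range(1, 11):
    reinforced = schedule(duration)
print(reinforced)  # the 10th, longest duration meets the criterion
```

Because the criterion tracks the learner's own recent distribution, it rises automatically as performance improves, which is what makes the procedure a shaping method.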
9

Driving Cycle Generation Using Statistical Analysis and Markov Chains

Torp, Emil, Önnegren, Patrik January 2013 (has links)
A driving cycle is a velocity profile over time. Driving cycles can be used for environmental classification of cars and to evaluate vehicle performance. The benefit of using stochastic driving cycles instead of predefined ones, e.g. the New European Driving Cycle, is, for instance, that the risk of cycle beating is reduced. Different methods to generate stochastic driving cycles based on real-world data have been used around the world, but the representativeness of the generated driving cycles has been difficult to ensure. The possibility of generating stochastic driving cycles that capture specific features from a set of real-world driving cycles is studied. Data from more than 500 real-world trips have been processed and categorized. The driving cycles are merged into several transition probability matrices (TPMs), where each element corresponds to a specific state defined by its velocity and acceleration. The TPMs are used with Markov chain theory to generate stochastic driving cycles. The driving cycles are validated using percentile limits on a set of characteristic variables obtained from statistical analysis of real-world driving cycles. The distribution of the generated driving cycles is investigated and compared to the real-world driving cycles' distribution. The generated driving cycles prove to represent the original set of real-world driving cycles in terms of key variables determined through statistical analysis. Four different methods are used to determine which statistical variables describe the features of the provided driving cycles. Two of the methods use regression analysis. Hierarchical clustering of statistical variables is proposed as a third alternative, and the last method combines the cluster analysis with the regression analysis. The entire process is automated, and a graphical user interface is developed in Matlab to facilitate the use of the software.
/ A driving cycle is a description of how a vehicle's speed changes during a drive. Driving cycles are used, among other things, to environmentally classify cars and to evaluate vehicle performance. Different methods of generating stochastic driving cycles based on real-world data have been used around the world, but it has been difficult to reproduce natural driving cycles. The possibility of generating stochastic driving cycles that represent a set of natural driving cycles is studied. Data from over 500 trips are processed and categorized. These are used to create transition matrices in which each element corresponds to a particular state, with speed and acceleration as state variables. The matrices, together with Markov chain theory, are used to generate stochastic driving cycles. The generated driving cycles are validated using percentile limits on a number of characteristic variables computed for the natural driving cycles. The speed and acceleration distributions of the generated driving cycles are studied and compared with the natural driving cycles to ensure that they are representative. Statistical properties were compared, and the generated driving cycles proved to resemble the original set of driving cycles. Four different methods are used to determine which statistical variables describe the natural driving cycles. Two of the methods use regression analysis. Hierarchical clustering of statistical variables is proposed as a third alternative. The last method combines the cluster analysis with the regression analysis. The entire process is automated, and a graphical user interface has been developed in Matlab to facilitate use of the software.
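The TPM-plus-Markov-chain generation described above can be sketched with a single flattened state index; mapping each (velocity, acceleration) pair to such an index is assumed here and not shown, and the state sequence below is an invented placeholder for real trip data.

```python
import random

def build_tpm(states, n_states):
    """Estimate a transition probability matrix from an observed state
    sequence: tpm[a][b] is the empirical probability of moving a -> b."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    tpm = []
    for row in counts:
        total = sum(row)
        tpm.append([c / total if total else 0.0 for c in row])
    return tpm

def generate_chain(tpm, start, length, seed=0):
    """Generate a stochastic state sequence by sampling the Markov chain."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        probs = tpm[seq[-1]]
        seq.append(rng.choices(range(len(probs)), weights=probs)[0])
    return seq

# Toy observed sequence over 3 discretized states, then a generated cycle.
observed = [0, 1, 2, 1, 0, 1, 2, 1, 0]
tpm = build_tpm(observed, 3)
print(generate_chain(tpm, start=0, length=20))
```

A generated sequence is then mapped back from state indices to (velocity, acceleration) pairs to obtain a synthetic velocity profile, which is where the percentile-limit validation would apply.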
10

Single and Twin-Heaps as Natural Data Structures for Percentile Point Simulation Algorithms

Hatzinger, Reinhold, Panny, Wolfgang January 1993 (has links) (PDF)
Sometimes percentile points cannot be determined analytically. In such cases one has to resort to Monte Carlo techniques. In order to provide reliable and accurate results it is usually necessary to generate rather large samples. Thus the proper organization of the relevant data is of crucial importance. In this paper we investigate the appropriateness of heap-based data structures for the percentile point estimation problem. Theoretical considerations and empirical results give evidence of the good performance of these structures regarding their time and space complexity. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
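A twin-heap arrangement of the kind the paper investigates can be sketched as two heaps split at the percentile point: a max-heap for the lower part of the sample and a min-heap for the upper part, rebalanced so the lower heap's root is the current estimate. The rebalancing rule and quantile convention below are assumptions for illustration, not the authors' exact data structure.

```python
import heapq
import math

def stream_percentile(data, q):
    """Track the q-quantile of a stream with twin heaps.

    `lower` is a max-heap (values negated) holding the smallest
    ceil(q * n) elements seen so far; `upper` is a min-heap holding the
    rest. The root of `lower` is the running percentile point estimate.
    """
    lower, upper = [], []
    estimates = []
    for x in data:
        if lower and x > -lower[0]:
            heapq.heappush(upper, x)
        else:
            heapq.heappush(lower, -x)
        # Rebalance so that `lower` holds exactly ceil(q * n) elements.
        target = math.ceil(q * (len(lower) + len(upper)))
        while len(lower) > target:
            heapq.heappush(upper, -heapq.heappop(lower))
        while len(lower) < target:
            heapq.heappush(lower, -heapq.heappop(upper))
        estimates.append(-lower[0])
    return estimates

# Running median of a small hypothetical sample.
print(stream_percentile([5, 1, 9, 3, 7, 2, 8, 4, 6, 10], q=0.5)[-1])
```

Each update costs O(log n), which is the appeal over re-sorting the sample in a large Monte Carlo run.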
