11

Novel Application Models and Efficient Algorithms for Offloading to Clouds

González Barrameda, José Andrés January 2017 (has links)
The application offloading problem in Mobile Cloud Computing aims at improving the mobile user experience by leveraging the resources of the cloud: the execution of the mobile application is offloaded to the cloud, saving energy at the mobile device or speeding up the execution of the application. We improve the accuracy and performance of application offloading solutions in three main directions. First, we propose a novel fine-grained application model that supports complex module dependencies such as sequential, conditional and parallel module executions. The model also allows for multiple offloading decisions that are tailored to the current application, network, or user context. As a result, the model is more precise in capturing the structure of the application and supports more complex offloading solutions. Second, we propose three cost models, namely average-based, statistics-based and interval-based cost models, defined for the proposed application model. The average-based approach models each module cost by its expected value, and the expected cost of the entire application is estimated by taking each of the three module dependencies into account. The novel statistics-based cost model employs cumulative distribution functions (CDFs) to represent the costs of the modules and of the mobile application, which is estimated from the costs and dependencies of the modules. This cost model opens the door to new statistics-based optimization functions and constraints, whereas the state of the art only supports optimizations based on the average running cost of the application. Furthermore, this cost model can be used to perform statistical analysis of the performance of the application in different scenarios, such as varying network data rates. The last cost model, the interval-based one, represents the module costs as intervals in order to address cost uncertainty while having lower requirements and computational complexity than the statistics-based model. The cost of the application is estimated as an expected maximum cost via a linear optimization function. Finally, we present offloading decision algorithms for each cost model. For the average-based model, we present a fast optimal dynamic programming algorithm. For the statistics-based model, we present another fast optimal dynamic programming algorithm for the scenario where the optimization function meets specific properties. Finally, for the interval-based cost model, we present a robust formulation that solves a linear number of linear optimization problems. Our evaluations verify the accuracy of the models and show higher cost savings for our solutions when compared to the state of the art.
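No code is given in the abstract; the sketch below, with invented module costs and class names, only illustrates how an average-based cost model can compose expected module costs under the three dependency types mentioned above and how a naive per-module offloading decision could be scored against it. The thesis's optimal dynamic programming algorithms, which also account for module dependencies and data transfer, are not reproduced here.

```python
# Hypothetical sketch of an average-based cost model for an offloading graph.
# Each module has an expected cost that depends on where it runs; composite
# nodes combine child costs according to their dependency type.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Module:
    device_cost: float  # expected cost (e.g., seconds or joules) on the device
    cloud_cost: float   # expected cost on the cloud, including transfer overhead

    def cost(self, offload: bool) -> float:
        return self.cloud_cost if offload else self.device_cost


def sequential(costs: List[float]) -> float:
    # Sequential dependency: expected costs add up.
    return sum(costs)


def conditional(branches: List[Tuple[float, float]]) -> float:
    # Conditional dependency: expectation over (probability, cost) branches.
    return sum(p * c for p, c in branches)


def parallel(costs: List[float]) -> float:
    # Parallel dependency: a common simplification is to take the slowest
    # branch's expected cost as the completion cost.
    return max(costs)


# Naive example: offload each module wherever its expected cost is lower,
# then score a purely sequential application with that decision vector.
mods = [Module(2.0, 0.8), Module(1.0, 1.5), Module(3.0, 1.2)]
decision = [m.cloud_cost < m.device_cost for m in mods]
app_cost = sequential([m.cost(d) for m, d in zip(mods, decision)])
print(decision, app_cost)
```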
12

Sinkhole Hazard Assessment in Minnesota Using a Decision Tree Model

Gao, Yongli, Alexander, E. Calvin 01 May 2008 (has links)
An understanding of what influences sinkhole formation and the ability to accurately predict sinkhole hazards are critical to environmental management efforts in the karst lands of southeastern Minnesota. Based on the distribution of distances to the nearest sinkhole, sinkhole density, bedrock geology and depth to bedrock in southeastern Minnesota and northwestern Iowa, a decision tree model has been developed to construct maps of sinkhole probability in Minnesota. The decision tree model was converted into cartographic models and implemented in ArcGIS to create a preliminary sinkhole probability map in Goodhue, Wabasha, Olmsted, Fillmore, and Mower Counties. This model quantifies bedrock geology, depth to bedrock, sinkhole density, and neighborhood effects in southeastern Minnesota but excludes potential controlling factors such as structural control, topographic setting, human activities and land use. The sinkhole probability map needs to be verified and updated as more sinkholes are mapped and more information about sinkhole formation is obtained.
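As a rough illustration of how such a rule-based probability map could be computed cell by cell, here is a hand-written decision tree over attributes like those the model quantifies; the thresholds and class labels are invented for this sketch and are not the rules derived in the study.

```python
# Hypothetical sketch: a hand-specified decision tree assigning a sinkhole
# hazard class to a map cell from GIS-layer attributes. All thresholds and
# classes below are invented for illustration only.
def sinkhole_class(karst_bedrock: bool, depth_to_bedrock_m: float,
                   sinkholes_per_km2: float, dist_to_nearest_sinkhole_m: float) -> str:
    if not karst_bedrock:
        return "low"
    if depth_to_bedrock_m > 30:          # thick cover mutes surface expression
        return "low"
    if sinkholes_per_km2 > 5 or dist_to_nearest_sinkhole_m < 200:
        return "high"                    # strong neighborhood effect
    return "moderate"


print(sinkhole_class(True, 12.0, 7.3, 150.0))   # -> "high"
```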
13

Influencing Elections with Statistics: Targeting Voters with Logistic Regression Trees

Rusch, Thomas, Lee, Ilro, Hornik, Kurt, Jank, Wolfgang, Zeileis, Achim 03 1900 (has links) (PDF)
Political campaigning has become a multi-million dollar business. A substantial proportion of a campaign's budget is spent on voter mobilization, i.e., on identifying and influencing as many people as possible to vote. Based on data, campaigns use statistical tools to provide a basis for deciding whom to target. While the data available are usually rich, campaigns have traditionally relied on a rather limited selection of information, often including only previous voting behavior and one or two demographic variables. Statistical procedures currently in use include logistic regression and standard classification tree methods such as CHAID, but there is growing interest in employing modern data mining approaches. Along the lines of this development, we propose a modern framework for voter targeting called LORET (for logistic regression trees) that employs trees (with possibly just a single root node) containing logistic regressions (with possibly just an intercept) in every leaf. LORET models thus contain logistic regression and classification trees as special cases and allow for a synthesis of both techniques under one umbrella. We explore various flavors of LORET models that (a) compare the effect of using the full set of available variables against using only limited information and (b) investigate their varying effects either as regressors in the logistic model components or as partitioning variables in the tree components. To assess model performance and illustrate targeting, we apply LORET to a data set of 19,634 eligible voters from the 2004 US presidential election. We find that augmenting the standard set of variables (such as age and voting history) with additional predictor variables (such as the household composition in terms of party affiliation and each individual's rank in the household) clearly improves predictive accuracy. We also find that LORET models based on tree induction outperform the unpartitioned competitors. Additionally, LORET models using both partitioning variables and regressors in the resulting nodes can improve the efficiency of allocating campaign resources while still providing intelligible models. / Series: Research Report Series / Department of Statistics and Mathematics
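The authors work in a model-based recursive partitioning framework; the sketch below is only a conceptual illustration of the LORET idea (a tree whose leaves hold logistic regressions), using one hand-chosen split on synthetic data rather than their implementation or their variable set.

```python
# Conceptual LORET-style sketch: a single hand-chosen split on a partitioning
# variable, with a logistic regression fitted in each resulting leaf.
# Variable names and the data-generating process are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(18, 90, n)
voted_before = rng.integers(0, 2, n)
household_partisan = rng.integers(0, 2, n)          # candidate partitioning variable
logit = -2.0 + 0.03 * age + 1.2 * voted_before + 0.8 * household_partisan
turnout = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, voted_before])
leaves = {}
for flag in (0, 1):                                  # split on the partitioning variable
    mask = household_partisan == flag
    leaves[flag] = LogisticRegression().fit(X[mask], turnout[mask])

# Score a new voter by routing through the "tree" to the right leaf model.
p = leaves[1].predict_proba([[45.0, 1]])[0, 1]
print(f"Predicted turnout probability: {p:.2f}")
```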
14

Counting periodic orbits in the wind-tree model / Counting problem on wind-tree models

Pardo, Angel 22 June 2017 (has links)
The Gauss circle problem consists in counting the number of integer points of bounded length in the plane, or, in other words, counting the number of closed geodesics of bounded length on a flat two-dimensional torus. Many counting problems in dynamical systems have been inspired by this problem. For 30 years, experts have tried to understand the asymptotic behavior of closed geodesics on translation surfaces. H. Masur proved that this number has quadratic growth rate. Computing the quadratic asymptotics (the Siegel–Veech constant) is a very active research area today. The object of study in this thesis is the wind-tree model, a non-compact billiard model. In the classical setting, we place identical rectangular obstacles in the plane at each integer point and play billiard on the complement. We show that the number of periodic trajectories has quadratic asymptotic growth rate and we compute the Siegel–Veech constant for the classical wind-tree model as well as for the Delecroix–Zorich generalization. We prove that, for the classical wind-tree model, this constant does not depend on the dimensions of the obstacles (a non-varying phenomenon, analogous to results of Chen–Möller). Finally, when the underlying compact translation surface is a Veech surface, we give a quantitative version of the counting.
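For orientation, the counting statements referred to above can be written as follows; these are standard formulations, not taken from the thesis, and normalizations of the Siegel–Veech constant vary in the literature.

```latex
% Gauss circle problem: lattice points of norm at most L, equivalently closed
% geodesics of length at most L on the flat torus R^2/Z^2.
\[
  \#\{\, v \in \mathbb{Z}^2 : \|v\| \le L \,\} \;=\; \pi L^2 + O(L).
\]
% For a translation surface X, Masur's bounds give quadratic upper and lower
% estimates, and (in the generic Eskin--Masur setting) the count of periodic
% trajectory families grows quadratically with the Siegel--Veech constant c(X)
% as the quadratic coefficient:
\[
  N(X, L) \;\sim\; c(X)\, \pi L^2 \qquad (L \to \infty).
\]
```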
15

APPLICATION OF PEER TO PEER TECHNOLOGY IN VEHICULAR COMMUNICATION.

Shameerpet, Tanuja 01 June 2021 (has links)
The primary goal of this thesis is to implement peer-to-peer technology in vehicular communication. The emerging concept of vehicular communication, including road-side infrastructure, is a promising way to avoid accidents and provide live traffic data, and there is high demand for technologies that ensure low-latency communication. Modern vehicles are equipped with computing, communication, storage and sensing capabilities that ease the transmission of data. To achieve deterministic bounds on data delivery, the ability to be set up anywhere quickly, and efficient data queries, we implement a structured peer-to-peer overlay model in a cluster of vehicles. The vehicles in a cluster exchange information with the cluster head, which acts as the leader of the cluster and fetches data from the road-side unit and from other cluster heads. On top of the structured peer-to-peer overlay we implement a pyramid tree model: a pyramid tree is a group of clusters organized in a structured format with data links between the clusters, and the core concept behind it is clustering nodes based on interest.
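As a rough illustration of the interest-based clustering idea, the sketch below shows a cluster head that serves queries for its own interest group and forwards others to peer cluster heads; the classes and methods are invented for this example and are not the thesis's implementation.

```python
# Hypothetical sketch of interest-based clustering with a cluster head that
# answers data queries locally or forwards them to peer cluster heads.
class ClusterHead:
    def __init__(self, interest: str):
        self.interest = interest
        self.store = {}                 # data items held by this cluster
        self.peers = {}                 # interest -> other cluster heads

    def put(self, key: str, value: str):
        self.store[key] = value

    def query(self, interest: str, key: str):
        if interest == self.interest:   # served from the local cluster
            return self.store.get(key)
        peer = self.peers.get(interest) # otherwise forward to the right head
        return peer.query(interest, key) if peer else None


traffic = ClusterHead("traffic")
weather = ClusterHead("weather")
traffic.peers["weather"] = weather
weather.put("I-57", "light rain")
print(traffic.query("weather", "I-57"))   # forwarded lookup -> "light rain"
```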
16

Studying the Supply of and Demand for Dementia Care with the Ethnographic Decision Tree and Fuzzy Ontology Methods / Investigation of the long-term institutional care requirements of patients with dementia and their families by qualitative and quantitative analysis

張清為, Chang, Chingwei Unknown Date (has links)
Over the past few decades the number of people with dementia in Taiwan has grown steadily, and most of them receive some form of care, including medication, nursing care, rehabilitation and occupational therapy, yet research on the effectiveness of and demand for this care remains scarce. This study therefore employs both qualitative and quantitative methods to explore the decision process and main needs of families caring for dementia patients, and to examine how treatment outcomes at the case hospital relate to the patients' condition on admission, with the aim of identifying, from the supply-and-demand perspective of dementia care in central Taiwan, where the quality of services can be improved. In the qualitative stage, the ethnographic decision tree method is used to understand the criteria families apply when deciding whether to place a dementia patient in institutional care. The criteria elicited from in-depth interviews show that the degree of dementia is the most important consideration, followed by ethical constraints, the care burden, the norm of filial obligation, whether the patient needs other professional medical care, and the institutional environment. Linking these criteria according to their priority and causal order yields an ethnographic decision tree, which, when validated against a further fifty families, predicts their decisions with 92% accuracy. In the quantitative stage, the study examines the effectiveness of occupational therapy for residents admitted in different conditions. Residents admitted in better condition take a more positive attitude toward the occupational therapy arranged by the institution and therefore have a greater chance of improving or maintaining their condition, whereas residents who participate passively benefit far less and are more likely to deteriorate, largely because those in poorer condition tend to be aggressive and to resist care, which makes caregiving considerably harder. The study therefore recommends that families adopt occupational therapy and interact with the patient as early as possible, whether care is provided at home or in an institution, to improve the chance of controlling the patient's condition.
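Purely as an illustration of how an elicited decision tree can be turned into an executable rule set, the sketch below orders the criteria as in the abstract (degree of dementia first); the branch logic and thresholds are invented here and are not the tree built in the study.

```python
# Hypothetical rule-set sketch in the spirit of an ethnographic decision tree
# for the institutionalization decision. Criteria ordering follows the
# abstract; the branches themselves are invented for illustration.
def institutional_care_decision(dementia_severity: str, needs_professional_care: bool,
                                care_burden_high: bool, filial_norm_strong: bool,
                                trusts_institution: bool) -> str:
    if dementia_severity == "mild":
        return "home care"
    if needs_professional_care:                  # e.g., tube feeding, wandering risk
        return "institutional care" if trusts_institution else "hospital-based care"
    if care_burden_high and not filial_norm_strong:
        return "institutional care"
    return "home care"


print(institutional_care_decision("severe", True, True, True, True))
```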
17

Methods for Improving the Efficiency of Tree Models in Pricing Reset Options

王志原, Wang, Chih-Yuan Unknown Date (has links)
Traditionally, option pricing models fall into two broad categories: closed-form solutions and numerical methods. Closed-form solutions are fast to compute but quite inflexible; for example, they cannot produce American-style prices. Numerical methods, by contrast, are very flexible but more time-consuming, as with barrier options. To address this problem, this thesis proposes a method that takes the tree models of numerical analysis as its basis and uses closed-form solutions to retain the needed flexibility while increasing computational speed; we call this approach the decomposition-and-combination method. Because tree models used to price reset options must deal with the nonlinearity error caused by the reset barrier, the thesis takes the binomial tree model of Boyle and Lau (1994) and the trinomial tree model of Ritchken (1995) as its main frameworks and studies reset options with the decomposition-and-combination method. The results show that the method not only increases computational speed but also, for options under certain conditions, markedly reduces the volatility of the pricing estimates. The thesis derives the decomposition-and-combination method mainly for reset options whose single reset level is observed at a single time point and for those whose single reset level is monitored over the entire horizon. Building on these two basic reset options, we extend the same concept to other, more complex reset options. Moreover, in choosing how to combine the pieces, existing closed-form solutions for reset options can be fully exploited and applied to more complex reset conditions, which increases the flexibility of the closed-form solutions and reduces the pricing time of the tree model, achieving two goals at once. The thesis also provides a complete comparison of the pricing speed of the decomposition-and-combination method. Finally, it analyzes the computation of hedge ratios under the method and two phenomena peculiar to hedging reset options, the delta jump and the negative delta, examining their causes, possible effects, and how to respond to them.
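The sketch below conveys the flavor of the decomposition idea for one simple contract: a call whose strike drops to a lower level if the underlying is at or below a barrier on a single reset date. A CRR binomial tree is used up to the reset date and the Black–Scholes formula for the remaining life; the contract terms and parameters are assumptions made for this example, not the thesis's specifications.

```python
# Illustrative decompose-and-combine pricing of a single-reset-date call:
# the strike resets from K1 to K2 if S <= H on the reset date. CRR tree to the
# reset date, Black-Scholes closed form for the remaining life.
from math import exp, sqrt, log, erf


def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def bs_call(S, K, r, sigma, tau):
    if tau <= 0:
        return max(S - K, 0.0)
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)


def reset_call(S0, K1, K2, H, r, sigma, t_reset, T, steps=200):
    dt = t_reset / steps
    u = exp(sigma * sqrt(dt)); d = 1.0 / u
    p = (exp(r * dt) - d) / (u - d)
    # Values at the reset date: closed-form price with the post-reset strike.
    values = []
    for j in range(steps + 1):
        S = S0 * (u ** j) * (d ** (steps - j))
        K = K2 if S <= H else K1
        values.append(bs_call(S, K, r, sigma, T - t_reset))
    # Backward induction through the tree from the reset date to today.
    disc = exp(-r * dt)
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]


print(reset_call(S0=100, K1=100, K2=90, H=90, r=0.03, sigma=0.25,
                 t_reset=0.5, T=1.0))
```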
18

Applications of ROA to Value a Dotcom Start-up and a Professional Basketball player

Karungi, Doreen, Huang, Wenqing January 2012 (has links)
This paper attempts to evaluate a dotcom start-up company and a young professional basketball player using Real Option Analysis from the investors' point of view; that is, we stand in the financiers' shoes and assess whether both cases are worth investing in. We believe that, to the best of our current knowledge, real option analysis is a more appropriate valuation method than traditional approaches such as Net Present Value (NPV), and we try to support this with both qualitative and quantitative arguments. The authors concentrate more on applying quantitative methods than on giving detailed definitions of real options. The Binomial Pricing Model and Monte Carlo simulation, implemented with MS Excel and MATLAB, were used in the evaluation. The paper consists of two case studies, each tackled differently but summarized together, and it concludes with a table exhibiting when real options are valuable and with the belief that game theory is essential in ROA. / Matlab Codes and Simulation & Binary Tree Model (Excel)
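The thesis's MATLAB code and Excel models are not reproduced here; the following generic Monte Carlo sketch of a one-year deferral option on a project following geometric Brownian motion, with invented parameters, merely illustrates why an option-based value can exceed the static NPV.

```python
# Generic Monte Carlo sketch of a deferral real option: the right (not the
# obligation) to pay an investment cost I in one year for a project whose
# value V follows geometric Brownian motion. All parameters are invented.
import numpy as np

rng = np.random.default_rng(42)
V0, I, r, sigma, T, n_paths = 10.0, 9.0, 0.04, 0.35, 1.0, 200_000

Z = rng.standard_normal(n_paths)
VT = V0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
option_value = np.exp(-r * T) * np.maximum(VT - I, 0.0).mean()

print(f"Static NPV today:    {V0 - I:.2f}")
print(f"Option to wait (MC): {option_value:.2f}")   # exceeds the static NPV
```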
19

Pricing and Analysis of Structured Notes: A Daily-Accrual Dual-Range Note and an Exchange-Rate-Linked Note

李映瑾 Unknown Date (has links)
In today's global market for financial derivatives, interest rate derivatives account for more than half of total trading volume, followed by exchange rate derivatives. Some structured notes on the market are linked to several underlyings and some have complex payoff structures that are not easy for retail investors to understand, and investors are easily attracted by the high coupons or maximum returns stated in the product terms while overlooking clauses that work against them. This thesis prices and analyzes, as case studies, interest-rate-linked and exchange-rate-linked products that have already been issued in the financial markets, in the hope of helping retail investors better understand the payoff patterns and potential investment risks of structured notes; it also analyzes product profitability and issuance strategy from the issuer's point of view. The two products priced are the daily-accrual dual-range callable note issued by Lloyds TSB Bank Plc. and the knock-out exchange-rate-linked note issued by the Farmers Bank of China, priced with the LIBOR Market Model (Brace, Gatarek and Musiela, 1997, also known as the BGM model) and the trinomial tree model (Ritchken, 1995), respectively. Finally, based on the pricing results, the issuer's issuance strategy and the investment traps investors should watch for are analyzed.
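A full LIBOR Market Model calibration is well beyond a short example; the sketch below only illustrates the basic building block assumed in such pricing, namely that each forward LIBOR is lognormal under its own forward measure, by pricing a single caplet by Monte Carlo and checking it against Black's formula. All market inputs are invented.

```python
# Building block of the LIBOR Market Model (BGM): under the T+tau forward
# measure the forward LIBOR L(T) is lognormal and driftless, so a caplet can
# be priced by simulation and checked against Black's formula.
import numpy as np
from math import log, sqrt, exp, erf

N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))

L0, K, sigma, T, tau, P0 = 0.03, 0.025, 0.20, 1.0, 0.25, 0.97  # P0 = P(0, T+tau)

# Black's caplet formula.
d1 = (log(L0 / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
black = P0 * tau * (L0 * N(d1) - K * N(d2))

# Monte Carlo under the forward measure: driftless lognormal forward rate.
rng = np.random.default_rng(7)
Z = rng.standard_normal(500_000)
LT = L0 * np.exp(-0.5 * sigma ** 2 * T + sigma * sqrt(T) * Z)
mc = P0 * tau * np.maximum(LT - K, 0.0).mean()

print(f"Black: {black:.6f}   MC: {mc:.6f}")
```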
20

An Efficient Library for Working with Finite Tree Automata / An Efficient Finite Tree Automata Library

Lengál, Ondřej January 2010 (has links)
Numerous computer systems use dynamic control and data structures of unbounded size. These data structures often have the character of trees, or they can be encoded as trees with some additional pointers. This is exploited by several currently intensively studied techniques of formal verification that represent an infinite number of states using a finite tree automaton. However, there is currently no tree automata library implementation that provides efficient and flexible support for such methods, and the aim of this Master's Thesis is to provide such a library. The thesis first describes the theoretical background of finite tree automata and regular tree languages. It then surveys current implementations of tree automata libraries and studies various verification techniques, outlining requirements for the library. A representation of a finite tree automaton and algorithms that perform standard language operations on this representation are proposed in the next part, which is followed by a description of the library implementation. Through a series of experiments it is shown that the library can compete with other available tree automata libraries, in certain areas being significantly superior to them.
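The library itself is not reproduced here; the following minimal sketch only illustrates the core object, a bottom-up finite tree automaton whose transitions map a symbol and a tuple of child states to a state, together with a membership test.

```python
# Minimal illustration (not the thesis's library) of a bottom-up finite tree
# automaton: a tree is accepted if its root evaluates to a final state.
from typing import Dict, Tuple


class TreeAutomaton:
    def __init__(self, transitions: Dict[Tuple[str, Tuple[str, ...]], str],
                 final_states: set):
        self.transitions = transitions
        self.final_states = final_states

    def run(self, tree):
        # A tree is (symbol, [children]); leaves have an empty child list.
        symbol, children = tree
        child_states = tuple(self.run(c) for c in children)
        return self.transitions[(symbol, child_states)]

    def accepts(self, tree) -> bool:
        try:
            return self.run(tree) in self.final_states
        except KeyError:          # no transition defined: reject
            return False


# Example: accept Boolean terms over and/not/true/false that evaluate to true.
delta = {
    ("true", ()): "q1", ("false", ()): "q0",
    ("not", ("q0",)): "q1", ("not", ("q1",)): "q0",
    ("and", ("q1", "q1")): "q1", ("and", ("q0", "q0")): "q0",
    ("and", ("q0", "q1")): "q0", ("and", ("q1", "q0")): "q0",
}
A = TreeAutomaton(delta, final_states={"q1"})
t = ("and", [("true", []), ("not", [("false", [])])])
print(A.accepts(t))   # True
```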
