321 |
Weighted Average Based Clock Synchronization Protocols For Wireless Sensor Networks. Swain, Amulya Ratna 04 1900 (has links) (PDF)
Wireless Sensor Networks (WSNs) consist of a large number of resource-constrained sensor nodes equipped with various sensing devices which can monitor events in the real world. Various applications, such as environmental monitoring, target tracking, and forest fire detection, require clock synchronization among the sensor nodes with a certain accuracy. However, a major constraint in the design of clock synchronization protocols for WSNs is that sensor nodes have limited energy and computing resources. The clock synchronization process in a WSN is carried out at each sensor node either synchronously, i.e., periodically during the same real-time interval, which we call the synchronization phase, or asynchronously, i.e., independently, without regard to what other nodes are doing for clock synchronization. A disadvantage of asynchronous clock synchronization protocols is that they require the sensor nodes to remain awake all the time. Therefore, they cannot be integrated with any sleep-wakeup scheduling scheme of sensor nodes, which is a major technique for reducing energy consumption in WSNs. On the other hand, synchronous clock synchronization protocols can be easily integrated with a synchronous sleep-wakeup scheduling scheme of sensor nodes and, at the same time, can provide support for achieving sleep-wakeup scheduling. Essentially, there are two ways to synchronize the clocks of a WSN, viz. internal clock synchronization and external clock synchronization. The existing approaches to internal clock synchronization in WSNs are mostly hop-by-hop in nature, which is difficult to maintain. There are also many application scenarios where external clock synchronization is the only option to synchronize the clocks of a WSN. Besides, it is also desirable that the internal clock synchronization protocol used be tolerant to message loss and node failures.
Moreover, when the external source or reference node fails, the external clock synchronization protocol should revert to an internal clock synchronization protocol, with or without using a reference node. Towards this goal, we first propose three fully distributed synchronous clock synchronization protocols for WSNs, called the Energy Efficient and Fault-tolerant Clock Synchronization (EFCS) protocol, the Weighted Average Based Internal Clock Synchronization (WICS) protocol, and the Weighted Average Based External Clock Synchronization (WECS) protocol, all making use of a peer-to-peer approach. These three protocols are dynamically interchangeable depending upon the availability of an external source or reference nodes. In order to ensure consistency of the synchronization error in the long run, neighboring nodes need to be synchronized with each other at about the same real time, which requires that the synchronization phases of neighboring nodes always overlap with each other. To realize this objective, we propose a novel pullback technique, which ensures that the synchronization phases of neighboring nodes always overlap. In order to further improve the synchronization accuracy of the EFCS, WICS, and WECS protocols, we propose a generic technique which can be applied to any of them; the improved protocols are referred to as IEFCS, IWICS, and IWECS, respectively. We then give an argument to show that the synchronization error in the improved protocols is much less than that in the original protocols. We have analyzed these protocols for bounds on synchronization error, and shown that the synchronization error is always upper bounded. We have evaluated the performance of these protocols through simulation and experimental studies, and shown that the synchronization accuracy achieved is of the order of a few clock ticks even in very large networks.
The proposed protocols make use of the estimated drift rate to derive logical time from the physical clock value at any instant, and at the same time ensure the monotonicity of logical time even though the physical clock is updated at the end of each synchronization phase. We have also proposed an energy-aware routing protocol with sleep scheduling, which can be integrated with the proposed clock synchronization protocols to further reduce energy consumption in WSNs.
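The weighted-average correction at the heart of the WICS/WECS family can be illustrated with a minimal sketch. This is a toy under stated assumptions, not the thesis's actual algorithm: the function name, the equal weighting of neighbors, and the single `own_weight` parameter are all invented for illustration.

```python
# Hypothetical sketch of weighted-average clock correction; names and
# weights are illustrative, not the protocol's actual specification.

def weighted_average_offset(own_clock, neighbor_clocks, own_weight=0.5):
    """Correct a node's clock toward the weighted average of its
    neighbors' readings gathered during one synchronization phase."""
    if not neighbor_clocks:
        return own_clock  # no readings received: keep the local clock
    neighbor_avg = sum(neighbor_clocks) / len(neighbor_clocks)
    return own_weight * own_clock + (1.0 - own_weight) * neighbor_avg

# One synchronization phase: node at 1002 ticks, neighbors report 998 and 1000.
corrected = weighted_average_offset(1002, [998, 1000])
print(corrected)  # 1000.5
```

Because every node moves toward its neighborhood average rather than toward a single master, no hop-by-hop spanning structure needs to be maintained, which matches the peer-to-peer flavor of the proposed protocols.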
|
322 |
Weighted Unranked Tree Automata over Tree Valuation Monoids. Götze, Doreen 14 March 2017 (has links)
Quantitative aspects of systems, like the maximal consumption of resources, can be modeled by weighted automata. The usual approach is to weight transitions with elements of a semiring and to define the behavior of the weighted automaton by multiplying the transition weights along a run. In this thesis, we define and investigate a new class of weighted automata over unranked trees which are defined over valuation monoids. By turning to valuation monoids we use a more general cost model: the weight of a run is now determined by a global valuation function. Besides the binary cost functions implementable via semirings, valuation functions enable us to cope with average and discounting. We first investigate the supports of weighted unranked tree automata over valuation monoids, i.e., the languages of all words which are evaluated to a non-zero value. We will furthermore consider the support of several other weighted automata models over different structures, like words and ranked trees. Next we prove a Nivat-like theorem for the new weighted unranked tree automata. Moreover, we give a logical characterization for them. We show that weighted unranked tree automata are expressively equivalent to a weighted MSO logic for unranked trees. This solves an open problem posed by Droste and Vogler. Finally, we present a Kleene-type result for weighted ranked tree automata over valuation monoids.
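The shift from semirings to valuation monoids can be illustrated with a toy sketch (not from the thesis): both functions below are hypothetical, and the average valuation stands in for the kind of global valuation function that a binary semiring multiplication cannot express.

```python
# Illustrative contrast between semiring-style multiplication of
# transition weights along a run and a global valuation function
# (here: average), the cost model enabled by valuation monoids.
from functools import reduce

def semiring_weight(transition_weights):
    """Usual approach: multiply the weights along the run."""
    return reduce(lambda a, b: a * b, transition_weights, 1.0)

def average_valuation(transition_weights):
    """Global valuation: average of all weights on the run; this looks
    at the whole run at once rather than combining weights pairwise."""
    return sum(transition_weights) / len(transition_weights)

run = [2.0, 4.0, 6.0]
print(semiring_weight(run))    # 48.0
print(average_valuation(run))  # 4.0
```

The average is the standard example of a valuation function that is not induced by any associative binary operation on the weights, which is why the more general monoid structure is needed.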
|
323 |
Expressing Context-Free Tree Languages by Regular Tree Grammars. Teichmann, Markus 12 April 2017 (has links)
In this thesis, three methods are investigated to express context-free tree languages by regular tree grammars. The first method is a characterization. We show restrictions to context-free tree grammars such that, for each restricted context-free tree grammar, a regular tree grammar can be constructed that induces the same tree language. The other two methods are approximations. An arbitrary context-free tree language can be approximated by a regular tree grammar with a restricted pushdown storage. Furthermore, we approximate weighted context-free tree languages, induced by weighted linear nondeleting context-free tree grammars, by showing how to approximate optimal weights for weighted regular tree grammars.
|
324 |
GIS som beslutsunderlag : utvärdering av multikriterieanalyser utifrån AHP och WLC / GIS as a tool for decision making : evaluation of multi-criteria analysis with AHP and WLC. Wirsén, William, Caesar, Axel January 2022 (has links)
Multifunctional aspects and multidimensional problems open up the use of geographic information systems (GIS) in decision-making. One area of use in GIS is site selection, which identifies areas where several suitability criteria coincide. This type of suitability analysis is strongly influenced by the author and by the chosen analysis method, which in turn affects decision-making. This study aims to investigate how the choices the author makes affect the method and what consequences they have for the decision-making process. Based on previous research, two analysis methods have been identified: Analytic Hierarchy Process (AHP) and Weighted Linear Combination (WLC). In order to compare the two methods, the study is supported by an example study from the Linköping urban area. The study shows that the author influences the resulting criteria and weighting, though to different degrees in the two analysis methods. AHP and WLC affect the study in different ways, and both have strengths and weaknesses that make them suitable for site selection studies.
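As a rough illustration of the two methods (all criteria, judgments, and numbers below are invented), AHP derives criterion weights from a pairwise comparison matrix, here via the common geometric-mean approximation rather than the exact principal eigenvector, and WLC then combines standardized criterion values with those weights:

```python
# Hedged sketch of AHP weight derivation and WLC scoring for one
# candidate site; the judgments and suitability values are invented.
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison
    matrix using the geometric-mean method."""
    n = len(pairwise)
    geo_means = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

def wlc_score(weights, criteria):
    """Weighted Linear Combination: weighted sum of standardized
    criterion values for one candidate location."""
    return sum(w * c for w, c in zip(weights, criteria))

# Criterion A judged 3 times as important as criterion B.
weights = ahp_weights([[1.0, 3.0], [1.0 / 3.0, 1.0]])
score = wlc_score(weights, [0.8, 0.4])  # standardized suitability values
print([round(w, 2) for w in weights])  # [0.75, 0.25]
```

The sketch makes the study's point concrete: the author's subjective pairwise judgments in AHP, or directly assigned weights in WLC, propagate straight into the suitability score and hence into the decision.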
|
325 |
Investigating certain share buyback transactions by companies listed on the JSE for the period 2000 to 2005. De Goede, Andre 12 1900 (has links)
Thesis (MBA (Business Management))--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: Prior to 30 June 1999, companies in South Africa were not allowed to buy back their own shares. Amendments to the Companies Act, namely the Companies Amendment Act (Act 37 of 1999), radically changed the philosophy around capital maintenance. The result of this amendment is that a company is allowed to buy back its own shares and, under certain circumstances, finance the buy-back. A sample of 140 companies listed on the Johannesburg Securities Exchange for the period 2000 to 2005 was selected, and the buy-back of shares by the relevant company, its subsidiaries and trusts was analysed for that period. For the purposes of this empirical study, the financial sector, as well as the alternative exchange, which focuses on good-quality small and medium-sized high-growth companies, was excluded during sample selection. The outcome of this exploratory study is the identification of whether or not a share buyback took place, in Tables 4.1 and 4.2; a summary of the number of shares bought back, in Table 4.3; and, in Table 4.4, a summary of the number of shares bought back expressed as a percentage of the weighted average number of shares in issue.
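The denominator used when expressing buybacks as a percentage, the weighted average number of shares in issue, can be sketched as follows; the share counts and periods below are invented for illustration.

```python
# Illustrative computation (figures invented) of the weighted average
# number of shares in issue, weighting each share count by the fraction
# of the year it was outstanding.

def weighted_average_shares(periods):
    """periods: list of (shares_outstanding, fraction_of_year)."""
    return sum(shares * fraction for shares, fraction in periods)

# 1,000,000 shares for 9 months, then 100,000 bought back for the last 3.
wavg = weighted_average_shares([(1_000_000, 9 / 12), (900_000, 3 / 12)])
buyback_pct = 100_000 / wavg * 100
print(wavg)                    # 975000.0
print(round(buyback_pct, 2))   # 10.26
```

This is the standard time-weighting convention for per-share denominators; the study's actual figures per company are in its Tables 4.3 and 4.4.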
|
326 |
An unstructured numerical method for computational aeroacoustics. Portas, Lance O. January 2009 (has links)
The successful application of Computational Aeroacoustics (CAA) requires high accuracy numerical schemes with good dissipation and dispersion characteristics. Unstructured meshes have a greater geometrical flexibility than existing high order structured mesh methods. This work investigates the suitability of unstructured mesh techniques by computing a two-dimensional linearised Euler problem with various discretisation schemes and different mesh types. The goal of the present work is the development of an unstructured numerical method with the high accuracy, low dissipation and low dispersion required to be an effective tool in the study of aeroacoustics. The suitability of the unstructured method is investigated using aeroacoustic test cases taken from CAA Benchmark Workshop proceedings. Comparisons are made with exact solutions and with a high order structured method based upon a standard central differencing spatial discretisation. For the unstructured method, a vertex-based data structure is employed. A median-dual control volume is used for the finite volume approximation, with the option of using either a Green-Gauss or a Least Squares gradient approximation technique. The temporal discretisation used for both the structured and unstructured numerical methods is an explicit Runge-Kutta method with local timestepping. For the unstructured method, the gradient approximation technique is used to compute gradients at each vertex; these are then used to reconstruct the fluxes at the control volume faces. The unstructured mesh types used to evaluate the numerical method include semi-structured and purely unstructured triangular meshes. The semi-structured meshes were created directly from the associated structured mesh, while the purely unstructured meshes were created using a commercial paving algorithm. The Least Squares method has the potential to allow high order reconstruction.
Results show that a Weighted Least Squares gradient approximation gives better solutions than unweighted Least Squares or Green-Gauss gradient computation. The solutions are of acceptable accuracy on these problems, with the absolute error of the unstructured method approaching that of a high order structured solution on an equivalent mesh for specific aeroacoustic scenarios.
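A minimal sketch of the vertex gradient reconstruction discussed above, assuming a 2D setting and inverse-distance-squared weighting for the weighted variant (one common choice, not necessarily the one used in this work):

```python
# Hedged sketch of (optionally weighted) least-squares gradient
# reconstruction at a mesh vertex in 2D, solving the 2x2 normal
# equations assembled from the vertex's edge-connected neighbors.

def least_squares_gradient(x0, u0, neighbors, weighted=True):
    """x0: vertex position (x, y); u0: value there.
    neighbors: list of ((x, y), u) for edge-connected vertices."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), u in neighbors:
        dx, dy, du = x - x0[0], y - x0[1], u - u0
        w = 1.0 / (dx * dx + dy * dy) if weighted else 1.0
        a11 += w * dx * dx
        a12 += w * dx * dy
        a22 += w * dy * dy
        b1 += w * dx * du
        b2 += w * dy * du
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# A linear field u = 2x + 3y is recovered exactly, as it must be for
# any consistent gradient reconstruction.
grad = least_squares_gradient((0.0, 0.0), 0.0,
                              [((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0),
                               ((1.0, 1.0), 5.0)])
print(grad)  # (2.0, 3.0)
```

The reconstructed gradient at each vertex is what feeds the flux reconstruction at the median-dual control volume faces described above.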
|
327 |
Selection of Warehouse Employees Using a Weighted Application Blank. Parker, Larry L. 05 1900 (has links)
The purpose of this study was to develop a weighted application blank (WAB) which would aid in the selection of employees who would be more likely to remain on the job for 3 months or more. The 31 biographical items for long- and short-tenure employees were compared to see which items differentiated. A somewhat improvised approach which compared trends of both groups (weighting group N = 169, holdout group N = 89) produced five items which were significant at the .05 level and resulted in a 70% improvement over the previous method of selection. The long-tenure employee could be described as a slightly older (20 years or more) married person who lives close to the job, is less educated (8th grade or less), and can list three references.
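A hypothetical sketch of how a weighted application blank is scored; the item names and weights below are invented, since the study's actual five significant items and their derived weights are not reproduced here.

```python
# Invented illustration of WAB scoring: each biographical item that
# differentiated long- from short-tenure employees carries a weight,
# and an applicant's score is the sum of matched-item weights.

ITEM_WEIGHTS = {
    "age_20_or_more": 2,
    "married": 1,
    "lives_close_to_job": 2,
    "education_8th_or_less": 1,
    "three_references": 1,
}

def wab_score(applicant):
    """Sum the weights of the items the applicant matches."""
    return sum(w for item, w in ITEM_WEIGHTS.items() if applicant.get(item))

applicant = {"age_20_or_more": True, "married": True, "three_references": True}
print(wab_score(applicant))  # 4
```

In practice a cutoff score would then be chosen on the weighting group and checked against the holdout group, which is the cross-validation step both this study and the following one rely on.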
|
328 |
Revalidation of a Weighted Application Blank to Predict Tenure. Michalski, Louis Richard 12 1900 (has links)
This study re-examined a previously validated application blank in use for 1 year to screen applicants for the position of equipment operator with a company involved in hydrocarbon recovery. Subjects were 409 male equipment operators ranging in age from 19 to 38 years. Minorities accounted for 12% of the group, while 88% were white. Subjects were randomly divided into an even group, N = 201, and an odd group, N = 208. Multiple R's of .39 were obtained for the most significant 10 variables in each group, but these shrank considerably during cross-validation. Only 3 variables were common to both groups since the unique error variances for each group resulted in different arrangements of variables. It was concluded that the items should be re-examined for relevancy and job relatedness.
|
329 |
Ocenenie Volkswagen Group / Valuation of Volkswagen Group. Šusták, Tomáš January 2010
The objective of the thesis is the determination of the intrinsic value of Volkswagen Group's equity. The basic starting point of the analysis is the segregation of the consolidated financial statements into a financial and a production division, which are valued separately. The production division is valued using both enterprise discounted cash flow and discounted economic profit analysis, while equity cash flow valuation is used to derive the value of the financial division. The results of the income-approach valuation are then compared with a market multiples valuation.
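The enterprise discounted cash flow approach mentioned above can be sketched in a few lines; the cash flows, WACC, and terminal growth rate are invented, and the real model's division split and economic-profit cross-check are omitted.

```python
# Simplified DCF sketch (figures invented): discount an explicit
# forecast period of free cash flows, then add a Gordon-growth
# terminal value discounted back to today.

def dcf_value(free_cash_flows, wacc, terminal_growth):
    """Explicit-period FCFs plus a Gordon-growth terminal value."""
    pv = sum(fcf / (1 + wacc) ** t
             for t, fcf in enumerate(free_cash_flows, start=1))
    last = free_cash_flows[-1]
    terminal = last * (1 + terminal_growth) / (wacc - terminal_growth)
    pv += terminal / (1 + wacc) ** len(free_cash_flows)
    return pv

value = dcf_value([100.0, 110.0, 120.0], wacc=0.08, terminal_growth=0.02)
print(round(value, 1))  # ≈ 1901.6
```

The discounted economic profit approach is algebraically equivalent to this enterprise DCF under consistent assumptions, which is why the thesis can use both as a cross-check.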
|
330 |
Bayesian approaches for the analysis of sequential parallel comparison design in clinical trials. Yao, Baiyun 07 November 2018 (links)
Placebo response, an apparent improvement in the clinical condition of patients randomly assigned to the placebo treatment, is a major issue in clinical trials on psychiatric and pain disorders. Properly addressing the placebo response is critical to an accurate assessment of the efficacy of a therapeutic agent. The Sequential Parallel Comparison Design (SPCD) is one approach for addressing the placebo response. An SPCD trial runs in two stages, re-randomizing placebo patients in the second stage, and the analysis pools the data from both stages. In this thesis, we propose a Bayesian approach for analyzing SPCD data. Our primary proposed model overcomes some of the limitations of existing methods and offers greater flexibility in performing the analysis. We find that our model is on par with existing methods and, under certain conditions, better at preserving the type I error and minimizing mean square error. We further develop our model in two ways. First, through prior specification we provide three approaches to model the relationship between the treatment effects from the two stages, as opposed to arbitrarily specifying the relationship as was done in previous studies. Under proper specification these approaches have greater statistical power than the initial analysis and give accurate estimates of this relationship. Second, we revise the model to treat the placebo response as a continuous rather than a binary characteristic. The binary classification, which groups patients into "placebo responders" or "placebo non-responders", can lead to misclassification, which can adversely impact the estimate of the treatment effect. As an alternative, we propose to view the placebo response in each patient as an unknown continuous characteristic. This characteristic is estimated and then used to measure the contribution (or the weight) of each patient to the treatment effect.
Building upon this idea, we propose two different models which weight the contribution of placebo patients to the estimated second stage treatment effect. We show that this method is more robust against the potential misclassification of responders than previous methods. We demonstrate our methodology using data from the ADAPT-A SPCD trial.
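For context, the classical (non-Bayesian) SPCD analysis pools the two stage effects with a fixed, arbitrarily chosen weight w, which is precisely the arbitrary specification the thesis's prior-based approaches replace; this sketch uses invented effect sizes and w = 0.6.

```python
# Hedged sketch of the fixed-weight SPCD pooling that the Bayesian
# models in this thesis improve upon; effect sizes are invented.

def spcd_pooled_effect(stage1_effect, stage2_effect, w=0.6):
    """Pool the stage-1 effect (all randomized patients) with the
    stage-2 effect (placebo non-responders re-randomized) using a
    prespecified weight w."""
    return w * stage1_effect + (1 - w) * stage2_effect

print(round(spcd_pooled_effect(0.30, 0.45), 2))  # 0.36
```

The stage-2 effect is typically larger because placebo responders have been removed; the thesis's continuous-weighting extension replaces the hard responder/non-responder split behind `stage2_effect` with per-patient weights.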
|