341 |
Distributed Algorithms for Improving Wireless Sensor Network Lifetime with Adjustable Sensing Range
Aung, Aung 03 May 2007 (has links)
Wireless sensor networks consist of a large number of sensors deployed randomly, in an ad-hoc manner, over the area or targets to be monitored. Because of the sensors' weight and size limitations, energy conservation is the most critical issue. Energy can be saved in a wireless sensor network by scheduling a subset of sensor nodes to be active while allowing the others to go into a low-power sleep mode, or by adjusting the transmission or sensing range of the nodes. In this thesis, we focus on improving the lifetime of wireless sensor networks using both smart scheduling and adjustable sensing ranges. We first survey existing work in the literature and define the sensor network lifetime problem with range assignment. We then propose two completely localized and distributed scheduling algorithms with adjustable sensing range, which enhance distributed algorithms proposed in the literature for a fixed sensing range. Simulation results show an improvement of almost 20 percent in network lifetime compared with previous approaches.
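The thesis's algorithms are localized and distributed; as a rough, centralized illustration of the underlying idea (scheduling plus adjustable ranges), the sketch below activates, each round, the cheapest sensing range at which some live sensor can watch each target. All names and the quadratic energy model are illustrative assumptions, not the thesis's method.

```python
import math

def covers(sensor, target, r):
    """Binary disk sensing model: a sensor detects a target within range r."""
    return math.dist(sensor, target) <= r

def greedy_round(sensors, targets, ranges, battery):
    """Pick active sensors for one round: each target is assigned the live
    sensor that can reach it with the smallest (cheapest) sensing range.
    `ranges` must be sorted ascending. Returns {sensor index: chosen range},
    or None if some target cannot be covered (network lifetime is over)."""
    active = {}
    for t in targets:
        best = None
        for i, s in enumerate(sensors):
            if battery[i] <= 0:
                continue
            for r in ranges:
                if covers(s, t, r):
                    cost = r * r          # assume energy grows with range^2
                    if best is None or cost < best[1]:
                        best = (i, cost, r)
                    break                 # smallest covering range found
        if best is None:
            return None
        i, _, r = best
        active[i] = max(active.get(i, 0), r)
    return active
```

A lifetime simulation would call `greedy_round` repeatedly, draining each active sensor's battery by its chosen range's cost, until the function returns None.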
|
342 |
Variable Shaped Detector: A Negative Selection Algorithm
Ataser, Zafer 01 February 2013 (has links) (PDF)
Artificial Immune Systems (AIS) are a class of computational intelligence methods developed from the principles and processes of the biological immune system. AIS methods fall into four main categories according to the immune principles and processes that inspire them: clonal selection, negative selection, immune networks, and danger theory. The negative selection algorithm (NSA) is one of the major AIS models. NSA is a supervised learning algorithm that imitates the maturation of T cells in the thymus: detectors mimic the cells, and the maturation process is simulated to generate them. NSA then classifies given data as either normal (self) or anomalous (non-self). In this classification task, NSA methods can make two kinds of error: self data may be classified as anomalous, and non-self data may be classified as normal.
In this thesis, a novel negative selection method, the variable-shaped detector (V-shaped detector), is proposed to increase classification accuracy, in other words to decrease classification errors. The V-shaped detector introduces new approaches to defining self and representing detectors. It combines the Local Outlier Factor (LOF) with the k-th nearest neighbor (k-NN) to determine a different radius for each self sample, making it possible to model the self space from the self samples and their radii. In addition, cubic B-splines are used to generate variable-shaped detectors; this spline representation is meaningful when edge points are used, so an Edge Detection (ED) algorithm is developed to find the edge points of the given self samples. The V-shaped detector was tested on different data sets and compared with a well-known one-class classification method, the SVM, and with a similar, popular negative selection method, the NSA with variable-sized detectors termed V-detector. The experiments show that the proposed method produces reasonable and comparable results.
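A minimal sketch of the per-sample radius idea: here each self sample's radius is simply its distance to the k-th nearest neighbour. The thesis additionally combines this with LOF, which is omitted here, and all function names are illustrative.

```python
import math

def self_radii(samples, k=2):
    """Assign each self sample a radius equal to the distance to its
    k-th nearest neighbour (a simplified stand-in for the thesis's
    LOF + k-NN combination)."""
    radii = []
    for i, p in enumerate(samples):
        dists = sorted(math.dist(p, q) for j, q in enumerate(samples) if j != i)
        radii.append(dists[k - 1])
    return radii

def is_self(x, samples, radii):
    """Classify x as self if it falls inside any self sample's ball;
    otherwise it is non-self (anomalous)."""
    return any(math.dist(x, p) <= r for p, r in zip(samples, radii))
```

Detectors would then be generated in the complement of these self balls; the V-shaped detector shapes that complement region with cubic B-splines rather than fixed geometric primitives.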
|
343 |
Analysis of Healthcare Coverage Using Data Mining Techniques
Tekieh, Mohammad Hossein 12 January 2012 (has links)
This study explores healthcare coverage disparities through a quantitative analysis of a large dataset from the United States. One objective is to build supervised models, including decision trees and neural networks, to study the factors that influence healthcare coverage. We also discover groups of people with health coverage problems and inconsistencies by employing unsupervised modeling, including the K-Means clustering algorithm.
Our modeling is based on a dataset retrieved from the Medical Expenditure Panel Survey, with 98,175 records in the original data. After pre-processing, including binning, cleaning, handling missing values, and balancing, the dataset contains 26,932 records and 23 variables. We build 50 classification models in IBM SPSS Modeler using decision trees and neural networks; their accuracy varies between 76% and 81%, and they can predict the healthcare coverage of a new sample from its significant attributes. We show that the decision tree models provide higher accuracy than the neural network models. Having analyzed the results extensively, we find the most influential factors in healthcare coverage to be access to care, age, family poverty level, and race/ethnicity.
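The thesis builds its trees in IBM SPSS Modeler; as a tool-independent illustration of what a decision-tree learner does at each node, here is a sketch of finding the single best split by Gini impurity (a decision stump, binary labels only; all names are hypothetical):

```python
def gini(labels):
    """Gini impurity of a set of 0/1 labels: 2p(1-p)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_stump(X, y):
    """Find the (feature, threshold) split minimising weighted Gini
    impurity -- the core step a decision-tree learner repeats
    recursively. Returns (score, feature index, threshold)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best
```

A full tree learner applies `best_stump` recursively to the left and right partitions until the leaves are pure or a depth limit is reached.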
|
344 |
Den "perfekta" stormen : En studie av nyhetstäckningen kring en naturkatastrof i västerländska tidningar / The "Perfect" Storm : A study in news coverage of a natural disaster in western newspapers
Bengts, Elina, Johansson, Christian January 2013 (has links)
This essay examines how coverage of a North American natural disaster looks and differs between the online editions of newspapers from two Western countries, by way of six research questions: In what way have Swedish and American news media reported on the disaster? What main themes do the papers write about? Whose voices appear in the articles? What events do the papers focus on? How did the newspapers report on the countries affected by the disaster before it reached the USA? In what ways do the two countries' papers differ from each other? The countries chosen are the USA and Sweden, and the papers chosen from each country are the Washington Post, the Huffington Post, Svenska Dagbladet and Dagens Nyheter. The disaster examined is Hurricane Sandy, which between 22 and 31 October 2012 travelled from Jamaica to the American east coast, causing billions of dollars in damage along its path. Two methods are used to analyse the material: quantitative content analysis and critical discourse analysis. The theories applied through these methods are Entman's framing theory and news evaluation following, among others, Henk Prakke; agenda-setting theory and theories of journalistic practice are also employed to explain how newspapers shape the content they produce.
The quantitative material consists of 194 articles from the four papers, and the qualitative material of one article of particular interest per paper. The results are presented separately for each method. The study shows that in some respects the two countries' papers covered the event very similarly, and in others very differently. Coverage was similar in that both countries' papers focused mainly on the USA in the majority of articles, and in the frequency with which articles were alarming or neutral in tone. Differences appeared in the coverage of affected countries other than the US: the Swedish newspapers almost exclusively covered the situation in Haiti, largely as part of their general coverage of the country's many problems, while the American newspapers wrote about Jamaica and Cuba, but only in passing and as part of the general reporting on the hurricane's path. The essay concludes that the differences found are due to the perceived cultural distance between the countries studied and the countries affected.
|
345 |
Interval Estimation for the Correlation Coefficient
Jung, Aekyung 11 August 2011 (has links)
The correlation coefficient (CC) is a standard measure of the linear association between two random variables and plays a significant role in much quantitative research. Under a bivariate normal distribution there are many interval estimators for the CC, such as methods based on Fisher's z-transformation or maximum likelihood estimation. When the underlying bivariate distribution is unknown, however, the construction of confidence intervals for the CC is still not well developed. In this thesis, we discuss various interval estimation methods for the CC. We propose a generalized confidence interval and three empirical-likelihood-based non-parametric intervals for the CC. We also conduct extensive simulation studies to compare the new intervals with existing intervals in terms of coverage probability and interval length. Finally, two real examples demonstrate the application of the proposed methods.
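For the bivariate-normal case mentioned above, the classical interval via Fisher's z-transformation can be sketched as follows. This is the standard textbook construction, not one of the thesis's proposed intervals; the function name is illustrative.

```python
import math

def fisher_ci(r, n, z_crit=1.959964):
    """Approximate 95% confidence interval for a correlation coefficient r
    from a sample of size n, via Fisher's z-transformation.
    Assumes bivariate normality; z = atanh(r) is approximately normal
    with standard error 1/sqrt(n - 3)."""
    z = math.atanh(r)                 # 0.5 * ln((1+r)/(1-r))
    se = 1.0 / math.sqrt(n - 3)
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale
```

Note the interval is asymmetric around r (here, for r = 0.5, n = 28, roughly (0.16, 0.74)), which is exactly why the transformation is used instead of a naive symmetric interval.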
|
346 |
Web based system for radio planning in WRAP
Shakya, Nabin Raj January 2009 (has links)
Radio planning is the design of a network's structure and elements under various design requirements. With the increasing shortage of frequencies, radio planning has become more and more complex, so computerized planning tools are needed to maintain accuracy and optimization. This thesis focuses on developing a simplified and economical web-based solution for a radio planning tool using WRAP, the software for spectrum management and radio planning developed by WRAP International AB, Linköping, Sweden. WRAP International has developed APIs to make WRAP calculations available to remote users; the web-based WRAP communicates with the WRAP API server, exchanging API messages in order to perform calculations. Current web technologies are employed to make the system user-friendly and interactive. The development process in this thesis ran from requirements gathering, to identify the components that needed analysis for a suitable web-based conversion, through the design and implementation of a software solution. The final part is an evaluation, to discover whether the requirements were fully implemented and to measure the performance of the new system. It was found that the web-based WRAP is as fast as the desktop version for smaller coverage areas, whereas for larger coverage areas it is slower than the desktop version.
|
347 |
Mediabevakning och aktiemarknadens reaktion på ny information / Media coverage and the stock market's reaction to new information
Serifler, Levent, Lundborg, Rasmus January 2012 (has links)
The relationship between public media and capital markets has been studied for a long time. Some argue that mass media is an important factor in understanding financial markets, because the media can give rise to irrational reactions; on this basis a critique has emerged holding that the mass media do not convey valuable information. Other studies, however, conclude that easier access to new information leads to more efficient price adjustment in financial markets. The purpose of this study is to explain how the historical media coverage preceding a stock recommendation affects the stock market as a whole, by studying the abnormal returns that precede and follow the recommendation in question. The study is based on recommendations for Swedish shares listed on the Large-, Mid- and Small-Cap lists of Nasdaq OMX Stockholm over a period of two years. Media coverage is measured from articles in major Swedish publications, and the recommendations are obtained from major analyst firms. Previous research has examined the stock market's reaction to the publication of new recommendations, but this study takes an additional variable into account, the number of historical publications in the media, in an attempt to deepen understanding in the field.
The results suggest that the degree of media coverage preceding the publication of a stock recommendation does not affect the market's reception of the new information. The clearest difference between recommendations preceded by high and by low media coverage was that those preceded by low coverage showed a marginally larger abnormal return on the publication date. Since this observation could not be confirmed statistically, no conclusions can be drawn from it, and the study thus cannot show that more easily accessible information leads to a better-informed market.
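As a rough illustration of the event-study machinery behind "abnormal return" (not necessarily the authors' exact model), a market-model sketch: the abnormal return on each day is the stock's return minus what the market model predicts, and these are summed over the event window.

```python
def abnormal_returns(stock, market, alpha=0.0, beta=1.0):
    """Market-model abnormal return: AR_t = R_t - (alpha + beta * Rm_t).
    With the default alpha=0, beta=1 this reduces to the simple
    market-adjusted model; alpha and beta would normally be estimated
    over a pre-event window."""
    return [r - (alpha + beta * rm) for r, rm in zip(stock, market)]

def car(ars):
    """Cumulative abnormal return over the event window."""
    return sum(ars)
```

Comparing the CAR of recommendations preceded by high versus low media coverage, as the study does, then amounts to comparing these sums across the two groups and testing the difference statistically.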
|
348 |
Urspårad nyhetsrapportering : En kvalitativ textanalys av en lokal och en rikstäckande tidnings rapportering av tågolyckan utanför Kimstad / Derailed news reporting : A qualitative text analysis of one local and one national newspaper's coverage of the train accident outside Kimstad
Johansson, Camilla, Henrysson, Emma January 2011 (has links)
The purpose of this study is to examine how the media coverage of the train accident outside Kimstad, on the night of September 12th 2010, was framed in one local newspaper, Norrköpings Tidningar (NT), and one national newspaper, Expressen, and how the local paper's coverage differed from the national paper's. The material consists of 12 articles within a fixed time frame, from September 13th to September 15th: six articles from each newspaper. The method is a qualitative text analysis grounded in media logic, news value and framing theory. In the analysis we present a synthesis of both newspapers' coverage of the accident, relate it to the theory of news value, and end with a concluding comment on the results. The conclusion shows that the coverage differed in many ways. Expressen uses a more dramatic framing and more sensational language, describing the event as worse than it was, while NT offers more informative and objective reporting that almost plays down the event but conveys information relevant to locals. Expressen mostly turns to private citizens and the passengers on the train for quotes, while NT turns to and quotes officials. Expressen also used personalisation more than NT, whereas NT simplified its stories more than Expressen. We could also see that the event met the definitions of a crisis given by Ulmer, Sellnow & Seeger, and that the character of the crisis was reflected in its coverage: both were short and sudden. Earlier research has shown that national newspapers tend to dramatise news more than local newspapers do.
|
349 |
Control and Optimization of Track Coverage in Underwater Sensor Networks
Baumgartner, Kelli A. Crews 14 December 2007 (links)
Sensor network coverage refers to the quality of service provided by a sensor network surveilling a region of interest. So far, coverage problems have been formulated to address area coverage or to maintain line-of-sight visibility in the presence of obstacles (i.e., art-gallery problems). Although very useful in many sensor applications, none of the existing formulations address coverage as it pertains to target tracking by means of multiple sensors, nor do they provide a closed-form function that can be applied to the problem of allocating sensors so as to maximize target detection while minimizing false alarms. This dissertation presents a new coverage formulation addressing the quality of service of sensor networks that cooperatively detect targets traversing a region of interest, one that is readily applicable to current sensor network coverage formulations. The track-coverage problem consists of finding the positions of <em>n</em> sensors such that the number of tracks detected by at least <em>k</em> sensors is optimized. This dissertation studies the geometric properties of the network, addressing a deterministic track-coverage formulation and binary sensor models. It is shown that the tracks detected by a network of heterogeneous omnidirectional sensors are the geometric transversals of non-translates families of disks. A novel methodology based on cones and convex analysis is presented for representing and measuring sets of transversals as closed-form functions of the sensors' positions and ranges. As a result, the problem of optimally deploying a sensor network with the aforementioned objectives can be formulated as an optimization problem subject to mission dynamics and constraints. The sensor placement problem, in which the sensors are placed such that track coverage is maximized for a fixed sensor network, is formulated as a nonlinear program and solved using sequential quadratic programming.
The sensor deployment problem, involving a dynamic sensor network installed on non-maneuverable sonobuoys deployed in the ocean, is formulated as an optimization problem subject to inverse dynamics. Both a finite measure of the cumulative coverage provided by the network over a fixed period of time and the ocean-induced current velocity field are accounted for in optimizing the dynamic network configuration. It is shown that a state-space representation of the motions of the individual sensors subject to the current vector field can be derived from sonobuoy oceanic-drift models and obtained from CODAR measurements. The sensor model also accounts for the position-dependent acoustic ranges of the sensors caused by heterogeneous environmental conditions, such as ocean bathymetry, surface temporal variability, and bottom properties. A solution is presented for the initial deployment of the non-maneuverable sonobuoys subject to the ocean current, such that sufficient track coverage is maintained over the entire mission. As sensor networks are subject to random disturbances due to unforeseen heterogeneous environmental conditions propagated along the sensors' trajectories, the optimal initial-position solution is evaluated for robustness through Monte Carlo simulations. Finally, the problem of controlling a network of maneuverable underwater vehicles, each equipped with an onboard acoustic sensor, is formulated using optimal control theory. A new optimal control problem is thus presented that integrates sensor objectives, such as track coverage, with cooperative path planning of a mobile sensor network subject to time-varying environmental dynamics. / Dissertation
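A toy version of the binary-sensor k-coverage test for a single straight-line track: a track (modelled here as an infinite line) is detected by an omnidirectional sensor when the line passes within the sensor's range. This is a pointwise check, a drastic simplification of the dissertation's closed-form transversal formulation; names are illustrative.

```python
def line_disk_hit(p, d, center, radius):
    """True if the infinite line through point p with unit direction d
    passes within `radius` of `center` (binary omnidirectional model).
    The perpendicular distance is |cross(center - p, d)|."""
    vx, vy = center[0] - p[0], center[1] - p[1]
    dist = abs(vx * d[1] - vy * d[0])
    return dist <= radius

def k_covered(track, sensors, k):
    """A track is k-covered if at least k sensors can detect it.
    `track` is (point, unit direction); `sensors` is a list of
    (center, radius) pairs."""
    p, d = track
    hits = sum(line_disk_hit(p, d, c, r) for c, r in sensors)
    return hits >= k
```

Sampling candidate tracks and counting those that are k-covered gives a Monte Carlo estimate of track coverage; the dissertation's contribution is precisely to replace such sampling with a closed-form measure of the set of k-covered tracks, suitable for nonlinear programming.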
|
350 |
Biodiversity Study around Fishponds of Sihcao, Tainan City
Tang, Chen-hsien 03 September 2010 (links)
Abstract
The fishpond area of Sihcao, Tainan City, approximately 488 hectares, is part of the coastal wetlands of Taiwan. Fishponds are found almost everywhere in these wetlands, readily forming a fragmented ecosystem. Previous studies of small mammal populations in the coastal wetlands of Taiwan are scarce. Disturbances in the study area include stray dogs, working farmers, tilling of the fishponds, and typhoons, all of which would likely reduce the abundance of small mammals, while resources such as shelter, vegetation coverage, insects and seeds would increase it. Seasonal changes in these environmental factors may in turn affect the population dynamics of small mammals. I monitored the monthly population fluctuations of small mammals and invertebrates in the fishpond habitats during 2008. Six species of small mammals and 106 species of invertebrates were found. Small mammal populations in the Hairy Beggar Ticks region were compared with those in the non-Hairy Beggar Ticks region: more small mammals were found in the Hairy Beggar Ticks region during the sampling period, and its vegetation coverage was higher. There was a positive correlation between the evenness of small mammals and temperature. No difference in invertebrate biodiversity was found between the regions over the year, but a significant correlation existed between sunshine duration and invertebrate richness, possibly due to the change in photoperiod. There was no significant correlation between the richness of small mammals and that of invertebrates.
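Since the study correlates small-mammal evenness with temperature, a short sketch of how evenness is typically computed may help; the abstract does not specify its exact index, so Pielou's J (Shannon diversity normalised by the log of richness) is an assumption here.

```python
import math

def shannon(counts):
    """Shannon diversity H' from per-species abundance counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def pielou_evenness(counts):
    """Pielou's evenness J = H' / ln(S), where S is species richness
    (number of species with nonzero abundance). J = 1 means all species
    are equally abundant; J is undefined for S < 2, returned as 0 here."""
    s = sum(1 for c in counts if c > 0)
    if s < 2:
        return 0.0
    return shannon(counts) / math.log(s)
```

Monthly evenness values computed this way could then be correlated against monthly mean temperature with an ordinary Pearson or Spearman test, as the study reports.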
|