431 |
A Study of the Incremental Information Content of Working Capital from Operations and Cash Flows
王毅偉 Unknown Date (has links)
The purpose of this study is to examine whether working capital and cash flows from operations, both components of earnings, have incremental information content relative to earnings. The motivation stems from the regulation announced in Taiwan on December 28, 1989 (ROC year 78), which replaced the statement of changes in financial position with the statement of cash flows; the principal measure of the latter is cash flow from operating activities, while that of the former is working capital from operations. Was the replacement made because cash flow from operating activities has greater information content than working capital from operations? Or because working capital from operations is so highly correlated with earnings that it provides no incremental information, so that cash flow information less correlated with earnings is needed? Or because the statement of changes in financial position can no longer provide incremental information relative to the income statement and the statement of cash flows? Since prior research has produced mixed results, this thesis attempts an in-depth investigation.
This study adopts an event-study methodology with cross-sectional analysis and proceeds in four stages. The first stage compares the relative information content of working capital from operations and cash flow from operating activities. The second stage adds after-tax earnings and tests whether the independent variables of the first stage have incremental information content relative to after-tax earnings; the test is a likelihood ratio test of whether the estimated coefficient of a particular variable in the full regression model differs from zero. The third stage partitions cumulative abnormal returns (CAR) into high, medium, and low groups and re-examines the results of the first two stages to see whether the same conclusions hold across the three groups. The final stage tests the incremental information content of material non-operating items and land revaluation increments.
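The likelihood ratio comparison described above can be sketched as follows; this is an illustrative reconstruction under assumed variable names (car, earnings, cfo) and simulated data, using statsmodels, not the thesis's original estimation code.

```python
# Hedged sketch: likelihood ratio test of whether cash flow from operating
# activities (CFO) adds incremental information beyond earnings in explaining
# cumulative abnormal returns (CAR). Variable names and data are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 162                                                  # number of sample firms (as in the study)
earnings = rng.normal(size=n)
cfo = 0.6 * earnings + rng.normal(scale=0.8, size=n)     # correlated with earnings
car = 0.5 * earnings + 0.2 * cfo + rng.normal(size=n)

# Restricted model: CAR on earnings only.
restricted = sm.OLS(car, sm.add_constant(earnings)).fit()
# Full model: CAR on earnings and CFO.
full = sm.OLS(car, sm.add_constant(np.column_stack([earnings, cfo]))).fit()

# LR statistic: twice the difference in log-likelihoods, chi-square with 1 df.
lr_stat = 2 * (full.llf - restricted.llf)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")  # small p => CFO has incremental content
```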
The sample consists of 162 listed manufacturing firms in Taiwan, whose financial data from 1991 to 1995 (ROC years 80 to 84) were collected for the empirical analysis. The conclusions are as follows:
1. Relative information content of working capital and cash flows from operations. Based on the annual and pooled results, only in 1991 are working capital from operations and cash flow from operating activities significantly associated with stock returns; the other years (and the pooled sample) show no significant association, so no generalized year-by-year conclusion can be drawn. However, when the pooled cumulative abnormal returns are partitioned into three groups by magnitude, working capital from operations in the high-return group is more strongly associated with stock returns than cash flow from operating activities, a result related to the fact that the former incorporates information on current accruals; no conclusion can be drawn for the other two groups.
2. Incremental information content of working capital and cash flows from operations. Based on the annual results, working capital from operations in 1991 has incremental information content beyond earnings and cash flow from operating activities. The grouped results show that, in the high-return group, both earnings and working capital from operations have incremental information content.
3. Information content of material non-operating items and land revaluation increments. The grouped results show that, in the high-return group, the explanatory power of the regression model comes mainly from earnings, while gains on disposal of investments, gains on disposal of fixed assets, and land revaluation increments have no incremental information content; in the medium- and low-return groups and in the pooled sample, however, gains on disposal of fixed assets do have incremental information content. / This study examines the incremental information content of working capital and cash flows from operations, motivated mainly by the accounting regulation of December 28, 1989 that mandated the statement of cash flows. The empirical analysis covers 162 sample firms over 1991 to 1995. The findings can be summarized as follows.
1. Except for the 1991 sample, no significant association between cash flows from operations (or working capital from operations) and stock returns can be detected. When all samples are pooled, working capital from operations is significantly associated with stock returns in the high-return group.
2. In 1991, working capital from operations has significant incremental information content. In addition, when all samples are pooled, working capital from operations has significant incremental information content in the high-return group.
3. In the high-return group, the explanatory power for stock returns is derived mainly from earnings. In addition, gains on disposal of fixed assets have significant incremental information content in the medium- and low-return groups.
|
432 |
An Approach to Incremental Learning Good Classification Tests
Naidenova, Xenia; Parkhomenko, Vladimir 28 May 2013 (has links) (PDF)
An algorithm for incremental mining of implicative logical rules is proposed. The algorithm is based on constructing good classification tests. The incremental approach to constructing these rules reveals the interdependence between two fundamental components of human thinking: pattern recognition and knowledge acquisition.
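As a rough illustration of the underlying idea only, and not the authors' algorithm, the sketch below uses a simplified definition: a classification test for the positive class is an attribute set contained in at least one positive example and in no negative example, the maximal such sets are kept as "good" tests, and they are updated incrementally when a new positive example arrives. The toy data and helper names are assumptions.

```python
# Toy sketch of "good classification tests" under a simplified definition:
# a test is an attribute set included in some positive objects and no negative
# object; we keep maximal tests and update them incrementally as new positive
# objects arrive. Illustrative only.
from itertools import combinations

def is_test(attrs, negatives):
    """A non-empty attribute set is a valid test if no negative object contains it."""
    return bool(attrs) and not any(attrs <= neg for neg in negatives)

def good_tests(positives, negatives):
    """Intersections over subsets of positive objects that remain valid tests."""
    tests = set()
    for r in range(1, len(positives) + 1):
        for group in combinations(positives, r):
            inter = frozenset.intersection(*group)
            if is_test(inter, negatives):
                tests.add(inter)
    # keep only maximal attribute sets (not strictly included in another test)
    return {t for t in tests if not any(t < u for u in tests)}

def add_positive(tests, new_obj, negatives):
    """Incremental step: extend existing tests with the new positive object."""
    candidates = set(tests) | {new_obj} | {t & new_obj for t in tests}
    candidates = {c for c in candidates if is_test(c, negatives)}
    return {t for t in candidates if not any(t < u for u in candidates)}

positives = [frozenset("abc"), frozenset("abd")]
negatives = [frozenset("bd"), frozenset("cd")]
tests = good_tests(positives, negatives)
tests = add_positive(tests, frozenset("abe"), negatives)   # incremental update
print(tests)
```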
|
433 |
Framework-Specific Modeling Languages
Antkiewicz, Michal 12 September 2008 (has links)
Framework-specific modeling languages (FSMLs) help developers build applications
based on object-oriented frameworks. FSMLs formalize abstractions and rules of the framework's application programming interfaces (APIs) and can express models of how applications use an API. Such models, referred to as framework-specific models, aid developers in understanding, creating, and evolving application code.
We present the concept of FSMLs, propose a way of specifying their abstract syntax and semantics, and show how such language specifications can be interpreted to provide reverse, forward, and round-trip engineering of framework-specific models and framework-based application code.
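As a loose, hypothetical illustration of the reverse-engineering direction (not the FSML tooling described in the thesis), a framework-specific model can be pictured as a set of named features, each recovered from application code by a check; the feature names, the applet-like framework, and the regular expressions below are invented.

```python
# Hedged sketch: a "framework-specific model" as a set of boolean features,
# each reverse-engineered from application code by a simple textual check.
# The feature names, the applet-like framework API, and the regexes are
# illustrative assumptions, not the thesis's actual FSML definitions.
import re

APPLICATION_CODE = """
public class Clock extends Applet {
    public void init() { timer = new Timer(); }
    public void paint(Graphics g) { g.drawString(time(), 10, 20); }
}
"""

# Abstract syntax of a toy FSML: feature name -> pattern that must occur in code.
FSML_FEATURES = {
    "extendsApplet":  r"class\s+\w+\s+extends\s+Applet",
    "overridesInit":  r"void\s+init\s*\(",
    "overridesPaint": r"void\s+paint\s*\(",
    "overridesStop":  r"void\s+stop\s*\(",
}

def reverse_engineer(code: str) -> dict:
    """Build a framework-specific model: which API-usage features the code exhibits."""
    return {name: bool(re.search(pat, code)) for name, pat in FSML_FEATURES.items()}

model = reverse_engineer(APPLICATION_CODE)
print(model)   # e.g. {'extendsApplet': True, ..., 'overridesStop': False}
```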
We present a method for engineering FSMLs that was extracted post-mortem from the
experience of building four such languages. The method is driven by the use cases that the FSMLs under development are to support. We present the use cases, the overall process, and its instantiation for each language. The presentation focuses on providing concrete examples for engineering steps, outcomes, and challenges. It also provides strategies for making engineering decisions.
The presented method and experience are aimed at framework developers and tool
builders who are interested in engineering new FSMLs. Furthermore, the method represents a necessary step in the maturation of the FSML concept. Finally, the presented work offers a concrete example of software language engineering.
FSML engineering formalizes existing domain knowledge that is not present in language form and makes a strong case for the benefits of such formalization.
We evaluated the method and the exemplar languages. The evaluation is both empirical and analytical. The empirical evaluation involved measuring the precision and recall of reverse engineering and verifying the correctness of forward and round-trip
engineering. The analytical evaluation focused on the generality of the method.
|
435 |
Diversity Multiplexing Tradeoff and Capacity Results in Relayed Wireless Networks
Oveis Gharan, Shahab January 2010 (has links)
This dissertation studies the diversity-multiplexing tradeoff and the capacity of wireless multiple-relay networks.
In part 1, we study the setup of the parallel Multi-Input Multi-Output (MIMO)
relay network. An amplify-and-forward relaying scheme, Incremental Cooperative
Beamforming, is introduced and shown to achieve the capacity of the network in
the asymptotic regime where either the number of relays or the power of each relay goes to infinity.
In part 2, we study the general setup of the multi-antenna multi-hop multiple-relay network. We propose a new scheme, which we call random sequential (RS), based on amplify-and-forward relaying. Furthermore, we derive the diversity-multiplexing tradeoff (DMT) of the proposed RS scheme for general single-antenna multiple-relay networks. It is shown that for single-antenna two-hop multiple-access multiple-relay (K > 1) networks (without a direct link between the source(s) and the destination), the proposed RS scheme achieves the optimum DMT.
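For reference, the DMT here is in the standard sense of Zheng and Tse (the notation below is the usual one, not taken from the thesis): a scheme operating at signal-to-noise ratio rho with rate R(rho) and error probability P_e(rho) achieves multiplexing gain r and diversity gain d when

```latex
% Standard diversity-multiplexing tradeoff definition (Zheng-Tse), given as
% background; it is not specific to the RS scheme of this thesis.
\[
  r \;=\; \lim_{\rho \to \infty} \frac{R(\rho)}{\log \rho},
  \qquad
  d \;=\; -\lim_{\rho \to \infty} \frac{\log P_e(\rho)}{\log \rho}.
\]
% For a point-to-point MIMO channel with m transmit and n receive antennas,
% the optimal tradeoff d^*(r) is the piecewise-linear curve connecting the
% points (k, (m-k)(n-k)) for integer k = 0, 1, ..., min(m, n).
```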
In part 3, we characterize the maximum achievable diversity gain of the multi-antenna multi-hop relay network and we show that the proposed RS scheme achieves the maximum diversity gain.
In part 4, the RS scheme is utilized to investigate the DMT of general multi-antenna multiple-relay networks. First, we study the case of a multi-antenna full-duplex single-relay two-hop network, for which we show that RS achieves the optimum DMT. Applying this result, we derive a new achievable DMT for the case of a multi-antenna half-duplex parallel relay network. Interestingly, it turns out that the DMT of the RS scheme is optimum for the case of two parallel non-interfering multi-antenna half-duplex relays. Furthermore, we show that random unitary matrix multiplication also improves the DMT of the Non-Orthogonal AF relaying scheme in the case of a multi-antenna single-relay channel. Finally, we study the general case of multi-antenna full-duplex relay networks and derive a new lower bound on its DMT using the RS scheme.
Finally, in part 5, we study the multiplexing gain of general multi-antenna multiple-relay networks. We prove that traditional amplify-and-forward relaying achieves the maximum multiplexing gain of the network. Furthermore, we show that the maximum multiplexing gain of the network is equal to the minimum vertex cut-set of the underlying graph of the network, which can be computed in time polynomial in the number of network nodes. Finally, the argument is extended to the multicast and multi-access scenarios.
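The final claim, that the maximum multiplexing gain equals the minimum vertex cut separating source from destination, can be checked on small examples; the sketch below does so with networkx on an invented two-hop relay topology, assuming one antenna per node.

```python
# Hedged sketch: multiplexing-gain bound as the minimum vertex cut separating
# source from destination in the network graph. Topology is illustrative.
import networkx as nx

G = nx.DiGraph()
# Source "s" reaches destination "d" only through two relay layers.
G.add_edges_from([
    ("s", "r1"), ("s", "r2"),
    ("r1", "r3"), ("r2", "r3"), ("r2", "r4"),
    ("r3", "d"), ("r4", "d"),
])

# Minimum set of intermediate nodes whose removal disconnects s from d;
# its size is the claimed maximum multiplexing gain (assuming one antenna
# per node).
cut = nx.minimum_node_cut(G, "s", "d")
print(cut, len(cut))   # a cut of size 2 for this example graph
```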
|
436 |
Housing Diversity and Consolidation in Low-Income Colonias: Patterns of House Form and Household Arrangements in Colonias of the US-Mexico Border
Reimers-Arias, Carlos Alberto 2009 August 1900 (has links)
Colonias are low-income settlements on the US-Mexico border characterized by poor infrastructure, minimal services, and active housing construction with a high self-help and self-management component. Housing in colonias is very diverse, with house forms that include temporary and permanent structures, campers, trailers or manufactured houses, and conventional homes. Most of this housing does not meet construction standards and codes and is considered substandard. Colonias households are also diverse in nature and composition, including single households, nuclear and extended families, as well as multiple households sharing lots. This wide variety of house forms and households in colonias fits poorly with the nuclear-household, single-family detached housing idealized by conventional low-income housing projects, programs, and policies. As a result, colonias benefit only marginally from the resources
available to them and continue to depend mostly on the individual efforts of their
inhabitants. This research identifies the housing diversity and the process of housing
consolidation in colonias of the US-Mexico border by looking at the patterns of house
form and household arrangements in colonias of South Texas. Ten colonias located to
the east of the city of Laredo along Highway 359 in Webb County, Texas were selected
based on their characteristics, data availability and accessibility. Data collected included
periodic aerial images of the colonias spanning a period of 28 years, household
information from the 2000 census disaggregated at the block level for these colonias,
and information from a field survey and a semi-structured interview administered to a random sample of 123 households between February and June 2007. The survey collected information about house form and household characteristics. It also incorporated descriptive accounts of how households completed their houses, from the initial structure built or placed on the lot to the current house form. Data were compiled and analyzed using simple statistical methods, looking for identifiable patterns in house form and household characteristics and in their changes over time.
Findings showed that housing in colonias is built and consolidated following
identifiable patterns of successive changes to the house form. Findings also showed that
households in colonias share characteristics that change over time in similar ways. These
results suggest similarities between colonias and extra-legal settlements in other developing areas. Based on these findings, the study reflects on considerations that could improve the impact of projects, programs, and policies directed at supporting colonias and improving colonias housing.
|
437 |
Detailed Evaluation Of An Existing Reinforced Concrete Building Damaged Under Its Own Weight
Bayraktar, Atilla 01 May 2011 (has links) (PDF)
DETAILED EVALUATION OF AN EXISTING REINFORCED CONCRETE BUILDING DAMAGED UNDER ITS OWN WEIGHT
Bayraktar, Atilla
M.Sc., Department of Civil Engineering
Supervisor: Prof. Dr. Ahmet Yakut
May 2011, 130 pages
A significant part of Turkey's building inventory consists of reinforced concrete frame structures. In addition to the fact that a large part of the existing building inventory in Turkey shows insufficient seismic performance, damage or failure of structures under their own loads has also been observed.
The failure of the Zümrüt Apartment building, which occurred in 2004 in Konya and resulted in the death of 92 people, brings to the fore the need for research on the robustness and reserve capacities of buildings under gravity loading.
In the context of this thesis, the event in Konya that resulted in the crushing of four columns in the Dostlar Building Complex is studied. After the occurrence of the event, the building was visited, plans of the existing condition were prepared, and a pre-assessment was performed. Original plans of the building, strength test results of the concrete samples, and reinforcement detection results were obtained. The reasons behind the crushing of the columns have been investigated through a series of analyses based on a number of possible hypotheses. After the building was modeled in the SAP2000 program, demand-capacity ratios were calculated. The nonlinear behavior of the structure is determined by incremental static pushover analysis, and the seismic performance of the building is evaluated by the nonlinear procedure described in the 2007 Turkish Earthquake Code. To determine the nonlinear behavior under gravity loading and the collapse mechanism, an incremental vertical pushover analysis is performed.
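As a hedged illustration of the incremental pushover idea only (not of the thesis's SAP2000 models), the sketch below pushes an elastic-perfectly-plastic single-degree-of-freedom system through increasing displacement steps to trace a capacity curve; the stiffness, yield force, and step sizes are invented.

```python
# Hedged sketch: incremental static pushover of an elastic-perfectly-plastic
# SDOF system. The numbers (stiffness, yield force, step size) are invented
# purely to illustrate how a capacity curve is traced step by step.
import numpy as np

k = 50_000.0      # elastic lateral stiffness [kN/m] (assumed)
f_yield = 500.0   # yield base shear [kN] (assumed)
d_yield = f_yield / k

displacements = np.linspace(0.0, 0.05, 51)   # imposed roof displacement [m]
base_shear = []
for d in displacements:                      # incremental displacement steps
    v = k * d if d <= d_yield else f_yield   # elastic, then perfectly plastic
    base_shear.append(v)

# Capacity curve points (displacement, base shear); a demand-capacity ratio at
# a target displacement would compare the demand from analysis with this curve.
for d, v in zip(displacements[::10], base_shear[::10]):
    print(f"d = {d:.3f} m, V = {v:.1f} kN")
```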
|
438 |
Low-cost testing of high-precision analog-to-digital converters
Kook, Se Hun 05 July 2011 (has links)
The advent of deep submicron technology has resulted in a new generation of highly integrated mixed-signal system-on-chips (SoCs) and system-on-packages (SoPs). As a result, the cost of electrical products has sharply declined, and their performance has greatly improved. However, testing throughput still remains one of the major contributing factors to the final cost of electrical products. In addition, highly precise and robust test methods and equipment are needed to guarantee non-defective products to customers. Hence, testing is a critical part of the manufacturing process in the semiconductor industry. Testing such highly integrated systems and devices requires high-performance, high-cost equipment.
Analog-to-digital converters (A/D converters) are the largest-volume mixed-signal circuits, and they play a key role in communication between the analog and digital domains in many mixed-signal systems. Due to the increasing complexity of mixed-signal systems and the availability of new generations of highly integrated systems, reliable and robust data conversion schemes are necessary for many mixed-signal designs. Many applications, such as telecommunications, instrumentation, sensing, and data acquisition, demand data converters that support ultra-high speeds, wide bandwidths, and high precision with excellent dynamic performance and low noise. However, as the resolutions and speeds of A/D converters increase, testing becomes much harder and more expensive.
In this research work, low-cost test strategies that reduce the overall test cost of high-precision A/D converters are developed. The testing of data converters can be classified into dynamic (alternating current, AC) performance tests and static (direct current, DC) performance tests [1]. For dynamic specification testing, a low-cost test stimulus is generated using an optimization algorithm to stimulate the high-precision sigma-delta A/D converters under test. Dynamic specifications are accurately predicted in two different ways, using the concepts of an alternate-based test and a signature-based test. For this purpose, the output pulse stream of the sigma-delta modulator is made observable and useful. This technique does not require spectrally pure input signals, so the test cost can be reduced compared to a conventional test method. In addition, two low-cost test strategies for static specification testing of high-resolution A/D converters are developed using a polynomial-fitting method. The cost of testing can be significantly reduced because fewer samples are measured than in a conventional histogram test. One test strategy needs no expensive high-precision stimulus generator, which reduces the test cost, while the other finds the optimal set of test-measurement points for maximum fault coverage, allowing minimum-code measurement to serve as a production test solution.
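A rough numerical sketch of the polynomial-fitting idea for static testing is given below: it simulates a hypothetical ADC transfer curve, fits a low-order polynomial to a reduced set of measurements, and estimates INL from the fit. The converter model, sample counts, and polynomial order are assumptions, not the thesis's actual procedure.

```python
# Hedged sketch: static (DC) test of an ADC by fitting a low-order polynomial
# to a sparse set of transfer-curve measurements instead of a full histogram
# test. The simulated 12-bit converter and its bowed nonlinearity are invented.
import numpy as np

n_codes = 2 ** 12          # hypothetical 12-bit converter
full_scale = 1.0

def adc(vin):
    """Hypothetical nonideal ADC: ideal quantizer plus a smooth bow error."""
    bow = 4.0 / n_codes * vin * (full_scale - vin)       # invented INL shape
    code = np.floor((vin + bow) / full_scale * n_codes)
    return np.clip(code, 0, n_codes - 1)

# Sparse measurement: only 64 input points instead of thousands per code.
v_sparse = np.linspace(0.01, 0.99, 64) * full_scale
codes = adc(v_sparse)

# Fit a 3rd-order polynomial code = p(vin), evaluate it densely, and estimate
# INL as the deviation (in LSB) of the fitted curve from its best-fit line.
p = np.polyfit(v_sparse, codes, deg=3)
v_dense = np.linspace(0.01, 0.99, n_codes)
fitted = np.polyval(p, v_dense)
line = np.polyval(np.polyfit(v_dense, fitted, deg=1), v_dense)
inl_lsb = fitted - line
print(f"estimated peak INL ~ {np.max(np.abs(inl_lsb)):.2f} LSB")
```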
The theoretical concepts of the proposed test strategies are developed in software simulation and validated by hardware experiments using a commercially available A/D converter and converters designed on printed circuit boards (PCBs). This thesis thus provides low-cost test solutions for high-resolution A/D converters.
|
439 |
Improving systematic constraint-driven analysis using incremental and parallel techniques
Siddiqui, Junaid Haroon 25 February 2013 (links)
This dissertation introduces Pikse, a novel methodology for more effective and efficient checking of code conformance to specifications using parallel and incremental techniques, describes a prototype implementation that embodies the methodology, and presents experiments that demonstrate its efficacy. Pikse has at its foundation a well-studied approach -- systematic constraint-driven analysis -- that has two common forms: (1) constraint-based testing -- where logical constraints that define desired inputs and expected program behavior are used for test input generation and correctness checking, say to perform black-box testing; and (2) symbolic execution -- where a systematic exploration of (bounded) program paths using symbolic input values is used to check properties of program behavior, say to perform white-box testing.
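For flavor, the sketch below shows the constraint-based (black-box) side in miniature: bounded candidate inputs are generated, filtered by a declarative input constraint, and an expected-behavior constraint is checked on the output of the code under test. It is a generic illustration with invented names, not Pikse itself.

```python
# Hedged sketch of constraint-based testing: enumerate bounded candidate
# inputs, keep those satisfying an input constraint, and check an output
# constraint (the "expected behavior") on the code under test. Generic
# illustration; not the Pikse implementation described in the dissertation.
from itertools import product
from bisect import bisect_left

def is_valid_input(xs):
    """Input constraint: the list must be sorted and duplicate-free."""
    return all(a < b for a, b in zip(xs, xs[1:]))

def binary_search(xs, target):          # code under test
    i = bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

def check_output(xs, target, result):
    """Correctness constraint relating input and output."""
    return (result == -1 and target not in xs) or \
           (0 <= result < len(xs) and xs[result] == target)

bound, failures = 3, 0
for length in range(bound + 1):
    for xs in product(range(bound), repeat=length):   # bounded-exhaustive inputs
        xs = list(xs)
        if not is_valid_input(xs):
            continue
        for target in range(bound):
            if not check_output(xs, target, binary_search(xs, target)):
                failures += 1
print("failing (input, target) pairs:", failures)     # expect 0
```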
Our insight at the heart of Pikse is that for certain path-based analyses, (1) the state of a run of the analysis can be encoded compactly, which provides a basis for parallel techniques that have low communication overhead; and (2) iterations performed by the analysis have commonalities, which provides the basis for incremental techniques that re-use results of computations common to successive iterations.
We embody our insight into a suite of parallel and incremental techniques that enable more effective and efficient constraint-driven analysis. Moreover, our techniques work in tandem, for example, combining black-box constraint-based input generation with white-box symbolic execution. We present a series of experiments to evaluate our techniques. Experimental results show that Pikse enables significant speedups over the previous state of the art.
|
440 |
Distributed processing techniques for parameter estimation and efficient data-gathering in wireless communication and sensor networks
Bogdanovic, Nikola 07 May 2015
This dissertation deals with the distributed processing techniques for parameter estimation and efficient data-gathering in wireless communication and sensor networks.
With the aim of enabling an energy-aware and low-complexity distributed implementation of the estimation task, several useful optimization techniques that generally yield linear estimators have been derived in the literature. Up to now, most works have assumed that the nodes are interested in estimating the same vector of global parameters. This scenario can be viewed as a special case of a more general problem where the nodes of the network have overlapping but different estimation interests.
Motivated by this fact, this dissertation introduces a new Node-Specific Parameter Estimation (NSPE) formulation where the nodes are interested in estimating parameters of local, common and/or global interest. We consider a setting where the NSPE interests are partially overlapping, while the non-overlapping parts can be arbitrarily different. This setting can model several applications, e.g., cooperative spectrum sensing in cognitive radio networks, power system state estimation in smart grids, etc. Unsurprisingly, the effectiveness of any distributed adaptive implementation depends on the way cooperation is established at the network level, as well as on the processing strategies considered at the node level.
At the network level, this dissertation is concerned with the incremental and diffusion cooperation schemes in the NSPE settings. Under the incremental mode, each node communicates with only one neighbor, and the data are processed in a cyclic manner throughout the network at each time instant. On the other hand, in the diffusion mode at each time step each node of the network cooperates with a set of neighboring nodes.
Based on Least-Mean Squares (LMS) and Recursive Least-Squares (RLS) learning rules employed at the node level, we derive novel distributed estimation algorithms that undertake distinct but coupled optimization processes in order to obtain adaptive solutions of the considered NSPE setting.
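To make the node-level learning rule concrete, here is a hedged numpy sketch of the classical adapt-then-combine diffusion LMS for a single global parameter vector; the thesis's NSPE algorithms generalize this setting, and the ring topology, step size, and combination weights below are invented.

```python
# Hedged sketch: adapt-then-combine (ATC) diffusion LMS for estimating one
# global parameter vector w_o shared by all nodes. This is the classical
# algorithm the thesis builds on, not its node-specific (NSPE) extension;
# network topology, step size, and noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim, n_iter, mu = 6, 4, 2000, 0.02
w_o = rng.normal(size=dim)                          # unknown true parameter

# Ring topology: each node combines with itself and its two ring neighbors,
# using uniform combination weights.
neighbors = [[(k - 1) % n_nodes, k, (k + 1) % n_nodes] for k in range(n_nodes)]

w = np.zeros((n_nodes, dim))                        # per-node estimates
for _ in range(n_iter):
    # Adaptation step: local LMS update at every node.
    psi = np.empty_like(w)
    for k in range(n_nodes):
        u = rng.normal(size=dim)                    # regression vector
        d = u @ w_o + 0.1 * rng.normal()            # noisy measurement
        psi[k] = w[k] + mu * u * (d - u @ w[k])
    # Combination step: average the intermediate estimates of the neighborhood.
    for k in range(n_nodes):
        w[k] = psi[neighbors[k]].mean(axis=0)

print("mean-square deviation:", np.mean(np.sum((w - w_o) ** 2, axis=1)))
```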
Detailed analyses of the mean convergence and the steady-state mean-square performance are provided. Finally, the resulting performance gains are illustrated in the context of cooperative spectrum sensing in cognitive radio networks. Another fundamental problem considered in this dissertation is the data-gathering problem, sometimes also referred to as sensor reachback, which arises in Wireless Sensor Networks (WSNs). In particular, the problem concerns the transmission of the acquired observations to a data-collecting node, often referred to as the sink node, which has increased processing capabilities and more available power compared to the other nodes. Here, we focus on WSNs deployed for structural health monitoring.
In general, there are several difficulties in the sensor reachback problem arising in such a network. First, the amount of data generated by the sensor nodes may be immense, because structural monitoring applications need to transfer relatively large amounts of dynamic response measurement data. Furthermore, the assumption that all sensors have a direct, line-of-sight link to the sink does not hold in the case of these structures.
To reduce the amount of data that must be transmitted to the sink node, the correlation among measurements of neighboring nodes can be exploited. A possible approach to exploiting spatial data correlation is Distributed Source Coding (DSC). A DSC technique may achieve lossless compression of multiple correlated sensor outputs without establishing any communication links between the nodes. Other approaches employ lossy techniques by taking advantage of the temporal correlations in the data and/or suitable stochastic modeling of the underlying processes. In this dissertation, we present a channel-aware lossless extension of sequential decoding based on cooperation between the nodes. Next, we also present a cooperative communication protocol based on adaptive spatio-temporal prediction. As a more practical approach, it allows a lossy reconstruction of the transmitted data, while offering considerable energy savings in terms of transmissions toward the sink. / This dissertation deals with distributed processing techniques for parameter estimation and for efficient data gathering in wireless communication and sensor networks.
The estimation problem consists of extracting a set of parameters from noisy temporal and spatial measurements collected by different nodes that monitor an area or a field. The goal is to obtain an estimate that is as accurate as the one we would achieve if every node had access to the information available to the network as a whole. In the relatively recent past, several efforts were made to develop energy-efficient and low-complexity distributed implementations of the estimator. As a result, the literature now contains several interesting optimization techniques that lead mainly to linear estimators. Up to now, most works have assumed that the nodes are interested in estimating a common parameter vector, which is the same for the entire network. This scenario can be viewed as a special case of a more general problem in which the nodes of the network have overlapping but different estimation interests.
Motivated by this fact, this dissertation defines a new framework of Node-Specific Parameter Estimation (NSPE), in which the nodes are interested in estimating parameters of local interest, parameters common to a subset of nodes, and/or parameters common to the entire network. We consider a setting where the NSPE interests are partially overlapping, while the non-overlapping parts can be arbitrarily different. This framework can model several applications, e.g., cooperative spectrum sensing in cognitive radio networks, state estimation in power transmission grids, etc. As expected, the effectiveness of any distributed adaptive technique depends both on the particular way cooperation is carried out at the network level and on the processing strategies used at the node level. At the network level, this dissertation deals with the incremental and diffusion modes of cooperation within the NSPE framework. In the incremental mode, each node communicates with only one neighbor, and the data of the network are processed in a cyclic manner at each time instant. In the diffusion mode, on the other hand, at each time step each node of the network cooperates with a set of neighboring nodes. Based on the Least-Mean-Squares (LMS) and Recursive Least-Squares (RLS) algorithms used as learning rules at the node level, we develop new distributed estimation algorithms that undertake distinct but coupled optimization processes in order to obtain the adaptive solutions of the considered NSPE setting. Detailed analyses of the mean convergence and the steady-state mean-square performance are also derived within this dissertation. Finally, as demonstrated, applying the proposed estimation techniques in the context of cooperative spectrum sensing in cognitive radio networks leads to noticeable performance gains.
Another fundamental problem studied in this work is the data-gathering problem, also known as sensor reachback, which arises in Wireless Sensor Networks (WSNs). More specifically, the problem concerns the transmission of the acquired measurements to a data-collecting node, called the sink node, which has increased processing capabilities and more available power compared to the other nodes. Here, we have focused on WSNs deployed for structural health monitoring. In general, several difficulties arise in the sensor reachback problem in such a network. First, the amount of data generated by the sensors may be immense, owing to the fact that structural health monitoring requires transferring relatively large amounts of dynamic response measurements. Moreover, the assumption that all sensors have a direct transmission path, in other words that they are in line of sight with the sink node, does not hold for these structures.
To reduce the amount of data that must be transmitted to the sink node, the correlation among the measurements of neighboring nodes is exploited. One possible approach to exploiting spatial data correlation is Distributed Source Coding (DSC). A DSC technique achieves lossless compression of multiple correlated node measurements without requiring any communication between the nodes. Other approaches use lossy compression techniques, exploiting the temporal correlations in the data and/or applying a suitable stochastic modeling of the underlying processes. In this dissertation, we present a channel-aware lossless extension of sequential decoding, based on suitably designed cooperation between the nodes. In addition, we present a cooperative communication protocol based on adaptive spatio-temporal prediction. As a more practical approach, the protocol allows a lossy reconstruction of the transmitted data, while offering considerable energy savings by reducing the number of transmissions required toward the sink node.
|