91

Leachate-Based Biotic Ligand Model for Soil: Ecotoxicological Risk Assessment of Copper for Invertebrates, Plants, and Soil Microbial Processes

Florén, Tove January 2021 (has links)
Environmental pollution by heavy metals has become an increasing problem. Environmental risk assessments can be conducted to investigate and determine the potential risk of polluted terrestrial environments. Traditionally, risk assessments are based on total soil metal concentrations on a dry weight basis, which do not account for metal bioavailability. The bioavailable fraction of a metal is the fraction available for metabolic uptake across a biological membrane, and it is largely controlled by the physicochemical characteristics of the soil solution. For soil-dwelling organisms, the most important physicochemical parameters governing copper toxicity are pH and dissolved organic carbon (DOC). To incorporate these parameters into risk assessments, mechanistic chemical equilibrium models such as biotic ligand models (BLMs) can be used. These have previously been applied mainly to aquatic environments, and only in recent years have they been extended to the terrestrial realm. The overall aim of this thesis was to test the applicability of a leachate-based biotic ligand model, which takes pH dependency into account, for ecotoxicological risk assessment of soil-dwelling organisms. Toxicity data with associated soil-solution pH for seven soil-dwelling organisms and microbially mediated soil processes, as well as physicochemical characteristics of three Swedish field-sampled soils amended with biochar, were obtained from the Swedish Geotechnical Institute. The toxicity data were used to derive two key parameters for calibrating the soil BLM through linear regression analysis: the pH dependency and the species-specific intrinsic sensitivity. The calibrated biotic ligand models were applied to the field soils, and species sensitivity distributions (SSDs) were derived for each soil to calculate hazardous metal concentrations. A simplified risk assessment of the soils was performed based on the results of the leachate-based biotic ligand models and on measured total concentrations on a dry weight basis. As expected, the regression analysis showed a strong pH dependency of toxicity effect concentrations: for all included test organisms, the copper toxicity effect concentration decreased as the pH of the soil solution increased. Although Cu2+ toxicity thus increased with increasing pH, the considered organisms showed individual and varying pH dependencies, especially at pH 3-4 and pH 7-8. Further, the risk assessment based on the Swedish EPA method, which uses total metal concentrations on a dry weight basis, yielded different results than the risk assessment based on the leachate-based biotic ligand models. The soils that had been amended with biochar contained lower total Cu concentrations on a dry weight basis than those which had not, so total Cu concentrations exceeded the guideline value for sensitive land use only in the non-amended soils; all field soils exceeded the guideline value for less sensitive land use. Like the total Cu concentrations on a dry weight basis, the total dissolved Cu concentrations also decreased with added biochar, but the same trend was not seen for Cu2+ in CaCl2 leachates. DOC in the leachates decreased with added biochar, suggesting that biochar sorbs DOC. A majority of the total dissolved Cu was bound to DOC and only a small fraction remained as free ions.
The lower DOC concentrations led to higher Cu2+ concentrations in the leachate. Consequently, two of the biochar-amended soils had Cu2+ concentrations exceeding the calculated HC50 (the protection level for less sensitive land use, LSL). The predicted toxic effect concentrations ranged from 0.001 μg/L for the most sensitive organism, tomato shoot (L. esculentum), to 3.53 μg/L for the least sensitive endpoint, soil-induced respiration (SIR). The most sensitive field soil had the highest measured pH and had been amended with 6% biochar; the two least sensitive field soils had the lowest measured pH and had been amended with 3% and 6% biochar, respectively. The risk assessment based on the soil-BLM approach yielded different, but not less conservative, results than the traditional risk assessment based on total concentrations on a dry weight basis; the BLM-based assessment had been expected to be less conservative, since it takes site-specific bioavailability into account. The leachate-based soil BLM appears sensitive to changes and variations in the speciation input parameters. To improve the robustness of the model, and the accuracy of risk assessments, additional organisms should be included in the SSDs and speciation should be performed on soils with a wider range of pH. The potential of leachate-based BLMs for risk assessment has been demonstrated. The results invite further development of leachate-based soil BLMs and have the potential to increase knowledge of the chemistry and toxicology of copper in soils, as well as of the effects and behaviour of biochar as a metal sorbent.
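A minimal Python sketch of the two calibration steps described above, using hypothetical numbers rather than the Swedish Geotechnical Institute data: a linear regression of log-transformed effect concentrations on soil-solution pH yields the pH-dependency slope and intrinsic-sensitivity intercept, and a log-normal species sensitivity distribution over per-species predictions then gives hazardous concentrations such as HC5 and HC50.

```python
import numpy as np
from scipy import stats

# Hypothetical toxicity data: soil-solution pH and Cu2+ EC50 (ug/L) for one species.
# The thesis's actual calibration used data from the Swedish Geotechnical Institute.
ph = np.array([3.5, 4.2, 5.1, 6.0, 6.8, 7.5])
ec50 = np.array([120.0, 60.0, 18.0, 6.5, 2.1, 0.8])  # toxicity rises with pH

# Step 1: pH dependency -- linear regression of log10(EC50) on pH.
slope, intercept, r, p, se = stats.linregress(ph, np.log10(ec50))
print(f"pH-dependency slope: {slope:.2f}, intrinsic sensitivity (intercept): {intercept:.2f}")

# Step 2: species sensitivity distribution (SSD) -- fit a log-normal
# distribution to per-species effect concentrations predicted at one field pH.
field_ph = 6.5
# Hypothetical per-species (slope, intercept) pairs from individual calibrations.
species_params = [(-0.55, 3.1), (-0.40, 1.5), (-0.62, 4.2), (-0.35, 2.6), (-0.50, 2.0)]
log_ec = np.array([a * field_ph + b for a, b in species_params])
mu, sigma = log_ec.mean(), log_ec.std(ddof=1)

# Hazardous concentrations: HC5 protects 95% of species, HC50 protects 50%.
hc5 = 10 ** stats.norm.ppf(0.05, mu, sigma)
hc50 = 10 ** stats.norm.ppf(0.50, mu, sigma)
print(f"HC5: {hc5:.3g} ug/L, HC50: {hc50:.3g} ug/L")
```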
92

Glycine/GABA mixed inhibitory synapses studied with super-resolution microscopy: differential regulation of GlyRs and GABAARs by excitatory activity

Yang, Xiaojuan 10 September 2019 (has links)
Stochastic optical reconstruction microscopy (STORM) bypasses the diffraction limit by recording spatially and temporally separated single-molecule signals, achieving a resolution of ~10-40 nm. In my study, I developed a two-color dSTORM imaging and data analysis strategy in order to investigate the ultrastructure of mixed inhibitory synapses. My results show that GlyRs, GABAARs, gephyrin and RIM1/2 exhibit a heterogeneous intra-synaptic organization and form sub-synaptic domains (SSDs). GlyRs and GABAARs were not fully intermingled, but sometimes occupied different spaces at the post-synaptic density (PSD). In addition, post-synaptic gephyrin SSDs were aligned with pre-synaptic RIM1/2 SSDs, forming trans-synaptic nanocolumns. During neuronal activity elevated by 4-AP treatment, the spatial correlation between GlyRs, GABAARs and gephyrin increased at the PSD. Moreover, the spatial correlation of GlyRs and RIM1/2 also increased, while that of GABAARs and RIM1/2 did not change. The number of SSDs per synapse for these synaptic proteins was not changed by 4-AP. My study thus provides a new angle for understanding the mechanisms underlying GABAergic/glycinergic co-transmission.
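A minimal sketch of one way to quantify the kind of two-color spatial correlation reported above, assuming hypothetical localization coordinates and a simple nearest-neighbour measure against a randomized control; this is illustrative only, not the thesis's actual dSTORM analysis pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Hypothetical 2D localization coordinates (nm) for two channels at one synapse.
glyr = rng.uniform(0, 400, size=(300, 2))                 # channel 1 localizations
gephyrin = glyr[:150] + rng.normal(0, 15, (150, 2))       # partially co-organized channel 2

def median_nn_distance(a, b):
    """Median nearest-neighbour distance from points in a to points in b (nm)."""
    return float(np.median(cKDTree(b).query(a)[0]))

observed = median_nn_distance(glyr, gephyrin)

# Randomized control: redistribute channel 2 positions within the same field.
shuffled = rng.uniform(0, 400, size=gephyrin.shape)
control = median_nn_distance(glyr, shuffled)

# A ratio below 1 indicates spatial association beyond chance.
print(f"observed {observed:.1f} nm vs control {control:.1f} nm, ratio {observed/control:.2f}")
```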
93

A Study on Enhancing Corporate Value: The Case of Transcend Information, Inc.

林書良 Unknown Date (has links)
The memory industry has risen rapidly over the past ten years alongside the development of the computer and communications industries, but successive waves of capacity-expansion investment by major domestic and international manufacturers have produced overcapacity and oversupply in recent years. The global downturn of 2008 left demand for consumer electronics weak, and even production cuts could not reverse the decline. Despite this harsh environment, the case company of this study, Transcend Information, has maintained relatively high and stable profitability. Using an analysis of the memory module/channel industry in which it operates, together with its historical financial statements and related forecast data, as the basis for valuation, this study estimates the intrinsic value of Transcend Information and reaches the following conclusions:
1. With the successive launches of low-cost computers (netbooks) and solid-state drives (SSDs), the outlook for memory-based products remains very optimistic, but product differentiation among competitors is small and competition will intensify.
2. Transcend already possesses the key success factors a memory module/channel vendor needs: ample working capital, economies of scale with a leading market share, high brand recognition, close supplier relationships, a differentiated product mix, a dispersed distributor customer base, a global logistics network, and rapid new-product development.
3. The correlation between Transcend's MVA and EVA is as high as 0.80326, meaning that changes in EVA explain changes in MVA well and suggesting that the company suffers little from information asymmetry.
4. Changes in the profit margin on sales were the main cause of the decline in Transcend's ROIC, driven chiefly by oversupply of key upstream raw materials: rapidly falling input prices dragged down end-product selling prices. Although Transcend's five-year moving-average ROIC exceeded 30%, its moving-average ROS declined gradually to below 10%, while its moving-average capital turnover rose, suggesting a gradual shift toward a commodity business in recent years.
5. Apart from a slight deterioration in the 2006 inventory indicator, caused by aggressive stocking in anticipation of Vista-driven sales growth, Transcend's gross-margin indicator stayed above zero even though both key upstream raw-material ICs (over 80% of manufacturing cost) and product selling prices fell sharply, and it was clearly better than its peers'. The other two indicators hovered around zero, showing no improper inflation of receivables or of selling and administrative expenses; overall earnings quality was sound.
6. In operating performance, Transcend improved year by year in both cash conversion days and the ratio of operating cash flow to after-tax net income, whereas ADATA Technology gradually deteriorated.
7. Even under severe price volatility, Transcend clearly outperformed ADATA Technology in both gross margin and ROIC, and coped better with fluctuations in key material prices.
8. In the valuation analysis, the sales-driven DCF method yields a per-share shareholder value of NT$64.86-102.10, and the earnings-driven DCF method NT$85.40-115.50; compared with the average market price of NT$89.95 over the most recent 12 months, both results remain useful references.
9. After sensitivity analysis of the valuation, the value drivers rank in importance as profit margin > sales growth rate > earnings growth rate > cost of capital > total investment-to-sales ratio, an important reference for management when setting strategic priorities.
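A minimal sketch of the sales-driven DCF logic behind conclusion 8 above, with all inputs (growth, margin, investment rate, cost of capital, share count) hypothetical placeholders rather than Transcend's actual figures.

```python
# Sales-driven DCF sketch: free cash flow derived from a sales forecast.
# All inputs are hypothetical placeholders, not Transcend's actual figures.
sales = 100.0            # base-year sales (NT$ hundred million)
growth = 0.08            # sales growth rate
margin = 0.12            # operating margin on sales
inv_rate = 0.05          # total investment / sales
wacc = 0.10              # cost of capital
terminal_g = 0.02        # perpetual growth after the forecast horizon
shares = 4.0             # shares outstanding (hundred million)

value = 0.0
for year in range(1, 6):                      # 5-year explicit forecast
    sales *= 1 + growth
    fcf = sales * (margin - inv_rate)         # free cash flow to the firm
    value += fcf / (1 + wacc) ** year

# Gordon terminal value, discounted back to today.
terminal = fcf * (1 + terminal_g) / (wacc - terminal_g)
value += terminal / (1 + wacc) ** 5

print(f"value per share: NT${value / shares:.2f}")
```

Ranking the value drivers, as in conclusion 9, amounts to perturbing each input in turn and comparing the resulting change in value per share.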
94

Uncertainty in Aquatic Toxicological Exposure-Effect Models: the Toxicity of 2,4-Dichlorophenoxyacetic Acid and 4-Chlorophenol to Daphnia carinata

Dixon, William J., bill.dixon@dse.vic.gov.au January 2005 (has links)
Uncertainty is pervasive in risk assessment. In ecotoxicological risk assessments, it arises from such sources as a lack of data, the simplification and abstraction of complex situations, and ambiguities in assessment endpoints (Burgman 2005; Suter 1993). When evaluating and managing risks, uncertainty needs to be explicitly considered in order to avoid erroneous decisions and to be able to make statements about the confidence that we can place in risk estimates. Although informative, previous approaches to dealing with uncertainty in ecotoxicological modelling have been found to be limited, inconsistent and often based on assumptions that may be false (Ferson & Ginzburg 1996; Suter 1998; Suter et al. 2002; van der Hoeven 2004; van Straalen 2002a; Verdonck et al. 2003a). In this thesis a Generalised Linear Modelling approach is proposed as an alternative, congruous framework for the analysis and prediction of a wide range of ecotoxicological effects. This approach was used to investigate the results of toxicity experiments on the effect of 2,4-Dichlorophenoxyacetic Acid (2,4-D) formulations and 4-Chlorophenol (4-CP, an associated breakdown product) on Daphnia carinata. Differences between frequentist Maximum Likelihood (ML) and Bayesian Markov chain Monte Carlo (MCMC) approaches to statistical reasoning and model estimation were also investigated. These approaches are inferentially disparate and place different emphasis on aleatory and epistemic uncertainty (O'Hagan 2004). Bayesian MCMC and Probability Bounds Analysis methods for propagating uncertainty in risk models are also compared for the first time. For simple models, Bayesian and frequentist approaches to Generalised Linear Model (GLM) estimation were found to produce very similar results when non-informative prior distributions were used for the Bayesian models. Potency estimates and regression parameters were found to be similar for identical models, signifying that Bayesian MCMC techniques are at least a suitable and objective replacement for frequentist ML in the analysis of exposure-response data. Applications of these techniques demonstrated that Amicide formulations of 2,4-D are more toxic to Daphnia than their unformulated, Technical Acid parent. Different results were obtained from Bayesian MCMC and ML methods when more complex models and data structures were considered. In the analysis of 4-CP toxicity, the treatment of two different factors as fixed or random in standard and mixed-effect models was found to affect variance estimates to the degree that different conclusions would be drawn from the same model, fit to the same data. Associated discrepancies in the treatment of overdispersion between ML and Bayesian MCMC analyses were also found to affect results. Bayesian MCMC techniques were found to be superior to the ML ones employed for the analysis of complex models because they enabled the correct formulation of hierarchical (nested) data structures within a binomial logistic GLM. Application of these techniques to the analysis of results from 4-CP toxicity testing on two strains of Daphnia carinata found that between-experiment variability was greater than within-experiment or between-strain variability. Perhaps surprisingly, this indicated that long-term laboratory culture had not significantly affected the sensitivity of one strain when compared to cultures of another strain that had recently been established from field populations.
The results from this analysis highlighted the need for repetition of experiments, proper model formulation in complex analyses and careful consideration of the effects of pooling data on characterising variability and uncertainty. The GLM framework was used to develop three-dimensional surface models of the effects of different-length pulse exposures, and subsequent delayed toxicity, of 4-CP on Daphnia. These models described the relationship between exposure duration and intensity (concentration) on toxicity, and were constructed for both pulse and delayed effects. Statistical analysis of these models found that significant delayed effects occurred following the full range of pulse exposure durations, and that both exposure duration and intensity interacted significantly and concurrently with the delayed effect. These results indicated that failure to consider delayed toxicity could lead to significant underestimation of the effects of pulse exposure, and therefore increase uncertainty in risk assessments. A number of new approaches to modelling ecotoxicological risk and to propagating uncertainty were also developed and applied in this thesis. In the first of these, a method for describing and propagating uncertainty in conventional Species Sensitivity Distribution (SSD) models was described. This utilised Probability Bounds Analysis to construct a nonparametric 'probability box' on an SSD based on EC05 estimates and their confidence intervals. Predictions from this uncertain SSD and the confidence interval extrapolation methods described by Aldenberg and colleagues (2000; 2002a) were compared. It was found that the extrapolation techniques underestimated the width of uncertainty (confidence) intervals by 63% and the upper bound by 65%, when compared to the Probability Bounds (P-Bounds) approach, which was based on actual confidence estimates derived from the original data. An alternative formulation of ecotoxicological risk modelling was also proposed, based on a binomial GLM. In this formulation, the model is first fit to the available data in order to derive mean and uncertainty estimates for the parameters. This 'uncertain' GLM model is then used to predict the risk of effect from possible or observed exposure distributions. This risk is described as a whole distribution, with a central tendency and uncertainty bounds derived from the original data and the exposure distribution (if this is also 'uncertain'). Bayesian and P-Bounds approaches to propagating uncertainty in this model were compared using an example of the risk of exposure to a hypothetical (uncertain) distribution of 4-CP for the two Daphnia strains studied. This comparison found that the Bayesian and P-Bounds approaches produced very similar mean and uncertainty estimates, with the P-Bounds intervals always being wider than the Bayesian ones. This difference is due to the two approaches' different methods for dealing with dependencies between model parameters, and confirms that the P-Bounds approach is better suited to situations where data and knowledge are scarce. The advantages of the Bayesian risk assessment and uncertainty propagation method developed here are that it allows calculation of the likelihood of any effect occurring, not just the (probability) bounds, and that the same software (WinBUGS) and model construction may be used to fit regression models and predict risks simultaneously.
The GLM risk modelling approaches developed here are able to explain a wide range of response shapes (including hormesis) and underlying (non-normal) distributions, and do not involve expression of the exposure-response as a probability distribution, hence solving a number of problems found with previous formulations of ecotoxicological risk. The approaches developed can also be easily extended to describe communities, include modifying factors, mixed-effects, population growth, carrying capacity and a range of other variables of interest in ecotoxicological risk assessments. While the lack of data on the toxicological effects of chemicals is the most significant source of uncertainty in ecotoxicological risk assessments today, methods such as those described here can assist by quantifying that uncertainty so that it can be communicated to stakeholders and decision makers. As new information becomes available, these techniques can be used to develop more complex models that will help to bridge the gap between the bioassay and the ecosystem.
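A minimal sketch of a binomial logistic GLM fit to exposure-response counts and a potency (EC50) estimate derived from it, in the spirit of the framework above; the data are hypothetical, the fit shown is frequentist maximum likelihood, and a Bayesian MCMC variant (e.g. in WinBUGS or PyMC) would replace it in the comparisons the thesis describes.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical exposure-response data: concentration (mg/L), responders, total.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
responded = np.array([1, 3, 8, 14, 18, 20])
total = np.full(6, 20)

# Binomial logistic GLM on log-concentration (maximum-likelihood fit).
X = sm.add_constant(np.log10(conc))
model = sm.GLM(np.column_stack([responded, total - responded]), X,
               family=sm.families.Binomial())
fit = model.fit()

# EC50: the concentration where the predicted response probability is 0.5,
# i.e. where intercept + slope * log10(EC50) = 0.
b0, b1 = fit.params
ec50 = 10 ** (-b0 / b1)
print(f"slope {b1:.2f}, EC50 approx {ec50:.2f} mg/L")
```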
95

Dense Stereo Reconstruction in a Field Programmable Gate Array

Sabihuddin, Siraj 30 July 2008 (has links)
Estimation of depth within an imaged scene can be formulated as a stereo correspondence problem. Software solutions tend to be too slow for high frame rate (i.e. > 30 fps) performance, while hardware solutions can yield marked improvements. This thesis explores one such hardware implementation that generates dense binocular disparity estimates at frame rates of over 200 fps using a dynamic programming maximum-likelihood formulation (DPML) developed by Cox et al. A highly parameterizable field programmable gate array implementation of this architecture demonstrates equivalent accuracy while executing at significantly higher frame rates than current approaches. Existing hardware implementations for dense disparity estimation often use sum of squared differences, sum of absolute differences or similar algorithms that typically perform poorly in comparison to DPML. The presented system runs at 248 fps for a resolution of 320 x 240 pixels and a disparity range of 128 pixels, a performance of 2.477 billion disparity estimates per second (DPS).
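A minimal sketch of scanline dynamic-programming stereo in the spirit of Cox et al.'s maximum-likelihood formulation; the squared-difference cost and fixed occlusion penalty are simplifying assumptions, and the FPGA design achieves its frame rates by parallelizing this kind of per-scanline recurrence in hardware.

```python
import numpy as np

def dp_scanline_disparity(left_row, right_row, max_disp, occlusion_cost=20.0):
    """Best disparity per pixel of one scanline via dynamic programming.

    Simplified Cox-style formulation: squared intensity difference as the
    match cost plus a fixed penalty for changing disparity (occlusion).
    """
    n = len(left_row)
    cost = np.full((n + 1, max_disp + 1), np.inf)
    back = np.zeros((n + 1, max_disp + 1), dtype=int)
    cost[0, :] = 0.0
    for x in range(1, n + 1):
        for d in range(max_disp + 1):
            if x - 1 - d < 0:
                continue  # disparity would reach outside the right image
            match = (float(left_row[x - 1]) - float(right_row[x - 1 - d])) ** 2
            # Transition from the previous pixel's disparity in {d-1, d, d+1};
            # a disparity change models an occlusion boundary.
            best_prev, best_d = np.inf, d
            for dp in (d - 1, d, d + 1):
                if 0 <= dp <= max_disp:
                    c = cost[x - 1, dp] + (0.0 if dp == d else occlusion_cost)
                    if c < best_prev:
                        best_prev, best_d = c, dp
            cost[x, d] = best_prev + match
            back[x, d] = best_d
    # Backtrack from the cheapest final state.
    disp = np.zeros(n, dtype=int)
    d = int(np.argmin(cost[n]))
    for x in range(n, 0, -1):
        disp[x - 1] = d
        d = back[x, d]
    return disp

left = np.array([10, 10, 50, 50, 50, 10, 10], dtype=float)
right = np.array([10, 50, 50, 50, 10, 10, 10], dtype=float)  # shifted by one pixel
print(dp_scanline_disparity(left, right, max_disp=2))
```

As a rough check of the quoted throughput: at 320 x 240 pixels, a 128-pixel disparity range and 248 fps, the hardware performs about 320 * 240 * 128 * 248, roughly 2.4 billion, disparity evaluations per second, the same order as the quoted 2.477 billion DPS.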
97

An Efficient, Extensible, Hardware-aware Indexing Kernel

Sadoghi Hamedani, Mohammad 20 June 2014 (has links)
Modern hardware has the potential to play a central role in scalable data management systems. A realization of this potential arises in the context of indexing queries, a recurring theme in real-time data analytics, targeted advertising, algorithmic trading, and data-centric workflows, and of indexing data, a challenge in multi-version analytical query processing. To enhance query and data indexing, in this thesis we present an efficient, extensible, and hardware-aware indexing kernel. This indexing kernel rests upon novel data structures and (parallel) algorithms that utilize the capabilities offered by modern hardware, especially the abundance of main memory, multi-core architectures, hardware accelerators, and solid state drives. This thesis focuses on our query indexing techniques for processing queries in data-intensive applications that face ever-increasing data volume and velocity. At the core of our query indexing kernel lies the BE-Tree family of memory-resident indexing structures, which scales by overcoming the curse of dimensionality through a novel two-phase space-cutting technique, effective top-k processing, and adaptive parallel algorithms that operate directly on compressed data (exploiting the multi-core architecture). Furthermore, we achieve line-rate processing by harnessing the unprecedented degrees of parallelism and pipelining only available through low-level logic design using FPGAs. Finally, we present a comprehensive evaluation that establishes the superiority of BE-Tree in comparison with state-of-the-art algorithms. We further expand the scope of our indexing kernel and describe how to accelerate analytical queries on (multi-version) databases by enabling indexes on the most recent data. Our goal is to reduce the overhead of index maintenance, so that indexes can be used effectively for analytical queries without being a heavy burden on transaction throughput. To this end, we re-design the data structures in the storage hierarchy to employ an extra level of indirection over solid state drives. This indirection layer dramatically reduces the number of magnetic disk I/Os needed to update indexes and localizes index maintenance. As a result, by rethinking how data is indexed, we eliminate the dilemma between update and query performance and substantially reduce index maintenance and query processing costs.
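A minimal sketch of the indirection idea described above, with all names hypothetical: index entries hold stable logical page IDs while a mapping table redirects each ID to the page's latest physical location, so an out-of-place update on the SSD touches one mapping entry instead of every index that references the page.

```python
# Indirection-layer sketch: indexes store stable logical page IDs; a small
# mapping table (kept on the SSD) points each logical ID at the current
# physical page, so relocating a page updates one mapping entry instead of
# every index that references it. Illustrative only.

class IndirectionLayer:
    def __init__(self):
        self.mapping = {}        # logical page ID -> physical page address
        self.next_physical = 0   # next free physical page (append-only)

    def allocate(self, logical_id):
        self.mapping[logical_id] = self.next_physical
        self.next_physical += 1

    def relocate(self, logical_id):
        """Out-of-place update: the page is rewritten at a new location."""
        self.mapping[logical_id] = self.next_physical
        self.next_physical += 1

    def resolve(self, logical_id):
        return self.mapping[logical_id]

layer = IndirectionLayer()
layer.allocate(logical_id=42)
# An index entry records logical ID 42; after an update the page moves,
# but the index entry is untouched -- only the mapping changes.
before = layer.resolve(42)
layer.relocate(42)
after = layer.resolve(42)
print(f"index still points at logical 42; physical moved {before} -> {after}")
```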
99

Improving Decision Support Systems for Water Resource Management

Chen, Chen, Dilley, Maura, Valente, Marco January 2008 (has links)
The Water Framework Directive (WFD) structures long-term plans for Europe's threatened water resources. Owing to the inherent and human-made complexities of the water cycle, stakeholders must move strategically to avoid crisis and restore sustainability. Yet the reality of water resource management today is falling short on delivery. Stakeholders require strategic tools that will help them build consensus and take action in the right direction. Using the Framework for Strategic Sustainable Development (FSSD), this study shows how Decision Support Systems can be strategically improved using a whole-systems approach grounded in basic Principles for Sustainability. In this way, stakeholders will be capable of making synchronized moves towards sustainability and thus more likely to realize the WFD's goal of 'good status' for all European waterways by 2015.
100

Interference Optimization in Cellular Communication Systems

Kassem, Edward January 2019 (has links)
This thesis is divided into six chapters. The first chapter explains the differences between the uplink physical layers of LTE and LTE Advanced, examines the channel characteristics of D2D communication within an LTE Advanced network, and proposes interference optimization mechanisms. The structure of a software-defined radio platform that can be used for radio channel evaluation is also presented. The second chapter evaluates and compares the performance of the uplink physical layers of LTE and LTE Advanced. An LTE Advanced transmitter and receiver structure with all signal processing stages is implemented in MATLAB. The generated signals of both systems are transmitted over various ITU-R channel models, and different channel estimation and signal detection techniques are used to recover the transmitted signal. The results are presented as BER and throughput curves. The third chapter proposes a frequency-reuse scheme for cellular networks with three power levels, used as a form of advanced interference mitigation. Normalized capacity densities of cells and their sub-regions are shown for three different cases of user distribution within the cells, together with the correlation between total capacity and the radius of each region. The results of the proposed scheme are compared with the traditional Reuse-3 frequency-reuse technique. The fourth chapter investigates an alternative interference optimization method: cooperative radio spectrum sensing was verified in four different real environments (outdoor-indoor, indoor-outdoor, outdoor-indoor and outdoor-outdoor). The proposed system is tested using Universal Software Radio Peripheral (USRP) devices and contains two types of detectors implemented at the receiver side: an energy detector and a statistical detector based on the Kolmogorov-Smirnov test. One of the main requirements of D2D communication is knowledge of the radio channel impulse response characteristics. The fifth chapter therefore presents a channel measurement method using Zadoff-Chu sequences in the frequency domain as an alternative to channel sounding in the time or frequency domain. With the proposed method, basic channel characteristics such as RMS delay spread, mean excess delay, path loss, and coherence bandwidth are extracted in up to 20 times shorter time than with the classical frequency-domain channel measurement method. The characteristics of outdoor static radio channels over distances of several kilometres are also investigated for the UHF and SHF bands with co-polarized horizontal and vertical antenna configurations. The sixth chapter concludes the thesis and summarizes the findings.
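A minimal sketch of frequency-domain channel estimation with a Zadoff-Chu sequence, as in the fifth chapter, followed by extraction of the mean excess delay and RMS delay spread from the power-delay profile; the channel taps, sequence length and root are hypothetical.

```python
import numpy as np

# Zadoff-Chu sequence of odd length N and root u (constant amplitude,
# zero autocorrelation -- the property exploited for channel sounding).
N, u = 63, 1
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)

# Hypothetical multipath channel: a few taps at delays of 0, 3 and 7 samples.
h_true = np.zeros(N, dtype=complex)
h_true[[0, 3, 7]] = [1.0, 0.5, 0.25]

# Received signal = circular convolution of the ZC sequence with the channel
# (noise-free for clarity).
rx = np.fft.ifft(np.fft.fft(zc) * np.fft.fft(h_true))

# Frequency-domain estimation: divide the received spectrum by the known
# ZC spectrum, then IFFT back to the channel impulse response.
h_est = np.fft.ifft(np.fft.fft(rx) / np.fft.fft(zc))

# Mean excess delay and RMS delay spread from the power-delay profile.
pdp = np.abs(h_est) ** 2
delays = np.arange(N)
mean_delay = np.sum(delays * pdp) / np.sum(pdp)
rms = np.sqrt(np.sum((delays - mean_delay) ** 2 * pdp) / np.sum(pdp))
print(f"mean excess delay {mean_delay:.2f} samples, RMS delay spread {rms:.2f} samples")
```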
