21

Subsampling Strategies for Bayesian Variable Selection and Model Averaging in GLM and BGNLM

Lachmann, Jon January 2021 (has links)
Bayesian Generalized Nonlinear Models (BGNLM) offer a flexible alternative to GLM while still providing better interpretability than machine learning techniques such as neural networks. In BGNLM, the methods of Bayesian Variable Selection and Model Averaging are applied in an extended GLM setting. Models are fitted to data using MCMC within a genetic framework in an algorithm called GMJMCMC. In this thesis, we present a new implementation of the algorithm as a package in the programming language R. We also present a novel algorithm called S-IRLS-SGD for estimating the MLE of a GLM by subsampling the data. Finally, we present some theory combining the novel algorithm with GMJMCMC/MJMCMC/MCMC and a number of experiments demonstrating the performance of the contributed algorithm.
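The subsampling idea at the core of S-IRLS-SGD can be illustrated with a minimal sketch: estimate the MLE of a GLM from stochastic gradients computed on random subsamples of the data. The sketch below is in Python (the thesis ships an R package) and uses plain minibatch SGD for a logistic-regression GLM; it omits the IRLS component entirely, and every name and tuning parameter is an illustrative assumption, not the thesis's implementation.

```python
import numpy as np

def subsampled_glm_mle(X, y, batch_size=256, lr=0.1, epochs=50, seed=0):
    """Approximate the MLE of a logistic-regression GLM using gradients
    of the negative log-likelihood computed on random subsamples.
    Hypothetical sketch of the subsampling idea, not S-IRLS-SGD itself."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            mu = 1.0 / (1.0 + np.exp(-X[b] @ beta))       # logistic mean
            beta -= lr * (X[b].T @ (mu - y[b]) / len(b))  # subsample gradient step
        lr *= 0.95  # decaying step size helps the iterates settle near the MLE
    return beta
```

Cheap approximate MLEs of this kind are what the abstract proposes to combine with GMJMCMC/MJMCMC/MCMC, so that exploring many candidate models stays affordable on large datasets.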
22

Vodoznačení statických obrazů / Watermarking of static images

Bambuch, Petr January 2008 (has links)
The thesis deals with the security of static images. The main aim is to embed a watermark into the original data effectively enough that it cannot be removed by simple and fast attack methods. As watermarking techniques develop, attack techniques improve alongside them; the main aim of an attack is to remove or devalue the watermark hidden in the image. The goal of the thesis is to review current techniques for watermarking static images and to implement two watermarking methods, which are then tested for robustness against attacks.
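As a concrete illustration of embedding and extracting an image watermark, here is a minimal least-significant-bit (LSB) sketch in Python. The two methods implemented in the thesis are not specified, so this is a generic example, not one of them; LSB embedding is in fact exactly the kind of scheme that simple, fast attacks (any recompression or requantization) destroy, which is why robust schemes usually embed in transform-domain coefficients instead.

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed a binary watermark into the least-significant bits of the
    first len(bits) pixels. Illustrative only: fragile against attacks."""
    wm = np.asarray(bits, dtype=np.uint8)
    flat = image.astype(np.uint8).flatten()
    flat[:wm.size] = (flat[:wm.size] & 0xFE) | wm  # clear the LSB, write the bit
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Read the watermark back out of the least-significant bits."""
    return image.flatten()[:n_bits] & 1
```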
23

Handling Complexity via Statistical Methods

Evidence S Matangi (8082623) 05 December 2019 (has links)
Phenomena investigated in complex systems are characteristically dynamic, multi-dimensional, and nonlinear. Their traits can be captured through data generating mechanisms (DGM) that explain the interactions among the systems' components. Measurement is fundamental to advancing science, and handling complexity requires a departure from linear thinking. Simplifying the measurement of complex and heterogeneous data in statistical methodology can compromise accuracy. In particular, conventional statistical methods make assumptions on the DGM that are rarely met in the real world, which can make inference inaccurate. We posit that causal inference for complex-systems phenomena requires at least the incorporation of subject-matter knowledge and the use of dynamic metrics in statistical methods to improve its accuracy.

This thesis consists of two separate topics on handling the complexities of data and data generating mechanisms: the evaluation of bundled nutrition interventions and the modeling of atmospheric data.

Firstly, when a public health problem requires multiple ways to address its contributing factors, bundling the approaches can be cost-effective. Scaling up bundled interventions geographically requires a hierarchical implementation structure, with central coordination and supervision of multiple sites and staff delivering a bundled intervention. The experimental design to evaluate such an intervention becomes complex in order to accommodate the multiple intervention components and the hierarchical implementation structure. The components of a bundled intervention may affect targeted outcomes additively or synergistically. However, noncompliance and protocol deviation can impede this potential impact and introduce data complexities. We identify several statistical considerations and recommendations for the implementation and evaluation of bundled interventions.

The simple aggregate metrics used in cluster randomized controlled trials do not use all available information, and findings are prone to the ecological fallacy, in which inference at the aggregate level may not hold at the disaggregate level. Further, implementation heterogeneity reduces statistical power and consequently the accuracy of inference from a conventional comparison with a control arm. Intention-to-treat analysis can be inadequate for bundled interventions. We developed novel process-driven, disaggregated participation metrics to examine the mechanisms of impact of the Agriculture to Nutrition (ATONU) bundled intervention (ClinicalTrials.gov identifier: NCT03152227). Logistic and beta-logistic hierarchical models were used to characterize these metrics, and generalized mixed models were employed to identify determinants of the study outcome, dietary diversity for women of reproductive age. Mediation analysis was applied to explore the underlying mechanisms by which the intervention affects the outcome through the process metrics. The determinants of greater participation should be the targets for improving the implementation of future bundled interventions.

Secondly, observed atmospheric records are often prohibitively short, with only one record typically available for study. Classical nonlinear time series models applied to explain the nonlinear DGM exhibit some of the statistical properties of the phenomena being investigated but have nothing to do with their physical properties. The data's complex dependence structure invalidates inference from classical time series models, which involve strong statistical assumptions rarely met in real atmospheric and climate data. The subsampling method may yield valid statistical inference. Atmospheric records, however, are typically too short to satisfy the asymptotic conditions for the method's validity, which necessitates enhancing subsampling with approximating models (models that share statistical properties with the series under study).

Gyrostat models (G-models) are physically sound low-order models generated from the governing equations of atmospheric dynamics, thus retaining some of their fundamental statistical and physical properties. We have demonstrated that using G-models as approximating models in place of traditional time series models results in more precise subsampling confidence intervals with improved coverage probabilities. Future work will explore other types of G-models as approximating models for inference on atmospheric data. We will adapt this technique for inference on phenomena in astrostatistics and pharmacokinetics.
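The subsampling construction at the heart of the atmospheric part can be sketched for the simplest case, a confidence interval for the mean of a single stationary series (Politis and Romano's method): recompute the statistic on every short block of the record and use the spread of the block statistics to calibrate the interval. Below is a minimal Python sketch under the assumption of a root-n convergence rate; the G-model-enhanced version from the thesis is not reproduced here.

```python
import numpy as np

def subsampling_ci(x, block_len, alpha=0.05):
    """Subsampling confidence interval for the mean of a stationary series.
    Minimal sketch assuming a sqrt(n) rate; block_len must be small
    relative to len(x) yet grow with it for asymptotic validity."""
    x = np.asarray(x, dtype=float)
    n, theta = len(x), x.mean()
    # the statistic recomputed on every overlapping block of length block_len
    block_means = np.array([x[i:i + block_len].mean()
                            for i in range(n - block_len + 1)])
    # subsampling approximation to the law of sqrt(n) * (estimate - truth)
    dist = np.sqrt(block_len) * (block_means - theta)
    lo, hi = np.quantile(dist, [alpha / 2, 1 - alpha / 2])
    return theta - hi / np.sqrt(n), theta - lo / np.sqrt(n)
```

The abstract's point is that short records strain the asymptotics this relies on, and that enhancing the procedure with a physically grounded approximating model (a G-model) improves the intervals' precision and coverage.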
24

Collective Spiking Dynamics in Cortical Networks

Wilting, Jens 24 September 2020 (has links)
No description available.
25

Towards Building a High-Performance Intelligent Radio Network through Deep Learning: Addressing Data Privacy, Adversarial Robustness, Network Structure, and Latency Requirements.

Abu Shafin Moham Mahdee Jameel (18424200) 26 April 2024 (has links)
With the increasing availability of inexpensive computing power in wireless radio network nodes, machine learning based models are being deployed in operations that traditionally relied on rule-based or statistical methods. Contemporary high-bandwidth networks make significant amounts of training data available in a comparatively short time, aiding the development of better deep learning models. Specialized deep learning models developed for wireless networks have been shown to consistently outperform traditional methods in a variety of wireless network applications.

We aim to address some of the unique challenges inherent in the wireless radio communication domain. Firstly, as data is transmitted over the air, data privacy and adversarial attacks pose heightened risks. Secondly, due to the volume of data and the time-sensitive nature of the processing required, the speed of the machine learning model becomes a significant factor, often necessitating operation within a latency constraint. Thirdly, the impact of diverse and time-varying wireless environments means that any machine learning model also needs to be generalizable. The increasing computing power present in wireless nodes provides an opportunity to offload some of the deep learning to the edge, which also impacts data privacy.

Towards this goal, we work on deep learning methods that operate along different aspects of a wireless network (network packets, error prediction, modulation classification, and channel estimation) and are able to operate within the latency constraint, while simultaneously providing better privacy and security. After proposing solutions that work in a traditional centralized learning environment, we explore edge learning paradigms where the learning happens in distributed nodes.
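One standard edge-learning pattern consistent with the privacy goals described above is federated averaging: each node trains on its own data and only model parameters travel over the air. The following is a generic sketch of the FedAvg aggregation rule, not the thesis's specific method; all names and values are illustrative.

```python
import numpy as np

def fedavg(node_params, node_sizes):
    """Aggregate parameters trained at distributed edge nodes with a
    sample-size-weighted average (the FedAvg rule). Raw data never
    leaves the nodes. Generic sketch, not the thesis's algorithm."""
    total = float(sum(node_sizes))
    return sum(p * (s / total) for p, s in zip(node_params, node_sizes))

# e.g. three hypothetical nodes with different amounts of local data
global_w = fedavg([np.array([0.2, 1.0]), np.array([0.4, 0.8]),
                   np.array([0.1, 1.2])], node_sizes=[100, 300, 100])
```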
26

Pricing of bonds and credit default swaps: Evidence from a panel of European companies

Smotlachová, Eva January 2016 (has links)
The aim of the thesis is to investigate determinants of corporate bond and CDS contract pricing using a sample of 34 European companies over the period 2008-2014. This work extends the existing literature by studying differences in the determinants of bond and CDS spreads not only across time periods, but also across sets of companies grouped by geography, industry, and profitability. The results reveal that bond and CDS spreads are generally influenced by similar factors, with a company's credit rating being the most influential. Nevertheless, the time-specific estimations suggest that firm-specific factors play a more significant role in pricing bonds, whereas market factors have a higher impact on CDS spreads. The analysis of the subsamples reveals substantial differences in regression results across individual groups of companies, which suggests the presence of idiosyncratic factors. Our conclusion is that the pricing of bonds and CDS contracts is not only time-dependent but also unique to different groups of companies, which implies the need for different pricing models for individual contracts.
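The workhorse behind spread-determinant studies of this kind is a fixed-effects panel regression. Here is a minimal sketch of the within (demeaning) estimator as a generic illustration; the thesis's actual variable set, grouping, and standard-error treatment are not reproduced.

```python
import numpy as np

def within_ols(y, X, firm_ids):
    """Fixed-effects (within) estimator: demean the spread y and the
    determinants X within each firm, then run pooled OLS. Generic
    sketch of the method, not the thesis's exact specification."""
    y = np.asarray(y, dtype=float).copy()
    X = np.asarray(X, dtype=float).copy()
    ids = np.asarray(firm_ids)
    for f in np.unique(ids):
        m = ids == f
        y[m] -= y[m].mean()          # remove the firm's own average spread
        X[m] -= X[m].mean(axis=0)    # and its average determinants
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```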
27

Technologies and design methods for a highly integrated AIS transponder / Teknologier och design metoder för en högintegrerad AIS transponder

Ramquist, Henrik January 2003 (has links)
The principle of the universal shipborne automatic identification system (AIS) is to allow automatic exchange of shipboard information between one vessel and another. Saab TransponderTech AB has an operating AIS transponder on the market, and the purpose of this report is to investigate alternative technologies that could result in a highly integrated replacement for the existing hardware.

Design aspects of a system-on-chip are discussed, such as available system-on-chip technologies, intellectual property, on-chip bus structures, and development tools. This information is applied to the existing hardware, and the integration possibilities of the various parts of the AIS transponder are investigated.

The focus is on two main transponder parts that could be replaced with highly integrated circuits. The first is the so-called digital part, for which system-on-chip platforms for different technologies have been investigated, with special interest in a highly integrated FPGA implementation. The second is the radio frequency receivers, for which alternatives to the existing superheterodyne receiver are discussed.

The conclusion drawn is that technologies exist for developing a highly integrated AIS transponder. An attractive highly integrated transponder could consist of an FPGA system-on-chip platform with subsampling digital receivers, plus additional components that are unsuitable for integration.
29

Direktsamplande digital transceiver / Direct sampling digital transceiver

Karlsson, Magnus January 2002 (has links)
Master's thesis work at ITN (the Department of Science and Technology) in the areas of A/D converter construction and RF circuit design. The major goal of the project was to investigate suitable ways of implementing direct conversion in transceivers operating in the 160 MHz band: a theoretical study followed by development of components in the Cadence design environment. A suitable A/D converter and other key parts were selected at the end of the theoretical study. A subsampling technique was applied to make the A/D converter's sampling requirements more realistic to achieve. Besides lowering the requirements on the A/D converter, it allows a simpler construction, saving more components than the subsampling adds. Subsampling adds extra noise, so an A/D converter based on the RSD algorithm was chosen to improve the error rate. To achieve a high bit-processing rate relative to the number of transistors used, a pipeline structure was selected as the conversion method. The receiver received the most attention because it is the most interesting part to optimise: A/D conversion is more difficult to construct than D/A conversion, and there is more to gain from eliminating mixers in the receiver than in the transmitter.
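The frequency arithmetic behind the subsampling choice is compact enough to show. Bandpass sampling aliases a carrier at f_c down to |f_c - k*f_s| for the nearest integer k, so a sample rate far below 2*f_c suffices as long as it still exceeds twice the signal bandwidth. The numbers below are illustrative assumptions, not the thesis's actual frequency plan:

```python
def subsampled_if(f_carrier, f_sample):
    """Intermediate frequency after bandpass (sub)sampling: the carrier
    aliases to its distance from the nearest multiple of f_sample.
    f_sample must still exceed twice the signal bandwidth (hypothetical
    example values; the thesis's frequency plan is not given here)."""
    k = round(f_carrier / f_sample)
    return abs(f_carrier - k * f_sample)

# a 160 MHz carrier sampled at 26 MHz lands at |160 - 6*26| = 4 MHz
print(subsampled_if(160e6, 26e6) / 1e6, "MHz")
```

This is why subsampling lowers the converter's requirements and eliminates mixers: the sample rate only needs to track the aliased band, although the converter's sample-and-hold stage must still pass the 160 MHz input.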
30

Sampling Oscilloscope On-Chip

Forsgren, Niklas January 2003 (has links)
Signal-integrity degradation from factors such as supply and substrate noise and crosstalk between interconnects restricts performance advances in Very Large Scale Integration (VLSI). To preserve signal integrity, accurate measurements of on-chip signals must be performed to gain insight into how physical phenomena affect the signals. High-speed digital signals can be taken off-chip through buffers, which add delay. Propagating a signal through buffers restores the signal, which is acceptable if only the information content is needed; but if the waveform itself matters, or if an analog signal is to be measured, this restoration is unwanted. Analog buffers can be used, but they are limited to a few hundred MHz. Even when the high-speed signal is taken off-chip, the bandwidth of on-chip signals is becoming so high that reliable measurement with an external oscilloscope is impossible. Therefore other alternatives must be used. In this work, an on-chip measuring circuit is designed that uses the principle of a sampling oscilloscope: only one sample is taken each period, resulting in an output frequency much lower than the input frequency. A slower signal is easier to take off-chip and can easily be processed with an ordinary oscilloscope.
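The sampling-oscilloscope principle in the last paragraph is easy to make concrete: take one sample per period, each time a tiny fraction of a period later, and the fast waveform unfolds as a slow replica whose frequency is roughly the offset fraction times the input frequency. A small sketch with illustrative numbers, not the thesis's circuit parameters:

```python
import numpy as np

def equivalent_time_capture(f_in, offset, n):
    """Sequential equivalent-time sampling: one sample per input period,
    each delayed by an extra 'offset' fraction of a period. The output
    then repeats at about offset * f_in, far below f_in.
    Illustrative parameters, not the thesis's on-chip circuit."""
    k = np.arange(n)
    t = k * (1 + offset) / f_in              # sample instants, one per period
    samples = np.sin(2 * np.pi * f_in * t)   # stand-in for the fast signal
    f_out = offset * f_in / (1 + offset)     # apparent output frequency
    return samples, f_out

samples, f_out = equivalent_time_capture(f_in=1e9, offset=1e-4, n=1000)
print(f"1 GHz input appears at {f_out/1e3:.1f} kHz")  # ~100 kHz replica
```

The slow replica is then trivial to drive off-chip and view on an ordinary oscilloscope, which is exactly the property the abstract exploits.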
