381

Investigation of phononic crystals for dispersive surface acoustic wave ozone sensors

Westafer, Ryan S. 01 July 2011 (has links)
The object of this research was to investigate dispersion in surface phononic crystals (PnCs) for application to a newly developed passive surface acoustic wave (SAW) ozone sensor. Frequency band gaps and slow sound have already been reported for PnC lattice structures. Such engineered structures are often advertised to reduce loss, increase sensitivity, and reduce device size. However, these advances have not yet been realized in the context of surface acoustic wave sensors. In early work, we computed SAW dispersion in patterned surface structures and confirmed that our finite element computations of SAW dispersion in thin films and in one-dimensional surface PnC structures agree with experimental results obtained by laser probe techniques. We analyzed the computations to guide device design in terms of sensitivity and joint spectral operating point. Next, we conducted simulations and experiments to determine the sensitivity and limit of detection of more conventional dispersive SAW devices and PnC sensors. Finally, we conducted extensive ozone detection trials on passive reflection-mode SAW devices, using distinct components of the time-dispersed response to compensate for the effect of temperature. The experimental work revealed that the devices may be used for dosimetry applications over periods of several days.
382

Modelling and Analysis of Interconnects for Deep Submicron Systems-on-Chip

Pamunuwa, Dinesh January 2003 (has links)
The last few decades have been a very exciting period in the development of micro-electronics and brought us to the brink of implementing entire systems on a single chip, on a hitherto unimagined scale. However, an unforeseen challenge has cropped up in the form of managing wires, which have become the main bottleneck in performance, masking the blinding speed of active devices. A major problem is that increasingly complicated effects need to be modelled, but the computational complexity of any proposed model needs to be low enough to allow many iterations in a design cycle.

This thesis addresses the issue of closed form modelling of the response of coupled interconnect systems. Following a strict mathematical approach, second order models for the transfer functions of coupled RC trees based on the first and second moments of the impulse response are developed. The 2-pole-1-zero transfer function that is the best possible from the available information is obtained for the signal path from each driver to the output in multiple aggressor systems. This allows the complete response to be estimated accurately by summing up the individual waveforms. The model represents the minimum complexity for a 2-pole-1-zero estimate, for this class of circuits.

Also proposed are new techniques for the optimisation of wires in on-chip buses. Rather than minimising the delay over each individual wire, the configuration that maximises the total bandwidth over a number of parallel wires is investigated. It is shown from simulations that there is a unique optimal solution which does not necessarily translate to the maximum possible number of wires, and in fact deviates considerably from it when the resources available for repeaters are limited. Analytic guidelines dependent only on process parameters are derived for optimal sizing of wires and repeaters.

Finally, regular tiled architectures with a common communication backplane are proposed as the most efficient way to implement systems-on-chip in the deep submicron regime. This thesis also considers the feasibility of implementing a regular packet-switched network-on-chip in a typical future deep submicron technology. All major physical issues and challenges are discussed for two different architectures and important limitations are identified.
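The moment-matching idea behind this kind of interconnect model can be sketched in a few lines. The following is a minimal illustration, not the thesis's method: it builds a two-node RC ladder (a simple RC tree), computes the first moments of its transfer function, and matches them to a plain two-pole model (the thesis uses a richer 2-pole-1-zero form); all component values are illustrative.

```python
import numpy as np

# Two-node RC ladder: Vin -- R1 -- node1 -- R2 -- node2, with C1, C2 to ground.
# Transfer function H(s) = L (G + sC)^{-1} b = sum_k m_k s^k, with moments
# m_k = (-1)^k L (G^{-1}C)^k G^{-1} b; m1 is minus the Elmore delay.
R1 = R2 = 1.0          # series resistances (ohms), illustrative values
C1 = C2 = 1.0          # node capacitances (farads)

G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2]])   # nodal conductance matrix
C = np.diag([C1, C2])                  # nodal capacitance matrix
b = np.array([1/R1, 0.0])              # input injected through R1
L = np.array([0.0, 1.0])               # observe the far-end node

Ginv_b = np.linalg.solve(G, b)
A = np.linalg.solve(G, C)              # G^{-1} C
m0 = L @ Ginv_b                        # = 1 for this driven RC tree
m1 = -L @ (A @ Ginv_b)                 # = -(Elmore delay)
m2 = L @ (A @ A @ Ginv_b)

# Two-pole fit: 1/(1 + b1 s + b2 s^2) expands with m1 = -b1, m2 = b1^2 - b2.
b1 = -m1
b2 = m1**2 - m2
poles = np.roots([b2, b1, 1.0])
print(m1, m2, poles)   # m1 = -3 (Elmore delay 3 s), m2 = 8; both poles stable
```

For these values the Elmore delay is R1*(C1+C2) + R2*C2 = 3 s, and the matched two-pole model has two real negative poles, so the reduced model is stable.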
383

權重效用在網路問題上之研究 / A Study on Weighted Utilizations of Network Dimensioning Problems

程雅惠, Cheng, Ya Hui Unknown Date (has links)
We propose two mathematical models with weighted utility functions for fair bandwidth allocation and QoS routing in communication networks that offer multiple services to several classes of users. The formulation and numerical experiments are carried out in a general utility-maximizing framework. In this work, instead of being fixed, the weight on each utility function is taken as a free variable. The objective of this thesis is to find the structure of the optimal weights that maximize the weighted sum of utilities of the bandwidth allocation for each class. We address this by proposing two models in terms of fairness, constructed to compare different choices of optimal weights. For Model I, the optimal weights form a vector with a one for the class with the largest utility value and zeros for all other classes; the total weighted utility thus depends only on that single class. For Model II, the optimal weights are all equal and sum to one. These results are proved and illustrated numerically with the GAMS software.
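The Model I result described above (all weight concentrated on the largest utility) is easy to verify numerically for the simplest case of fixed utility values; the sketch below is illustrative and not the thesis's full bandwidth-allocation model.

```python
import numpy as np

# For fixed utilities u_i, maximizing sum_i w_i * u_i over the simplex
# {w >= 0, sum_i w_i = 1} always attains max(u), and a one-hot vector at
# argmax(u) achieves it: the linear objective is maximized at a vertex.
rng = np.random.default_rng(0)
u = np.array([0.3, 0.9, 0.5, 0.7])          # illustrative utility values

# Spot-check: no random point on the simplex beats the one-hot optimum.
samples = rng.dirichlet(np.ones(len(u)), size=1000)
values = samples @ u
w_star = np.zeros(len(u))
w_star[np.argmax(u)] = 1.0                  # optimal weights: one-hot
print(values.max() <= u.max() + 1e-12, w_star @ u == u.max())
```

With the weights themselves free variables, the weighted-sum objective becomes linear in w for fixed utilities, which is why the optimum sits at a vertex of the simplex, exactly the one-hot structure the thesis proves for Model I.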
384

Applications of nonparametric methods in economic and political science / Anwendungen nichtparametrischer Verfahren in den Wirtschafts- und Staatswissenschaften

Heidenreich, Nils-Bastian 11 April 2011 (has links)
No description available.
385

Application-driven Memory System Design on FPGAs

Dai, Zefu 08 January 2014 (has links)
Moore's Law has helped Field Programmable Gate Arrays (FPGAs) scale continuously in speed, capacity and energy efficiency, allowing the integration of ever-larger systems into a single FPGA chip. This brings challenges to the productivity of developers in leveraging the sea of FPGA resources. Higher levels of design abstraction and programming models are needed to improve design productivity, which in turn require architectural support for memory on FPGAs. While previous efforts focus on computation-centric applications, we take a bandwidth-centric approach to designing memory systems. In particular, we investigate the scheduling, buffered switching and searching problems, which are common to a wide range of FPGA applications. Although the bandwidth problem has been extensively studied for general-purpose computing and application-specific integrated circuit (ASIC) designs, the proposed techniques are often not applicable to FPGAs. To achieve optimized design implementations, designers need to take into consideration both the underlying FPGA physical characteristics and the requirements of applications. We therefore extract design requirements from four driving applications for the selected problems, and address them by exploiting the physical architectures and available resources of FPGAs. Towards solving the selected problems, we advance the state of the art with a scheduling algorithm, a switch organization and an analytical cache model. These lead to performance improvements, resource savings and the feasibility of new approaches to well-known problems.
387

Rapid application mobilization and delivery for smartphones

Tsao, Cheng-Lin 02 July 2012 (has links)
Smartphones form an emerging mobile computing platform with hybrid characteristics borrowed from PC and feature phone environments. While maintaining the mobility and portability of feature phones, smartphones offer advanced computation capabilities and network connectivity. Although the smartphone platform can support PC-grade applications, it exhibits fundamentally different characteristics from the PC platform. Two important problems arise on the smartphone platform: how to mobilize applications and how to deliver them effectively. Traditional application mobilization involves significant development cost and typically provides limited functionality relative to the PC version. Since mobile applications rely on the embedded wireless interfaces of smartphones for network access, application performance is impacted by the inferior characteristics of the wireless networks. Our first contribution is super-aggregation, a rapid application delivery protocol that uses the multiple interfaces intelligently in tandem to achieve a performance "better than the sum of throughputs" achievable through each of the interfaces individually. The second contribution is MORPH, a remote computing protocol for heterogeneous devices that transforms application views on the PC platform into smartphone-friendly views. MORPH virtualizes application views, independent of the UI framework used, into an abstract representation called the virtual view. It allows transformation services to be easily programmed to realize a smartphone-friendly view by manipulating the virtual view. The third contribution is the system design of super-aggregation and MORPH to achieve rapid application delivery and mobilization. Both solutions require only software modifications that can be easily deployed to smartphones.
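The basic idea of bandwidth aggregation across interfaces can be sketched very simply. The following is an illustrative sketch of proportional striping only, not the thesis's super-aggregation protocol (which adds the intelligence that pushes performance past the plain sum); the interface names and throughput numbers are made up.

```python
# Stripe data chunks across multiple wireless interfaces in proportion
# to their estimated throughputs, so that all interfaces finish their
# share at roughly the same time (naive bandwidth aggregation baseline).
throughputs = {"wifi": 8.0, "cellular": 2.0}   # Mbit/s, illustrative estimates
total_chunks = 100

total_rate = sum(throughputs.values())
shares = {nic: round(total_chunks * rate / total_rate)
          for nic, rate in throughputs.items()}
print(shares)   # {'wifi': 80, 'cellular': 20}
```

A proportional split equalizes transfer-completion times across interfaces; the paper's point is that a smarter scheduler can do better than even the ideal sum, for example by steering loss-sensitive traffic away from a lossy link.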
388

Data-driven estimation for Aalen's additive risk model

Boruvka, Audrey 02 August 2007 (has links)
The proportional hazards model developed by Cox (1972) is by far the most widely used method for regression analysis of censored survival data. Application of the Cox model to more general event history data has become possible through extensions using counting process theory (e.g., Andersen and Borgan (1985), Therneau and Grambsch (2000)). With its development based entirely on counting processes, Aalen’s additive risk model offers a flexible, nonparametric alternative. Ordinary least squares, weighted least squares and ridge regression have been proposed in the literature as estimation schemes for Aalen’s model (Aalen (1989), Huffer and McKeague (1991), Aalen et al. (2004)). This thesis develops data-driven parameter selection criteria for the weighted least squares and ridge estimators. Using simulated survival data, these new methods are evaluated against existing approaches. A survey of the literature on the additive risk model and a demonstration of its application to real data sets are also provided. / Thesis (Master, Mathematics & Statistics), Queen's University, 2007.
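The ordinary least squares scheme for Aalen's additive model mentioned above can be sketched on simulated data: at each ordered event time, the increment of the cumulative regression functions B(t) is the least-squares solution of the at-risk design matrix against the event-indicator vector. This is an illustrative sketch under a constant additive hazard with one covariate; the simulation setup and names are not from the thesis.

```python
import numpy as np

# Simulate survival data with an additive hazard  h(t|x) = b0 + b1*x.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])  # intercept + covariate
beta = np.array([0.5, 1.0])                                  # true coefficients
T = rng.exponential(1.0 / (X @ beta))                        # event times (no censoring)

# Aalen OLS estimator: at each event time, dB = (Y'Y)^{-1} Y' dN, where Y is
# the design matrix with rows zeroed for subjects no longer at risk and dN
# marks the subject who fails.  lstsq also copes once Y loses full rank.
B = np.zeros(2)
B_path = []
at_risk = np.ones(n, dtype=bool)
for i in np.argsort(T):
    Y = X * at_risk[:, None]           # at-risk design matrix
    dN = np.zeros(n)
    dN[i] = 1.0                        # one event at this time
    dB, *_ = np.linalg.lstsq(Y, dN, rcond=None)
    B += dB
    B_path.append(B.copy())
    at_risk[i] = False
B_path = np.array(B_path)
print(B_path.shape)                    # (200, 2): cumulative B(t) at each event
```

Under the true model, B(t) grows roughly linearly with slopes b0 and b1; the weighted least squares and ridge variants studied in the thesis replace the plain lstsq step with weighted or penalized solves.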
389

Design of Active CMOS Multiband Ultra-Wideband Receiver Front-End

Reja, Md Mahbub Unknown Date
No description available.
390

Maximum-likelihood kernel density estimation in high-dimensional feature spaces

Van der Walt, Christiaan Maarten January 2014 (has links)
With the advent of the internet and advances in computing power, the collection of very large high-dimensional datasets has become feasible; understanding and modelling high-dimensional data has thus become a crucial activity, especially in the field of pattern recognition. Since non-parametric density estimators are data-driven and do not require or impose a pre-defined probability density function on data, they are very powerful tools for probabilistic data modelling and analysis. Conventional non-parametric density estimation methods, however, originated from the field of statistics and were not originally intended to perform density estimation in high-dimensional feature spaces, as is often encountered in real-world pattern recognition tasks. We therefore address the fundamental problem of non-parametric density estimation in high-dimensional feature spaces in this study. Recent advances in maximum-likelihood (ML) kernel density estimation have shown that kernel density estimators hold much promise for estimating non-parametric probability density functions in high-dimensional feature spaces. We therefore derive two new iterative kernel bandwidth estimators from the ML leave-one-out objective function and also introduce a new non-iterative kernel bandwidth estimator (based on the theoretical bounds of the ML bandwidths) for the purpose of bandwidth initialisation. We name the iterative kernel bandwidth estimators the minimum leave-one-out entropy (MLE) and global MLE estimators, and name the non-iterative kernel bandwidth estimator the MLE rule-of-thumb estimator.
We compare the performance of the MLE rule-of-thumb estimator and conventional kernel density estimators on artificial data with data properties that are varied in a controlled fashion and on a number of representative real-world pattern recognition tasks, to gain a better understanding of the behaviour of these estimators in high-dimensional spaces and to determine whether these estimators are suitable for initialising the bandwidths of iterative ML bandwidth estimators in high dimensions. We find that there are several regularities in the relative performance of conventional kernel density estimators across different tasks and dimensionalities and that the Silverman rule-of-thumb bandwidth estimator performs reliably across most tasks and dimensionalities of the pattern recognition datasets considered, even in high-dimensional feature spaces. Based on this empirical evidence and the intuitive theoretical motivation that the Silverman estimator optimises the asymptotic mean integrated squared error (assuming a Gaussian reference distribution), we select this estimator to initialise the bandwidths of the iterative ML kernel bandwidth estimators compared in our simulation studies. We then perform a comparative simulation study of the newly introduced iterative MLE estimators and other state-of-the-art iterative ML estimators on a number of artificial and real-world high-dimensional pattern recognition tasks. We illustrate with artificial data (guided by theoretical motivations) under what conditions certain estimators should be preferred and we empirically confirm on real-world data that no estimator performs optimally on all tasks and that the optimal estimator depends on the properties of the underlying density function being estimated. 
We also observe an interesting case of the bias-variance trade-off where ML estimators with fewer parameters than the MLE estimator perform exceptionally well on a wide variety of tasks; however, for the cases where these estimators do not perform well, the MLE estimator generally performs well. The newly introduced MLE kernel bandwidth estimators prove to be a useful contribution to the field of pattern recognition, since they perform optimally on a number of real-world pattern recognition tasks investigated and provide researchers and practitioners with two alternative estimators to employ for the task of kernel density estimation. / PhD (Information Technology), North-West University, Vaal Triangle Campus, 2014
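The two ingredients compared throughout this abstract, the ML leave-one-out objective and the Silverman rule-of-thumb initialiser, are easy to illustrate in one dimension. This is a generic sketch (simple grid search over a single Gaussian-kernel bandwidth), not the thesis's MLE or global MLE estimators, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 300)      # synthetic 1-D sample
n = x.size

def loo_log_likelihood(h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * h * h)) / (np.sqrt(2.0 * np.pi) * h)
    np.fill_diagonal(K, 0.0)       # each point scored by the other n-1 points
    dens = K.sum(axis=1) / (n - 1)
    return np.log(dens).sum()

# Silverman rule of thumb (Gaussian reference): h = 1.06 * sigma * n^(-1/5).
h_silverman = 1.06 * x.std(ddof=1) * n ** (-1.0 / 5.0)

# ML bandwidth by grid search over the LOO objective, initialised by the
# rule of thumb only in the sense that the grid brackets it.
grid = np.linspace(0.05, 2.0, 80)
h_ml = grid[np.argmax([loo_log_likelihood(h) for h in grid])]
print(h_silverman, h_ml)
```

For Gaussian data the two bandwidths land close together, which matches the empirical finding above that the Silverman estimator is a reliable initialiser; on multimodal or heavy-tailed data the LOO-ML choice typically departs from the rule of thumb.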
