About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

A case study in handling over-dispersion in nematode count data

Kreider, Scott Edwin Douglas January 1900 (has links)
Master of Science / Department of Statistics / Leigh W. Murray / Traditionally the Poisson process is used to model count response variables. A problem arises, however, when the response variable contains an inordinate number of both zeros and large observations relative to the mean of a typical Poisson process. In such cases the variance of the data exceeds the mean, and the data are over-dispersed with respect to the Poisson distribution, for which the mean and variance are equal. This case study examines several common and less common ways to account for this over-dispersion in a specific set of nematode count data, using various procedures in SAS 9.2. The methods include, but are not limited to, a basic linear regression model, a generalized linear (log-linear) model, a zero-inflated Poisson model, a generalized Poisson model, and a Poisson hurdle model. Based on the AIC statistics, the generalized log-linear models with the Pearson-scale and deviance-scale corrections perform best. However, based on residual plots, none of the models appear to fit the data adequately. Further work with non-parametric methods or the negative binomial distribution may yield better results.
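The over-dispersion described above is easy to demonstrate numerically. A minimal sketch (in Python rather than the SAS 9.2 procedures used in the study; the parameter values are illustrative assumptions): zero inflation pushes the sample variance well above the sample mean, violating the Poisson mean-variance equality.

```python
import random
from math import exp
from statistics import mean, pvariance

random.seed(42)

def zip_sample(pi_zero, lam, n):
    """Draw n counts from a zero-inflated Poisson: with probability
    pi_zero emit a structural zero, otherwise a Poisson(lam) draw."""
    counts = []
    for _ in range(n):
        if random.random() < pi_zero:
            counts.append(0)
        else:
            # Knuth's method for a Poisson draw (fine for small lam)
            L, k, p = exp(-lam), 0, 1.0
            while True:
                p *= random.random()
                if p <= L:
                    break
                k += 1
            counts.append(k)
    return counts

data = zip_sample(pi_zero=0.4, lam=5.0, n=5000)
m, v = mean(data), pvariance(data)
print(f"mean={m:.2f} variance={v:.2f}")  # variance well above mean
```

For a Poisson sample the two statistics would agree; here the theoretical mean is 3 while the theoretical variance is 9.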
342

Regression Models for Count Data in R

Zeileis, Achim, Kleiber, Christian, Jackman, Simon January 2007 (has links) (PDF)
The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of zero-inflated and hurdle regression models in the functions zeroinfl() and hurdle() from the package pscl is introduced. It reuses the design and functionality of the basic R functions just as the underlying conceptual tools extend the classical models. Both model classes are able to incorporate over-dispersion and excess zeros - two problems that typically occur in count data sets in economics and the social and political sciences - better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
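For readers unfamiliar with zero inflation, a small illustrative sketch of the zero-inflated Poisson probability mass function (written in Python for self-containment rather than R; pscl's zeroinfl() additionally fits regression coefficients for both components, which is omitted here):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def zip_pmf(k, pi_zero, lam):
    """Zero-inflated Poisson: a structural zero with probability pi_zero,
    otherwise an ordinary Poisson(lam) count."""
    base = (1.0 - pi_zero) * poisson_pmf(k, lam)
    return pi_zero + base if k == 0 else base

# Excess zeros: compare P(0) with and without inflation
lam, pi_zero = 2.0, 0.3
print(f"Poisson P(0) = {poisson_pmf(0, lam):.3f}")       # ~0.135
print(f"ZIP     P(0) = {zip_pmf(0, pi_zero, lam):.3f}")  # ~0.395
```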
343

OPTIMIZED LOW BIT RATE PCM/FM TELEMETRY WITH WIDE IF BANDWIDTHS

Law, Eugene 10 1900 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / This paper will present the results of some experiments with non-coherent, single symbol detection of pulse code modulation (PCM)/frequency modulation (FM) where the receiver intermediate frequency (IF) bandwidth is much wider than the bit rate. The experiments involved varying the peak deviation and measuring the bit error probability (BEP) at various signal energy per bit to noise power spectral density ratios (E(b)/N(o)). The experiments showed that the optimum peak-to-peak deviation was about 0.7 to 0.8 times the –3 dB IF bandwidth and that the E(b)/N(o) required for a given BEP increased as the ratio of IF bandwidth to bit rate increased. Further, bi-phase-level/FM performed slightly better than non-return-to-zero-level (NRZ-L)/FM with an ac coupled RF signal generator and IF bandwidths much wider than the bit rate.
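For context, the textbook closed form for ideal single-symbol non-coherent detection of orthogonal binary FSK is Pb = 0.5·exp(-Eb/2N0). The sketch below evaluates that idealized baseline only; the paper's wide-IF PCM/FM measurements show an additional Eb/N0 penalty that grows with the ratio of IF bandwidth to bit rate, which this formula does not capture.

```python
from math import exp

def noncoherent_fsk_bep(ebno_db):
    """Ideal single-symbol non-coherent orthogonal binary FSK:
    Pb = 0.5 * exp(-(Eb/N0) / 2), with Eb/N0 given in dB."""
    ebno = 10.0 ** (ebno_db / 10.0)   # dB -> linear ratio
    return 0.5 * exp(-ebno / 2.0)

for db in (8, 10, 12):
    print(f"Eb/N0 = {db:2d} dB -> BEP = {noncoherent_fsk_bep(db):.2e}")
```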
344

Regression Models for Count Data in R

Zeileis, Achim, Kleiber, Christian, Jackman, Simon 29 July 2008 (has links) (PDF)
The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of hurdle and zero-inflated regression models in the functions hurdle() and zeroinfl() from the package pscl is introduced. It reuses the design and functionality of the basic R functions just as the underlying conceptual tools extend the classical models. Both hurdle and zero-inflated models are able to incorporate over-dispersion and excess zeros - two problems that typically occur in count data sets in economics and the social sciences - better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice. (authors' abstract)
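The hurdle model differs from the zero-inflated one by modelling all zeros in a first stage and drawing positive counts from a truncated distribution. A minimal sketch of the Poisson hurdle probability mass function (Python for self-containment, not pscl's hurdle(), which also handles regressors):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def hurdle_pmf(k, p_zero, lam):
    """Poisson hurdle: P(0) = p_zero; positive counts follow a
    zero-truncated Poisson(lam) scaled by the crossing probability."""
    if k == 0:
        return p_zero
    return (1.0 - p_zero) * poisson_pmf(k, lam) / (1.0 - poisson_pmf(0, lam))

lam, p_zero = 2.0, 0.5
print(f"hurdle P(0) = {hurdle_pmf(0, p_zero, lam):.3f}")
print(f"hurdle P(1) = {hurdle_pmf(1, p_zero, lam):.3f}")
```

Unlike the zero-inflated mixture, the hurdle form can model either more or fewer zeros than the parent Poisson.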
345

Hybrid van der Waals heterostructures of zero-dimensional and two-dimensional materials

Zheng, Zhikun, Zhang, Xianghui, Neumann, Christof, Emmrich, Daniel, Winter, Andreas, Vieker, Henning, Liu, Wei, Lensen, Marga, Gölzhäuser, Armin, Turchanin, Andrey 11 December 2015 (has links) (PDF)
van der Waals heterostructures meet other low-dimensional materials. Stacking of about 1 nm thick nanosheets with out-of-plane anchor groups functionalized with fullerenes integrates this zero-dimensional material into layered heterostructures with a well-defined chemical composition and without degrading the mechanical properties. The developed modular and broadly applicable approach enables the incorporation of other low-dimensional materials, e.g. nanoparticles or nanotubes, into heterostructures, significantly extending the range of possible building blocks.
346

Essays on climate change, energy, and independence

Comerford, David January 2013 (has links)
This thesis contains three separate papers. 'A balance of questions: what can we ask of climate change economics?' is a critical analysis of the economics of climate change literature. It concludes that much more research effort needs to be put into studying the investment needed for a transition to a zero-carbon energy infrastructure, rather than focusing on determining the social cost of carbon. 'The interaction of scale economies and energy quality' is a theoretical study of the ability of economies to operate given different qualities of energy resources. 'Measuring costs and benefits of independence' is an analysis of the welfare costs to Catalonia from reduced trade, which may arise on independence from Spain. These costs are set against the benefits to Catalonia of not paying fiscal transfers to the rest of Spain.
347

Exploring the Scalability and Performance of Networks-on-Chip with Deflection Routing in 3D Many-core Architecture

Weldezion, Awet Yemane January 2016 (has links)
Three-Dimensional (3D) integration of circuits based on die and wafer stacking using through-silicon vias (TSVs) is a critical technology for enabling "more than Moore", i.e. functional integration of devices beyond pure scaling ("more Moore"). In particular, the scaling from multi-core to many-core architecture is an excellent candidate for such integration. 3D systems design is a challenging and complex process involving the integration of heterogeneous technologies. It is also expensive to prototype because the 3D industrial ecosystem is not yet complete and ready for low-cost mass production. Networks-on-Chip (NoCs) efficiently facilitate the communication of massively integrated cores in 3D many-core architecture. In this thesis, scalability and performance issues of NoCs are explored in terms of the architecture, organization and functionality of many-core systems. First, we evaluate on-chip network performance in massively integrated many-core architecture as network size grows. We propose link and channel models to analyze the network traffic and hence the performance. We develop a NoC simulation framework to evaluate the performance of a deflection routing network as the architecture scales up to 1000 cores. We propose and perform comparative analysis of 3D processor-memory model configurations in scalable many-core architectures. Second, we investigate how deflection routing NoCs can be designed to maximize the benefit of the fast TSVs through clock pumping techniques. We propose multi-rate models for inter-layer communication, and quantify the performance benefit through cycle-accurate simulations for various configurations of 3D architectures. Finally, the complexity of massively integrated many-core architecture by itself brings a multitude of design challenges, such as the high cost of prototyping, the increasing complexity of the technology, the irregularity of the communication network, and the lack of reliable simulation models.
We formulate a zero-load average distance model that accurately predicts the performance of deflection routing networks in the absence of data flow by capturing the average distance of a packet with spatial and temporal probability distributions of traffic. The thesis research goals are to explore the design space of vertical integration for many-core applications, and to provide solutions to 3D technology challenges through architectural innovations. We believe the research findings presented in the thesis contribute to addressing a few of the many challenges in the combined field of many-core architectural design and 3D integration technology. / QC 20151221
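The zero-load average distance idea can be illustrated in its simplest setting: uniform random traffic on a k×k×k mesh with minimal-hop routing (an illustrative assumption; the thesis model covers deflection routing with general spatial and temporal traffic distributions). The brute-force average matches a simple closed form.

```python
from itertools import product

def avg_distance_bruteforce(k):
    """Average Manhattan (minimal-hop) distance between uniformly chosen
    source/destination nodes of a k x k x k mesh, self-traffic included."""
    nodes = list(product(range(k), repeat=3))
    total = sum(abs(a - x) + abs(b - y) + abs(c - z)
                for (a, b, c) in nodes for (x, y, z) in nodes)
    return total / len(nodes) ** 2

def avg_distance_closed_form(k):
    # Per dimension the mean of |i - j| over uniform i, j is (k^2 - 1)/(3k);
    # the three dimensions are independent and their hop counts add.
    return 3 * (k * k - 1) / (3 * k)

for k in (2, 4, 6):
    print(k, avg_distance_bruteforce(k), avg_distance_closed_form(k))
```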
348

Evaluating the feasibility of 'zero carbon' compact dwellings in urban areas

Steijger, L. A. January 2013 (has links)
Reducing the carbon footprint of domestic properties is an ever-increasing priority, owing both to global warming and to the social impact of rising energy costs. Although compulsory building standards are set by Building Regulations Part L1, the Code for Sustainable Homes (CSH) sets more stringent requirements, beyond those of the Building Regulations, aimed at zero carbon emissions during occupation; it requires all new homes to be zero carbon by 2016. Land scarcity and the falling number of people per household force developers to build compact apartment-based dwellings on brownfield sites, constraining the design. The aim of this research is to understand the effect of practical constraints of real building design and technology on achieving zero carbon performance in compact urban dwellings in a maritime northern European climate. In this work, currently commercially available renewable generation technologies are evaluated for their suitability in a compact urban setting. A model-based approach is developed to evaluate the energy consumption (both regulated and unregulated) and the energy balance under the specific constraints of compact urban buildings. Graphical representation enables the introduction of a demand envelope, which shows the boundaries of the minimum and maximum expected thermal and electrical energy consumption over a one-year period. The research has three key findings: 1. Owing to variations in energy consumption by the occupants, mainly in the unregulated energy consumption, multiple renewable energy technologies have to be combined to achieve the lowest possible carbon emissions. 2. Although the combination of PV, CHP and HP is the generation option with the lowest carbon emissions, it is not completely carbon free when producing the required electrical and thermal energy. 
This suggests a high likelihood that zero-carbon energy generation cannot be achieved in this case study of a compact urban dwelling with currently available technology. 3. The simulations show that with highly insulated dwellings the amount of space heating required is less than 10% of the overall energy consumption, as opposed to the 60% generally seen in the building industry. Subsequent on-site measurements showed that just under 30% of the total energy consumption was used for space heating, higher than the simulated value but still less than half that of a conventional dwelling. The main academic recommendation resulting from this research is further ongoing research into new generation technologies as they mature. Recommendations for the sponsoring company include continuing the measurements at the case study building to confirm the energy consumption and generation findings so far.
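The demand-envelope concept can be sketched with entirely hypothetical monthly figures (all numbers below are invented for illustration, not data from the study): the envelope is the per-month minimum and maximum across consumption scenarios, and annual generation is compared against both bounds.

```python
# Hypothetical monthly demand scenarios (kWh) for a compact dwelling:
# several occupancy profiles whose spread is dominated by unregulated loads.
scenarios = [
    [420, 390, 350, 300, 260, 230, 220, 240, 280, 330, 380, 410],  # low use
    [520, 480, 430, 370, 330, 300, 290, 310, 360, 410, 470, 510],  # typical
    [640, 600, 540, 470, 420, 390, 380, 400, 450, 520, 580, 630],  # high use
]

# Demand envelope: per-month minimum and maximum across scenarios
envelope = [(min(m), max(m)) for m in zip(*scenarios)]

# Hypothetical monthly renewable generation (kWh)
generation = [300, 330, 420, 480, 540, 570, 560, 520, 440, 370, 310, 280]

annual_gen = sum(generation)
lo = sum(e[0] for e in envelope)   # best-case annual demand
hi = sum(e[1] for e in envelope)   # worst-case annual demand
print(f"annual generation {annual_gen} kWh; demand envelope {lo}-{hi} kWh")
print("zero-carbon in best case:", annual_gen >= lo)
print("zero-carbon in worst case:", annual_gen >= hi)
```

With these invented numbers generation covers the bottom of the envelope but not the top, mirroring the finding that occupant variation can defeat a nominally zero-carbon design.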
349

Identification and Simulation Methods for Nonlinear Mechanical Systems Subjected to Stochastic Excitation

Josefsson, Andreas January 2011 (has links)
With an ongoing desire to improve product performance, in combination with the continuously growing complexity of engineering structures, there is a need for well-tested and reliable engineering tools that can aid decision making and facilitate efficient and effective product development. The technical assessment of the dynamic characteristics of mechanical systems often relies on linear analysis techniques, which are well developed and generally accepted. However, sometimes the errors due to linearization are too large to be acceptable, making it necessary to take nonlinear effects into account. Many existing analysis techniques for nonlinear mechanical systems build on the assumption that the input excitation of the system is periodic and deterministic. This often results in highly inefficient analysis procedures when nonlinear mechanical systems are studied in a non-deterministic environment where the excitation of the system is stochastic. The aim of this thesis is to develop and validate new, efficient analysis methods for the theoretical and experimental study of nonlinear mechanical systems under stochastic excitation, with emphasis on two specific problem areas: forced response simulation and system identification from measurement data. A fundamental concept in the presented methodology is to model the nonlinearities as external forces acting on an underlying linear system, thereby making it possible to use much of the linear theory for simulation and identification. The developed simulation methods utilize a digital filter to achieve a stable and condensed representation of the linear subparts of the system, which is then solved recursively at each time step together with the counteracting nonlinear forces. 
The result is computationally efficient simulation routines, which are particularly suitable for performance predictions when the input excitation consists of long segments of discrete data representing a realization of the stochastic excitation of the system. Similarly, the presented identification methods take advantage of linear multiple-input-multiple-output theories for random data by using the measured responses to create artificial inputs which can separate the linear system from the nonlinear parameters. The developed methods have been tested with extensive numerical simulations and with experimental test rigs, with promising results. Furthermore, an industrial case study of a wave energy converter with nonlinear characteristics has been carried out, and an analysis procedure capable of evaluating the performance of the system in non-deterministic ocean waves is presented.
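The core recursion (nonlinearity fed back as an external force on a linear system at each time step) can be sketched for a single-degree-of-freedom Duffing oscillator under random excitation. The sketch uses a plain semi-implicit Euler step, not the digital-filter representation developed in the thesis, and all parameter values are illustrative assumptions:

```python
import random
from math import sqrt

random.seed(1)

# Underlying linear SDOF system: m x'' + c x' + k x = f_ext
m, c, k = 1.0, 0.4, 100.0
k3 = 5e4           # cubic (Duffing) stiffness, treated as an external force
dt = 1e-3          # time step, s

x, v = 0.0, 0.0
response = []
for _ in range(20000):
    f_stoch = random.gauss(0.0, 10.0)   # stochastic excitation sample
    f_nl = -k3 * x ** 3                 # counteracting nonlinear force
    a = (f_stoch + f_nl - c * v - k * x) / m
    v += a * dt                         # semi-implicit (symplectic) Euler
    x += v * dt
    response.append(x)

rms = sqrt(sum(s * s for s in response) / len(response))
print(f"RMS displacement: {rms:.4f}")
```

Only the linear part is fixed in the recursion; swapping in a different nonlinearity changes just the `f_nl` line, which is the appeal of the external-force formulation.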
350

Design, development, and evaluation of a scalable micro perforated drug delivery device capable of long-term zero order release

Rastogi, Ashish 01 June 2010 (has links)
Chronic diseases can often be managed by constantly delivering therapeutic amounts of drug for prolonged periods. Controlled release over an extended duration would replace the need for multiple and frequent dosing. Local drug release would provide an added benefit, as a lower dose of drug at the target site would be needed compared with the higher doses required by whole-body administration. This would provide maximum efficacy with minimum side effects. Nonetheless, a problem with the known implantable drug delivery devices is that the delivery rate cannot be controlled, which leads to drug being released in an unpredictable pattern and results in poor therapeutic management of patients. This dissertation describes the development of an implantable drug delivery system that is capable of long-term zero-order local release of drugs. The device can be optimized to deliver any pharmaceutical agent for any time period up to several years while maintaining a controlled and desired rate. Initially, significant effort was dedicated to the characterization, biocompatibility, and loading capacity of nanoporous metal surfaces for controlled release of drugs. Physical characterization of the nanoporous wafers using scanning electron microscopy (SEM) and atomic force microscopy (AFM) yielded 3.55 x 10⁴ nm³ of pore volume per μm² of wafer surface. An in vitro drug release study using 2-octyl cyanoacrylate and methyl orange as the polymer-drug matrix was conducted, and after 7 days, 88.1 ± 5.0 % of the drug was released. However, the initial goal of achieving zero-order drug release rates for long periods of time was not met. The search for a better delivery system led to the design of a perforated microtube. The delivery system was designed and appropriate dimensions for the device size and hole size were estimated. Polyimide microtubes in different sizes (125-1000 μm) were used. 
Micro holes with dimensions ranging from 20-600 μm were fabricated on these tubes using photolithography, laser drilling, or manual drilling procedures. Small molecules such as crystal violet, prednisolone, and ethinyl estradiol were successfully loaded inside the tubes in powder or solution form using manual filling or capillary filling methods. A drug loading of 0.05 – 5.40 mg was achieved, depending on the tube size and the drug filling method used. The delivery system in different dimensions was characterized by performing in vitro release studies in phosphate buffered saline (pH 7.1-7.4) and in vitreous humor from the rabbit’s eye at 37.0 ± 1.0°C for up to four weeks. The number of holes was varied between 1 and 3. The tubes were loaded with crystal violet (CV) and ethinyl estradiol (EE). Linear release rates with R²>0.9900 were obtained for all groups with CV and EE. Release rates of 7.8±2.5, 16.2±5.5, and 22.5±6.0 ng/day for CV and 30.1±5.8 ng/day for EE were obtained for small tubes (30 μm hole diameter; 125 μm tube diameter). For large tubes (362-542 μm hole diameter; 1000 μm tube diameter), release rates of 10.8±4.1, 15.8±4.8 and 22.1±6.7 μg/day were observed in vitro in PBS, and a release rate of 5.8±1.8 μg/day was observed ex vivo in vitreous humor. The delivery system was also evaluated for its ability to produce biologically significant responses in cells stably transfected with an estrogen receptor/luciferase construct (T47D-KBluc cells). These cells are engineered to produce a constant luminescent signal in proportion to drug exposure. Average luminescence values of 1144.8±153.8 and 1219.9±127.7 RLU/day (RLU = relative luminescence units) were observed, again indicating the capability of the device for long-term zero-order release. The polyimide device was characterized for biocompatibility. 
An automated goniometer was used to determine the contact angle for the device, which was found to be 63.7±3.7 degrees, indicating that it is hydrophilic and favors cell attachment. In addition, after 72 h of incubation with mammalian cells (RAW 267.4), a high cell distribution was observed on the device's surface. The polyimide tubes were also investigated for any signs of inflammation using the inflammatory markers TNF-α and IL-1β. No significant levels of either TNF-α or IL-1β were detected with the polyimide device. The results indicated that the polyimide tubes were biocompatible and did not produce an inflammatory response. / text
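Zero-order release means cumulative release grows linearly in time, so the release rate is the slope of an ordinary least-squares line and linearity is judged by R². A sketch with hypothetical cumulative-release data (invented numbers, loosely resembling the small-tube rates reported above):

```python
# Hypothetical cumulative-release measurements (days, ng) resembling the
# near-linear profiles reported for the small-tube devices.
days    = [0, 3, 7, 10, 14, 17, 21, 24, 28]
release = [0.0, 24.5, 55.0, 77.8, 110.1, 133.0, 165.2, 188.9, 219.6]

n = len(days)
mx = sum(days) / n
my = sum(release) / n
sxy = sum((t - mx) * (q - my) for t, q in zip(days, release))
sxx = sum((t - mx) ** 2 for t in days)
slope = sxy / sxx                     # zero-order release rate, ng/day
intercept = my - slope * mx

ss_res = sum((q - (intercept + slope * t)) ** 2 for t, q in zip(days, release))
ss_tot = sum((q - my) ** 2 for q in release)
r2 = 1.0 - ss_res / ss_tot
print(f"release rate = {slope:.2f} ng/day, R^2 = {r2:.4f}")
```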
