  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Mass Airflow Sensor and Flame Temperature Sensor for Efficiency Control of Combustion Systems

Shakya, Rikesh January 2015 (has links)
No description available.
412

Conceptual, Linguistic and Translational Aspects of Headline Metaphors used to Refer to the American and Ukrainian Presidential Campaigns of 2004

Yasynetska, Olena A. January 2005 (has links)
No description available.
413

Equivalence of symmetric factorial designs and characterization and ranking of two-level Split-lot designs

Katsaounis, Parthena I. 28 November 2006 (has links)
No description available.
414

Sample Size Determination for a Three-arm Biosimilar Trial

Chang, Yu-Wei January 2014 (has links)
The equivalence assessment usually consists of three tests and is often conducted through a three-arm clinical trial. The first two tests demonstrate the superiority of the test treatment and the reference treatment over placebo; they are followed by the equivalence test between the test treatment and the reference treatment. Equivalence is commonly defined in terms of the mean difference, the mean ratio, or the ratio of mean differences, i.e., the ratio of the mean difference between test and placebo to the mean difference between reference and placebo. In this dissertation, the equivalence assessment for both continuous and discrete data is discussed. For the continuous case, the test of the ratio of mean differences is applied. The advantage of this test is that it combines a superiority test of the test treatment over placebo and an equivalence test in a single hypothesis. For the discrete case, the two-step equivalence assessment approach is studied for both Poisson and negative binomial data. While a Poisson distribution implies that the population mean and variance are equal, the advantage of a negative binomial model is that it accounts for overdispersion, a common phenomenon in count-type medical endpoints. Test statistics, power functions, and required sample size examples for a three-arm equivalence trial are given for both the continuous and discrete cases. In addition, discussions of power comparisons are complemented with numerical results. / Statistics
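As a rough illustration of the three-arm design described above, the following Python sketch estimates power by Monte Carlo under purely hypothetical settings (normal endpoints, known common variance, equal arm sizes, and a naive plug-in check of the ratio of mean differences against assumed margins); it is not the dissertation's actual test procedure.

```python
import numpy as np
from scipy.stats import norm

def simulate_power(n, mu_t=1.0, mu_r=1.0, mu_p=0.0, sigma=1.0,
                   margin_low=0.8, margin_high=1.25,
                   n_sim=5000, alpha=0.025, seed=0):
    """Crude Monte Carlo power proxy for a three-arm equivalence trial.

    All settings are hypothetical: normal endpoints with known common
    sigma, equal arm sizes, and a naive equivalence criterion on the
    plug-in ratio of mean differences (test - placebo)/(ref - placebo).
    """
    rng = np.random.default_rng(seed)
    z_crit = norm.ppf(1 - alpha)
    se = sigma * np.sqrt(2.0 / n)   # SE of a difference of two arm means
    successes = 0
    for _ in range(n_sim):
        t = rng.normal(mu_t, sigma, n)
        r = rng.normal(mu_r, sigma, n)
        p = rng.normal(mu_p, sigma, n)
        sup_t = (t.mean() - p.mean()) / se > z_crit   # test superior to placebo
        sup_r = (r.mean() - p.mean()) / se > z_crit   # reference superior to placebo
        ratio = (t.mean() - p.mean()) / (r.mean() - p.mean())
        equiv = margin_low < ratio < margin_high      # naive equivalence check
        successes += sup_t and sup_r and equiv
    return successes / n_sim

# Hypothetical example: 100 subjects per arm
print(simulate_power(n=100))
```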
415

Partition Properties for Non-Ordinal Sets under the Axiom of Determinacy

Holshouser, Jared 05 1900 (has links)
In this paper we explore coloring theorems for the reals, their quotients, cardinals, and combinations of these. This work is done under the axiom of determinacy. We also explore generalizations of Mycielski's theorem and show how these can be used to establish coloring theorems. To finish, we discuss the strange realm of long unions.
416

A Hybrid Computational Electromagnetics Formulation for Simulation of Antennas Coupled to Lossy and Dielectric Volumes

Abd-Alhameed, Raed, Excell, Peter S., Mangoud, Mohab A. January 2004 (has links)
A heterogeneous hybrid computational electromagnetics method is presented, which enables different parts of an antenna simulation problem to be treated by different methods, so that the most appropriate method can be used for each part. The method uses a standard frequency-domain moment-method program and a finite-difference time-domain program to compute the fields in two regions. The two regions are interfaced by surfaces on which effective sources are defined by application of the Equivalence Principle. An extension to this permits conduction currents to cross the boundary between the different computational domains. Several validation cases are examined and the results compared with available data. The method is particularly suitable for simulation of the behavior of an antenna that is partially buried, or closely coupled with lossy dielectric volumes such as soil, building structures or the human body.
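For readers unfamiliar with the Equivalence Principle invoked above, the following minimal Python sketch shows the standard surface-equivalence relations J_s = n × H and M_s = -n × E applied to a single surface patch; the field values and per-patch treatment are illustrative assumptions, not details of the paper's MoM/FDTD interface itself.

```python
import numpy as np

def equivalent_surface_currents(n_hat, E, H):
    """Surface equivalence principle: replace the fields on one side of a
    surface by equivalent electric and magnetic surface currents.

    n_hat : (3,) outward unit normal on the surface patch
    E, H  : (3,) electric / magnetic field sampled on the patch

    Returns (J_s, M_s) with J_s = n x H and M_s = -n x E.  This is the
    standard textbook form; a hybrid interface would apply it patch by
    patch on the coupling surface.
    """
    J_s = np.cross(n_hat, H)
    M_s = -np.cross(n_hat, E)
    return J_s, M_s

# Hypothetical sample: a patch with outward normal along +z
n_hat = np.array([0.0, 0.0, 1.0])
E = np.array([1.0, 0.0, 0.0])           # V/m, illustrative value only
H = np.array([0.0, 1.0 / 377.0, 0.0])   # A/m, plane-wave-like ratio
print(equivalent_surface_currents(n_hat, E, H))
```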
417

Circuit Design Methods with Emerging Nanotechnologies

Zheng, Yexin 28 December 2009 (has links)
As complementary metal-oxide semiconductor (CMOS) technology faces increasingly severe physical barriers to continued feature-size scaling, innovative nano-scale devices and other post-CMOS technologies have been developed to enhance future circuit design and computation. These nanotechnologies show promising potential for order-of-magnitude improvements in performance and integration density. The substitution of CMOS transistors with nano-devices is expected not only to continue the exponential projection of Moore's Law, but also to raise significant challenges and opportunities, especially in the field of electronic design automation. The major obstacles designers face with emerging nanotechnology design include: i) existing computer-aided design (CAD) approaches developed for conventional CMOS Boolean design cannot be directly employed in the nanoelectronic design process, because the intrinsic electrical characteristics of many nano-devices are not well suited to Boolean implementations but demonstrate strong capability for implementing non-conventional logic such as threshold logic and reversible logic; ii) due to the density and size of nano-devices, the defect rate of nanoelectronic systems is much higher than that of conventional CMOS systems, so existing design paradigms cannot guarantee design quality and can lead to even worse results with high failure rates. Motivated by the compelling potential and design challenges of emerging post-CMOS technologies, this dissertation focuses on fundamental design methodologies to effectively and efficiently achieve high-quality nanoscale design. A novel programmable logic element (PLE) is first proposed to explore the versatile functionalities of threshold gates (TGs) and multi-threshold threshold gates (MTTGs). This PLE structure can realize all three- or four-variable logic functions by configuring binary control bits, and is the first single threshold-logic structure that provides a complete Boolean logic implementation. Based on the PLEs, a reconfigurable architecture is constructed that offers dynamic reconfigurability with little or no reconfiguration overhead, thanks to the intrinsic self-latching property of nanopipelining. Our reconfiguration data generation algorithm further reduces the reconfiguration cost. To take full advantage of such threshold logic design using emerging nanotechnologies, we also developed a combinational equivalence checking (CEC) framework for threshold logic design. Based on the features of threshold logic gates and circuits, different techniques for formulating a given threshold logic circuit in conjunctive normal form (CNF) are introduced to facilitate efficient SAT-based verification. Evaluated on mainstream benchmarks, our hybrid algorithm, which takes into account both the input symmetry and the input weight order of threshold gates, generates CNF formulas efficiently in terms of both SAT solving time and CNF generation time. The reversible logic synthesis problem is then considered, with a focus on efficient synthesis heuristics that provide high-quality results within reasonable computation time. We developed a weighted directed graph model for function representation and complexity measurement, and constructed an atomic transformation to associate changes in function complexity with reversible gates. The efficiency of our heuristic lies in maximally decreasing the function complexity at each synthesis step as well as in its capability to escape local optima. Swarm intelligence, a machine-learning technique, is then employed to search the solution space for reversible logic synthesis, achieving further performance improvement. To tackle the high defect rates of emerging nanotechnology manufacturing processes, we developed a novel defect-aware logic mapping framework for nanowire-based PLA architectures via Boolean satisfiability (SAT). PLA defects of various types are formulated as covering and closure constraints, and the defect-aware logic mapping is then solved efficiently using available SAT solvers. This approach can generate valid logic mappings at defect rates as high as 20%, and is suitable for various nanoscale PLAs, including AND/OR and NOR/NOR structures. In summary, this work provides initial attempts to address two major problems confronting future nanoelectronic system design: the development of electronic design automation tools and reliability. Many challenging open questions remain in this emerging and promising area; we hope our work lays stepping stones for nano-scale circuit design optimization that exploits the distinctive characteristics of emerging nanotechnologies. / Ph. D.
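As background for the threshold-logic design discussed above, the sketch below evaluates a simple linear threshold gate and enumerates its truth table in Python; the weights and threshold are hypothetical examples, and the dissertation's PLE, CNF encodings, and synthesis heuristics are not reproduced here.

```python
from itertools import product

def threshold_gate(weights, threshold, inputs):
    """Evaluate a linear threshold gate: output 1 iff sum(w_i * x_i) >= T."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def truth_table(weights, threshold):
    """Enumerate all input combinations of the gate."""
    n = len(weights)
    return {bits: threshold_gate(weights, threshold, bits)
            for bits in product((0, 1), repeat=n)}

# Hypothetical example: weights [2, 1, 1] with threshold 2 realise
# the Boolean function x1 OR (x2 AND x3).
for bits, out in truth_table([2, 1, 1], 2).items():
    print(bits, "->", out)
```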
418

Simulations of Indentation at Continuum and Atomic levels

Jiang, Wen 31 March 2008 (has links)
The main goal of this work is to determine values of elastic constants of orthotropic, transversely isotropic and cubic materials through indentation tests on thin layers bonded to rigid substrates. Accordingly, we first use the Stroh formalism to provide an analytical solution for generalized plane strain deformations of a linear elastic anisotropic layer bonded to a rigid substrate, and indented by a rigid cylindrical indenter. The mixed boundary-value problem is challenging since the deformed indented surface of the layer contacting the rigid cylinder is unknown a priori, and is to be determined as a part of the solution of the problem. For a rigid parabolic prismatic indenter contacting either an isotropic layer or an orthotropic layer, the computed solution is found to compare well with solutions available in the literature. Parametric studies have been conducted to delimit the length and the thickness of the layer for which the derived relation between the axial load and the indentation depth is valid. We then derive an expression relating the axial load, the indentation depth, and the elastic constants of an orthotropic material. This relation is specialized to a cubic material (e.g., an FCC single crystal). By using results of three virtual (i.e., numerical) indentation tests on the same specimen oriented differently, we compute values of the elastic moduli, and show that they agree well with their expected values. The technique can be extended to other anisotropic materials. We review the literature on relations between deformations at the atomic level and stresses and strains defined at the continuum level. These are then used to compare stress and strain distributions in mechanical tests performed on atomic systems and their equivalent continuum structures. Whereas averaged stresses and strains defined in terms of the overall deformations of the atomic system match well with those derived from the continuum description of the body, their local spatial distributions differ. / Ph. D.
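To illustrate the general idea of inverting a load-indentation record for a modulus, the sketch below fits the textbook flat-punch half-space relation P = 2aE*h to synthetic data; this standard isotropic relation is only a stand-in, since the thesis derives a different relation for an anisotropic layer bonded to a rigid substrate, and all numerical values here are hypothetical.

```python
import numpy as np

# Flat-ended rigid cylindrical punch on an isotropic elastic half-space:
# P = 2 * a * E_star * h, with E_star = E / (1 - nu**2).
# Illustration only; the thesis uses a layer- and anisotropy-specific relation.

a = 1.0e-3            # punch radius [m] (hypothetical)
nu = 0.3              # Poisson's ratio (assumed known)
E_true = 70.0e9       # Pa, "ground truth" used to fabricate synthetic data

h = np.linspace(0.0, 1.0e-6, 50)                 # indentation depths [m]
E_star_true = E_true / (1 - nu**2)
P = 2 * a * E_star_true * h
P_noisy = P + np.random.default_rng(1).normal(0, 1e-3 * P.max(), P.shape)

# Least-squares slope of P vs h recovers 2 * a * E_star.
slope = np.polyfit(h, P_noisy, 1)[0]
E_star_fit = slope / (2 * a)
E_fit = E_star_fit * (1 - nu**2)
print(f"recovered E ~ {E_fit / 1e9:.1f} GPa")
```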
419

Testing and Verification Strategies for Enhancing Trust in Third Party IPs

Banga, Mainak 17 December 2010 (has links)
Globalization of the semiconductor industry has accelerated the trend of outsourcing component design and manufacturing across geographical boundaries. While cost reduction and short time to market are the driving factors behind this trend, the authenticity of the final product remains a major question. Third-party deliverables rest solely on mutual trust, and a manufacturer with malicious intent can tamper with the original design so that it behaves differently than expected in certain specific situations. If such tampering occurs, the consequences can be disastrous, especially for mission-critical systems such as space exploration, defense equipment such as missiles, and life-saving equipment such as medical devices, where a single failure can translate into loss of life or millions of dollars. Thus, accompanying outsourcing comes the question of trustworthy design: how to ensure that the integrity of a product manufactured by a third party has not been compromised. This dissertation aims to develop verification methodologies and non-destructive testing strategies to ensure the authenticity of a third-party IP. This can be accomplished at various levels in the IC product life cycle. At the design stage, special testability features can be incorporated into the circuit to enhance its overall testability, making otherwise hard-to-test portions of the design testable at the post-silicon stage. We propose two different approaches to enhance the testability of the overall circuit: the first allows improved at-speed testing of the design, while the second aims to exaggerate the effect of unwanted tampering (if present) on the IC. At the verification level, techniques such as sequential equivalence checking can be employed to compare the third-party IP against a genuine specification and filter out components showing any deviation from the intended behavior. At the post-silicon stage, power discrepancies beyond a certain threshold between two otherwise identical ICs can indicate the presence of a malicious insertion in one of them. We address all of these levels in this dissertation and suggest techniques that can be employed at each stage. Our experiments show promising results for detecting such alterations/insertions in the original design. / Ph. D.
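As a toy illustration of the post-silicon power-comparison idea mentioned above, the following Python sketch flags a suspect IC whose power trace deviates from a trusted reference by more than a threshold measured in standard errors; the traces, threshold, and statistic are hypothetical, and real side-channel Trojan detection must also account for process variation and measurement noise.

```python
import numpy as np

def power_discrepancy(trace_golden, trace_suspect, threshold=3.0):
    """Flag a chip whose power trace deviates from a trusted reference.

    Crude sketch: compare per-sample power of a suspect IC against a
    golden IC and flag it if the mean discrepancy exceeds `threshold`
    standard errors of the difference.
    """
    diff = np.asarray(trace_suspect) - np.asarray(trace_golden)
    score = np.abs(diff.mean()) / (diff.std(ddof=1) / np.sqrt(diff.size))
    return score > threshold, score

rng = np.random.default_rng(2)
golden = rng.normal(1.00, 0.02, 1000)     # hypothetical power samples [W]
tampered = rng.normal(1.01, 0.02, 1000)   # small extra draw from an insertion
print(power_discrepancy(golden, tampered))
```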
420

Leibniz’s Defence of Heliocentrism

Weinert, Friedel 17 August 2017 (has links)
This paper discusses Leibniz’s view and defence of heliocentrism, which was one of the main achievements of the Scientific Revolution (1543-1687). As Leibniz was a defender of a strictly mechanistic worldview, it seems natural to assume that he accepted Copernican heliocentrism and its completion by figures like Kepler, Descartes and Newton without reservation. However, the fact that Leibniz speaks of the Copernican theory as a hypothesis (or plausible assumption) suggests that he had several reservations regarding heliocentrism. As a first approach, Leibniz employed two of his most cherished principles to defend the Copernican hypothesis against the proponents of geocentrism: the principle of the relativity of motion and the principle of the equivalence of hypotheses. A closer analysis reveals, however, that Leibniz also appeals to dynamic causes of planetary motions, and these constitute a much stronger support for heliocentrism than his two philosophical principles alone.
