  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

Subtropical to Subpolar Lagrangian Pathways in the North Atlantic and Their Impact on High Latitude Property Fields

Burkholder, Kristin Cashman January 2011 (has links)
In response to the differential heating of the earth, atmospheric and oceanic flows constantly act to carry surplus energy from low to high latitudes. In the ocean, this poleward energy flux occurs as part of the large-scale meridional overturning circulation: warm, shallow waters are transported to high latitudes where they cool and sink, then follow subsurface pathways equatorward until they are once again upwelled to the surface and reheated. In the North Atlantic, the upper limb of this circulation has traditionally been explained in simple terms: the Gulf Stream/North Atlantic Current system carries surface waters directly to high latitudes, resulting in elevated sea surface temperatures in the eastern subpolar gyre and, because the prevailing winds sweeping across the Atlantic are warmed by these waters, anomalously warm temperatures in Western Europe. This view has long been supported by Eulerian measurements of North Atlantic sea surface temperature and surface velocities, which imply a direct and continuous transport of surface waters between the two gyres. However, though the importance of this redistribution of heat from low to high latitudes has been broadly recognized, few studies have examined this transport within the Lagrangian frame.

The three studies included in this dissertation use data from the observational record and from a high-resolution model of ocean circulation to re-examine our understanding of upper-limb transport between the subtropical and subpolar gyres. Specifically, each chapter explores intergyre Lagrangian pathways and investigates the impact of those pathways on subpolar property fields. The findings suggest that intergyre transport pathways are primarily located beneath the surface and that subtropical surface waters are largely absent from the intergyre exchange process, a very different image of intergyre transport than that compiled from Eulerian data alone. As such, these studies also highlight the importance of including 3-D Lagrangian information in examinations of transport pathways.

Dissertation
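The Lagrangian/Eulerian distinction above is computational as well as conceptual: a Lagrangian pathway is obtained by integrating particle positions through a velocity field, dx/dt = u(x, t), rather than sampling the field at fixed points. Below is a minimal sketch of such trajectory integration, using an analytic "double-gyre" toy velocity field as an illustrative stand-in; the field and all parameter values are assumptions of this sketch, not the dissertation's ocean circulation model.

```python
import numpy as np

def velocity(x, y, t, A=0.1, eps=0.25, omega=0.6):
    """Analytic 'double-gyre' velocity field: a standard toy model of two
    counter-rotating gyres (an illustrative stand-in, not an OGCM)."""
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * a
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(x0, y0, t0, t1, dt=0.01):
    """Integrate dx/dt = u(x, t) with 4th-order Runge-Kutta to trace one
    Lagrangian trajectory through the (Eulerian) velocity field."""
    x, y, t = x0, y0, t0
    path = [(x, y)]
    while t < t1:
        k1 = velocity(x, y, t)
        k2 = velocity(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1], t + 0.5*dt)
        k3 = velocity(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1], t + 0.5*dt)
        k4 = velocity(x + dt*k3[0], y + dt*k3[1], t + dt)
        x += dt/6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += dt/6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        path.append((x, y))
    return np.array(path)

trajectory = advect(x0=0.3, y0=0.4, t0=0.0, t1=20.0)
print(trajectory.shape)  # one particle path; seed many to map pathways
```

Seeding many particles in one gyre and recording where, and at what depth in a 3-D field, they cross into the other is the basic experiment behind Lagrangian intergyre pathway studies.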
422

High-level built-in self-testable synthesis of digital systems

Yang, Laurence Tianruo 26 January 2010 (has links)
Driven by the rapid growth of the Internet, communication technologies, pervasive computing, automobiles, airplanes, and wireless and portable consumer electronics, Embedded Systems and Systems-on-Chip (SoC) have moved from a craft to an emerging and very promising discipline in today's electronics industry. Testing a fabricated chip is the process of applying a sequence of inputs to the chip and analyzing its output sequence to ascertain whether it functions correctly. As chip density grows beyond millions of gates, testing of Embedded Systems and Systems-on-Chip becomes a formidable task, and the industry invests vast amounts of time and money just to ensure the high testability of its products. On the other hand, as design complexity drastically increases, current gate-level design and test methodology alone can no longer satisfy stringent time-to-market requirements. The High-Level Test Synthesis (HLTS) system on which this thesis focuses develops new systematic techniques to integrate testability considerations, especially Built-In Self-Test (BIST) techniques, into the synthesis process. This makes it possible for an automatic synthesis tool to accurately predict the testability of the synthesized embedded systems or chips at an early stage, and to optimize designs in terms of test cost as well as performance and hardware area cost.
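As a hedged illustration of the BIST mechanism named above: on-chip, a linear-feedback shift register (LFSR) generates pseudo-random test patterns and a signature register compacts the circuit's responses, so a single signature comparison replaces external test equipment. The tap positions, toy circuit, and compaction scheme below are assumptions of this sketch, not the thesis's synthesis flow.

```python
def lfsr(width=8, taps=(7, 5, 4, 3), seed=1):
    """Pseudo-random pattern generator: a Fibonacci LFSR.
    Tap positions here are illustrative, not a prescribed polynomial."""
    state = seed
    while True:
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def circuit_under_test(x):
    """Toy combinational block standing in for the synthesized logic."""
    return (x ^ (x >> 3)) & 0xFF

def bist_signature(n_patterns=1000, width=8):
    """Compact the response stream into a MISR-style signature; comparing
    it against a golden signature yields an on-chip pass/fail verdict."""
    sig = 0
    gen = lfsr(width=width)
    for _ in range(n_patterns):
        pattern = next(gen)
        response = circuit_under_test(pattern)
        # simple signature compaction: rotate left by one, then XOR
        sig = (((sig << 1) | (sig >> (width - 1))) & 0xFF) ^ response
    return sig

golden = bist_signature()  # reference signature from a known-good design
print(f"golden signature: {golden:#04x}")
```

HLTS aims to place such generator and compactor structures during synthesis, so testability is designed in rather than retrofitted at the gate level.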
423

Model order reduction for efficient modeling and simulation of interconnect networks

Ma, Min. January 2007 (has links)
As operating frequencies increase and device sizes shrink, the complexity of current state-of-the-art designs has increased dramatically. One of the main contributors to this complexity is high-speed interconnects. At high frequencies, interconnects become dominant contributors to signal degradation, and their effects, such as delays, reflections, and crosstalk, must be accurately simulated. Time-domain analysis of such structures is, however, very difficult because, at high frequencies, they must be modeled as distributed transmission lines which, after discretization, result in very large networks. In order to improve the simulation efficiency of such structures, model order reduction has been proposed in the literature. Conventional model order reduction methods based on Krylov subspaces have a number of limitations in many practical simulation problems, which restricts their usefulness in general commercial simulators.

In this thesis, a number of new reduction techniques were developed in order to address the key shortcomings of current model order reduction methods. Specifically, a new approach for handling macromodels with a very large number of ports was developed, a multi-level reduction and sparsification method was proposed for regular as well as parametric macromodels, and finally a new time-domain reduction method was presented for the macromodeling of nonlinear parametric systems. Using these approaches, CPU speedups of one to two orders of magnitude were obtained.
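For context, the Krylov-subspace methods referenced above match moments of the network's transfer function by projecting the large state space onto a small orthonormal basis. A minimal PRIMA-style sketch follows, using a toy RC-ladder stand-in for a discretized line; the ladder, element values, and expansion point are assumptions of this sketch, not the thesis's benchmarks.

```python
import numpy as np

def arnoldi_basis(G, C, b, q, s0=1e9):
    """Build an orthonormal basis V of the order-q Krylov subspace
    K_q(A, r) with A = (G + s0*C)^{-1} C and r = (G + s0*C)^{-1} b, so
    the projected model matches q transfer-function moments about s0."""
    M = np.linalg.inv(G + s0 * C)   # fine for a small demo; use LU solves in practice
    A, r = M @ C, M @ b
    V = np.zeros((len(b), q))
    V[:, 0] = r / np.linalg.norm(r)
    for k in range(1, q):
        w = A @ V[:, k - 1]
        w -= V[:, :k] @ (V[:, :k].T @ w)   # Gram-Schmidt orthogonalization
        V[:, k] = w / np.linalg.norm(w)
    return V

# Toy stand-in for a discretized transmission line: an n-stage RC ladder.
n, R, Cap = 200, 1.0, 1e-12
G = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)) / R
C = Cap * np.eye(n)
b = np.zeros(n); b[0] = 1.0

V = arnoldi_basis(G, C, b, q=10)
G_r, C_r, b_r = V.T @ G @ V, V.T @ C @ V, V.T @ b   # congruence-projected macromodel
print(G_r.shape)  # (10, 10): a 200-state network reduced to 10 states
```

For RLC networks, this congruence projection underlies PRIMA's passivity-preserving reduction; the thesis's contributions target cases such a baseline handles poorly, such as macromodels with many ports and parametric or nonlinear systems.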
424

Student Learning Heterogeneity in School Mathematics

Cunningham, Malcolm 11 December 2012 (has links)
The phrase "opportunities to learn" (OTL) is most commonly interpreted in institutional, or inter-individual, terms, but it can also be viewed as a cognitive, or intra-individual, phenomenon. How student learning heterogeneity (LH) - learning differences manifested when children's understanding is later assessed - is understood varies by OTL interpretation. In this study, I argue that the cognitive underpinning of learning disability, learning difficulty, typical achievement, and gifted achievement in mathematics is not well understood, in part because of the ambiguity of LH assumptions in previous studies. Data from 104,315 Ontario students who had responded to provincially mandated mathematics tests in grades 3, 6, and 9 were analyzed using latent trait modeling (LTM) and latent class analysis (LCA). The tests were constructed to distinguish four achievement levels per grade and either five curriculum strands (grades 3 and 6), three strands (grade 9 applied), or four strands (grade 9 academic). Best-fitting LTM models reflected 3 or 4 factors (grade 9 applied, and grades 3, 6, and 9 academic, respectively). Best-fitting LCA solutions reflected 4 or 5 classes (grades 3 and 6 and grade 9 applied, and grade 9 academic, respectively). There were differences in the relative proportions of students distributed across levels and classes. Moreover, grade 9 models were more complex than the reported four achievement levels. To explore the intrinsic modeled results further, latent factors were plotted against latent classes. Implications of institutional versus cognitive interpretations are discussed.
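For readers unfamiliar with the second method named above, latent class analysis assigns each student a posterior probability of membership in a small number of unobserved response classes. A minimal EM-based sketch for binary item responses follows, run on simulated data; the item count, class structure, and parameter values are assumptions of this sketch, not the EQAO data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lca(X, n_classes, n_iter=200):
    """EM for a latent class model of binary item responses: estimates
    class weights pi[c] and per-class item probabilities theta[c, j]."""
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, m))
    for _ in range(n_iter):
        # E-step: posterior class responsibilities for each student
        log_lik = (X @ np.log(theta).T) + ((1 - X) @ np.log(1 - theta).T)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate class weights and item probabilities
        Nc = resp.sum(axis=0)
        pi = Nc / n
        theta = np.clip((resp.T @ X) / Nc[:, None], 1e-6, 1 - 1e-6)
    return pi, theta

# Simulated stand-in for item-level test data (NOT the EQAO dataset).
true_theta = np.array([[0.9]*10, [0.5]*10, [0.15]*10])
z = rng.choice(3, size=2000, p=[0.3, 0.5, 0.2])
X = (rng.random((2000, 10)) < true_theta[z]).astype(float)

pi, theta = fit_lca(X, n_classes=3)
print(np.round(pi, 2))  # recovered class proportions
```

In practice the number of classes (e.g., 4 versus 5) is chosen by refitting with different class counts and comparing fit indices such as BIC.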
425

Changes in School Results in EQAO Assessments from 2006 to 2010

Ram, Anita 19 December 2012 (has links)
Many accountability systems use data from large-scale assessments to make judgements about school performance. In Ontario, school performance is often assessed using the percentage of proficient students (PPS). The purpose of this study was to shed light on the degree and frequency of year-to-year changes in the percentage of proficient students at a school in reading, writing, and mathematics for both grades 3 and 6 in Ontario from 2006 to 2010. A second purpose was to assess the influence of cohort size on the variability in scores from year to year. Once schools lacking data for 5 consecutive years, as well as outliers, were omitted, secondary data analysis was used to examine nearly 3000 schools in each subject and grade. For the first part of the study, descriptive statistics and frequencies were the main methods of examination. In the second part, variance scores and correlations were used to understand the relationship between changes in PPS and cohort size. Findings revealed that year-to-year changes in school scores are very large for many schools: approximately 50 percent of schools experienced changes in PPS greater than 10 percentage points in any given year. When examining how often, from 2006 to 2010, a school experienced a similar amount of change, both the smallest and largest change categories generally had a larger percentage of schools experiencing a similar amount of change for two and three years; very seldom did schools experience the same degree of change in PPS across all 5 years. Correlations revealed a significant inverse relationship between average cohort size and variability in PPS. Considering that over 80 percent of schools have 60 or fewer students in a cohort, the unpredictability in PPS may prove quite frustrating to schools and confusing to stakeholders. Annual PPS scores appear to be a poor indicator of real school performance, and their use to rank or rate schools should be avoided. Recommendations are made about using PPS to report school-level results for EQAO, schools, and the public.
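The inverse cohort-size/variability relationship has a simple statistical reading: even if a school's true proficiency rate never changes, binomial sampling alone makes PPS swing more in small cohorts. A simulation sketch under that assumption follows; the fixed 65% proficiency rate and cohort sizes are illustrative choices, not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def pps_volatility(cohort_size, p=0.65, years=5, n_schools=10000):
    """Simulate `years` of percentage-proficient scores for schools whose
    true proficiency rate p never changes, and return the mean absolute
    year-to-year change in PPS (in percentage points)."""
    counts = rng.binomial(cohort_size, p, size=(n_schools, years))
    pps = 100.0 * counts / cohort_size
    return np.abs(np.diff(pps, axis=1)).mean()

for n in (20, 40, 60, 120, 240):
    print(f"cohort {n:3d}: mean |delta PPS| ~ {pps_volatility(n):4.1f} points")
# Small cohorts show large swings even with zero change in underlying
# school performance, echoing the study's cohort-size finding.
```

Under these assumptions, cohorts of a few dozen students produce average year-to-year swings approaching 10 percentage points from sampling noise alone, consistent with the volatility the study reports.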
428

Particle-Method Analysis of Mass Diffusion in a Two-Dimensional Mixing Layer (二次元混合層における物質拡散の粒子法解析)

Uchiyama, Tomomi (内山知実); Murakami, Kenji (村上賢司); Otsuki, Naohiro (大槻直洋) 04 1900 (has links)
No description available.
429

Ratio of membrane proteins in total proteomes of prokaryota

Sawada, Ryusuke, Ke, Runcong, Tsuji, Toshiyuki, Sonoyama, Masashi, Mitaku, Shigeki 07 1900 (has links) (PDF)
No description available.
430

Limitations and opportunities for wire length prediction in gigascale integration

Anbalagan, Pranav 21 February 2007 (has links)
Wires have become a major bottleneck in current VLSI designs, and wire length prediction is therefore essential to overcome this bottleneck. Wire length prediction is broadly classified into two types: macroscopic prediction, the prediction of the wire length distribution, and microscopic prediction, the prediction of individual wire lengths. The objective of this thesis is to develop a clear understanding of the limitations of both macroscopic and microscopic a priori, post-placement, pre-routing wire length prediction, and thereby to develop better wire length prediction models. Investigations into the limitations of macroscopic prediction reveal that, in a given design, (i) the variability of the wire length distribution increases with length, and (ii) the use of Rent's rule with a constant Rent's exponent p to calculate the terminal count of a given block size limits the accuracy of the results from a macroscopic model. Therefore, a new model for the parameter p is developed to more accurately reflect the terminal count of a given block size in placement, and using this, a new, more accurate macroscopic model is developed. In addition, a model to predict the variability is also incorporated into the macroscopic model. Studies of the limitations of microscopic prediction reveal that (i) only a fraction of the wires in a given design are predictable, and these are mostly from shorter nets with smaller degrees, and (ii) current microscopic prediction models are built on the assumption that a single metric can accurately predict the individual lengths of all the wires in a design. In this thesis, an alternative microscopic model is developed for predicting the shorter wires, based on the hypothesis that multiple metrics influence the lengths of the wires. Three different metrics are developed and fitted into a heuristic classification-tree framework to provide a unified and more accurate microscopic model.
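For reference, Rent's rule relates the number of external terminals T of a logic block to its gate count g as T = t * g^p, where t is the average number of terminals per gate and p is the Rent exponent. The sketch below contrasts the constant-exponent form with an illustrative block-size-dependent exponent of the kind the thesis argues for; the varying-p formula is an assumption of this sketch, not the thesis's fitted model.

```python
import numpy as np

def terminals_constant_p(g, t=4.0, p=0.6):
    """Classic Rent's rule: T = t * g**p with a constant exponent p."""
    return t * g ** p

def terminals_varying_p(g, t=4.0, p0=0.75, g_max=1e6):
    """Illustrative block-size-dependent exponent (a made-up form for
    this sketch, not the thesis's model): p falls off for large blocks,
    reflecting the flattening of terminal counts seen in placements."""
    p = p0 * (1.0 - 0.3 * np.log(g) / np.log(g_max))
    return t * g ** p

for g in (10, 100, 10_000, 1_000_000):
    print(f"g={g:>9,}: constant-p T={terminals_constant_p(g):>8.1f}  "
          f"varying-p T={terminals_varying_p(g):>8.1f}")
```

The macroscopic models in the thesis use such block-size terminal counts to derive the full wire length distribution after placement.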
